Józef Morozewicz (27 March 1865 – 12 June 1941) was a Polish mineralogist and petrologist. He was the founder and first director of the National Geological Institute (Państwowy Instytut Geologiczny), which he led from 1919 to 1937, as well as the founder and first president of the League for the Protection of Nature (Liga Ochrony Przyrody).
|
```c
/*
* 'OpenSSL for Ruby' project
* All rights reserved.
*/
/*
* This program is licensed under the same licence as Ruby.
* (See the file 'LICENCE'.)
*/
#if !defined(_OSSL_NS_SPKI_H_)
#define _OSSL_NS_SPKI_H_
extern VALUE mNetscape;
extern VALUE cSPKI;
extern VALUE eSPKIError;
void Init_ossl_ns_spki(void);
#endif /* _OSSL_NS_SPKI_H_ */
```
|
Tony William Negus is an Australian diplomat and retired police officer who served as Commissioner of the Australian Federal Police (AFP), sworn in on 7 September 2009 for a five-year term. He was the sixth Commissioner of the AFP and the second appointed from within the force. On 1 December 2014 he was appointed Australian High Commissioner to Canada, effective 15 January 2015.
Education
Negus holds a master's degree in Public Policy and Administration and a Graduate Diploma in Executive Leadership, and has completed the Executive Leadership Program at Harvard University.
Career
Negus began his law enforcement career in traffic operations in Canberra in 1982 and later served as a detective in the Australian Capital Territory. He worked in community policing, federal investigations, human resources, and protection, as well as in national operations in Brisbane, Sydney, and Canberra.
In June 2005, Negus was awarded the Australian Police Medal (APM). One year later, in July 2006, Negus was appointed National Manager of Human Resources, with responsibility for learning and development, professional standards, and people strategies. Before he was appointed as Commissioner of the AFP he had been Deputy Commissioner Operations since October 2007, where he had responsibility for border operations, economic and special operations, forensics and data centres, high technology crime operations, internal liaison networks, and international deployments.
He stepped down from his role as Commissioner at the end of his term in September 2014 and was replaced by his former deputy Andrew Colvin.
See also
Australian Federal Police
Law enforcement in Australia
External links
AFP Police Commissioner official website; accessed 18 January 2015
Profile, thepowerindex.com.au; accessed 18 January 2015
|
```c
/* ssl/s3_cbc.c */
/* ====================================================================
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
*
* 3. All advertising materials mentioning features or use of this
* software must display the following acknowledgment:
* "This product includes software developed by the OpenSSL Project
* for use in the OpenSSL Toolkit. (path_to_url"
*
* 4. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to
* endorse or promote products derived from this software without
* prior written permission. For written permission, please contact
* openssl-core@openssl.org.
*
* 5. Products derived from this software may not be called "OpenSSL"
* nor may "OpenSSL" appear in their names without prior written
* permission of the OpenSSL Project.
*
* 6. Redistributions of any form whatsoever must retain the following
* acknowledgment:
* "This product includes software developed by the OpenSSL Project
* for use in the OpenSSL Toolkit (path_to_url"
*
* THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY
* EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR
* ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
* ====================================================================
*
* This product includes cryptographic software written by Eric Young
* (eay@cryptsoft.com). This product includes software written by Tim
* Hudson (tjh@cryptsoft.com).
*
*/
#include "../crypto/constant_time_locl.h"
#include "ssl_locl.h"
#include <openssl/md5.h>
#include <openssl/sha.h>
/*
* MAX_HASH_BIT_COUNT_BYTES is the maximum number of bytes in the hash's
* length field. (SHA-384/512 have 128-bit length.)
*/
#define MAX_HASH_BIT_COUNT_BYTES 16
/*
* MAX_HASH_BLOCK_SIZE is the maximum hash block size that we'll support.
* Currently SHA-384/512 has a 128-byte block size and that's the largest
* supported by TLS.)
*/
#define MAX_HASH_BLOCK_SIZE 128
/*-
* ssl3_cbc_remove_padding removes padding from the decrypted, SSLv3, CBC
* record in |rec| by updating |rec->length| in constant time.
*
* block_size: the block size of the cipher used to encrypt the record.
* returns:
* 0: (in non-constant time) if the record is publicly invalid.
* 1: if the padding was valid
* -1: otherwise.
*/
int ssl3_cbc_remove_padding(const SSL *s,
SSL3_RECORD *rec,
unsigned block_size, unsigned mac_size)
{
unsigned padding_length, good;
const unsigned overhead = 1 /* padding length byte */ + mac_size;
/*
* These lengths are all public so we can test them in non-constant time.
*/
if (overhead > rec->length)
return 0;
padding_length = rec->data[rec->length - 1];
good = constant_time_ge(rec->length, padding_length + overhead);
/* SSLv3 requires that the padding is minimal. */
good &= constant_time_ge(block_size, padding_length + 1);
padding_length = good & (padding_length + 1);
rec->length -= padding_length;
rec->type |= padding_length << 8; /* kludge: pass padding length */
return constant_time_select_int(good, 1, -1);
}
/*-
* tls1_cbc_remove_padding removes the CBC padding from the decrypted, TLS, CBC
* record in |rec| in constant time and returns 1 if the padding is valid and
* -1 otherwise. It also removes any explicit IV from the start of the record
* without leaking any timing about whether there was enough space after the
* padding was removed.
*
* block_size: the block size of the cipher used to encrypt the record.
* returns:
* 0: (in non-constant time) if the record is publicly invalid.
* 1: if the padding was valid
* -1: otherwise.
*/
int tls1_cbc_remove_padding(const SSL *s,
SSL3_RECORD *rec,
unsigned block_size, unsigned mac_size)
{
unsigned padding_length, good, to_check, i;
const unsigned overhead = 1 /* padding length byte */ + mac_size;
/* Check if version requires explicit IV */
if (SSL_USE_EXPLICIT_IV(s)) {
/*
* These lengths are all public so we can test them in non-constant
* time.
*/
if (overhead + block_size > rec->length)
return 0;
/* We can now safely skip explicit IV */
rec->data += block_size;
rec->input += block_size;
rec->length -= block_size;
} else if (overhead > rec->length)
return 0;
padding_length = rec->data[rec->length - 1];
/*
* NB: if compression is in operation the first packet may not be of even
* length so the padding bug check cannot be performed. This bug
* workaround has been around since SSLeay so hopefully it is either
* fixed now or no buggy implementation supports compression [steve]
*/
if ((s->options & SSL_OP_TLS_BLOCK_PADDING_BUG) && !s->expand) {
/* First packet is even in size, so check */
if ((CRYPTO_memcmp(s->s3->read_sequence, "\0\0\0\0\0\0\0\0", 8) == 0) &&
!(padding_length & 1)) {
s->s3->flags |= TLS1_FLAGS_TLS_PADDING_BUG;
}
if ((s->s3->flags & TLS1_FLAGS_TLS_PADDING_BUG) && padding_length > 0) {
padding_length--;
}
}
if (EVP_CIPHER_flags(s->enc_read_ctx->cipher) & EVP_CIPH_FLAG_AEAD_CIPHER) {
/* padding is already verified */
rec->length -= padding_length + 1;
return 1;
}
good = constant_time_ge(rec->length, overhead + padding_length);
/*
* The padding consists of a length byte at the end of the record and
* then that many bytes of padding, all with the same value as the length
* byte. Thus, with the length byte included, there are padding_length+1 bytes of
* padding. We can't check just |padding_length+1| bytes because that
* leaks decrypted information. Therefore we always have to check the
* maximum amount of padding possible. (Again, the length of the record
* is public information so we can use it.)
*/
to_check = 255; /* maximum amount of padding. */
if (to_check > rec->length - 1)
to_check = rec->length - 1;
for (i = 0; i < to_check; i++) {
unsigned char mask = constant_time_ge_8(padding_length, i);
unsigned char b = rec->data[rec->length - 1 - i];
/*
* The final |padding_length+1| bytes should all have the value
* |padding_length|. Therefore the XOR should be zero.
*/
good &= ~(mask & (padding_length ^ b));
}
/*
* If any of the final |padding_length+1| bytes had the wrong value, one
* or more of the lower eight bits of |good| will be cleared.
*/
good = constant_time_eq(0xff, good & 0xff);
padding_length = good & (padding_length + 1);
rec->length -= padding_length;
rec->type |= padding_length << 8; /* kludge: pass padding length */
return constant_time_select_int(good, 1, -1);
}
/*-
* ssl3_cbc_copy_mac copies |md_size| bytes from the end of |rec| to |out| in
* constant time (independent of the concrete value of rec->length, which may
* vary within a 256-byte window).
*
* ssl3_cbc_remove_padding or tls1_cbc_remove_padding must be called prior to
* this function.
*
* On entry:
* rec->orig_len >= md_size
* md_size <= EVP_MAX_MD_SIZE
*
* If CBC_MAC_ROTATE_IN_PLACE is defined then the rotation is performed with
* variable accesses in a 64-byte-aligned buffer. Assuming that this fits into
* a single or pair of cache-lines, then the variable memory accesses don't
* actually affect the timing. CPUs with smaller cache-lines [if any] are
* not multi-core and are not considered vulnerable to cache-timing attacks.
*/
#define CBC_MAC_ROTATE_IN_PLACE
void ssl3_cbc_copy_mac(unsigned char *out,
const SSL3_RECORD *rec,
unsigned md_size, unsigned orig_len)
{
#if defined(CBC_MAC_ROTATE_IN_PLACE)
unsigned char rotated_mac_buf[64 + EVP_MAX_MD_SIZE];
unsigned char *rotated_mac;
#else
unsigned char rotated_mac[EVP_MAX_MD_SIZE];
#endif
/*
* mac_end is the index of |rec->data| just after the end of the MAC.
*/
unsigned mac_end = rec->length;
unsigned mac_start = mac_end - md_size;
/*
* scan_start contains the number of bytes that we can ignore because the
* MAC's position can only vary by 255 bytes.
*/
unsigned scan_start = 0;
unsigned i, j;
unsigned div_spoiler;
unsigned rotate_offset;
OPENSSL_assert(orig_len >= md_size);
OPENSSL_assert(md_size <= EVP_MAX_MD_SIZE);
#if defined(CBC_MAC_ROTATE_IN_PLACE)
rotated_mac = rotated_mac_buf + ((0 - (size_t)rotated_mac_buf) & 63);
#endif
/* This information is public so it's safe to branch based on it. */
if (orig_len > md_size + 255 + 1)
scan_start = orig_len - (md_size + 255 + 1);
/*
* div_spoiler contains a multiple of md_size that is used to cause the
* modulo operation to be constant time. Without this, the time varies
* based on the amount of padding when running on Intel chips at least.
* The aim of right-shifting md_size is so that the compiler doesn't
* figure out that it can remove div_spoiler as that would require it to
* prove that md_size is always even, which I hope is beyond it.
*/
div_spoiler = md_size >> 1;
div_spoiler <<= (sizeof(div_spoiler) - 1) * 8;
rotate_offset = (div_spoiler + mac_start - scan_start) % md_size;
memset(rotated_mac, 0, md_size);
for (i = scan_start, j = 0; i < orig_len; i++) {
unsigned char mac_started = constant_time_ge_8(i, mac_start);
unsigned char mac_ended = constant_time_ge_8(i, mac_end);
unsigned char b = rec->data[i];
rotated_mac[j++] |= b & mac_started & ~mac_ended;
j &= constant_time_lt(j, md_size);
}
/* Now rotate the MAC */
#if defined(CBC_MAC_ROTATE_IN_PLACE)
j = 0;
for (i = 0; i < md_size; i++) {
/* in case cache-line is 32 bytes, touch second line */
((volatile unsigned char *)rotated_mac)[rotate_offset ^ 32];
out[j++] = rotated_mac[rotate_offset++];
rotate_offset &= constant_time_lt(rotate_offset, md_size);
}
#else
memset(out, 0, md_size);
rotate_offset = md_size - rotate_offset;
rotate_offset &= constant_time_lt(rotate_offset, md_size);
for (i = 0; i < md_size; i++) {
for (j = 0; j < md_size; j++)
out[j] |= rotated_mac[i] & constant_time_eq_8(j, rotate_offset);
rotate_offset++;
rotate_offset &= constant_time_lt(rotate_offset, md_size);
}
#endif
}
/*
* u32toLE serialises an unsigned, 32-bit number (n) as four bytes at (p) in
* little-endian order. The value of p is advanced by four.
*/
#define u32toLE(n, p) \
(*((p)++)=(unsigned char)(n), \
*((p)++)=(unsigned char)(n>>8), \
*((p)++)=(unsigned char)(n>>16), \
*((p)++)=(unsigned char)(n>>24))
/*
* These functions serialize the state of a hash and thus perform the
* standard "final" operation without adding the padding and length that such
* a function typically does.
*/
static void tls1_md5_final_raw(void *ctx, unsigned char *md_out)
{
MD5_CTX *md5 = ctx;
u32toLE(md5->A, md_out);
u32toLE(md5->B, md_out);
u32toLE(md5->C, md_out);
u32toLE(md5->D, md_out);
}
static void tls1_sha1_final_raw(void *ctx, unsigned char *md_out)
{
SHA_CTX *sha1 = ctx;
l2n(sha1->h0, md_out);
l2n(sha1->h1, md_out);
l2n(sha1->h2, md_out);
l2n(sha1->h3, md_out);
l2n(sha1->h4, md_out);
}
#define LARGEST_DIGEST_CTX SHA_CTX
#ifndef OPENSSL_NO_SHA256
static void tls1_sha256_final_raw(void *ctx, unsigned char *md_out)
{
SHA256_CTX *sha256 = ctx;
unsigned i;
for (i = 0; i < 8; i++) {
l2n(sha256->h[i], md_out);
}
}
# undef LARGEST_DIGEST_CTX
# define LARGEST_DIGEST_CTX SHA256_CTX
#endif
#ifndef OPENSSL_NO_SHA512
static void tls1_sha512_final_raw(void *ctx, unsigned char *md_out)
{
SHA512_CTX *sha512 = ctx;
unsigned i;
for (i = 0; i < 8; i++) {
l2n8(sha512->h[i], md_out);
}
}
# undef LARGEST_DIGEST_CTX
# define LARGEST_DIGEST_CTX SHA512_CTX
#endif
/*
* ssl3_cbc_record_digest_supported returns 1 iff |ctx| uses a hash function
* which ssl3_cbc_digest_record supports.
*/
char ssl3_cbc_record_digest_supported(const EVP_MD_CTX *ctx)
{
#ifdef OPENSSL_FIPS
if (FIPS_mode())
return 0;
#endif
switch (EVP_MD_CTX_type(ctx)) {
case NID_md5:
case NID_sha1:
#ifndef OPENSSL_NO_SHA256
case NID_sha224:
case NID_sha256:
#endif
#ifndef OPENSSL_NO_SHA512
case NID_sha384:
case NID_sha512:
#endif
return 1;
default:
return 0;
}
}
/*-
* ssl3_cbc_digest_record computes the MAC of a decrypted, padded SSLv3/TLS
* record.
*
* ctx: the EVP_MD_CTX from which we take the hash function.
* ssl3_cbc_record_digest_supported must return true for this EVP_MD_CTX.
* md_out: the digest output. At most EVP_MAX_MD_SIZE bytes will be written.
* md_out_size: if non-NULL, the number of output bytes is written here.
* header: the 13-byte, TLS record header.
* data: the record data itself, less any preceding explicit IV.
* data_plus_mac_size: the secret, reported length of the data and MAC
* once the padding has been removed.
* data_plus_mac_plus_padding_size: the public length of the whole
* record, including padding.
* is_sslv3: non-zero if we are to use SSLv3. Otherwise, TLS.
*
* On entry: by virtue of having been through one of the remove_padding
* functions, above, we know that data_plus_mac_size is large enough to contain
* a padding byte and MAC. (If the padding was invalid, it might contain the
* padding too. )
* Returns 1 on success or 0 on error
*/
int ssl3_cbc_digest_record(const EVP_MD_CTX *ctx,
unsigned char *md_out,
size_t *md_out_size,
const unsigned char header[13],
const unsigned char *data,
size_t data_plus_mac_size,
size_t data_plus_mac_plus_padding_size,
const unsigned char *mac_secret,
unsigned mac_secret_length, char is_sslv3)
{
union {
double align;
unsigned char c[sizeof(LARGEST_DIGEST_CTX)];
} md_state;
void (*md_final_raw) (void *ctx, unsigned char *md_out);
void (*md_transform) (void *ctx, const unsigned char *block);
unsigned md_size, md_block_size = 64;
unsigned sslv3_pad_length = 40, header_length, variance_blocks,
len, max_mac_bytes, num_blocks,
num_starting_blocks, k, mac_end_offset, c, index_a, index_b;
unsigned int bits; /* at most 18 bits */
unsigned char length_bytes[MAX_HASH_BIT_COUNT_BYTES];
/* hmac_pad is the masked HMAC key. */
unsigned char hmac_pad[MAX_HASH_BLOCK_SIZE];
unsigned char first_block[MAX_HASH_BLOCK_SIZE];
unsigned char mac_out[EVP_MAX_MD_SIZE];
unsigned i, j, md_out_size_u;
EVP_MD_CTX md_ctx;
/*
* md_length_size is the number of bytes in the length field that
* terminates the hash.
*/
unsigned md_length_size = 8;
char length_is_big_endian = 1;
/*
* This is a, hopefully redundant, check that allows us to forget about
* many possible overflows later in this function.
*/
OPENSSL_assert(data_plus_mac_plus_padding_size < 1024 * 1024);
switch (EVP_MD_CTX_type(ctx)) {
case NID_md5:
if (MD5_Init((MD5_CTX *)md_state.c) <= 0)
return 0;
md_final_raw = tls1_md5_final_raw;
md_transform =
(void (*)(void *ctx, const unsigned char *block))MD5_Transform;
md_size = 16;
sslv3_pad_length = 48;
length_is_big_endian = 0;
break;
case NID_sha1:
if (SHA1_Init((SHA_CTX *)md_state.c) <= 0)
return 0;
md_final_raw = tls1_sha1_final_raw;
md_transform =
(void (*)(void *ctx, const unsigned char *block))SHA1_Transform;
md_size = 20;
break;
#ifndef OPENSSL_NO_SHA256
case NID_sha224:
if (SHA224_Init((SHA256_CTX *)md_state.c) <= 0)
return 0;
md_final_raw = tls1_sha256_final_raw;
md_transform =
(void (*)(void *ctx, const unsigned char *block))SHA256_Transform;
md_size = 224 / 8;
break;
case NID_sha256:
if (SHA256_Init((SHA256_CTX *)md_state.c) <= 0)
return 0;
md_final_raw = tls1_sha256_final_raw;
md_transform =
(void (*)(void *ctx, const unsigned char *block))SHA256_Transform;
md_size = 32;
break;
#endif
#ifndef OPENSSL_NO_SHA512
case NID_sha384:
if (SHA384_Init((SHA512_CTX *)md_state.c) <= 0)
return 0;
md_final_raw = tls1_sha512_final_raw;
md_transform =
(void (*)(void *ctx, const unsigned char *block))SHA512_Transform;
md_size = 384 / 8;
md_block_size = 128;
md_length_size = 16;
break;
case NID_sha512:
if (SHA512_Init((SHA512_CTX *)md_state.c) <= 0)
return 0;
md_final_raw = tls1_sha512_final_raw;
md_transform =
(void (*)(void *ctx, const unsigned char *block))SHA512_Transform;
md_size = 64;
md_block_size = 128;
md_length_size = 16;
break;
#endif
default:
/*
* ssl3_cbc_record_digest_supported should have been called first to
* check that the hash function is supported.
*/
OPENSSL_assert(0);
if (md_out_size)
*md_out_size = 0;
return 0;
}
OPENSSL_assert(md_length_size <= MAX_HASH_BIT_COUNT_BYTES);
OPENSSL_assert(md_block_size <= MAX_HASH_BLOCK_SIZE);
OPENSSL_assert(md_size <= EVP_MAX_MD_SIZE);
header_length = 13;
if (is_sslv3) {
header_length = mac_secret_length + sslv3_pad_length + 8 /* sequence
* number */ +
1 /* record type */ +
2 /* record length */ ;
}
/*
* variance_blocks is the number of blocks of the hash that we have to
* calculate in constant time because they could be altered by the
* padding value. In SSLv3, the padding must be minimal so the end of
* the plaintext varies by, at most, 15+20 = 35 bytes. (We conservatively
* assume that the MAC size varies from 0..20 bytes.) In case the 9 bytes
* of hash termination (0x80 + 64-bit length) don't fit in the final
* block, we say that the final two blocks can vary based on the padding.
* TLSv1 has MACs up to 48 bytes long (SHA-384) and the padding is not
* required to be minimal. Therefore we say that the final six blocks can
* vary based on the padding. Later in the function, if the message is
* short and there obviously cannot be this many blocks then
* variance_blocks can be reduced.
*/
variance_blocks = is_sslv3 ? 2 : 6;
/*
* From now on we're dealing with the MAC, which conceptually has 13
* bytes of `header' before the start of the data (TLS) or 71/75 bytes
* (SSLv3)
*/
len = data_plus_mac_plus_padding_size + header_length;
/*
* max_mac_bytes contains the maximum number of bytes in the MAC,
* including |header|, assuming that there's no padding.
*/
max_mac_bytes = len - md_size - 1;
/* num_blocks is the maximum number of hash blocks. */
num_blocks =
(max_mac_bytes + 1 + md_length_size + md_block_size -
1) / md_block_size;
/*
* In order to calculate the MAC in constant time we have to handle the
* final blocks specially because the padding value could cause the end
* to appear somewhere in the final |variance_blocks| blocks and we can't
* leak where. However, |num_starting_blocks| worth of data can be hashed
* right away because no padding value can affect whether they are
* plaintext.
*/
num_starting_blocks = 0;
/*
* k is the starting byte offset into the conceptual header||data where
* we start processing.
*/
k = 0;
/*
* mac_end_offset is the index just past the end of the data to be MACed.
*/
mac_end_offset = data_plus_mac_size + header_length - md_size;
/*
* c is the index of the 0x80 byte in the final hash block that contains
* application data.
*/
c = mac_end_offset % md_block_size;
/*
* index_a is the hash block number that contains the 0x80 terminating
* value.
*/
index_a = mac_end_offset / md_block_size;
/*
* index_b is the hash block number that contains the 64-bit hash length,
* in bits.
*/
index_b = (mac_end_offset + md_length_size) / md_block_size;
/*
* bits is the hash-length in bits. It includes the additional hash block
* for the masked HMAC key, or whole of |header| in the case of SSLv3.
*/
/*
* For SSLv3, if we're going to have any starting blocks then we need at
* least two because the header is larger than a single block.
*/
if (num_blocks > variance_blocks + (is_sslv3 ? 1 : 0)) {
num_starting_blocks = num_blocks - variance_blocks;
k = md_block_size * num_starting_blocks;
}
bits = 8 * mac_end_offset;
if (!is_sslv3) {
/*
* Compute the initial HMAC block. For SSLv3, the padding and secret
* bytes are included in |header| because they take more than a
* single block.
*/
bits += 8 * md_block_size;
memset(hmac_pad, 0, md_block_size);
OPENSSL_assert(mac_secret_length <= sizeof(hmac_pad));
memcpy(hmac_pad, mac_secret, mac_secret_length);
for (i = 0; i < md_block_size; i++)
hmac_pad[i] ^= 0x36;
md_transform(md_state.c, hmac_pad);
}
if (length_is_big_endian) {
memset(length_bytes, 0, md_length_size - 4);
length_bytes[md_length_size - 4] = (unsigned char)(bits >> 24);
length_bytes[md_length_size - 3] = (unsigned char)(bits >> 16);
length_bytes[md_length_size - 2] = (unsigned char)(bits >> 8);
length_bytes[md_length_size - 1] = (unsigned char)bits;
} else {
memset(length_bytes, 0, md_length_size);
length_bytes[md_length_size - 5] = (unsigned char)(bits >> 24);
length_bytes[md_length_size - 6] = (unsigned char)(bits >> 16);
length_bytes[md_length_size - 7] = (unsigned char)(bits >> 8);
length_bytes[md_length_size - 8] = (unsigned char)bits;
}
if (k > 0) {
if (is_sslv3) {
unsigned overhang;
/*
* The SSLv3 header is larger than a single block. overhang is
* the number of bytes beyond a single block that the header
* consumes: either 7 bytes (SHA1) or 11 bytes (MD5). There are no
* ciphersuites in SSLv3 that are not SHA1 or MD5 based and
* therefore we can be confident that the header_length will be
* greater than |md_block_size|. However we add a sanity check just
* in case
*/
if (header_length <= md_block_size) {
/* Should never happen */
return 0;
}
overhang = header_length - md_block_size;
md_transform(md_state.c, header);
memcpy(first_block, header + md_block_size, overhang);
memcpy(first_block + overhang, data, md_block_size - overhang);
md_transform(md_state.c, first_block);
for (i = 1; i < k / md_block_size - 1; i++)
md_transform(md_state.c, data + md_block_size * i - overhang);
} else {
/* k is a multiple of md_block_size. */
memcpy(first_block, header, 13);
memcpy(first_block + 13, data, md_block_size - 13);
md_transform(md_state.c, first_block);
for (i = 1; i < k / md_block_size; i++)
md_transform(md_state.c, data + md_block_size * i - 13);
}
}
memset(mac_out, 0, sizeof(mac_out));
/*
* We now process the final hash blocks. For each block, we construct it
* in constant time. If the |i==index_a| then we'll include the 0x80
* bytes and zero pad etc. For each block we selectively copy it, in
* constant time, to |mac_out|.
*/
for (i = num_starting_blocks; i <= num_starting_blocks + variance_blocks;
i++) {
unsigned char block[MAX_HASH_BLOCK_SIZE];
unsigned char is_block_a = constant_time_eq_8(i, index_a);
unsigned char is_block_b = constant_time_eq_8(i, index_b);
for (j = 0; j < md_block_size; j++) {
unsigned char b = 0, is_past_c, is_past_cp1;
if (k < header_length)
b = header[k];
else if (k < data_plus_mac_plus_padding_size + header_length)
b = data[k - header_length];
k++;
is_past_c = is_block_a & constant_time_ge_8(j, c);
is_past_cp1 = is_block_a & constant_time_ge_8(j, c + 1);
/*
* If this is the block containing the end of the application
* data, and we are at the offset for the 0x80 value, then
* overwrite b with 0x80.
*/
b = constant_time_select_8(is_past_c, 0x80, b);
/*
* If this is the block containing the end of the application
* data and we're past the 0x80 value then just write zero.
*/
b = b & ~is_past_cp1;
/*
* If this is index_b (the final block), but not index_a (the end
* of the data), then the 64-bit length didn't fit into index_a
* and we're having to add an extra block of zeros.
*/
b &= ~is_block_b | is_block_a;
/*
* The final bytes of one of the blocks contains the length.
*/
if (j >= md_block_size - md_length_size) {
/* If this is index_b, write a length byte. */
b = constant_time_select_8(is_block_b,
length_bytes[j -
(md_block_size -
md_length_size)], b);
}
block[j] = b;
}
md_transform(md_state.c, block);
md_final_raw(md_state.c, block);
/* If this is index_b, copy the hash value to |mac_out|. */
for (j = 0; j < md_size; j++)
mac_out[j] |= block[j] & is_block_b;
}
EVP_MD_CTX_init(&md_ctx);
if (EVP_DigestInit_ex(&md_ctx, ctx->digest, NULL /* engine */ ) <= 0)
goto err;
if (is_sslv3) {
/* We repurpose |hmac_pad| to contain the SSLv3 pad2 block. */
memset(hmac_pad, 0x5c, sslv3_pad_length);
if (EVP_DigestUpdate(&md_ctx, mac_secret, mac_secret_length) <= 0
|| EVP_DigestUpdate(&md_ctx, hmac_pad, sslv3_pad_length) <= 0
|| EVP_DigestUpdate(&md_ctx, mac_out, md_size) <= 0)
goto err;
} else {
/* Complete the HMAC in the standard manner. */
for (i = 0; i < md_block_size; i++)
hmac_pad[i] ^= 0x6a;
if (EVP_DigestUpdate(&md_ctx, hmac_pad, md_block_size) <= 0
|| EVP_DigestUpdate(&md_ctx, mac_out, md_size) <= 0)
goto err;
}
EVP_DigestFinal(&md_ctx, md_out, &md_out_size_u);
if (md_out_size)
*md_out_size = md_out_size_u;
EVP_MD_CTX_cleanup(&md_ctx);
return 1;
err:
EVP_MD_CTX_cleanup(&md_ctx);
return 0;
}
#ifdef OPENSSL_FIPS
/*
* Due to the need to use EVP in FIPS mode we can't reimplement digests but
* we can ensure the number of blocks processed is equal for all cases by
* digesting additional data.
*/
void tls_fips_digest_extra(const EVP_CIPHER_CTX *cipher_ctx,
EVP_MD_CTX *mac_ctx, const unsigned char *data,
size_t data_len, size_t orig_len)
{
size_t block_size, digest_pad, blocks_data, blocks_orig;
if (EVP_CIPHER_CTX_mode(cipher_ctx) != EVP_CIPH_CBC_MODE)
return;
block_size = EVP_MD_CTX_block_size(mac_ctx);
/*-
* We are in FIPS mode if we get this far so we know we have only SHA*
* digests and TLS to deal with.
* Minimum digest padding length is 17 for SHA384/SHA512 and 9
* otherwise.
* Additional header is 13 bytes. To get the number of digest blocks
* processed round up the amount of data plus padding to the nearest
* block length. Block length is 128 for SHA384/SHA512 and 64 otherwise.
* So we have:
* blocks = (payload_len + digest_pad + 13 + block_size - 1)/block_size
* equivalently:
* blocks = (payload_len + digest_pad + 12)/block_size + 1
* HMAC adds a constant overhead.
* We're ultimately only interested in differences so this becomes
* blocks = (payload_len + 29)/128
* for SHA384/SHA512 and
* blocks = (payload_len + 21)/64
* otherwise.
*/
digest_pad = block_size == 64 ? 21 : 29;
blocks_orig = (orig_len + digest_pad) / block_size;
blocks_data = (data_len + digest_pad) / block_size;
/*
* MAC enough blocks to make up the difference between the original and
* actual lengths plus one extra block to ensure this is never a no op.
* The "data" pointer should always have enough space to perform this
* operation as it is large enough for a maximum length TLS buffer.
*/
EVP_DigestSignUpdate(mac_ctx, data,
(blocks_orig - blocks_data + 1) * block_size);
}
#endif
```
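The file above leans on a single idiom: every secret-dependent comparison is computed as an all-ones/all-zeros mask so that no branch depends on the decrypted padding. The following standalone sketch (NOT OpenSSL's `constant_time_locl.h`; the helper names `ct_ge`, `ct_select_int`, and `ct_padding_check` are illustrative only) shows how the public checks of `ssl3_cbc_remove_padding` work on scalar inputs, assuming all values stay below 2^31 so that unsigned subtraction exposes the comparison result in the top bit.

```c
#include <assert.h>

/*
 * All-ones mask if a >= b, all-zeros otherwise, with no secret-dependent
 * branch. Assumes a, b < 2^31 so (a - b) has its top bit set iff a < b.
 */
unsigned ct_ge(unsigned a, unsigned b)
{
    unsigned lt = (a - b) >> (sizeof(unsigned) * 8 - 1); /* 1 iff a < b */
    return lt - 1u; /* 1 -> 0x00000000, 0 -> 0xFFFFFFFF */
}

/* Select a if mask is all-ones, b if mask is all-zeros. */
int ct_select_int(unsigned mask, int a, int b)
{
    return (int)((mask & (unsigned)a) | (~mask & (unsigned)b));
}

/*
 * Mirrors the checks of ssl3_cbc_remove_padding on scalar inputs:
 * 0 for a publicly invalid record, 1 for valid padding, -1 otherwise.
 */
int ct_padding_check(unsigned rec_len, unsigned block_size,
                     unsigned mac_size, unsigned padding_length)
{
    const unsigned overhead = 1 /* padding length byte */ + mac_size;
    unsigned good;

    if (overhead > rec_len) /* lengths are public: branching is fine */
        return 0;
    good = ct_ge(rec_len, padding_length + overhead);
    /* SSLv3 requires minimal padding: padding_length + 1 <= block_size. */
    good &= ct_ge(block_size, padding_length + 1);
    return ct_select_int(good, 1, -1);
}
```

The real helpers (in `crypto/constant_time_locl.h`) extract the most-significant bit more carefully so the full `unsigned` range is handled; this sketch trades that generality for readability.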
|
Sphicosa is a genus of flies in the family Empididae.
Species
S. albipennis Smith, 1962
S. coriacea (Bigot, 1889)
S. globosa Smith, 1962
S. lecta Collin, 1933
S. longirostris Smith, 1962
S. nigra Philippi, 1865
S. plaumanni Smith, 1962
S. setipalpis Smith, 1962
S. uniseta Smith, 1962
|
The Wrocław University of Environmental and Life Sciences (UPWr; formerly the Agricultural Academy and then the Agricultural University in Wrocław) is a state university established as an independent institution in 1951. It is one of Poland's leading specialist universities, conducting teaching and research in the food, environmental, and veterinary sciences.
In the Perspektywy ranking – the most prestigious and comprehensive ranking of universities in Poland – UPWr placed second in 2020 among natural-science and agricultural universities and 25th among all universities in the country. In addition, two of its degree programmes – geodesy and cartography, and food science – were rated the best in the country.
For several years the university has been listed in the international Shanghai Ranking among the world's best universities in the fields of Food Science & Technology, Veterinary Science, and Chemical Engineering.
History
1856–1945. The Lviv Academy of Veterinary Medicine was established in 1881 as the third institution of its kind in Poland, alongside those in Vilnius and Warsaw. The Faculty of Agriculture dates back to 1856, when the Rural Agricultural School was opened in Dublany near Lviv, in what was then eastern Poland. Initiated by the Galician Parliament and later confirmed by a decree of the Minister of Agriculture, the school was transformed into the School of Agriculture, which gained the status of the Polish Academy of Agriculture in 1901. In 1919 the academy, together with the School of Forestry, was incorporated into the Lviv Polytechnic, resulting in the establishment of the Faculty of Agriculture and Forestry; this last transformation was decreed by the Council of Ministers.
Breslau. In 1881 the Institute of Agriculture was opened at the Royal University of Breslau. The institute's address was 5 Mattiaplatz, and from 1923 to 1945 it was at 25 Hansastrasse (today C.K. Norwida Street), now the location of the main building of the Wrocław University of Environmental and Life Sciences. The academic research facilities and scholars placed there provided the foundation for future university development.
1945–1951 Wroclaw. On 24 August 1945 the State National Council signed a decree to establish a completely new institution of higher education called the State University and Polytechnic in Wroclaw. The university comprised ten faculties, having as parts the Faculty of Veterinary Medicine and the Faculty of Agriculture with the Gardening Division. The
building of the Institute of Agriculture housed academic facilities and the
scholars from the Faculty of Agriculture and Forestry of the Lviv Polytechnic, together with the professors of the Academy of Veterinary Medicine in Lwów, who became the academic staff of the newly established university. In 1945, 302 students enrolled in the first year to study veterinary medicine and agriculture.
In 1951 the School of Agriculture was separated from the State University and Polytechnic in Wroclaw by a decree of the Council of Ministers of 17 November 1951 and became a separate entity. The newly created institution included four faculties: the Faculty of Agriculture, the Faculty of Veterinary Medicine, the Faculty of Water Reclamation and the Faculty of Zoology. The School of Agriculture gained the status of the Wroclaw Academy of Agriculture on 28 September 1972 by a decree of the Council of Ministers. A government bill of 23 November 2006 renamed the Wroclaw Academy of Agriculture the Wroclaw University of Environmental and Life Sciences. Today the university is an interdisciplinary institution with a focus on environmental and natural sciences; its structure comprises five faculties and several interdepartmental units.
Rectors
1951–1954: prof. dr hab. Stanisław Tołpa – botanist
1954–1955: prof. dr hab. Alfred Senze – physiopathologist
1955–1959: prof. dr hab. Aleksander Tychowski – agricultural technologist
1959–1965: prof. dr hab. Alfred Senze – physiopathologist
1965–1969: prof. dr hab. Tadeusz Garbuliński – veterinary pharmacologist
1969–1981: prof. dr hab. Ryszard Badura – veterinary surgeon
1981–1981: prof. dr hab. Józef Dzieżyc – agrotechnician
1982–1984: prof. dr hab. Henryk Balbierz – pathophysiologist
1984–1986: prof. dr hab. Bronisław Jabłoński – agrotechnician
1986–1990: prof. dr hab. Jerzy Juszczak – animal technician
1990–1996: prof. dr hab. Jerzy Kowalski – hydrologist
1996–2002: prof. dr hab. Tadeusz Szulc – animal technician
2002–2008: prof. dr hab. Michał Mazurkiewicz – poultry pathologist
2008–2016: prof. dr hab. Roman Kołacz – animal technician
2016–2020: prof. dr hab. inż. Tadeusz Trziszka – food technologist
Since 2020: prof. dr hab. inż. Jarosław Bosy – surveyor
Courses of study
The university currently offers twenty-three first-cycle (bachelor's or engineering) and second-cycle (supplementary master's) degree programmes, as well as long-cycle master's programmes, at its five faculties.
Degree-awarding powers and leading disciplines
The Wrocław University of Environmental and Life Sciences is authorized to conduct first-cycle and second-cycle studies in 28 fields of study, third-cycle studies (the UPWr Doctoral School), and postgraduate studies. These powers fall within seven leading disciplines across three fields:
Field of Agricultural Sciences:
veterinary science
animal science and fisheries
agriculture and horticulture
nutrition and food technology
Field of exact and natural sciences:
biological sciences
Field of engineering and technical sciences:
environmental engineering, mining and energy
civil engineering and transport
References
External links
Recruitment to the Wrocław University of Environmental and Life Sciences
Movies about the Wrocław University of Environmental and Life Sciences
Universities and colleges in Wrocław
Agricultural universities and colleges in Poland
Universities and colleges in Poland
|
```text
\S[300]\s[80]OUT=$(TERM=vt100 LANG=C smenu -c -n 4 t0007.in)
\S[300]\s[200]your_sha256_hashhhjjjhhhhhh\
\W[75x24]\S[2000]\W[45x24]\S[2000]\W[55x24]\S[2000]\r
\S[300]\s[80]echo ":$\s[80]OUT:"
exit 0
```
|
```yaml
PODS:
- acrawriter (1.0.3):
- themis (~> 0.10.4)
- GRKOpenSSLFramework (1.0.2.16)
- themis (0.10.4):
- themis/themis-openssl (= 0.10.4)
- themis/themis-openssl (0.10.4):
- GRKOpenSSLFramework (~> 1.0.1)
- themis/themis-openssl/core (= 0.10.4)
- themis/themis-openssl/objcwrapper (= 0.10.4)
- themis/themis-openssl/core (0.10.4):
- GRKOpenSSLFramework (~> 1.0.1)
- themis/themis-openssl/objcwrapper (0.10.4):
- GRKOpenSSLFramework (~> 1.0.1)
- themis/themis-openssl/core
DEPENDENCIES:
- acrawriter (= 1.0.3)
SPEC REPOS:
path_to_url
- acrawriter
- GRKOpenSSLFramework
- themis
SPEC CHECKSUMS:
acrawriter: ea9dab06ca801176c3e3299a04b488f3b5f47b56
GRKOpenSSLFramework: 35944e317e6336b2944ad70b059d84db6b2d8532
themis: 56654bb58700ece55439716058943f8f435b228d
PODFILE CHECKSUM: 68ba2374d972b2371807f2fbaf28f4d1dc74052f
COCOAPODS: 1.7.0
```
|
```python
#!/pxrpythonsubst
#
#
# path_to_url
from __future__ import print_function
from pxr import Ar, Tf, Sdf, Usd, UsdMtlx, UsdShade
import unittest
def _EmptyLayer():
stage = Usd.Stage.CreateInMemory()
return stage.GetRootLayer().ExportToString()
class TestFileFormat(unittest.TestCase):
def test_EmptyFile(self):
"""
Verify that an empty MaterialX document fails.
"""
with self.assertRaises(Tf.ErrorException) as e:
UsdMtlx._TestString('')
def test_MissingFile(self):
"""
Verify that a missing MaterialX file fails.
"""
with self.assertRaises(Tf.ErrorException) as e:
UsdMtlx._TestFile('non-existent-file.xml')
def test_BadMagic(self):
"""
Verify that a MaterialX file with a bad XML header fails.
"""
with self.assertRaises(Tf.ErrorException) as e:
UsdMtlx._TestString('''<?not_xml version="1.0" ?>''')
def test_EmptyXMLDocument(self):
"""
Verify that a MaterialX file with only an XML header fails.
"""
with self.assertRaises(Tf.ErrorException) as e:
UsdMtlx._TestString('''<?xml version="1.0" ?>''')
def test_MissingMaterialXDocument(self):
"""
Verify that a MaterialX file without a materialx element is okay.
"""
stage = UsdMtlx._TestString(
'''<?xml version="1.0" ?>
<not_materialx version="1.35">
</not_materialx>
''')
self.assertEqual(stage.GetRootLayer().ExportToString(),
_EmptyLayer())
def test_EmptyMaterialXDocument(self):
"""
        Verify that a file with an empty materialx element is okay.
"""
stage = UsdMtlx._TestString(
'''<?xml version="1.0" ?>
<materialx version="1.35">
</materialx>
''')
self.assertEqual(stage.GetRootLayer().ExportToString(),
_EmptyLayer())
def test_DuplicateName(self):
"""
Verify that a MaterialX file with duplicate element names fails.
"""
with self.assertRaises(Tf.ErrorException) as e:
UsdMtlx._TestString(
'''<?xml version="1.0" ?>
<materialx version="1.35">
<typedef name="type1">
<typedef name="type1">
</materialx>
''')
def test_Cycle(self):
"""
Verify that a MaterialX file with an inherits cycle fails.
"""
with self.assertRaises(Tf.ErrorException) as e:
UsdMtlx._TestString(
'''<?xml version="1.0" ?>
<materialx version="1.35">
<nodedef name="n1" type="float" node="test" inherit="n2">
<nodedef name="n2" type="float" node="test" inherit="n1">
</materialx>
''')
def test_NodeGraphs(self):
"""
Test general MaterialX node graph conversions.
"""
stage = UsdMtlx._TestFile('NodeGraphs.mtlx', nodeGraphs=True)
stage.GetRootLayer().Export('NodeGraphs.usda')
def test_MultiBindInputs(self):
"""
        Test MaterialX conversion with multiple bind inputs.
"""
stage = UsdMtlx._TestFile('MultiBindInputs.mtlx')
# Get the node graph and make sure there are exactly 3 inputs
nodeGraph = UsdShade.NodeGraph.Get(stage,
Sdf.Path('/MaterialX/Materials/layered/ND_layerShader'))
inputs = nodeGraph.GetInputs()
self.assertEqual(len(inputs), 3)
# Make sure each input is connected as expected
inputToSource = {
'weight_1':
'/MaterialX/Materials/layered/NodeGraphs/layered_layer1_gradient',
'weight_2':
'/MaterialX/Materials/layered/NodeGraphs/layered_layer2_gradient',
'weight_3':
'/MaterialX/Materials/layered/NodeGraphs/layered_layer3_gradient'
}
for inputName, source in inputToSource.items():
input = nodeGraph.GetInput(inputName)
self.assertEqual(input.HasConnectedSource(), True)
self.assertEqual(
input.GetConnectedSources()[0][0].source.GetPath(), source)
def test_MultiOutputNodes(self):
"""
Test MaterialX nodes with multiple outputs
"""
stage = UsdMtlx._TestFile('MultiOutputNode.mtlx')
testInfo = [
('/MaterialX/Materials/test_m/test_ng/specular',
'artistic_ior', 'extinction'),
('/MaterialX/Materials/test_m/test_ng/ior',
'artistic_ior', 'ior')
]
for path, connNodeName, connectionName in testInfo:
node = UsdShade.Shader.Get(stage, path)
conn = node.GetInput('in').GetConnectedSources()[0][0]
self.assertEqual(conn.source.GetPath().name, connNodeName)
self.assertEqual(conn.sourceName, connectionName)
def test_nodesWithoutNodegraphs(self):
"""
Test MaterialX material with nodes not contained in a nodegraph and no
explicit outputs
"""
stage = UsdMtlx._TestFile('GraphlessNodes.mtlx')
stage.GetRootLayer().Export('GraphlessNodes.usda')
def test_NodegraphsWithInputs(self):
"""
Test that inputs on nodegraphs are found and connected when used
inside that nodegraph
"""
stage = UsdMtlx._TestFile('NodeGraphInputs.mtlx')
path = '/MaterialX/Materials/test_material/test_nodegraph/mult1'
node = UsdShade.Shader.Get(stage, path)
conn = node.GetInput('in2').GetConnectedSources()[0][0]
self.assertEqual(conn.source.GetPath().name, 'test_nodegraph')
self.assertEqual(conn.sourceName, 'scale')
def test_Looks(self):
"""
Test general MaterialX look conversions.
"""
stage = UsdMtlx._TestFile('Looks.mtlx')
stage.GetRootLayer().Export('Looks.usda')
def test_StdlibShaderRefs(self):
"""
Test that we can use a shader nodedef from the MaterialX stdlib.
"""
stage = UsdMtlx._TestFile('usd_preview_surface_gold.mtlx')
# check stage contents
mprim = stage.GetPrimAtPath("/MaterialX/Materials/USD_Gold")
self.assertTrue(mprim)
material = UsdShade.Material(mprim)
self.assertTrue(material)
input = material.GetInput("specularColor")
self.assertTrue(input)
self.assertEqual(input.GetFullName(),"inputs:specularColor")
def test_customNodeDefs(self):
"""
        Test that custom nodedefs are flattened out and replaced with
their associated nodegraph
"""
stage = UsdMtlx._TestFile('CustomNodeDef.mtlx')
stage.GetRootLayer().Export('CustomNodeDef.usda')
@unittest.skipIf(not hasattr(Ar.Resolver, "CreateIdentifier"),
"Requires Ar 2.0")
def test_XInclude(self):
"""
Verify documents referenced via XInclude statements are read
properly.
"""
stage = UsdMtlx._TestFile('include/Include.mtlx')
stage.GetRootLayer().Export('Include.usda')
stage = UsdMtlx._TestFile('include/Include.usdz[Include.mtlx]')
stage.GetRootLayer().Export('Include_From_Usdz.usda')
@unittest.skipIf(not hasattr(Ar.Resolver, "CreateIdentifier"),
"Requires Ar 2.0")
def test_EmbedInUSDZ(self):
"""
Verify that a MaterialX file can be read from within a .usdz file.
"""
stage = UsdMtlx._TestFile(
'usd_preview_surface_gold.usdz[usd_preview_surface_gold.mtlx]')
stage.GetRootLayer().Export('usd_preview_surface_gold.usda')
def test_Capabilities(self):
self.assertTrue(Sdf.FileFormat.FormatSupportsReading('.mtlx'))
self.assertFalse(Sdf.FileFormat.FormatSupportsWriting('.mtlx'))
self.assertFalse(Sdf.FileFormat.FormatSupportsEditing('.mtlx'))
def test_ExpandFilePrefix(self):
"""
Test active file prefix defined by the fileprefix attribute
in a parent tag.
"""
stage = UsdMtlx._TestFile('ExpandFilePrefix.mtlx')
for nodeName, expectedResult in [
('image_base', 'outer_scope/textures/base.tif'),
('image_spec', 'inner_scope/textures/spec.tif')
]:
primPath = f'/MaterialX/Materials/test_material/test_nodegraph/{nodeName}'
shader = UsdShade.Shader.Get(stage, primPath)
self.assertTrue(shader)
fileInput = shader.GetInput('file')
self.assertTrue(fileInput)
actualResult = fileInput.Get().path
self.assertEqual(actualResult, expectedResult)
if __name__ == '__main__':
unittest.main()
```
|
```csharp
using System.Net.Sockets;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Renci.SshNet.Messages.Transport;
namespace Renci.SshNet.Tests.Classes
{
[TestClass]
public class your_sha256_hashrictKex : SessionTest_ConnectingBase
{
protected override bool ServerSupportsStrictKex
{
get
{
return false;
}
}
protected override void ActionAfterKexInit()
{
var ignoreMessage = new IgnoreMessage();
var ignore = ignoreMessage.GetPacket(8, null);
// MitM sends ignore message to client
_ = ServerSocket.Send(ignore, 4, ignore.Length - 4, SocketFlags.None);
// MitM drops server message
ServerOutboundPacketSequence++;
}
[TestMethod]
public void DoesNotThrowException()
{
Session.Connect();
}
}
}
```
|
Clark County is a county in the U.S. state of South Dakota. As of the 2020 census, the population was 3,837. Its county seat is Clark. The county was created in 1873 and organized in 1881. It was named for Newton Clark, a Dakota Territory legislator in 1873.
Geography
Clark County terrain consists of rolling hills, dotted with lakes and ponds especially in the east central portion. The area is mostly devoted to agriculture. The county has a total area of , of which is land and (1.0%) is water.
Major highways
U.S. Highway 212
South Dakota Highway 20
South Dakota Highway 25
South Dakota Highway 28
Adjacent counties
Day County - north
Codington County - east
Hamlin County - southeast
Kingsbury County - south
Beadle County - southwest
Spink County - west
Protected areas
Christopherson State Public Shooting Area
Dry Lake Number Two State Public Shooting Area
Fordham State Public Shooting Area
McPeek State Public Shooting Area
Stairs Slough State Public Shooting Area
Willow Lake State Public Shooting Area
Lakes and reservoirs
Baileys Lake
Dry Lake Number One
Dry Lake Number Two
Mud Lake
Reid Lake
Swan Lake
Willow Lake
Demographics
2020 census
As of the census of 2020, there were 3,837 people.
2010 census
As of the census of 2010, there were 3,691 people, 1,445 households, and 929 families residing in the county. The population density was . There were 1,710 housing units at an average density of . The racial makeup of the county was 98.1% white, 0.2% black or African American, 0.1% Asian, 0.1% American Indian, 0.8% from other races, and 0.8% from two or more races. Those of Hispanic or Latino origin made up 1.7% of the population. In terms of ancestry, 52.0% were German, 29.4% were Norwegian, 9.7% were Irish, 7.8% were English, 5.4% were Swedish, and 3.5% were American.
Of the 1,445 households, 23.0% had children under the age of 18 living with them, 55.8% were married couples living together, 4.8% had a female householder with no husband present, 35.7% were non-families, and 32.5% of all households were made up of individuals. The average household size was 2.22 and the average family size was 2.79. The median age was 45.7 years.
The median income for a household in the county was $43,894 and the median income for a family was $55,575. Males had a median income of $33,606 versus $24,952 for females. The per capita income for the county was $23,909. About 7.5% of families and 13.1% of the population were below the poverty line, including 24.5% of those under age 18 and 12.6% of those age 65 or over.
Communities
Cities
Clark (county seat)
Willow Lake
Towns
Bradley
Garden City
Naples
Raymond
Vienna
Census-designated place
Collins Colony
Crocker
Fordham Colony
Hillcrest Colony
Mayfield Colony
Silver Lake Colony
Unincorporated communities
Carpenter
Elrod
Townships
Ash
Blaine
Collins
Cottonwood
Darlington
Day
Eden
Fordham
Foxton
Garfield
Hague
Lake
Lincoln
Logan
Maydell
Merton
Mount Pleasant
Pleasant
Raymond
Richland
Rosedale
Spring Valley
Thorp
Warren
Washington
Woodland
Politics
Clark County is a reliable state bellwether, having voted for South Dakota's statewide winner in every presidential election since statehood, similar to Jackson County and Jones County. It is a strongly Republican county, having voted for the Republican candidate in every presidential election since 1968.
See also
National Register of Historic Places listings in Clark County, South Dakota
References
1881 establishments in Dakota Territory
Populated places established in 1881
|
```xml
<Documentation>
<Docs DocId="T:MapKit.MKDirections">
<summary>An Apple-provided route to a destination.</summary>
<remarks>
<para>Application developers should be aware that sending too many routing requests from the same device may lead to a throttling error (see <see cref="F:MapKit.MKErrorCode.LoadingThrottled" />).</para>
<para>To request routing, application developers must set the <c>MKDirectionsApplicationSupportedModes</c> key in the application's <c>info.plist</c> file. The following example shows an automobile routing request:</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var gg = new CLLocationCoordinate2D(37.8197, -122.4786);
var sfo = new CLLocationCoordinate2D(37.6189, -122.3750);
var ggAnnotation = new MKPointAnnotation() { Title = "Golden Gate Bridge", Coordinate = gg };
var sfoAnnotation = new MKPointAnnotation() { Title = "SFO", Coordinate = sfo };
mapView.ShowAnnotations(new MKPointAnnotation[] { ggAnnotation, sfoAnnotation }, false);
var emptyDict = new NSDictionary();
var req = new MKDirectionsRequest() {
Source = new MKMapItem(new MKPlacemark(gg, emptyDict)),
Destination = new MKMapItem(new MKPlacemark(sfo, emptyDict)),
TransportType = MKDirectionsTransportType.Automobile
};
var dir = new MKDirections(req);
dir.CalculateDirections((response, error) => {
if(error == null){
var route = response.Routes[0];
var rteLine = new MKPolylineRenderer(route.Polyline) {
LineWidth = 5.0f,
StrokeColor = UIColor.Purple
};
mapView.GetRendererForOverlay = (mv, ol) => rteLine;
mapView.AddOverlay(route.Polyline, MKOverlayLevel.AboveRoads);
}else{
Console.WriteLine(error);
}
});
]]></code>
</example>
<para>
<img href="~/MapKit/_images/MKDirections.png" alt="Screenshot showing map routing" />
</para>
</remarks>
<related type="externalDocumentation" href="path_to_url">Apple documentation for <c>MKDirections</c></related>
</Docs>
</Documentation>
```
|
Antoni Zdanowski (1895–1948) was a Polish social and union activist, and also an editor of Robotniczy Przegląd Gospodarczy.
In 1917 he became a member of Polish Socialist Party in Russia.
From 1919 to 1920 he was a secretary of the Central Commission of Trade Unions, and from 1925 to 1939 he served as its deputy general secretary. In 1921, he co-founded the Warsaw Housing Cooperative.
During World War II, Zdanowski was a member of the Polish Socialist Party - Liberty, Equality, Independence, and from 1940 to 1945 was manager of its Central Party Administration.
He and his wife, Janina Pajdak, were arrested in 1947 by the Office of Security. She died in prison that same year; he died in prison the following year, 1948.
References
1895 births
1948 deaths
Polish cooperative organizers
Polish trade unionists
Polish editors
Polish Socialist Party politicians
Prisoners who died in Polish People's Republic detention
|
Robert Edison Sandiford (born 1968) is a Canadian novelist, short story writer and essayist. Born in Montreal, Quebec, he co-founded with the poet Linda M. Deane ArtsEtc, a periodical devoted to culture in Barbados. In 2003, his short story "Reckoning" was awarded the Barbados Governor General's Award for Literary Excellence.
Bibliography
Fiction
12 X 93 — 1993 (with Sonja Skarstedt & Brian Busby)
Winter, Spring, Summer, Fall: Stories — 1995
The Tree of Youth and Other Stories — 2005
Intimacy 101: Rooms & Suites — 2013
And Sometimes They Fly: A Novel — 2013
Fairfield: The Last Sad Stories of G. Brandon Sisnett — 2015
Graphic novels
Attractive Forces — 1997 (with Justin Norman)
Stray Moonbeams — 2002 (with Justin Norman & Brandon Graham)
Great Moves — 2010 (with Geof Isherwood)
Non-fiction
Sand for Snow: A Caribbean-Canadian Chronicle — 2003
External links
Robert Edison Sandiford - (Biography) Writers' Union of Canada
Robert Edison Sandiford
1968 births
Canadian male short story writers
Black Canadian writers
Canadian people of Barbadian descent
Living people
Writers from Montreal
20th-century Canadian short story writers
21st-century Canadian short story writers
20th-century Canadian male writers
21st-century Canadian male writers
|
Richard Charles Uryan Rhys, 9th Baron Dynevor (19 June 1935 – 12 November 2008) was a British peer.
He was educated at Eton and at Magdalene College, Cambridge. In 1959 he married Lucy Catherine King, the only daughter of Sir John Knewstub Maurice Rothenstein CBE. They had one son and three daughters. The marriage was dissolved in 1978. His chief interest lay in The Black Raven Press of which he was a director.
In 1962 Lord Dynevor inherited the remaining holdings of the Llandeilo Estate, comprising 23 farms and 2,000 acres (8 km2), a ruined castle, a deer park with a herd of rare long-horned white cattle, and a substantial death duties bill. The death duties were owed on the estates of both the 7th and 8th Barons.
Attempts were made to save the patrimony but eventually the castle was sold to a private buyer in 1974. The National Trust bought the deer park and the outer park at Dinefwr in 1987. Newton House was purchased by the Trust in 1990 having been through several hands since first sold by Lord Dynevor in 1974. It was in a very poor state of repair. The East Drive was acquired in 1992. The Home Farm was acquired in 2002. Cadw and the National Trust now control the estate of some 700 acres (3 km2).
References
Rees, Thomas; “The Beauties of England and Wales”, 1815. Reprinted in A Carmarthenshire Anthology, edited by Lyn Hughes, Christopher Davies, 1985
1935 births
People educated at Eton College
Alumni of Magdalene College, Cambridge
2008 deaths
|
Events in the year 2020 in Guinea-Bissau.
Incumbents
President: José Mário Vaz (until 27 February); Umaro Sissoco Embaló (from 27 February)
Prime Minister: Aristides Gomes (until 28 February); Nuno Gomes Nabiam (from 28 February)
Events
March
25 March – The country confirmed its first two COVID-19 cases, a Congolese U.N. employee and an Indian citizen.
28 March – A month-long state of emergency with night-time curfew was introduced in the country.
April
26 April – The first COVID-19 death was recorded in the country. The state of emergency was extended until 11 May as a result.
29 April – Prime Minister Nuno Gomes Nabiam, Interior Minister Botche Candé, Secretary of State for Public Order Mario Fambé, and Secretary of State for Regional Integration Monica Buaro da Costa tested positive for COVID-19.
May
1 May – The Minister of Public Health Antonio Deuna tested positive for COVID-19.
June
16 June – Reuters reported that 9% of health care workers in the country had been infected with COVID-19. According to Joana Cortez, a WHO expert in the country, the three main Bissau hospitals were facing rooms filled with COVID-19 patients and a breakdown in essential medical services.
26 June – President Umaro Sissoco Embaló announced a one-month extension of the state of emergency, but lifted the curfew.
Deaths
References
Guinea-Bissau
Years of the 21st century in Guinea-Bissau
2020s in Guinea-Bissau
|
```objective-c
#ifndef QTMATERIALAUTOCOMPLETESTATEMACHINE_H
#define QTMATERIALAUTOCOMPLETESTATEMACHINE_H
#include <QStateMachine>
#include "qtmaterialautocomplete.h"
class QtMaterialAutoCompleteStateMachine : public QStateMachine
{
Q_OBJECT
public:
explicit QtMaterialAutoCompleteStateMachine(QWidget *menu);
~QtMaterialAutoCompleteStateMachine();
signals:
void shouldOpen();
void shouldClose();
void shouldFade();
private:
Q_DISABLE_COPY(QtMaterialAutoCompleteStateMachine)
QWidget *const m_menu;
QState *const m_closedState;
QState *const m_openState;
QState *const m_closingState;
};
#endif // QTMATERIALAUTOCOMPLETESTATEMACHINE_H
```
|
```c++
#include "exportcameratrackpopup.h"
// Tnz6 includes
#include "tapp.h"
#include "mainwindow.h"
#include "filebrowser.h"
#include "menubarcommandids.h"
// TnzQt includes
#include "toonzqt/colorfield.h"
#include "toonzqt/filefield.h"
#include "toonzqt/doublefield.h"
// TnzLib includes
#include "toonz/txsheet.h"
#include "toonz/tcamera.h"
#include "toonz/txshlevel.h"
#include "toonz/txshsimplelevel.h"
#include "toonz/txshcell.h"
#include "toonz/tstageobjecttree.h"
#include "toonz/toonzscene.h"
#include "toonz/txshleveltypes.h"
#include "toonz/dpiscale.h"
#include "toonz/tproject.h"
#include "toonz/txsheethandle.h"
#include "toonz/tscenehandle.h"
#include "filebrowserpopup.h"
// TnzCore includes
#include "trop.h"
#include "tsystem.h"
#include "tenv.h"
#include "tropcm.h"
#include "tpalette.h"
// Qt includes
#include <QLabel>
#include <QPushButton>
#include <QImage>
#include <QPainter>
#include <QPainterPath>
#include <QScrollBar>
#include <QMouseEvent>
#include <QCheckBox>
#include <QComboBox>
#include <QLineEdit>
#include <QFontComboBox>
#include <QHBoxLayout>
#include <QVBoxLayout>
#include <QGridLayout>
#include <QGroupBox>
#include <QRegExpValidator>
#include <QPolygonF>
#include <QVector2D>
#include <QFontMetricsF>
TEnv::DoubleVar CameraTrackExportBgOpacity("CameraTrackExportBgOpacity", 0.5);
TEnv::StringVar CameraTrackExportLineColor(
"CameraTrackExportLineColor", QColor(Qt::red).name().toStdString());
TEnv::IntVar CameraTrackExportCamRectOnKeys("CameraTrackExportCamRectOnKeys",
1);
TEnv::IntVar CameraTrackExportCamRectOnTags("CameraTrackExportCamRectOnTags",
0);
TEnv::IntVar CameraTrackExportLineTL("CameraTrackExportLineTL", 0);
TEnv::IntVar CameraTrackExportLineTR("CameraTrackExportLineTR", 0);
TEnv::IntVar CameraTrackExportLineCenter("CameraTrackExportLineCenter", 1);
TEnv::IntVar CameraTrackExportLineBL("CameraTrackExportLineBL", 0);
TEnv::IntVar CameraTrackExportLineBR("CameraTrackExportLineBR", 0);
TEnv::IntVar CameraTrackExportGraduationInterval(
"CameraTrackExportGraduationInterval", 1);
TEnv::IntVar CameraTrackExportNumberAt("CameraTrackExportNumberAt",
(int)Qt::TopLeftCorner);
TEnv::IntVar CameraTrackExportNumbersOnLine("CameraTrackExportNumbersOnLine",
1);
TEnv::StringVar CameraTrackExportFont("CameraTrackExportFont", "");
TEnv::IntVar CameraTrackExportFontSize("CameraTrackExportFontSize", 30);
namespace {
void getCameraPlacement(TAffine& aff, TXsheet* xsh, double row,
const TStageObjectId& objId,
const TStageObjectId& cameraId) {
TStageObject* pegbar =
xsh->getStageObjectTree()->getStageObject(objId, false);
if (!pegbar) return;
TAffine objAff = pegbar->getPlacement(row);
double objZ = pegbar->getZ(row);
double noScaleZ = pegbar->getGlobalNoScaleZ();
TStageObject* camera = xsh->getStageObject(cameraId);
TAffine cameraAff = camera->getPlacement(row);
double cameraZ = camera->getZ(row);
bool isVisible = TStageObject::perspective(aff, cameraAff, cameraZ, objAff,
objZ, noScaleZ);
aff = aff.inv() * cameraAff;
}
// recursively find key frame
bool isKey(int frame, TStageObject* obj, TXsheet* xsh) {
if (obj->isKeyframe(frame)) return true;
if (obj->getParent() != TStageObjectId::NoneId)
return isKey(frame, xsh->getStageObject(obj->getParent()), xsh);
return false;
}
void drawOutlinedText(QPainter& p, const QPointF& pos, const QString& str) {
QPainterPath path;
path.addText(pos, p.font(), str);
p.setPen(QPen(Qt::white, 3));
p.drawPath(path);
p.setPen(Qt::NoPen);
p.drawPath(path);
}
// {0,1,2,3,6,8,9,10} => "1-4,7,9-11"
QString framesToString(const QList<int>& frames) {
QString frameStr;
bool prevIsHyphen = false;
for (int i = 0; i < frames.size(); i++) {
int f = frames.at(i);
if (i == 0) {
frameStr = QString::number(f + 1);
continue;
}
if (i == frames.size() - 1) {
if (prevIsHyphen)
frameStr += QString::number(f + 1);
else
frameStr += ", " + QString::number(f + 1);
break;
}
if (frames.at(i - 1) == f - 1) {
if (frames.at(i + 1) == f + 1) {
if (prevIsHyphen)
continue;
else {
frameStr += "-";
prevIsHyphen = true;
}
} else {
if (prevIsHyphen) {
frameStr += QString::number(f + 1);
prevIsHyphen = false;
} else {
frameStr += ", " + QString::number(f + 1);
}
}
} else {
frameStr += ", " + QString::number(f + 1);
}
}
return frameStr;
}
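// Usage sketch: with the list {0,1,2,3,6,8,9,10} the function yields
// "1-4, 7, 9-11" -- consecutive runs are collapsed into hyphenated ranges,
// and the 0-based frame indices are shifted to 1-based display numbers.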
} // namespace
//your_sha256_hash-------------
CameraTrackPreviewPane::CameraTrackPreviewPane(QWidget* parent)
: QWidget(parent), m_scaleFactor(1.0) {}
void CameraTrackPreviewPane::paintEvent(QPaintEvent* event) {
QPainter painter(this);
painter.setRenderHint(QPainter::SmoothPixmapTransform, true);
painter.setRenderHint(QPainter::Antialiasing, true);
QSize pmSize((double)m_pixmap.width() * m_scaleFactor,
(double)m_pixmap.height() * m_scaleFactor);
painter.drawPixmap(
0, 0,
m_pixmap.scaled(pmSize, Qt::KeepAspectRatio, Qt::SmoothTransformation));
}
void CameraTrackPreviewPane::setPixmap(QPixmap pm) {
m_pixmap = pm;
resize(pm.size() * m_scaleFactor);
update();
}
void CameraTrackPreviewPane::doZoom(double d_scale) {
m_scaleFactor += d_scale;
if (m_scaleFactor > 1.0)
m_scaleFactor = 1.0;
else if (m_scaleFactor < 0.1)
m_scaleFactor = 0.1;
resize(m_pixmap.size() * m_scaleFactor);
update();
}
void CameraTrackPreviewPane::fitScaleTo(QSize size) {
double tmp_scaleFactor =
std::min((double)size.width() / (double)m_pixmap.width(),
(double)size.height() / (double)m_pixmap.height());
m_scaleFactor = tmp_scaleFactor;
if (m_scaleFactor > 1.0)
m_scaleFactor = 1.0;
else if (m_scaleFactor < 0.1)
m_scaleFactor = 0.1;
resize(m_pixmap.size() * m_scaleFactor);
update();
}
//your_sha256_hash-------------
void CameraTrackPreviewArea::mousePressEvent(QMouseEvent* e) {
m_mousePos = e->pos();
}
void CameraTrackPreviewArea::mouseMoveEvent(QMouseEvent* e) {
QPoint d = m_mousePos - e->pos();
horizontalScrollBar()->setValue(horizontalScrollBar()->value() + d.x());
verticalScrollBar()->setValue(verticalScrollBar()->value() + d.y());
m_mousePos = e->pos();
}
void CameraTrackPreviewArea::contextMenuEvent(QContextMenuEvent* event) {
QMenu* menu = new QMenu(this);
QAction* fitAction = menu->addAction(tr("Fit To Window"));
connect(fitAction, SIGNAL(triggered()), this, SLOT(fitToWindow()));
menu->exec(event->globalPos());
}
void CameraTrackPreviewArea::fitToWindow() {
dynamic_cast<CameraTrackPreviewPane*>(widget())->fitScaleTo(rect().size());
}
void CameraTrackPreviewArea::wheelEvent(QWheelEvent* event) {
int delta = 0;
switch (event->source()) {
case Qt::MouseEventNotSynthesized: {
    delta = event->angleDelta().y();
    break;
  }
case Qt::MouseEventSynthesizedBySystem: {
QPoint numPixels = event->pixelDelta();
QPoint numDegrees = event->angleDelta() / 8;
if (!numPixels.isNull()) {
delta = event->pixelDelta().y();
} else if (!numDegrees.isNull()) {
QPoint numSteps = numDegrees / 15;
delta = numSteps.y();
}
break;
}
default: // Qt::MouseEventSynthesizedByQt,
// Qt::MouseEventSynthesizedByApplication
{
std::cout << "not supported event: Qt::MouseEventSynthesizedByQt, "
"Qt::MouseEventSynthesizedByApplication"
<< std::endl;
break;
}
} // end switch
if (delta == 0) {
event->accept();
return;
}
dynamic_cast<CameraTrackPreviewPane*>(widget())->doZoom((delta > 0) ? 0.1
: -0.1);
event->accept();
}
//********************************************************************************
// ExportCameraTrackPopup implementation
//********************************************************************************
ExportCameraTrackPopup::ExportCameraTrackPopup()
: DVGui::Dialog(TApp::instance()->getMainWindow(), false, false,
"ExportCameraTrack") {
setWindowTitle(tr("Export Camera Track"));
m_previewPane = new CameraTrackPreviewPane(this);
m_previewArea = new CameraTrackPreviewArea(this);
m_targetColumnCombo = new QComboBox(this);
m_bgOpacityField = new DVGui::DoubleField(this);
m_lineColorFld = new DVGui::ColorField(this, false, TPixel32(255, 0, 0));
m_cameraRectOnKeysCB = new QCheckBox(tr("Draw On Keyframes"), this);
m_cameraRectOnTagsCB = new QCheckBox(tr("Draw On Navigation Tags"), this);
m_cameraRectFramesEdit = new QLineEdit(this);
m_lineTL_CB = new QCheckBox(tr("Top Left"), this);
m_lineTR_CB = new QCheckBox(tr("Top Right"), this);
m_lineCenter_CB = new QCheckBox(tr("Center"), this);
m_lineBL_CB = new QCheckBox(tr("Bottom Left"), this);
m_lineBR_CB = new QCheckBox(tr("Bottom Right"), this);
m_graduationIntervalCombo = new QComboBox(this);
m_numberAtCombo = new QComboBox(this);
m_numbersOnLineCB = new QCheckBox(tr("Draw Numbers On Track Line"), this);
m_fontCombo = new QFontComboBox(this);
m_fontSizeEdit = new DVGui::IntLineEdit(this, 30, 5, 300);
QPushButton* exportButton = new QPushButton(tr("Export"), this);
QPushButton* cancelButton = new QPushButton(tr("Cancel"), this);
//----------
m_previewArea->setWidget(m_previewPane);
m_previewArea->setAlignment(Qt::AlignCenter);
m_previewArea->setBackgroundRole(QPalette::Dark);
m_previewArea->setStyleSheet("background-color:gray;");
m_targetColumnCombo->setSizeAdjustPolicy(QComboBox::AdjustToContents);
m_bgOpacityField->setRange(0.0, 1.0);
m_bgOpacityField->setValue(0.5);
m_cameraRectFramesEdit->setValidator(
new QRegExpValidator(QRegExp("^(\\d+,)*\\d+$"), this));
m_cameraRectFramesEdit->setToolTip(
tr("Specify frame numbers where the camera rectangles will be drawn. "
"Separate numbers by comma \",\" ."));
m_numberAtCombo->addItem(tr("Top Left"), (int)Qt::TopLeftCorner);
m_numberAtCombo->addItem(tr("Top Right"), (int)Qt::TopRightCorner);
m_numberAtCombo->addItem(tr("Bottom Left"), (int)Qt::BottomLeftCorner);
m_numberAtCombo->addItem(tr("Bottom Right"), (int)Qt::BottomRightCorner);
m_graduationIntervalCombo->addItem(tr("None"), 0);
m_graduationIntervalCombo->addItem(tr("All frames"), 1);
m_graduationIntervalCombo->addItem(tr("Every 2 frames"), 2);
m_graduationIntervalCombo->addItem(tr("Every 3 frames"), 3);
m_graduationIntervalCombo->addItem(tr("Every 4 frames"), 4);
m_graduationIntervalCombo->addItem(tr("Every 5 frames"), 5);
m_graduationIntervalCombo->addItem(tr("Every 6 frames"), 6);
m_graduationIntervalCombo->addItem(tr("Every 8 frames"), 8);
m_graduationIntervalCombo->addItem(tr("Every 10 frames"), 10);
m_graduationIntervalCombo->addItem(tr("Every 12 frames"), 12);
exportButton->setFocusPolicy(Qt::NoFocus);
cancelButton->setFocusPolicy(Qt::NoFocus);
//----------
QHBoxLayout* mainLay = new QHBoxLayout();
mainLay->setMargin(0);
mainLay->setSpacing(5);
{
mainLay->addWidget(m_previewArea, 1);
QVBoxLayout* rightLay = new QVBoxLayout();
rightLay->setMargin(10);
rightLay->setSpacing(10);
QGridLayout* appearanceLay = new QGridLayout();
appearanceLay->setMargin(0);
appearanceLay->setHorizontalSpacing(5);
appearanceLay->setVerticalSpacing(10);
{
appearanceLay->addWidget(new QLabel(tr("Target Column:"), this), 0, 0,
Qt::AlignRight | Qt::AlignVCenter);
appearanceLay->addWidget(m_targetColumnCombo, 0, 1, Qt::AlignLeft);
appearanceLay->addWidget(new QLabel(tr("Background:"), this), 1, 0,
Qt::AlignRight | Qt::AlignVCenter);
appearanceLay->addWidget(m_bgOpacityField, 1, 1);
appearanceLay->addWidget(new QLabel(tr("Line Color:"), this), 2, 0,
Qt::AlignRight | Qt::AlignVCenter);
appearanceLay->addWidget(m_lineColorFld, 2, 1, Qt::AlignLeft);
}
appearanceLay->setColumnStretch(1, 1);
rightLay->addLayout(appearanceLay, 0);
QGroupBox* cameraRectGB = new QGroupBox(tr("Camera Rectangles"), this);
QGridLayout* cameraRectLay = new QGridLayout();
cameraRectLay->setMargin(10);
cameraRectLay->setHorizontalSpacing(5);
cameraRectLay->setVerticalSpacing(10);
{
cameraRectLay->addWidget(m_cameraRectOnKeysCB, 0, 0, 1, 2, Qt::AlignLeft);
cameraRectLay->addWidget(m_cameraRectOnTagsCB, 1, 0, 1, 2, Qt::AlignLeft);
cameraRectLay->addWidget(new QLabel(tr("Specify Frames Manually:"), this),
2, 0, Qt::AlignRight | Qt::AlignVCenter);
cameraRectLay->addWidget(m_cameraRectFramesEdit, 2, 1);
}
cameraRectLay->setColumnStretch(1, 1);
cameraRectGB->setLayout(cameraRectLay);
rightLay->addWidget(cameraRectGB, 0);
QGroupBox* trackLineGB = new QGroupBox(tr("Track Lines"), this);
QGridLayout* trackLineLay = new QGridLayout();
trackLineLay->setMargin(10);
trackLineLay->setHorizontalSpacing(10);
trackLineLay->setVerticalSpacing(10);
{
trackLineLay->addWidget(m_lineTL_CB, 0, 0);
trackLineLay->addWidget(m_lineTR_CB, 0, 1);
trackLineLay->addWidget(m_lineCenter_CB, 1, 0, Qt::AlignRight);
trackLineLay->addWidget(m_lineBL_CB, 2, 0);
trackLineLay->addWidget(m_lineBR_CB, 2, 1);
trackLineLay->addWidget(
new QLabel(tr("Graduation Marks Interval:"), this), 3, 0,
Qt::AlignRight | Qt::AlignVCenter);
trackLineLay->addWidget(m_graduationIntervalCombo, 3, 1, Qt::AlignLeft);
}
trackLineLay->setColumnStretch(1, 1);
trackLineGB->setLayout(trackLineLay);
rightLay->addWidget(trackLineGB, 0);
QGroupBox* frameNumberGB = new QGroupBox(tr("Frame Numbers"), this);
QGridLayout* frameNumberLay = new QGridLayout();
frameNumberLay->setMargin(10);
frameNumberLay->setHorizontalSpacing(5);
frameNumberLay->setVerticalSpacing(10);
{
frameNumberLay->addWidget(new QLabel(tr("Camera Rect Corner:"), this), 0,
0, Qt::AlignRight | Qt::AlignVCenter);
frameNumberLay->addWidget(m_numberAtCombo, 0, 1, Qt::AlignLeft);
frameNumberLay->addWidget(m_numbersOnLineCB, 1, 0, 1, 2);
frameNumberLay->addWidget(new QLabel(tr("Font Family:"), this), 2, 0,
Qt::AlignRight | Qt::AlignVCenter);
frameNumberLay->addWidget(m_fontCombo, 2, 1, Qt::AlignLeft);
frameNumberLay->addWidget(new QLabel(tr("Font Size:"), this), 3, 0,
Qt::AlignRight | Qt::AlignVCenter);
frameNumberLay->addWidget(m_fontSizeEdit, 3, 1, Qt::AlignLeft);
}
frameNumberLay->setColumnStretch(1, 1);
frameNumberGB->setLayout(frameNumberLay);
rightLay->addWidget(frameNumberGB, 0);
rightLay->addStretch(1);
QHBoxLayout* buttonsLay = new QHBoxLayout();
buttonsLay->setMargin(5);
buttonsLay->setSpacing(20);
{
buttonsLay->addWidget(exportButton, 0);
buttonsLay->addWidget(cancelButton, 0);
}
rightLay->setAlignment(Qt::AlignCenter);
rightLay->addLayout(buttonsLay, 0);
mainLay->addLayout(rightLay, 0);
}
m_topLayout->addLayout(mainLay, 1);
//----------
loadSettings();
connect(m_targetColumnCombo, SIGNAL(activated(int)), this,
SLOT(updatePreview()));
connect(m_bgOpacityField, SIGNAL(valueEditedByHand()), this,
SLOT(updatePreview()));
connect(m_lineColorFld, SIGNAL(colorChanged(const TPixel32&, bool)), this,
SLOT(updatePreview()));
connect(m_cameraRectOnKeysCB, SIGNAL(clicked(bool)), this,
SLOT(updatePreview()));
connect(m_cameraRectOnTagsCB, SIGNAL(clicked(bool)), this,
SLOT(updatePreview()));
connect(m_cameraRectFramesEdit, SIGNAL(editingFinished()), this,
SLOT(updatePreview()));
connect(m_lineTL_CB, SIGNAL(clicked(bool)), this, SLOT(updatePreview()));
connect(m_lineTR_CB, SIGNAL(clicked(bool)), this, SLOT(updatePreview()));
connect(m_lineCenter_CB, SIGNAL(clicked(bool)), this, SLOT(updatePreview()));
connect(m_lineBL_CB, SIGNAL(clicked(bool)), this, SLOT(updatePreview()));
connect(m_lineBR_CB, SIGNAL(clicked(bool)), this, SLOT(updatePreview()));
connect(m_graduationIntervalCombo, SIGNAL(activated(int)), this,
SLOT(updatePreview()));
connect(m_numberAtCombo, SIGNAL(activated(int)), this, SLOT(updatePreview()));
connect(m_numbersOnLineCB, SIGNAL(clicked(bool)), this,
SLOT(updatePreview()));
connect(m_fontCombo, SIGNAL(currentFontChanged(const QFont&)), this,
SLOT(updatePreview()));
connect(m_fontSizeEdit, SIGNAL(editingFinished()), this,
SLOT(updatePreview()));
connect(cancelButton, SIGNAL(clicked()), this, SLOT(close()));
connect(exportButton, SIGNAL(clicked()), this, SLOT(onExport()));
}
//--------------------------------------------------------------
// register settings to the user env file on close
void ExportCameraTrackPopup::saveSettings() {
CameraTrackExportBgOpacity = m_bgOpacityField->getValue();
TPixel32 col = m_lineColorFld->getColor();
CameraTrackExportLineColor = QColor(col.r, col.g, col.b).name().toStdString();
CameraTrackExportCamRectOnKeys = (m_cameraRectOnKeysCB->isChecked()) ? 1 : 0;
CameraTrackExportCamRectOnTags = (m_cameraRectOnTagsCB->isChecked()) ? 1 : 0;
CameraTrackExportLineTL = (m_lineTL_CB->isChecked()) ? 1 : 0;
CameraTrackExportLineTR = (m_lineTR_CB->isChecked()) ? 1 : 0;
CameraTrackExportLineCenter = (m_lineCenter_CB->isChecked()) ? 1 : 0;
CameraTrackExportLineBL = (m_lineBL_CB->isChecked()) ? 1 : 0;
CameraTrackExportLineBR = (m_lineBR_CB->isChecked()) ? 1 : 0;
CameraTrackExportGraduationInterval =
m_graduationIntervalCombo->currentData().toInt();
CameraTrackExportNumberAt = m_numberAtCombo->currentData().toInt();
CameraTrackExportNumbersOnLine = (m_numbersOnLineCB->isChecked()) ? 1 : 0;
CameraTrackExportFont = m_fontCombo->currentFont().family().toStdString();
CameraTrackExportFontSize = m_fontSizeEdit->getValue();
}
//--------------------------------------------------------------
// load settings from the user env file on ctor
void ExportCameraTrackPopup::loadSettings() {
m_bgOpacityField->setValue(CameraTrackExportBgOpacity);
QColor lineColor(QString::fromStdString(CameraTrackExportLineColor));
m_lineColorFld->setColor(
TPixel32(lineColor.red(), lineColor.green(), lineColor.blue()));
m_cameraRectOnKeysCB->setChecked(CameraTrackExportCamRectOnKeys != 0);
m_cameraRectOnTagsCB->setChecked(CameraTrackExportCamRectOnTags != 0);
m_lineTL_CB->setChecked(CameraTrackExportLineTL != 0);
m_lineTR_CB->setChecked(CameraTrackExportLineTR != 0);
m_lineCenter_CB->setChecked(CameraTrackExportLineCenter != 0);
m_lineBL_CB->setChecked(CameraTrackExportLineBL != 0);
m_lineBR_CB->setChecked(CameraTrackExportLineBR != 0);
m_graduationIntervalCombo->setCurrentIndex(
m_graduationIntervalCombo->findData(
(int)CameraTrackExportGraduationInterval));
m_numberAtCombo->setCurrentIndex(
m_numberAtCombo->findData((int)CameraTrackExportNumberAt));
m_numbersOnLineCB->setChecked(CameraTrackExportNumbersOnLine != 0);
QString tmplFont = QString::fromStdString(CameraTrackExportFont);
if (!tmplFont.isEmpty()) m_fontCombo->setCurrentFont(QFont(tmplFont));
m_fontSizeEdit->setValue(CameraTrackExportFontSize);
}
//--------------------------------------------------------------
void ExportCameraTrackPopup::initialize() {
updateTargetColumnComboItems();
updatePreview();
}
//--------------------------------------------------------------
void ExportCameraTrackPopup::updateTargetColumnComboItems() {
m_targetColumnCombo->clear();
ToonzScene* scene = TApp::instance()->getCurrentScene()->getScene();
TXsheet* xsh = TApp::instance()->getCurrentXsheet()->getXsheet();
for (int col = 0; col < xsh->getColumnCount(); col++) {
TXshLevelP level;
int r0, r1;
xsh->getCellRange(col, r0, r1);
if (r1 < 0) continue;
for (int r = r0; r <= r1; r++)
if (level = xsh->getCell(r, col).m_level) {
break;
}
if (!level) continue;
int type = level->getType();
if (!(type & RASTER_TYPE)) continue;
TXshSimpleLevelP sl = level->getSimpleLevel();
if (!sl) continue;
QString itemName = tr("Col %1 (%2)")
.arg(col + 1)
.arg(QString::fromStdWString(sl->getName()));
m_targetColumnCombo->addItem(itemName, col);
}
}
//--------------------------------------------------------------
QImage ExportCameraTrackPopup::generateCameraTrackImg(
const ExportCameraTrackInfo& info, bool isPreview) {
ToonzScene* scene = TApp::instance()->getCurrentScene()->getScene();
TXsheet* xsh = TApp::instance()->getCurrentXsheet()->getXsheet();
// obtain target level
TXshLevelP level;
TFrameId fId;
int r0, r1;
xsh->getCellRange(info.columnId, r0, r1);
if (r1 < 0) return QImage();
for (int r = r0; r <= r1; r++)
if (level = xsh->getCell(r, info.columnId).m_level) {
fId = xsh->getCell(r, info.columnId).getFrameId();
break;
}
if (!level) return QImage();
int type = level->getType();
if (!(type & RASTER_TYPE)) return QImage();
TXshSimpleLevelP sl = level->getSimpleLevel();
if (!sl) return QImage();
// construct output image
TStageObjectId cameraId = xsh->getStageObjectTree()->getCurrentCameraId();
TCamera* camera = xsh->getStageObject(cameraId)->getCamera();
TDimension camDim = camera->getRes();
TDimension imgRes = sl->getResolution();
TImageP tImg = sl->getFullsampledFrame(fId, ImageManager::dontPutInCache);
TRaster32P imgRas(tImg->raster()->getSize());
TRaster32P src32 = tImg->raster();
TRasterCM32P srcCM = tImg->raster();
if (src32)
imgRas = src32;
else if (srcCM)
TRop::convert(imgRas, srcCM, tImg->getPalette(), TRect());
else
TRop::convert(imgRas, tImg->raster());
QImage colImg(imgRas->getRawData(), imgRes.lx, imgRes.ly,
QImage::Format_ARGB32_Premultiplied);
QImage img = colImg.mirrored(false, true);
TPointD imgDpi = sl->getImageDpi();
if (imgDpi != TPointD()) {
img.setDotsPerMeterX((int)std::round(imgDpi.x / 0.0254));
img.setDotsPerMeterY((int)std::round(imgDpi.y / 0.0254));
}
// draw
enum CornerId {
TopLeft = Qt::TopLeftCorner,
TopRight = Qt::TopRightCorner,
BottomLeft = Qt::BottomLeftCorner,
BottomRight = Qt::BottomRightCorner,
Center = Qt::BottomRightCorner + 1
};
QMap<int, QPainterPath> trackPaths; // [ CornerId, Path data ]
QList<QMap<int, QPointF>>
cornerPointsTrack; // [ CornerId, Position ] for each frame
if (info.lineTL) trackPaths.insert((int)TopLeft, QPainterPath());
if (info.lineTR) trackPaths.insert((int)TopRight, QPainterPath());
if (info.lineCenter) trackPaths.insert((int)Center, QPainterPath());
if (info.lineBL) trackPaths.insert((int)BottomLeft, QPainterPath());
if (info.lineBR) trackPaths.insert((int)BottomRight, QPainterPath());
TAffine aff;
TAffine dpiAffInv = getDpiAffine(sl.getPointer(), fId, true).inv();
TAffine camDpiAff = getDpiAffine(camera);
TStageObjectId colId = TStageObjectId::ColumnId(info.columnId);
QMap<int, TPointD> camCorners = {
{(int)TopLeft, TPointD(-camDim.lx / 2, camDim.ly / 2)},
{(int)TopRight, TPointD(camDim.lx / 2, camDim.ly / 2)},
{(int)BottomLeft, TPointD(-camDim.lx / 2, -camDim.ly / 2)},
{(int)BottomRight, TPointD(camDim.lx / 2, -camDim.ly / 2)},
{(int)Center, TPointD()}};
for (int f = 0; f < scene->getFrameCount(); f++) {
getCameraPlacement(aff, xsh, (double)f, colId, cameraId);
TAffine affTmp = dpiAffInv * aff * camDpiAff;
// corner points
QMap<int, QPointF> cornerPoints;
for (int c = TopLeft; c <= Center; c++) {
TPointD p = affTmp * camCorners[c];
cornerPoints.insert(c, QPointF(p.x, -p.y));
}
cornerPointsTrack.append(cornerPoints);
if (trackPaths.isEmpty()) continue;
// track paths will plot every 0.1 frames
for (int df = 0; df < 10; df++) {
double tmpF = (double)f + (double)df * 0.1;
getCameraPlacement(aff, xsh, (double)tmpF, colId, cameraId);
affTmp = dpiAffInv * aff * camDpiAff;
for (int c = TopLeft; c <= Center; c++) {
if (!trackPaths.contains(c)) continue;
TPointD p = affTmp * camCorners[c];
if (f == 0 && df == 0)
trackPaths[c].moveTo(QPointF(p.x, -p.y));
else
trackPaths[c].lineTo(QPointF(p.x, -p.y));
}
if (f == scene->getFrameCount() - 1) break;
}
}
QPainter p(&img);
p.setRenderHints(QPainter::Antialiasing | QPainter::TextAntialiasing);
p.setBrush(QColor(255, 255, 255, 255. * (1. - info.bgOpacity)));
p.setPen(Qt::NoPen);
p.drawRect(0, 0, imgRes.lx, imgRes.ly);
p.translate(imgRes.lx / 2.0, imgRes.ly / 2.0);
// camera rect
QSet<int> rectFrames = info.cameraRectFrames;
if (info.cameraRectOnKeys || info.cameraRectOnTags) {
// check keyframes
for (int f = 0; f < scene->getFrameCount(); f++) {
if (rectFrames.contains(f)) continue;
if (info.cameraRectOnKeys &&
(isKey(f, xsh->getStageObject(cameraId), xsh) ||
isKey(f, xsh->getStageObject(colId), xsh)))
rectFrames.insert(f);
else if (info.cameraRectOnTags && xsh->isFrameTagged(f))
rectFrames.insert(f);
}
}
struct cameraRectData {
QPolygonF polygon;
QVector<QPointF> centerCrossPoints;
QPointF textPos;
QVector2D offsetVec;
QList<int> frames;
};
QList<cameraRectData>
camRectDataList; // gather frames with the same camera rectangle
for (auto rectFrame : rectFrames) {
if (rectFrame < 0 || cornerPointsTrack.size() <= rectFrame) continue;
QMap<int, QPointF> cornerPoints = cornerPointsTrack.at(rectFrame);
QPolygonF camRectPolygon({cornerPoints[TopLeft], cornerPoints[TopRight],
cornerPoints[BottomRight],
cornerPoints[BottomLeft]});
QVector<QPointF> centerCrossPoints = {
cornerPoints[TopLeft] * 0.51 + cornerPoints[BottomRight] * 0.49,
cornerPoints[TopLeft] * 0.49 + cornerPoints[BottomRight] * 0.51,
cornerPoints[TopRight] * 0.51 + cornerPoints[BottomLeft] * 0.49,
cornerPoints[TopRight] * 0.49 + cornerPoints[BottomLeft] * 0.51};
QPointF textPos = cornerPoints[(int)info.numberAt];
int oppositeId;
switch ((int)info.numberAt) {
case TopLeft:
oppositeId = TopRight;
break;
case TopRight:
oppositeId = TopLeft;
break;
case BottomLeft:
oppositeId = BottomRight;
break;
case BottomRight:
oppositeId = BottomLeft;
break;
}
QVector2D offsetVec =
QVector2D(textPos - cornerPoints[oppositeId]).normalized();
bool found = false;
for (int i = 0; i < camRectDataList.size(); i++)
if (camRectDataList[i].polygon == camRectPolygon) {
found = true;
camRectDataList[i].frames.append(rectFrame);
break;
}
if (!found)
camRectDataList.append(
{camRectPolygon, centerCrossPoints, textPos, offsetVec, {rectFrame}});
}
p.setFont(info.font);
for (auto& data : camRectDataList) {
p.setPen(QPen(info.lineColor, 2));
p.setBrush(Qt::NoBrush);
p.drawPolygon(data.polygon);
// draw cross mark at center of the frame
bool drawCross = true;
// if the graduation mark is at the camera center, do not draw the cross
if (info.lineCenter && info.graduationInterval > 0) {
for (auto frame : data.frames)
if (frame % info.graduationInterval == 0) {
drawCross = false;
break;
}
}
if (drawCross) {
p.setPen(QPen(info.lineColor, 1));
p.drawLines(data.centerCrossPoints);
}
// generate frame number string
std::sort(data.frames.begin(), data.frames.end());
QString frameStr = framesToString(data.frames);
// draw frame number string
QFontMetricsF fm(info.font);
QRectF textRect = fm.boundingRect(frameStr).adjusted(-5, 0, 5, 0);
QPointF textPos = data.textPos + QPointF(5, 0);
textRect.translate(textPos);
while (data.polygon.intersects(textRect)) {
textRect.translate(data.offsetVec.toPointF());
textPos += QPointF(data.offsetVec.toPointF());
}
p.setBrush(info.lineColor);
drawOutlinedText(p, textPos, frameStr);
}
QFont smallFont(info.font);
smallFont.setPixelSize(info.font.pixelSize() * 2 / 3);
p.setFont(smallFont);
// track lines
QMap<int, QPainterPath>::const_iterator itr = trackPaths.constBegin();
while (itr != trackPaths.constEnd()) {
if (info.lineCenter && itr.key() != Center)
p.setPen(QPen(info.lineColor, 1, Qt::DashLine));
else
p.setPen(QPen(info.lineColor, 1));
p.setBrush(Qt::NoBrush);
p.drawPath(itr.value());
if (info.graduationInterval == 0 ||
(info.lineCenter && itr.key() != Center)) {
++itr;
continue;
}
// draw graduation
QList<QPair<QPointF, QList<int>>> graduations;
for (int f = 0; f < scene->getFrameCount(); f++) {
if (f % info.graduationInterval != 0) continue;
QPointF gPos = cornerPointsTrack[f].value(itr.key());
QPointF prev =
(f == 0) ? gPos : cornerPointsTrack[f - 1].value(itr.key());
QPointF next = (f == scene->getFrameCount() - 1)
? gPos
: cornerPointsTrack[f + 1].value(itr.key());
QPointF graduationVec =
QVector2D(next - prev).normalized().toPointF() * 5.0;
graduationVec = QPointF(-graduationVec.y(), graduationVec.x());
p.drawLine(gPos - graduationVec, gPos + graduationVec);
// draw frame
if (!info.numbersOnLine) continue;
if (itr.key() != Center && rectFrames.contains(f)) continue;
bool found = false;
for (auto& g : graduations) {
if (g.first == gPos) {
found = true;
g.second.append(f);
break;
}
}
if (!found) graduations.append(QPair<QPointF, QList<int>>(gPos, {f}));
}
for (auto& g : graduations) {
std::sort(g.second.begin(), g.second.end());
QString frameStr = framesToString(g.second);
QFontMetricsF fm(smallFont);
QRectF textRect = fm.boundingRect(frameStr).adjusted(-5, 0, 5, 0);
QPointF pos = g.first + QPointF(5, 0);
if (info.numberAt == Qt::TopLeftCorner ||
info.numberAt == Qt::BottomLeftCorner)
pos += QPointF(-textRect.width(), 0);
p.setBrush(info.lineColor);
drawOutlinedText(p, pos, frameStr);
}
++itr;
}
return img;
}
//--------------------------------------------------------------
void ExportCameraTrackPopup::getInfoFromUI(ExportCameraTrackInfo& info) {
// target column
if (m_targetColumnCombo->count() == 0) return;
info.columnId = m_targetColumnCombo->currentData().toInt();
  // appearance settings
info.bgOpacity = m_bgOpacityField->getValue();
TPixel32 lineTCol = m_lineColorFld->getColor();
info.lineColor = QColor((int)lineTCol.r, (int)lineTCol.g, (int)lineTCol.b);
// camera rect settings
info.cameraRectOnKeys = m_cameraRectOnKeysCB->isChecked();
info.cameraRectOnTags = m_cameraRectOnTagsCB->isChecked();
#if QT_VERSION >= QT_VERSION_CHECK(5, 14, 0)
QStringList framesStrList =
m_cameraRectFramesEdit->text().split(",", Qt::SkipEmptyParts);
#else
QStringList framesStrList =
m_cameraRectFramesEdit->text().split(",", QString::SkipEmptyParts);
#endif
for (auto fStr : framesStrList) {
bool ok;
int f = fStr.toInt(&ok);
if (ok) info.cameraRectFrames.insert(f - 1);
}
// track line settings
info.lineTL = m_lineTL_CB->isChecked();
info.lineTR = m_lineTR_CB->isChecked();
info.lineCenter = m_lineCenter_CB->isChecked();
info.lineBL = m_lineBL_CB->isChecked();
info.lineBR = m_lineBR_CB->isChecked();
info.graduationInterval = m_graduationIntervalCombo->currentData().toInt();
// frame number settings
info.numberAt = (Qt::Corner)(m_numberAtCombo->currentData().toInt());
  info.numbersOnLine = m_numbersOnLineCB->isChecked();
info.font = m_fontCombo->currentFont();
info.font.setPixelSize(m_fontSizeEdit->getValue());
}
//--------------------------------------------------------------
void ExportCameraTrackPopup::updatePreview() {
ExportCameraTrackInfo info;
getInfoFromUI(info);
if (info.columnId == -1) return;
QImage img = generateCameraTrackImg(info, true);
m_previewPane->setPixmap(QPixmap::fromImage(img));
}
//--------------------------------------------------------------
void ExportCameraTrackPopup::onExport() {
ExportCameraTrackInfo info;
getInfoFromUI(info);
if (info.columnId == -1) return;
QImage img = generateCameraTrackImg(info, false);
ToonzScene* scene = TApp::instance()->getCurrentScene()->getScene();
static GenericSaveFilePopup* savePopup = 0;
if (!savePopup) {
savePopup =
new GenericSaveFilePopup(QObject::tr("Export Camera Track Image"));
savePopup->setFilterTypes({"jpg", "jpeg", "bmp", "png", "tif"});
}
if (!scene->isUntitled())
savePopup->setFolder(scene->getScenePath().getParentDir());
else
savePopup->setFolder(
TProjectManager::instance()->getCurrentProject()->getScenesPath());
TXsheet* xsh = TApp::instance()->getCurrentXsheet()->getXsheet();
TStageObjectId cameraId = xsh->getStageObjectTree()->getCurrentCameraId();
std::string cameraName = xsh->getStageObject(cameraId)->getName();
savePopup->setFilename(TFilePath(cameraName + ".tif"));
TFilePath fp = savePopup->getPath();
if (fp.isEmpty()) return;
std::string type = fp.getType();
if (type == "")
fp = fp.withType("tif");
else if (type != "jpg" && type != "jpeg" && type != "bmp" && type != "png" &&
type != "tif") {
    DVGui::MsgBoxInPopup(DVGui::WARNING,
                         tr("Please specify one of the following file formats: "
                            "jpg, jpeg, bmp, png, or tif"));
return;
}
img.save(fp.getQString());
}
//********************************************************************************
// Export Camera Track Command instantiation
//********************************************************************************
OpenPopupCommandHandler<ExportCameraTrackPopup> ExportCameraTrackPopupCommand(
MI_ExportCameraTrack);
```
|
```shell
#!/usr/bin/env sh
./build/tools/caffe train \
--solver=models/bvlc_reference_caffenet/solver.prototxt \
--snapshot=models/bvlc_reference_caffenet/caffenet_train_10000.solverstate.h5
```
|
Fordice is a surname. Notable people with the surname include:
Kirk Fordice (1934–2004), American politician and businessman
Pat Fordice (1934–2007), First Lady of Mississippi, wife of Kirk
|
```c
/*
*
* in the file LICENSE in the source distribution or at
* path_to_url
*/
#ifndef OSSL_INTERNAL_THREAD_ARCH_H
# define OSSL_INTERNAL_THREAD_ARCH_H
# include <openssl/configuration.h>
# include <openssl/e_os2.h>
# include "internal/time.h"
# if defined(_WIN32)
# include <windows.h>
# endif
# if defined(OPENSSL_THREADS) && defined(OPENSSL_SYS_UNIX)
# define OPENSSL_THREADS_POSIX
# elif defined(OPENSSL_THREADS) && defined(OPENSSL_SYS_VMS)
# define OPENSSL_THREADS_POSIX
# elif defined(OPENSSL_THREADS) && defined(OPENSSL_SYS_WINDOWS) && \
defined(_WIN32_WINNT)
# if _WIN32_WINNT >= 0x0600
# define OPENSSL_THREADS_WINNT
# elif _WIN32_WINNT >= 0x0501
# define OPENSSL_THREADS_WINNT
# define OPENSSL_THREADS_WINNT_LEGACY
# else
# define OPENSSL_THREADS_NONE
# endif
# else
# define OPENSSL_THREADS_NONE
# endif
# include <openssl/crypto.h>
typedef struct crypto_mutex_st CRYPTO_MUTEX;
typedef struct crypto_condvar_st CRYPTO_CONDVAR;
CRYPTO_MUTEX *ossl_crypto_mutex_new(void);
void ossl_crypto_mutex_lock(CRYPTO_MUTEX *mutex);
int ossl_crypto_mutex_try_lock(CRYPTO_MUTEX *mutex);
void ossl_crypto_mutex_unlock(CRYPTO_MUTEX *mutex);
void ossl_crypto_mutex_free(CRYPTO_MUTEX **mutex);
CRYPTO_CONDVAR *ossl_crypto_condvar_new(void);
void ossl_crypto_condvar_wait(CRYPTO_CONDVAR *cv, CRYPTO_MUTEX *mutex);
void ossl_crypto_condvar_wait_timeout(CRYPTO_CONDVAR *cv, CRYPTO_MUTEX *mutex,
OSSL_TIME deadline);
void ossl_crypto_condvar_broadcast(CRYPTO_CONDVAR *cv);
void ossl_crypto_condvar_signal(CRYPTO_CONDVAR *cv);
void ossl_crypto_condvar_free(CRYPTO_CONDVAR **cv);
typedef uint32_t CRYPTO_THREAD_RETVAL;
typedef CRYPTO_THREAD_RETVAL (*CRYPTO_THREAD_ROUTINE)(void *);
typedef CRYPTO_THREAD_RETVAL (*CRYPTO_THREAD_ROUTINE_CB)(void *,
void (**)(void *),
void **);
# define CRYPTO_THREAD_NO_STATE 0UL
# define CRYPTO_THREAD_FINISHED (1UL << 0)
# define CRYPTO_THREAD_JOIN_AWAIT (1UL << 1)
# define CRYPTO_THREAD_JOINED (1UL << 2)
# define CRYPTO_THREAD_GET_STATE(THREAD, FLAG) ((THREAD)->state & (FLAG))
# define CRYPTO_THREAD_GET_ERROR(THREAD, FLAG) (((THREAD)->state >> 16) & (FLAG))
typedef struct crypto_thread_st {
uint32_t state;
void *data;
CRYPTO_THREAD_ROUTINE routine;
CRYPTO_THREAD_RETVAL retval;
void *handle;
CRYPTO_MUTEX *lock;
CRYPTO_MUTEX *statelock;
CRYPTO_CONDVAR *condvar;
unsigned long thread_id;
int joinable;
OSSL_LIB_CTX *ctx;
} CRYPTO_THREAD;
# if defined(OPENSSL_THREADS)
# define CRYPTO_THREAD_UNSET_STATE(THREAD, FLAG) \
do { \
(THREAD)->state &= ~(FLAG); \
} while ((void)0, 0)
# define CRYPTO_THREAD_SET_STATE(THREAD, FLAG) \
do { \
(THREAD)->state |= (FLAG); \
} while ((void)0, 0)
# define CRYPTO_THREAD_SET_ERROR(THREAD, FLAG) \
do { \
(THREAD)->state |= ((FLAG) << 16); \
} while ((void)0, 0)
# define CRYPTO_THREAD_UNSET_ERROR(THREAD, FLAG) \
do { \
(THREAD)->state &= ~((FLAG) << 16); \
} while ((void)0, 0)
# else
# define CRYPTO_THREAD_UNSET_STATE(THREAD, FLAG)
# define CRYPTO_THREAD_SET_STATE(THREAD, FLAG)
# define CRYPTO_THREAD_SET_ERROR(THREAD, FLAG)
# define CRYPTO_THREAD_UNSET_ERROR(THREAD, FLAG)
# endif /* defined(OPENSSL_THREADS) */
CRYPTO_THREAD * ossl_crypto_thread_native_start(CRYPTO_THREAD_ROUTINE routine,
void *data, int joinable);
int ossl_crypto_thread_native_spawn(CRYPTO_THREAD *thread);
int ossl_crypto_thread_native_join(CRYPTO_THREAD *thread,
CRYPTO_THREAD_RETVAL *retval);
int ossl_crypto_thread_native_perform_join(CRYPTO_THREAD *thread,
CRYPTO_THREAD_RETVAL *retval);
int ossl_crypto_thread_native_exit(void);
int ossl_crypto_thread_native_is_self(CRYPTO_THREAD *thread);
int ossl_crypto_thread_native_clean(CRYPTO_THREAD *thread);
#endif /* OSSL_INTERNAL_THREAD_ARCH_H */
```
|
```typescript
import { Column, CreateDateColumn } from "../../../../src"
export class Comment {
@CreateDateColumn()
createdAt: Date
@Column()
savedBy: string
}
```
|
Thank God for Girls may refer to:
Thank God for Girls (album), by Benny Mardones in 1978
"Thank God for Girls" (song), by Weezer in 2015
|
St Paul's Girls' School is a private day school for girls, aged 11 to 18, located in Brook Green, Hammersmith, in West London, England.
History
St Paul's Girls' School was founded by the Worshipful Company of Mercers in 1904, using part of the endowment of the foundation set up by John Colet, to create a girls' school to complement the boys' school he had founded in the sixteenth century. The governors hold proprietorial responsibility, and some are representatives of the Universities of Oxford, Cambridge and London.
The buildings for the school were designed by the architect Gerald Horsley, son of the painter John Callcott Horsley and one of the founder members of the Art Workers Guild.
The school has had several distinguished directors of music, most notably Gustav Holst (1905–34) and Herbert Howells (1936–62). Holst composed his St Paul's and Brook Green suites for the pupils at the school. Holst also composed what is arguably his best known work, "The Planets", while teaching at St Paul's. John Linton Gardner held a part-time position as director of music at the school.
Exam results
St Paul's girls regularly perform extremely well in GCSEs and A Levels. In 2014, 99.3% of GCSE entries were graded A* or A, with 93.6% graded A* alone. This was the highest A* percentage ever achieved by the school, and the highest in the country. In 2016, the school achieved the highest A Level results in its history, with 60.0% of entries achieving an A* grade and 93.8% of entries achieving A* or A grades.
In the 2020 GCSE and IGCSE results, students were awarded the higher of their centre-assessed grade and the statistically adjusted calculated grade. 86% of entries were awarded a 9 grade (1% point higher than the 2019 outcome) and 97.9% of entries gained an 8 or 9 (which are equivalent to the old A* grade).
In the 2020 A level and Pre-U results, 64.6% of entries attained an A* grade at A level or the Pre-U equivalent D1 or D2, while 92.4% of entries achieved an A* or A grade and 98.4% a B grade or higher (or the Pre-U equivalent).
Music
Gustav Holst was director of music at the school from 1905 until his death in 1934, a period during which he composed his orchestral suites, including the St Paul's Suite and The Planets. He was succeeded by Herbert Howells, with John Gardner following in the 1960s. Gardner wrote many memorable pieces for the school, including his popular Christmas carols Tomorrow Shall Be My Dancing Day and The Holly and the Ivy. Hilary Davan Wetton was director of music from 1979 to 1994. In 1988 a CD of children's favourite songs was released on the Spectrum Records label.
Drama
The school's main theatre, where most school productions are staged, is named after alumna Celia Johnson. Other productions are staged in the drama studio which is a smaller space.
Bursaries and scholarships
Bursaries
The school awards means-tested bursaries to students who join in Y7 and for students arriving in Y12. Bursaries fund up to 100% of tuition fees on a sliding scale depending on family income and assets, plus exam entry fees and a grant towards textbooks. Holders of 100% bursaries entering in Y12 also receive an extra package to cover additional expenses, such as the cost of sports equipment and music tuition.
Scholarships
Year 7: The school awards up to four academic scholarships and, usually, about three or four music scholarships to 11+ entrants (worth £100 a year; the music scholarship also includes free tuition in two instruments).
Year 12: The school may also award music scholarships to current students and to new joiners (worth free tuition in two instruments), and two art scholarships (worth £250 a year) to internal and external candidates. The Nora Day music scholarship (worth up to 50% of school fees plus free tuition in two instruments) is awarded every other year to a new joiner who shows exceptional musical potential. The school also awards scholarships worth £250 a year for academic distinction in the "Senior Scholarship", a dissertation written by students in the summer holiday following Y12.
Sport
Rowing
The school has an active rowing club called the St Paul's Girls' School Boat Club which is based on the River Thames. The club is affiliated to British Rowing (boat code SPG) and has produced four British champion crews at the 1992 British Rowing Championships, 2002 British Rowing Championships, 2003 British Rowing Championships and 2011 British Rowing Championships.
High Mistresses
The headmistress of St Paul's Girls' School is known as the High Mistress.
Frances Ralph Grey (d.1935), High Mistress 1903–1927
Ethel Strudwick (1880–1954), High Mistress 1927–1948, daughter of the Pre-Raphaelite painter John Melhuish Strudwick
Margaret Osborn (1906–1985), High Mistress 1948–1963
Alison Munro (1914–2008), High Mistress 1964–1974
Heather Brigstocke, Baroness Brigstocke (1929–2004), High Mistress 1974–1989
Helen Elizabeth Webber Williams (born 1938), High Mistress 1989–1992
Janet Gough (born 1940), High Mistress 1993–1998
Elizabeth Mary Diggory (1945–2007), High Mistress 1998–2006
Clarissa Mary Farr (born 1958), High Mistress 2006–2017
Sarah Fletcher, High Mistress 2017–present
Old Paulinas
Alumnae of the school, known as "Old Paulinas", include:
Arts
Gillian Ayres – artist
Mischa Barton – actress
Nicola Beauman – publisher, founder of Persephone Books
Helen Binyon – artist
Lesley Blanch – author
Justin Blanco White – architect
Celia Brayfield – author
Sophie Hunter – theatre and opera director
Brigid Brophy – dramatist
Lucy Briers – actress
Margaret Calvert – graphic artist
Miranda Carter – biographer
Edie Campbell – model
Cecilia Chancellor – model
Joan Cross – singer
Emma Darwin – author
Monica Dickens – author
Suzi Digby – conductor and musician
Flora Fraser – author
Justine Frischmann – musician
Gluck (Hannah Gluckstein) – artist
Francesca Gonshaw – actress
Imogen Holst – musician
Sarah Hobson – travel writer
Ursula Howells – actress
Celia Johnson – actress
Rachel Johnson – journalist and editor
Jane M. Joseph – musician and composer
Amy Key Clarke – poet and author
Marghanita Laski – author
Nicola LeFanu – composer
Amanda Levete – architect
Alice Lowe – actress/author
Jessica Mann – author
Yvonne Mitchell – actress/author
Emily Mortimer – actress
Lucy Moss – playwright/director
Santha Rama Rau – author
Joely Richardson – actress
Natasha Richardson – actress
Georgina Rylance – actress
Katherine Shonfield – architect
Dodie Smith – playwright
Catherine Storr – author
Imogen Stubbs – actress
Emma Tennant – author
Angela Thirkell – author
Mary Treadgold – author
Salley Vickers – author
Samantha Weinberg – author
Rachel Weisz – actress
Antonia White – author
Business
Isabel dos Santos – wealthiest woman in Africa as of 2020
Grace Beverley – founder of Tala and Shreddy
Culinary arts
Thomasina Miers – chef and founder of Wahaca restaurant chain
Henrietta Lovell – founder of the Rare Tea Company
Education
Eleanora Carus-Wilson – economic historian
Sheila Forbes – principal, St Hilda's College, Oxford
Henrietta Harrison – professor of Modern Chinese Studies, University of Oxford
Jessica Rawson – warden, Merton College, Oxford
Barbara Reynolds – scholar
Joan Robinson – economist
Humanitarianism
Myrtle Solomon – pacifist and former chair War Resisters' International
Law
Sonia Proudman – High Court Judge
Journalism and media
Emily Buchanan – BBC World Affairs correspondent
Clemency Burton-Hill – broadcaster and author
Edie Campbell – model and socialite
Victoria Coren Mitchell – presenter, poker player
Daisy Donovan – TV presenter
Stephanie Flanders – BBC Economics editor
Amelia Gentleman – journalist
Bridget Harrison – journalist
Bronwen Maddox – senior journalist at The Times
Veronica Pedrosa – Al Jazeera English correspondent
Sophie Raworth – news reader
Susanna Reid – news presenter
Anne Scott-James – journalist and editor
Alexandra Shulman – editor-in-chief, Vogue 1992–2017
Carol Thatcher – journalist
Erica Wagner – author, critic, and literary editor of The Times
Eirene White, Baroness White – journalist and Labour politician
Petronella Wyatt – journalist
Politics
Jane Bonham Carter – Liberal Democrat peer
Vicky Ford – Conservative MP and formerly MEP
Harriet Harman – Labour MP, former Acting Leader of the Labour Party, former Leader of the Opposition and former Cabinet minister
Susan Kramer – former Liberal Democrat MP
Mavis Tate – Conservative MP and women's rights campaigner
Anne-Marie Trevelyan – Conservative MP
Jo Valentine, Baroness Valentine – member of the British House of Lords
Eirene White, Baroness White – Labour Minister of State then life peer
Shirley Williams – former Labour Education Secretary and co-founder of the Social Democratic Party
Science
Kate Bingham – venture capitalist
Ruth Bowden – anatomist
Caroline Deys – doctor
Rosalind Franklin – scientist, research led to discovery of the structure of DNA
Jean Ginsburg – physiologist, endocrinologist
Christine Hamill – mathematician
Kathleen Kenyon – archaeologist
Irene Manton – botanist
Sidnie Manton – entomologist
Onora O'Neill – philosopher
Cecilia Payne-Gaposchkin – astronomer
Catherine Peckham – doctor and scientist
Joan Beauchamp Procter – zoologist, herpetologist
Sport
Kitty Godfree – tennis player
Lara Prior-Palmer – equestrian
Cecilia Robinson – cricketer
Zoe de Toledo – rower
Notable former staff
Margaret Cole – socialist politician, former Classics teacher
Gustav Holst – composer, pioneer of music education for girls
Nicola LeFanu – director of music during the 1970s
Controversy
The school was in the news in November 2017 with allegations of sexual abuse between the 1970s and 1990s. One teacher resigned on 22 November 2017 amidst these allegations.
References
External links
Official School Website
ISI Inspection Reports
Profile on the ISC website
SPGS at The Good Schools Guide
Tatler School Guide
Private girls' schools in London
Educational institutions established in 1904
Private schools in the London Borough of Hammersmith and Fulham
Member schools of the Girls' Schools Association
History of the London Borough of Hammersmith and Fulham
1904 establishments in England
|
```javascript
/*
    Myrtille: A native HTML4/5 Remote Desktop Protocol client.

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        path_to_url

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
*/
/*****************************************************************************************************************************************************************************************************/
/*** Touchscreen ***/
/*****************************************************************************************************************************************************************************************************/
function Touchscreen(base, config, dialog, display, network, user)
{
this.init = function()
{
try
{
user.addListener('touchmove', function(e) { touchMove(e); }, user.getPassiveEventListeners() ? { passive: true } : false);
user.addListener('touchstart', function(e) { touchTap(e, 1); }, user.getPassiveEventListeners() ? { passive: true } : false);
user.addListener('touchend', function(e) { touchTap(e, 0); }, user.getPassiveEventListeners() ? { passive: true } : false);
}
catch (exc)
{
dialog.showDebug('touchscreen init error: ' + exc.message);
throw exc;
}
};
function processEvent(e)
{
if (e == null)
e = window.event;
if (e == null)
return false;
if (!setTouchPosition(e))
return false;
return true;
}
function setTouchPosition(e)
{
var scrollLeft = (document.documentElement.scrollLeft ? document.documentElement.scrollLeft : document.body.scrollLeft);
var scrollTop = (document.documentElement.scrollTop ? document.documentElement.scrollTop : document.body.scrollTop);
//dialog.showDebug('browser width: ' + display.getBrowserWidth() + ', height: ' + display.getBrowserHeight());
//dialog.showDebug('scroll left: ' + scrollLeft + ', top: ' + scrollTop);
//dialog.showDebug('horizontal offset: ' + display.getHorizontalOffset() + ', vertical: ' + display.getVerticalOffset());
if (e.touches[0])
{
touchX = Math.round(e.touches[0].pageX ? e.touches[0].pageX : e.touches[0].clientX + scrollLeft) - display.getHorizontalOffset();
touchY = Math.round(e.touches[0].pageY ? e.touches[0].pageY : e.touches[0].clientY + scrollTop) - display.getVerticalOffset();
}
//dialog.showDebug('touch X: ' + touchX + ', Y: ' + touchY);
if (touchX < 0 || touchY < 0 || touchX > display.getBrowserWidth() + scrollLeft - display.getHorizontalOffset() || touchY > display.getBrowserHeight() + scrollTop - display.getVerticalOffset())
{
//dialog.showDebug('touch out of bounds, ignoring');
return false;
}
//dialog.showDebug('*************************');
return true;
}
/*************************************************************************************************************************************************************************************************/
/*** Move ***/
/*************************************************************************************************************************************************************************************************/
// last touch position
var lastTouchX = null;
var lastTouchY = null;
// current touch position
var touchX;
var touchY;
// touch move sampling (same mechanism as mouse)
var touchMoveCount = 0;
function touchMove(e)
{
try
{
//dialog.showDebug('touch move');
if (!processEvent(e))
return false;
// the touch move event can be fired repeatedly if there is an external application stealing the focus to the browser (i.e.: windows task manager, fiddler, etc. on a 1 sec interval)
// when the browser gets the focus back, it fires a touch move event...
// so, if the touch didn't move since the last call, exit
if (lastTouchX != null && lastTouchY != null && lastTouchX == touchX && lastTouchY == touchY)
{
//dialog.showDebug('touch move repeated, ignoring');
return false;
}
// detect gestures (simple swipe for now, may evolve into more advanced gestures)
var gesture = false;
var xDiff;
var yDiff;
if (user.getVerticalSwipeEnabled() && lastTouchX != null && lastTouchY != null) // the diffs are meaningless on the first move, while the last position is still null
{
xDiff = lastTouchX - touchX;
yDiff = lastTouchY - touchY;
// horizontal move is more significant than vertical
if (Math.abs(xDiff) > Math.abs(yDiff))
{
if (xDiff > 0)
{
//dialog.showDebug('left swipe');
}
else
{
//dialog.showDebug('right swipe');
}
}
else
{
if (yDiff > 0)
{
//dialog.showDebug('up swipe');
}
else
{
//dialog.showDebug('down swipe');
}
// handle gestures
gesture = true;
}
}
touchMoveCount++;
var send = true;
if (!gesture)
{
// sampling (same mechanism as mouse)
if (config.getMouseMoveSamplingRate() == 5 ||
config.getMouseMoveSamplingRate() == 10 ||
config.getMouseMoveSamplingRate() == 20 ||
config.getMouseMoveSamplingRate() == 25 ||
config.getMouseMoveSamplingRate() == 50)
{
send = touchMoveCount % (100 / config.getMouseMoveSamplingRate()) == 0;
}
// sampling debug: display a dot at the current touch move position (green: move sent, red: dropped) - only if canvas is enabled
/*
if (config.getDebugEnabled() && config.getDisplayMode() == config.getDisplayModeEnum().CANVAS)
{
display.getCanvas().getCanvasContext().fillStyle = send ? '#00FF00' : '#FF0000';
display.getCanvas().getCanvasContext().fillRect(touchX, touchY, 1, 1);
}
*/
if (send)
{
user.triggerActivity();
sendEvent(base.getCommandEnum().SEND_MOUSE_MOVE.text + touchX + '-' + touchY); // same event as mouse move
}
}
else
{
user.triggerActivity();
// cancel the touch tap preceding the gesture
// replace it by a touch move to set the gesture initial position
if (touchTapTimeout != null)
{
//dialog.showDebug('cancelling touch tap');
window.clearTimeout(touchTapTimeout);
touchTapTimeout = null;
touchTapCancelled = true;
sendEvent(base.getCommandEnum().SEND_MOUSE_MOVE.text + lastTouchTapX + '-' + lastTouchTapY);
}
else
{
// sample touch gestures to avoid sending too many mouse wheel events and thus scrolling too fast
// some browsers may support a finer touch definition; this is the case of firefox compared to chrome (!)
send = touchMoveCount % (display.isFirefoxBrowser() ? 10 : 5) == 0;
// sampling debug: display a dot at the current touch move position (green: move sent, red: dropped) - only if canvas is enabled
/*
if (config.getDebugEnabled() && config.getDisplayMode() == config.getDisplayModeEnum().CANVAS)
{
display.getCanvas().getCanvasContext().fillStyle = send ? '#00FF00' : '#FF0000';
display.getCanvas().getCanvasContext().fillRect(touchX, touchY, 1, 1);
}
*/
if (send)
{
if (yDiff > 0)
{
// scroll down
sendEvent(base.getCommandEnum().SEND_MOUSE_WHEEL_DOWN.text + lastTouchTapX + '-' + lastTouchTapY);
}
else
{
// scroll up
sendEvent(base.getCommandEnum().SEND_MOUSE_WHEEL_UP.text + lastTouchTapX + '-' + lastTouchTapY);
}
}
}
}
// update the last touch position
lastTouchX = touchX;
lastTouchY = touchY;
}
catch (exc)
{
dialog.showDebug('touchscreen move error: ' + exc.message);
}
user.cancelEvent(e);
return false;
}
/*************************************************************************************************************************************************************************************************/
/*** Tap ***/
/*************************************************************************************************************************************************************************************************/
// last touch tap position
var lastTouchTapX = null;
var lastTouchTapY = null;
// wait for a potential gesture following a touch tap
// if there is a gesture, the touch tap is cancelled
var touchTapTimeout = null;
var touchTapCancelled = false;
function touchTap(e, start)
{
try
{
//dialog.showDebug('touch tap');
if (touchTapCancelled)
{
//dialog.showDebug('touch tap cancelled');
touchTapCancelled = false;
return false;
}
if (!processEvent(e))
return false;
touchTapTimeout = window.setTimeout(function()
{
user.triggerActivity();
//dialog.showDebug('touch ' + (start ? 'start' : 'end'));
if (user.getRightClickButton() != null && user.getRightClickButton().value == 'Right-Click ON')
{
//dialog.showDebug('emulating mouse right click ' + (start ? 'down' : 'up'));
sendEvent(base.getCommandEnum().SEND_MOUSE_RIGHT_BUTTON.text + start + touchX + '-' + touchY);
if (!start)
{
user.toggleRightClick(user.getRightClickButton());
}
}
else
{
sendEvent(base.getCommandEnum().SEND_MOUSE_LEFT_BUTTON.text + start + touchX + '-' + touchY); // same event as mouse left button
}
}, 250);
// update the last touch tap position
lastTouchTapX = touchX;
lastTouchTapY = touchY;
}
catch (exc)
{
dialog.showDebug('touchscreen tap ' + (start ? 'start' : 'end') + ' error: ' + exc.message);
}
user.cancelEvent(e);
return false;
}
/*************************************************************************************************************************************************************************************************/
/*** Network ***/
/*************************************************************************************************************************************************************************************************/
function sendEvent(touchEvent)
{
if (touchEvent != null)
{
// pass the event to the network
network.processUserEvent('mouse', touchEvent); // same logic as mouse
}
}
}
```
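The move-sampling mechanism used in `touchMove` above can be isolated into a small standalone helper. This is a sketch under the assumption that the supported rates are exactly the divisors of 100 checked in the original code (5, 10, 20, 25, 50); the function name is illustrative:

```javascript
// Decide whether the Nth move event should be forwarded, given a
// sampling rate expressed as the percentage of events to keep.
// Unsupported rates pass every event through, mirroring the code above.
function shouldSendMove(moveCount, samplingRate) {
    var supportedRates = [5, 10, 20, 25, 50];
    if (supportedRates.indexOf(samplingRate) === -1)
        return true; // no sampling for unsupported rates
    // keep one event out of every (100 / rate)
    return moveCount % (100 / samplingRate) === 0;
}

// at a 20% rate, only every 5th event is sent
console.log(shouldSendMove(5, 20));  // true
console.log(shouldSendMove(6, 20));  // false
```

Dropping intermediate positions this way trades cursor smoothness for network traffic: the remote cursor still lands on the latest sampled position, so the session stays responsive on slow links.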
|
```php
<?php
namespace Illuminate\Foundation;
class AliasLoader
{
/**
* The array of class aliases.
*
* @var array
*/
protected $aliases;
/**
* Indicates if a loader has been registered.
*
* @var bool
*/
protected $registered = false;
/**
* The singleton instance of the loader.
*
* @var \Illuminate\Foundation\AliasLoader
*/
protected static $instance;
/**
* Create a new AliasLoader instance.
*
* @param array $aliases
*/
private function __construct($aliases)
{
$this->aliases = $aliases;
}
/**
* Get or create the singleton alias loader instance.
*
* @param array $aliases
* @return \Illuminate\Foundation\AliasLoader
*/
public static function getInstance(array $aliases = [])
{
if (is_null(static::$instance)) {
return static::$instance = new static($aliases);
}
$aliases = array_merge(static::$instance->getAliases(), $aliases);
static::$instance->setAliases($aliases);
return static::$instance;
}
/**
* Load a class alias if it is registered.
*
* @param string $alias
* @return bool|null
*/
public function load($alias)
{
if (isset($this->aliases[$alias])) {
return class_alias($this->aliases[$alias], $alias);
}
}
/**
* Add an alias to the loader.
*
* @param string $class
* @param string $alias
* @return void
*/
public function alias($class, $alias)
{
$this->aliases[$class] = $alias;
}
/**
* Register the loader on the auto-loader stack.
*
* @return void
*/
public function register()
{
if (! $this->registered) {
$this->prependToLoaderStack();
$this->registered = true;
}
}
/**
* Prepend the load method to the auto-loader stack.
*
* @return void
*/
protected function prependToLoaderStack()
{
spl_autoload_register([$this, 'load'], true, true);
}
/**
* Get the registered aliases.
*
* @return array
*/
public function getAliases()
{
return $this->aliases;
}
/**
* Set the registered aliases.
*
* @param array $aliases
* @return void
*/
public function setAliases(array $aliases)
{
$this->aliases = $aliases;
}
/**
* Indicates if the loader has been registered.
*
* @return bool
*/
public function isRegistered()
{
return $this->registered;
}
/**
* Set the "registered" state of the loader.
*
* @param bool $value
* @return void
*/
public function setRegistered($value)
{
$this->registered = $value;
}
/**
* Set the value of the singleton alias loader.
*
* @param \Illuminate\Foundation\AliasLoader $loader
* @return void
*/
public static function setInstance($loader)
{
static::$instance = $loader;
}
/**
* Clone method.
*
* @return void
*/
private function __clone()
{
//
}
}
```
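The `getInstance` semantics above (the first call creates the singleton; later calls merge their aliases into it, with new entries overriding old ones) can be illustrated with a minimal JavaScript analogue. The class name and shape here are illustrative, not part of Laravel:

```javascript
// Minimal analogue of AliasLoader::getInstance: a lazily created
// singleton whose alias map is merged on every subsequent call.
class MiniAliasLoader {
    constructor(aliases) { this.aliases = aliases; }

    static getInstance(aliases = {}) {
        if (!MiniAliasLoader.instance) {
            MiniAliasLoader.instance = new MiniAliasLoader(aliases);
            return MiniAliasLoader.instance;
        }
        // later calls merge, with new entries overriding existing ones
        MiniAliasLoader.instance.aliases =
            Object.assign({}, MiniAliasLoader.instance.aliases, aliases);
        return MiniAliasLoader.instance;
    }
}

const a = MiniAliasLoader.getInstance({ DB: 'Illuminate\\Support\\Facades\\DB' });
const b = MiniAliasLoader.getInstance({ Log: 'Illuminate\\Support\\Facades\\Log' });
console.log(a === b);                 // true: same singleton
console.log(Object.keys(b.aliases));  // ['DB', 'Log']
```

The PHP original goes one step further by registering `load` on the autoloader stack, so an alias such as `DB` is only materialized (via `class_alias`) the first time PHP actually tries to resolve that class name.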
|
```kotlin
package mega.privacy.android.shared.original.core.ui.controls.dialogs.internal
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.foundation.layout.padding
import androidx.compose.material.AlertDialog
import androidx.compose.material.LocalAbsoluteElevation
import androidx.compose.material.MaterialTheme
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.CompositionLocalProvider
import androidx.compose.ui.ExperimentalComposeUiApi
import androidx.compose.ui.Modifier
import androidx.compose.ui.platform.testTag
import androidx.compose.ui.semantics.semantics
import androidx.compose.ui.semantics.testTagsAsResourceId
import androidx.compose.ui.text.font.FontWeight
import androidx.compose.ui.unit.dp
import androidx.compose.ui.window.DialogProperties
import mega.privacy.android.shared.original.core.ui.controls.buttons.TextMegaButton
import mega.privacy.android.shared.original.core.ui.theme.MegaOriginalTheme
import mega.privacy.android.shared.original.core.ui.theme.extensions.conditional
import mega.privacy.android.shared.original.core.ui.utils.composeLet
@Composable
internal fun BaseMegaAlertDialog(
text: String?,
confirmButtonText: String,
cancelButtonText: String?,
onConfirm: () -> Unit,
onDismiss: () -> Unit,
modifier: Modifier = Modifier,
title: String? = null,
onCancel: () -> Unit = onDismiss,
dismissOnClickOutside: Boolean = true,
dismissOnBackPress: Boolean = true,
) = BaseMegaAlertDialog(
content = text?.composeLet {
Text(
text = text,
style = MaterialTheme.typography.subtitle1,
color = MegaOriginalTheme.colors.text.secondary,
modifier = Modifier.testTag(CONTENT_TAG),
)
},
confirmButtonText = confirmButtonText,
cancelButtonText = cancelButtonText,
onConfirm = onConfirm,
onDismiss = onDismiss,
modifier = modifier,
title = title,
onCancel = onCancel,
dismissOnClickOutside = dismissOnClickOutside,
dismissOnBackPress = dismissOnBackPress,
)
@Composable
internal fun BaseMegaAlertDialog(
confirmButtonText: String,
cancelButtonText: String?,
onConfirm: () -> Unit,
onDismiss: () -> Unit,
modifier: Modifier = Modifier,
content: @Composable (() -> Unit)? = null,
title: String? = null,
onCancel: () -> Unit = onDismiss,
dismissOnClickOutside: Boolean = true,
dismissOnBackPress: Boolean = true,
cancelEnabled: Boolean = true,
confirmEnabled: Boolean = true,
) = BaseMegaAlertDialog(
text = content,
buttons = {
AlertDialogFlowRow {
cancelButtonText?.let {
TextMegaButton(
modifier = Modifier.testTag(CANCEL_TAG),
text = cancelButtonText,
onClick = onCancel,
enabled = cancelEnabled
)
}
TextMegaButton(
modifier = Modifier.testTag(CONFIRM_TAG),
text = confirmButtonText,
onClick = onConfirm,
enabled = confirmEnabled
)
}
},
onDismiss = onDismiss,
modifier = modifier,
title = title,
dismissOnClickOutside = dismissOnClickOutside,
dismissOnBackPress = dismissOnBackPress
)
@Composable
internal fun BaseMegaAlertDialog(
text: String,
buttons: @Composable (() -> Unit),
onDismiss: () -> Unit,
modifier: Modifier = Modifier,
title: String? = null,
dismissOnClickOutside: Boolean = true,
dismissOnBackPress: Boolean = true,
) = BaseMegaAlertDialog(
text = {
Text(
text = text,
style = MaterialTheme.typography.subtitle1,
color = MegaOriginalTheme.colors.text.secondary,
modifier = Modifier.testTag(CONTENT_TAG),
)
},
buttons = buttons,
onDismiss = onDismiss,
modifier = modifier,
title = title,
dismissOnClickOutside = dismissOnClickOutside,
dismissOnBackPress = dismissOnBackPress
)
@OptIn(ExperimentalComposeUiApi::class)
@Composable
private fun BaseMegaAlertDialog(
buttons: @Composable (() -> Unit),
onDismiss: () -> Unit,
modifier: Modifier = Modifier,
text: @Composable (() -> Unit)? = null,
title: String? = null,
dismissOnClickOutside: Boolean = true,
dismissOnBackPress: Boolean = true,
) = CompositionLocalProvider(LocalAbsoluteElevation provides 24.dp) {
AlertDialog(
modifier = modifier.semantics { testTagsAsResourceId = true },
backgroundColor = MegaOriginalTheme.colors.background.surface1,
title = title?.composeLet {
Text(
modifier = Modifier
.testTag(TITLE_TAG)
.conditional(text == null) {
// For dialog without text, add padding to the bottom of the title
padding(bottom = 18.dp)
}
.fillMaxWidth(),
text = it,
fontWeight = FontWeight.Medium,
style = MaterialTheme.typography.h6,
color = MegaOriginalTheme.colors.text.primary,
)
},
text = text,
onDismissRequest = onDismiss,
buttons = buttons,
properties = DialogProperties(
dismissOnBackPress = dismissOnBackPress,
dismissOnClickOutside = dismissOnClickOutside,
),
)
}
/**
* Confirm button's test tag
*/
const val CONFIRM_TAG = "mega_alert_dialog:button_confirm"
/**
* Cancel button's test tag
*/
const val CANCEL_TAG = "mega_alert_dialog:button_cancel"
internal const val TITLE_TAG = "mega_alert_dialog:text_title"
internal const val CONTENT_TAG = "mega_alert_dialog:text_content"
internal const val OPTION1_TAG = "mega_alert_dialog:button_option1"
internal const val OPTION2_TAG = "mega_alert_dialog:button_option2"
```
|
```php
<?php
namespace Laravel\Scout\Attributes;
use Attribute;
use Illuminate\Support\Arr;
#[Attribute]
class SearchUsingFullText
{
/**
* The full-text columns.
*
* @var array
*/
public $columns = [];
    /**
     * The full-text options.
     *
     * @var array
     */
public $options = [];
/**
* Create a new attribute instance.
*
* @param array $columns
* @param array $options
* @return void
*/
public function __construct($columns, $options = [])
{
$this->columns = Arr::wrap($columns);
$this->options = Arr::wrap($options);
}
}
```
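The constructor above leans on `Arr::wrap`, which lets callers pass `columns` as either a single string or an array: `null` becomes an empty array, an array is returned as-is, and anything else is boxed into a one-element array. A JavaScript sketch of the same normalization (the `wrap` name mirrors Laravel's helper; it is not a built-in):

```javascript
// Sketch of Laravel's Arr::wrap semantics:
//   null/undefined -> [], array -> unchanged, scalar -> [scalar]
function wrap(value) {
    if (value === null || value === undefined) return [];
    return Array.isArray(value) ? value : [value];
}

console.log(wrap('title'));            // ['title']
console.log(wrap(['title', 'body']));  // ['title', 'body']
console.log(wrap(null));               // []
```

This is a common pattern for attribute-style configuration: the attribute accepts the ergonomic single-value form while the consuming code only ever deals with arrays.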
|
```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *   path_to_url
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied.  See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package org.apache.pulsar.broker.admin.v2;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiParam;
import io.swagger.annotations.ApiResponse;
import io.swagger.annotations.ApiResponses;
import io.swagger.annotations.Example;
import io.swagger.annotations.ExampleProperty;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import javax.ws.rs.Consumes;
import javax.ws.rs.DELETE;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.StreamingOutput;
import org.apache.pulsar.broker.admin.impl.NamespacesBase;
import org.apache.pulsar.broker.admin.impl.OffloaderObjectsScannerUtils;
import org.apache.pulsar.broker.web.RestException;
import org.apache.pulsar.client.api.SubscriptionType;
import org.apache.pulsar.common.api.proto.CommandGetTopicsOfNamespace.Mode;
import org.apache.pulsar.common.naming.NamespaceName;
import org.apache.pulsar.common.policies.data.AuthAction;
import org.apache.pulsar.common.policies.data.AutoSubscriptionCreationOverride;
import org.apache.pulsar.common.policies.data.AutoTopicCreationOverride;
import org.apache.pulsar.common.policies.data.BacklogQuota;
import org.apache.pulsar.common.policies.data.BacklogQuota.BacklogQuotaType;
import org.apache.pulsar.common.policies.data.BookieAffinityGroupData;
import org.apache.pulsar.common.policies.data.DelayedDeliveryPolicies;
import org.apache.pulsar.common.policies.data.DispatchRate;
import org.apache.pulsar.common.policies.data.EntryFilters;
import org.apache.pulsar.common.policies.data.InactiveTopicPolicies;
import org.apache.pulsar.common.policies.data.NamespaceOperation;
import org.apache.pulsar.common.policies.data.OffloadPolicies;
import org.apache.pulsar.common.policies.data.OffloadPoliciesImpl;
import org.apache.pulsar.common.policies.data.PersistencePolicies;
import org.apache.pulsar.common.policies.data.Policies;
import org.apache.pulsar.common.policies.data.PolicyName;
import org.apache.pulsar.common.policies.data.PolicyOperation;
import org.apache.pulsar.common.policies.data.PublishRate;
import org.apache.pulsar.common.policies.data.RetentionPolicies;
import org.apache.pulsar.common.policies.data.SchemaAutoUpdateCompatibilityStrategy;
import org.apache.pulsar.common.policies.data.SchemaCompatibilityStrategy;
import org.apache.pulsar.common.policies.data.SubscribeRate;
import org.apache.pulsar.common.policies.data.SubscriptionAuthMode;
import org.apache.pulsar.common.policies.data.TopicHashPositions;
import org.apache.pulsar.common.policies.data.impl.AutoSubscriptionCreationOverrideImpl;
import org.apache.pulsar.common.policies.data.impl.AutoTopicCreationOverrideImpl;
import org.apache.pulsar.common.policies.data.impl.BacklogQuotaImpl;
import org.apache.pulsar.common.policies.data.impl.BookieAffinityGroupDataImpl;
import org.apache.pulsar.common.policies.data.impl.BundlesDataImpl;
import org.apache.pulsar.common.policies.data.impl.DispatchRateImpl;
import org.apache.pulsar.common.util.FutureUtil;
import org.apache.pulsar.metadata.api.MetadataStoreException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@Path("/namespaces")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@Api(value = "/namespaces", description = "Namespaces admin apis", tags = "namespaces")
public class Namespaces extends NamespacesBase {
@GET
@Path("/{tenant}")
@ApiOperation(value = "Get the list of all the namespaces for a certain tenant.",
response = String.class, responseContainer = "Set")
@ApiResponses(value = {@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant doesn't exist")})
public void getTenantNamespaces(@Suspended final AsyncResponse response,
@PathParam("tenant") String tenant) {
internalGetTenantNamespaces(tenant)
.thenAccept(response::resume)
.exceptionally(ex -> {
log.error("[{}] Failed to get namespaces list: {}", clientAppId(), ex);
resumeAsyncResponseExceptionally(response, ex);
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/topics")
@ApiOperation(value = "Get the list of all the topics under a certain namespace.",
response = String.class, responseContainer = "Set")
@ApiResponses(value = {
@ApiResponse(code = 403, message = "Don't have admin or operate permission on the namespace"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist")})
public void getTopics(@Suspended AsyncResponse response,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@QueryParam("mode") @DefaultValue("PERSISTENT") Mode mode,
@ApiParam(value = "Include system topic")
@QueryParam("includeSystemTopic") boolean includeSystemTopic) {
validateNamespaceName(tenant, namespace);
validateNamespaceOperationAsync(NamespaceName.get(tenant, namespace), NamespaceOperation.GET_TOPICS)
// Validate that namespace exists, throws 404 if it doesn't exist
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenCompose(policies -> internalGetListOfTopics(policies, mode))
.thenApply(topics -> filterSystemTopic(topics, includeSystemTopic))
.thenAccept(response::resume)
.exceptionally(ex -> {
log.error("Failed to get topics list for namespace {}", namespaceName, ex);
resumeAsyncResponseExceptionally(response, ex);
return null;
});
}
@GET
@Path("/{tenant}/{namespace}")
@ApiOperation(value = "Get the dump all the policies specified for a namespace.", response = Policies.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist") })
public void getPolicies(@Suspended AsyncResponse response,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(NamespaceName.get(tenant, namespace), PolicyName.ALL,
PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(response::resume)
.exceptionally(ex -> {
log.error("Failed to get policies for namespace {}", namespaceName, ex);
resumeAsyncResponseExceptionally(response, ex);
return null;
});
}
@PUT
@Path("/{tenant}/{namespace}")
@ApiOperation(value = "Creates a new namespace with the specified policies")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster doesn't exist"),
@ApiResponse(code = 409, message = "Namespace already exists"),
@ApiResponse(code = 412, message = "Namespace name is not valid") })
public void createNamespace(@Suspended AsyncResponse response,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Policies for the namespace") Policies policies) {
validateNamespaceName(tenant, namespace);
policies = getDefaultPolicesIfNull(policies);
internalCreateNamespace(policies)
.thenAccept(__ -> response.resume(Response.noContent().build()))
.exceptionally(ex -> {
Throwable root = FutureUtil.unwrapCompletionException(ex);
if (root instanceof MetadataStoreException.AlreadyExistsException) {
response.resume(new RestException(Response.Status.CONFLICT, "Namespace already exists"));
} else {
log.error("[{}] Failed to create namespace {}", clientAppId(), namespaceName, ex);
resumeAsyncResponseExceptionally(response, ex);
}
return null;
});
}
@DELETE
@Path("/{tenant}/{namespace}")
@ApiOperation(value = "Delete a namespace and all the topics under it.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 307, message = "Current broker doesn't serve the namespace"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 405, message = "Broker doesn't allow forced deletion of namespaces"),
@ApiResponse(code = 409, message = "Namespace is not empty") })
public void deleteNamespace(@Suspended final AsyncResponse asyncResponse, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@QueryParam("force") @DefaultValue("false") boolean force,
@QueryParam("authoritative") @DefaultValue("false") boolean authoritative) {
validateNamespaceName(tenant, namespace);
internalDeleteNamespaceAsync(force)
.thenAccept(__ -> {
log.info("[{}] Successful delete namespace {}", clientAppId(), namespace);
asyncResponse.resume(Response.noContent().build());
})
.exceptionally(ex -> {
if (!isRedirectException(ex)) {
log.error("[{}] Failed to delete namespace {}", clientAppId(), namespaceName, ex);
}
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@DELETE
@Path("/{tenant}/{namespace}/{bundle}")
@ApiOperation(value = "Delete a namespace bundle and all the topics under it.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 307, message = "Current broker doesn't serve the namespace"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 409, message = "Namespace bundle is not empty")})
public void deleteNamespaceBundle(@Suspended AsyncResponse response, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@PathParam("bundle") String bundleRange,
@QueryParam("force") @DefaultValue("false") boolean force,
@QueryParam("authoritative") @DefaultValue("false") boolean authoritative) {
validateNamespaceName(tenant, namespace);
internalDeleteNamespaceBundleAsync(bundleRange, authoritative, force)
.thenRun(() -> response.resume(Response.noContent().build()))
.exceptionally(ex -> {
if (!isRedirectException(ex)) {
log.error("[{}] Failed to delete namespace bundle {}", clientAppId(), namespaceName, ex);
}
resumeAsyncResponseExceptionally(response, ex);
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/permissions")
@ApiOperation(value = "Retrieve the permissions for a namespace.",
notes = "Returns a nested map structure which Swagger does not fully support for display. "
+ "Structure: Map<String, Set<AuthAction>>. Please refer to this structure for details.",
response = AuthAction.class, responseContainer = "Map")
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 409, message = "Namespace is not empty") })
public void getPermissions(@Suspended AsyncResponse response,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespaceOperationAsync(NamespaceName.get(tenant, namespace), NamespaceOperation.GET_PERMISSION)
.thenCompose(__ -> getAuthorizationService().getPermissionsAsync(namespaceName))
                .thenAccept(response::resume)
.exceptionally(ex -> {
log.error("Failed to get permissions for namespace {}", namespaceName, ex);
resumeAsyncResponseExceptionally(response, ex);
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/permissions/subscription")
@ApiOperation(value = "Retrieve the permissions for a subscription.",
notes = "Returns a nested map structure which Swagger does not fully support for display. "
+ "Structure: Map<String, Set<String>>. Please refer to this structure for details.",
response = String.class, responseContainer = "Map")
@ApiResponses(value = {@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 409, message = "Namespace is not empty")})
public void getPermissionOnSubscription(@Suspended AsyncResponse response,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespaceOperationAsync(NamespaceName.get(tenant, namespace), NamespaceOperation.GET_PERMISSION)
.thenCompose(__ -> getAuthorizationService().getSubscriptionPermissionsAsync(namespaceName))
                .thenAccept(response::resume)
.exceptionally(ex -> {
log.error("[{}] Failed to get permissions on subscription for namespace {}: {} ", clientAppId(),
namespaceName, ex.getCause().getMessage(), ex);
resumeAsyncResponseExceptionally(response, ex);
return null;
});
}
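    // Usage sketch (hypothetical broker address; assumes the standard "/admin/v2/namespaces" REST base path).
    // The request body is a JSON array of AuthAction values to grant to the role, e.g.:
    //   curl -X POST -H 'Content-Type: application/json' \
    //        -d '["produce","consume"]' \
    //        'http://localhost:8080/admin/v2/namespaces/my-tenant/my-ns/permissions/my-role'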
@POST
@Path("/{tenant}/{namespace}/permissions/{role}")
@ApiOperation(value = "Grant a new permission to a role on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 501, message = "Authorization is not enabled")})
public void grantPermissionOnNamespace(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@PathParam("role") String role,
@ApiParam(value = "List of permissions for the specified role") Set<AuthAction> actions) {
validateNamespaceName(tenant, namespace);
internalGrantPermissionOnNamespaceAsync(role, actions)
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("[{}] Failed to set permissions for namespace {}: {}",
clientAppId(), namespaceName, ex.getCause().getMessage(), ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{property}/{namespace}/permissions/subscription/{subscription}")
@ApiOperation(hidden = true, value = "Grant a new permission to roles for a subscription."
+ "[Tenant admin is allowed to perform this operation]")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Property or cluster or namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 501, message = "Authorization is not enabled") })
public void grantPermissionOnSubscription(@Suspended AsyncResponse asyncResponse,
@PathParam("property") String property,
@PathParam("namespace") String namespace,
@PathParam("subscription") String subscription,
@ApiParam(value = "List of roles for the specified subscription") Set<String> roles) {
validateNamespaceName(property, namespace);
internalGrantPermissionOnSubscriptionAsync(subscription, roles)
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
                log.error("[{}] Failed to grant permission on subscription for roles {}:{} - "
                                + "namespace {}: {}",
                        clientAppId(), roles, subscription, namespaceName, ex.getCause().getMessage(), ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@DELETE
@Path("/{tenant}/{namespace}/permissions/{role}")
@ApiOperation(value = "Revoke all permissions to a role on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist") })
public void revokePermissionsOnNamespace(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace, @PathParam("role") String role) {
validateNamespaceName(tenant, namespace);
internalRevokePermissionsOnNamespaceAsync(role)
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("[{}] Failed to revoke permission on role {} - namespace {}: {}",
clientAppId(), role, namespace, ex.getCause().getMessage(), ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@DELETE
@Path("/{property}/{namespace}/permissions/{subscription}/{role}")
@ApiOperation(hidden = true, value = "Revoke subscription admin-api access permission for a role.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Property or cluster or namespace doesn't exist") })
public void revokePermissionOnSubscription(@Suspended AsyncResponse asyncResponse,
@PathParam("property") String property,
@PathParam("namespace") String namespace, @PathParam("subscription") String subscription,
@PathParam("role") String role) {
validateNamespaceName(property, namespace);
internalRevokePermissionsOnSubscriptionAsync(subscription, role)
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("[{}] Failed to revoke permission on subscription for role {}:{} - namespace {}: {}",
clientAppId(), role, subscription, namespace, ex.getCause().getMessage(), ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/replication")
@ApiOperation(value = "Get the replication clusters for a namespace.",
response = String.class, responseContainer = "Set")
@ApiResponses(value = {@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 412, message = "Namespace is not global")})
public void getNamespaceReplicationClusters(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalGetNamespaceReplicationClustersAsync()
.thenAccept(asyncResponse::resume)
.exceptionally(e -> {
log.error("[{}] Failed to get namespace replication clusters on namespace {}", clientAppId(),
namespace, e);
resumeAsyncResponseExceptionally(asyncResponse, e);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/replication")
@ApiOperation(value = "Set the replication clusters for a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 409, message = "Peer-cluster can't be part of replication-cluster"),
@ApiResponse(code = 412, message = "Namespace is not global or invalid cluster ids") })
public void setNamespaceReplicationClusters(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "List of replication clusters", required = true) List<String> clusterIds) {
validateNamespaceName(tenant, namespace);
internalSetNamespaceReplicationClusters(clusterIds)
.thenAccept(asyncResponse::resume)
.exceptionally(e -> {
log.error("[{}] Failed to set namespace replication clusters on namespace {}",
clientAppId(), namespace, e);
resumeAsyncResponseExceptionally(asyncResponse, e);
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/messageTTL")
@ApiOperation(value = "Get the message TTL for the namespace", response = Integer.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist") })
public void getNamespaceMessageTTL(@Suspended AsyncResponse asyncResponse, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(NamespaceName.get(tenant, namespace), PolicyName.TTL,
PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.message_ttl_in_seconds))
.exceptionally(ex -> {
log.error("Failed to get namespace message TTL for namespace {}", namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/messageTTL")
@ApiOperation(value = "Set message TTL in seconds for namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 412, message = "Invalid TTL") })
public void setNamespaceMessageTTL(@Suspended AsyncResponse asyncResponse, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "TTL in seconds for the specified namespace", required = true)
int messageTTL) {
validateNamespaceName(tenant, namespace);
internalSetNamespaceMessageTTLAsync(messageTTL)
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("Failed to set namespace message TTL for namespace {}", namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@DELETE
@Path("/{tenant}/{namespace}/messageTTL")
@ApiOperation(value = "Remove message TTL in seconds for namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 412, message = "Invalid TTL")})
public void removeNamespaceMessageTTL(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetNamespaceMessageTTLAsync(null)
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("Failed to remove namespace message TTL for namespace {}", namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/subscriptionExpirationTime")
@ApiOperation(value = "Get the subscription expiration time for the namespace", response = Integer.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist") })
public void getSubscriptionExpirationTime(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.SUBSCRIPTION_EXPIRATION_TIME,
PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.subscription_expiration_time_minutes))
.exceptionally(ex -> {
log.error("[{}] Failed to get subscription expiration time for namespace {}: {} ", clientAppId(),
namespaceName, ex.getCause().getMessage(), ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/subscriptionExpirationTime")
@ApiOperation(value = "Set subscription expiration time in minutes for namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 412, message = "Invalid expiration time")})
public void setSubscriptionExpirationTime(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value =
"Expiration time in minutes for the specified namespace",
required = true) int expirationTime) {
validateNamespaceName(tenant, namespace);
internalSetSubscriptionExpirationTimeAsync(expirationTime)
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("[{}] Failed to set subscription expiration time for namespace {}: {} ", clientAppId(),
namespaceName, ex.getCause().getMessage(), ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@DELETE
@Path("/{tenant}/{namespace}/subscriptionExpirationTime")
@ApiOperation(value = "Remove subscription expiration time for namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist")})
public void removeSubscriptionExpirationTime(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetSubscriptionExpirationTimeAsync(null)
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("[{}] Failed to remove subscription expiration time for namespace {}: {} ", clientAppId(),
namespaceName, ex.getCause().getMessage(), ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/deduplication")
@ApiOperation(value = "Get broker side deduplication for all topics in a namespace", response = Boolean.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist") })
public void getDeduplication(@Suspended AsyncResponse asyncResponse, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalGetDeduplicationAsync()
                .thenAccept(asyncResponse::resume)
.exceptionally(ex -> {
log.error("Failed to get broker deduplication config for namespace {}", namespace, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/deduplication")
@ApiOperation(value = "Enable or disable broker side deduplication for all topics in a namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist") })
public void modifyDeduplication(@Suspended AsyncResponse asyncResponse, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Flag for disabling or enabling broker side deduplication "
+ "for all topics in the specified namespace", required = true)
boolean enableDeduplication) {
validateNamespaceName(tenant, namespace);
internalModifyDeduplicationAsync(enableDeduplication)
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("Failed to modify broker deduplication config for namespace {}", namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@DELETE
@Path("/{tenant}/{namespace}/deduplication")
@ApiOperation(value = "Remove broker side deduplication for all topics in a namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist") })
public void removeDeduplication(@Suspended AsyncResponse asyncResponse, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalModifyDeduplicationAsync(null)
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(e -> {
Throwable ex = FutureUtil.unwrapCompletionException(e);
log.error("Failed to remove broker deduplication config for namespace {}", namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/autoTopicCreation")
@ApiOperation(value = "Get autoTopicCreation info in a namespace", response = AutoTopicCreationOverrideImpl.class)
@ApiResponses(value = {@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or namespace doesn't exist")})
public void getAutoTopicCreation(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalGetAutoTopicCreationAsync()
.thenAccept(asyncResponse::resume)
.exceptionally(ex -> {
log.error("Failed to get autoTopicCreation info for namespace {}", namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/autoTopicCreation")
@ApiOperation(value = "Override broker's allowAutoTopicCreation setting for a namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 406, message = "The number of partitions should be less than or"
+ " equal to maxNumPartitionsPerPartitionedTopic"),
@ApiResponse(code = 400, message = "Invalid autoTopicCreation override")})
public void setAutoTopicCreation(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant, @PathParam("namespace") String namespace,
@ApiParam(value = "Settings for automatic topic creation", required = true)
AutoTopicCreationOverride autoTopicCreationOverride) {
validateNamespaceName(tenant, namespace);
internalSetAutoTopicCreationAsync(autoTopicCreationOverride)
.thenAccept(__ -> {
String autoOverride = (autoTopicCreationOverride != null
&& autoTopicCreationOverride.isAllowAutoTopicCreation()) ? "enabled" : "disabled";
log.info("[{}] Successfully {} autoTopicCreation on namespace {}", clientAppId(),
autoOverride, namespaceName);
asyncResponse.resume(Response.noContent().build());
})
.exceptionally(e -> {
Throwable ex = FutureUtil.unwrapCompletionException(e);
log.error("[{}] Failed to set autoTopicCreation status on namespace {}", clientAppId(),
namespaceName,
ex);
if (ex instanceof MetadataStoreException.NotFoundException) {
asyncResponse.resume(new RestException(Response.Status.NOT_FOUND, "Namespace does not exist"));
} else {
resumeAsyncResponseExceptionally(asyncResponse, ex);
}
return null;
});
}
@DELETE
@Path("/{tenant}/{namespace}/autoTopicCreation")
@ApiOperation(value = "Remove override of broker's allowAutoTopicCreation in a namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist") })
public void removeAutoTopicCreation(@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant, @PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetAutoTopicCreationAsync(null)
.thenAccept(__ -> {
                    log.info("[{}] Successfully removed autoTopicCreation on namespace {}",
clientAppId(), namespaceName);
asyncResponse.resume(Response.noContent().build());
})
.exceptionally(e -> {
Throwable ex = FutureUtil.unwrapCompletionException(e);
log.error("[{}] Failed to remove autoTopicCreation status on namespace {}", clientAppId(),
namespaceName,
ex);
if (ex instanceof MetadataStoreException.NotFoundException) {
asyncResponse.resume(new RestException(Response.Status.NOT_FOUND, "Namespace does not exist"));
} else {
resumeAsyncResponseExceptionally(asyncResponse, ex);
}
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/autoSubscriptionCreation")
@ApiOperation(value = "Override broker's allowAutoSubscriptionCreation setting for a namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 400, message = "Invalid autoSubscriptionCreation override")})
public void setAutoSubscriptionCreation(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant, @PathParam("namespace") String namespace,
@ApiParam(value = "Settings for automatic subscription creation")
AutoSubscriptionCreationOverride autoSubscriptionCreationOverride) {
validateNamespaceName(tenant, namespace);
internalSetAutoSubscriptionCreationAsync(autoSubscriptionCreationOverride)
.thenAccept(__ -> {
log.info("[{}] Successfully set autoSubscriptionCreation on namespace {}",
clientAppId(), namespaceName);
asyncResponse.resume(Response.noContent().build());
})
.exceptionally(e -> {
Throwable ex = FutureUtil.unwrapCompletionException(e);
log.error("[{}] Failed to set autoSubscriptionCreation on namespace {}", clientAppId(),
namespaceName, ex);
if (ex instanceof MetadataStoreException.NotFoundException) {
asyncResponse.resume(new RestException(Response.Status.NOT_FOUND, "Namespace does not exist"));
} else {
resumeAsyncResponseExceptionally(asyncResponse, ex);
}
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/autoSubscriptionCreation")
@ApiOperation(value = "Get autoSubscriptionCreation info in a namespace",
response = AutoSubscriptionCreationOverrideImpl.class)
@ApiResponses(value = {@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or namespace doesn't exist")})
public void getAutoSubscriptionCreation(@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalGetAutoSubscriptionCreationAsync()
.thenAccept(asyncResponse::resume)
.exceptionally(ex -> {
log.error("Failed to get autoSubscriptionCreation for namespace {}", namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@DELETE
@Path("/{tenant}/{namespace}/autoSubscriptionCreation")
@ApiOperation(value = "Remove override of broker's allowAutoSubscriptionCreation in a namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist") })
public void removeAutoSubscriptionCreation(@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant, @PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetAutoSubscriptionCreationAsync(null)
.thenAccept(__ -> {
                    log.info("[{}] Successfully removed autoSubscriptionCreation on namespace {}",
clientAppId(), namespaceName);
asyncResponse.resume(Response.noContent().build());
})
.exceptionally(e -> {
Throwable ex = FutureUtil.unwrapCompletionException(e);
                    log.error("[{}] Failed to remove autoSubscriptionCreation on namespace {}", clientAppId(),
namespaceName, ex);
if (ex instanceof MetadataStoreException.NotFoundException) {
asyncResponse.resume(new RestException(Response.Status.NOT_FOUND, "Namespace does not exist"));
} else {
resumeAsyncResponseExceptionally(asyncResponse, ex);
}
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/bundles")
@ApiOperation(value = "Get the bundles split data.", response = BundlesDataImpl.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 412, message = "Namespace is not setup to split in bundles") })
public void getBundlesData(@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validatePoliciesReadOnlyAccessAsync()
.thenCompose(__ -> validateNamespaceOperationAsync(NamespaceName.get(tenant, namespace),
NamespaceOperation.GET_BUNDLE))
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.bundles))
.exceptionally(ex -> {
log.error("[{}] Failed to get bundle data for namespace {}", clientAppId(),
namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@PUT
@Path("/{tenant}/{namespace}/unload")
    @ApiOperation(value = "Unload namespace",
            notes = "Unload an active namespace from the current broker serving it. Performing this operation will"
                    + " let the broker remove all producers, consumers, and connections using this namespace,"
                    + " and close all topics (including their persistent store). During that operation,"
                    + " the namespace is marked as tentatively unavailable until the broker completes"
                    + " the unloading action. This operation requires strictly super user privileges,"
                    + " since it would result in non-persistent message loss and"
                    + " unexpected connection closure to the clients.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 307, message = "Current broker doesn't serve the namespace"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or namespace doesn't exist"),
@ApiResponse(code = 412, message = "Namespace is already unloaded or Namespace has bundles activated")})
public void unloadNamespace(@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
try {
validateNamespaceName(tenant, namespace);
} catch (WebApplicationException wae) {
asyncResponse.resume(wae);
return;
}
internalUnloadNamespaceAsync()
.thenAccept(__ -> {
log.info("[{}] Successfully unloaded all the bundles in namespace {}", clientAppId(),
namespaceName);
asyncResponse.resume(Response.noContent().build());
})
.exceptionally(ex -> {
if (!isRedirectException(ex)) {
log.error("[{}] Failed to unload namespace {}", clientAppId(), namespaceName, ex);
}
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@PUT
@Path("/{tenant}/{namespace}/{bundle}/unload")
@ApiOperation(value = "Unload a namespace bundle")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 307, message = "Current broker doesn't serve the namespace"),
@ApiResponse(code = 404, message = "Namespace doesn't exist"),
@ApiResponse(code = 403, message = "Don't have admin permission") })
public void unloadNamespaceBundle(@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant, @PathParam("namespace") String namespace,
@PathParam("bundle") String bundleRange,
@QueryParam("authoritative") @DefaultValue("false") boolean authoritative,
@QueryParam("destinationBroker") String destinationBroker) {
validateNamespaceName(tenant, namespace);
internalUnloadNamespaceBundleAsync(bundleRange, destinationBroker, authoritative)
.thenAccept(__ -> {
log.info("[{}] Successfully unloaded namespace bundle {}",
clientAppId(), bundleRange);
asyncResponse.resume(Response.noContent().build());
})
.exceptionally(ex -> {
if (!isRedirectException(ex)) {
log.error("[{}] Failed to unload namespace bundle {}/{}",
clientAppId(), namespaceName, bundleRange, ex);
}
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
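    // Usage sketch (hypothetical broker address and bundle range; assumes the standard
    // "/admin/v2/namespaces" REST base path):
    //   curl -X PUT 'http://localhost:8080/admin/v2/namespaces/my-tenant/my-ns/0x00000000_0xffffffff/split?unload=true'
    // When splitAlgorithmName is omitted, the broker's configured default split algorithm is used.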
@PUT
@Path("/{tenant}/{namespace}/{bundle}/split")
@ApiOperation(value = "Split a namespace bundle")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 307, message = "Current broker doesn't serve the namespace"),
@ApiResponse(code = 404, message = "Namespace doesn't exist"),
@ApiResponse(code = 403, message = "Don't have admin permission") })
public void splitNamespaceBundle(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@PathParam("bundle") String bundleRange,
@QueryParam("authoritative") @DefaultValue("false") boolean authoritative,
@QueryParam("unload") @DefaultValue("false") boolean unload,
@QueryParam("splitAlgorithmName") String splitAlgorithmName,
@ApiParam("splitBoundaries") List<Long> splitBoundaries) {
validateNamespaceName(tenant, namespace);
internalSplitNamespaceBundleAsync(bundleRange, authoritative, unload, splitAlgorithmName, splitBoundaries)
.thenAccept(__ -> {
log.info("[{}] Successfully split namespace bundle {}", clientAppId(), bundleRange);
asyncResponse.resume(Response.noContent().build());
})
.exceptionally(ex -> {
if (!isRedirectException(ex)) {
log.error("[{}] Failed to split namespace bundle {}/{} due to {}",
clientAppId(), namespaceName, bundleRange, ex.getMessage());
}
Throwable realCause = FutureUtil.unwrapCompletionException(ex);
if (realCause instanceof IllegalArgumentException) {
asyncResponse.resume(new RestException(Response.Status.PRECONDITION_FAILED,
"Split bundle failed due to invalid request"));
} else {
resumeAsyncResponseExceptionally(asyncResponse, ex);
}
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/{bundle}/topicHashPositions")
@ApiOperation(value = "Get hash positions for topics", response = TopicHashPositions.class)
@ApiResponses(value = {
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist")})
public void getTopicHashPositions(
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@PathParam("bundle") String bundleRange,
@QueryParam("topics") List<String> topics,
@Suspended AsyncResponse asyncResponse) {
validateNamespaceName(tenant, namespace);
internalGetTopicHashPositionsAsync(bundleRange, topics)
.thenAccept(asyncResponse::resume)
.exceptionally(ex -> {
if (!isRedirectException(ex)) {
                    log.error("[{}] {} Failed to get topic hash positions for bundle {}.", clientAppId(),
                            namespaceName, bundleRange);
}
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{property}/{namespace}/publishRate")
@ApiOperation(hidden = true, value = "Set publish-rate throttling for all topics of the namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission") })
public void setPublishRate(@Suspended AsyncResponse asyncResponse, @PathParam("property") String property,
@PathParam("namespace") String namespace,
@ApiParam(value = "Publish rate for all topics of the specified namespace") PublishRate publishRate) {
validateNamespaceName(property, namespace);
internalSetPublishRateAsync(publishRate)
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@DELETE
@Path("/{property}/{namespace}/publishRate")
    @ApiOperation(hidden = true, value = "Remove publish-rate throttling for all topics of the namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission") })
public void removePublishRate(@Suspended AsyncResponse asyncResponse, @PathParam("property") String property,
@PathParam("namespace") String namespace) {
validateNamespaceName(property, namespace);
internalRemovePublishRateAsync()
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("[{}] Failed to remove the publish_max_message_rate for cluster on namespace {}",
clientAppId(), namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@GET
@Path("/{property}/{namespace}/publishRate")
@ApiOperation(hidden = true,
value = "Get publish-rate configured for the namespace, null means publish-rate not configured, "
+ "-1 means msg-publish-rate or byte-publish-rate not configured in publish-rate yet",
response = PublishRate.class)
@ApiResponses(value = {@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist")})
public void getPublishRate(@Suspended AsyncResponse asyncResponse,
@PathParam("property") String property,
@PathParam("namespace") String namespace) {
validateNamespaceName(property, namespace);
internalGetPublishRateAsync()
.thenAccept(asyncResponse::resume)
.exceptionally(ex -> {
                    log.error("[{}] Failed to get publish rate for namespace {}", clientAppId(), namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/dispatchRate")
@ApiOperation(value = "Set dispatch-rate throttling for all topics of the namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission") })
public void setDispatchRate(@Suspended AsyncResponse asyncResponse, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Dispatch rate for all topics of the specified namespace")
DispatchRateImpl dispatchRate) {
validateNamespaceName(tenant, namespace);
internalSetTopicDispatchRateAsync(dispatchRate)
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("[{}] Failed to update the dispatchRate for cluster on namespace {}", clientAppId(),
namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@DELETE
@Path("/{tenant}/{namespace}/dispatchRate")
@ApiOperation(value = "Delete dispatch-rate throttling for all topics of the namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission") })
public void deleteDispatchRate(@Suspended AsyncResponse asyncResponse, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalDeleteTopicDispatchRateAsync()
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("[{}] Failed to delete the dispatchRate for cluster on namespace {}", clientAppId(),
namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/dispatchRate")
@ApiOperation(value = "Get dispatch-rate configured for the namespace, null means dispatch-rate not configured, "
+ "-1 means msg-dispatch-rate or byte-dispatch-rate not configured in dispatch-rate yet",
response = DispatchRate.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void getDispatchRate(@Suspended AsyncResponse asyncResponse, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalGetTopicDispatchRateAsync()
                .thenAccept(asyncResponse::resume)
.exceptionally(ex -> {
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/subscriptionDispatchRate")
@ApiOperation(value = "Set Subscription dispatch-rate throttling for all topics of the namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission")})
public void setSubscriptionDispatchRate(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value =
"Subscription dispatch rate for all topics of the specified namespace")
DispatchRateImpl dispatchRate) {
validateNamespaceName(tenant, namespace);
internalSetSubscriptionDispatchRateAsync(dispatchRate)
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("[{}] Failed to update the subscription dispatchRate for cluster on namespace {}",
clientAppId(), namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/subscriptionDispatchRate")
@ApiOperation(value = "Get subscription dispatch-rate configured for the namespace, null means subscription "
+ "dispatch-rate not configured, -1 means msg-dispatch-rate or byte-dispatch-rate not configured "
+ "in dispatch-rate yet", response = DispatchRate.class)
@ApiResponses(value = {@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist")})
public void getSubscriptionDispatchRate(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalGetSubscriptionDispatchRateAsync()
.thenAccept(asyncResponse::resume)
.exceptionally(ex -> {
log.error("[{}] Failed to get the subscription dispatchRate for cluster on namespace {}",
clientAppId(), namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@DELETE
@Path("/{tenant}/{namespace}/subscriptionDispatchRate")
@ApiOperation(value = "Delete Subscription dispatch-rate throttling for all topics of the namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission") })
public void deleteSubscriptionDispatchRate(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalDeleteSubscriptionDispatchRateAsync()
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
                    log.error("[{}] Failed to delete the subscription dispatchRate for cluster on namespace {}",
                            clientAppId(), namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@DELETE
@Path("/{tenant}/{namespace}/subscribeRate")
@ApiOperation(value = "Delete subscribe-rate throttling for all topics of the namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission")})
public void deleteSubscribeRate(@Suspended AsyncResponse asyncResponse, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalDeleteSubscribeRateAsync()
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("[{}] Failed to delete the subscribeRate for cluster on namespace {}",
clientAppId(), namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/subscribeRate")
@ApiOperation(value = "Set subscribe-rate throttling for all topics of the namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission")})
public void setSubscribeRate(@Suspended AsyncResponse asyncResponse, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Subscribe rate for all topics of the specified namespace")
SubscribeRate subscribeRate) {
validateNamespaceName(tenant, namespace);
internalSetSubscribeRateAsync(subscribeRate)
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("[{}] Failed to update the subscribeRate for cluster on namespace {}",
clientAppId(), namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/subscribeRate")
@ApiOperation(value = "Get subscribe-rate configured for the namespace", response = SubscribeRate.class)
@ApiResponses(value = {@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist")})
public void getSubscribeRate(@Suspended AsyncResponse asyncResponse, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalGetSubscribeRateAsync()
                .thenAccept(asyncResponse::resume)
.exceptionally(ex -> {
log.error("[{}] Failed to get subscribe rate for namespace {}", clientAppId(), namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@DELETE
@Path("/{tenant}/{namespace}/replicatorDispatchRate")
@ApiOperation(value = "Remove replicator dispatch-rate throttling for all topics of the namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission")})
public void removeReplicatorDispatchRate(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalRemoveReplicatorDispatchRate(asyncResponse);
}
@POST
@Path("/{tenant}/{namespace}/replicatorDispatchRate")
@ApiOperation(value = "Set replicator dispatch-rate throttling for all topics of the namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission")})
public void setReplicatorDispatchRate(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value =
"Replicator dispatch rate for all topics of the specified namespace") DispatchRateImpl dispatchRate) {
validateNamespaceName(tenant, namespace);
internalSetReplicatorDispatchRate(asyncResponse, dispatchRate);
}
@GET
@Path("/{tenant}/{namespace}/replicatorDispatchRate")
@ApiOperation(value = "Get replicator dispatch-rate configured for the namespace, null means replicator "
+ "dispatch-rate not configured, -1 means msg-dispatch-rate or byte-dispatch-rate not configured "
+ "in dispatch-rate yet", response = DispatchRateImpl.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void getReplicatorDispatchRate(@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalGetReplicatorDispatchRate(asyncResponse);
}
@GET
@Path("/{tenant}/{namespace}/backlogQuotaMap")
@ApiOperation(value = "Get backlog quota map on a namespace.",
response = BacklogQuotaImpl.class, responseContainer = "Map")
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void getBacklogQuotaMap(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalGetBacklogQuotaMap(asyncResponse);
}
@POST
@Path("/{tenant}/{namespace}/backlogQuota")
    @ApiOperation(value = "Set a backlog quota for all topics on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 412,
message = "Specified backlog quota exceeds retention quota."
+ " Increase retention quota and retry request")})
public void setBacklogQuota(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant, @PathParam("namespace") String namespace,
@QueryParam("backlogQuotaType") BacklogQuotaType backlogQuotaType,
@ApiParam(value = "Backlog quota for all topics of the specified namespace") BacklogQuota backlogQuota) {
validateNamespaceName(tenant, namespace);
internalSetBacklogQuota(asyncResponse, backlogQuotaType, backlogQuota);
}
@DELETE
@Path("/{tenant}/{namespace}/backlogQuota")
@ApiOperation(value = "Remove a backlog quota policy from a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification") })
public void removeBacklogQuota(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant, @PathParam("namespace") String namespace,
@QueryParam("backlogQuotaType") BacklogQuotaType backlogQuotaType) {
validateNamespaceName(tenant, namespace);
internalRemoveBacklogQuota(asyncResponse, backlogQuotaType);
}
@GET
@Path("/{tenant}/{namespace}/retention")
@ApiOperation(value = "Get retention config on a namespace.", response = RetentionPolicies.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void getRetention(@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.RETENTION, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.retention_policies))
.exceptionally(ex -> {
log.error("[{}] Failed to get retention config on a namespace {}", clientAppId(), namespaceName,
ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/retention")
    @ApiOperation(value = "Set retention configuration on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 412, message = "Retention Quota must exceed backlog quota") })
public void setRetention(@PathParam("tenant") String tenant, @PathParam("namespace") String namespace,
@ApiParam(value = "Retention policies for the specified namespace") RetentionPolicies retention) {
validateNamespaceName(tenant, namespace);
internalSetRetention(retention);
}
@DELETE
@Path("/{tenant}/{namespace}/retention")
    @ApiOperation(value = "Remove retention configuration on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 412, message = "Retention Quota must exceed backlog quota") })
public void removeRetention(@PathParam("tenant") String tenant, @PathParam("namespace") String namespace,
@ApiParam(value = "Retention policies for the specified namespace") RetentionPolicies retention) {
validateNamespaceName(tenant, namespace);
internalSetRetention(null);
}
@POST
@Path("/{tenant}/{namespace}/persistence")
@ApiOperation(value = "Set the persistence configuration for all the topics on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 400, message = "Invalid persistence policies")})
public void setPersistence(@Suspended final AsyncResponse asyncResponse, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Persistence policies for the specified namespace", required = true)
PersistencePolicies persistence) {
validateNamespaceName(tenant, namespace);
internalSetPersistenceAsync(persistence)
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("[{}] Failed to update the persistence for a namespace {}", clientAppId(), namespaceName,
ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@DELETE
@Path("/{tenant}/{namespace}/persistence")
@ApiOperation(value = "Delete the persistence configuration for all topics on a namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission") })
public void deletePersistence(@Suspended final AsyncResponse asyncResponse, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalDeletePersistenceAsync()
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("[{}] Failed to delete the persistence for a namespace {}", clientAppId(), namespaceName,
ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/persistence/bookieAffinity")
@ApiOperation(value = "Set the bookie-affinity-group to namespace-persistent policy.")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 307, message = "Current broker doesn't serve the namespace"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification")})
public void setBookieAffinityGroup(@PathParam("tenant") String tenant, @PathParam("namespace") String namespace,
@ApiParam(value = "Bookie affinity group for the specified namespace")
BookieAffinityGroupData bookieAffinityGroup) {
validateNamespaceName(tenant, namespace);
internalSetBookieAffinityGroup(bookieAffinityGroup);
}
@GET
@Path("/{property}/{namespace}/persistence/bookieAffinity")
@ApiOperation(value = "Get the bookie-affinity-group from namespace-local policy.",
response = BookieAffinityGroupDataImpl.class)
@ApiResponses(value = {
@ApiResponse(code = 307, message = "Current broker doesn't serve the namespace"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification") })
public BookieAffinityGroupData getBookieAffinityGroup(@PathParam("property") String property,
@PathParam("namespace") String namespace) {
validateNamespaceName(property, namespace);
return internalGetBookieAffinityGroup();
}
@DELETE
@Path("/{property}/{namespace}/persistence/bookieAffinity")
@ApiOperation(value = "Delete the bookie-affinity-group from namespace-local policy.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification") })
public void deleteBookieAffinityGroup(@PathParam("property") String property,
@PathParam("namespace") String namespace) {
validateNamespaceName(property, namespace);
internalDeleteBookieAffinityGroup();
}
@GET
@Path("/{tenant}/{namespace}/persistence")
@ApiOperation(value = "Get the persistence configuration for a namespace.", response = PersistencePolicies.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification") })
public void getPersistence(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.PERSISTENCE, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.persistence))
.exceptionally(ex -> {
log.error("[{}] Failed to get persistence configuration for a namespace {}", clientAppId(),
namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/clearBacklog")
@ApiOperation(value = "Clear backlog for all topics on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin or operate permission on the namespace"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void clearNamespaceBacklog(@Suspended final AsyncResponse asyncResponse, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@QueryParam("authoritative") @DefaultValue("false") boolean authoritative) {
try {
validateNamespaceName(tenant, namespace);
internalClearNamespaceBacklog(asyncResponse, authoritative);
} catch (WebApplicationException wae) {
asyncResponse.resume(wae);
} catch (Exception e) {
asyncResponse.resume(new RestException(e));
}
}
@POST
@Path("/{tenant}/{namespace}/{bundle}/clearBacklog")
@ApiOperation(value = "Clear backlog for all topics on a namespace bundle.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 307, message = "Current broker doesn't serve the namespace"),
@ApiResponse(code = 403, message = "Don't have admin or operate permission on the namespace"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void clearNamespaceBundleBacklog(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace, @PathParam("bundle") String bundleRange,
@QueryParam("authoritative") @DefaultValue("false") boolean authoritative) {
validateNamespaceName(tenant, namespace);
internalClearNamespaceBundleBacklog(bundleRange, authoritative);
}
@POST
@Path("/{tenant}/{namespace}/clearBacklog/{subscription}")
@ApiOperation(value = "Clear backlog for a given subscription on all topics on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin or operate permission on the namespace"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void clearNamespaceBacklogForSubscription(@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant, @PathParam("namespace") String namespace,
@PathParam("subscription") String subscription,
@QueryParam("authoritative") @DefaultValue("false") boolean authoritative) {
try {
validateNamespaceName(tenant, namespace);
internalClearNamespaceBacklogForSubscription(asyncResponse, subscription, authoritative);
} catch (WebApplicationException wae) {
asyncResponse.resume(wae);
} catch (Exception e) {
asyncResponse.resume(new RestException(e));
}
}
@POST
@Path("/{tenant}/{namespace}/{bundle}/clearBacklog/{subscription}")
@ApiOperation(value = "Clear backlog for a given subscription on all topics on a namespace bundle.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 307, message = "Current broker doesn't serve the namespace"),
@ApiResponse(code = 403, message = "Don't have admin or operate permission on the namespace"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void clearNamespaceBundleBacklogForSubscription(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace, @PathParam("subscription") String subscription,
@PathParam("bundle") String bundleRange,
@QueryParam("authoritative") @DefaultValue("false") boolean authoritative) {
validateNamespaceName(tenant, namespace);
internalClearNamespaceBundleBacklogForSubscription(subscription, bundleRange, authoritative);
}
@POST
@Path("/{tenant}/{namespace}/unsubscribe/{subscription}")
@ApiOperation(value = "Unsubscribes the given subscription on all topics on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
            @ApiResponse(code = 403, message = "Don't have admin or operate permission on the namespace"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void unsubscribeNamespace(@Suspended final AsyncResponse asyncResponse, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@PathParam("subscription") String subscription,
@QueryParam("authoritative") @DefaultValue("false") boolean authoritative) {
try {
validateNamespaceName(tenant, namespace);
internalUnsubscribeNamespace(asyncResponse, subscription, authoritative);
} catch (WebApplicationException wae) {
asyncResponse.resume(wae);
} catch (Exception e) {
asyncResponse.resume(new RestException(e));
}
}
@POST
@Path("/{tenant}/{namespace}/{bundle}/unsubscribe/{subscription}")
@ApiOperation(value = "Unsubscribes the given subscription on all topics on a namespace bundle.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin or operate permission on the namespace"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void unsubscribeNamespaceBundle(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace, @PathParam("subscription") String subscription,
@PathParam("bundle") String bundleRange,
@QueryParam("authoritative") @DefaultValue("false") boolean authoritative) {
validateNamespaceName(tenant, namespace);
internalUnsubscribeNamespaceBundle(subscription, bundleRange, authoritative);
}
@POST
@Path("/{tenant}/{namespace}/subscriptionAuthMode")
    @ApiOperation(value = "Set a subscription auth mode for all topics on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification")})
public void setSubscriptionAuthMode(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace, @ApiParam(value =
"Subscription auth mode for all topics of the specified namespace")
SubscriptionAuthMode subscriptionAuthMode) {
validateNamespaceName(tenant, namespace);
internalSetSubscriptionAuthMode(subscriptionAuthMode);
}
@GET
@Path("/{tenant}/{namespace}/subscriptionAuthMode")
@ApiOperation(value = "Get subscription auth mode in a namespace", response = SubscriptionAuthMode.class)
@ApiResponses(value = {@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or namespace doesn't exist")})
public void getSubscriptionAuthMode(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.SUBSCRIPTION_AUTH_MODE, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.subscription_auth_mode))
.exceptionally(ex -> {
log.error("[{}] Failed to get subscription auth mode in a namespace {}", clientAppId(),
namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/encryptionRequired")
    @ApiOperation(value = "Set whether message encryption is required for all topics in a namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification"), })
public void modifyEncryptionRequired(
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Flag defining if message encryption is required", required = true)
boolean encryptionRequired) {
validateNamespaceName(tenant, namespace);
internalModifyEncryptionRequired(encryptionRequired);
}
@GET
@Path("/{tenant}/{namespace}/encryptionRequired")
@ApiOperation(value = "Get message encryption required status in a namespace", response = Boolean.class)
@ApiResponses(value = {@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or namespace doesn't exist")})
public void getEncryptionRequired(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.ENCRYPTION, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.encryption_required))
.exceptionally(ex -> {
log.error("[{}] Failed to get message encryption required status in a namespace {}", clientAppId(),
namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/delayedDelivery")
@ApiOperation(value = "Get delayed delivery messages config on a namespace.",
response = DelayedDeliveryPolicies.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification"), })
public void getDelayedDeliveryPolicies(@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.DELAYED_DELIVERY, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.delayed_delivery_policies))
.exceptionally(ex -> {
log.error("[{}] Failed to get delayed delivery messages config on a namespace {}", clientAppId(),
namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/delayedDelivery")
@ApiOperation(value = "Set delayed delivery messages config on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"), })
public void setDelayedDeliveryPolicies(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Delayed delivery policies for the specified namespace")
DelayedDeliveryPolicies deliveryPolicies) {
validateNamespaceName(tenant, namespace);
internalSetDelayedDelivery(deliveryPolicies);
}
@DELETE
@Path("/{tenant}/{namespace}/delayedDelivery")
@ApiOperation(value = "Delete delayed delivery messages config on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"), })
public void removeDelayedDeliveryPolicies(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetDelayedDelivery(null);
}
@GET
@Path("/{tenant}/{namespace}/inactiveTopicPolicies")
@ApiOperation(value = "Get inactive topic policies config on a namespace.", response = InactiveTopicPolicies.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification"), })
public void getInactiveTopicPolicies(@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.INACTIVE_TOPIC, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.inactive_topic_policies))
.exceptionally(ex -> {
log.error("[{}] Failed to get inactive topic policies config on a namespace {}", clientAppId(),
namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@DELETE
@Path("/{tenant}/{namespace}/inactiveTopicPolicies")
@ApiOperation(value = "Remove inactive topic policies from a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification")})
public void removeInactiveTopicPolicies(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetInactiveTopic(null);
}
@POST
@Path("/{tenant}/{namespace}/inactiveTopicPolicies")
@ApiOperation(value = "Set inactive topic policies config on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"), })
public void setInactiveTopicPolicies(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Inactive topic policies for the specified namespace")
InactiveTopicPolicies inactiveTopicPolicies) {
validateNamespaceName(tenant, namespace);
internalSetInactiveTopic(inactiveTopicPolicies);
}
@GET
@Path("/{tenant}/{namespace}/maxProducersPerTopic")
@ApiOperation(value = "Get maxProducersPerTopic config on a namespace.", response = Integer.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void getMaxProducersPerTopic(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.MAX_PRODUCERS, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.max_producers_per_topic))
.exceptionally(ex -> {
log.error("[{}] Failed to get maxProducersPerTopic config on a namespace {}", clientAppId(),
namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/maxProducersPerTopic")
    @ApiOperation(value = "Set maxProducersPerTopic configuration on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 412, message = "maxProducersPerTopic value is not valid") })
public void setMaxProducersPerTopic(@PathParam("tenant") String tenant, @PathParam("namespace") String namespace,
@ApiParam(value = "Number of maximum producers per topic", required = true) int maxProducersPerTopic) {
validateNamespaceName(tenant, namespace);
internalSetMaxProducersPerTopic(maxProducersPerTopic);
}
@DELETE
@Path("/{tenant}/{namespace}/maxProducersPerTopic")
@ApiOperation(value = "Remove maxProducersPerTopic configuration on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification") })
public void removeMaxProducersPerTopic(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetMaxProducersPerTopic(null);
}
@GET
@Path("/{tenant}/{namespace}/deduplicationSnapshotInterval")
@ApiOperation(value = "Get deduplicationSnapshotInterval config on a namespace.", response = Integer.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void getDeduplicationSnapshotInterval(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.DEDUPLICATION_SNAPSHOT, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.deduplicationSnapshotIntervalSeconds))
.exceptionally(ex -> {
log.error("[{}] Failed to get deduplicationSnapshotInterval config on a namespace {}",
clientAppId(), namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/deduplicationSnapshotInterval")
@ApiOperation(value = "Set deduplicationSnapshotInterval config on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist")})
    public void setDeduplicationSnapshotInterval(@PathParam("tenant") String tenant,
                                                 @PathParam("namespace") String namespace,
                                                 @ApiParam(value = "Interval to take deduplication snapshot per topic",
                                                         required = true)
                                                 Integer interval) {
validateNamespaceName(tenant, namespace);
internalSetDeduplicationSnapshotInterval(interval);
}
@GET
@Path("/{tenant}/{namespace}/maxConsumersPerTopic")
@ApiOperation(value = "Get maxConsumersPerTopic config on a namespace.", response = Integer.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void getMaxConsumersPerTopic(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.MAX_CONSUMERS, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.max_consumers_per_topic))
.exceptionally(ex -> {
log.error("[{}] Failed to get maxConsumersPerTopic config on a namespace {}", clientAppId(),
namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/maxConsumersPerTopic")
    @ApiOperation(value = "Set maxConsumersPerTopic configuration on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 412, message = "maxConsumersPerTopic value is not valid") })
public void setMaxConsumersPerTopic(@PathParam("tenant") String tenant, @PathParam("namespace") String namespace,
@ApiParam(value = "Number of maximum consumers per topic", required = true) int maxConsumersPerTopic) {
validateNamespaceName(tenant, namespace);
internalSetMaxConsumersPerTopic(maxConsumersPerTopic);
}
@DELETE
@Path("/{tenant}/{namespace}/maxConsumersPerTopic")
@ApiOperation(value = "Remove maxConsumersPerTopic configuration on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification") })
public void removeMaxConsumersPerTopic(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetMaxConsumersPerTopic(null);
}
@GET
@Path("/{tenant}/{namespace}/maxConsumersPerSubscription")
@ApiOperation(value = "Get maxConsumersPerSubscription config on a namespace.", response = Integer.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void getMaxConsumersPerSubscription(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.MAX_CONSUMERS, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
                .thenAccept(policies -> asyncResponse.resume(policies.max_consumers_per_subscription))
.exceptionally(ex -> {
log.error("[{}] Failed to get maxConsumersPerSubscription config on namespace {}: {} ",
clientAppId(), namespaceName, ex.getCause().getMessage(), ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/maxConsumersPerSubscription")
    @ApiOperation(value = "Set maxConsumersPerSubscription configuration on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 412, message = "maxConsumersPerSubscription value is not valid")})
public void setMaxConsumersPerSubscription(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Number of maximum consumers per subscription",
required = true)
int maxConsumersPerSubscription) {
validateNamespaceName(tenant, namespace);
internalSetMaxConsumersPerSubscription(maxConsumersPerSubscription);
}
@DELETE
@Path("/{tenant}/{namespace}/maxConsumersPerSubscription")
    @ApiOperation(value = "Remove maxConsumersPerSubscription configuration on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 412, message = "maxConsumersPerSubscription value is not valid")})
public void removeMaxConsumersPerSubscription(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetMaxConsumersPerSubscription(null);
}
@GET
@Path("/{tenant}/{namespace}/maxUnackedMessagesPerConsumer")
@ApiOperation(value = "Get maxUnackedMessagesPerConsumer config on a namespace.", response = Integer.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void getMaxUnackedMessagesPerConsumer(@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.MAX_UNACKED, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.max_unacked_messages_per_consumer))
.exceptionally(ex -> {
log.error("[{}] Failed to get maxUnackedMessagesPerConsumer config on a namespace {}",
clientAppId(), namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/maxUnackedMessagesPerConsumer")
    @ApiOperation(value = "Set maxUnackedMessagesPerConsumer configuration on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 412, message = "maxUnackedMessagesPerConsumer value is not valid")})
public void setMaxUnackedMessagesPerConsumer(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Number of maximum unacked messages per consumer",
required = true)
int maxUnackedMessagesPerConsumer) {
validateNamespaceName(tenant, namespace);
internalSetMaxUnackedMessagesPerConsumer(maxUnackedMessagesPerConsumer);
}
@DELETE
@Path("/{tenant}/{namespace}/maxUnackedMessagesPerConsumer")
@ApiOperation(value = "Remove maxUnackedMessagesPerConsumer config on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void removeMaxUnackedmessagesPerConsumer(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetMaxUnackedMessagesPerConsumer(null);
}
@GET
@Path("/{tenant}/{namespace}/maxUnackedMessagesPerSubscription")
@ApiOperation(value = "Get maxUnackedMessagesPerSubscription config on a namespace.", response = Integer.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void getMaxUnackedmessagesPerSubscription(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.MAX_UNACKED, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.max_unacked_messages_per_subscription))
.exceptionally(ex -> {
log.error("[{}] Failed to get maxUnackedMessagesPerSubscription config on a namespace {}",
clientAppId(), namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/maxUnackedMessagesPerSubscription")
    @ApiOperation(value = "Set maxUnackedMessagesPerSubscription configuration on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 412, message = "maxUnackedMessagesPerSubscription value is not valid")})
public void setMaxUnackedMessagesPerSubscription(
@PathParam("tenant") String tenant, @PathParam("namespace") String namespace,
@ApiParam(value = "Number of maximum unacked messages per subscription", required = true)
int maxUnackedMessagesPerSubscription) {
validateNamespaceName(tenant, namespace);
internalSetMaxUnackedMessagesPerSubscription(maxUnackedMessagesPerSubscription);
}
@DELETE
@Path("/{tenant}/{namespace}/maxUnackedMessagesPerSubscription")
@ApiOperation(value = "Remove maxUnackedMessagesPerSubscription config on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void removeMaxUnackedmessagesPerSubscription(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetMaxUnackedMessagesPerSubscription(null);
}
@GET
@Path("/{tenant}/{namespace}/maxSubscriptionsPerTopic")
@ApiOperation(value = "Get maxSubscriptionsPerTopic config on a namespace.", response = Integer.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void getMaxSubscriptionsPerTopic(@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.MAX_SUBSCRIPTIONS, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.max_subscriptions_per_topic))
.exceptionally(ex -> {
log.error("[{}] Failed to get maxSubscriptionsPerTopic config on a namespace {}", clientAppId(),
namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/maxSubscriptionsPerTopic")
    @ApiOperation(value = "Set maxSubscriptionsPerTopic configuration on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
            @ApiResponse(code = 412, message = "maxSubscriptionsPerTopic value is not valid")})
public void setMaxSubscriptionsPerTopic(
@PathParam("tenant") String tenant, @PathParam("namespace") String namespace,
@ApiParam(value = "Number of maximum subscriptions per topic", required = true)
int maxSubscriptionsPerTopic) {
validateNamespaceName(tenant, namespace);
internalSetMaxSubscriptionsPerTopic(maxSubscriptionsPerTopic);
}
@DELETE
@Path("/{tenant}/{namespace}/maxSubscriptionsPerTopic")
@ApiOperation(value = "Remove maxSubscriptionsPerTopic configuration on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification") })
public void removeMaxSubscriptionsPerTopic(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetMaxSubscriptionsPerTopic(null);
}
@POST
@Path("/{tenant}/{namespace}/antiAffinity")
@ApiOperation(value = "Set anti-affinity group for a namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 412, message = "Invalid antiAffinityGroup")})
public void setNamespaceAntiAffinityGroup(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Anti-affinity group for the specified namespace",
required = true)
String antiAffinityGroup) {
validateNamespaceName(tenant, namespace);
internalSetNamespaceAntiAffinityGroup(antiAffinityGroup);
}
@GET
@Path("/{tenant}/{namespace}/antiAffinity")
@ApiOperation(value = "Get anti-affinity group of a namespace.", response = String.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist") })
public String getNamespaceAntiAffinityGroup(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
return internalGetNamespaceAntiAffinityGroup();
}
@DELETE
@Path("/{tenant}/{namespace}/antiAffinity")
@ApiOperation(value = "Remove anti-affinity group of a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification") })
public void removeNamespaceAntiAffinityGroup(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalRemoveNamespaceAntiAffinityGroup();
}
@GET
@Path("{cluster}/antiAffinity/{group}")
    @ApiOperation(value = "Get all namespaces that are grouped by given anti-affinity group in a given cluster."
            + " This API can only be accessed by an admin of any of the existing tenants",
response = String.class, responseContainer = "List")
@ApiResponses(value = {@ApiResponse(code = 403, message = "Don't have admin permission"),
            @ApiResponse(code = 412, message = "Cluster doesn't exist / anti-affinity group can't be empty.")})
public List<String> getAntiAffinityNamespaces(@PathParam("cluster") String cluster,
@PathParam("group") String antiAffinityGroup, @QueryParam("tenant") String tenant) {
return internalGetAntiAffinityNamespaces(cluster, antiAffinityGroup, tenant);
}
@GET
@Path("/{tenant}/{namespace}/compactionThreshold")
@ApiOperation(value = "Maximum number of uncompacted bytes in topics before compaction is triggered.",
notes = "The backlog size is compared to the threshold periodically. "
            + "A threshold of 0 disables automatic compaction", response = Long.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist") })
public void getCompactionThreshold(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.COMPACTION, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.compaction_threshold))
.exceptionally(ex -> {
log.error("[{}] Failed to get compaction threshold on namespace {}", clientAppId(), namespaceName,
ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@PUT
@Path("/{tenant}/{namespace}/compactionThreshold")
@ApiOperation(value = "Set maximum number of uncompacted bytes in a topic before compaction is triggered.",
notes = "The backlog size is compared to the threshold periodically. "
            + "A threshold of 0 disables automatic compaction")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 412, message = "compactionThreshold value is not valid")})
public void setCompactionThreshold(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Maximum number of uncompacted bytes"
+ " in a topic of the specified namespace",
required = true) long newThreshold) {
validateNamespaceName(tenant, namespace);
internalSetCompactionThreshold(newThreshold);
}
@DELETE
@Path("/{tenant}/{namespace}/compactionThreshold")
@ApiOperation(value = "Delete maximum number of uncompacted bytes in a topic before compaction is triggered.",
notes = "The backlog size is compared to the threshold periodically. "
            + "A threshold of 0 disables automatic compaction")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification")})
public void deleteCompactionThreshold(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetCompactionThreshold(null);
}
@GET
@Path("/{tenant}/{namespace}/offloadThreshold")
@ApiOperation(value = "Maximum number of bytes stored on the pulsar cluster for a topic,"
+ " before the broker will start offloading to longterm storage",
notes = "A negative value disables automatic offloading", response = Long.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist") })
public void getOffloadThreshold(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.OFFLOAD, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> {
if (policies.offload_policies == null
|| policies.offload_policies.getManagedLedgerOffloadThresholdInBytes() == null) {
asyncResponse.resume(policies.offload_threshold);
} else {
asyncResponse.resume(policies.offload_policies.getManagedLedgerOffloadThresholdInBytes());
}
})
.exceptionally(ex -> {
log.error("[{}] Failed to get offload threshold on namespace {}", clientAppId(), namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
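`getOffloadThreshold` above applies a precedence rule: a threshold set inside `offload_policies` wins, and only when that is absent does the legacy per-namespace `offload_threshold` field answer. A minimal stand-alone sketch of that rule (the class and method names here are illustrative, not part of the broker):

```java
public class OffloadThresholdFallback {
    // Precedence rule mirroring getOffloadThreshold: a non-null value from
    // offload_policies takes priority; otherwise fall back to the legacy
    // per-namespace offload_threshold field.
    static long effectiveThreshold(Long offloadPoliciesValue, long legacyThreshold) {
        return offloadPoliciesValue != null ? offloadPoliciesValue : legacyThreshold;
    }

    public static void main(String[] args) {
        System.out.println(effectiveThreshold(null, -1L));   // no policy set: legacy field wins
        System.out.println(effectiveThreshold(1024L, -1L));  // offload_policies value wins
    }
}
```

`getOffloadThresholdInSeconds` and `getOffloadDeletionLag` below follow the same fallback pattern for their respective fields.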
@PUT
@Path("/{tenant}/{namespace}/offloadThreshold")
@ApiOperation(value = "Set maximum number of bytes stored on the pulsar cluster for a topic,"
+ " before the broker will start offloading to longterm storage",
            notes = "-1 will revert to using the cluster default."
                    + " A negative value disables automatic offloading.")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 412, message = "offloadThreshold value is not valid")})
public void setOffloadThreshold(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value =
"Maximum number of bytes stored on the pulsar cluster"
+ " for a topic of the specified namespace",
required = true) long newThreshold) {
validateNamespaceName(tenant, namespace);
internalSetOffloadThreshold(newThreshold);
}
@GET
@Path("/{tenant}/{namespace}/offloadThresholdInSeconds")
    @ApiOperation(value = "Maximum number of seconds stored on the pulsar cluster for a topic,"
            + " before the broker will start offloading to longterm storage",
notes = "A negative value disables automatic offloading", response = Long.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist") })
public void getOffloadThresholdInSeconds(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.OFFLOAD, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> {
if (policies.offload_policies == null
|| policies.offload_policies.getManagedLedgerOffloadThresholdInSeconds() == null) {
asyncResponse.resume(policies.offload_threshold_in_seconds);
} else {
asyncResponse.resume(policies.offload_policies.getManagedLedgerOffloadThresholdInSeconds());
}
})
.exceptionally(ex -> {
log.error("[{}] Failed to get offload threshold on namespace {}", clientAppId(), namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@PUT
@Path("/{tenant}/{namespace}/offloadThresholdInSeconds")
@ApiOperation(value = "Set maximum number of seconds stored on the pulsar cluster for a topic,"
+ " before the broker will start offloading to longterm storage",
notes = "A negative value disables automatic offloading")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 412, message = "offloadThresholdInSeconds value is not valid") })
public void setOffloadThresholdInSeconds(
@Suspended final AsyncResponse response,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
long newThreshold) {
validateNamespaceName(tenant, namespace);
internalSetOffloadThresholdInSecondsAsync(newThreshold)
.thenAccept(response::resume)
.exceptionally(t -> {
resumeAsyncResponseExceptionally(response, t);
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/offloadDeletionLagMs")
@ApiOperation(value = "Number of milliseconds to wait before deleting a ledger segment which has been offloaded"
+ " from the Pulsar cluster's local storage (i.e. BookKeeper)",
notes = "A negative value denotes that deletion has been completely disabled."
+ " 'null' denotes that the topics in the namespace will fall back to the"
+ " broker default for deletion lag.", response = Long.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist") })
public void getOffloadDeletionLag(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.OFFLOAD, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> {
if (policies.offload_policies == null) {
asyncResponse.resume(policies.offload_deletion_lag_ms);
} else {
asyncResponse.resume(policies.offload_policies.getManagedLedgerOffloadDeletionLagInMillis());
}
})
.exceptionally(ex -> {
log.error("[{}] Failed to get offload deletion lag milliseconds on namespace {}", clientAppId(),
namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@PUT
@Path("/{tenant}/{namespace}/offloadDeletionLagMs")
@ApiOperation(value = "Set number of milliseconds to wait before deleting a ledger segment which has been offloaded"
+ " from the Pulsar cluster's local storage (i.e. BookKeeper)",
notes = "A negative value disables the deletion completely.")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 412, message = "offloadDeletionLagMs value is not valid")})
public void setOffloadDeletionLag(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value =
"New number of milliseconds to wait before deleting a ledger segment"
+ " which has been offloaded",
required = true) long newDeletionLagMs) {
validateNamespaceName(tenant, namespace);
internalSetOffloadDeletionLag(newDeletionLagMs);
}
@DELETE
@Path("/{tenant}/{namespace}/offloadDeletionLagMs")
@ApiOperation(value = "Clear the namespace configured offload deletion lag. The topics in the namespace"
+ " will fallback to using the default configured deletion lag for the broker")
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 404, message = "Namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification") })
public void clearOffloadDeletionLag(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetOffloadDeletionLag(null);
}
@GET
@Path("/{tenant}/{namespace}/schemaAutoUpdateCompatibilityStrategy")
@ApiOperation(value = "The strategy used to check the compatibility of new schemas,"
+ " provided by producers, before automatically updating the schema",
notes = "The value AutoUpdateDisabled prevents producers from updating the schema."
+ " If set to AutoUpdateDisabled, schemas must be updated through the REST API",
response = SchemaAutoUpdateCompatibilityStrategy.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification") })
public SchemaAutoUpdateCompatibilityStrategy getSchemaAutoUpdateCompatibilityStrategy(
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
return internalGetSchemaAutoUpdateCompatibilityStrategy();
}
@PUT
@Path("/{tenant}/{namespace}/schemaAutoUpdateCompatibilityStrategy")
@ApiOperation(value = "Update the strategy used to check the compatibility of new schemas,"
+ " provided by producers, before automatically updating the schema",
notes = "The value AutoUpdateDisabled prevents producers from updating the schema."
+ " If set to AutoUpdateDisabled, schemas must be updated through the REST API")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification")})
public void setSchemaAutoUpdateCompatibilityStrategy(
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Strategy used to check the compatibility of new schemas")
SchemaAutoUpdateCompatibilityStrategy strategy) {
validateNamespaceName(tenant, namespace);
internalSetSchemaAutoUpdateCompatibilityStrategy(strategy);
}
@GET
@Path("/{tenant}/{namespace}/schemaCompatibilityStrategy")
@ApiOperation(value = "Get the schema compatibility strategy for a namespace",
response = SchemaCompatibilityStrategy.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification") })
public void getSchemaCompatibilityStrategy(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.SCHEMA_COMPATIBILITY_STRATEGY,
PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.schema_compatibility_strategy))
.exceptionally(ex -> {
log.error("[{}] Failed to get the strategy of the namespace schema compatibility {}", clientAppId(),
namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@PUT
@Path("/{tenant}/{namespace}/schemaCompatibilityStrategy")
@ApiOperation(value = "Update the strategy used to check the compatibility of new schema")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification")})
public void setSchemaCompatibilityStrategy(
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Strategy used to check the compatibility of new schema")
SchemaCompatibilityStrategy strategy) {
validateNamespaceName(tenant, namespace);
internalSetSchemaCompatibilityStrategy(strategy);
}
@GET
@Path("/{tenant}/{namespace}/isAllowAutoUpdateSchema")
@ApiOperation(value = "Get the flag of whether automatic schema updates are allowed", response = Boolean.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification") })
public void getIsAllowAutoUpdateSchema(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.SCHEMA_COMPATIBILITY_STRATEGY,
PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> {
if (policies.is_allow_auto_update_schema == null) {
asyncResponse.resume(pulsar().getConfig().isAllowAutoUpdateSchemaEnabled());
} else {
asyncResponse.resume(policies.is_allow_auto_update_schema);
}
})
.exceptionally(ex -> {
log.error("[{}] Failed to get the flag of whether allow auto update schema on a namespace {}",
clientAppId(), namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/isAllowAutoUpdateSchema")
@ApiOperation(value = "Update the flag of whether automatic schema updates are allowed")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification")})
public void setIsAllowAutoUpdateSchema(
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Flag of whether to allow auto update schema", required = true)
boolean isAllowAutoUpdateSchema) {
validateNamespaceName(tenant, namespace);
internalSetIsAllowAutoUpdateSchema(isAllowAutoUpdateSchema);
}
@GET
@Path("/{tenant}/{namespace}/subscriptionTypesEnabled")
@ApiOperation(value = "Get the set of subscription types enabled on the namespace",
response = SubscriptionType.class, responseContainer = "Set")
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification") })
public void getSubscriptionTypesEnabled(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.SUBSCRIPTION_AUTH_MODE, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> {
Set<SubscriptionType> subscriptionTypes = new HashSet<>();
policies.subscription_types_enabled.forEach(
subType -> subscriptionTypes.add(SubscriptionType.valueOf(subType)));
asyncResponse.resume(subscriptionTypes);
})
.exceptionally(ex -> {
log.error("[{}] Failed to get the set of whether allow subscription types on a namespace {}",
clientAppId(), namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/subscriptionTypesEnabled")
@ApiOperation(value = "Update the set of subscription types enabled on the namespace")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification")})
public void setSubscriptionTypesEnabled(
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Set of subscription types to enable", required = true)
Set<SubscriptionType> subscriptionTypesEnabled) {
validateNamespaceName(tenant, namespace);
internalSetSubscriptionTypesEnabled(subscriptionTypesEnabled);
}
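// Illustrative request (hypothetical names): restricting the namespace to
// Exclusive and Failover consumers only:
//   POST /admin/v2/namespaces/my-tenant/my-ns/subscriptionTypesEnabled
//   body: ["Exclusive", "Failover"]
// Passing an empty set (as removeSubscriptionTypesEnabled below does) clears
// the namespace-level restriction.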
@DELETE
@Path("/{tenant}/{namespace}/subscriptionTypesEnabled")
@ApiOperation(value = "Remove subscription types enabled on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification")})
public void removeSubscriptionTypesEnabled(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetSubscriptionTypesEnabled(new HashSet<>());
}
@GET
@Path("/{tenant}/{namespace}/schemaValidationEnforced")
@ApiOperation(value = "Get schema validation enforced flag for namespace.",
notes = "If the flag is set to true, a producer without a schema that attempts to produce to a topic"
+ " with a schema in this namespace will fail to connect. Use this setting with care:"
+ " non-Java clients do not support schemas, so enabling it will prevent"
+ " non-Java clients from producing.",
response = Boolean.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenants or Namespace doesn't exist") })
public void getSchemaValidationEnforced(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@QueryParam("applied") @DefaultValue("false") boolean applied) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.SCHEMA_COMPATIBILITY_STRATEGY,
PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> {
boolean schemaValidationEnforced = policies.schema_validation_enforced;
if (!schemaValidationEnforced && applied) {
asyncResponse.resume(pulsar().getConfiguration().isSchemaValidationEnforced());
} else {
asyncResponse.resume(schemaValidationEnforced);
}
})
.exceptionally(ex -> {
log.error("[{}] Failed to get schema validation enforced flag for namespace {}", clientAppId(),
namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/schemaValidationEnforced")
@ApiOperation(value = "Set schema validation enforced flag on namespace.",
notes = "If the flag is set to true, a producer without a schema that attempts to produce to a topic"
+ " with a schema in this namespace will fail to connect. Use this setting with care:"
+ " non-Java clients do not support schemas, so enabling it will prevent"
+ " non-Java clients from producing.")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or Namespace doesn't exist"),
@ApiResponse(code = 412, message = "schemaValidationEnforced value is not valid")})
public void setSchemaValidationEnforced(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value =
"Flag of whether validation is enforced on the specified namespace",
required = true)
boolean schemaValidationEnforced) {
validateNamespaceName(tenant, namespace);
internalSetSchemaValidationEnforced(schemaValidationEnforced);
}
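// Illustrative call (hypothetical namespace): enforcing schema validation so
// that schema-less producers are rejected on this namespace:
//   POST /admin/v2/namespaces/my-tenant/my-ns/schemaValidationEnforced
//   body: true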
@POST
@Path("/{tenant}/{namespace}/offloadPolicies")
@ApiOperation(value = "Set offload configuration on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 412,
message = "OffloadPolicies is empty or driver is not supported or bucket is not valid")})
public void setOffloadPolicies(@PathParam("tenant") String tenant, @PathParam("namespace") String namespace,
@ApiParam(value = "Offload policies for the specified namespace", required = true)
OffloadPoliciesImpl offload,
@Suspended final AsyncResponse asyncResponse) {
try {
validateNamespaceName(tenant, namespace);
internalSetOffloadPolicies(asyncResponse, offload);
} catch (WebApplicationException wae) {
asyncResponse.resume(wae);
} catch (Exception e) {
asyncResponse.resume(new RestException(e));
}
}
@DELETE
@Path("/{tenant}/{namespace}/removeOffloadPolicies")
@ApiOperation(value = "Remove offload configuration on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist"),
@ApiResponse(code = 409, message = "Concurrent modification"),
@ApiResponse(code = 412,
message = "OffloadPolicies is empty or driver is not supported or bucket is not valid")})
public void removeOffloadPolicies(@PathParam("tenant") String tenant, @PathParam("namespace") String namespace,
@Suspended final AsyncResponse asyncResponse) {
try {
validateNamespaceName(tenant, namespace);
internalRemoveOffloadPolicies(asyncResponse);
} catch (WebApplicationException wae) {
asyncResponse.resume(wae);
} catch (Exception e) {
asyncResponse.resume(new RestException(e));
}
}
@GET
@Path("/{tenant}/{namespace}/offloadPolicies")
@ApiOperation(value = "Get offload configuration on a namespace.", response = OffloadPolicies.class)
@ApiResponses(value = {
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist")})
public void getOffloadPolicies(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.OFFLOAD, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.offload_policies))
.exceptionally(ex -> {
log.error("[{}] Failed to get offload policies on a namespace {}", clientAppId(),
namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/maxTopicsPerNamespace")
@ApiOperation(value = "Get maxTopicsPerNamespace config on a namespace.", response = Integer.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or namespace does not exist") })
public void getMaxTopicsPerNamespace(@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.MAX_TOPICS, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> {
int maxTopicsPerNamespace =
policies.max_topics_per_namespace != null ? policies.max_topics_per_namespace : 0;
asyncResponse.resume(maxTopicsPerNamespace);
})
.exceptionally(ex -> {
log.error("[{}] Failed to get maxTopicsPerNamespace config on a namespace {}", clientAppId(),
namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/maxTopicsPerNamespace")
@ApiOperation(value = "Set maxTopicsPerNamespace config on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or namespace doesn't exist"), })
public void setMaxTopicsPerNamespace(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Number of maximum topics for specific namespace",
required = true) int maxTopicsPerNamespace) {
validateNamespaceName(tenant, namespace);
internalSetMaxTopicsPerNamespace(maxTopicsPerNamespace);
}
@DELETE
@Path("/{tenant}/{namespace}/maxTopicsPerNamespace")
@ApiOperation(value = "Remove maxTopicsPerNamespace config on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or namespace doesn't exist"), })
public void removeMaxTopicsPerNamespace(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalRemoveMaxTopicsPerNamespace();
}
@PUT
@Path("/{tenant}/{namespace}/property/{key}/{value}")
@ApiOperation(value = "Put a key value pair property on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or namespace doesn't exist"), })
public void setProperty(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@PathParam("key") String key,
@PathParam("value") String value) {
validateNamespaceName(tenant, namespace);
internalSetProperty(key, value, asyncResponse);
}
@GET
@Path("/{tenant}/{namespace}/property/{key}")
@ApiOperation(value = "Get property value for a given key on a namespace.", response = String.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or namespace doesn't exist"), })
public void getProperty(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@PathParam("key") String key) {
validateNamespaceName(tenant, namespace);
internalGetProperty(key, asyncResponse);
}
@DELETE
@Path("/{tenant}/{namespace}/property/{key}")
@ApiOperation(value = "Remove property value for a given key on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or namespace doesn't exist"), })
public void removeProperty(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@PathParam("key") String key) {
validateNamespaceName(tenant, namespace);
internalRemoveProperty(key, asyncResponse);
}
@PUT
@Path("/{tenant}/{namespace}/properties")
@ApiOperation(value = "Put key value pairs property on a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or namespace doesn't exist"), })
public void setProperties(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "Key value pair properties for the namespace", required = true)
Map<String, String> properties) {
validateNamespaceName(tenant, namespace);
internalSetProperties(properties, asyncResponse);
}
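// Illustrative request body (hypothetical keys/values):
//   PUT /admin/v2/namespaces/my-tenant/my-ns/properties
//   body: {"team": "infra", "env": "staging"}
// Individual keys can later be read or removed through the single-property
// endpoints above.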
@GET
@Path("/{tenant}/{namespace}/properties")
@ApiOperation(value = "Get key value pair properties for a given namespace.",
response = String.class, responseContainer = "Map")
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or namespace doesn't exist"), })
public void getProperties(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalGetProperties(asyncResponse);
}
@DELETE
@Path("/{tenant}/{namespace}/properties")
@ApiOperation(value = "Clear properties on a given namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or namespace doesn't exist"), })
public void clearProperties(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalClearProperties(asyncResponse);
}
@GET
@Path("/{tenant}/{namespace}/resourcegroup")
@ApiOperation(value = "Get the resource group attached to the namespace", response = String.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist") })
public void getNamespaceResourceGroup(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(NamespaceName.get(tenant, namespace), PolicyName.RESOURCEGROUP,
PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(policies -> asyncResponse.resume(policies.resource_group_name))
.exceptionally(ex -> {
log.error("Failed to get the resource group attached to the namespace {}", namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/resourcegroup/{resourcegroup}")
@ApiOperation(value = "Set resourcegroup for a namespace")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 412, message = "Invalid resourcegroup") })
public void setNamespaceResourceGroup(@PathParam("tenant") String tenant, @PathParam("namespace") String namespace,
@PathParam("resourcegroup") String rgName) {
validateNamespaceName(tenant, namespace);
internalSetNamespaceResourceGroup(rgName);
}
@DELETE
@Path("/{tenant}/{namespace}/resourcegroup")
@ApiOperation(value = "Delete resourcegroup for a namespace")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 412, message = "Invalid resourcegroup")})
public void removeNamespaceResourceGroup(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetNamespaceResourceGroup(null);
}
@GET
@Path("/{tenant}/{namespace}/scanOffloadedLedgers")
@ApiOperation(value = "Trigger the scan of offloaded Ledgers on the LedgerOffloader for the given namespace")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Successful get of offloaded ledger data", response = String.class,
examples = @Example(value = { @ExampleProperty(mediaType = "application/json",
value = "{\"objects\":[{\"key1\":\"value1\",\"key2\":\"value2\"}],"
+ "\"total\":100,\"errors\":5,\"unknown\":3}")
})),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace doesn't exist") })
public Response scanOffloadedLedgers(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
try {
StreamingOutput output = (outputStream) -> {
try {
OutputStreamWriter out = new OutputStreamWriter(outputStream, StandardCharsets.UTF_8);
out.append("{\"objects\":[\n");
internalScanOffloadedLedgers(new OffloaderObjectsScannerUtils.ScannerResultSink() {
boolean first = true;
@Override
public void object(Map<String, Object> data) throws Exception {
if (!first) {
out.write(',');
} else {
// Clear the flag after the first object; the original `first = true`
// here meant separating commas were never written.
first = false;
}
String json = objectWriter().writeValueAsString(data);
out.write(json);
}
@Override
public void finished(int total, int errors, int unknown) throws Exception {
out.append("],\n");
out.append("\"total\": " + total + ",\n");
out.append("\"errors\": " + errors + ",\n");
out.append("\"unknown\": " + unknown + "\n");
}
});
out.append("}");
out.flush();
outputStream.flush();
} catch (Exception err) {
log.error("error", err);
throw new RuntimeException(err);
}
};
return Response.ok(output).type(MediaType.APPLICATION_JSON_TYPE).build();
} catch (Throwable err) {
log.error("Error while scanning offloaded ledgers for namespace {}", namespaceName, err);
throw new RestException(Response.Status.INTERNAL_SERVER_ERROR,
"Error while scanning ledgers for " + namespaceName);
}
}
@GET
@Path("/{tenant}/{namespace}/entryFilters")
@ApiOperation(value = "Get entry filters config on a namespace.", response = EntryFilters.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Namespace does not exist") })
public void getEntryFiltersPerTopic(
@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
validateNamespacePolicyOperationAsync(namespaceName, PolicyName.ENTRY_FILTERS, PolicyOperation.READ)
.thenCompose(__ -> getNamespacePoliciesAsync(namespaceName))
.thenAccept(polices -> asyncResponse.resume(polices.entryFilters))
.exceptionally(ex -> {
log.error("[{}] Failed to get entry filters config on namespace {}: {} ",
clientAppId(), namespaceName, ex.getCause().getMessage(), ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/entryFilters")
@ApiOperation(value = "Set entry filters for namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 400, message = "Specified entry filters are not valid"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist")
})
public void setEntryFiltersPerTopic(@Suspended AsyncResponse asyncResponse, @PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "entry filters", required = true)
EntryFilters entryFilters) {
validateNamespaceName(tenant, namespace);
internalSetEntryFiltersPerTopicAsync(entryFilters)
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("Failed to set entry filters for namespace {}", namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@DELETE
@Path("/{tenant}/{namespace}/entryFilters")
@ApiOperation(value = "Remove entry filters for namespace")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist")})
public void removeNamespaceEntryFilters(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetEntryFiltersPerTopicAsync(null)
.thenAccept(__ -> asyncResponse.resume(Response.noContent().build()))
.exceptionally(ex -> {
log.error("Failed to remove entry filters for namespace {}", namespaceName, ex);
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/migration")
@ApiOperation(hidden = true, value = "Update migration for all topics in a namespace")
@ApiResponses(value = {
@ApiResponse(code = 200, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Property or cluster or namespace doesn't exist") })
public void enableMigration(@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
boolean migrated) {
validateNamespaceName(tenant, namespace);
internalEnableMigration(migrated);
}
@POST
@Path("/{tenant}/{namespace}/dispatcherPauseOnAckStatePersistent")
@ApiOperation(value = "Set dispatcher pause on ack state persistent configuration for specified namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification")})
public void setDispatcherPauseOnAckStatePersistent(@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetDispatcherPauseOnAckStatePersistentAsync(true)
.thenRun(() -> {
log.info("[{}] Successfully enabled dispatcherPauseOnAckStatePersistent: namespace={}",
clientAppId(), namespaceName);
asyncResponse.resume(Response.noContent().build());
})
.exceptionally(ex -> {
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@DELETE
@Path("/{tenant}/{namespace}/dispatcherPauseOnAckStatePersistent")
@ApiOperation(value = "Remove dispatcher pause on ack state persistent configuration for specified namespace.")
@ApiResponses(value = {
@ApiResponse(code = 204, message = "Operation successful"),
@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 409, message = "Concurrent modification")})
public void removeDispatcherPauseOnAckStatePersistent(@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalSetDispatcherPauseOnAckStatePersistentAsync(false)
.thenRun(() -> {
log.info("[{}] Successfully removed dispatcherPauseOnAckStatePersistent: namespace={}",
clientAppId(), namespaceName);
asyncResponse.resume(Response.noContent().build());
})
.exceptionally(ex -> {
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/dispatcherPauseOnAckStatePersistent")
@ApiOperation(value = "Get dispatcher pause on ack state persistent config on a namespace.",
response = Boolean.class)
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist") })
public void getDispatcherPauseOnAckStatePersistent(@Suspended final AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalGetDispatcherPauseOnAckStatePersistentAsync()
.thenApply(asyncResponse::resume)
.exceptionally(ex -> {
resumeAsyncResponseExceptionally(asyncResponse, ex);
return null;
});
}
@POST
@Path("/{tenant}/{namespace}/allowedClusters")
@ApiOperation(value = "Set the allowed clusters for a namespace.")
@ApiResponses(value = {
@ApiResponse(code = 400, message = "The list of allowed clusters should include all replication clusters."),
@ApiResponse(code = 403, message = "The requester does not have admin permissions."),
@ApiResponse(code = 404, message = "The specified tenant, cluster, or namespace does not exist."),
@ApiResponse(code = 409, message = "A peer-cluster cannot be part of an allowed-cluster."),
@ApiResponse(code = 412, message = "The namespace is not global or the provided cluster IDs are invalid.")})
public void setNamespaceAllowedClusters(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace,
@ApiParam(value = "List of allowed clusters", required = true)
List<String> clusterIds) {
validateNamespaceName(tenant, namespace);
internalSetNamespaceAllowedClusters(clusterIds)
.thenAccept(asyncResponse::resume)
.exceptionally(e -> {
log.error("[{}] Failed to set namespace allowed clusters on namespace {}",
clientAppId(), namespace, e);
resumeAsyncResponseExceptionally(asyncResponse, e);
return null;
});
}
@GET
@Path("/{tenant}/{namespace}/allowedClusters")
@ApiOperation(value = "Get the allowed clusters for a namespace.",
response = String.class, responseContainer = "List")
@ApiResponses(value = {@ApiResponse(code = 403, message = "Don't have admin permission"),
@ApiResponse(code = 404, message = "Tenant or cluster or namespace doesn't exist"),
@ApiResponse(code = 412, message = "Namespace is not global")})
public void getNamespaceAllowedClusters(@Suspended AsyncResponse asyncResponse,
@PathParam("tenant") String tenant,
@PathParam("namespace") String namespace) {
validateNamespaceName(tenant, namespace);
internalGetNamespaceAllowedClustersAsync()
.thenAccept(asyncResponse::resume)
.exceptionally(e -> {
log.error("[{}] Failed to get namespace allowed clusters on namespace {}", clientAppId(),
namespace, e);
resumeAsyncResponseExceptionally(asyncResponse, e);
return null;
});
}
private static final Logger log = LoggerFactory.getLogger(Namespaces.class);
}
```
|
```cpp
//===- LiveIntervalUnion.h - Live interval union data struct ---*- C++ -*--===//
//
// See path_to_url for license information.
//
//===----------------------------------------------------------------------===//
//
// LiveIntervalUnion is a union of live segments across multiple live virtual
// registers. This may be used during coalescing to represent a congruence
// class, or during register allocation to model liveness of a physical
// register.
//
//===----------------------------------------------------------------------===//
#ifndef LLVM_CODEGEN_LIVEINTERVALUNION_H
#define LLVM_CODEGEN_LIVEINTERVALUNION_H
#include "llvm/ADT/IntervalMap.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/CodeGen/LiveInterval.h"
#include "llvm/CodeGen/SlotIndexes.h"
#include <cassert>
#include <limits>
namespace llvm {
class raw_ostream;
class TargetRegisterInfo;
#ifndef NDEBUG
// forward declaration
template <unsigned Element> class SparseBitVector;
using LiveVirtRegBitSet = SparseBitVector<128>;
#endif
/// Union of live intervals that are strong candidates for coalescing into a
/// single register (either physical or virtual depending on the context). We
/// expect the constituent live intervals to be disjoint, although we may
/// eventually make exceptions to handle value-based interference.
class LiveIntervalUnion {
// A set of live virtual register segments that supports fast insertion,
// intersection, and removal.
// Mapping SlotIndex intervals to virtual register numbers.
using LiveSegments = IntervalMap<SlotIndex, const LiveInterval *>;
public:
// SegmentIter can advance to the next segment ordered by starting position
// which may belong to a different live virtual register. We also must be able
// to reach the current segment's containing virtual register.
using SegmentIter = LiveSegments::iterator;
/// Const version of SegmentIter.
using ConstSegmentIter = LiveSegments::const_iterator;
// LiveIntervalUnions share an external allocator.
using Allocator = LiveSegments::Allocator;
private:
unsigned Tag = 0; // unique tag for current contents.
LiveSegments Segments; // union of virtual reg segments
public:
explicit LiveIntervalUnion(Allocator &a) : Segments(a) {}
// Iterate over all segments in the union of live virtual registers ordered
// by their starting position.
SegmentIter begin() { return Segments.begin(); }
SegmentIter end() { return Segments.end(); }
SegmentIter find(SlotIndex x) { return Segments.find(x); }
ConstSegmentIter begin() const { return Segments.begin(); }
ConstSegmentIter end() const { return Segments.end(); }
ConstSegmentIter find(SlotIndex x) const { return Segments.find(x); }
bool empty() const { return Segments.empty(); }
SlotIndex startIndex() const { return Segments.start(); }
SlotIndex endIndex() const { return Segments.stop(); }
// Provide public access to the underlying map to allow overlap iteration.
using Map = LiveSegments;
const Map &getMap() const { return Segments; }
/// getTag - Return an opaque tag representing the current state of the union.
unsigned getTag() const { return Tag; }
/// changedSince - Return true if the union changed since getTag returned tag.
bool changedSince(unsigned tag) const { return tag != Tag; }
// Add a live virtual register to this union and merge its segments.
void unify(const LiveInterval &VirtReg, const LiveRange &Range);
// Remove a live virtual register's segments from this union.
void extract(const LiveInterval &VirtReg, const LiveRange &Range);
// Remove all inserted virtual registers.
void clear() { Segments.clear(); ++Tag; }
// Print union, using TRI to translate register names
void print(raw_ostream &OS, const TargetRegisterInfo *TRI) const;
#ifndef NDEBUG
// Verify the live intervals in this union and add them to the visited set.
void verify(LiveVirtRegBitSet& VisitedVRegs);
#endif
// Get any virtual register that is assigned to this physical unit.
const LiveInterval *getOneVReg() const;
/// Query interferences between a single live virtual register and a live
/// interval union.
class Query {
const LiveIntervalUnion *LiveUnion = nullptr;
const LiveRange *LR = nullptr;
LiveRange::const_iterator LRI; ///< current position in LR
ConstSegmentIter LiveUnionI; ///< current position in LiveUnion
SmallVector<const LiveInterval *, 4> InterferingVRegs;
bool CheckedFirstInterference = false;
bool SeenAllInterferences = false;
unsigned Tag = 0;
unsigned UserTag = 0;
// Count the virtual registers in this union that interfere with this
// query's live virtual register, up to maxInterferingRegs.
unsigned collectInterferingVRegs(unsigned MaxInterferingRegs);
// Was this virtual register visited during collectInterferingVRegs?
bool isSeenInterference(const LiveInterval *VirtReg) const;
public:
Query() = default;
Query(const LiveRange &LR, const LiveIntervalUnion &LIU)
: LiveUnion(&LIU), LR(&LR) {}
Query(const Query &) = delete;
Query &operator=(const Query &) = delete;
void reset(unsigned NewUserTag, const LiveRange &NewLR,
const LiveIntervalUnion &NewLiveUnion) {
LiveUnion = &NewLiveUnion;
LR = &NewLR;
InterferingVRegs.clear();
CheckedFirstInterference = false;
SeenAllInterferences = false;
Tag = NewLiveUnion.getTag();
UserTag = NewUserTag;
}
void init(unsigned NewUserTag, const LiveRange &NewLR,
const LiveIntervalUnion &NewLiveUnion) {
if (UserTag == NewUserTag && LR == &NewLR && LiveUnion == &NewLiveUnion &&
!NewLiveUnion.changedSince(Tag)) {
// Retain cached results, e.g. firstInterference.
return;
}
reset(NewUserTag, NewLR, NewLiveUnion);
}
// Does this live virtual register interfere with the union?
bool checkInterference() { return collectInterferingVRegs(1); }
// Vector generated by collectInterferingVRegs.
const SmallVectorImpl<const LiveInterval *> &interferingVRegs(
unsigned MaxInterferingRegs = std::numeric_limits<unsigned>::max()) {
if (!SeenAllInterferences || MaxInterferingRegs < InterferingVRegs.size())
collectInterferingVRegs(MaxInterferingRegs);
return InterferingVRegs;
}
};
// Array of LiveIntervalUnions.
class Array {
unsigned Size = 0;
LiveIntervalUnion *LIUs = nullptr;
public:
Array() = default;
~Array() { clear(); }
// Initialize the array to have Size entries.
// Reuse an existing allocation if the size matches.
void init(LiveIntervalUnion::Allocator&, unsigned Size);
unsigned size() const { return Size; }
void clear();
LiveIntervalUnion& operator[](unsigned idx) {
assert(idx < Size && "idx out of bounds");
return LIUs[idx];
}
const LiveIntervalUnion& operator[](unsigned Idx) const {
assert(Idx < Size && "Idx out of bounds");
return LIUs[Idx];
}
};
};
} // end namespace llvm
#endif // LLVM_CODEGEN_LIVEINTERVALUNION_H
```
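The tag-based cache invalidation that `Query::init` relies on (recompute only when `changedSince` reports that the union mutated) is a general idiom worth isolating. Below is a minimal Go sketch of the same pattern; all names (`Union`, `Query`, `Sum`) are my own for illustration, not LLVM API:

```go
package main

import "fmt"

// Union holds data plus a tag bumped on every mutation, like
// LiveIntervalUnion::Tag above.
type Union struct {
	tag  uint
	data []int
}

func (u *Union) Add(v int) { u.data = append(u.data, v); u.tag++ }

// ChangedSince mirrors LiveIntervalUnion::changedSince.
func (u *Union) ChangedSince(tag uint) bool { return tag != u.tag }

// Query caches a derived result keyed on the union's tag and
// recomputes only on a cache miss, like Query::init vs Query::reset.
type Query struct {
	u        *Union
	tag      uint
	sum      int
	valid    bool
	computes int
}

func (q *Query) Sum(u *Union) int {
	if !q.valid || q.u != u || u.ChangedSince(q.tag) {
		// Cache miss: recompute and remember the tag we observed.
		q.u, q.tag, q.valid = u, u.tag, true
		q.sum = 0
		for _, v := range u.data {
			q.sum += v
		}
		q.computes++
	}
	return q.sum
}

func main() {
	u := &Union{}
	u.Add(1)
	u.Add(2)
	q := &Query{}
	fmt.Println(q.Sum(u), q.Sum(u)) // second call is served from cache
	u.Add(3)                        // bumps the tag, invalidating the cache
	fmt.Println(q.Sum(u), "computes:", q.computes)
}
```

The point of the opaque tag is that invalidation costs one integer compare, so queries can be re-run speculatively without re-walking the interval map.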
|
```fortran
*> \brief \b ZGEQR
*
* Definition:
* ===========
*
* SUBROUTINE ZGEQR( M, N, A, LDA, T, TSIZE, WORK, LWORK,
* INFO )
*
* .. Scalar Arguments ..
* INTEGER INFO, LDA, M, N, TSIZE, LWORK
* ..
* .. Array Arguments ..
* COMPLEX*16 A( LDA, * ), T( * ), WORK( * )
* ..
*
*
*> \par Purpose:
* =============
*>
*> \verbatim
*>
*> ZGEQR computes a QR factorization of a complex M-by-N matrix A:
*>
*> A = Q * ( R ),
*> ( 0 )
*>
*> where:
*>
*> Q is a M-by-M orthogonal matrix;
*> R is an upper-triangular N-by-N matrix;
*> 0 is a (M-N)-by-N zero matrix, if M > N.
*>
*> \endverbatim
*
* Arguments:
* ==========
*
*> \param[in] M
*> \verbatim
*> M is INTEGER
*> The number of rows of the matrix A. M >= 0.
*> \endverbatim
*>
*> \param[in] N
*> \verbatim
*> N is INTEGER
*> The number of columns of the matrix A. N >= 0.
*> \endverbatim
*>
*> \param[in,out] A
*> \verbatim
*> A is COMPLEX*16 array, dimension (LDA,N)
*> On entry, the M-by-N matrix A.
*> On exit, the elements on and above the diagonal of the array
*> contain the min(M,N)-by-N upper trapezoidal matrix R
*> (R is upper triangular if M >= N);
*> the elements below the diagonal are used to store part of the
*> data structure to represent Q.
*> \endverbatim
*>
*> \param[in] LDA
*> \verbatim
*> LDA is INTEGER
*> The leading dimension of the array A. LDA >= max(1,M).
*> \endverbatim
*>
*> \param[out] T
*> \verbatim
*> T is COMPLEX*16 array, dimension (MAX(5,TSIZE))
*> On exit, if INFO = 0, T(1) returns optimal (or either minimal
*> or optimal, if query is assumed) TSIZE. See TSIZE for details.
*> Remaining T contains part of the data structure used to represent Q.
*> If one wants to apply or construct Q, then one needs to keep T
*> (in addition to A) and pass it to further subroutines.
*> \endverbatim
*>
*> \param[in] TSIZE
*> \verbatim
*> TSIZE is INTEGER
*> If TSIZE >= 5, the dimension of the array T.
*> If TSIZE = -1 or -2, then a workspace query is assumed. The routine
*> only calculates the sizes of the T and WORK arrays, returns these
*> values as the first entries of the T and WORK arrays, and no error
*> message related to T or WORK is issued by XERBLA.
*> If TSIZE = -1, the routine calculates optimal size of T for the
*> optimum performance and returns this value in T(1).
*> If TSIZE = -2, the routine calculates minimal size of T and
*> returns this value in T(1).
*> \endverbatim
*>
*> \param[out] WORK
*> \verbatim
*> (workspace) COMPLEX*16 array, dimension (MAX(1,LWORK))
*> On exit, if INFO = 0, WORK(1) contains optimal (or either minimal
*> or optimal, if query was assumed) LWORK.
*> See LWORK for details.
*> \endverbatim
*>
*> \param[in] LWORK
*> \verbatim
*> LWORK is INTEGER
*> The dimension of the array WORK. LWORK >= 1.
*> If LWORK = -1 or -2, then a workspace query is assumed. The routine
*> only calculates the sizes of the T and WORK arrays, returns these
*> values as the first entries of the T and WORK arrays, and no error
*> message related to T or WORK is issued by XERBLA.
*> If LWORK = -1, the routine calculates optimal size of WORK for the
*> optimal performance and returns this value in WORK(1).
*> If LWORK = -2, the routine calculates minimal size of WORK and
*> returns this value in WORK(1).
*> \endverbatim
*>
*> \param[out] INFO
*> \verbatim
*> INFO is INTEGER
*> = 0: successful exit
*> < 0: if INFO = -i, the i-th argument had an illegal value
*> \endverbatim
*
* Authors:
* ========
*
*> \author Univ. of Tennessee
*> \author Univ. of California Berkeley
*> \author Univ. of Colorado Denver
*> \author NAG Ltd.
*
*> \par Further Details
* ====================
*>
*> \verbatim
*>
*> The goal of the interface is to give maximum freedom to the developers for
*> creating any QR factorization algorithm they wish. The triangular
*> (trapezoidal) R has to be stored in the upper part of A. The lower part of A
*> and the array T can be used to store any relevant information for applying or
*> constructing the Q factor. The WORK array can safely be discarded after exit.
*>
*> Caution: One should not expect the sizes of T and WORK to be the same from one
*> LAPACK implementation to the other, or even from one execution to the other.
*> A workspace query (for T and WORK) is needed at each execution. However,
*> for a given execution, the size of T and WORK are fixed and will not change
*> from one query to the next.
*>
*> \endverbatim
*>
*> \par Further Details particular to this LAPACK implementation:
* ==============================================================
*>
*> \verbatim
*>
*> These details are particular for this LAPACK implementation. Users should not
*> take them for granted. These details may change in the future, and are not likely
*> true for another LAPACK implementation. These details are relevant if one wants
*> to try to understand the code. They are not part of the interface.
*>
*> In this version,
*>
*> T(2): row block size (MB)
*> T(3): column block size (NB)
*> T(6:TSIZE): data structure needed for Q, computed by
*> ZLATSQR or ZGEQRT
*>
*> Depending on the matrix dimensions M and N, and row and column
*> block sizes MB and NB returned by ILAENV, ZGEQR will use either
*> ZLATSQR (if the matrix is tall-and-skinny) or ZGEQRT to compute
*> the QR factorization.
*>
*> \endverbatim
*>
*> \ingroup geqr
*>
* =====================================================================
SUBROUTINE ZGEQR( M, N, A, LDA, T, TSIZE, WORK, LWORK,
$ INFO )
*
* -- LAPACK computational routine --
* -- LAPACK is a software package provided by Univ. of Tennessee, --
* -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd. --
*
* .. Scalar Arguments ..
INTEGER INFO, LDA, M, N, TSIZE, LWORK
* ..
* .. Array Arguments ..
COMPLEX*16 A( LDA, * ), T( * ), WORK( * )
* ..
*
* =====================================================================
*
* ..
* .. Local Scalars ..
LOGICAL LQUERY, LMINWS, MINT, MINW
INTEGER MB, NB, MINTSZ, NBLCKS, LWMIN, LWREQ
* ..
* .. External Functions ..
LOGICAL LSAME
EXTERNAL LSAME
* ..
* .. External Subroutines ..
EXTERNAL ZLATSQR, ZGEQRT, XERBLA
* ..
* .. Intrinsic Functions ..
INTRINSIC MAX, MIN, MOD
* ..
* .. External Functions ..
INTEGER ILAENV
EXTERNAL ILAENV
* ..
* .. Executable Statements ..
*
* Test the input arguments
*
INFO = 0
*
LQUERY = ( TSIZE.EQ.-1 .OR. TSIZE.EQ.-2 .OR.
$ LWORK.EQ.-1 .OR. LWORK.EQ.-2 )
*
MINT = .FALSE.
MINW = .FALSE.
IF( TSIZE.EQ.-2 .OR. LWORK.EQ.-2 ) THEN
IF( TSIZE.NE.-1 ) MINT = .TRUE.
IF( LWORK.NE.-1 ) MINW = .TRUE.
END IF
*
* Determine the block size
*
IF( MIN ( M, N ).GT.0 ) THEN
MB = ILAENV( 1, 'ZGEQR ', ' ', M, N, 1, -1 )
NB = ILAENV( 1, 'ZGEQR ', ' ', M, N, 2, -1 )
ELSE
MB = M
NB = 1
END IF
IF( MB.GT.M .OR. MB.LE.N ) MB = M
IF( NB.GT.MIN( M, N ) .OR. NB.LT.1 ) NB = 1
MINTSZ = N + 5
IF( MB.GT.N .AND. M.GT.N ) THEN
IF( MOD( M - N, MB - N ).EQ.0 ) THEN
NBLCKS = ( M - N ) / ( MB - N )
ELSE
NBLCKS = ( M - N ) / ( MB - N ) + 1
END IF
ELSE
NBLCKS = 1
END IF
*
* Determine if the workspace size satisfies minimal size
*
LWMIN = MAX( 1, N )
LWREQ = MAX( 1, N*NB )
LMINWS = .FALSE.
IF( ( TSIZE.LT.MAX( 1, NB*N*NBLCKS + 5 ) .OR. LWORK.LT.LWREQ )
$ .AND. ( LWORK.GE.N ) .AND. ( TSIZE.GE.MINTSZ )
$ .AND. ( .NOT.LQUERY ) ) THEN
IF( TSIZE.LT.MAX( 1, NB*N*NBLCKS + 5 ) ) THEN
LMINWS = .TRUE.
NB = 1
MB = M
END IF
IF( LWORK.LT.LWREQ ) THEN
LMINWS = .TRUE.
NB = 1
END IF
END IF
*
IF( M.LT.0 ) THEN
INFO = -1
ELSE IF( N.LT.0 ) THEN
INFO = -2
ELSE IF( LDA.LT.MAX( 1, M ) ) THEN
INFO = -4
ELSE IF( TSIZE.LT.MAX( 1, NB*N*NBLCKS + 5 )
$ .AND. ( .NOT.LQUERY ) .AND. ( .NOT.LMINWS ) ) THEN
INFO = -6
ELSE IF( ( LWORK.LT.LWREQ ) .AND. ( .NOT.LQUERY )
$ .AND. ( .NOT.LMINWS ) ) THEN
INFO = -8
END IF
*
IF( INFO.EQ.0 ) THEN
IF( MINT ) THEN
T( 1 ) = MINTSZ
ELSE
T( 1 ) = NB*N*NBLCKS + 5
END IF
T( 2 ) = MB
T( 3 ) = NB
IF( MINW ) THEN
WORK( 1 ) = LWMIN
ELSE
WORK( 1 ) = LWREQ
END IF
END IF
IF( INFO.NE.0 ) THEN
CALL XERBLA( 'ZGEQR', -INFO )
RETURN
ELSE IF( LQUERY ) THEN
RETURN
END IF
*
* Quick return if possible
*
IF( MIN( M, N ).EQ.0 ) THEN
RETURN
END IF
*
* The QR Decomposition
*
IF( ( M.LE.N ) .OR. ( MB.LE.N ) .OR. ( MB.GE.M ) ) THEN
CALL ZGEQRT( M, N, NB, A, LDA, T( 6 ), NB, WORK, INFO )
ELSE
CALL ZLATSQR( M, N, MB, NB, A, LDA, T( 6 ), NB, WORK,
$ LWORK, INFO )
END IF
*
WORK( 1 ) = LWREQ
*
RETURN
*
* End of ZGEQR
*
END
```
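The block-partitioning and dispatch logic above (the `NBLCKS` computation and the final choice between ZGEQRT and ZLATSQR) can be mirrored in a short Go sketch. This is an illustration of the arithmetic only, not part of LAPACK; the function names are my own:

```go
package main

import "fmt"

// blockCount mirrors ZGEQR's NBLCKS computation: the number of row
// blocks a tall-skinny QR (ZLATSQR) processes for an m-by-n matrix
// with row block size mb. Each block after the first reuses the n
// rows of the previous triangular factor, hence the (mb - n) stride.
func blockCount(m, n, mb int) int {
	if mb > n && m > n {
		// Ceiling of (m-n)/(mb-n), matching the MOD-based branch above.
		return ((m - n) + (mb - n) - 1) / (mb - n)
	}
	return 1
}

// usesTallSkinnyPath mirrors the final dispatch: ZGEQRT is used unless
// the matrix is genuinely tall and skinny with a usable row block size.
func usesTallSkinnyPath(m, n, mb int) bool {
	return !(m <= n || mb <= n || mb >= m)
}

func main() {
	// A 100x4 matrix with row blocks of 20 rows: tall-skinny path,
	// six blocks of the sequential TSQR sweep.
	fmt.Println(blockCount(100, 4, 20), usesTallSkinnyPath(100, 4, 20))
}
```

Note that this is why the T-array size `NB*N*NBLCKS + 5` scales with the block count: one triangular factor per block is retained to reconstruct Q.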
|
The Fiat RS.14 was an Italian long-range maritime strategic reconnaissance floatplane. The RS.14 was a four/five seat all-metal cantilever low/mid-wing monoplane powered by two wing-mounted 626 kW (840 hp) Fiat A.74 R.C.38 engines. It had a conventional cantilever tail unit with a single fin and rudder. Its undercarriage consisted of two large floats on struts. It had a glazed nose for an observer or bomb aimer. The pilot and copilot sat side by side with a wireless operator's compartment behind them. In the bombing role the RS.14 was fitted with a long ventral gondola to carry various combinations of anti-submarine bombs (up to ).
Development
The RS.14 was designed at the works at Marina di Pisa. The first of two prototypes flew in May 1939.
A prototype landplane version AS.14 was built and first flown on 11 August 1943. It was designed as a ground-attack aircraft and intended to be armed with a cannon and machine guns. It was not ordered and no others were built.
Operational history
The RS.14 went into service with the Italian Air Force with a number of maritime strategic reconnaissance squadrons at bases around the Italian coast and also in Sicily and Sardinia. They were used for convoy escort duties and anti-submarine patrols. Occasionally they engaged in aerial combat, obtaining unexpected victories such as when, on Saturday 9 May 1942, an RS.14 intercepted Spitfires that took off from the carriers HMS Eagle and USS Wasp, headed for Malta, and machine-gunned two. The two RAF fighters collided and fell into the sea. Both pilots were killed. After the 1943 Armistice a few survivors were operated by the Italian Co-Belligerent Air Force. At the end of the Second World War the aircraft were used for liaison duties around the Mediterranean carrying up to four passengers.
Variants
RS.14
Production float plane with Fiat A.74 R.C.38 engines, 188 built including two prototypes.
AS.14
Land plane version with retractable landing gear, one built.
Operators
Regia Aeronautica
Italian Co-Belligerent Air Force
Italian Air Force operated six surviving Fiat RS.14 until 1948
Specifications
See also
Notes
References
Mondey, David (1984), The Concise Guide to Axis Aircraft of World War II, Chancellor Press,
The Illustrated Encyclopedia of Aircraft (Part Work 1982-1985), 1985, Orbis Publishing, Page 1812
RS.14
1940s Italian patrol aircraft
Floatplanes
Aircraft first flown in 1939
Mid-wing aircraft
Twin piston-engined tractor aircraft
|
```cpp
#pragma once
#include "source/extensions/filters/http/common/factory_base.h"
#include "contrib/envoy/extensions/filters/http/language/v3alpha/language.pb.h"
#include "contrib/envoy/extensions/filters/http/language/v3alpha/language.pb.validate.h"
namespace Envoy {
namespace Extensions {
namespace HttpFilters {
namespace Language {
/**
* Config registration for the language detection filter (i18n). @see NamedHttpFilterConfigFactory.
*/
class LanguageFilterFactory
: public Common::FactoryBase<envoy::extensions::filters::http::language::v3alpha::Language> {
public:
LanguageFilterFactory() : FactoryBase("envoy.filters.http.language") {}
Http::FilterFactoryCb createFilterFactoryFromProtoTyped(
const envoy::extensions::filters::http::language::v3alpha::Language& proto_config,
const std::string& stat_prefix, Server::Configuration::FactoryContext& context) override;
};
DECLARE_FACTORY(LanguageFilterFactory);
} // namespace Language
} // namespace HttpFilters
} // namespace Extensions
} // namespace Envoy
```
|
```go
//
//
// path_to_url
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
package main
import (
"context"
"encoding/json"
"errors"
"fmt"
"os"
survey "github.com/AlecAivazis/survey/v2"
surveycore "github.com/AlecAivazis/survey/v2/core"
"github.com/spf13/cobra"
"github.com/pulumi/pulumi/pkg/v3/backend"
"github.com/pulumi/pulumi/pkg/v3/backend/display"
"github.com/pulumi/pulumi/pkg/v3/resource/deploy"
"github.com/pulumi/pulumi/pkg/v3/resource/edit"
"github.com/pulumi/pulumi/pkg/v3/resource/stack"
"github.com/pulumi/pulumi/sdk/v3/go/common/apitype"
"github.com/pulumi/pulumi/sdk/v3/go/common/diag/colors"
"github.com/pulumi/pulumi/sdk/v3/go/common/resource"
"github.com/pulumi/pulumi/sdk/v3/go/common/slice"
"github.com/pulumi/pulumi/sdk/v3/go/common/tokens"
"github.com/pulumi/pulumi/sdk/v3/go/common/util/cmdutil"
"github.com/pulumi/pulumi/sdk/v3/go/common/util/contract"
"github.com/pulumi/pulumi/sdk/v3/go/common/util/result"
)
func newStateCmd() *cobra.Command {
cmd := &cobra.Command{
Use: "state",
Short: "Edit the current stack's state",
Long: `Edit the current stack's state
Subcommands of this command can be used to surgically edit parts of a stack's state. These can be useful when
troubleshooting a stack or when performing specific edits that otherwise would require editing the state file by hand.`,
Args: cmdutil.NoArgs,
}
cmd.AddCommand(newStateEditCommand())
cmd.AddCommand(newStateDeleteCommand())
cmd.AddCommand(newStateUnprotectCommand())
cmd.AddCommand(newStateRenameCommand())
cmd.AddCommand(newStateUpgradeCommand())
cmd.AddCommand(newStateMoveCommand())
return cmd
}
// locateStackResource attempts to find a unique resource associated with the given URN in the given snapshot. If the
// given URN is ambiguous and this is an interactive terminal, it prompts the user to select one of the resources in
// the list of resources with identical URNs to operate upon.
func locateStackResource(opts display.Options, snap *deploy.Snapshot, urn resource.URN) (*resource.State, error) {
candidateResources := edit.LocateResource(snap, urn)
switch {
case len(candidateResources) == 0: // resource was not found
return nil, fmt.Errorf("No such resource %q exists in the current state", urn)
case len(candidateResources) == 1: // resource was unambiguously found
return candidateResources[0], nil
}
// If there exist multiple resources that have the requested URN, prompt the user to select one if we're running
// interactively. If we're not, early exit.
if !cmdutil.Interactive() {
errorMsg := "Resource URN ambiguously referred to multiple resources. Did you mean:\n"
for _, res := range candidateResources {
errorMsg += fmt.Sprintf(" %s\n", res.ID)
}
return nil, errors.New(errorMsg)
}
// Note: this is done to adhere to the same color scheme as the `pulumi new` picker, which also does this.
surveycore.DisableColor = true
prompt := "Multiple resources with the given URN exist, please select the one to edit:"
prompt = opts.Color.Colorize(colors.SpecPrompt + prompt + colors.Reset)
options := slice.Prealloc[string](len(candidateResources))
optionMap := make(map[string]*resource.State)
for _, ambiguousResource := range candidateResources {
// Prompt the user to select from a list of IDs, since these resources are known to all have the same URN.
message := fmt.Sprintf("%q", ambiguousResource.ID)
if ambiguousResource.Protect {
message += " (Protected)"
}
if ambiguousResource.Delete {
message += " (Pending Deletion)"
}
options = append(options, message)
optionMap[message] = ambiguousResource
}
var option string
if err := survey.AskOne(&survey.Select{
Message: prompt,
Options: options,
PageSize: optimalPageSize(optimalPageSizeOpts{nopts: len(options)}),
}, &option, surveyIcons(opts.Color)); err != nil {
return nil, errors.New("no resource selected")
}
return optionMap[option], nil
}
// runStateEdit runs the given state edit function on a resource with the given URN in a given stack.
func runStateEdit(
ctx context.Context, stackName string, showPrompt bool,
urn resource.URN, operation edit.OperationFunc,
) error {
return runTotalStateEdit(ctx, stackName, showPrompt, func(opts display.Options, snap *deploy.Snapshot) error {
res, err := locateStackResource(opts, snap, urn)
if err != nil {
return err
}
return operation(snap, res)
})
}
// runTotalStateEdit runs a snapshot-mutating function on the entirety of the given stack's snapshot.
// Before mutating, the user may be prompted for confirmation if the current session is interactive.
func runTotalStateEdit(
ctx context.Context, stackName string, showPrompt bool,
operation func(opts display.Options, snap *deploy.Snapshot) error,
) error {
opts := display.Options{
Color: cmdutil.GetGlobalColorization(),
}
s, err := requireStack(ctx, stackName, stackOfferNew, opts)
if err != nil {
return err
}
return totalStateEdit(ctx, s, showPrompt, opts, operation)
}
func totalStateEdit(ctx context.Context, s backend.Stack, showPrompt bool, opts display.Options,
operation func(opts display.Options, snap *deploy.Snapshot) error,
) error {
snap, err := s.Snapshot(ctx, stack.DefaultSecretsProvider)
if err != nil {
return err
} else if snap == nil {
return nil
}
if showPrompt && cmdutil.Interactive() {
confirm := false
surveycore.DisableColor = true
prompt := opts.Color.Colorize(colors.Yellow + "warning" + colors.Reset + ": ")
prompt += "This command will edit your stack's state directly. Confirm?"
if err = survey.AskOne(&survey.Confirm{
Message: prompt,
}, &confirm, surveyIcons(opts.Color)); err != nil || !confirm {
return result.FprintBailf(os.Stdout, "confirmation declined")
}
}
// The `operation` callback will mutate `snap` in-place. In order to validate the correctness of the transformation
// that we are doing here, we verify the integrity of the snapshot before the mutation. If the snapshot was valid
// before we mutated it, we'll assert that we didn't make it invalid by mutating it.
stackIsAlreadyHosed := snap.VerifyIntegrity() != nil
if err = operation(opts, snap); err != nil {
return err
}
// If the stack is already broken, don't bother verifying the integrity here.
if !stackIsAlreadyHosed && !backend.DisableIntegrityChecking {
contract.AssertNoErrorf(snap.VerifyIntegrity(), "state edit produced an invalid snapshot")
}
sdep, err := stack.SerializeDeployment(ctx, snap, false /* showSecrets */)
if err != nil {
return fmt.Errorf("serializing deployment: %w", err)
}
// Once we've mutated the snapshot, import it back into the backend so that it can be persisted.
bytes, err := json.Marshal(sdep)
if err != nil {
return err
}
dep := apitype.UntypedDeployment{
Version: apitype.DeploymentSchemaVersionCurrent,
Deployment: bytes,
}
return s.ImportDeployment(ctx, &dep)
}
// Prompt the user to select a URN from the passed in state.
//
// stackName is the name of the current stack.
//
// snap is the snapshot of the current stack. If (*snap) is nil, it will be set to
// the retrieved snapshot value. This allows caching between calls.
//
// Prompt is displayed to the user when selecting the URN.
func getURNFromState(
ctx context.Context, stackName string, snap **deploy.Snapshot, prompt string,
) (resource.URN, error) {
if snap == nil {
// This means we won't cache the value.
snap = new(*deploy.Snapshot)
}
if *snap == nil {
opts := display.Options{
Color: cmdutil.GetGlobalColorization(),
}
s, err := requireStack(ctx, stackName, stackLoadOnly, opts)
if err != nil {
return "", err
}
*snap, err = s.Snapshot(ctx, stack.DefaultSecretsProvider)
if err != nil {
return "", err
}
if *snap == nil {
return "", errors.New("no snapshot found")
}
}
urnList := make([]string, len((*snap).Resources))
for i, r := range (*snap).Resources {
urnList[i] = string(r.URN)
}
var urn string
err := survey.AskOne(&survey.Select{
Message: prompt,
Options: urnList,
}, &urn, survey.WithValidator(survey.Required), surveyIcons(cmdutil.GetGlobalColorization()))
if err != nil {
return "", err
}
result := resource.URN(urn)
contract.Assertf(result.IsValid(),
"Because we chose from an existing URN, it must be valid")
return result, nil
}
// Ask the user for a resource name.
func getNewResourceName() (tokens.QName, error) {
var resourceName string
err := survey.AskOne(&survey.Input{
Message: "Choose a new resource name:",
}, &resourceName, surveyIcons(cmdutil.GetGlobalColorization()),
survey.WithValidator(func(ans interface{}) error {
if tokens.IsQName(ans.(string)) {
return nil
}
return errors.New("resource names may only contain alphanumerics, underscores, hyphens, dots, and slashes")
}))
if err != nil {
return "", err
}
contract.Assertf(tokens.IsQName(resourceName),
"Survey validated that resourceName %q is a QName", resourceName)
return tokens.QName(resourceName), nil
}
```
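The `**deploy.Snapshot` parameter of `getURNFromState` implements an opt-in caching idiom: passing nil means "don't cache", while passing the address of a caller-owned pointer shares the fetched value between calls. A minimal Go sketch of the same pattern, with hypothetical names of my own (`fetch`, `getValue`):

```go
package main

import "fmt"

// fetches counts how often the expensive lookup actually ran.
var fetches = 0

// fetch simulates an expensive retrieval, standing in for s.Snapshot(...).
func fetch() *int {
	fetches++
	v := 42
	return &v
}

// getValue caches the fetched value through a **int, mirroring the
// **deploy.Snapshot parameter: a nil cache means the caller opted out.
func getValue(cache **int) int {
	if cache == nil {
		cache = new(*int) // throwaway slot; nothing is shared with the caller
	}
	if *cache == nil {
		*cache = fetch() // populate the caller's slot on first use
	}
	return **cache
}

func main() {
	var cache *int
	fmt.Println(getValue(&cache), getValue(&cache), getValue(nil))
	fmt.Println("fetches:", fetches) // only the cached pair shares one fetch
}
```

The double pointer is what lets the callee distinguish "no cache wanted" (nil outer pointer) from "cache empty" (nil inner pointer), which a single pointer parameter cannot express.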
|
```go
/*
*/
package grpclogging_test
import (
"bytes"
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"encoding/pem"
"math/big"
"net"
"testing"
"time"
"github.com/hyperledger/fabric/common/grpclogging/testpb"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
//go:generate protoc --proto_path=testpb --go_out=plugins=grpc,paths=source_relative:testpb testpb/echo.proto
func TestGrpclogging(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "Grpclogging Suite")
}
var (
clientCertWithKey tls.Certificate
serverCertWithKey tls.Certificate
caCertPool *x509.CertPool
clientTLSConfig *tls.Config
serverTLSConfig *tls.Config
)
var _ = BeforeSuite(func() {
var err error
caCert, caKey := generateCA("test-ca", "127.0.0.1")
clientCert, clientKey := issueCertificate(caCert, caKey, "client", "127.0.0.1")
clientCertWithKey, err = tls.X509KeyPair(clientCert, clientKey)
Expect(err).NotTo(HaveOccurred())
serverCert, serverKey := issueCertificate(caCert, caKey, "server", "127.0.0.1")
serverCertWithKey, err = tls.X509KeyPair(serverCert, serverKey)
Expect(err).NotTo(HaveOccurred())
caCertPool = x509.NewCertPool()
added := caCertPool.AppendCertsFromPEM(caCert)
Expect(added).To(BeTrue())
serverTLSConfig = &tls.Config{
Certificates: []tls.Certificate{serverCertWithKey},
ClientAuth: tls.VerifyClientCertIfGiven,
ClientCAs: caCertPool,
RootCAs: caCertPool,
}
clientTLSConfig = &tls.Config{
Certificates: []tls.Certificate{clientCertWithKey},
RootCAs: caCertPool,
ClientSessionCache: tls.NewLRUClientSessionCache(10),
}
})
//go:generate counterfeiter -o fakes/echo_service.go --fake-name EchoServiceServer . echoServiceServer
type echoServiceServer interface {
testpb.EchoServiceServer
}
func newTemplate(subjectCN string, hosts ...string) x509.Certificate {
notBefore := time.Now().Add(-1 * time.Minute)
notAfter := time.Now().Add(365 * 24 * time.Hour)
serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128)
serialNumber, err := rand.Int(rand.Reader, serialNumberLimit)
Expect(err).NotTo(HaveOccurred())
template := x509.Certificate{
Subject: pkix.Name{CommonName: subjectCN},
SerialNumber: serialNumber,
NotBefore: notBefore,
NotAfter: notAfter,
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
BasicConstraintsValid: true,
}
for _, h := range hosts {
if ip := net.ParseIP(h); ip != nil {
template.IPAddresses = append(template.IPAddresses, ip)
} else {
template.DNSNames = append(template.DNSNames, h)
}
}
return template
}
func pemEncode(derCert []byte, key *ecdsa.PrivateKey) (pemCert, pemKey []byte) {
certBuf := &bytes.Buffer{}
err := pem.Encode(certBuf, &pem.Block{Type: "CERTIFICATE", Bytes: derCert})
Expect(err).NotTo(HaveOccurred())
keyBytes, err := x509.MarshalECPrivateKey(key)
Expect(err).NotTo(HaveOccurred())
keyBuf := &bytes.Buffer{}
err = pem.Encode(keyBuf, &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyBytes})
Expect(err).NotTo(HaveOccurred())
return certBuf.Bytes(), keyBuf.Bytes()
}
func generateCA(subjectCN string, hosts ...string) (pemCert, pemKey []byte) {
privateKey, err := ecdsa.GenerateKey(elliptic.P384(), rand.Reader)
Expect(err).NotTo(HaveOccurred())
publicKey := privateKey.Public()
template := newTemplate(subjectCN, hosts...)
template.KeyUsage |= x509.KeyUsageCertSign
template.IsCA = true
derBytes, err := x509.CreateCertificate(rand.Reader, &template, &template, publicKey, privateKey)
Expect(err).NotTo(HaveOccurred())
return pemEncode(derBytes, privateKey)
}
func issueCertificate(caCert, caKey []byte, subjectCN string, hosts ...string) (pemCert, pemKey []byte) {
tlsCert, err := tls.X509KeyPair(caCert, caKey)
Expect(err).NotTo(HaveOccurred())
ca, err := x509.ParseCertificate(tlsCert.Certificate[0])
Expect(err).NotTo(HaveOccurred())
privateKey, err := ecdsa.GenerateKey(elliptic.P384(), rand.Reader)
Expect(err).NotTo(HaveOccurred())
publicKey := privateKey.Public()
template := newTemplate(subjectCN, hosts...)
derBytes, err := x509.CreateCertificate(rand.Reader, &template, ca, publicKey, tlsCert.PrivateKey)
Expect(err).NotTo(HaveOccurred())
return pemEncode(derBytes, privateKey)
}
```
|
Ciega a citas is a Spanish television series produced by Mediaset España Comunicación and Big Bang Media and broadcast on the Cuatro channel. The series stars Teresa Hurtado de Ory and Álex Gadea. It premiered on March 10, 2014, and is inspired by the 2009 Argentine telenovela of the same name.
Cast
Teresa Hurtado de Ory - Lucía González Soler
Elena Irureta - Maruchi Soler
Arancha Martí - Irene Zabaleta Soler
Joaquín Climent - Zabaleta
Miguel Diosdado - Rodrigo Carrión
Belinda Washington - Pilar Aranda Serrano
Luis Fernando Alvés - Ángel González
Octavi Pujades - Carlos Rangel
Álex Gadea - Sergio Feo
Marta Nieto - Natalia Valdecantos
Ramón Pujol - Miguel Ayala
Rubén Sanz - Raúl Estévez
Rebeca Salas - Críspula "Kris" Soto
Nico Romero - Simón Lozano
Jorge Roelas - Adolfo Morcillo
Adriana Torrebejano as Beatriz
Pablo Puyol as Alberto
Awards
2010 International Emmy Awards
Best Telenovela (nominated)
References
External links
Official website
2014 telenovelas
Spanish telenovelas
2010s Spanish comedy television series
Telecinco telenovelas
Spanish-language telenovelas
2014 Spanish television series debuts
2014 Spanish television series endings
|
```c
// Generated file
// clang-format off
const char *g_dilate_normalmap_shader =
"#version 450\n"
"\n"
"// Dilates a normalmap by filling \"empty\" pixels with the average of surrounding pixels.\n"
"// Assumes the input image is tiled: dilation will not interact across tiles.\n"
"// One run of this shader will dilate by 1 pixel.\n"
"\n"
"layout (local_size_x = 8, local_size_y = 8, local_size_z = 1) in;\n"
"\n"
"// We must use alternating images because each iteration reads neighbor pixels\n"
"layout (set = 0, binding = 0, rgba8ui) restrict readonly uniform uimage2D u_src_image;\n"
"layout (set = 0, binding = 1, rgba8ui) restrict writeonly uniform uimage2D u_dst_image;\n"
"\n"
"layout (set = 0, binding = 2) uniform Params {\n"
" int u_tile_size;\n"
"};\n"
"\n"
"void main() {\n"
" // This color corresponds to a null normal.\n"
" const ivec4 nocol = ivec4(127, 127, 127, 255);\n"
" const ivec2 pixel_pos = ivec2(gl_GlobalInvocationID.xy);\n"
"\n"
" const ivec4 col11 = ivec4(imageLoad(u_src_image, pixel_pos));\n"
" if (col11 != nocol) {\n"
" imageStore(u_dst_image, pixel_pos, col11);\n"
" return;\n"
" }\n"
"\n"
" //const ivec2 im_size = imageSize(u_src_image).xy;\n"
"\n"
" const ivec2 p01 = pixel_pos + ivec2(-1, 0);\n"
" const ivec2 p21 = pixel_pos + ivec2(1, 0);\n"
" const ivec2 p10 = pixel_pos + ivec2(0, -1);\n"
" const ivec2 p12 = pixel_pos + ivec2(0, 1);\n"
"\n"
" ivec4 col_sum = ivec4(0,0,0,0);\n"
" int count = 0;\n"
"\n"
" const ivec4 col01 = ivec4(imageLoad(u_src_image, p01));\n"
" // Don't sample pixels of different tiles than the current one.\n"
" // This also takes care of image borders, but we must do it more explicitly for negative borders\n"
" // because of how division works\n"
" if (col01 != nocol && pixel_pos.x != 0 && (pixel_pos.x - 1) / u_tile_size == pixel_pos.x / u_tile_size) {\n"
" col_sum += col01;\n"
" ++count;\n"
" }\n"
"\n"
" const ivec4 col21 = ivec4(imageLoad(u_src_image, p21));\n"
" if (col21 != nocol && (pixel_pos.x + 1) / u_tile_size == pixel_pos.x / u_tile_size) {\n"
" col_sum += col21;\n"
" ++count;\n"
" }\n"
"\n"
" const ivec4 col10 = ivec4(imageLoad(u_src_image, p10));\n"
" if (col10 != nocol && pixel_pos.y != 0 && (pixel_pos.y - 1) / u_tile_size == pixel_pos.y / u_tile_size) {\n"
" col_sum += col10;\n"
" ++count;\n"
" }\n"
"\n"
" const ivec4 col12 = ivec4(imageLoad(u_src_image, p12));\n"
" if (col12 != nocol && (pixel_pos.y + 1) / u_tile_size == pixel_pos.y / u_tile_size) {\n"
" col_sum += col12;\n"
" ++count;\n"
" }\n"
"\n"
" ivec4 col_avg = count == 0 ? col11 : col_sum / count;\n"
"\n"
" imageStore(u_dst_image, pixel_pos, col_avg);\n"
"}\n";
// clang-format on
```
|
```go
/*
* HCS API
*
* No description provided (generated by Swagger Codegen path_to_url
*
* API version: 2.4
* Generated by: Swagger Codegen (path_to_url
*/
package hcsschema
type CPUGroupOperation string
const (
CreateGroup CPUGroupOperation = "CreateGroup"
DeleteGroup CPUGroupOperation = "DeleteGroup"
SetProperty CPUGroupOperation = "SetProperty"
)
```
|
Zagóra is a village in the administrative district of Gmina Wola Uhruska, within Włodawa County, Lublin Voivodeship, in eastern Poland, close to the border with Ukraine.
References
Villages in Włodawa County
|
```c
/* $OpenBSD: siphash.c,v 1.5 2018/01/05 19:05:09 mikeb Exp $ */
/*-
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The name of the author may not be used to endorse or promote
* products derived from this software without specific prior written
* permission.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
/*
* SipHash is a family of PRFs SipHash-c-d where the integer parameters c and d
* are the number of compression rounds and the number of finalization rounds.
* A compression round is identical to a finalization round and this round
* function is called SipRound. Given a 128-bit key k and a (possibly empty)
* byte string m, SipHash-c-d returns a 64-bit value SipHash-c-d(k; m).
*
* Implemented from the paper "SipHash: a fast short-input PRF", 2012.09.18,
* by Jean-Philippe Aumasson and Daniel J. Bernstein,
* Permanent Document ID b9a943a805fbfc6fde808af9fc0ecdfa
* path_to_url
* path_to_url
*/
#include <sys/param.h>
#include <sys/systm.h>
#include <crypto/siphash.h>
static void SipHash_CRounds(SIPHASH_CTX *, int);
static void SipHash_Rounds(SIPHASH_CTX *, int);
void
SipHash_Init(SIPHASH_CTX *ctx, const SIPHASH_KEY *key)
{
uint64_t k0, k1;
k0 = lemtoh64(&key->k0);
k1 = lemtoh64(&key->k1);
ctx->v[0] = 0x736f6d6570736575ULL ^ k0;
ctx->v[1] = 0x646f72616e646f6dULL ^ k1;
ctx->v[2] = 0x6c7967656e657261ULL ^ k0;
ctx->v[3] = 0x7465646279746573ULL ^ k1;
memset(ctx->buf, 0, sizeof(ctx->buf));
ctx->bytes = 0;
}
void
SipHash_Update(SIPHASH_CTX *ctx, int rc, int rf, const void *src, size_t len)
{
const uint8_t *ptr = src;
size_t left, used;
if (len == 0)
return;
used = ctx->bytes % sizeof(ctx->buf);
ctx->bytes += len;
if (used > 0) {
left = sizeof(ctx->buf) - used;
if (len >= left) {
memcpy(&ctx->buf[used], ptr, left);
SipHash_CRounds(ctx, rc);
len -= left;
ptr += left;
} else {
memcpy(&ctx->buf[used], ptr, len);
return;
}
}
while (len >= sizeof(ctx->buf)) {
memcpy(ctx->buf, ptr, sizeof(ctx->buf));
SipHash_CRounds(ctx, rc);
len -= sizeof(ctx->buf);
ptr += sizeof(ctx->buf);
}
if (len > 0)
memcpy(ctx->buf, ptr, len);
}
void
SipHash_Final(void *dst, SIPHASH_CTX *ctx, int rc, int rf)
{
uint64_t r;
htolem64(&r, SipHash_End(ctx, rc, rf));
memcpy(dst, &r, sizeof r);
}
uint64_t
SipHash_End(SIPHASH_CTX *ctx, int rc, int rf)
{
uint64_t r;
size_t left, used;
used = ctx->bytes % sizeof(ctx->buf);
left = sizeof(ctx->buf) - used;
memset(&ctx->buf[used], 0, left - 1);
ctx->buf[7] = ctx->bytes;
SipHash_CRounds(ctx, rc);
ctx->v[2] ^= 0xff;
SipHash_Rounds(ctx, rf);
r = (ctx->v[0] ^ ctx->v[1]) ^ (ctx->v[2] ^ ctx->v[3]);
explicit_bzero(ctx, sizeof(*ctx));
return (r);
}
uint64_t
SipHash(const SIPHASH_KEY *key, int rc, int rf, const void *src, size_t len)
{
SIPHASH_CTX ctx;
SipHash_Init(&ctx, key);
SipHash_Update(&ctx, rc, rf, src, len);
return (SipHash_End(&ctx, rc, rf));
}
#define SIP_ROTL(x, b) (((x) << (b)) | ((x) >> (64 - (b))))
static void
SipHash_Rounds(SIPHASH_CTX *ctx, int rounds)
{
while (rounds--) {
ctx->v[0] += ctx->v[1];
ctx->v[2] += ctx->v[3];
ctx->v[1] = SIP_ROTL(ctx->v[1], 13);
ctx->v[3] = SIP_ROTL(ctx->v[3], 16);
ctx->v[1] ^= ctx->v[0];
ctx->v[3] ^= ctx->v[2];
ctx->v[0] = SIP_ROTL(ctx->v[0], 32);
ctx->v[2] += ctx->v[1];
ctx->v[0] += ctx->v[3];
ctx->v[1] = SIP_ROTL(ctx->v[1], 17);
ctx->v[3] = SIP_ROTL(ctx->v[3], 21);
ctx->v[1] ^= ctx->v[2];
ctx->v[3] ^= ctx->v[0];
ctx->v[2] = SIP_ROTL(ctx->v[2], 32);
}
}
static void
SipHash_CRounds(SIPHASH_CTX *ctx, int rounds)
{
uint64_t m = lemtoh64((uint64_t *)ctx->buf);
ctx->v[3] ^= m;
SipHash_Rounds(ctx, rounds);
ctx->v[0] ^= m;
}
```
|
```python
#
#
# path_to_url
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Offline dump analyzer of TensorFlow Debugger (tfdbg)."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import sys
# Google-internal import(s).
from tensorflow.python.debug.cli import analyzer_cli
from tensorflow.python.debug.lib import debug_data
from tensorflow.python.platform import app
def main(_):
if FLAGS.log_usage:
pass # No logging for open-source.
if not FLAGS.dump_dir:
print("ERROR: dump_dir flag is empty.", file=sys.stderr)
sys.exit(1)
print("tfdbg offline: FLAGS.dump_dir = %s" % FLAGS.dump_dir)
debug_dump = debug_data.DebugDumpDir(
FLAGS.dump_dir, validate=FLAGS.validate_graph)
cli = analyzer_cli.create_analyzer_ui(
debug_dump,
tensor_filters={"has_inf_or_nan": debug_data.has_inf_or_nan},
ui_type=FLAGS.ui_type)
title = "tfdbg offline @ %s" % FLAGS.dump_dir
cli.run_ui(title=title, title_color="black_on_white", init_command="lt")
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.register("type", "bool", lambda v: v.lower() == "true")
parser.add_argument(
"--dump_dir", type=str, default="", help="tfdbg dump directory to load")
parser.add_argument(
"--log_usage",
type="bool",
nargs="?",
const=True,
default=True,
help="Whether the usage of this tool is to be logged")
parser.add_argument(
"--ui_type",
type=str,
default="curses",
help="Command-line user interface type (curses | readline)")
parser.add_argument(
"--validate_graph",
nargs="?",
const=True,
type="bool",
default=True,
help="""\
Whether the dumped tensors will be validated against the GraphDefs\
""")
FLAGS, unparsed = parser.parse_known_args()
app.run(main=main, argv=[sys.argv[0]] + unparsed)
```
|
```java
package com.justwayward.reader.view.epubview;
import android.content.Context;
import android.graphics.Canvas;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.view.ViewConfiguration;
import android.view.ViewParent;
import android.widget.SeekBar;
import com.justwayward.reader.ui.fragment.EPubReaderFragment;
/**
* @author yuyh.
* @date 2016/12/13.
*/
public class VerticalSeekbar extends SeekBar {
private EPubReaderFragment fragment;
private boolean mIsDragging;
private float mTouchDownY;
private int mScaledTouchSlop;
private boolean isInScrollingContainer = false;
private OnSeekBarChangeListener mOnSeekBarChangeListener;
public boolean isInScrollingContainer() {
return isInScrollingContainer;
}
public void setInScrollingContainer(boolean isInScrollingContainer) {
this.isInScrollingContainer = isInScrollingContainer;
}
float mTouchProgressOffset;
public VerticalSeekbar(Context context, AttributeSet attrs, int defStyle) {
super(context, attrs, defStyle);
mScaledTouchSlop = ViewConfiguration.get(context).getScaledTouchSlop();
}
public VerticalSeekbar(Context context, AttributeSet attrs) {
super(context, attrs);
mScaledTouchSlop = ViewConfiguration.get(context).getScaledTouchSlop();
}
public VerticalSeekbar(Context context) {
super(context);
mScaledTouchSlop = ViewConfiguration.get(context).getScaledTouchSlop();
}
@Override
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
super.onSizeChanged(h, w, oldh, oldw);
}
@Override
protected synchronized void onMeasure(int widthMeasureSpec,
int heightMeasureSpec) {
super.onMeasure(heightMeasureSpec, widthMeasureSpec);
setMeasuredDimension(getMeasuredHeight(), getMeasuredWidth());
}
@Override
protected synchronized void onDraw(Canvas canvas) {
//canvas.rotate(-90);
//canvas.translate(-getHeight(), 0);
canvas.rotate(90);
canvas.translate(0, -getWidth());
super.onDraw(canvas);
}
@Override
public boolean onTouchEvent(MotionEvent event) {
if (!isEnabled()) {
return false;
}
switch (event.getAction()) {
case MotionEvent.ACTION_DOWN:
if (isInScrollingContainer()) {
mTouchDownY = event.getY();
} else {
setPressed(true);
invalidate();
onStartTrackingTouch();
trackTouchEvent(event);
attemptClaimDrag();
onSizeChanged(getWidth(), getHeight(), 0, 0);
}
if (fragment != null) {
fragment.removeCallback();
}
break;
case MotionEvent.ACTION_MOVE:
if (mIsDragging) {
trackTouchEvent(event);
} else {
final float y = event.getY();
if (Math.abs(y - mTouchDownY) > mScaledTouchSlop) {
setPressed(true);
invalidate();
onStartTrackingTouch();
trackTouchEvent(event);
attemptClaimDrag();
}
}
onSizeChanged(getWidth(), getHeight(), 0, 0);
break;
case MotionEvent.ACTION_UP:
if (mIsDragging) {
trackTouchEvent(event);
onStopTrackingTouch();
setPressed(false);
} else {
onStartTrackingTouch();
trackTouchEvent(event);
onStopTrackingTouch();
}
onSizeChanged(getWidth(), getHeight(), 0, 0);
invalidate();
if (fragment != null) {
fragment.startCallback();
}
break;
}
return true;
}
private void trackTouchEvent(MotionEvent event) {
final int height = getHeight();
final int top = getPaddingTop();
final int bottom = getPaddingBottom();
final int available = height - top - bottom;
int y = (int) event.getY();
float scale;
float progress = 0;
//
if (y > height - bottom) {
scale = 1.0f;
} else if (y < top) {
scale = 0.0f;
} else {
scale = (float) (y - top) / (float) available;
progress = mTouchProgressOffset;
}
final int max = getMax();
progress += scale * max;
setProgress((int) progress);
// Notify the listener, if one has been registered.
if (mOnSeekBarChangeListener != null) {
mOnSeekBarChangeListener.onProgressChanged(this, (int) progress, true);
}
}
void onStartTrackingTouch() {
mIsDragging = true;
}
void onStopTrackingTouch() {
mIsDragging = false;
}
private void attemptClaimDrag() {
ViewParent p = getParent();
if (p != null) {
p.requestDisallowInterceptTouchEvent(true);
}
}
@Override
public synchronized void setProgress(int progress) {
super.setProgress(progress);
onSizeChanged(getWidth(), getHeight(), 0, 0);
}
public void setOnSeekBarChangeListener(OnSeekBarChangeListener l) {
mOnSeekBarChangeListener = l;
super.setOnSeekBarChangeListener(mOnSeekBarChangeListener);
}
public void setFragment(EPubReaderFragment fragment) {
this.fragment = fragment;
}
}
```
|
```shell
Cache your authentication details to save time
Specify a commit by its ancestry
Create a new branch from a stash
Sign your work
Remember the results of previous hunk conflicts
```
|
The Honourable Tania Rosamund Harcourt-Cooze (née Coleridge, born 22 January 1966) is an English model and actress.
Early life and education
The daughter of Major William Duke Coleridge, 5th Baron Coleridge of Ottery St Mary, an officer in the Coldstream Guards, and his first wife Everild Tania Hambrough, she is related to the poet Samuel Taylor Coleridge. The oldest of five children, she has a brother, James Duke Coleridge (born 1967), a sister, Sophia Tamsin Coleridge (born 1970), and half-sisters, Vanessa Leyla Coleridge (born 1978) and Katharine Suzannah Coleridge (born 1981).
Born in Kenya, Harcourt-Cooze followed her father's British Army career until her parents divorced in 1977, when she was 11.
Career
Modelling
Harcourt-Cooze completed an art history and drama degree at Fine Arts College in London.
She was spotted by Sarah Doukas, who dispatched her in 1986 to model for Armani and Versace in Italy, and she became a muse for Helmut Newton. She starred opposite singer George Michael in the video for "Father Figure". She appeared in the music video of the Kane Roberts song "Twisted" and was the “power drill girl” in Van Halen's "Poundcake" video in 1991.
Management
Returning to England in 2001, she took over the management of The Chanter's House, the family's ancestral home, in March 2002. She and her husband set up the events management company Kubla Khan to organise weddings, fashion shoots, residential art courses, exhibitions, house tours and cultural gatherings based around the house.
In October 2006, the increasing costs of maintaining the property caused the family trust to put the property up for sale and auction the contents.
Media
She came to public prominence again in 2008 with the airing of the fly-on-the-wall documentary, Willie's Wonky Chocolate Factory, centred on her husband's efforts to be one of the first Britons since the Cadbury family to grow, import and produce their own chocolate.
Personal life
She married Willie Harcourt-Cooze, a Burmese-Irish man, in 1993 after meeting him in her late teens. Using the funds from the sale of his London flat and his family's money, the couple purchased a cocoa farm, Hacienda El Tesoro, in the Venezuelan cloud forest in the Henri Pittier National Park near Choroni Beach, and planted more than 50,000 Criollo cocoa trees.
She lives in Tiverton, Devon, and has three children: Sophia, William and Eve.
She and her husband separated in May 2010, and their divorce was finalised in 2011.
See also
Baron Coleridge
Ottery St Mary
References and footnotes
External links
1966 births
Living people
English people of Kenyan descent
English female models
Daughters of barons
English film actresses
Tania
|
Pioneer Valley Performing Arts Charter Public School (PVPA) is a public charter school in South Hadley, Massachusetts, United States. It was established in 1996 as part of the Massachusetts Educational Reform. It was originally located in Hadley, Massachusetts, but relocated to South Hadley for its tenth year in 2005.
Overview
Pioneer Valley Performing Arts Charter Public School in Western Massachusetts is known for the performing and visual arts concentrations woven into its curriculum. Students are required to take all courses that Massachusetts public schools mandate (mathematics, science, English, languages, history, etc.) but also participate in a wide variety of performing arts and visual art courses. These include courses in instrumental and vocal music, many styles of dance, theater, visual arts, film, and musical theater. Students also take part, through a democratic process, in revising the school charter when it comes up for renewal.
The school has two performance spaces, a main theater and a studio theater. In the 2005–06 school year, PVPA arranged with the Academy of Music to stage its annual musical production at the Academy's facilities. From 2009 to 2011, PVPA moved its musicals to the University of Massachusetts Amherst's Bowker Auditorium. In 2012, PVPA's musical productions returned to the Academy of Music, and many of the school's other productions were held there as well. In 2016, PVPA opened a new mainstage theater at its South Hadley campus, where the majority of PVPA productions are now held.
Professional Performance Groups/Productions
Music
Groovy Truth Jazz Ensemble
Pop RnB
Earwrum Rock Ensemble
Spectrum ACapella
Rock and Soul Revue
Instrumental Evolution 2016-2023
Dance
Catalyst (Contemporary)
WOFA (African Drum and Dance)
Senior Dance Thesis
Funkadelic (Hip Hop), 2016-2019
Theatre
Fall High School Play
Winter Musical
Spring High School Play
Spring Middle School Play
Senior Theatre Thesis
Headgear Sketch Comedy
Notable alumni
Michael Brooks, political commentator, talk show host, and comedian
Elisha Yaffe, comedian, actor and producer
Naia Kete, singer and songwriter
Seth Glier, singer and songwriter
Sonya Kitchell, singer and songwriter
Zoe Weizenbaum, actor
Shankar Tucker, clarinetist and composer
Alsarah, singer-songwriter and ethnomusicologist
References
External links
Charter high schools in Massachusetts
Schools of the performing arts in the United States
Charter middle schools in Massachusetts
Buildings and structures in South Hadley, Massachusetts
|
Bangladesh competed at the 2008 Asian Beach Games, held in Bali, Indonesia, from October 18 to 26, 2008. Bangladesh finished with one bronze medal.
Nations at the 2008 Asian Beach Games
2008
Asian Beach Games
|
```go
// Package conf holds configuration constants and helper functions for the application.
package conf
import (
"strings"
"fmt"
// beego "github.com/beego/beego/v2/adapter"
"os"
"path/filepath"
"strconv"
"github.com/beego/beego/v2/server/web"
)
// Session
const LoginSessionName = "LoginSessionName"
const CaptchaSessionName = "__captcha__"
const RegexpEmail = "^[a-zA-Z0-9.!#$%&'*+\\/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$"
// RegexpAccount validates account names.
const RegexpAccount = `^[a-zA-Z0-9][a-zA-Z0-9\.-]{2,50}$`
// PageSize .
const PageSize = 10
// System-wide member roles.
const (
// MemberSuperRole is the super administrator role.
MemberSuperRole SystemRole = iota
// MemberAdminRole is the administrator role.
MemberAdminRole
// MemberGeneralRole is the ordinary member role.
MemberGeneralRole
)

// SystemRole is the system-wide role of a member.
type SystemRole int
const (
// BookFounder is the creator of a book.
BookFounder BookRole = iota
// BookAdmin is a book administrator.
BookAdmin
// BookEditor can edit a book's documents.
BookEditor
// BookObserver has read-only access to a book.
BookObserver
// BookRoleNoSpecific indicates that no specific role is assigned.
BookRoleNoSpecific
)

// BookRole is a member's role within a book.
type BookRole int
const (
LoggerOperate = "operate"
LoggerSystem = "system"
LoggerException = "exception"
LoggerDocument = "document"
)
const (
// AuthMethodLocal is local account authentication.
AuthMethodLocal = "local"
// AuthMethodLDAP is LDAP authentication.
AuthMethodLDAP = "ldap"
)
var (
VERSION string
BUILD_TIME string
GO_VERSION string
)
var (
ConfigurationFile = "./conf/app.conf"
WorkingDirectory = "./"
LogFile = "./runtime/logs"
BaseUrl = ""
AutoLoadDelay = 0
)
// GetAppKey returns the configured app_key, defaulting to "mindoc".
func GetAppKey() string {
return web.AppConfig.DefaultString("app_key", "mindoc")
}
func GetDatabasePrefix() string {
return web.AppConfig.DefaultString("db_prefix", "md_")
}
//
func GetDefaultAvatar() string {
return URLForWithCdnImage(web.AppConfig.DefaultString("avatar", "/static/mindoc/images/headimgurl.jpg"))
}
//.
func GetTokenSize() int {
return web.AppConfig.DefaultInt("token_size", 12)
}
//.
func GetDefaultCover() string {
return URLForWithCdnImage(web.AppConfig.DefaultString("cover", "/static/images/book.jpg"))
}
// GetUploadFileExt returns the allowed upload file extensions.
func GetUploadFileExt() []string {
ext := web.AppConfig.DefaultString("upload_file_ext", "png|jpg|jpeg|gif|txt|doc|docx|pdf")
temp := strings.Split(ext, "|")
exts := make([]string, 0, len(temp))
for _, item := range temp {
if item != "" {
exts = append(exts, item)
}
}
return exts
}
//
func GetUploadFileSize() int64 {
size := web.AppConfig.DefaultString("upload_file_size", "0")
if strings.HasSuffix(size, "TB") {
if s, e := strconv.ParseInt(size[0:len(size)-2], 10, 64); e == nil {
return s * 1024 * 1024 * 1024 * 1024
}
}
if strings.HasSuffix(size, "GB") {
if s, e := strconv.ParseInt(size[0:len(size)-2], 10, 64); e == nil {
return s * 1024 * 1024 * 1024
}
}
if strings.HasSuffix(size, "MB") {
if s, e := strconv.ParseInt(size[0:len(size)-2], 10, 64); e == nil {
return s * 1024 * 1024
}
}
if strings.HasSuffix(size, "KB") {
if s, e := strconv.ParseInt(size[0:len(size)-2], 10, 64); e == nil {
return s * 1024
}
}
if s, e := strconv.ParseInt(size, 10, 64); e == nil {
return s
}
return 0
}
//
func GetEnableExport() bool {
return web.AppConfig.DefaultBool("enable_export", true)
}
//iframe
func GetEnableIframe() bool {
return web.AppConfig.DefaultBool("enable_iframe", false)
}
//
func GetExportProcessNum() int {
exportProcessNum := web.AppConfig.DefaultInt("export_process_num", 1)
if exportProcessNum <= 0 || exportProcessNum > 4 {
exportProcessNum = 1
}
return exportProcessNum
}
//
func GetExportLimitNum() int {
exportLimitNum := web.AppConfig.DefaultInt("export_limit_num", 1)
if exportLimitNum < 0 {
exportLimitNum = 1
}
return exportLimitNum
}
//
func GetExportQueueLimitNum() int {
exportQueueLimitNum := web.AppConfig.DefaultInt("export_queue_limit_num", 10)
if exportQueueLimitNum <= 0 {
exportQueueLimitNum = 100
}
return exportQueueLimitNum
}
//
func GetExportOutputPath() string {
exportOutputPath := filepath.Join(web.AppConfig.DefaultString("export_output_path", filepath.Join(WorkingDirectory, "cache")), "books")
return exportOutputPath
}
//.
func IsAllowUploadFileExt(ext string) bool {
ext = strings.TrimPrefix(ext, ".")
exts := GetUploadFileExt()
for _, item := range exts {
if item == "*" {
return true
}
if strings.EqualFold(item, ext) {
return true
}
}
return false
}
//
func CONF(key string, value ...string) string {
defaultValue := ""
if len(value) > 0 {
defaultValue = value[0]
}
return web.AppConfig.DefaultString(key, defaultValue)
}
//URL
func URLFor(endpoint string, values ...interface{}) string {
baseUrl := web.AppConfig.DefaultString("baseurl", "")
pathUrl := web.URLFor(endpoint, values...)
if baseUrl == "" {
baseUrl = BaseUrl
}
if strings.HasPrefix(pathUrl, "http://") || strings.HasPrefix(pathUrl, "https://") {
return pathUrl
}
if strings.HasPrefix(pathUrl, "/") && strings.HasSuffix(baseUrl, "/") {
return baseUrl + pathUrl[1:]
}
if !strings.HasPrefix(pathUrl, "/") && !strings.HasSuffix(baseUrl, "/") {
return baseUrl + "/" + pathUrl
}
return baseUrl + pathUrl
}
func URLForNotHost(endpoint string, values ...interface{}) string {
baseUrl := web.AppConfig.DefaultString("baseurl", "")
pathUrl := web.URLFor(endpoint, values...)
if baseUrl == "" {
baseUrl = "/"
}
if strings.HasPrefix(pathUrl, "http://") || strings.HasPrefix(pathUrl, "https://") {
return pathUrl
}
if strings.HasPrefix(pathUrl, "/") && strings.HasSuffix(baseUrl, "/") {
return baseUrl + pathUrl[1:]
}
if !strings.HasPrefix(pathUrl, "/") && !strings.HasSuffix(baseUrl, "/") {
return baseUrl + "/" + pathUrl
}
return baseUrl + pathUrl
}
func URLForWithCdnImage(p string) string {
if strings.HasPrefix(p, "http://") || strings.HasPrefix(p, "https://") {
return p
}
cdn := web.AppConfig.DefaultString("cdnimg", "")
//cdnbaseURL
if cdn == "" {
baseUrl := web.AppConfig.DefaultString("baseurl", "/")
if strings.HasPrefix(p, "/") && strings.HasSuffix(baseUrl, "/") {
return baseUrl + p[1:]
}
if !strings.HasPrefix(p, "/") && !strings.HasSuffix(baseUrl, "/") {
return baseUrl + "/" + p
}
return baseUrl + p
}
if strings.HasPrefix(p, "/") && strings.HasSuffix(cdn, "/") {
return cdn + p[1:]
}
if !strings.HasPrefix(p, "/") && !strings.HasSuffix(cdn, "/") {
return cdn + "/" + p
}
return cdn + p
}
func URLForWithCdnCss(p string, v ...string) string {
cdn := web.AppConfig.DefaultString("cdncss", "")
if strings.HasPrefix(p, "http://") || strings.HasPrefix(p, "https://") {
return p
}
filePath := WorkingDir(p)
if f, err := os.Stat(filePath); err == nil && !strings.Contains(p, "?") && len(v) > 0 && v[0] == "version" {
p = p + fmt.Sprintf("?v=%s", f.ModTime().Format("20060102150405"))
}
//cdnbaseURL
if cdn == "" {
baseUrl := web.AppConfig.DefaultString("baseurl", "/")
if strings.HasPrefix(p, "/") && strings.HasSuffix(baseUrl, "/") {
return baseUrl + p[1:]
}
if !strings.HasPrefix(p, "/") && !strings.HasSuffix(baseUrl, "/") {
return baseUrl + "/" + p
}
return baseUrl + p
}
if strings.HasPrefix(p, "/") && strings.HasSuffix(cdn, "/") {
return cdn + p[1:]
}
if !strings.HasPrefix(p, "/") && !strings.HasSuffix(cdn, "/") {
return cdn + "/" + p
}
return cdn + p
}
func URLForWithCdnJs(p string, v ...string) string {
cdn := web.AppConfig.DefaultString("cdnjs", "")
if strings.HasPrefix(p, "http://") || strings.HasPrefix(p, "https://") {
return p
}
filePath := WorkingDir(p)
if f, err := os.Stat(filePath); err == nil && !strings.Contains(p, "?") && len(v) > 0 && v[0] == "version" {
p = p + fmt.Sprintf("?v=%s", f.ModTime().Format("20060102150405"))
}
//cdnbaseURL
if cdn == "" {
baseUrl := web.AppConfig.DefaultString("baseurl", "/")
if strings.HasPrefix(p, "/") && strings.HasSuffix(baseUrl, "/") {
return baseUrl + p[1:]
}
if !strings.HasPrefix(p, "/") && !strings.HasSuffix(baseUrl, "/") {
return baseUrl + "/" + p
}
return baseUrl + p
}
if strings.HasPrefix(p, "/") && strings.HasSuffix(cdn, "/") {
return cdn + p[1:]
}
if !strings.HasPrefix(p, "/") && !strings.HasSuffix(cdn, "/") {
return cdn + "/" + p
}
return cdn + p
}
func WorkingDir(elem ...string) string {
elems := append([]string{WorkingDirectory}, elem...)
return filepath.Join(elems...)
}
func init() {
if p, err := filepath.Abs("./conf/app.conf"); err == nil {
ConfigurationFile = p
}
if p, err := filepath.Abs("./"); err == nil {
WorkingDirectory = p
}
if p, err := filepath.Abs("./runtime/logs"); err == nil {
LogFile = p
}
}
```
|
```php
<?php
/*
* This file is part of the Kimai time-tracking app.
*
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*/
namespace App\Event;
use App\Entity\Project;
use Symfony\Contracts\EventDispatcher\Event;
/**
* This event can be used, to dynamically add meta fields to projects
*/
final class ProjectMetaDefinitionEvent extends Event
{
public function __construct(private Project $entity)
{
}
public function getEntity(): Project
{
return $this->entity;
}
}
```
|
Santo Antônio de Pádua (, Saint Anthony of Padua) is a municipality located in the northeastern part of the Brazilian state of Rio de Janeiro. Its population was 42,594 (2020) and its area is 612 km².
Districts
Santo Antônio de Pádua (seat)
Baltazar
Campelo
Ibitiguaçu
Marangatu
Monte Alegre
Paraoquena
Santa Cruz
Neighboring municipalities
Miracema
São José de Ubá
Cambuci
Aperibé
Itaocara
Pirapetinga (MG)
Coat of arms
The coat of arms of Santo Antônio de Pádua features eight stars representing eight of its districts on a navy blue field of the shield with a tree on the right and a fountain in the middle. The municipality's name is centered in the shield and it runs diagonally from bottom left to top right. The shield is bordered with four orange rectangles on each side. The crown is on the top of the coat of arms.
External links
Santo Antônio de Pádua City Hall
References
Municipalities in Rio de Janeiro (state)
|
Richard Martin Rodas (born November 7, 1959) is a former pitcher in Major League Baseball. He pitched in 10 games for the Los Angeles Dodgers in the 1983 and 1984 seasons.
External links
Pura Pelota
1959 births
Living people
Albuquerque Dukes players
Lethbridge Dodgers players
Los Angeles Dodgers players
Major League Baseball pitchers
Sacramento City College alumni
Sacramento City Panthers baseball players
San Antonio Dodgers players
Sportspeople from Roseville, California
Baseball players from Placer County, California
Tigres de Aragua players
American expatriate baseball players in Venezuela
|
Always on My Mind is an album by saxophonist Houston Person featuring jazz versions of pop hits recorded in 1985 and released on the Muse label early the following year.
Track listing
"I Can't Help Myself" (Brian Holland, Lamont Dozier, Eddie Holland) − 4:30
"Always on My Mind" (Johnny Christopher, Mark James, Wayne Carson) − 5:28
"Endlessly" (Brook Benton, Clyde Otis) − 7:08
"How Do You Keep the Music Playing?" (Michel Legrand, Alan Bergman, Marilyn Bergman) − 6:14
"Cutie Pie" (Al Hudson, Dave Roberson, George Morgan, Corky Meadows, Terence Dudle) − 5:35
"I Might Be You (Theme from Tootsie)" (Dave Grusin, Alan Bergman, Marilyn Bergman) − 5:58
Personnel
Houston Person − tenor saxophone
David Braham − organ, electric piano
Ted Brancato − piano, keyboards
Wilbur Bascomb − bass
Bernard Purdie − drums
Ralph Dorsey − percussion
References
Houston Person albums
1986 albums
Muse Records albums
|
```java
package com.reactnativenavigation;
import com.google.android.material.floatingactionbutton.FloatingActionButton;
import androidx.appcompat.app.AppCompatActivity;
import com.facebook.react.common.*;
import org.junit.*;
import org.robolectric.*;
import static org.assertj.core.api.Java6Assertions.*;
import com.reactnativenavigation.R;
public class EnvironmentTest extends BaseTest {
@Test
public void assertJ() {
assertThat(1 + 2).isEqualTo(3).isGreaterThan(2).isLessThan(4).isNotNegative().isPositive().isNotZero();
}
@Test
public void react() {
assertThat(ReactConstants.TAG).isNotEmpty();
}
@Test
public void supportV7AppCompat() {
assertThat(AppCompatActivity.class).isNotNull();
}
@Test
public void supportDesign() {
assertThat(FloatingActionButton.class).isNotNull();
}
@Test
public void androidR() {
assertThat(com.google.android.material.R.string.bottom_sheet_behavior).isNotZero();
}
@Test
public void ableToLoadApplication() throws Exception {
assertThat(RuntimeEnvironment.application).isNotNull();
}
}
```
|
```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "path_to_url">
<html xmlns="path_to_url">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen 1.8.10"/>
<title>Introduction_to_Algorithms: src/select_algorithms/randomized_select Directory Reference</title>
<link href="tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="dynsections.js"></script>
<link href="navtree.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="resize.js"></script>
<script type="text/javascript" src="navtreedata.js"></script>
<script type="text/javascript" src="navtree.js"></script>
<script type="text/javascript">
$(document).ready(initResizable);
$(window).load(resizeHeight);
</script>
<link href="search/search.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="search/searchdata.js"></script>
<script type="text/javascript" src="search/search.js"></script>
<script type="text/javascript">
$(document).ready(function() { init_search(); });
</script>
<link href="doxygen.css" rel="stylesheet" type="text/css" />
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<table cellspacing="0" cellpadding="0">
<tbody>
<tr style="height: 56px;">
<td id="projectalign" style="padding-left: 0.5em;">
<div id="projectname">Introduction_to_Algorithms
</div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.8.10 -->
<script type="text/javascript">
var searchBox = new SearchBox("searchBox", "search",false,'Search');
</script>
<div id="navrow1" class="tabs">
<ul class="tablist">
<li><a href="index.html"><span>Main Page</span></a></li>
<li><a href="namespaces.html"><span>Namespaces</span></a></li>
<li><a href="annotated.html"><span>Classes</span></a></li>
<li><a href="files.html"><span>Files</span></a></li>
<li>
<div id="MSearchBox" class="MSearchBoxInactive">
<span class="left">
<img id="MSearchSelect" src="search/mag_sel.png"
onmouseover="return searchBox.OnSearchSelectShow()"
onmouseout="return searchBox.OnSearchSelectHide()"
alt=""/>
<input type="text" id="MSearchField" value="Search" accesskey="S"
onfocus="searchBox.OnSearchFieldFocus(true)"
onblur="searchBox.OnSearchFieldFocus(false)"
onkeyup="searchBox.OnSearchFieldChange(event)"/>
</span><span class="right">
<a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a>
</span>
</div>
</li>
</ul>
</div>
</div><!-- top -->
<div id="side-nav" class="ui-resizable side-nav-resizable">
<div id="nav-tree">
<div id="nav-tree-contents">
<div id="nav-sync" class="sync"></div>
</div>
</div>
<div id="splitbar" style="-moz-user-select:none;"
class="ui-resizable-handle">
</div>
</div>
<script type="text/javascript">
$(document).ready(function(){initNavTree('dir_5723e9a5e0b76787341f535e1c224e9d.html','');});
</script>
<div id="doc-content">
<!-- window showing the filter options -->
<div id="MSearchSelectWindow"
onmouseover="return searchBox.OnSearchSelectShow()"
onmouseout="return searchBox.OnSearchSelectHide()"
onkeydown="return searchBox.OnSearchSelectKey(event)">
</div>
<!-- iframe showing the search results (closed by default) -->
<div id="MSearchResultsWindow">
<iframe src="javascript:void(0)" frameborder="0"
name="MSearchResults" id="MSearchResults">
</iframe>
</div>
<div class="header">
<div class="headertitle">
<div class="title">randomized_select Directory Reference</div> </div>
</div><!--header-->
<div class="contents">
<table class="memberdecls">
<tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="files"></a>
Files</h2></td></tr>
<tr class="memitem:randomizedselect_8h"><td class="memItemLeft" align="right" valign="top">file  </td><td class="memItemRight" valign="bottom"><a class="el" href="randomizedselect_8h.html">randomizedselect.h</a> <a href="randomizedselect_8h_source.html">[code]</a></td></tr>
<tr class="separator:"><td class="memSeparator" colspan="2"> </td></tr>
<tr class="memitem:randomizedselect__test_8h"><td class="memItemLeft" align="right" valign="top">file  </td><td class="memItemRight" valign="bottom"><a class="el" href="randomizedselect__test_8h.html">randomizedselect_test.h</a> <a href="randomizedselect__test_8h_source.html">[code]</a></td></tr>
<tr class="separator:"><td class="memSeparator" colspan="2"> </td></tr>
</table>
</div><!-- contents -->
</div><!-- doc-content -->
<!-- start footer part -->
<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
<ul>
<li class="navelem"><a class="el" href="dir_8ed9dba4cd616b67da9b1338594e34e0.html">src</a></li><li class="navelem"><a class="el" href="dir_5b617f5d35050df2d00af2999eb09f44.html">select_algorithms</a></li><li class="navelem"><a class="el" href="dir_5723e9a5e0b76787341f535e1c224e9d.html">randomized_select</a></li>
<li class="footer">Generated by
<a href="path_to_url">
<img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.10 </li>
</ul>
</div>
</body>
</html>
```
|
Skippy is an American brand of peanut butter spread manufactured in the United States and China. First sold in 1932, Skippy is currently manufactured by Hormel Foods, which bought the brand from Unilever in 2013. It is the best-selling brand of peanut butter in China and second only to the J.M. Smucker Company's Jif brand worldwide.
Brand name
Percy Crosby, creator of the popular "Skippy" comic strip (1923–1945), which had been adapted into the 1929 novel Skippy, the daytime children's radio serial Skippy (1932–1935), and the Oscar-winning 1931 film Skippy, had trademarked the name "Skippy" in 1925.
In 1932, the Alameda, California food packer Joseph L. Rosefield began to sell its second hydrogenated peanut butter, which it labeled "Skippy" without permission; Crosby successfully had the trademark invalidated in 1934. Rosefield persisted in using the name, and after Crosby was committed to an asylum and after the passage in 1946 of the Lanham Act, Rosefield was granted rights to the trademark.
Product
In 1955, Rosefield sold the brand to Best Foods. Its successor companies, most recently Unilever and Hormel, claim rights to the trademark over the objection of Crosby's heirs, and much litigation has occurred on this point over the decades, some of which has continued into the 2000s.
Skippy is sold in many different sizes, including a jar, known as the "Family Jar". In late 2000, Skippy reduced their standard jar size from to by adding a "dimple" in the bottom of the jar while retaining the jar's height and diameter.
Production
Skippy has factories in Little Rock, Arkansas, and Shandong Province, China. About 750,000 pounds of peanuts are brought daily to the Skippy Peanut Butter plant in Little Rock, Arkansas, resulting in over 3.5 million pounds of peanut butter produced each week.
There are 14 different varieties of Skippy Peanut Butter Spread.
Skippy Creamy Peanut Butter
Skippy Super Chunk Peanut Butter
Skippy Roasted Honey Nut Creamy Peanut Butter
Skippy Reduced Fat Creamy Peanut Butter
Skippy Reduced Fat Super Chunk Peanut Butter
Skippy Peanut Butter Blended with Plant Protein Creamy
Skippy Peanut Butter Blended with Plant Protein Chunky
Skippy Creamy Peanut Butter Spread No Sugar Added
Skippy Chunky Peanut Butter Spread No Sugar Added
Skippy Natural Creamy Peanut Butter Spread
Skippy Natural Creamy Peanut Butter Spread with Honey
Skippy Natural Super Chunk Peanut Butter Spread
Skippy Natural Super Chunk Peanut Butter Spread with Honey
Skippy Natural Creamy 1/3 Less Sodium & Sugar Peanut Butter Spread
Skippy is also available in a 6 oz. squeeze pack in Creamy or Natural Peanut Butter Spread and 1.15 oz. individual squeeze 8 packs in Creamy or Natural Peanut Butter Spread.
In 2018, Skippy added Skippy P.B. Fruit Bites to their Skippy P.B. Bites that were already available in Double Peanut Butter, Pretzel and Graham Cracker.
On September 12, 2018, Skippy announced a new line of Skippy P.B. & Jelly Minis in Peanut Butter & Grape Jelly, Natural Peanut Butter & Grape Jelly and Peanut Butter & Strawberry Jelly.
Nutrition
Skippy Peanut Butter is a cholesterol-free and gluten-free food. All varieties of Skippy Peanut Butter are also kosher except the Skippy P.B. bites.
References
External links
Skippy Peanut Butter Commercial (1958) from Youtube.com (Commercial Describing Skippy History)
Peanut butter brands
Products introduced in 1932
Hormel Foods brands
Former Unilever brands
|
```java
/**
*
*
* A Processing/Java library for high performance GPU-Computing (GLSL).
*
*/
package SoftBody2D.SoftBody2D_Playground;
import java.util.ArrayList;
import com.thomasdiewald.pixelflow.java.DwPixelFlow;
import com.thomasdiewald.pixelflow.java.softbodydynamics.DwPhysics;
import com.thomasdiewald.pixelflow.java.softbodydynamics.constraint.DwSpringConstraint;
import com.thomasdiewald.pixelflow.java.softbodydynamics.particle.DwParticle;
import com.thomasdiewald.pixelflow.java.softbodydynamics.particle.DwParticle2D;
import com.thomasdiewald.pixelflow.java.softbodydynamics.softbody.DwSoftBall2D;
import com.thomasdiewald.pixelflow.java.softbodydynamics.softbody.DwSoftBody2D;
import com.thomasdiewald.pixelflow.java.softbodydynamics.softbody.DwSoftGrid2D;
import com.thomasdiewald.pixelflow.java.utils.DwStrokeStyle;
import controlP5.Accordion;
import controlP5.ControlP5;
import controlP5.Group;
import processing.core.*;
public class SoftBody2D_Playground extends PApplet {
//
// 2D Softbody Sandbox, to debug/test/profile everything.
//
// Lots of different objects are created of particle-arrays and spring-constraints.
// Everything can collide with everything and also be destroyed (RMB).
//
// + Collision Detection
//
// Controls:
// LMB: drag particles
// MMB: drag + fix particles to a location
// RMB: disable springs, to deform objects
//
// + GUI
//
int viewport_w = 1280;
int viewport_h = 720;
int viewport_x = 230;
int viewport_y = 0;
int gui_w = 200;
int gui_x = viewport_w-gui_w;
int gui_y = 0;
// physics parameters
DwPhysics.Param param_physics = new DwPhysics.Param();
// particle parameters: same behavior for all
DwParticle.Param param_particle = new DwParticle.Param();
// spring parameters: different spring behavior for different bodies
DwSpringConstraint.Param param_spring_cloth = new DwSpringConstraint.Param();
DwSpringConstraint.Param param_spring_softbody = new DwSpringConstraint.Param();
DwSpringConstraint.Param param_spring_chain = new DwSpringConstraint.Param();
DwSpringConstraint.Param param_spring_circle = new DwSpringConstraint.Param();
// physics simulation
DwPhysics<DwParticle2D> physics;
// list that will store the soft bodies
ArrayList<DwSoftBody2D> softbodies = new ArrayList<DwSoftBody2D>();
// 0 ... default: particles, spring
// 1 ... tension
int DISPLAY_MODE = 0;
// entities to display
boolean DISPLAY_PARTICLES = true;
boolean DISPLAY_MESH = !true;
boolean DISPLAY_SRPINGS = true;
boolean DISPLAY_SPRINGS_STRUCT = true;
boolean DISPLAY_SPRINGS_SHEAR = true;
boolean DISPLAY_SPRINGS_BEND = true;
boolean UPDATE_PHYSICS = true;
// first thing to do, inside draw()
boolean NEED_REBUILD = false;
public void settings(){
size(viewport_w, viewport_h, P2D);
smooth(8);
}
public void setup() {
surface.setLocation(viewport_x, viewport_y);
// main library context
DwPixelFlow context = new DwPixelFlow(this);
context.print();
// context.printGL();
physics = new DwPhysics<DwParticle2D>(param_physics);
// global physics parameters
param_physics.GRAVITY = new float[]{ 0, 0.2f };
param_physics.bounds = new float[]{ 0, 0, width, height };
param_physics.iterations_collisions = 4;
param_physics.iterations_springs = 4;
// particle parameters
param_particle.DAMP_BOUNDS = 0.40f;
param_particle.DAMP_COLLISION = 0.9990f;
param_particle.DAMP_VELOCITY = 0.991f;
// spring parameters
param_spring_cloth .damp_dec = 0.999999f;
param_spring_cloth .damp_inc = 0.000599f;
param_spring_softbody.damp_dec = 0.999999f;
param_spring_softbody.damp_inc = 0.999999f;
param_spring_chain .damp_dec = 0.599999f;
param_spring_chain .damp_inc = 0.599999f;
param_spring_circle .damp_dec = 0.999999f;
param_spring_circle .damp_inc = 0.999999f;
createBodies();
createGUI();
frameRate(60);
}
public void createBodies(){
physics.reset();
softbodies.clear();
// create some particle-bodies: Cloth / SoftBody
float r,g,b,a,s;
int nodex_x, nodes_y, nodes_r;
float nodes_start_x, nodes_start_y;
// cloth
{
nodex_x = 30;
nodes_y = 30;
nodes_r = 7;
nodes_start_x = 50;
nodes_start_y = 70;
DwSoftGrid2D body = new DwSoftGrid2D();
body.CREATE_SHEAR_SPRINGS = true;
body.CREATE_BEND_SPRINGS = true;
body.bend_spring_mode = 2;
r = 255;
g = 180;
b = 0;
a = 160;
s = 1f;
body.setMaterialColor(color(r ,g ,b , a));
body.setParticleColor(color(r*s,g*s,b*s, a));
body.setParam(param_particle);
body.setParam(param_spring_cloth);
body.create(physics, nodex_x, nodes_y, nodes_r, nodes_start_x, nodes_start_y);
body.getNode( 0, 0).enable(false, false, false); // fix node to current location
body.getNode(body.nodes_x-1, 0).enable(false, false, false); // fix node to current location
body.createShapeParticles(this);
softbodies.add(body);
}
// grid
{
nodex_x = 10;
nodes_y = 20;
nodes_r = 7;
nodes_start_x = width/2;
nodes_start_y = height/2;
DwSoftGrid2D body = new DwSoftGrid2D();
body.CREATE_SHEAR_SPRINGS = true;
body.CREATE_BEND_SPRINGS = true;
body.bend_spring_mode = 2;
r = 0;
g = 0;
b = 0;
a = 128;
s = 1f;
body.setMaterialColor(color(r ,g ,b , a));
body.setParticleColor(color(r*s,g*s,b*s, a));
body.setParam(param_particle);
body.setParam(param_spring_softbody);
body.create(physics, nodex_x, nodes_y, nodes_r, nodes_start_x, nodes_start_y);
body.createShapeParticles(this);
softbodies.add(body);
}
// grid
{
nodex_x = 7;
nodes_y = 22;
nodes_r = 7;
nodes_start_x = 500;
nodes_start_y = 300;
DwSoftGrid2D body = new DwSoftGrid2D();
body.CREATE_SHEAR_SPRINGS = true;
body.CREATE_BEND_SPRINGS = true;
body.bend_spring_mode = 0;
r = 0;
g = 180;
b = 255;
a = 160;
s = 1f;
body.setMaterialColor(color(r ,g ,b , a));
body.setParticleColor(color(r*s,g*s,b*s, a));
body.setParam(param_particle);
body.setParam(param_spring_softbody);
body.create(physics, nodex_x, nodes_y, nodes_r, nodes_start_x, nodes_start_y);
body.getNode(0, 0).enable(false, false, false); // fix node to current location
body.createShapeParticles(this);
softbodies.add(body);
}
// lattice girder
{
nodex_x = 15;
nodes_y = 2;
nodes_r = 20;
nodes_start_x = 500;
nodes_start_y = 100;
DwSoftGrid2D body = new DwSoftGrid2D();
body.CREATE_SHEAR_SPRINGS = true;
body.CREATE_BEND_SPRINGS = true;
body.bend_spring_mode = 0;
r = 0;
g = 0;
b = 0;
a = 128;
s = 1f;
body.setMaterialColor(color(r ,g ,b , a));
body.setParticleColor(color(r*s,g*s,b*s, a));
body.setParam(param_particle);
body.setParam(param_spring_softbody);
body.create(physics, nodex_x, nodes_y, nodes_r, nodes_start_x, nodes_start_y);
body.getNode(0, 0).enable(false, false, false); // fix node to current location
body.getNode(0, 1).enable(false, false, false); // fix node to current location
body.createShapeParticles(this);
softbodies.add(body);
}
// chain
{
nodex_x = 70;
nodes_y = 1;
nodes_r = 10;
nodes_start_x = 500;
nodes_start_y = 200;
DwSoftGrid2D body = new DwSoftGrid2D();
body.CREATE_BEND_SPRINGS = false;
body.CREATE_SHEAR_SPRINGS = false;
body.self_collisions = true; // particles of this body can collide among themselves
body.collision_radius_scale = 1.00f; // funny, if bigger than 1 and self_collisions = true
r = 0;
g = 0;
b = 0;
a = 128;
s = 1f;
body.setMaterialColor(color(r ,g ,b , a));
body.setParticleColor(color(r*s,g*s,b*s, a));
body.setParam(param_particle);
body.setParam(param_spring_chain);
body.create(physics, nodex_x, nodes_y, nodes_r, nodes_start_x, nodes_start_y);
body.getNode( 0, 0).enable(false, false, false); // fix node to current location
body.getNode(35, 0).enable(false, false, false);
body.createShapeParticles(this);
softbodies.add(body);
}
// circle
{
nodes_r = 10;
nodes_start_x = 300;
nodes_start_y = height-150;
DwSoftBall2D body = new DwSoftBall2D();
body.CREATE_BEND_SPRINGS = false;
body.CREATE_SHEAR_SPRINGS = false;
body.bend_spring_mode = 0;
body.bend_spring_dist = 8;
r = 0;
g = 0;
b = 0;
a = 160;
s = 1f;
body.setMaterialColor(color(r ,g ,b , a));
body.setParticleColor(color(r*s,g*s,b*s, a));
body.setParam(param_particle);
body.setParam(param_spring_circle);
body.create(physics, nodes_start_x, nodes_start_y, 70, nodes_r);
body.createShapeParticles(this);
softbodies.add(body);
}
}
public void draw() {
if(NEED_REBUILD){
createBodies();
NEED_REBUILD = false;
}
updateMouseInteractions();
// update physics simulation
physics.update(1);
// render
background(DISPLAY_MODE == 0 ? 255 : 92);
// 3) mesh, solid
if(DISPLAY_MESH){
for(DwSoftBody2D body : softbodies){
body.createShapeMesh(this.g);
}
}
// 1) particles
if(DISPLAY_PARTICLES){
for(DwSoftBody2D body : softbodies){
// body.use_particles_color = (DISPLAY_MODE == 0);
body.displayParticles(this.g);
}
}
// 2) mesh, solid
if(DISPLAY_MESH){
for(DwSoftBody2D body : softbodies){
body.displayMesh(this.g);
}
}
if(DISPLAY_SRPINGS){
for(DwSoftBody2D body : softbodies){
body.shade_springs_by_tension = (DISPLAY_MODE == 1);
body.displaySprings(this.g, new DwStrokeStyle(color(255, 90, 30), 0.3f), DwSpringConstraint.TYPE.BEND);
body.displaySprings(this.g, new DwStrokeStyle(color( 70, 140, 255), 0.6f), DwSpringConstraint.TYPE.SHEAR);
body.displaySprings(this.g, new DwStrokeStyle(color( 0, 0, 0), 1.0f), DwSpringConstraint.TYPE.STRUCT);
}
}
// interaction stuff
if(DELETE_SPRINGS){
fill(255,64);
stroke(0);
strokeWeight(1);
ellipse(mouseX, mouseY, DELETE_RADIUS*2, DELETE_RADIUS*2);
}
// info
int NUM_SPRINGS = physics.getSpringCount();
int NUM_PARTICLES = physics.getParticlesCount();
String txt_fps = String.format(getClass().getName()+ " [particles %d] [springs %d] [frame %d] [fps %6.2f]", NUM_PARTICLES, NUM_SPRINGS, frameCount, frameRate);
surface.setTitle(txt_fps);
}
// this resets all springs and particles, to some of its initial states
// can be used after deactivating springs with the mouse
public void repairAllSprings(){
for(DwSoftBody2D body : softbodies){
for(DwParticle pa : body.particles){
pa.setCollisionGroup(body.collision_group_id);
pa.setRadiusCollision(pa.rad());
pa.enableAllSprings(true);
}
}
}
// update all springs rest-lengths, based on current particle position
// the effect is, that the body keeps the current shape
public void applySpringMemoryEffect(){
ArrayList<DwSpringConstraint> springs = physics.getSprings();
for(DwSpringConstraint spring : springs){
spring.updateRestlength();
}
}
//////////////////////////////////////////////////////////////////////////////
// User Interaction
//////////////////////////////////////////////////////////////////////////////
DwParticle particle_mouse = null;
public DwParticle findNearestParticle(float mx, float my){
return findNearestParticle(mx, my, Float.MAX_VALUE);
}
public DwParticle findNearestParticle(float mx, float my, float search_radius){
float dd_min_sq = search_radius * search_radius;
DwParticle2D[] particles = physics.getParticles();
DwParticle particle = null;
for(int i = 0; i < particles.length; i++){
float dx = mx - particles[i].cx;
float dy = my - particles[i].cy;
float dd_sq = dx*dx + dy*dy;
if( dd_sq < dd_min_sq){
dd_min_sq = dd_sq;
particle = particles[i];
}
}
return particle;
}
public ArrayList<DwParticle> findParticlesWithinRadius(float mx, float my, float search_radius){
float dd_min_sq = search_radius * search_radius;
DwParticle2D[] particles = physics.getParticles();
ArrayList<DwParticle> list = new ArrayList<DwParticle>();
for(int i = 0; i < particles.length; i++){
float dx = mx - particles[i].cx;
float dy = my - particles[i].cy;
float dd_sq = dx*dx + dy*dy;
if(dd_sq < dd_min_sq){
list.add(particles[i]);
}
}
return list;
}
public void updateMouseInteractions(){
if(cp5.isMouseOver()) return;
// deleting springs/constraints between particles
if(DELETE_SPRINGS){
ArrayList<DwParticle> list = findParticlesWithinRadius(mouseX, mouseY, DELETE_RADIUS);
for(DwParticle tmp : list){
tmp.enableAllSprings(false);
tmp.collision_group = physics.getNewCollisionGroupId();
tmp.rad_collision = tmp.rad;
}
} else {
if(particle_mouse != null){
float[] mouse = {mouseX, mouseY};
particle_mouse.moveTo(mouse, 0.2f);
}
}
}
boolean DELETE_SPRINGS = false;
float DELETE_RADIUS = 10;
public void mousePressed(){
if(mouseButton == RIGHT ) DELETE_SPRINGS = true;
if(!DELETE_SPRINGS){
particle_mouse = findNearestParticle(mouseX, mouseY, 100);
if(particle_mouse != null) particle_mouse.enable(false, false, false);
}
}
public void mouseReleased(){
if(particle_mouse != null && !DELETE_SPRINGS){
if(mouseButton == LEFT ) particle_mouse.enable(true, true, true );
if(mouseButton == CENTER) particle_mouse.enable(true, false, false);
particle_mouse = null;
}
if(mouseButton == RIGHT ) DELETE_SPRINGS = false;
}
public void keyReleased(){
if(key == 's') repairAllSprings();
if(key == 'm') applySpringMemoryEffect();
if(key == 'r') createBodies();
if(key == '1') DISPLAY_MODE = 0;
if(key == '2') DISPLAY_MODE = 1;
if(key == '3') DISPLAY_PARTICLES = !DISPLAY_PARTICLES;
if(key == '4') DISPLAY_MESH = !DISPLAY_MESH;
if(key == '5') DISPLAY_SRPINGS = !DISPLAY_SRPINGS;
if(key == ' ') UPDATE_PHYSICS = !UPDATE_PHYSICS;
}
////////////////////////////////////////////////////////////////////////////
// GUI
////////////////////////////////////////////////////////////////////////////
public void setDisplayMode(int val){
DISPLAY_MODE = val;
}
public void setDisplayTypes(float[] val){
DISPLAY_PARTICLES = (val[0] > 0);
DISPLAY_MESH = (val[1] > 0);
DISPLAY_SRPINGS = (val[2] > 0);
}
public void setGravity(float val){
physics.param.GRAVITY[1] = val;
}
public void togglePause(){
UPDATE_PHYSICS = !UPDATE_PHYSICS;
}
ControlP5 cp5;
public void createGUI(){
cp5 = new ControlP5(this);
cp5.setAutoDraw(true);
int sx, sy, px, py, oy;
sx = 100; sy = 14; oy = (int)(sy*1.4f);
////////////////////////////////////////////////////////////////////////////
// GUI - CLOTH
////////////////////////////////////////////////////////////////////////////
Group group_physics = cp5.addGroup("global");
{
group_physics.setHeight(20).setSize(gui_w, height)
.setBackgroundColor(color(0, 204)).setColorBackground(color(0, 204));
group_physics.getCaptionLabel().align(CENTER, CENTER);
px = 10; py = 15;
int bsx = (gui_w-40)/3;
cp5.addButton("rebuild").setGroup(group_physics).plugTo(this, "createBodies").setSize(bsx, 18).setPosition(px, py);
cp5.addButton("pause") .setGroup(group_physics).plugTo(this, "togglePause").setSize(bsx, 18).setPosition(px+=bsx+10, py);
px = 10;
cp5.addSlider("gravity").setGroup(group_physics).setSize(sx, sy).setPosition(px, py+=(int)(oy*1.5f))
.setRange(0, 1).setValue(physics.param.GRAVITY[1]).plugTo(this, "setGravity");
cp5.addSlider("iter: springs").setGroup(group_physics).setSize(sx, sy).setPosition(px, py+=oy)
.setRange(0, 20).setValue(physics.param.iterations_springs).plugTo( physics.param, "iterations_springs");
cp5.addSlider("iter: collisions").setGroup(group_physics).setSize(sx, sy).setPosition(px, py+=oy)
.setRange(0, 8).setValue(physics.param.iterations_collisions).plugTo( physics.param, "iterations_collisions");
cp5.addRadio("setDisplayMode").setGroup(group_physics).setSize(sy,sy).setPosition(px, py+=(int)(oy*1.4f))
.setSpacingColumn(2).setSpacingRow(2).setItemsPerRow(1)
.addItem("springs: colored",0)
.addItem("springs: tension",1)
.activate(DISPLAY_MODE);
cp5.addCheckBox("setDisplayTypes").setGroup(group_physics).setSize(sy,sy).setPosition(px, py+=(int)(oy*2.4f))
.setSpacingColumn(2).setSpacingRow(2).setItemsPerRow(1)
.addItem("PARTICLES", 0).activate(DISPLAY_PARTICLES ? 0 : 5)
.addItem("MESH " , 1).activate(DISPLAY_MESH ? 1 : 5)
.addItem("SRPINGS" , 2).activate(DISPLAY_SRPINGS ? 2 : 5);
}
////////////////////////////////////////////////////////////////////////////
// GUI - SPRINGS
////////////////////////////////////////////////////////////////////////////
Group group_springs = cp5.addGroup("springs");
{
Group group_cloth = group_springs;
group_cloth.setHeight(20).setSize(gui_w, 210)
.setBackgroundColor(color(0, 204)).setColorBackground(color(0, 204));
group_cloth.getCaptionLabel().align(CENTER, CENTER);
px = 10; py = 15;
cp5.addSlider("Cloth.tensile").setGroup(group_cloth).setSize(sx, sy).setPosition(px, py+=oy)
.setRange(0.01f, 1).setValue(param_spring_cloth.damp_dec).plugTo(param_spring_cloth, "damp_dec");
cp5.addSlider("Cloth.pressure").setGroup(group_cloth).setSize(sx, sy).setPosition(px, py+=oy)
.setRange(0.01f, 1).setValue(param_spring_cloth.damp_inc).plugTo(param_spring_cloth, "damp_inc");
cp5.addSlider("Cube.tensile").setGroup(group_cloth).setSize(sx, sy).setPosition(px, py+=(int)(oy*2))
.setRange(0.01f, 1).setValue(param_spring_softbody.damp_dec).plugTo(param_spring_softbody, "damp_dec");
cp5.addSlider("Cube.pressure").setGroup(group_cloth).setSize(sx, sy).setPosition(px, py+=oy)
.setRange(0.01f, 1).setValue(param_spring_softbody.damp_inc).plugTo(param_spring_softbody, "damp_inc");
cp5.addSlider("Ball.tensile").setGroup(group_cloth).setSize(sx, sy).setPosition(px, py+=(int)(oy*2))
.setRange(0.01f, 1).setValue(param_spring_circle.damp_dec).plugTo(param_spring_circle, "damp_dec");
cp5.addSlider("Ball.pressure").setGroup(group_cloth).setSize(sx, sy).setPosition(px, py+=oy)
.setRange(0.01f, 1).setValue(param_spring_circle.damp_inc).plugTo(param_spring_circle, "damp_inc");
}
////////////////////////////////////////////////////////////////////////////
// GUI - ACCORDION
////////////////////////////////////////////////////////////////////////////
cp5.addAccordion("acc").setPosition(gui_x, gui_y).setWidth(gui_w).setSize(gui_w, height)
.setCollapseMode(Accordion.MULTI)
.addItem(group_springs)
.addItem(group_physics)
// .open(0, 1)
;
}
public static void main(String args[]) {
PApplet.main(new String[] { SoftBody2D_Playground.class.getName() });
}
}
```
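The mouse-picking helpers above (`findNearestParticle`, `findParticlesWithinRadius`) boil down to a linear scan that compares squared distances against a squared search radius, which avoids one square root per particle. A minimal Go sketch of the same idea (the `particle` type is a hypothetical stand-in for `DwParticle2D`, keeping only the position fields the search needs):

```go
package main

import "fmt"

// particle is a hypothetical minimal stand-in for DwParticle2D.
type particle struct{ cx, cy float64 }

// findNearest returns the index of the particle closest to (mx, my)
// within the given radius, or -1 when none lies inside it.
// Comparing dx*dx+dy*dy against radius*radius avoids a sqrt per particle.
func findNearest(ps []particle, mx, my, radius float64) int {
	best := -1
	ddMin := radius * radius
	for i, p := range ps {
		dx, dy := mx-p.cx, my-p.cy
		if dd := dx*dx + dy*dy; dd < ddMin {
			ddMin = dd
			best = i
		}
	}
	return best
}

func main() {
	ps := []particle{{0, 0}, {3, 4}, {10, 10}}
	fmt.Println(findNearest(ps, 2.5, 3.5, 100)) // prints 1: (3,4) is nearest
}
```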
|
Hargraves is a surname. Notable people with the surname include:
Daniel Hargraves (born 1975), Australian rules footballer
Edward Hargraves (1816–1891), gold prospector in Australia
Fred Hargraves (1884–1960), English footballer
James Hargraves (1690–1741), English Anglican divine who became the Dean of Chichester Cathedral in 1739
Orin Hargraves (born 1953), American lexicographer
Paul E. Hargraves (born 1941), a phycologist using the standard author abbreviation of Hargraves
Peter Hargraves (born 1972), American retired sprinter
Robert B. Hargraves (1928–2003), geologist
See also
Hargraves, Martian crater, named after Robert B. Hargraves
Hargrave (surname)
|
Seizure is a 2003 novel by American author Robin Cook which explores the concerns raised by advances in therapeutic cloning. It debuted at Number 6 on The New York Times Best Seller list on August 3, 2003. It remained on the best seller list for three weeks. In November 2004 it appeared on the paperback best seller list.
Senator Ashley Butler is a quintessential Southern demagogue whose support of traditional American values includes a knee-jerk reaction against virtually all biotechnologies. When he's called to chair a subcommittee introducing legislation to ban new cloning technology, the senator views his political future in bold relief; and Dr. Daniel Lowell, inventor of the technique that will take stem cell research to the next level, sees a roadblock positioned before his biotech startup.
The two seemingly opposite personalities clash during the senate hearings, but the men have a common desire. Butler's hunger for political power far outstrips his concern for the unborn; and Lowell's pursuit of gargantuan personal wealth and celebrity overrides any considerations for patients' well-being. Further complicating the proceedings is the confidential news that Senator Butler has developed Parkinson's disease, leading the senator and the researcher into a Faustian pact. In a perilous attempt to prematurely harness Lowell's new technology, the therapy leaves the senator with the horrifying effects of temporal lobe epilepsy—seizures of the most bizarre order.
Characters
Dr. Daniel Lowell: Scientist, inventor of HTSR, and the main character
Dr. Stephanie D'agonstini: Scientist and Daniel's partner
Senator Ashley Butler: Quintessential Southern demagogue politician who fears the impact his disease (Parkinson's) may have on his career.
Carol Menning: Butler's assistant, who accompanies him on his travels.
References
2003 American novels
Novels by Robin Cook
|
```csharp
/*
* PROJECT: Atomix Development
* LICENSE: BSD 3-Clause (LICENSE.md)
* PURPOSE: Kernel Thread Scheduler
* PROGRAMMERS: Aman Priyadarshi (aman.eureka@gmail.com)
*/
using Atomixilc.Machine;
using Atomixilc.Attributes;
using Atomix.Kernel_H.Lib;
using Atomix.Kernel_H.Arch.x86;
namespace Atomix.Kernel_H.Core
{
internal static class Scheduler
{
static Thread CurrentTask;
static IQueue<Thread> ThreadQueue;
internal static Process SystemProcess;
internal static Process RunningProcess
{
get { return CurrentTask.Process; }
}
internal static Thread RunningThread
{
get { return CurrentTask; }
}
internal static void Init(uint KernelDirectory, uint StackStart)
{
ThreadQueue = new IQueue<Thread>(100);
SystemProcess = new Process("System", KernelDirectory);
CurrentTask = new Thread(SystemProcess, 0, StackStart, 10000, false);
CurrentTask.Start();
}
internal static void AddThread(Thread th)
{
ThreadQueue.Enqueue(th);
}
[Label("__Switch_Task__")]
internal static uint SwitchTask(uint aStack)
{
var NextTask = InvokeNext();
if (CurrentTask == NextTask)
return aStack;
CurrentTask.SaveStack(aStack);
if (NextTask.Process != CurrentTask.Process)
NextTask.Process.SetEnvironment();
CurrentTask = NextTask;
return NextTask.LoadStack();
}
private static Thread InvokeNext()
{
var state = CurrentTask.Status;
switch (state)
{
case ThreadState.Running:
ThreadQueue.Enqueue(CurrentTask);
break;
default:// Do nothing for not active
break;
}
return ThreadQueue.Dequeue();
}
}
}
```
|
Cochise County in southeastern Arizona was the scene of a number of violent conflicts in the 19th-century and early 20th-century American Old West, including between white settlers and Apache Indians, between opposing political and economic factions, and between outlaw gangs and local law enforcement. Cochise County was carved off in 1881 from the easternmost portion of Pima County during a formative period in the American Southwest. The era was characterized by rapidly growing boomtowns, the emergence of large-scale farming and ranching interests, lucrative mining operations, and the development of new technologies in railroading and telecommunications. Complicating the situation was staunch resistance to white settlement from local Native American groups, most notably during the Apache Wars, as well as Cochise County's location on the border with Mexico, which not only threatened international conflict but also presented opportunities for criminal smugglers and cattle rustlers.
Factional hostilities emerged as soon as American settlers began arriving in southern Arizona in large numbers in the 1860s and 1870s. The Gadsden Purchase of 1853 had opened the territory to Americans, and the sudden growth of settlement and investment proved a source of great enmity between local Apaches and the American newcomers. Pima County and later Cochise County were the primary battleground for most of the resulting quarter-century of warfare, which was almost constant in the region until the late 1880s.
In addition to the Native American conflicts, there was also considerable tension between rural residents of Cochise County, who were for the most part Democrats from the agrarian Confederate States, and more urban residents living within the region's few developed towns, who were largely Republican business owners from the industrial Union States. The division created polarizing sectional alliances and culminated in countless local feuds, the most well-known of which has been called the Cochise County feud or the Earp–Clanton feud, which included the historic Gunfight at the O.K. Corral in the town of Tombstone and Wyatt Earp's Vendetta Ride in the early 1880s. Dr. George E. Goodfellow famously described Tombstone, the capital of Cochise County, as the "condensation of wickedness."
Formation of Cochise County
The land that now comprises Cochise County, along with the rest of modern Arizona south of the Gila River and a small part of southwestern New Mexico, was Mexican territory until 1853, when it was purchased by the United States in the Gadsden Purchase. Cochise County was created on February 1, 1881 from the eastern portion of Pima County. It was named after the legendary Chiricahua Apache war chief Cochise, who was a pivotal figure in the Apache Wars before his death in 1874. The county seat was Tombstone until 1929, when it moved to Bisbee.
Cochise County is almost a perfect square in the southeasternmost corner of the state: . It covers a land area of –1.5 times the size of the state of Connecticut–and shares an border with the Mexican state of Sonora. The county also includes of the San Pedro River watershed. Most of Cochise County is dominated by basin and range topography, with high-altitude rocky plateaus and forested mountain ranges (including the Dragoon Mountains and the Chiricahua Mountains) separated by broad, low-lying valleys of Sonoran Desert scrub. Dragoon Summit divides the county.
Apache Wars
The Apache Wars were Arizona's and New Mexico's most prominent conflict for more than 30 years in the latter half of the 19th century, as well as one of the lengthiest conflicts in all of the Indian Wars. The land that is now Cochise County is located in the ancestral homeland of the Chiricahua Apache, who fiercely resisted American encroachment on their territory for decades. The Chiricahua War began as the American Civil War was beginning, when the southern part of New Mexico Territory (of which present-day Arizona was a part) had nominally joined the Confederate States of America as Confederate Arizona and both Union and Confederate armies vied for control of New Mexico and the important emigrant trail and stagecoach route passing through the area en route to California.
Cochise and the Bascom Affair
In January 1861, a band of Apaches raided the ranch of John Ward, stole some livestock, and kidnapped the son of a Mexican woman who lived with Ward. Ward wrongly believed that the unrelated Chiricahua Apache chief Cochise and his followers were responsible and he demanded that the United States Army confront Cochise, recover the livestock, and rescue the boy. A month later, the army responded by sending Lieutenant George Nicholas Bascom and fifty-four men into Apache Pass, where several people had been massacred by the Chiricahuas in the past. After setting up camp about a mile from a Butterfield Overland Mail waystation, Bascom lured Cochise and several of his relatives into his tent and threatened to hold him hostage until Ward's property and the boy were returned. Furious and insulted, Cochise cut through the wall of the tent, eluded the guards posted outside, and attempted to free the other hostages by offering to exchange them for several of his own prisoners. When Bascom refused, the Chiricahuas killed their prisoners, and Bascom responded by hanging Cochise's hostage relatives.
The incident triggered a series of retaliations that soon erupted into full-scale war across a vast portion of southeastern Arizona and southwestern New Mexico. The fighting did not come to any significant end until 1886, when the Apache shaman Geronimo surrendered to the U.S. Army at Skeleton Canyon. But even after Geronimo's surrender, warriors like Massai and the Apache Kid continued to raid settlers' ranches for years into the 20th century.
Battle of Apache Pass
Between 1861 and 1886, there were dozens of skirmishes between white soldiers or settlers and Apaches in Cochise County. The biggest of these battles occurred when Cochise and Mangas Coloradas attempted to ambush a detachment of the California Column as it made its way east through Apache Pass in the Chiricahua Mountains. During the ensuing Battle of Apache Pass, on June 15, 1862, Captain Thomas L. Roberts and 126 men routed 500 Apaches from their fortified positions overlooking the area. Sixty-six Apaches were killed while the Americans suffered only five casualties. The army then built Fort Bowie to protect the trail that led through the pass.
Conflicts and feuds
Rural vs. town interests
Many of the ranchers and Cowboys who lived in the Cochise County countryside were resentful of the growing power of the business owners and townspeople who increasingly influenced local politics and law in the county. A cowboy in that time and region was generally regarded as an outlaw. Legitimate cowmen were referred to as cattle herders or ranchers.
The ranchers largely maintained control of the country around Tombstone, due in large part to the sympathetic support of Cochise County Sheriff Johnny Behan, who favored the Cowboys and rural ranchers and who grew to intensely dislike the Earps. Behan tended to ignore the Earps' complaints about the McLaurys' and Clantons' horse thieving and cattle rustling.
The townspeople and business owners welcomed the Cowboys who had money to spend in the numerous bordellos, gambling halls, and drinking establishments. When lawlessness got out of hand, they enacted ordinances to control the disruptive revelry and shootings. As officers of the law, the Earp brothers held authority at times on the federal, county, and local levels. They were resented by the Cowboys for their tactics, such as when Wyatt Earp buffaloed Curly Bill Brocius after Brocius accidentally shot Marshal Fred White. The Earps were also known to bend the law in their favor when it affected their gambling and saloon interests, which earned them further enmity from the Cowboy faction.
Under the surface were other tensions aggravating the simmering distrust. Most of the leading cattlemen and Cowboys were Confederate sympathizers and Democrats from Southern states, especially Missouri and Texas. The mine and business owners, miners, townspeople, and city lawmen, including the Earps, were largely Republicans from the Northern states. There was also a fundamental conflict over resources and land, pitting the traditional, Southern-style "small government" agrarianism of the rural Cowboys against Northern-style industrial capitalism.
Lawmen vs. outlaws
During the rapid growth of Cochise County in the 1880s at the peak of the silver mining boom, outlaws derisively called "Cowboys" frequently robbed stagecoaches and brazenly stole cattle in broad daylight, scaring off the legitimate cowboys watching the herds. It became an insult to call a legitimate cattleman a "Cowboy." Legal cowmen were generally called herders or ranchers.
The lines between the outlaw element and law enforcement were not always distinct. Doc Holliday had a reputation as a killer, though modern research has only identified three individuals he shot. He was also friends with Bill Leonard, who was implicated in a stagecoach robbery. Cowboy Frank Stilwell was a known cattle rustler and served as an assistant Sheriff under Cochise County Sheriff Johnny Behan. Cowboy and outlaw Texas Jack Vermillion was a friend of the Earps who deputized him after Virgil Earp was maimed in an ambush.
Murder of Deputies Adams and Finley
At the behest of Judge Charles Silent, Territorial Marshal Crawley Dake deputized John H. Adams and Cornelius Finley. While they were traveling north to company headquarters in September 1878, less than two weeks after they were deputized, five Mexicans, who believed the two were carrying gold ore, intercepted and killed Adams and Finley. However, they found no ore. One of the suspects in the killings was Florentino Saiz, whom the Arizona Weekly Star identified as "the 1878 murderer of Deputy U.S. Marshals Cornelius Finley and John Hicks Adams on September 2, 1878". During the Coroner's Inquest into the death of Morgan Earp, Pete Spence's wife, Marietta Duarte, implicated her husband and four other men, including Florentino Cruz, in Morgan's murder. Saiz and Cruz may have been the same person. In 1879, the Mexican federal government refused to allow Dake to extradite two of the suspects. Unable to find justice in the courts for his brother's murder, Wyatt Earp began a vendetta and killed Florentino Cruz on March 22, 1882 at a wood camp near South Pass of the Dragoon Mountains.
Smuggling and cattle rustling
From early in the history of Pima County, bandits used the border between the United States and Mexico to raid across in one direction and use the other as sanctuary. In December, 1878, and again the next year, Mexican authorities complained about American outlaw Cowboys who stole Mexican beef and resold it in Arizona. The Arizona Citizen reported that both U.S. and Mexican bandits were stealing horses from the Santa Cruz Valley and selling the livestock in Sonora, Mexico. Arizona Territorial Governor Fremont investigated the Mexican government's allegations and accused them in turn of allowing outlaws to use Sonora as a base of operations for raiding into Arizona.
The Clanton and McLaury clans were among those allegedly involved in the clandestine cross-border livestock smuggling from Sonora into Arizona. The illegal cattle operations kept beef prices lower and provided cheap stock that helped small ranchers get by. Many early Tombstone residents looked the other way when it was "only Mexicans" being robbed.
The Clanton family led by Newman Haynes Clanton had a ranch about southeast of Tombstone that was a way station for stolen Mexican beef. He was assisted by his sons Ike, Billy, and Phin Clanton. Frank and Tom McLaury had a ranch outside of Tombstone that they used to buy and re-sell stolen Mexican cattle.
On July 25, 1880, Captain Joseph H. Hurst requested the assistance of Deputy U.S. Marshal Virgil Earp, who brought Wyatt and Morgan Earp, as well as Wells Fargo agent Marshall Williams, to track the thieves of six U.S. Army mules stolen from Camp Rucker. This was a federal matter because the animals were U.S. property. They found the animals on the McLaury's Ranch on the Babacomari River and the branding iron used to change the "US" brand to "D8".
To avoid bloodshed, Cowboy Frank Patterson promised to return the mules, so the posse withdrew. The Cowboys showed up two days later without the mules and laughed at Captain Hurst and the Earps. Hurst responded by printing and distributing a handbill describing the theft and promising a reward for the "trial and conviction" of the thieves. He specifically charged Frank McLaury with assisting in the theft. The handbill was reprinted in the Epitaph on July 30, 1880. Frank McLaury angrily printed a response in the Cowboy-friendly Nugget, calling Hurst "a coward, a vagabond, a rascal, and a malicious liar."
In late 1879 one of Wyatt Earp's prized horses named Dick Naylor was stolen. Almost a year later he got a tip that it had been seen at the Clanton ranch near Charleston. Earp rode out to their ranch and spotted the horse. Ike Clanton and his brother Billy were both present. Earp returned with Holliday to recover the horse. On the way, they overtook Behan, who was riding in a wagon. Behan was also heading to the ranch to serve an election-hearing subpoena on Ike Clanton.
Pleasant Valley War
At the start of the Pleasant Valley War, a notorious feud that took place in Arizona's Tonto Basin from 1882 to 1892, the smuggler Neil McLeod left Globe, Arizona for Cochise County. Many Cochise County cattle dealers were losing cattle and horses to thieves that T. W. Ayles described as an "organized band" whose "connections seem to extend to and over the Mexican border."
In the middle of 1881, the Mexican military dropped taxes on alcohol and tobacco and began vigorously pursuing the Cowboys. In response, the rustlers increased their stock thefts on the U.S. side of the border. McLeod used boxing matches and wrestling as a cover for his less scrupulous activities of rustling and selling contraband near the Mexican border.
Prizefighting had become quite sophisticated in Tombstone, and in October 1883 McLeod beat the then-champion Young in four rounds and was awarded a $400 prize. McLeod also maintained good relations with Judge Aaron H. Hackney in Globe by helping the judge's friends caught up in the feud, including Frederick Russell Burnham, leave the Tonto Basin to hide out in Tombstone. In August 1884, McLeod was assassinated by James Powers in Nacozari, Sonora, Mexico.
First Skeleton Canyon Massacre
Skeleton Canyon is located in the Peloncillo Mountains, which straddle the modern border between Arizona and New Mexico; the canyon connects the Animas Valley of New Mexico with the San Simon Valley of Arizona. The first Skeleton Canyon massacre was an attack on Mexican Rurales by rustlers in July 1879. The rustlers had attacked a rancho in northern Sonora, killing several of the inhabitants. After the attack, the survivors reported it to Commandant Francisco Neri, who sent out a detachment of Rurales, among them Captain Alfredo Carrillo. The Rurales illegally crossed the border into Arizona, and as they entered the canyon, shots were fired. Three of the Rurales survived the initial onslaught; the Cowboys then executed the Rurales' leader.
The Mexican Government protested the killings to President Chester Arthur despite the fact that the Mexican policemen had crossed into a foreign country where they had no jurisdiction. Although the assailants were never positively identified, it was speculated that Old Man Clanton, Ike Clanton, Billy Clanton, "Curly Bill" Brocius, Johnny Ringo, and Florentino Cruz were the murderers.
Governor Fremont asks for militia
Territorial Governor John C. Frémont, who had been the first Republican presidential candidate in 1856, was largely an absentee appointee. But in February 1881 he suggested to the territorial legislature that they fund a state militia to ride against the outlaws and stop the rustling. The legislators hooted down his plan.
Second Skeleton Canyon Massacre
In July 1881, "Curly Bill" Brocius received word that several Mexican smugglers carrying silver were heading to the United States through Skeleton Canyon. Johnny Ringo reported that Curly Bill and several other men including Old Man Clanton, Ike Clanton, Billy Clanton, Frank McLaury, Tom McLaury, Billy Grounds, and Zwing Hunt hid in the rocks high above the trail. As the smugglers rode through the canyon the murderers opened fire, killing six of the nineteen. The rest were killed as they tried to get away.
Guadalupe Canyon Massacre
In August 1881, Mexican Commandant Felipe Neri dispatched troops to the border. Some researchers theorize that Mexican Rurales led by Captain Alfredo Carrillo, who had survived the Skeleton Canyon Massacre in 1879, led the ambush of the Cowboys. They found "Old Man" Clanton and six others bedded down for the night in Guadalupe Canyon with a herd of cattle. The Mexicans waited until dawn and killed five of the Cowboys.
The dead included Old Man Clanton; Charley Snow, a ranch hand who thought he had heard a bear and was the first killed; Jim Crain, who was wanted for the stagecoach robbery near Tombstone during which Bud Philpott had been murdered; Dick Gray, son of Col. Mike Gray; and Billy Lang, a cattle rancher. Clanton, Crain, and Gray were either still in their bedrolls or in the act of getting dressed when killed. Lang was the only one who had a chance to fight back. Harry Ernshaw, a milk farmer, was grazed by a bullet on the nose; Billy Byers feigned death until the soldiers left.
Tombstone marshal killed
On October 28, 1880, Tombstone town marshal Fred White was trying to break up a group of late revelers shooting at the moon on Allen Street in Tombstone. He attempted to confiscate the pistol of Curly Bill Brocius and was shot in the abdomen. Wyatt Earp buffaloed Brocius, knocking him unconscious, and arrested him. Wyatt told his biographer many years later that he thought Brocius was still armed at the time and had not noticed that Brocius' pistol was already on the ground. The pistol contained only one expended cartridge and five live rounds. Brocius waived a preliminary hearing so he and his case could be transferred to Tucson District Court. White died two days after his shooting, changing Brocius' charge to murder.
On December 27, 1880, Wyatt testified that he thought the shooting was accidental. It was also demonstrated that Brocius' pistol could be fired from half-cock. Fred White also left a statement before he died that the shooting was not intentional. The judge released Brocius, but Brocius retained bitterness towards Earp for the rough treatment he got when arrested.
Elections and ballot-stuffing
Democratic Pima County Sheriff Charles A. Shibell appointed Wyatt Earp as a Pima County deputy sheriff on July 27, 1880. Wyatt did his job well, and from August through November his name was mentioned nearly every week by the Epitaph or the Nugget newspapers.
Pima County sheriff
Shibell ran for reelection in the November 2, 1880, election against Republican Bob Paul. The region was strongly Republican, and Paul was expected to win. Whoever won would likely appoint deputies from his own party, and Wyatt, a Republican, expected he would continue in the job if Paul won.
Johnny Ringo attended the Democratic party convention in Pima County and got himself elected as a delegate for San Simon/Cienega Precinct 27, located in San Simon Valley in northern Cochise County. This was despite the fact that only a few months before he had shot Louis Hancock, the brother-in-law of James Hayes, a member of the Committee of Credentials. He persuaded the Pima County Board of Supervisors to make the house of his rustling buddy Joe Hill the polling place and to name himself and Ike Clanton as election officials. But when the supervisors learned that Joe Hill had already moved, they moved the polling place to the home of John Magill and removed Johnny Ringo and Ike Clanton as election officials, but it was too late. James C. Hancock reported that on election day Cowboys Curly Bill Brocius and Ringo served as election officials in the San Simon precinct. However, on November 1, the day before the election, Ringo biographer David Johnson places Ringo in New Mexico with Ike Clanton, and Curly Bill had been arrested and jailed in Tucson on October 28 for killing Marshal Fred White and was still there on election day.
The home of John Magill was used as the polling place. A mysterious "Henry Johnson" was responsible for certifying the ballots. This turned out to be James Johnson, the same James K. Johnson who had been shooting up Allen Street the night Marshal White was killed, and the same Johnson who testified at Curly Bill's preliminary hearing after he shot Fred White. James Johnson later testified for Bob Paul in the election hearing and said that the ballots had been left in the care of Phin Clanton. None of the witnesses during the election hearing reported on ballots being cast for dogs.
They gathered the dozen or so legal voters in town and coerced them to vote for Shibell. Then they gathered non-voters, such as children and Chinese residents, and had them cast ballots. Not satisfied, they named all the dogs, burros, and poultry and cast ballots in their names for Shibell. The San Simon precinct turned out an amazing 104 votes, 103 of them for Shibell.
Democrat Shibell was unexpectedly reelected by a margin of 58 votes. He immediately appointed Johnny Behan as the new deputy sheriff for the Tombstone region of Pima County. Wyatt, who had supported Paul, resigned as deputy sheriff on November 9. Paul and Earp checked the ballots and were suspicious to see that 108 out of 109 voters in Precinct 27 had voted for Shibell.
On November 19, Paul filed suit and accused Shibell of ballot-stuffing. The trial was transferred to Tucson's district court and began on January 17. On January 20, 1881, the Arizona Star reported, “There has been some big cheating somewhere, and by some persons. It was clear that there had been reckless counting at Tombstone, fraud at San Simon and a careless election board at Tres Alamos.” A recount was held and this time Paul had 402 votes and Shibell had 354.
Judge C.G.W. French ruled in Paul's favor in late January, 1881, throwing the whole precinct out, but Shibell appealed, preventing Paul from taking office until April 1881. However, the eastern portion of Pima County had been split off to form Cochise County on February 1. This prevented Paul from appointing Earp as deputy sheriff for the Tombstone area of Pima County.
Cochise County sheriff
When Cochise County was initially formed, both Wyatt Earp and Johnny Behan sought the new sheriff's position. It was a lucrative job, far beyond its salary. The sheriff was not only responsible for enforcing the law but was also county assessor, tax collector, and responsible for collecting prostitution, gambling, liquor, and theater fees. The county supervisors allowed the sheriff to keep ten percent of all amounts paid. This made the job worth more than $40,000 a year (about $ today).
Democrat Johnny Behan had considerably more political experience than Republican Wyatt Earp. Behan had previously served as Yavapai County Sheriff from 1871 to 1873. He had been elected to the Arizona Territorial Legislature twice, representing Yavapai County in the 7th Territorial Legislature in 1873 and Mohave County in the 10th in 1879. Behan moved for a time to northwestern Arizona Territory, where he served as the Mohave County Recorder in 1877 and then as deputy sheriff of Mohave County at Gillett in 1879.
Furthermore, Behan's partner in the Dexter Livery, John Dunbar, had a brother Thomas who served in the Arizona Territorial Legislature. Thomas Dunbar introduced the bill that split Cochise County off from Pima County in the far southeast corner of the territory, and he became known as the “father of Cochise County”. The Dunbar family in their home town of Bangor, Maine, were "close family friends" of the powerful Senator James G. Blaine, also from Bangor, and one of the most powerful Republican congressmen of his time. The Dunbars used their influence to help Behan get appointed Sheriff of the new Cochise County, in February 1881.
Behan utilized his existing position and his superior political connections to lobby hard for the position, which was appointed by the Territorial governor and confirmed by the territorial legislature. Wyatt also had other interests, including a claim in the Vizina mine, water rights proposals, and a one-quarter interest in the faro concession at the Oriental Saloon. Behan made a deal with Earp: he promised Wyatt a position as his undersheriff if he was appointed over Wyatt, and Earp withdrew his name from the political contest.
When Cochise County was formed, Governor John C. Frémont appointed and the Territorial Legislature approved Behan as Sheriff and John Dunbar as the first Cochise County Treasurer on February 10, 1881.
Behan reneged on his deal with Earp and appointed prominent Democrat Harry Woods instead. Later that year, Behan gave a contrived explanation of his actions during the hearings after the Gunfight at the O.K. Corral. He said he broke his promise to appoint Earp because of an incident shortly before his appointment. Searching for a horse stolen in late 1879, Wyatt learned about a year later that the horse was in nearby Charleston. Wyatt spotted Billy Clanton attempting to remove the horse from a corral and retrieved it without trouble. Behan was in the area to serve a subpoena on Ike Clanton. Ike was hopping mad when Behan finally found him, for Earp had told Clanton that Behan "had taken a posse of nine men down there to arrest him." Behan took offense at Wyatt's tactics and changed his mind about appointing Wyatt. Holliday reported in an interview in 1882 that "from that time a coolness grew up between the two men."
Virgil Earp loses election
Deputy U.S. Marshal Virgil Earp ran against Ben Sippy, a part-time policeman, for the job of Tombstone City Marshal. Sippy ran ads in the Democratic, Cowboy-loyal Nugget, but Virgil did not get the support he expected from John Clum and the Republican Epitaph. To Virgil's surprise, he lost by a margin of 311 to 259.
Notable shootouts
On April 6, 1880, only two months after he arrived, Tombstone resident George Parsons wrote in his diary, "Several more shooting scrapes but they are of such frequent occurrence that their novelty has ceased." Cochise County became well known for the dozens of shootings and public gunfights between Old West lawmen and outlaws that occurred within its boundaries.
Gunfight at the O.K. Corral
On October 26, 1881, town marshal and Deputy U.S. Marshal Virgil Earp led his brothers and deputies Wyatt and Morgan, along with Doc Holliday, in a confrontation with five outlaw Cowboys: Billy Clanton, Ike Clanton, Billy Claiborne, and Tom and Frank McLaury. The Cowboys were armed in violation of a city ordinance prohibiting carrying weapons in town. This shootout became famous as the Gunfight at the O.K. Corral.
Despite its name, the gunfight took place at about 3:00 pm in a wide empty lot on Fremont Street, between C. S. Fly's lodging house and photographic studio and the MacDonald assay house, six doors east of the alleyway that served as the O.K. Corral's rear entrance. The two opposing parties were initially only about apart, and about thirty shots were fired in thirty seconds. Ike Clanton and Billy Claiborne ran from the fight unharmed; Billy Clanton and both McLaury brothers were killed; Morgan Earp, Virgil Earp, and Doc Holliday were wounded but survived. Although only three men died, it is generally regarded as the most famous gunfight in the history of the Old West.
Ike Clanton filed murder charges against the Earps and Doc Holliday, but they were exonerated by a local judge after a month-long preliminary hearing and again by a local grand jury. The Cowboy faction allegedly targeted the Earps for assassination over the next six months, leading to a series of killings and retributions, often with federal and county lawmen supporting different sides of the conflict. The series of battles became known as the Earp Vendetta Ride. The Earps and Doc Holliday left Arizona, and the Cowboy element was less of a threat from that point forward.
Death of jealous husband
In mid-June 1880, "Buckskin" Frank Leslie, who had an eye for the ladies, escorted Mary Killen, the Commercial Hotel's housekeeper, to a dance. Accounts differ as to whether she was separated from her husband or still married to him. After the dance, they sat on the porch of the Cosmopolitan Hotel, where they were spotted by her drunken husband. Mike Killen appeared out of the dark street and shot at Leslie, barely missing him. Leslie fired back and shot Killen twice. Killen died five days later and was buried in Tombstone's Boot Hill cemetery on June 22, 1880. Because Leslie had fired in self-defense, he was not arrested.
Murder of Henry Schneider
On January 14, 1881, gambler Michael O'Rourke (aka Johnny Behind the Deuce) got into a disagreement with Henry Schneider, chief engineer of the Tombstone Mining and Milling Company, at a restaurant during lunch. According to the Epitaph, Schneider suspected O'Rourke had stolen several articles of clothing from Schneider's cabin, but could not prove it. According to O'Rourke and two of his friends, Schneider produced a knife and O'Rourke shot him in self-defense. Another witness stated that O'Rourke took offense at something Schneider said and threatened him, saying, "Goddamn you, I'll shoot you when you come out," then waited for Schneider outside and killed him. Charleston's constable George McKelvey arrested O'Rourke. Schneider was well-liked, and a mob of miners quickly gathered, threatening to lynch O'Rourke on the spot. McKelvey took O'Rourke to Tombstone on a buckboard wagon, and the mob followed. Once in Tombstone, Marshal Ben Sippy, with the assistance of Deputy U.S. Marshal Virgil Earp, Assistant City Marshal Morgan Earp, and former Pima County deputy sheriff Wyatt Earp, held the crowd at bay until calm prevailed.
Luke Short kills Charlie Storms
In February 1881, Luke Short and professional gambler and gunfighter Charlie Storms had a verbal altercation over a faro game, which was defused by Bat Masterson, who knew both men. On February 28, Storms confronted Short once again outside the Oriental Saloon. This time he pulled a .45 caliber revolver, but Storms was too slow: Short shot him in the chest at point-blank range, his muzzle flash setting Storms' clothes on fire. Short shot Storms again before his body hit the ground. George Parsons witnessed Storms' death and wrote in his journal, "The faro games went right on as though nothing had happened." Short was arrested but the shooting was ruled self-defense. Short left Tombstone in April and returned to Leadville, Colorado.
The Tombstone Daily Journal asked in March 1881 how a hundred outlaws could terrorize the best system of government in the world, asking, "Can not the marshal summon a posse and throw the ruffians out?"
Billy Claiborne shoots James Hickey
In Charleston on October 1, 1881, James Hickey was drunk. He taunted Billy Claiborne, following him around, daring him to fight. Billy avoided Hickey and left Ben Wood's Saloon for J.B. Ayer's Saloon across the street. Hickey followed right behind, hectoring Claiborne. Claiborne left once again because of Hickey and headed toward Harry Queen's Saloon. Hickey stopped him before he could enter Harry Queen's. Claiborne yelled, "Stay away from me!" and drew his revolver. He shot Hickey once between the eyes. Claiborne was arrested and stood trial but was acquitted because of Hickey's harassment.
Frank Leslie kills Billy Claiborne
On November 14, 1882, Frank Leslie became involved in an argument with Billy Claiborne who, after the recent death of William Bonney, had demanded to be known as "Billy the Kid". Claiborne claimed he had killed three men who had ridiculed him, although there is only evidence of Claiborne's fight with James Hickey. But he had run from the Gunfight at the O.K. Corral, and his reputation suffered as a result. On this late night, Claiborne threatened Leslie. When Leslie still refused to refer to him as "Billy the Kid", Claiborne left, only to return later that night. A patron told Leslie that Claiborne was waiting for him outside the Oriental Saloon. Leslie walked out a side door and when Claiborne shot at him, he shot back, killing him. Because Claiborne was waiting outside to ambush Leslie and fired first, the killing was ruled justified. It was described as "an incident that became an open-and-closed affair over the short period of time required by Frank to puff through a rolled cylinder of Bull Durham."
Lester Moore "no more"
One of the most well-known headstones in Tombstone's Boot Hill cemetery belongs to Lester Moore. He was a Wells, Fargo & Co. station agent on the Mexican border at Naco, Arizona Territory. One afternoon Hank Dunstan appeared to claim a package due him. When he got it, he found it thoroughly mangled. The two men argued, and then both Moore and Dunstan drew their weapons. Dunstan got off four shots, hitting Moore in the chest with his .44 caliber revolver. Dunstan was mortally wounded with a hole through his ribs by the single shot Moore had squeezed off. Les Moore was buried in Boot Hill and his famous tombstone epitaph remains an attraction in the cemetery:
HERE LIES LESTER MOORE, FOUR SLUGS FROM A 44, NO LES NO MORE
Sheriff Slaughter vs. Jack Taylor Gang
In 1886, John Horton Slaughter was elected Cochise County sheriff. Four members of the Jack Taylor Gang—Manuel Robles, Geronimo Miranda, Fred Federico, and Nieves Deron—were wanted by both the Mexican Rurales and Arizona law enforcement for robbery and murder. Trying to evade the lawmen's pursuit, the men came to Tombstone to visit relatives. Slaughter heard that the men were nearby and rode out to arrest them, but the outlaws were tipped off and fled. Slaughter eventually learned they were hiding with Robles' brother in nearby Contention City. Slaughter raised a posse and raided the house. They surprised Robles and Deron while they were asleep, but the gang members rose shooting. Slaughter killed Robles' brother while Deron and Robles ran for cover. Shooting as he ran, Deron nicked Slaughter's right ear lobe. Slaughter shot back and mortally wounded Deron. In his dying minutes, Deron confessed he was guilty of the crimes he had been charged with. Robles got away but he and Miranda were later shot and killed by Mexican authorities.
Robberies and murders
Between 1877 and 1882, bandits robbed 36 stagecoaches in the southern portion of the territory.
While his election as sheriff was being contested, Bob Paul worked as a Wells Fargo shotgun messenger. On March 15, 1881, at 10 p.m., three cowboys attempted to rob a Kinnear & Company stagecoach carrying US$26,000 in silver bullion en route from Tombstone to Benson, Arizona, the nearest rail terminal. Eli "Budd" Philpot, a popular driver, had been handling the reins but felt ill and switched with Paul, giving Paul the driver's seat in Contention City. Near Drew's Station, just outside Contention City, a man stepped into the road and commanded them to "Hold!" Paul fired his shotgun and emptied his revolver at the robbers, wounding a Cowboy later identified as Bill Leonard in the groin. The robbers returned fire, killing Philpot, who was sitting in Paul's place. Paul urged the horses forward and the Cowboys fired again, killing Peter Roerig, a beer salesman for Anheuser-Busch riding in the rear dickey seat. The horses spooked and Paul was not able to bring the stage under control for almost a mile, leaving the robbers with nothing. Paul later said he thought the first shot, which killed Philpot in the shotgun messenger's seat, had been meant for him, as he would normally have been seated there.
Paul sent a telegram from nearby Benson to Deputy U.S. Marshal Virgil Earp. When Virgil received it at 10:00 pm, he deputized Wyatt and Morgan Earp, Bat Masterson, who was dealing faro at the Oriental Saloon, and Wells Fargo agent Marshall Williams. Pima County Sheriff Behan and Deputy Sheriff Billy Breakenridge joined in. They arrived at Drew's Station around dawn. Behan tried to talk them out of following the murderers. The Earps were skilled trackers and Masterson could read sign like an Indian. Virgil insisted they pursue the killers and told Behan he could ride along or ride back to Tombstone. Behan indifferently agreed to stay, and they tracked three pairs of boots to a nearby hiding spot where the outlaws mounted their horses, accompanied by a fourth rider. Bob Paul thought he recognized the voices of Bill Leonard and Jim Crain. Wyatt had seen Cowboys who worked for the Clantons—Bill Leonard, Harry "The Kid" Head, Jim Crain and a drifter named Luther King—camped out in an old adobe along the stage route for the past week, and he suspected they were watching the stage for an opportunity to rob it.
The lines between the outlaw element and law enforcement were not always distinct. Doc Holliday had a reputation as a killer, though modern research has only identified three individuals he shot. He was friends with Bill Leonard, who was implicated in a stagecoach robbery. The Earp posse followed the robbers' trail to a nearby ranch where they found King. He wouldn't tell who his confederates were until the posse lied and told him that Holliday's girlfriend Big Nose Kate had been shot in the holdup. Fearful of Holliday's reputation, he confessed to holding the reins of the robbers' horses, and identified Leonard, Head, and Crain as the robbers. They were all known Cowboys and rustlers. Behan, Breakenridge, and Williams escorted King back to Tombstone.
Posse tracks robbers
On March 19, King was escorted in the front door of the jail and let out the back a few minutes later. King had arranged with Undersheriff Harry Woods (publisher of the Nugget) to sell the horse he had been riding to John Dunbar, Sheriff Behan's partner in the Dexter Livery Stable. King conveniently escaped while Dunbar and Woods were making out the bill of sale. Woods claimed that someone had deliberately unlocked a secured back door to the jail. The Earps and the townspeople were furious at King's easy escape. Williams was later dismissed from Wells Fargo, leaving behind a number of debts, when it was determined he had been stealing from the company for years.
The Earps, Bob Paul, and others pursued the other two men for 17 days, riding at one point for 60 hours without food and 36 hours without water. The Cowboys were able to trade in their horses for fresh stock from friendly ranchers along the way. The lawmen were not so fortunate. During the ride Paul's horse died, and Wyatt and Morgan's horses became so weak that the two men walked back to Tombstone to obtain new horses. After pursuing the Cowboys in a grand circle that finally led them into New Mexico, they could not obtain more fresh horses and were forced to give up the chase. They returned to Tombstone on April 1 to find that King had escaped. Wyatt accused Behan of complicity in King's escape, a charge that Behan strongly denied.
Behan submitted a bill for $796.84 to the county for posse expenses, but he refused to reimburse the Earps for any of their costs. Virgil was incensed. They were finally reimbursed by Wells, Fargo & Co. later on, but King's easy escape and Behan's refusal to reimburse them caused further friction between county and city law enforcement, and between Behan and the Earps.
Bisbee stagecoach robbery
Virgil Earp was appointed Tombstone's city marshal (chief of police) on June 6, 1881, after Ben Sippy abandoned the job. On September 8, 1881, tensions between the Earps and the McLaurys further increased when a passenger stage on the 'Sandy Bob Line' in the Tombstone area bound for Bisbee, Arizona was held up. The masked bandits robbed all of the passengers of their valuables and the strongbox of about $2,500. During the robbery, the driver heard one of the robbers describe the money as "sugar", a phrase known to be used by Frank Stilwell. Stilwell had until the prior month been a deputy for Sheriff Behan but had been fired for "accounting irregularities".
Both Pete Spence and Stilwell were friends of Tom and Frank McLaury. Wyatt and Virgil Earp rode with the sheriff's posse attempting to track the Bisbee stage robbers. At the scene of the holdup, Wells, Fargo & Co. undercover agent Fred Dodge discovered an unusual boot print left by someone wearing a custom-repaired boot heel. The Earps checked a shoe repair shop in Bisbee that had removed a heel matching the boot print from Frank Stilwell's boot.
When Stilwell arrived in Bisbee with his livery-stable partner, Pete Spence, the two were arrested for the robbery by Morgan and Wyatt Earp, Wells, Fargo & Co. agent Marshall Williams, and Deputy Sheriff William Breakenridge. Stilwell and Spence were arraigned before Judge Wells Spicer and posted $7,000 bond. At the preliminary hearing, Stilwell and Spence were able to provide several witnesses who supported their alibis, and Judge Spicer dropped the charges for insufficient evidence, just as he had done for Doc Holliday earlier in the year. After they evaded the territorial charges, Virgil Earp, in his other role as Deputy U.S. Marshal, re-arrested Spence and Stilwell on October 13 for the Bisbee robbery on a new federal charge of interfering with a mail carrier. The newspapers, however, reported that they had been arrested for a different stage robbery that occurred (October 8) near Contention City.
Virgil took Frank Stilwell to Tucson for arraignment, where he was held at the territorial jail. While Virgil was in Tucson, he deputized Wyatt to act in his place as assistant city marshal in Tombstone. The Cowboys saw the new arrest as further evidence they were being unfairly harassed and targeted by the Earps. They let the Earps know that they could expect retaliation. While Wyatt and Virgil were in Tucson for the federal hearing on the charges against Spence and Stilwell, Frank McLaury confronted Morgan Earp. He told him that the McLaurys would kill the Earps if they tried to arrest Spence, Stilwell, or the McLaurys again. The Tombstone Epitaph reported "that since the arrest of Spence and Stilwell, veiled threats [are] being made that the friends of the accused will 'get the Earps.'"
Prominent businessman murdered
Illustrating the danger the Cowboys posed to business owners and citizens, on Saturday evening, March 25, 1882, chief engineer Martin R. Peel of the Tombstone Milling and Mining Company near Charleston was murdered by two masked men. They walked in with rifles raised on the mill superintendent, Peel, and two other men, who were socializing. Without saying a word, the first man fired a shot into Peel's chest, killing him instantly. Peel was shot through the heart at such close range that his clothing was set on fire. The second man fired a shot at W.L. Austin, but he and the two other men ducked behind a counter and were not hit. No attempt at robbery was made and no motive could be immediately established. The assailants, who wore scarves over their faces, were believed to be Zwing Hunt and Billy Grounds, two well-known outlaws. Some people assumed they were planning a robbery but one fired his weapon accidentally. They were assisted by a third man who held their horses only a few hundred feet away.
The crime sent reverberations through Tombstone and Cochise County. Peel's father, respected Judge Bryant L. Peel, sent an open letter to The Tombstone Epitaph stating that the citizens needed to take the law into their own hands.
Within a few days, the suspects were reported at the Chandler Ranch near Tombstone. Sheriff Behan was out of town, so Deputy Sheriff Billy Breakenridge assembled a posse of five locals who arrived at the ranch before dawn on the morning of March 29. John Gillespie knocked on the door and was answered with a shot to the chest. Jack Young was shot through the thigh, and Hugh Allen was struck in the neck. Billy Grounds stepped into the doorway and Breakenridge shot him in the face with a shotgun, killing him. When Zwing Hunt stepped out from the side of the house, Breakenridge and Allen shot him in the chest. Hunt survived his wounds and soon escaped with the help of his brother, Hugh, who came over from Texas.
Spurned lover kills William Kinsman
William Kinsman had been living with May Woodman. Apparently as a joke, someone had run a notice in the Epitaph newspaper that Kinsman intended to marry Woodman. Kinsman responded and ran his own announcement that he had no intention of marrying her. On February 23, 1883, Kinsman was standing in front of the Oriental Saloon on Allen Street when Woodman walked up and shot him. Woodman was sentenced to five years in the Yuma Territorial Prison for killing Kinsman, although the acting governor pardoned her after she had served less than one year.
Bisbee Massacre
On December 8, 1883, in Bisbee, Arizona, five outlaw Cowboys robbed the Goldwater & Castaneda Mercantile and killed four people. Six men were arrested; five of them were later convicted and executed on March 28, 1884, for the crime. They were the first criminals to be legally hanged in Tombstone, then the county seat.
The sixth man, John Heath, who was accused of organizing the robbery, was tried separately and sentenced to life in prison. Unsatisfied with what they perceived as a lenient sentence, a Tombstone lynch mob forcibly removed him from jail and hanged him on February 22, 1884. Today, the graves of the five murderers are part of the popular tourist attraction at Boothill Graveyard in Tombstone.
See also
History of Arizona
Lincoln County War (New Mexico)
Johnson County War (Wyoming)
References
Arizona folklore
1881 in Arizona Territory
1882 in the United States
Conflicts in 1881
Conflicts in 1882
Crime in Arizona Territory
American folklore
History of Cochise County, Arizona
Cochise County conflict
Ghost towns in Arizona
Cemeteries in Arizona
|
Gary Mel Hein (born March 26, 1965, in Los Angeles) is a former American rugby union player. He played as a wing. He is the grandson of the late New York Giants player and Pro Football Hall of Famer Mel Hein and son of the pole vaulter Mel Hein Jr., who briefly held the U.S. indoor pole vault record with a jump of 16'5 3/4" at the Cow Palace in San Francisco, California.
Career
Hein started his career in 1984, playing for the California Golden Bears, coached by the future USA Eagles coach Jack Clark, where he won three national titles in five years. In 1989, after several tours around the world and playing for Old Belvedere RFC in Dublin, Ireland, Hein played for Oxford University, becoming, along with Don James Jr., one of the first American rugby union players to play for Oxford since Pete Dawkins in 1961. With Oxford, Hein won one and lost one of the annual Varsity Matches against Cambridge.
Hein was first capped for the USA Eagles against Tunisia, at Pebble Beach, on 3 May 1987. He also played in the 1987 and 1991 Rugby World Cups. His last cap for the USA Eagles came against Bermuda, at Hamilton, on March 12, 1994. In his international career, he earned 25 caps and scored 12 points from 3 tries in XVs, and earned 29 caps representing the USA Eagles in Sevens, including as captain of his country's side at the 1993 Rugby World Cup Sevens in Edinburgh, Scotland.
Currently, he coaches Lamorinda Rugby's High School Varsity and JV teams.
Notes
External links
American rugby union coaches
American rugby union players
1965 births
United States international rugby union players
Rugby union wings
Living people
Oxford University RFC players
1987 Rugby World Cup players
1991 Rugby World Cup players
|
Helen Chandler (February 1, 1906 – April 30, 1965) was an American film and theater actress, best known for playing Mina Seward in the 1931 horror film Dracula.
Career
Born in Charleston, South Carolina, Chandler attended the Professional Children's School and made her Broadway debut on September 2, 1918, at the Globe Theatre in Penrod, Edward E. Rose's adaptation of the like-named Booth Tarkington series of stories. Her early performances include Arthur Hopkins' 1920 production of Richard III, which starred John Barrymore; Macbeth in 1921 with Lionel Barrymore; Hedvig in Henrik Ibsen's The Wild Duck in 1925; and Ophelia in the 1925 modern-dress version of Hamlet starring Basil Sydney. By the time of her first film she had been in over twenty Broadway productions.
She made her film debut in 1927 in the silent film The Music Master and in 1930 joined Leslie Howard, Douglas Fairbanks Jr., and Beryl Mercer for Outward Bound, the film version of the stage success. The unusual story told of a group of passengers on an ocean liner who gradually realize that they are all dead and will soon face the Last Judgment. Chandler, with her blonde hair and ethereal quality, was considered to be perfectly cast, and she received critical praise for her performance.
Chandler did not want to play the role for which she is probably best remembered, Mina in Dracula (1931); she wanted to play Alice in Alice in Wonderland. Nevertheless, Chandler joined David Manners and Bela Lugosi in what became one of the most successful movies made at that time. Chandler appeared with Manners that same year in the Lost Generation celebration of alcohol in Paris, The Last Flight, also starring Richard Barthelmess and John Mack Brown. She achieved more successes in A House Divided (1931) and Christopher Strong (1933), all the while dividing her time among films, radio work, and theater roles in Los Angeles, New York and London.
She starred in British actor Will Hay's 1934 movie, Radio Parade of 1935, and played a role on Lux Radio Theatre in Alibi Ike with Joe E. Brown (1937). Among her later stage successes were Within The Gates in 1934, Pride and Prejudice in 1935, Lady Precious Stream in 1936 with then-husband Bramwell Fletcher, a reprise of her film role in Outward Bound in 1938, and various productions of Boy Meets Girl and Noël Coward's Tonight at 8.30.
Personal life
On February 14, 1935, Chandler married actor Bramwell Fletcher in Riverside Church in New York. She had previously been married to Cyril Hume, whom she divorced in 1934. From February 3, 1943 until her death, she was married to Walter S. Piascik.
By the late 1930s she was battling alcoholism and her acting career declined. She was hospitalized several times but was unable to gain control over her life. In 1950, Chandler was severely burned in an apartment fire, caused by her falling asleep while smoking. She survived but her body was badly disfigured. Her alcoholism continued unabated after the accident.
Death
Chandler died on April 30, 1965, following surgery in Hollywood, California and was cremated according to her wishes. She was survived by her husband, Walter Piascik. Chandler's original inurnment site was the private vault at the Chapel of the Pines Crematory in Los Angeles. After an online fundraising effort led by Hollywood Graveyard YouTube channel creator Arthur Dark,
Chandler's ashes were reinurned in the Cathedral Mausoleum of Hollywood Forever Cemetery on July 13, 2023.
Filmography
The Music Master (1927) as Jenny
The Joy Girl (1927) as Flora
Mother's Boy (1929) as Rose Lyndon
Salute (1929) as Nancy Wayne
The Sky Hawk (1929) as Joan Allan
Rough Romance (1930) as Marna Reynolds
Outward Bound (1930) as Ann
Mothers Cry (1930) as Beattie Williams
Dracula (1931) as Mina Seward
Daybreak (1931) as Laura Taub
Salvation Nell (1931) as Nell Saunders
The Last Flight (1931) as Nikki
Fanny Foley Herself (1931) as Lenore
A House Divided (1931) as Ruth Evans
Vanity Street (1932) as Jeanie Gregg
Behind Jury Doors (1932) as Elsa Lanfield
Christopher Strong (1933) as Monica Strong
Alimony Madness (1933) as Joan Armstrong
Dance Hall Hostess (1933) as Nora Marsh
Goodbye Again (1933) as Elizabeth Clochessy
The Worst Woman in Paris? (1933) as Mary Dunbar
Long Lost Father (1934) as Lindsey Lane
Midnight Alibi (1934) as Abigail 'Abbie' Ardsley as a Girl
Unfinished Symphony (1934) as Emmie Passeuter
Radio Parade of 1935 (1934) as Joan Garland
It's a Bet (1935) as Clare
Mr. Boggs Steps Out (1938) as Oleander Tubbs (final film role)
Renfield (2023) as Mina Seward (posthumous; archive footage)
Notes
References
External links
Helen Chandler Fansite
Van Neste, Dan. "Helen Chandler: Vision of Beauty". Films of the Golden Age, Spring 1998.
Fiore, David. Hypocritic Days Insomniac Press, 2014. Toronto Star review of the novel
1906 births
1965 deaths
American film actresses
American silent film actresses
American stage actresses
Burials at Chapel of the Pines Crematory
Burials at Hollywood Forever Cemetery
Actresses from Charleston, South Carolina
20th-century American actresses
|
James Britton may refer to:
James H. Britton (1817–1900), mayor of St. Louis, Missouri, United States
James Britton (painter) (1878–1936), American painter and art critic
James Clelland Britton (1903–1984), Canadian diplomat in the 1950s and 1960s
Jim Britton (born 1944), American baseball pitcher
James N. Britton (1908–1994), British educator
|
```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
name: virtual-server
spec:
host: virtual-server.example.com
policies:
- name: jwt-policy-valid-multi
- name: jwt-policy-valid
upstreams:
- name: backend2
service: backend2-svc
port: 80
- name: backend1
service: backend1-svc
port: 80
routes:
- path: "/backend1"
action:
pass: backend1
- path: "/backend2"
action:
pass: backend2
```
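The two JWT policies referenced under `policies:` must exist as separate Policy resources in the same namespace. A minimal sketch of one of them, assuming a JWT policy is intended — the realm text and the Secret name `jwt-secret` are placeholders, not part of the manifest above:

```yaml
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: jwt-policy-valid
spec:
  jwt:
    realm: MyApp          # placeholder realm string returned in the WWW-Authenticate header
    secret: jwt-secret    # Secret of type nginx.org/jwk holding the JWK used to validate tokens
```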
|
```c++
#include "rar.hpp"
void HashValue::Init(HASH_TYPE Type)
{
HashValue::Type=Type;
// Zero length data CRC32 is 0. It is important to set it when creating
// headers with no following data like directories or symlinks.
if (Type==HASH_RAR14 || Type==HASH_CRC32)
CRC32=0;
if (Type==HASH_BLAKE2)
{
// The 32 bytes below are the BLAKE2sp hash of empty data. We init the
// structure to this value,
// so if we create a file or service header with no following data like
// "file copy" or "symlink", we set the checksum to proper value avoiding
// additional header type or size checks when extracting.
static byte EmptyHash[32]={
0xdd, 0x0e, 0x89, 0x17, 0x76, 0x93, 0x3f, 0x43,
0xc7, 0xd0, 0x32, 0xb0, 0x8a, 0x91, 0x7e, 0x25,
0x74, 0x1f, 0x8a, 0xa9, 0xa1, 0x2c, 0x12, 0xe1,
0xca, 0xc8, 0x80, 0x15, 0x00, 0xf2, 0xca, 0x4f
};
memcpy(Digest,EmptyHash,sizeof(Digest));
}
}
bool HashValue::operator == (const HashValue &cmp)
{
if (Type==HASH_NONE || cmp.Type==HASH_NONE)
return true;
if (Type==HASH_RAR14 && cmp.Type==HASH_RAR14 ||
Type==HASH_CRC32 && cmp.Type==HASH_CRC32)
return CRC32==cmp.CRC32;
if (Type==HASH_BLAKE2 && cmp.Type==HASH_BLAKE2)
return memcmp(Digest,cmp.Digest,sizeof(Digest))==0;
return false;
}
DataHash::DataHash()
{
blake2ctx=NULL;
HashType=HASH_NONE;
#ifdef RAR_SMP
ThPool=NULL;
MaxThreads=0;
#endif
}
DataHash::~DataHash()
{
#ifdef RAR_SMP
delete ThPool;
#endif
cleandata(&CurCRC32, sizeof(CurCRC32));
if (blake2ctx!=NULL)
{
cleandata(blake2ctx, sizeof(blake2sp_state));
delete blake2ctx;
}
}
void DataHash::Init(HASH_TYPE Type,uint MaxThreads)
{
if (blake2ctx==NULL)
blake2ctx=new blake2sp_state;
HashType=Type;
if (Type==HASH_RAR14)
CurCRC32=0;
if (Type==HASH_CRC32)
CurCRC32=0xffffffff; // Initial CRC32 value.
if (Type==HASH_BLAKE2)
blake2sp_init(blake2ctx);
#ifdef RAR_SMP
DataHash::MaxThreads=Min(MaxThreads,MaxHashThreads);
#endif
}
void DataHash::Update(const void *Data,size_t DataSize)
{
#ifndef SFX_MODULE
if (HashType==HASH_RAR14)
CurCRC32=Checksum14((ushort)CurCRC32,Data,DataSize);
#endif
if (HashType==HASH_CRC32)
CurCRC32=CRC32(CurCRC32,Data,DataSize);
if (HashType==HASH_BLAKE2)
{
#ifdef RAR_SMP
if (MaxThreads>1 && ThPool==NULL)
ThPool=new ThreadPool(BLAKE2_THREADS_NUMBER);
blake2ctx->ThPool=ThPool;
blake2ctx->MaxThreads=MaxThreads;
#endif
blake2sp_update( blake2ctx, (byte *)Data, DataSize);
}
}
void DataHash::Result(HashValue *Result)
{
Result->Type=HashType;
if (HashType==HASH_RAR14)
Result->CRC32=CurCRC32;
if (HashType==HASH_CRC32)
Result->CRC32=CurCRC32^0xffffffff;
if (HashType==HASH_BLAKE2)
{
// Preserve the original context, so we can continue hashing if necessary.
blake2sp_state res=*blake2ctx;
blake2sp_final(&res,Result->Digest);
}
}
uint DataHash::GetCRC32()
{
return HashType==HASH_CRC32 ? CurCRC32^0xffffffff : 0;
}
bool DataHash::Cmp(HashValue *CmpValue,byte *Key)
{
HashValue Final;
Result(&Final);
if (Key!=NULL)
ConvertHashToMAC(&Final,Key);
return Final==*CmpValue;
}
```
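As an aside, the CRC32 path above follows the standard zlib/PKZIP convention: `Init()` starts the running state at `0xFFFFFFFF` and `Result()` XORs the final state with `0xFFFFFFFF`. The same incremental update-then-finalize pattern can be sketched with Python's `zlib.crc32`, which applies that pre- and post-conditioning internally:

```python
import zlib

# Feed data in chunks, passing the running CRC back in, then compare with
# hashing the whole buffer at once -- the incremental and one-shot results
# agree, just as repeated DataHash::Update() calls followed by Result()
# must match hashing the data in a single pass.
crc = 0
for chunk in (b"file ", b"contents"):
    crc = zlib.crc32(chunk, crc)

assert crc == zlib.crc32(b"file contents")

# CRC32 of empty data is 0, which is why HashValue::Init() can simply set
# CRC32=0 for headers with no following data (directories, symlinks).
assert zlib.crc32(b"") == 0
```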
|
```go
// snippet-start:[rds.go.list_parameter_groups]
package main
// snippet-start:[rds.go.list_parameter_groups.imports]
import (
"fmt"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/rds"
)
// snippet-end:[rds.go.list_parameter_groups.imports]
// GetParameterGroups retrieves your Amazon RDS parameter groups
// Inputs:
// sess is the current session, which provides configuration for the SDK's service clients
// Output:
// If success, a list of the parameter groups and nil
// Otherwise, nil and an error from the call to DescribeDBParameterGroups
func GetParameterGroups(sess *session.Session) (*rds.DescribeDBParameterGroupsOutput, error) {
// snippet-start:[rds.go.list_parameter_groups.call]
svc := rds.New(sess)
result, err := svc.DescribeDBParameterGroups(nil)
// snippet-end:[rds.go.list_parameter_groups.call]
if err != nil {
return nil, err
}
return result, nil
}
func main() {
// snippet-start:[rds.go.list_parameter_groups.session]
sess := session.Must(session.NewSessionWithOptions(session.Options{
SharedConfigState: session.SharedConfigEnable,
}))
// snippet-end:[rds.go.list_parameter_groups.session]
result, err := GetParameterGroups(sess)
if err != nil {
fmt.Println("Got an error retrieving parameter groups:")
fmt.Println(err)
return
}
if len(result.DBParameterGroups) < 1 {
fmt.Println("Could not find any parameter groups")
return
}
for _, p := range result.DBParameterGroups {
fmt.Println("* " + *p.DBParameterGroupName + " with description: " + *p.Description)
}
}
// snippet-end:[rds.go.list_parameter_groups]
```
|
```java
/*
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* - Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* - Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
* - Neither the name of the libjpeg-turbo Project nor the names of its
* contributors may be used to endorse or promote products derived from this
* software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS",
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
package org.libjpegturbo.turbojpeg;
import java.awt.image.*;
import java.nio.*;
import java.io.*;
/**
* TurboJPEG compressor
*/
public class TJCompressor implements Closeable {
private static final String NO_ASSOC_ERROR =
"No source image is associated with this instance";
/**
* Create a TurboJPEG compressor instance.
*/
public TJCompressor() throws TJException {
init();
}
/**
* Create a TurboJPEG compressor instance and associate the uncompressed
* source image stored in <code>srcImage</code> with the newly created
* instance.
*
* @param srcImage see {@link #setSourceImage} for description
*
* @param x see {@link #setSourceImage} for description
*
* @param y see {@link #setSourceImage} for description
*
* @param width see {@link #setSourceImage} for description
*
* @param pitch see {@link #setSourceImage} for description
*
* @param height see {@link #setSourceImage} for description
*
* @param pixelFormat pixel format of the source image (one of
* {@link TJ#PF_RGB TJ.PF_*})
*/
public TJCompressor(byte[] srcImage, int x, int y, int width, int pitch,
int height, int pixelFormat) throws TJException {
setSourceImage(srcImage, x, y, width, pitch, height, pixelFormat);
}
/**
* @deprecated Use
* {@link #TJCompressor(byte[], int, int, int, int, int, int)} instead.
*/
@SuppressWarnings("checkstyle:JavadocMethod")
@Deprecated
public TJCompressor(byte[] srcImage, int width, int pitch, int height,
int pixelFormat) throws TJException {
setSourceImage(srcImage, width, pitch, height, pixelFormat);
}
/**
* Create a TurboJPEG compressor instance and associate the uncompressed
* source image stored in <code>srcImage</code> with the newly created
* instance.
*
* @param srcImage see
* {@link #setSourceImage(BufferedImage, int, int, int, int)} for description
*
* @param x see
* {@link #setSourceImage(BufferedImage, int, int, int, int)} for description
*
* @param y see
* {@link #setSourceImage(BufferedImage, int, int, int, int)} for description
*
* @param width see
* {@link #setSourceImage(BufferedImage, int, int, int, int)} for description
*
* @param height see
* {@link #setSourceImage(BufferedImage, int, int, int, int)} for description
*/
public TJCompressor(BufferedImage srcImage, int x, int y, int width,
int height) throws TJException {
setSourceImage(srcImage, x, y, width, height);
}
/**
* Associate an uncompressed RGB, grayscale, or CMYK source image with this
* compressor instance.
*
* @param srcImage image buffer containing RGB, grayscale, or CMYK pixels to
* be compressed or encoded. This buffer is not modified.
*
* @param x x offset (in pixels) of the region in the source image from which
* the JPEG or YUV image should be compressed/encoded
*
* @param y y offset (in pixels) of the region in the source image from which
* the JPEG or YUV image should be compressed/encoded
*
* @param width width (in pixels) of the region in the source image from
* which the JPEG or YUV image should be compressed/encoded
*
* @param pitch bytes per line of the source image. Normally, this should be
* <code>width * TJ.pixelSize(pixelFormat)</code> if the source image is
* unpadded, but you can use this parameter to, for instance, specify that
* the scanlines in the source image are padded to a 4-byte boundary or to
* compress/encode a JPEG or YUV image from a region of a larger source
* image. You can also be clever and use this parameter to skip lines, etc.
* Setting this parameter to 0 is the equivalent of setting it to
* <code>width * TJ.pixelSize(pixelFormat)</code>.
*
* @param height height (in pixels) of the region in the source image from
* which the JPEG or YUV image should be compressed/encoded
*
* @param pixelFormat pixel format of the source image (one of
* {@link TJ#PF_RGB TJ.PF_*})
*/
public void setSourceImage(byte[] srcImage, int x, int y, int width,
int pitch, int height, int pixelFormat)
throws TJException {
if (handle == 0) init();
if (srcImage == null || x < 0 || y < 0 || width < 1 || height < 1 ||
pitch < 0 || pixelFormat < 0 || pixelFormat >= TJ.NUMPF)
throw new IllegalArgumentException("Invalid argument in setSourceImage()");
srcBuf = srcImage;
srcWidth = width;
if (pitch == 0)
srcPitch = width * TJ.getPixelSize(pixelFormat);
else
srcPitch = pitch;
srcHeight = height;
srcPixelFormat = pixelFormat;
srcX = x;
srcY = y;
srcBufInt = null;
srcYUVImage = null;
}
/**
* @deprecated Use
* {@link #setSourceImage(byte[], int, int, int, int, int, int)} instead.
*/
@SuppressWarnings("checkstyle:JavadocMethod")
@Deprecated
public void setSourceImage(byte[] srcImage, int width, int pitch,
int height, int pixelFormat) throws TJException {
setSourceImage(srcImage, 0, 0, width, pitch, height, pixelFormat);
srcX = srcY = -1;
}
/**
* Associate an uncompressed RGB or grayscale source image with this
* compressor instance.
*
* @param srcImage a <code>BufferedImage</code> instance containing RGB or
* grayscale pixels to be compressed or encoded. This image is not modified.
*
* @param x x offset (in pixels) of the region in the source image from which
* the JPEG or YUV image should be compressed/encoded
*
* @param y y offset (in pixels) of the region in the source image from which
* the JPEG or YUV image should be compressed/encoded
*
* @param width width (in pixels) of the region in the source image from
* which the JPEG or YUV image should be compressed/encoded (0 = use the
* width of the source image)
*
* @param height height (in pixels) of the region in the source image from
* which the JPEG or YUV image should be compressed/encoded (0 = use the
* height of the source image)
*/
public void setSourceImage(BufferedImage srcImage, int x, int y, int width,
int height) throws TJException {
if (handle == 0) init();
if (srcImage == null || x < 0 || y < 0 || width < 0 || height < 0)
throw new IllegalArgumentException("Invalid argument in setSourceImage()");
srcX = x;
srcY = y;
srcWidth = (width == 0) ? srcImage.getWidth() : width;
srcHeight = (height == 0) ? srcImage.getHeight() : height;
if (x + width > srcImage.getWidth() || y + height > srcImage.getHeight())
throw new IllegalArgumentException("Compression region exceeds the bounds of the source image");
int pixelFormat;
boolean intPixels = false;
if (byteOrder == null)
byteOrder = ByteOrder.nativeOrder();
switch (srcImage.getType()) {
case BufferedImage.TYPE_3BYTE_BGR:
pixelFormat = TJ.PF_BGR; break;
case BufferedImage.TYPE_4BYTE_ABGR:
case BufferedImage.TYPE_4BYTE_ABGR_PRE:
pixelFormat = TJ.PF_XBGR; break;
case BufferedImage.TYPE_BYTE_GRAY:
pixelFormat = TJ.PF_GRAY; break;
case BufferedImage.TYPE_INT_BGR:
if (byteOrder == ByteOrder.BIG_ENDIAN)
pixelFormat = TJ.PF_XBGR;
else
pixelFormat = TJ.PF_RGBX;
intPixels = true; break;
case BufferedImage.TYPE_INT_RGB:
case BufferedImage.TYPE_INT_ARGB:
case BufferedImage.TYPE_INT_ARGB_PRE:
if (byteOrder == ByteOrder.BIG_ENDIAN)
pixelFormat = TJ.PF_XRGB;
else
pixelFormat = TJ.PF_BGRX;
intPixels = true; break;
default:
throw new IllegalArgumentException("Unsupported BufferedImage format");
}
srcPixelFormat = pixelFormat;
WritableRaster wr = srcImage.getRaster();
if (intPixels) {
SinglePixelPackedSampleModel sm =
(SinglePixelPackedSampleModel)srcImage.getSampleModel();
srcStride = sm.getScanlineStride();
DataBufferInt db = (DataBufferInt)wr.getDataBuffer();
srcBufInt = db.getData();
srcBuf = null;
} else {
ComponentSampleModel sm =
(ComponentSampleModel)srcImage.getSampleModel();
int pixelSize = sm.getPixelStride();
if (pixelSize != TJ.getPixelSize(pixelFormat))
throw new IllegalArgumentException("Inconsistency between pixel format and pixel size in BufferedImage");
srcPitch = sm.getScanlineStride();
DataBufferByte db = (DataBufferByte)wr.getDataBuffer();
srcBuf = db.getData();
srcBufInt = null;
}
srcYUVImage = null;
}
/**
* Associate an uncompressed YUV planar source image with this compressor
* instance.
*
* @param srcImage YUV planar image to be compressed. This image is not
* modified.
*/
public void setSourceImage(YUVImage srcImage) throws TJException {
if (handle == 0) init();
if (srcImage == null)
throw new IllegalArgumentException("Invalid argument in setSourceImage()");
srcYUVImage = srcImage;
srcBuf = null;
srcBufInt = null;
}
/**
* Set the level of chrominance subsampling for subsequent compress/encode
* operations. When pixels are converted from RGB to YCbCr (see
* {@link TJ#CS_YCbCr}) or from CMYK to YCCK (see {@link TJ#CS_YCCK}) as part
* of the JPEG compression process, some of the Cb and Cr (chrominance)
* components can be discarded or averaged together to produce a smaller
* image with little perceptible loss of image clarity (the human eye is more
* sensitive to small changes in brightness than to small changes in color.)
* This is called "chrominance subsampling".
* <p>
* NOTE: This method has no effect when compressing a JPEG image from a YUV
* planar source. In that case, the level of chrominance subsampling in
* the JPEG image is determined by the source. Furthermore, this method has
* no effect when encoding to a pre-allocated {@link YUVImage} instance. In
* that case, the level of chrominance subsampling is determined by the
* destination.
*
   * @param newSubsamp the level of chrominance subsampling to use in
   * subsequent compress/encode operations (one of
   * {@link TJ#SAMP_444 TJ.SAMP_*})
*/
public void setSubsamp(int newSubsamp) {
if (newSubsamp < 0 || newSubsamp >= TJ.NUMSAMP)
throw new IllegalArgumentException("Invalid argument in setSubsamp()");
subsamp = newSubsamp;
}
/**
* Set the JPEG image quality level for subsequent compress operations.
*
* @param quality the new JPEG image quality level (1 to 100, 1 = worst,
* 100 = best)
*/
public void setJPEGQuality(int quality) {
if (quality < 1 || quality > 100)
throw new IllegalArgumentException("Invalid argument in setJPEGQuality()");
jpegQuality = quality;
}
/**
* Compress the uncompressed source image associated with this compressor
* instance and output a JPEG image to the given destination buffer.
*
* @param dstBuf buffer that will receive the JPEG image. Use
* {@link TJ#bufSize} to determine the maximum size for this buffer based on
* the source image's width and height and the desired level of chrominance
* subsampling.
*
* @param flags the bitwise OR of one or more of
* {@link TJ#FLAG_BOTTOMUP TJ.FLAG_*}
*/
public void compress(byte[] dstBuf, int flags) throws TJException {
if (dstBuf == null || flags < 0)
throw new IllegalArgumentException("Invalid argument in compress()");
if (srcBuf == null && srcBufInt == null && srcYUVImage == null)
throw new IllegalStateException(NO_ASSOC_ERROR);
if (jpegQuality < 0)
throw new IllegalStateException("JPEG Quality not set");
if (subsamp < 0 && srcYUVImage == null)
throw new IllegalStateException("Subsampling level not set");
if (srcYUVImage != null)
compressedSize = compressFromYUV(srcYUVImage.getPlanes(),
srcYUVImage.getOffsets(),
srcYUVImage.getWidth(),
srcYUVImage.getStrides(),
srcYUVImage.getHeight(),
srcYUVImage.getSubsamp(),
dstBuf, jpegQuality, flags);
else if (srcBuf != null) {
if (srcX >= 0 && srcY >= 0)
compressedSize = compress(srcBuf, srcX, srcY, srcWidth, srcPitch,
srcHeight, srcPixelFormat, dstBuf, subsamp,
jpegQuality, flags);
else
compressedSize = compress(srcBuf, srcWidth, srcPitch, srcHeight,
srcPixelFormat, dstBuf, subsamp, jpegQuality,
flags);
} else if (srcBufInt != null) {
if (srcX >= 0 && srcY >= 0)
compressedSize = compress(srcBufInt, srcX, srcY, srcWidth, srcStride,
srcHeight, srcPixelFormat, dstBuf, subsamp,
jpegQuality, flags);
else
compressedSize = compress(srcBufInt, srcWidth, srcStride, srcHeight,
srcPixelFormat, dstBuf, subsamp, jpegQuality,
flags);
}
}
/**
* Compress the uncompressed source image associated with this compressor
* instance and return a buffer containing a JPEG image.
*
* @param flags the bitwise OR of one or more of
* {@link TJ#FLAG_BOTTOMUP TJ.FLAG_*}
*
* @return a buffer containing a JPEG image. The length of this buffer will
* not be equal to the size of the JPEG image. Use {@link
* #getCompressedSize} to obtain the size of the JPEG image.
*/
public byte[] compress(int flags) throws TJException {
byte[] buf;
if (srcYUVImage != null) {
buf = new byte[TJ.bufSize(srcYUVImage.getWidth(),
srcYUVImage.getHeight(),
srcYUVImage.getSubsamp())];
} else {
checkSourceImage();
buf = new byte[TJ.bufSize(srcWidth, srcHeight, subsamp)];
}
compress(buf, flags);
return buf;
}
/**
* @deprecated Use
* {@link #setSourceImage(BufferedImage, int, int, int, int)} and
* {@link #compress(byte[], int)} instead.
*/
@SuppressWarnings("checkstyle:JavadocMethod")
@Deprecated
public void compress(BufferedImage srcImage, byte[] dstBuf, int flags)
throws TJException {
setSourceImage(srcImage, 0, 0, 0, 0);
compress(dstBuf, flags);
}
/**
* @deprecated Use
* {@link #setSourceImage(BufferedImage, int, int, int, int)} and
* {@link #compress(int)} instead.
*/
@SuppressWarnings("checkstyle:JavadocMethod")
@Deprecated
public byte[] compress(BufferedImage srcImage, int flags)
throws TJException {
setSourceImage(srcImage, 0, 0, 0, 0);
return compress(flags);
}
/**
* Encode the uncompressed source image associated with this compressor
* instance into a YUV planar image and store it in the given
* <code>YUVImage</code> instance. This method uses the accelerated color
* conversion routines in TurboJPEG's underlying codec but does not execute
* any of the other steps in the JPEG compression process. Encoding
* CMYK source images to YUV is not supported.
*
* @param dstImage {@link YUVImage} instance that will receive the YUV planar
* image
*
* @param flags the bitwise OR of one or more of
* {@link TJ#FLAG_BOTTOMUP TJ.FLAG_*}
*/
public void encodeYUV(YUVImage dstImage, int flags) throws TJException {
if (dstImage == null || flags < 0)
throw new IllegalArgumentException("Invalid argument in encodeYUV()");
if (srcBuf == null && srcBufInt == null)
throw new IllegalStateException(NO_ASSOC_ERROR);
if (srcYUVImage != null)
throw new IllegalStateException("Source image is not correct type");
checkSubsampling();
if (srcWidth != dstImage.getWidth() || srcHeight != dstImage.getHeight())
throw new IllegalStateException("Destination image is the wrong size");
if (srcBufInt != null) {
encodeYUV(srcBufInt, srcX, srcY, srcWidth, srcStride, srcHeight,
srcPixelFormat, dstImage.getPlanes(), dstImage.getOffsets(),
dstImage.getStrides(), dstImage.getSubsamp(), flags);
} else {
encodeYUV(srcBuf, srcX, srcY, srcWidth, srcPitch, srcHeight,
srcPixelFormat, dstImage.getPlanes(), dstImage.getOffsets(),
dstImage.getStrides(), dstImage.getSubsamp(), flags);
}
compressedSize = 0;
}
/**
* @deprecated Use {@link #encodeYUV(YUVImage, int)} instead.
*/
@SuppressWarnings("checkstyle:JavadocMethod")
@Deprecated
public void encodeYUV(byte[] dstBuf, int flags) throws TJException {
if (dstBuf == null)
throw new IllegalArgumentException("Invalid argument in encodeYUV()");
checkSourceImage();
checkSubsampling();
YUVImage dstYUVImage = new YUVImage(dstBuf, srcWidth, 4, srcHeight,
subsamp);
encodeYUV(dstYUVImage, flags);
}
/**
* Encode the uncompressed source image associated with this compressor
* instance into a unified YUV planar image buffer and return a
* <code>YUVImage</code> instance containing the encoded image. This method
* uses the accelerated color conversion routines in TurboJPEG's underlying
* codec but does not execute any of the other steps in the JPEG compression
* process. Encoding CMYK source images to YUV is not supported.
*
* @param pad the width of each line in each plane of the YUV image will be
* padded to the nearest multiple of this number of bytes (must be a power of
* 2.)
*
* @param flags the bitwise OR of one or more of
* {@link TJ#FLAG_BOTTOMUP TJ.FLAG_*}
*
* @return a YUV planar image.
*/
public YUVImage encodeYUV(int pad, int flags) throws TJException {
checkSourceImage();
checkSubsampling();
if (pad < 1 || ((pad & (pad - 1)) != 0))
throw new IllegalStateException("Invalid argument in encodeYUV()");
YUVImage dstYUVImage = new YUVImage(srcWidth, pad, srcHeight, subsamp);
encodeYUV(dstYUVImage, flags);
return dstYUVImage;
}
/**
* Encode the uncompressed source image associated with this compressor
* instance into separate Y, U (Cb), and V (Cr) image planes and return a
* <code>YUVImage</code> instance containing the encoded image planes. This
* method uses the accelerated color conversion routines in TurboJPEG's
* underlying codec but does not execute any of the other steps in the JPEG
* compression process. Encoding CMYK source images to YUV is not supported.
*
* @param strides an array of integers, each specifying the number of bytes
* per line in the corresponding plane of the output image. Setting the
* stride for any plane to 0 is the same as setting it to the component width
* of the plane. If <code>strides</code> is null, then the strides for all
* planes will be set to their respective component widths. You can adjust
* the strides in order to add an arbitrary amount of line padding to each
* plane.
*
* @param flags the bitwise OR of one or more of
* {@link TJ#FLAG_BOTTOMUP TJ.FLAG_*}
*
* @return a YUV planar image.
*/
public YUVImage encodeYUV(int[] strides, int flags) throws TJException {
checkSourceImage();
checkSubsampling();
YUVImage dstYUVImage = new YUVImage(srcWidth, strides, srcHeight, subsamp);
encodeYUV(dstYUVImage, flags);
return dstYUVImage;
}
/**
* @deprecated Use {@link #encodeYUV(int, int)} instead.
*/
@SuppressWarnings("checkstyle:JavadocMethod")
@Deprecated
public byte[] encodeYUV(int flags) throws TJException {
checkSourceImage();
checkSubsampling();
YUVImage dstYUVImage = new YUVImage(srcWidth, 4, srcHeight, subsamp);
encodeYUV(dstYUVImage, flags);
return dstYUVImage.getBuf();
}
/**
* @deprecated Use
* {@link #setSourceImage(BufferedImage, int, int, int, int)} and
* {@link #encodeYUV(byte[], int)} instead.
*/
@SuppressWarnings("checkstyle:JavadocMethod")
@Deprecated
public void encodeYUV(BufferedImage srcImage, byte[] dstBuf, int flags)
throws TJException {
setSourceImage(srcImage, 0, 0, 0, 0);
encodeYUV(dstBuf, flags);
}
/**
* @deprecated Use
* {@link #setSourceImage(BufferedImage, int, int, int, int)} and
* {@link #encodeYUV(int, int)} instead.
*/
@SuppressWarnings("checkstyle:JavadocMethod")
@Deprecated
public byte[] encodeYUV(BufferedImage srcImage, int flags)
throws TJException {
setSourceImage(srcImage, 0, 0, 0, 0);
return encodeYUV(flags);
}
/**
* Returns the size of the image (in bytes) generated by the most recent
* compress operation.
*
* @return the size of the image (in bytes) generated by the most recent
* compress operation.
*/
public int getCompressedSize() {
return compressedSize;
}
/**
* Free the native structures associated with this compressor instance.
*/
@Override
public void close() throws TJException {
if (handle != 0)
destroy();
}
@SuppressWarnings("checkstyle:DesignForExtension")
@Override
protected void finalize() throws Throwable {
try {
close();
} catch (TJException e) {
} finally {
super.finalize();
}
  }
private native void init() throws TJException;
private native void destroy() throws TJException;
// JPEG size in bytes is returned
@SuppressWarnings("checkstyle:HiddenField")
@Deprecated
private native int compress(byte[] srcBuf, int width, int pitch,
int height, int pixelFormat, byte[] jpegBuf, int jpegSubsamp, int jpegQual,
int flags) throws TJException;
@SuppressWarnings("checkstyle:HiddenField")
private native int compress(byte[] srcBuf, int x, int y, int width,
int pitch, int height, int pixelFormat, byte[] jpegBuf, int jpegSubsamp,
int jpegQual, int flags) throws TJException;
@SuppressWarnings("checkstyle:HiddenField")
@Deprecated
private native int compress(int[] srcBuf, int width, int stride,
int height, int pixelFormat, byte[] jpegBuf, int jpegSubsamp, int jpegQual,
int flags) throws TJException;
@SuppressWarnings("checkstyle:HiddenField")
private native int compress(int[] srcBuf, int x, int y, int width,
int stride, int height, int pixelFormat, byte[] jpegBuf, int jpegSubsamp,
int jpegQual, int flags) throws TJException;
@SuppressWarnings("checkstyle:HiddenField")
private native int compressFromYUV(byte[][] srcPlanes, int[] srcOffsets,
int width, int[] srcStrides, int height, int subsamp, byte[] jpegBuf,
int jpegQual, int flags)
throws TJException;
@SuppressWarnings("checkstyle:HiddenField")
@Deprecated
private native void encodeYUV(byte[] srcBuf, int width, int pitch,
int height, int pixelFormat, byte[] dstBuf, int subsamp, int flags)
throws TJException;
@SuppressWarnings("checkstyle:HiddenField")
private native void encodeYUV(byte[] srcBuf, int x, int y, int width,
int pitch, int height, int pixelFormat, byte[][] dstPlanes,
int[] dstOffsets, int[] dstStrides, int subsamp, int flags)
throws TJException;
@SuppressWarnings("checkstyle:HiddenField")
@Deprecated
private native void encodeYUV(int[] srcBuf, int width, int stride,
int height, int pixelFormat, byte[] dstBuf, int subsamp, int flags)
throws TJException;
@SuppressWarnings("checkstyle:HiddenField")
private native void encodeYUV(int[] srcBuf, int x, int y, int width,
int srcStride, int height, int pixelFormat, byte[][] dstPlanes,
int[] dstOffsets, int[] dstStrides, int subsamp, int flags)
throws TJException;
static {
TJLoader.load();
}
private void checkSourceImage() {
if (srcWidth < 1 || srcHeight < 1)
throw new IllegalStateException(NO_ASSOC_ERROR);
}
private void checkSubsampling() {
if (subsamp < 0)
throw new IllegalStateException("Subsampling level not set");
}
private long handle = 0;
private byte[] srcBuf = null;
private int[] srcBufInt = null;
private int srcWidth = 0;
private int srcHeight = 0;
private int srcX = -1;
private int srcY = -1;
private int srcPitch = 0;
private int srcStride = 0;
private int srcPixelFormat = -1;
private YUVImage srcYUVImage = null;
private int subsamp = -1;
private int jpegQuality = -1;
private int compressedSize = 0;
private int yuvPad = 4;
private ByteOrder byteOrder = null;
}
```
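Two conventions from the class above are easy to get wrong: passing `pitch == 0` to `setSourceImage()` makes it default to `width * TJ.pixelSize(pixelFormat)`, and the `pad` argument to `encodeYUV(int, int)` must be a power of 2. The sketch below mirrors that logic in standalone form; `PitchPadDemo`, `effectivePitch`, and `isValidPad` are hypothetical helpers for illustration, not part of the TurboJPEG API.

```java
// Standalone sketch (hypothetical helpers, not part of the TurboJPEG API)
// illustrating two conventions from TJCompressor: pitch == 0 defaults to
// width * pixelSize, and the YUV pad must be a power of 2.
public class PitchPadDemo {
  // Mirrors the pitch defaulting in setSourceImage().
  static int effectivePitch(int width, int pixelSize, int pitch) {
    return (pitch == 0) ? width * pixelSize : pitch;
  }

  // Mirrors the power-of-two check in encodeYUV(int pad, int flags).
  static boolean isValidPad(int pad) {
    return pad >= 1 && (pad & (pad - 1)) == 0;
  }

  public static void main(String[] args) {
    // 640-pixel-wide RGB image (3 bytes per pixel), unpadded:
    System.out.println(effectivePitch(640, 3, 0));    // 1920
    // Caller-specified pitch (e.g. scanlines padded to 4 bytes) wins:
    System.out.println(effectivePitch(638, 3, 1916)); // 1916
    System.out.println(isValidPad(4));  // true
    System.out.println(isValidPad(3));  // false
  }
}
```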
|
```java
`ArrayList` vs `LinkedList`
Collections vs arrays
Multidimensional array declaration
Do not attempt comparisons with NaN
Numeric Conversion - Widening
```
|
Text M of the rongorongo corpus, the larger of two tablets in Vienna and therefore also known as the Large or Great Vienna tablet, is one of two dozen surviving rongorongo texts.
Other names
M is the standard designation, from Barthel (1958). Fischer (1997) refers to it as RR24.
Location
Museum für Völkerkunde, Vienna. Catalog # 22869.
There is a reproduction in the Musée de l'Homme, Paris.
Description
M, a rotted unfluted tablet of Pacific rosewood (Orliac 2005) measuring 28.4 × 13.7 × 2.5 cm, is one of the rongorongo tablets in the worst condition. It evidently lay on side b in damp soil, probably in a cave, for many years. The edges are rotted and the surfaces worm-eaten. Fischer suggests that the gashes on the top and side may have been intentional, for lashings.
Provenance
In 1882 an archaeological expedition aboard the SMS Hyäne visited Easter Island, and captain Wilhelm Geiseler purchased two tablets. The purchase had been arranged by Schlubach, the German consul in Valparaíso, at the request of Adolf Bastian, the director of the Königliches Museum für Völkerkunde in Berlin. The tablets were given to the uncle of Schlubach's wife, Alexander Salmon, Jr, who then shipped three tablets, M, N, and O, to Schlubach. Several years later, when Schlubach returned to Hamburg, he sent just one of the tablets to Bastian and sold the other two privately to the Hamburg firm "Klée und Kocher". They were then sold to the Austrian Vice-Consul in Hamburg, Heinrich Freiherr von Westenholz, who donated them to Vienna's Museum für Völkerkunde in 1886.
Alexander Salmon, Jr, the manager of the Brander plantations on Easter Island who had transcribed and (poorly) translated the 'readings' that Jaussen obtained for his texts, encouraged the manufacture of Rapanui artworks, and several scholars have therefore questioned the authenticity of the tablets he handled. However, Salmon never presented them as authentic, and Fischer accepts this text as genuine.
Content
Although there is little text to go on, Fischer reports that line Mr2 shares two sequences of glyphs with Gr2 of the Small Santiago tablet, which he suggests may have been the model for the Large Vienna.
Text
Nine lines of ~ 120 glyphs are visible on side a; side b is 'destroyed'. Fischer says that M apparently once held c. 11 lines of text on either side, much like "Mamari". Line a7 has 'faint traces' of glyphs which Fischer believes might be recoverable with electronic imaging.
Fischer also reports (p. 398) that M has suffered recent damage:
It is sad to have to report that between 1933, when Paul Rivet had a plaster cast (M.H. 33.79.2) of this tablet made for his Musée d'Ethnologie (now Musée de l'Homme) in Paris, and 1992, when I inspected the original in Vienna, great damage had occurred. The sequence is now missing from the middle top line of the tablet […]. What is far worse, however: someone has also apparently intentionally removed a piece from the tablet that contained parts of two lines of glyphs. Some 13 elements of [M]a1 and three of [M]a2—a fragment c. 7 × 2 cm in size—are now missing.
References
BARTHEL, Thomas S. 1958. Grundlagen zur Entzifferung der Osterinselschrift (Bases for the Decipherment of the Easter Island Script). Hamburg : Cram, de Gruyter.
FISCHER, Steven Roger. 1997. RongoRongo, the Easter Island Script: History, Traditions, Texts. Oxford and N.Y.: Oxford University Press.
ORLIAC, Catherine. 2005. "The Rongorongo Tablets from Easter Island: Botanical Identification and 14C Dating." Archaeology in Oceania 40.3.
External links
Barthel's coding of text M
Rongorongo inscriptions
|
These are the Billboard magazine R&B albums that reached number-one in 1970.
Chart history
See also
1970 in music
R&B number-one hits of 1970 (USA)
United States R&B albums
1970
|
Gekko intermedium, also known as the intermediate flying gecko or Philippine flying gecko, is a species of gecko. It is endemic to the Philippines.
References
Gekko
Reptiles of the Philippines
Endemic fauna of the Philippines
Reptiles described in 1915
Taxa named by Edward Harrison Taylor
|
```c
/* packet-xtp.h
* Routines for Xpress Transport Protocol dissection
*
* $Id: packet-xtp.h 25116 2008-04-19 09:19:32Z stig $
*
* Wireshark - Network traffic analyzer
* By Gerald Combs <gerald@wireshark.org>
*
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef __PACKET_XTP_H__
#define __PACKET_XTP_H__
#define XTP_VERSION_4 0x001
/* XTP type of Service */
#define XTP_TOS_UNSPEC 0
#define XTP_TOS_UNACKED_DGRAM 1
#define XTP_TOS_ACKED_DGRAM 2
#define XTP_TOS_TRANS 3
#define XTP_TOS_UNICAST_STREAM 4
#define XTP_TOS_UNACKED_MULTICAST_STREAM 5
#define XTP_TOS_MULTICAST_STREAM 6
/* Address Format */
#define XTP_ADDR_NULL 0
#define XTP_ADDR_IP 1
#define XTP_ADDR_ISO 2
#define XTP_ADDR_XEROX 3
#define XTP_ADDR_IPX 4
#define XTP_ADDR_LOCAL 5
#define XTP_ADDR_IP6 6
/* packet type */
#define XTP_DATA_PKT 0
#define XTP_CNTL_PKT 1
#define XTP_FIRST_PKT 2
#define XTP_ECNTL_PKT 3
#define XTP_TCNTL_PKT 5
#define XTP_JOIN_PKT 6
#define XTP_JCNTL_PKT 7
#define XTP_DIAG_PKT 8
/* cmd options mask */
#define XTP_CMD_OPTIONS_NOCHECK 0x400000
#define XTP_CMD_OPTIONS_EDGE 0x200000
#define XTP_CMD_OPTIONS_NOERR 0x100000
#define XTP_CMD_OPTIONS_MULTI 0x080000
#define XTP_CMD_OPTIONS_RES 0x040000
#define XTP_CMD_OPTIONS_SORT 0x020000
#define XTP_CMD_OPTIONS_NOFLOW 0x010000
#define XTP_CMD_OPTIONS_FASTNAK 0x008000
#define XTP_CMD_OPTIONS_SREQ 0x004000
#define XTP_CMD_OPTIONS_DREQ 0x002000
#define XTP_CMD_OPTIONS_RCLOSE 0x001000
#define XTP_CMD_OPTIONS_WCLOSE 0x000800
#define XTP_CMD_OPTIONS_EOM 0x000400
#define XTP_CMD_OPTIONS_END 0x000200
#define XTP_CMD_OPTIONS_BTAG 0x000100
#define XTP_KEY_RTN ((guint64)1<<63)
/** packet structures definition **/
struct xtp_cntl {
guint64 rseq;
guint64 alloc;
guint32 echo;
};
#define XTP_CNTL_PKT_LEN 20
struct xtp_ecntl {
guint64 rseq;
guint64 alloc;
guint32 echo;
guint32 nspan;
};
#define MIN_XTP_ECNTL_PKT_LEN 24
struct xtp_traffic_cntl {
guint64 rseq;
guint64 alloc;
guint32 echo;
guint32 rsvd;
guint64 xkey;
};
#define XTP_TRAFFIC_CNTL_LEN 32
/* tformat = 0x00 */
struct xtp_traffic_spec0 {
guint16 tlen;
guint8 service;
guint8 tformat;
guint32 none;
};
#define XTP_TRAFFIC_SPEC0_LEN 8
/* tformat = 0x01 */
struct xtp_traffic_spec1 {
guint16 tlen;
guint8 service;
guint8 tformat;
guint32 maxdata;
guint32 inrate;
guint32 inburst;
guint32 outrate;
guint32 outburst;
};
#define XTP_TRAFFIC_SPEC1_LEN 24
struct xtp_ip_addr_seg {
guint16 alen;
guint8 adomain;
guint8 aformat;
guint32 dsthost;
guint32 srchost;
guint16 dstport;
guint16 srcport;
};
#define XTP_IP_ADDR_SEG_LEN 16
#define XTP_NULL_ADDR_SEG_LEN 8
struct xtp_diag {
guint32 code;
guint32 val;
gchar *msg;
};
#define XTP_DIAG_PKT_HEADER_LEN 8
struct xtphdr {
guint64 key;
guint32 cmd;
guint32 cmd_options; /* 24 bits */
guint8 cmd_ptype;
guint8 cmd_ptype_ver; /* 3 bits */
guint8 cmd_ptype_pformat; /* 5 bits */
guint32 dlen;
guint16 check;
guint16 sort;
guint32 sync;
guint64 seq;
};
#define XTP_HEADER_LEN 32
#endif /* __PACKET_XTP_H__ */
```
|
Nicholas Channing DiGiovanni (born May 19, 1996) is an American celebrity chef, internet personality, and entertainer. As of March 2023, DiGiovanni has over 25 million followers across his social media accounts, including YouTube, TikTok, and Instagram.
He is the youngest finalist ever on MasterChef, placing third when he competed at 22 years old.
Early life
DiGiovanni was born on May 19, 1996, in Barrington, Rhode Island, to Chris and Susan DiGiovanni (née Naimi). He is of Persian, Italian, German, and British descent. He is the oldest of four brothers. He developed a passion for food at a young age by watching his grandmother and great-grandmother cook meals for the family.
Education
DiGiovanni attended high school at Milton Academy, where he graduated cum laude. He served as co-head of the Milton Academy Community Service program.
After graduating from high school, DiGiovanni went to Harvard University in Massachusetts. At Harvard, DiGiovanni created his own concentration called "Food and Climate". As part of his coursework, he attended lectures taught by Massimo Bottura, Grant Achatz and José Andrés.
For his senior year, DiGiovanni analyzed data on carbon emissions in 36 international restaurants from Singapore to San Francisco. His thesis was advised by American author and journalist Michael Pollan.
DiGiovanni was accepted to Harvard Business School through the 2+2 deferral program. In January 2023, he informed Harvard that he did not plan to matriculate.
Career
Television
During his senior year of college, DiGiovanni attended a casting call for season 10 of MasterChef. He was selected to compete on the show and finished in third place. To film the show, DiGiovanni reportedly left in the middle of the term at Harvard without informing his professors. He returned the next season as a mentor for finalists.
Social media
After completing MasterChef and graduating from Harvard, DiGiovanni began to post cooking videos on YouTube. In his first-ever YouTube video, DiGiovanni cooked the dessert he would have made in the MasterChef finale, which has over 3.5 million views as of August 2023. He then began to regularly post videos of him cooking different foods. His YouTube channel has over 11 million subscribers as of July 2023.
DiGiovanni has partnered with brands including Nutella, Amazon, Walmart, and Kinder Bueno.
Philanthropy
DiGiovanni is the Lead Ambassador for The Farmlink Project. Additionally, in 2021, DiGiovanni participated in the #TeamSeas campaign founded by YouTuber MrBeast, which raised $30 million to remove trash from the ocean. In 2022, he partnered with Chipotle to donate 20 million pounds of food.
Accolades
DiGiovanni has been featured in several publications, including Today, Good Morning America, and Harper's Bazaar. In 2021, DiGiovanni was included in the Forbes 30 Under 30 list for Food and Drink. The same year, DiGiovanni won YouTube's Streamy Award for Food, an award recognizing the world's top food creator of the year. He was a Webby Award recipient in 2022.
In November 2021, DiGiovanni (with Lynja) broke the Guinness World Record for the largest ever cake pop, which weighed .
In June 2022, he broke another Guinness World Record for the largest ever chicken nugget, which weighed . In August, he visited the most fast food restaurants in 24 hours (69 restaurants). In October, he beat Gordon Ramsay's record for the fastest time to fillet a fish by five seconds; completing it in 1 minute exactly. On the same day he constructed the world's largest sushi roll, which measured at in diameter. In November, he created the largest fortune cookie at and made the largest donation of turkey in 24 hours ( - roughly 7,620 turkeys).
On May 11, 2023, DiGiovanni, alongside Ramsay, broke the Guinness World Record for the largest Beef Wellington, which weighed . This was his ninth Guinness World Record and was broken in partnership with celebrity chefs Ramsay, Max the Meat Guy, Guga Foods and The Golden Balance.
On June 13, 2023, DiGiovanni released his debut cookbook, Knife Drop. It debuted at #1 on the New York Times Bestsellers list and remained on the list for five consecutive weeks.
Awards and nominations
Filmography and bibliography
Filmography
Bibliography
References
External links
YouTube channel
American TikTokers
American YouTubers
1996 births
Living people
Milton Academy alumni
Harvard University alumni
People from Providence, Rhode Island
American television chefs
Chefs from Massachusetts
American people of Italian descent
American people of Iranian descent
American male sailors (sport)
Harvard Crimson athletes
Harvard Crimson sailing
Streamy Award winners
Chefs from Rhode Island
YouTubers from Rhode Island
Reality cooking competition contestants
|
```go
package ansiterm
type AnsiEventHandler interface {
// Print
Print(b byte) error
// Execute C0 commands
Execute(b byte) error
// CUrsor Up
CUU(int) error
// CUrsor Down
CUD(int) error
// CUrsor Forward
CUF(int) error
// CUrsor Backward
CUB(int) error
// Cursor to Next Line
CNL(int) error
// Cursor to Previous Line
CPL(int) error
// Cursor Horizontal position Absolute
CHA(int) error
// Vertical line Position Absolute
VPA(int) error
// CUrsor Position
CUP(int, int) error
// Horizontal and Vertical Position (depends on PUM)
HVP(int, int) error
// Text Cursor Enable Mode
DECTCEM(bool) error
// Origin Mode
DECOM(bool) error
// 132 Column Mode
DECCOLM(bool) error
// Erase in Display
ED(int) error
// Erase in Line
EL(int) error
// Insert Line
IL(int) error
// Delete Line
DL(int) error
// Insert Character
ICH(int) error
// Delete Character
DCH(int) error
// Set Graphics Rendition
SGR([]int) error
// Pan Down
SU(int) error
// Pan Up
SD(int) error
// Device Attributes
DA([]string) error
// Set Top and Bottom Margins
DECSTBM(int, int) error
// Index
IND() error
// Reverse Index
RI() error
// Flush updates from previous commands
Flush() error
}
```
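The relative cursor-movement events (`CUU`, `CUD`, `CUF`, `CUB`) each take a count and shift the cursor, clamping at the screen edge. A minimal, self-contained sketch of that behaviour follows; the `cursorTracker` type is hypothetical and implements only these four methods, whereas a real `AnsiEventHandler` must implement the full interface above.

```go
package main

import "fmt"

// cursorTracker is a hypothetical handler that only tracks cursor
// position; it is not part of the ansiterm package.
type cursorTracker struct {
	row, col int
}

// CUU moves the cursor up by n rows, clamping at row 0.
func (t *cursorTracker) CUU(n int) error {
	t.row -= n
	if t.row < 0 {
		t.row = 0
	}
	return nil
}

// CUD moves the cursor down by n rows.
func (t *cursorTracker) CUD(n int) error { t.row += n; return nil }

// CUF moves the cursor forward (right) by n columns.
func (t *cursorTracker) CUF(n int) error { t.col += n; return nil }

// CUB moves the cursor backward (left) by n columns, clamping at 0.
func (t *cursorTracker) CUB(n int) error {
	t.col -= n
	if t.col < 0 {
		t.col = 0
	}
	return nil
}

func main() {
	t := &cursorTracker{}
	t.CUD(5) // down 5 rows
	t.CUF(3) // right 3 columns
	t.CUU(2) // up 2 rows
	t.CUB(9) // left 9 columns, clamps at 0
	fmt.Printf("row=%d col=%d\n", t.row, t.col)
}
```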
|
FilmFair was a British production company and animation studio that produced children's television series, animated cartoons, educational films, and television advertisements. The company made numerous stop motion films using puppets, clay animation, and cutout animation.
History
Foundation
FilmFair was founded in 1959 by American animator Gus Jekel in Los Angeles, California. After working with Walt Disney Productions and other Hollywood animation studios in the 1930s, Jekel incorporated FilmFair because he wanted the freedom to create live action work as well. The studio was in Animation Alley, a stretch of Cahuenga Boulevard that runs through Studio City in northern Los Angeles.
Jekel's company produced television advertisements—some animated, others live action—and was extremely successful; even Disney was a client.
In the late 1960s, Jekel asked an English colleague, Graham Clutterbuck, to start a European office for FilmFair. Clutterbuck had been producing and coordinating television ads for European advertising agencies and had just lost his job as director general of Les Cinéastes Associés in Paris. Although he was not well-acquainted with animation, Clutterbuck accepted the job offer. Clutterbuck established FilmFair's European office in Paris. It was there that he met Serge Danot, who pitched his ideas for a children's series, but Clutterbuck turned him down. Soon after, Danot signed a contract with the BBC to produce the series The Magic Roundabout. He invited Clutterbuck to watch them film. While there, Clutterbuck met the series' co-creator, Ivor Wood. Later, the two men agreed that Wood would make animated films for FilmFair. The success of The Magic Roundabout paved the way for more stop-motion animation at the BBC. Soon, Wood came up with the idea for The Herbs, which premiered on BBC1 in 1968.
FilmFair London
By this time, Beatlemania had made England a cultural hotspot. Clutterbuck found it too difficult to attract English talent to France, so he moved the office to London. There, Barry Leith joined the company as director of animation. Wood and Leith collaborated on The Wombles, but Wood also had a few ideas for animating Michael Bond's stories about Paddington Bear. Bond was enthusiastic about Wood's artistic vision and began scripting the first series. BBC1 premiered Paddington in 1976 to great acclaim. FilmFair produced new episodes of the programme for three years, and it expanded into a considerable media franchise.
FilmFair continued to produce successful stop motion programmes through the mid-1970s. The company's first classically animated series, Simon in the Land of Chalk Drawings, premiered in 1976. It was adapted from a series of children's books written and illustrated by Edward McLachlan. The company's first series not directed by Wood was The Perishers, a classically animated series directed by Dick Horn.
As FilmFair London continued to produce animated television series for the BBC and ITV, they eventually reached an international audience through broadcast syndication and home video distribution.
Acquisitions
In the early 1980s, Central Independent Television bought a controlling share of the European branch of FilmFair. Graham Clutterbuck died of cancer on 30 April 1988; FilmFair dedicated Bangers and Mash to his memory.
In 1991, Central sold FilmFair to Storm Group (also known as the Caspian Group), one of FilmFair's video distributors. Altschul Group Corporation (AGC) bought FilmFair's American branch in 1992, as part of a campaign to acquire more than a dozen film companies. Discovery Education, a subsidiary of Discovery Communications, bought AGC's film catalogue in 2003. As of 2022, Discovery Education is owned by Clearlake Capital, with Francisco Partners and Warner Bros. Discovery (Discovery, Inc.'s successor and the parent company of Warner Bros.) holding minority stakes.
In 1996, the Caspian Group sold FilmFair London's catalogue and production amenities to Canada-based company CINAR Films, whose purchase included all associated distribution, publication, licensing, and merchandising rights. Cinar executives were implicated in financial scandals in 2000 and again in 2001. In 2004, the company rebranded as Cookie Jar Group, which in turn was acquired by DHX Media (now WildBrain) in 2012; the acquisition gave DHX the rights to the European FilmFair properties and made it the largest independent producer of children's programming, expanding its library from 2,550 to 8,550 half-hours.
Productions
Animated television series
Television specials
Pilots
See also
Ragdoll Productions
Cosgrove Hall Films
History of British animation
List of WildBrain programs
References
Further reading
External links
British animation studios
Defunct mass media companies of the United Kingdom
Children's television
Television production companies of the United Kingdom
Companies based in Los Angeles
Companies based in Paris
Entertainment companies established in 1968
Companies disestablished in 1996
WildBrain
1968 establishments in California
1968 establishments in England
1996 disestablishments in California
1996 disestablishments in England
|
The Nottingham Trophy (formerly known as the Aegon Trophy) was an annual tennis tournament played in Nottingham, England. The tournament was part of the ATP Challenger Tour and the International Tennis Federation (ITF Women's Circuit) as a $75,000 event. The tournament's key sponsor was Dutch insurance firm Aegon. The tournament was held at the end of May before the main tour's grass-court season starts.
In 2021, an ATP Challenger Tour and ITF Women's World Tennis Tour event was held in Nottingham under the name Nottingham Trophy. The event had been planned as the Ilkley Trophy, but was moved to Nottingham due to the COVID-19 pandemic.
Location
The tournament was held annually at the Nottingham Tennis Centre within the University Park area of Nottingham.
History
The city used to hold an ATP Tour event, the Nottingham Open; however, due to its failure to attract big names, the tournament was merged with the women's Eastbourne International event in 2009. The merger reflected the LTA's desire to attract an umbrella sponsor and a younger audience to Eastbourne. In December 2008, it was announced that Nottingham would take over from Surbiton in hosting the grass-court ATP Challenger and ITF event. The tournament started in 2009, replacing the Surbiton Trophy, following the renovation of the facilities at the Nottingham Tennis Centre. It moved back to Surbiton for the 2015 season, and a new WTA International competition commenced in Nottingham on 8 June 2015 instead.
Past finals
Men's singles
Women's singles
Men's doubles
Women's doubles
References
External links
ATP Challenger Tour
ITF Women's World Tennis Tour
Grass court tennis tournaments
Tennis tournaments in England
Sport in Nottingham
Recurring sporting events established in 2009
2009 establishments in England
Recurring sporting events disestablished in 2014
2014 disestablishments in England
|
```go
//
// path_to_url
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
package client
import (
api "github.com/projectcalico/calico/libcalico-go/lib/apis/v1"
"github.com/projectcalico/calico/libcalico-go/lib/apis/v1/unversioned"
"github.com/projectcalico/calico/libcalico-go/lib/backend/model"
"github.com/projectcalico/calico/libcalico-go/lib/converter"
"github.com/projectcalico/calico/libcalico-go/lib/scope"
)
// BGPPeerInterface has methods to work with BGPPeer resources.
type BGPPeerInterface interface {
List(api.BGPPeerMetadata) (*api.BGPPeerList, error)
Get(api.BGPPeerMetadata) (*api.BGPPeer, error)
Create(*api.BGPPeer) (*api.BGPPeer, error)
Update(*api.BGPPeer) (*api.BGPPeer, error)
Apply(*api.BGPPeer) (*api.BGPPeer, error)
Delete(api.BGPPeerMetadata) error
}
// bgpPeers implements BGPPeerInterface
type bgpPeers struct {
converter.BGPPeerConverter
c *Client
}
// newBGPPeers returns a new BGPPeerInterface bound to the supplied client.
func newBGPPeers(c *Client) BGPPeerInterface {
return &bgpPeers{c: c}
}
// Create creates a new BGP peer.
func (h *bgpPeers) Create(a *api.BGPPeer) (*api.BGPPeer, error) {
return a, h.c.create(*a, h)
}
// Update updates an existing BGP peer.
func (h *bgpPeers) Update(a *api.BGPPeer) (*api.BGPPeer, error) {
return a, h.c.update(*a, h)
}
// Apply updates a BGP peer if it exists, or creates a new BGP peer if it does not exist.
func (h *bgpPeers) Apply(a *api.BGPPeer) (*api.BGPPeer, error) {
return a, h.c.apply(*a, h)
}
// Delete deletes an existing BGP peer.
func (h *bgpPeers) Delete(metadata api.BGPPeerMetadata) error {
return h.c.delete(metadata, h)
}
// Get returns information about a particular BGP peer.
func (h *bgpPeers) Get(metadata api.BGPPeerMetadata) (*api.BGPPeer, error) {
if a, err := h.c.get(metadata, h); err != nil {
return nil, err
} else {
return a.(*api.BGPPeer), nil
}
}
// List takes a Metadata, and returns a BGPPeerList that contains the list of BGP peers
// that match the Metadata (wildcarding missing fields).
func (h *bgpPeers) List(metadata api.BGPPeerMetadata) (*api.BGPPeerList, error) {
l := api.NewBGPPeerList()
// Global and host peers are listed separately. Work out which we need
// to list.
listGlobal := metadata.Scope == scope.Global || (metadata.Scope == scope.Undefined && metadata.Node == "")
listNode := metadata.Scope == scope.Node || metadata.Scope == scope.Undefined
// Tweak the scope of the Metadata so that we are performing a list within
// a specific scope.
if listGlobal {
metadata.Scope = scope.Global
if err := h.c.list(metadata, h, l); err != nil {
return nil, err
}
}
if listNode {
metadata.Scope = scope.Node
if err := h.c.list(metadata, h, l); err != nil {
return nil, err
}
}
return l, nil
}
// convertMetadataToListInterface converts a BGPPeerMetadata to a BGPPeerListOptions.
// This is part of the conversionHelper interface.
func (h *bgpPeers) convertMetadataToListInterface(m unversioned.ResourceMetadata) (model.ListInterface, error) {
pm := m.(api.BGPPeerMetadata)
if pm.Scope == scope.Global {
return model.GlobalBGPPeerListOptions{
PeerIP: pm.PeerIP,
}, nil
} else {
return model.NodeBGPPeerListOptions{
PeerIP: pm.PeerIP,
Nodename: pm.Node,
}, nil
}
}
// convertMetadataToKey converts a BGPPeerMetadata to a HostBGPPeerKey/GlobalBGPPeerKey
// This is part of the conversionHelper interface.
func (h *bgpPeers) convertMetadataToKey(m unversioned.ResourceMetadata) (model.Key, error) {
return h.ConvertMetadataToKey(m)
}
// convertAPIToKVPair converts an API BGPPeer structure to a KVPair containing a
// backend BGPPeer and HostBGPPeerKey/GlobalBGPPeerKey.
// This is part of the conversionHelper interface.
func (h *bgpPeers) convertAPIToKVPair(a unversioned.Resource) (*model.KVPair, error) {
return h.ConvertAPIToKVPair(a)
}
// convertKVPairToAPI converts a KVPair containing a backend BGPPeer and HostBGPPeerKey/GlobalBGPPeerKey
// to an API BGPPeer structure.
// This is part of the conversionHelper interface.
func (h *bgpPeers) convertKVPairToAPI(d *model.KVPair) (unversioned.Resource, error) {
return h.ConvertKVPairToAPI(d)
}
```
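The subtle part of `List` is its scope logic: an `Undefined` scope acts as a wildcard over both global and node peers, but supplying a node name narrows it to node-scoped peers only. A self-contained sketch of just that decision follows; the `Scope` type and constants below are simplified stand-ins for the real `scope` package, not the actual libcalico-go API.

```go
package main

import "fmt"

// Scope is a hypothetical stand-in for the scope package's type.
type Scope string

const (
	Undefined Scope = ""
	Global    Scope = "global"
	Node      Scope = "node"
)

// listTargets mirrors the decision logic in bgpPeers.List: global peers
// are listed when the scope is explicitly Global, or when the scope is
// Undefined and no node name narrows the query; node peers are listed
// for Node or Undefined scope.
func listTargets(s Scope, node string) (listGlobal, listNode bool) {
	listGlobal = s == Global || (s == Undefined && node == "")
	listNode = s == Node || s == Undefined
	return
}

func main() {
	g, n := listTargets(Undefined, "")
	fmt.Println(g, n) // wildcard: list both global and node peers

	g, n = listTargets(Undefined, "node1")
	fmt.Println(g, n) // a node name implies node-scoped peers only
}
```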
|
Glandirana tientaiensis, also known as Tiantai frog and Tientai rough-skinned frog, is a species of frog in the family Ranidae. Its name refers to its type locality, Tiantai. It is endemic to eastern China and is only known from Zhejiang and south-eastern Anhui provinces.
Male G. tientaiensis measure and female in length. Their natural habitats are open, low-gradient large streams and small rivers at elevations of above sea level. They sometimes also occur in still-water pools close to streams. This uncommon species is threatened by habitat loss.
References
tientaiensis
Frogs of China
Endemic fauna of China
Amphibians described in 1933
Taxonomy articles created by Polbot
|
```javascript
CKEDITOR.plugins.setLang("iframe","id",{border:"Tampilkan Batas Bingkai",noUrl:"Please type the iframe URL",scrolling:"Aktifkan Scrollbar",title:"IFrame Properties",toolbar:"IFrame"});
```
|
```cpp
//
//
// path_to_url
//
#ifndef your_sha256_hashCOVERY_H
#define your_sha256_hashCOVERY_H
/// \file rmanDiscovery/rmanDiscovery.h
#include "pxr/pxr.h"
#include "pxr/usd/ndr/discoveryPlugin.h"
#include <functional>
PXR_NAMESPACE_OPEN_SCOPE
/// \class RmanDiscoveryPlugin
///
/// Discovers nodes supported by the HdPrman render delegate.
///
class RmanDiscoveryPlugin final : public NdrDiscoveryPlugin
{
public:
/// A filter for discovered nodes. If the function returns false
/// then the discovered node is discarded. Otherwise the function
/// can modify the discovery result.
using Filter = std::function<bool(NdrNodeDiscoveryResult&)>;
/// Constructor.
RmanDiscoveryPlugin();
/// DiscoverNodes() will pass each result to the given function for
/// modification. If the function returns false then the result is
/// discarded.
RmanDiscoveryPlugin(Filter filter);
/// Virtual destructor
~RmanDiscoveryPlugin();
    /// Discover all of the nodes that appear within the search paths
/// provided and match the extensions provided.
NdrNodeDiscoveryResultVec DiscoverNodes(const Context&) override;
/// Gets the paths that this plugin is searching for nodes in.
const NdrStringVec& GetSearchURIs() const override;
private:
/// The paths (abs) indicating where the plugin should search for nodes.
NdrStringVec _searchPaths;
/// The extensions (excluding leading '.') that signify a valid node file.
/// The extension will be used as the `type` member in the resulting
/// `NdrNodeDiscoveryResult` instance.
NdrStringVec _allowedExtensions;
/// Whether or not to follow symlinks while scanning directories for files.
bool _followSymlinks;
// The filter to run on the results.
Filter _filter;
};
void
RmanDiscoveryPlugin_SetDefaultSearchPaths(const NdrStringVec &paths);
void
RmanDiscoveryPlugin_SetDefaultFollowSymlinks(bool followSymlinks);
PXR_NAMESPACE_CLOSE_SCOPE
#endif // your_sha256_hashCOVERY_H
```
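The `Filter` callback above combines rejection and in-place modification in a single predicate: return `false` to discard a discovery result, or mutate it and return `true` to keep it. The pattern can be sketched generically; the Go analogue below is purely illustrative, with a hypothetical `result` type standing in for `NdrNodeDiscoveryResult`.

```go
package main

import "fmt"

// result is a hypothetical stand-in for NdrNodeDiscoveryResult.
type result struct {
	name string
	typ  string
}

// filter mirrors RmanDiscoveryPlugin's Filter: returning false discards
// the result; returning true keeps it, possibly after modification.
type filter func(*result) bool

// discover applies the filter to each candidate, keeping survivors.
func discover(candidates []result, f filter) []result {
	var kept []result
	for i := range candidates {
		if f(&candidates[i]) {
			kept = append(kept, candidates[i])
		}
	}
	return kept
}

func main() {
	nodes := []result{{"PxrSurface", "args"}, {"legacy", "osl"}}
	// Keep only "args" nodes, and prefix their names in place.
	kept := discover(nodes, func(r *result) bool {
		if r.typ != "args" {
			return false
		}
		r.name = "rman:" + r.name
		return true
	})
	fmt.Println(len(kept), kept[0].name)
}
```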
|
"Closure" is the eleventh episode of the seventh season of the science fiction television series The X-Files, and the 150th episode overall. It was directed by Kim Manners and written by series creator Chris Carter and Frank Spotnitz. The installment explores the series' overarching mythology and is the conclusion of a two-part episode revolving around the final revelation of what really happened to Fox Mulder's (David Duchovny) sister, Samantha. Originally aired by the Fox network on February 13, 2000, "Closure" received a Nielsen rating of 9.1 and was seen by 15.35 million viewers. The episode received mostly positive reviews from critics; many felt that the final reveal was emotional and powerful, although some were unhappy with the resolution.
The show centers on FBI special agents Fox Mulder (Duchovny) and Dana Scully (Gillian Anderson), who work on cases linked to the paranormal, called X-Files. Mulder is a believer in the paranormal, while the skeptical Scully has been assigned to debunk his work, but the two have developed a deep friendship. In this episode, after Mulder is forced to accept that his mother's death was by her own hand, he is led by a man whose son disappeared years earlier to another truth: that his sister, Samantha, was among the souls taken by "walk-ins", beings who save the souls of children doomed to live unhappy lives.
"Closure" was a story milestone for the series, finally revealing Samantha's fate; this story-arc had driven a large part of the series' earlier episodes. The episode was written as a continuation to the previous episode, "Sein und Zeit," but branched off into different territory. Although a majority of the episode was filmed on a soundstage, several scenes were shot on location, such as the scenes at the former Norton Air Force Base in San Bernardino, California. Several of the sequences, specifically those featuring the souls of dead children, required elaborate filming techniques. The episode has been analyzed due to its themes of belief and hope.
Plot
Background
For the first five seasons of the series, FBI federal agents Fox Mulder (David Duchovny) and Dana Scully (Gillian Anderson) sought to gain understanding about the disappearance of Mulder's sister, Samantha, who was abducted when Mulder was 12 years old. In the previous episode, "Sein und Zeit", Mulder and Scully tracked down a serial killer who targeted children. While investigating the case, Mulder began to get emotionally involved, due to the similarities with his sister's disappearance.
Events
Mulder and Scully aid the Sacramento Police in the investigation of a brutal murder committed by Truelove, the owner of the Santa Village. As the remains of more children are discovered, he admits killing twenty-four children, but denies murdering Amber Lynn LaPierre, who disappeared from her home in the previous episode. Mulder is approached by psychic Harold Piller, who tells Mulder that he has helped law enforcement across the world, and has proved in various cases that children had been taken by "walk-ins", beings composed of starlight. Piller believes that walk-ins save children who suffer terrible fates.
Scully becomes worried about Piller's influence over Mulder. The agents return to Washington, D.C., where Mulder keeps searching for evidence in the case. Meanwhile, Piller gets a vision of Mulder's mother, who recently died by suicide, leading Mulder to April Air Force Base. Scully finds evidence that Samantha's disappearance is linked to The Smoking Man (William B. Davis); when she returns to her apartment, she finds him waiting for her. He tells her that he had called off the search for Mulder's sister when she vanished because he knew she was dead.
When Mulder returns to April Air Force Base, he uncovers proof that Samantha lived with the Smoking Man along with his son, Jeffrey Spender, and that she was forced to undergo painful tests. Scully finds a 1979 police report of a girl matching Samantha's description, and learns that she was taken to a hospital emergency room. She and Mulder find the nurse who treated her, and the nurse describes how Samantha disappeared the same way as Amber, without a trace. Mulder later walks through the forest and receives a vision of Samantha along with the spirits of other children, including Amber Lynn. When he tells Scully and Piller of his vision, Piller reacts badly to the news that his son is dead, while Mulder accepts that his sister is dead and in a better place. When Scully comforts Mulder and asks if he is all right, he responds with a choked "I'm fine. I'm free."
Production
Writing
"Closure," written by series creator Chris Carter and executive producer Frank Spotnitz, brought an end to Mulder's quest for his sister, Samantha, who had been abducted when he was a child. While the idea to close the story arc received mixed reactions from various production and crew members, many realized that the time had come for the show to answer one of its biggest questions. Spotnitz explained that, "I think [series star, David Duchovny] grew tired of playing the man who is missing his sister. [...] I told him, 'This is going to be the last time you're going to have to play [that part].'" Paul Rabwin noted that, "It's been seven years. I don't think any of us are going to miss Samantha Mulder. That device and motivation were very strong in the early years of the show. But as the years have gone by, the speculation kind of melted away."
"Closure" continued where the previous episode, "Sein und Zeit", left off but branched off into different territory. Carter later explained that, "emotionally, it was heavy stuff for everybody, but necessarily so. These episodes involved two very personal cases, the search for a serial killer [in 'Sein und Zeit'] and the search for Mulder's sister [in 'Closure']." Marc Shapiro, in his book All Things: The Official Guide to The X-Files, Vol. 6 noted that, in addition to bringing an end to the Samantha story arc, the episode was "very much a [Smoking Man] episode" in that it explored his involvement in Samantha's abduction and revealed to the audience that he was seriously ill. The episode's tagline was changed from the usual "The Truth is Out There" to "Believe to Understand".
Filming
Manners argued that "Closure" was one of the first episodes in which the production staff was able to "shoot in Los Angeles with the sun out". Previous to this episode, the show's production staff was "struggling with the fact that we weren't in Vancouver anymore and that our show had suddenly become very bright and cheery". To amend this, Bill Roe, director of photography, used tree branches and c-stands to block out the sunlight. The first scene with the walk-ins rising up from their grave, shot at Griffith Park above the playground, was "tricky," according to director Kim Manners, as he felt uncomfortable telling the children to rise out of "graves", feeling it could psychologically hurt them; instead, he had the crew call the holes in the ground "forts." The scenes taking place at April Air Force Base were shot in San Bernardino, California, at a closed airfield, the former Norton Air Force Base. On the airbase was a large, derelict complex of over 400 buildings (many of them houses) that had been constructed and used by the United States military. According to Manners, the entire area was an "eerie ... ghost town", as many of the houses were still full of old, abandoned furniture. Originally, the producers wanted the name of the fictitious air force base to be "March Air Force Base". However, the presence of a real air reserve base with the same name located less than 10 miles away in Riverside, California, necessitated a change to "April Air Force Base". The scene at the restaurant was shot at the Carriage Inn on Sepulveda Boulevard.
During filming, David Duchovny decided to act out the reunion scene in a manner contrary to what the script called for. Manners later said, "In the script, it called for his sister to run up and hug him, and Mulder was to start crying. David didn't want to cry. I said, 'David, you're finally realizing your sister is, in fact, dead.' […] He said, 'Just watch what I do; trust me.' And, he held that little girl actress—there was a beatific smile on his face that was absolutely astounding." Manners was very happy with the change and included it in the final cut of the episode. To create the scene featuring the ghosts of the dead children interacting with the characters, various layers of film had to be overlaid onto each other. The scenes were laborious and took "many passes" to complete. After the shots had been secured, the film of the ghosts had to be made slightly transparent. These scenes were actually shot in daylight, and a specialized "day for night" photography (in which the subjects were illuminated with bright lights and the sky was completely avoided) was used to make the finished scene look as if it had been filmed at night. The scenes were shot at 48 frames a second, twice that of the show's normal filming speed. Rebecca Toolan traveled to Los Angeles from Vancouver specifically for this episode and "Sein Und Zeit". To create her ghostly apparition, the production staff filmed Toolan and superimposed the image over a shot of Duchovny. Manners played the part of the hypnotist in the video that Scully watches, with the director-turned-actor later noting, "I only act when you can't actually see my face". Manners was critical of Duchovny's wig used in this scene—which had been added to make the footage seem older. He sardonically noted that "this is [not] one of the episodes that Cheri Medcalf [the show's make-up director] won an Emmy for."
Composer Mark Snow described his score as possessing a "sense of biblical fervor and religiosity—an elegy—a feeling about it that was so poignant and touching to me." "My Weakness", a song by Moby from his 1999 album Play, is used in this episode, first when the FBI discover the mass grave and finally at the end when Mulder encounters his sister's spirit. Carter never told Snow about the decision to use someone else's music, although Snow has since said that his reaction to the use of the song was very positive and that the song was a "perfect" fit. Another Moby song, "The Sky is Broken" also from Play, would be featured in the later seventh-season episode "all things".
Themes
According to Amy M. Donaldson in her book We Want to Believe: Faith and Gospel in The X-Files, Mulder's opening monologue may be an example of "Mulder now being more receptive to the possibility of God's intervention". Throughout much of the series, Mulder has shown a disdain for religion. However, in "Closure", Donaldson points out that "Mulder's belief in God, as always, revolves around his beliefs about his sister's fate". As such, Mulder expresses hope that those who die in a cruel fashion "live on in some other way". Furthermore, she argues that because "Closure" opens with the tagline "Believe to understand", Mulder must "take the leap of faith" in order to find enlightenment, and ultimately the truth about his sister. The first half of the episode plays out according to the tagline; Mulder first believes in "his desire stated in the opening voiceover", and then finds closure.
Donaldson also parallels elements in the episode to the plots of other episodes such as the fourth season entry "Paper Hearts", wherein it is suggested that a serial killer murdered Samantha. In "Paper Hearts", a father of a victim notes that the uncertainty of his daughter's murder allowed those who were involved to "consider the possibilities, both for the best and for the worst". However, once it is revealed that his daughter was murdered, all hope was removed. Conversely, Mulder holds onto the possibility that Samantha is alive through much of the series, but when he realizes that she is indeed dead in "Closure", hope is removed but in its place is found peace. To parallel Mulder's acceptance, Harold Piller refuses to believe his son is dead; as such, he "cling[s] to the possibility [because] uncertainty allows him hope."
Reception
Ratings
"Closure" first aired in the United States on February 13, 2000. The episode earned a Nielsen household rating of 9.1, with a 13 share. Nielsen ratings are audience measurement systems that determine the audience size and composition of television programming in the U.S. This means that roughly 9.1 percent of all television-equipped households, and 13 percent of households watching television, were watching the episode. It was viewed by 15.35 million viewers in the United States. On May 28, 2000, the episode debuted on Sky 1 in the United Kingdom and gathered 0.68 million viewers, making it the eighth most watched program shown on Sky 1 that week, in front of Angel and The Simpsons. The episode was later included on The X-Files Mythology, Volume 3 – Colonization, a DVD collection that contains episodes involved with the alien Colonist's plans to take over the earth.
Initial reviews
Initial reviews were mixed, with some critics applauding the story's conclusion, and others deriding it. Tom Kessenich, in his book Examinations: An Unauthorized Look at Seasons 6–9 of the X-Files, opined that the episode worked best "if some of the previous Samantha-related clues were forgotten", such as when the Alien Bounty Hunter told Mulder that she was still alive in "End Game". Despite this, he wrote that "it was only right that Samantha be dead since Mulder's life had always been defined by what he has lost, not what he has found". He surmised that the episode was not "perfect", but that its "plusses greatly outweighed any missteps along the way". He was also complimentary towards "the ethereal quality of the final few moments", writing that they "lifted this episode up and made it one of the season's most memorable". Kenneth Silber from Space.com was pleased with the episode, and wrote, "'Closure' is a satisfying episode, one that puts to bed the now-tiresome search for Mulder's sister Samantha." Jeremy Conrad from IGN referred to the episode as "excellent" and noted that a large portion of The X-Files mythology ended with the resolution of Samantha's abduction, saying, "['Closure' is] a final, and concrete, answer to the single thing that was driving Mulder for the entire run of the series. In some ways, when he got that answer a major part of The X-Files story ended."
Not all reviews were positive. Paula Vitaris from CFQ gave the episode a negative review and awarded it one-and-a-half stars out of four. She wrote, "Instead of a grand, breath-taking, heart-breaking finale that should be the climax of Mulder's search for Samantha, the story expires limply with some nonsense about Samantha being of the starlight children." Bobby Bryant and Tracy Burlison of The State named the episode the "Worst Conspiracy" episode. The two noted that because "a tenet of The X-Files was that Mulder's sister, Samantha, had been (a) kidnapped by aliens or (b) kidnapped by government conspirators", the fact that she had actually been turned into a spirit "insanely offers a supernatural explanation to a science-fiction mystery".
Later reviews
Later reviews have seen "Closure" in a much more positive light, with many critics praising its ending. Zack Handlen of The A.V. Club awarded the episode an "A−". He argued that the episode worked due to two scenes: the sequence in which Mulder reads aloud from Samantha's diary, and the final shot of Mulder being reunited with his sister. He wrote that the "stark simplicity" of the former made it emotionally powerful, and that the latter was "a bit sappy, a bit surreal, a bit lovely" but nonetheless "a beautiful moment". Meghan Deans of Tor.com felt that the story was "silly", but that, when paired with the idea that Samantha was truly an innocent victim, successfully becomes a "comfort". She called it a move that "the show must give Mulder, and us, in order to shut down this storyline for good." Robert Shearman and Lars Pearson, in their book Wanting to Believe: A Critical Guide to The X-Files, Millennium & The Lone Gunmen, rated the episode four stars out of five, and called it "brave". The two noted that while some of the sentimentality is pushed too far—such as when Mulder finds his sister's diary speaking to him, or when Mulder talks about all lost souls being stars—the "critical moment" featuring Mulder reuniting with his sister's spirit is "extraordinarily moving".
Footnotes
Bibliography
External links
2000 American television episodes
Television episodes directed by Kim Manners
Television episodes written by Chris Carter (screenwriter)
Television episodes written by Frank Spotnitz
Television episodes set in California
Television episodes set in Connecticut
The X-Files (season 7) episodes
Television episodes about ghosts
|
```css
/* athiti-200normal - latin */
@font-face {
font-family: 'Athiti';
font-style: normal;
font-display: swap;
font-weight: 200;
src:
local('Athiti Extra Light '),
local('Athiti-Extra Light'),
url('./files/athiti-latin-200.woff2') format('woff2'), /* Super Modern Browsers */
url('./files/athiti-latin-200.woff') format('woff'); /* Modern Browsers */
}
/* athiti-300normal - latin */
@font-face {
font-family: 'Athiti';
font-style: normal;
font-display: swap;
font-weight: 300;
src:
    local('Athiti Light'),
    local('Athiti-Light'),
url('./files/athiti-latin-300.woff2') format('woff2'), /* Super Modern Browsers */
url('./files/athiti-latin-300.woff') format('woff'); /* Modern Browsers */
}
/* athiti-400normal - latin */
@font-face {
font-family: 'Athiti';
font-style: normal;
font-display: swap;
font-weight: 400;
src:
    local('Athiti Regular'),
    local('Athiti-Regular'),
url('./files/athiti-latin-400.woff2') format('woff2'), /* Super Modern Browsers */
url('./files/athiti-latin-400.woff') format('woff'); /* Modern Browsers */
}
/* athiti-500normal - latin */
@font-face {
font-family: 'Athiti';
font-style: normal;
font-display: swap;
font-weight: 500;
src:
    local('Athiti Medium'),
    local('Athiti-Medium'),
url('./files/athiti-latin-500.woff2') format('woff2'), /* Super Modern Browsers */
url('./files/athiti-latin-500.woff') format('woff'); /* Modern Browsers */
}
/* athiti-600normal - latin */
@font-face {
font-family: 'Athiti';
font-style: normal;
font-display: swap;
font-weight: 600;
src:
    local('Athiti SemiBold'),
    local('Athiti-SemiBold'),
url('./files/athiti-latin-600.woff2') format('woff2'), /* Super Modern Browsers */
url('./files/athiti-latin-600.woff') format('woff'); /* Modern Browsers */
}
/* athiti-700normal - latin */
@font-face {
font-family: 'Athiti';
font-style: normal;
font-display: swap;
font-weight: 700;
src:
    local('Athiti Bold'),
    local('Athiti-Bold'),
url('./files/athiti-latin-700.woff2') format('woff2'), /* Super Modern Browsers */
url('./files/athiti-latin-700.woff') format('woff'); /* Modern Browsers */
}
```
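Each `@font-face` rule above registers one weight under the single family name `Athiti`, so a consuming stylesheet only needs to name the family and a weight. A minimal usage sketch — the `@import` path is an assumption about how this file is published, not part of the file itself:

```css
/* Pull in the declarations above; the filename is hypothetical. */
@import './athiti-latin.css';

body {
  font-family: 'Athiti', sans-serif;
  font-weight: 400; /* resolves to the athiti-400normal face */
}

h1,
h2,
strong {
  font-weight: 600; /* the browser picks the declared SemiBold face */
}
```

Because every rule uses `font-display: swap`, text renders immediately in a fallback font and swaps to Athiti once the matching weight finishes loading.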
|
```kotlin
@file:Suppress("UNCHECKED_CAST", "CAST_NEVER_SUCCEEDS")
package tornadofx
import javafx.beans.binding.Bindings
import javafx.beans.binding.BooleanBinding
import javafx.beans.binding.BooleanExpression
import javafx.beans.property.Property
import javafx.beans.property.SimpleBooleanProperty
import javafx.beans.property.SimpleObjectProperty
import javafx.beans.property.StringProperty
import javafx.beans.value.ChangeListener
import javafx.beans.value.ObservableValue
import javafx.beans.value.WritableValue
import javafx.collections.FXCollections
import javafx.collections.ObservableList
import javafx.scene.control.*
import javafx.scene.paint.Color
import javafx.scene.text.Text
import javafx.util.StringConverter
import javafx.util.converter.*
import java.math.BigDecimal
import java.math.BigInteger
import java.text.Format
import java.time.LocalDate
import java.time.LocalDateTime
import java.time.LocalTime
import java.util.*
import java.util.concurrent.Callable
private fun <T> Property<T>.internalBind(property: ObservableValue<T>, readonly: Boolean) {
ViewModel.register(this, property)
if (readonly || (property !is Property<*>)) bind(property) else bindBidirectional(property as Property<T>)
}
fun <T> ComboBoxBase<T>.bind(property: ObservableValue<T>, readonly: Boolean = false) =
valueProperty().internalBind(property, readonly)
fun ColorPicker.bind(property: ObservableValue<Color>, readonly: Boolean = false) =
valueProperty().internalBind(property, readonly)
fun DatePicker.bind(property: ObservableValue<LocalDate>, readonly: Boolean = false) =
valueProperty().internalBind(property, readonly)
fun ProgressIndicator.bind(property: ObservableValue<Number>, readonly: Boolean = false) =
progressProperty().internalBind(property, readonly)
fun <T> ChoiceBox<T>.bind(property: ObservableValue<T>, readonly: Boolean = false) =
valueProperty().internalBind(property, readonly)
fun CheckBox.bind(property: ObservableValue<Boolean>, readonly: Boolean = false) =
selectedProperty().internalBind(property, readonly)
fun CheckMenuItem.bind(property: ObservableValue<Boolean>, readonly: Boolean = false) =
selectedProperty().internalBind(property, readonly)
fun Slider.bind(property: ObservableValue<Number>, readonly: Boolean = false) =
valueProperty().internalBind(property, readonly)
fun <T> Spinner<T>.bind(property: ObservableValue<T>, readonly: Boolean = false) =
valueFactory.valueProperty().internalBind(property, readonly)
inline fun <reified S : T, reified T : Any> Labeled.bind(
property: ObservableValue<S>,
readonly: Boolean = false,
converter: StringConverter<T>? = null,
format: Format? = null
) {
bindStringProperty(textProperty(), converter, format, property, readonly)
}
inline fun <reified S : T, reified T : Any> TitledPane.bind(
property: ObservableValue<S>,
readonly: Boolean = false,
converter: StringConverter<T>? = null,
format: Format? = null
) = bindStringProperty(textProperty(), converter, format, property, readonly)
inline fun <reified S : T, reified T : Any> Text.bind(
property: ObservableValue<S>,
readonly: Boolean = false,
converter: StringConverter<T>? = null,
format: Format? = null
) = bindStringProperty(textProperty(), converter, format, property, readonly)
inline fun <reified S : T, reified T : Any> TextInputControl.bind(
property: ObservableValue<S>,
readonly: Boolean = false,
converter: StringConverter<T>? = null,
format: Format? = null
) = bindStringProperty(textProperty(), converter, format, property, readonly)
inline fun <reified S : T, reified T : Any> bindStringProperty(
stringProperty: StringProperty,
converter: StringConverter<T>?,
format: Format?,
property: ObservableValue<S>,
readonly: Boolean
) {
if (stringProperty.isBound) stringProperty.unbind()
val effectiveReadonly = readonly || property !is Property<S> || S::class != T::class
ViewModel.register(stringProperty, property)
if (S::class == String::class) when {
effectiveReadonly -> stringProperty.bind(property as ObservableValue<String>)
else -> stringProperty.bindBidirectional(property as Property<String>)
} else {
val effectiveConverter = if (format != null) null else converter ?: getDefaultConverter<S>()
if (effectiveReadonly) {
val toStringConverter = Callable {
when {
converter != null -> converter.toString(property.value)
format != null -> format.format(property.value)
else -> property.value?.toString()
}
}
val stringBinding = Bindings.createStringBinding(toStringConverter, property)
stringProperty.bind(stringBinding)
} else when {
effectiveConverter != null -> stringProperty.bindBidirectional(property as Property<S>, effectiveConverter as StringConverter<S>)
format != null -> stringProperty.bindBidirectional(property as Property<S>, format)
else -> throw IllegalArgumentException("Cannot convert from ${S::class} to String without an explicit converter or format")
}
}
}
inline fun <reified T : Any> getDefaultConverter() = when (T::class.javaPrimitiveType ?: T::class) {
Int::class.javaPrimitiveType -> IntegerStringConverter()
Long::class.javaPrimitiveType -> LongStringConverter()
Double::class.javaPrimitiveType -> DoubleStringConverter()
Float::class.javaPrimitiveType -> FloatStringConverter()
Date::class -> DateStringConverter()
BigDecimal::class -> BigDecimalStringConverter()
BigInteger::class -> BigIntegerStringConverter()
Number::class -> NumberStringConverter()
LocalDate::class -> LocalDateStringConverter()
LocalTime::class -> LocalTimeStringConverter()
LocalDateTime::class -> LocalDateTimeStringConverter()
Boolean::class.javaPrimitiveType -> BooleanStringConverter()
else -> null
} as StringConverter<T>?
fun ObservableValue<Boolean>.toBinding() = object : BooleanBinding() {
init {
super.bind(this@toBinding)
}
override fun dispose() {
super.unbind(this@toBinding)
}
override fun computeValue() = this@toBinding.value
override fun getDependencies(): ObservableList<*> = FXCollections.singletonObservableList(this@toBinding)
}
fun <T, N> ObservableValue<T>.select(nested: (T) -> ObservableValue<N>): Property<N> {
fun extractNested(): ObservableValue<N>? = value?.let(nested)
var currentNested: ObservableValue<N>? = extractNested()
return object : SimpleObjectProperty<N>() {
val changeListener = ChangeListener<Any?> { _, _, _ ->
invalidated()
fireValueChangedEvent()
}
init {
currentNested?.addListener(changeListener)
this@select.addListener(changeListener)
}
override fun invalidated() {
currentNested?.removeListener(changeListener)
currentNested = extractNested()
currentNested?.addListener(changeListener)
}
override fun get() = currentNested?.value
override fun set(v: N?) {
(currentNested as? WritableValue<N>)?.value = v
super.set(v)
}
}
}
fun <T> ObservableValue<T>.selectBoolean(nested: (T) -> BooleanExpression): BooleanExpression {
fun extractNested() = nested(value)
val dis = this
var currentNested = extractNested()
return object : SimpleBooleanProperty() {
val changeListener = ChangeListener<Boolean> { _, _, _ ->
currentNested = extractNested()
fireValueChangedEvent()
}
init {
dis.onChange {
fireValueChangedEvent()
invalidated()
}
}
override fun invalidated() {
currentNested.removeListener(changeListener)
currentNested = extractNested()
currentNested.addListener(changeListener)
}
override fun getValue() = currentNested.value
override fun setValue(v: Boolean?) {
(currentNested as? WritableValue<*>)?.value = v
super.setValue(v)
}
}
}
```
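A minimal sketch of how these bindings might be used inside a TornadoFX view. `Person` and `PersonView` are hypothetical types invented for illustration; only `bind`, `select`, and the builder functions come from the framework:

```kotlin
import javafx.beans.property.SimpleObjectProperty
import javafx.beans.property.SimpleStringProperty
import tornadofx.*

// Hypothetical model: each field is exposed as a JavaFX property.
class Person {
    val nameProperty = SimpleStringProperty("Alice")
    val cityProperty = SimpleStringProperty("Oslo")
}

class PersonView : View() {
    private val selectedPerson = SimpleObjectProperty(Person())

    override val root = form {
        // Bidirectional: edits in the text field write back into nameProperty,
        // and `select` re-resolves the nested property if selectedPerson changes.
        textfield { bind(selectedPerson.select { it.nameProperty }) }
        // Read-only: the label tracks cityProperty but never writes to it.
        label { bind(selectedPerson.select { it.cityProperty }, readonly = true) }
    }
}
```

The `select` call is what makes the binding survive a swap of the outer `selectedPerson` value: the returned property re-extracts the nested observable and rewires its listener whenever the outer value is invalidated.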
|
Huillolluni (possibly from Aymara and Quechua willullu poor / orphan, Aymara -ni a suffix to indicate ownership, "the one with an orphan") is a mountain in the Vilcanota mountain range in the Andes of Peru, about high. It is situated in the Cusco Region, Quispicanchi Province, Marcapata District, and in the Paucartambo Province, Kosñipata District. Huillolluni lies north-east of the mountain Jolljepunco and north-west of the mountain Ancahuachana.
References
Mountains of Peru
Mountains of Cusco Region
|
```php
<?php
/*
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not
 * use this file except in compliance with the License. You may obtain a copy of
 * the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations under
 * the License.
 */
namespace Google\Service\PolicySimulator;
class GoogleCloudPolicysimulatorV1alphaOrgPolicyViolationsPreview extends \Google\Collection
{
protected $collection_key = 'customConstraints';
/**
* @var string
*/
public $createTime;
/**
* @var string[]
*/
public $customConstraints;
/**
* @var string
*/
public $name;
protected $overlayType = GoogleCloudPolicysimulatorV1alphaOrgPolicyOverlay::class;
protected $overlayDataType = '';
  protected $resourceCountsType = GoogleCloudPolicysimulatorV1alphaOrgPolicyViolationsPreviewResourceCounts::class;
  protected $resourceCountsDataType = '';
/**
* @var string
*/
public $state;
/**
* @var int
*/
public $violationsCount;
/**
* @param string
*/
public function setCreateTime($createTime)
{
$this->createTime = $createTime;
}
/**
* @return string
*/
public function getCreateTime()
{
return $this->createTime;
}
/**
* @param string[]
*/
public function setCustomConstraints($customConstraints)
{
$this->customConstraints = $customConstraints;
}
/**
* @return string[]
*/
public function getCustomConstraints()
{
return $this->customConstraints;
}
/**
* @param string
*/
public function setName($name)
{
$this->name = $name;
}
/**
* @return string
*/
public function getName()
{
return $this->name;
}
/**
* @param GoogleCloudPolicysimulatorV1alphaOrgPolicyOverlay
*/
public function setOverlay(GoogleCloudPolicysimulatorV1alphaOrgPolicyOverlay $overlay)
{
$this->overlay = $overlay;
}
/**
* @return GoogleCloudPolicysimulatorV1alphaOrgPolicyOverlay
*/
public function getOverlay()
{
return $this->overlay;
}
  /**
   * @param GoogleCloudPolicysimulatorV1alphaOrgPolicyViolationsPreviewResourceCounts
   */
  public function setResourceCounts(GoogleCloudPolicysimulatorV1alphaOrgPolicyViolationsPreviewResourceCounts $resourceCounts)
{
$this->resourceCounts = $resourceCounts;
}
  /**
   * @return GoogleCloudPolicysimulatorV1alphaOrgPolicyViolationsPreviewResourceCounts
   */
public function getResourceCounts()
{
return $this->resourceCounts;
}
/**
* @param string
*/
public function setState($state)
{
$this->state = $state;
}
/**
* @return string
*/
public function getState()
{
return $this->state;
}
/**
* @param int
*/
public function setViolationsCount($violationsCount)
{
$this->violationsCount = $violationsCount;
}
/**
* @return int
*/
public function getViolationsCount()
{
return $this->violationsCount;
}
}
// Adding a class alias for backwards compatibility with the previous class name.
class_alias(GoogleCloudPolicysimulatorV1alphaOrgPolicyViolationsPreview::class, 'Google_Service_PolicySimulator_GoogleCloudPolicysimulatorV1alphaOrgPolicyViolationsPreview');
```
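A minimal usage sketch for this generated model class — not taken from the library docs; the state string and constraint id are illustrative values, and the enum value is an assumption about the API:

```php
<?php
// Assumes the google/apiclient package is installed via Composer.
require_once 'vendor/autoload.php';

use Google\Service\PolicySimulator\GoogleCloudPolicysimulatorV1alphaOrgPolicyViolationsPreview;

$preview = new GoogleCloudPolicysimulatorV1alphaOrgPolicyViolationsPreview();
// 'PREVIEW_SUCCEEDED' is an assumed enum value for the state field.
$preview->setState('PREVIEW_SUCCEEDED');
$preview->setViolationsCount(2);
// A made-up custom constraint id, for illustration only.
$preview->setCustomConstraints(['custom.disableExampleFeature']);

echo $preview->getState();           // prints "PREVIEW_SUCCEEDED"
echo $preview->getViolationsCount(); // prints 2
```

In normal use these objects are returned by the service's API calls rather than constructed by hand; the setters mainly matter when building request bodies.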
|
Sephardic Temple Tifereth Israel, also called The Sephardic Temple, is a large, urban Sephardi Jewish synagogue located in Westwood, Los Angeles, California at the corner of Wilshire Boulevard and Warner Avenue. Established on February 1, 1920 as the "Sephardic Community of Los Angeles," it exists today as the merger of three major Sephardic organizations with approximately 600 member families.
Overview and History
Sephardic Temple Tifereth Israel is a modern synagogue in the Sephardic tradition. It is the largest Sephardic synagogue in California, and one of the largest in the country. The synagogue offers a robust and diverse array of spiritual, cultural and social activities, including daily morning minyan, Shabbat services with hundreds of attendees, Torah classes, a vibrant Men's Club, Sisterhood, Young Professional events, and more. Sephardic Temple offers traditional non-egalitarian services, using Orthodox liturgy and prayer books. Sephardic Temple maintains its own unique blend of authentic traditional Judaism with openness to the opportunities and challenges of modern life. As such, it is not formally affiliated with any specific Jewish stream or movement. In the words of its legendary cantor, Hazan Haim Mizrahi: "We are simply traditional, and Sephardic". This traditionalism appeals to hundreds of families who feel disengaged from liberal Judaism, yet do not fully subscribe to Orthodoxy.
The history of the Sephardic Temple reflects the history of the Sephardic community in Los Angeles. The first Sephardi Jews to arrive in Los Angeles came around 1853; however, significant numbers of Sephardim came in the early 20th century from places such as Egypt, Rhodes, Salonica, Turkey, and other regions of the former Ottoman Empire and elsewhere in the Middle East.
These early Sephardi immigrants to Los Angeles founded the Avat Shalom Society in 1912 to unify Jewish immigrants coming from the Ottoman Empire, but the society soon dissolved. In 1917, the Peace and Progress Society was formed mostly by immigrants from the island of Rhodes. In 1935 the name was changed to the Sephardic Hebrew Center and later to Sephardic Beth Shalom. Others organized the Haim VaHessed Society ("Sephardic Brotherhood") of Los Angeles.
On February 1, 1920, 39 Turkish elders of the Sephardi community formed the Sephardic Community of Los Angeles ("La Communidad"). Rabbi Abraham Caraco served as the first rabbi of the community. Minutes from the first meeting note that monthly dues would be $1 for each member. In 1924, the Sephardic Community of Los Angeles purchased property for a synagogue at 52nd Street and Second Avenue in southwest Los Angeles. The synagogue was never built at this location due to funds nearly being exhausted. Rabbi Caraco died in 1925, with Rabbi David Tovi serving from 1925 through World War II. By 1928, membership increased to the point where the property was sold, and a new larger site was purchased for $12,000 on the corner of Santa Barbara Avenue (now Martin Luther King Jr. Blvd) and LaSalle Avenue in the West Adams neighborhood of South Los Angeles. Groundbreaking ceremonies took place on September 1, 1931 with 125 members, and the new Santa Barbara Avenue Temple was dedicated on February 21, 1932 with invited guests including the mayor of Los Angeles.
Following World War II, the congregation started a choir and increased the frequency of Friday night services. Rabbi Tovi died and was replaced by Rabbi Axelrod, who in turn was replaced by Rabbi Friedman of Florence, Italy. Rabbi Friedman eventually resigned due to intra-congregational squabbling.
In 1959, the Sephardic Community of Los Angeles merged with the Haim VaHessed Society ("Sephardic Brotherhood"), forming the Sephardic Community and Brotherhood of Los Angeles. Rabbi Jacob M. Ott was invited to be the spiritual leader of the community on a trial basis, and ended up serving as Senior Rabbi until his retirement in 1992. By 1970, the congregation outgrew the Santa Barbara Avenue Temple and broke ground at the current Wilshire Boulevard and Warner Avenue site in Westwood for a new synagogue. The building would not be finally dedicated until September 15, 1981, when California Governor Jerry Brown addressed a crowd of over 1000 guests during a dedication ceremony.
On October 1, 1987, King Juan Carlos I and Queen Sofía of Spain visited the temple, reported to be the first visit to a synagogue by a Spanish king since the expulsion of the Jews from Spain in 1492. In 1993, the newly named Sephardic Temple Tifereth Israel merged with Sephardic Beth Shalom, uniting all three major original Sephardic congregations in Los Angeles. In 1994 the temple installed a group of "Sephardic Heritage" stained glass windows by Israeli artist Raphael Abecassis.
Rabbi Ott's successor, Rabbi Daniel Bouskila, was raised and became bar mitzvah at the temple, and became the senior rabbi two years after being hired out of Yeshiva University's rabbinical school. Under his leadership the temple's Hebrew school grew considerably, as did attendance at services. After 17 years, Bouskila decided to step down in Feb. 2010 at the age of 45 to become director of the Sephardic Educational Center.
His successor as Senior Rabbi was Rabbi Jay Shasho Levy, J.D, from Feb. 2010 to May 2013. Rabbi Levy is from a Sephardic family with roots in Spain, Aleppo and Gaziantep. He is also an award winning Cantor, writer, composer, and recording artist who has lectured and performed around the world.
In 2013, Sephardic Temple Tifereth Israel appointed Dr. Tal Sessler as Senior Rabbi. Rabbi Dr. Sessler formerly served as the Rabbi of Freehold Jewish Center in New Jersey, and earlier as the Rabbi of the Jewish Center of Forest Hills West in New York. Rabbi Sessler left the Temple in May 2021. Rabbi Refael Cohen is the current Senior Rabbi. The Cantor is Haim Mizrahi and the Gabbai is Edward Mizrahi.
References
External links
Official Sephardic Temple Tifereth Israel website
1920 establishments in California
1981 establishments in California
Conservative Jewish day schools
Greek-American culture in California
Greek-Jewish culture in the United States
Egyptian-American culture in California
Egyptian-Jewish culture in the United States
Jewish day schools in California
Jewish Rhodian history
Jewish organizations established in 1920
Schools in Los Angeles
Sephardi Jewish culture in California
Spanish-Jewish culture in the United States
Synagogues completed in 1981
Synagogues in Los Angeles
Turkish-Jewish culture in the United States
Unaffiliated synagogues in California
Westwood, Los Angeles
Sephardi synagogues
|
Rodney Allen Leisle (born February 5, 1981) is a former American football defensive tackle. He was drafted by the New Orleans Saints in the fifth round of the 2004 NFL Draft. He attended Ridgeview High School in Bakersfield, California and played college football at UCLA.
Leisle has also been a member of the New York Giants, Saskatchewan Roughriders and Arizona Cardinals.
Career
Leisle played in 18 games for the Saints, appearing in only one game in his last season with the team before being placed on injured reserve with a knee injury. The Saints cut Leisle during their 2007 training camp.
Leisle resurfaced with the Giants during their 2008 training camp, but sustained a season-ending rib injury before the season started. He spent some time in Canadian football with the Saskatchewan Roughriders during their 2008 season, and competed for a roster spot in the Cardinals' training camp in 2009.
Leisle signed with the Saints again in 2009 before being released once more in 2010.
References
External links
Arizona Cardinals bio
New York Giants bio
UCLA Bruins bio
1981 births
Living people
Players of American football from Fresno, California
American football defensive tackles
American players of Canadian football
Canadian football defensive linemen
UCLA Bruins football players
New Orleans Saints players
New York Giants players
Saskatchewan Roughriders players
Arizona Cardinals players
|
The Estonian Sovereignty Declaration (), fully: Declaration on the Sovereignty of the Estonian SSR (), was issued on 16 November 1988 during the Singing Revolution in the Estonian SSR. The declaration asserted Estonia's sovereignty and the supremacy of Estonian laws over the laws of the Soviet Union. Estonia's newly elected parliament also laid claim to all natural resources (land, inland waters, forests, mineral deposits) and to the means of industrial production, agriculture, construction, state banks, transportation, municipal services, etc. within Estonia's borders.
Background
Estonia gained independence in 1918, in the aftermath of World War I. During World War II, on 16-17 June 1940, Estonia was invaded and occupied by the Soviet army, and its territory was subsequently annexed by the Stalinist Soviet Union in August 1940.
The majority of Western nations refused to recognize the incorporation of Estonia de jure by the Soviet Union and only recognized the government of the Estonian SSR de facto or not at all. Such countries recognized Estonian, Latvian and Lithuanian diplomats and consuls who still functioned in the name of their former governments. These diplomats persisted in this anomalous situation until the ultimate restoration of Baltic independence.
In the 1980s, the new policies of perestroika and glasnost were introduced and political repression in the Soviet Union came to an end. During the Soviet coup d'état attempt of 20 August 1991, Estonia restored full independence, almost three years after the Estonian Sovereignty Declaration was made. On 6 September 1991, the Soviet Union recognized the independence of Estonia, and the country became a member of the United Nations on 17 September 1991. After more than three years of negotiations, on 31 August 1994, the last remaining armed forces of Russia withdrew from Estonia.
The Declaration
See also
On the Restoration of Independence of the Republic of Latvia
Act of the Re-Establishment of the State of Lithuania
State continuity of the Baltic states
Dissolution of the Soviet Union
References
Dissolution of the Soviet Union
1988 in the Soviet Union
Singing Revolution
Sovereignty
1988 in politics
1988 in Estonia
Politics of Estonia
November 1988 events in Europe
1988 documents
|
Quintus Aurelius Pactumeius Fronto was a Roman senator active during the first century AD. He was suffect consul for the nundinium September-October 80 as the colleague of Lucius Aelius Lamia Plautius Aelianus.
Fronto is the earliest documented person from North Africa to accede to the Roman consulate, although his brother Quintus Aurelius Pactumeius Clemens, the date of whose consulship is not known but is around the same time, could be earlier; a stamp on an amphora found in Pompeii dates the ceramic to the consulate of "Marcellus and Pactumeius". The mystery lies in an inscription from Cirta set up by Pactumeia, daughter of one of these brothers, but the name of her father is damaged, and the traces could fit either man. Both Fronto and his brother Clemens are known solely from inscriptions.
Both brothers were born into the equestrian class, and thus were homines novi. Mireille Corbier, in her monograph on financial administrators of the Roman Empire, explains their gentilicia as the result of a testamentary adoption by a Quintus Aurelius. Both were adlected into the Senate as praetorians by Vespasian and Titus in 73–74. Fronto was appointed curator of the aerarium militare, presumably for three years, between 75 and 79. Corbier remarks on the remarkable speed of Fronto's advancement: he needed only six years to achieve consular rank after being admitted to the Senate. The lives of both brothers lack documentation after their consulates.
Corbier believes Pactumeia was the daughter of Pactumeius Fronto, while his brother Clemens is the grandfather of Publius Pactumeius Clemens, consul in 138.
References
1st-century Romans
Romans from Africa
Suffect consuls of Imperial Rome
Pactumeius
Pactumeii
Ancient Roman adoptees
|
```c++
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied.  See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
/*!
* \file src/relax/block_builder.cc
*/
#include <tvm/arith/analyzer.h>
#include <tvm/relax/analysis.h>
#include <tvm/relax/block_builder.h>
#include <tvm/relax/expr_functor.h>
#include <tvm/relax/op_attr_types.h>
#include <tvm/relax/struct_info.h>
#include <tvm/relax/struct_info_functor.h>
#include <tvm/relax/transform.h>
#include <tvm/relax/type.h>
#include <tvm/relay/op.h>
#include <tvm/runtime/registry.h>
#include <tvm/tir/function.h>
#include <memory>
#include <unordered_map>
#include <unordered_set>
#include <vector>
#include "../../node/ndarray_hash_equal.h"
// The block builder has three categories of logic: global context management,
// scope management, and normalization.
//
// These logics are somewhat interdependent with each other.
// To help manage this, we implement the block builder in two parts:
//
// - BlockBuilderImpl: implements ctx and scope management, with no normalization.
// - BlockBuilderImplWithNormalize: subclasses BlockBuilderImpl and implements normalization.
//
// The final block builder created will be backed by BlockBuilderImplWithNormalize.
namespace tvm {
namespace relax {
//---------------------------------------
// ctx and scope management.
//---------------------------------------
class BlockBuilderImpl : public BlockBuilderNode {
public:
explicit BlockBuilderImpl(IRModule context_mod) : context_mod_(std::move(context_mod)) {}
~BlockBuilderImpl() {
if (!block_stack_.empty()) {
LOG(WARNING) << "BlockBuilder destroyed with remaining blocks!";
}
}
//-------------------------------
// Global Context management
//-------------------------------
NameSupply name_supply() final { return name_supply_; }
IRModule GetContextIRModule() const final { return context_mod_; }
IRModule Finalize() final { return transform::NormalizeGlobalVar()(context_mod_); }
GlobalVar AddFunction(const BaseFunc& func, String func_name_hint) final {
LazyInitCtxFuncDedupMap();
auto it = ctx_func_dedup_map_->find(func);
if (it == ctx_func_dedup_map_->end()) {
context_mod_.CopyOnWrite();
String func_name = GetUniqueName(func_name_hint);
while (context_mod_->ContainGlobalVar(func_name)) {
func_name = GetUniqueName(func_name_hint);
}
GlobalVar gvar(func_name);
StructInfo finfo;
if (func->struct_info_.defined()) {
finfo = GetStructInfo(func);
} else if (auto* prim_func = func.as<tir::PrimFuncNode>()) {
// NOTE: use a slightly different struct info than checked type
// in PrimFunc so handle can turn into Tensor.
// TODO(relax-team): add fine-grained PrimFunc struct info signature generation.
finfo = FuncStructInfo::OpaqueFunc(StructInfoFromType(prim_func->ret_type));
} else {
finfo = StructInfoFromType(func->checked_type());
}
UpdateStructInfo(gvar, finfo);
context_mod_->Add(gvar, func);
(*ctx_func_dedup_map_)[func].insert(gvar);
return gvar;
} else {
ICHECK(it->second.size()) << "Values contained in de-duplication map must be non-empty sets, "
<< "but found an empty set for function of type "
<< func->GetTypeKey();
// To provide deterministic results, return the GlobalVar that
// comes first in lexicographic order.
return *std::min_element(
it->second.begin(), it->second.end(),
[](const GlobalVar& a, const GlobalVar& b) { return a->name_hint < b->name_hint; });
}
}
void UpdateFunction(const GlobalVar& gv, BaseFunc function) final {
context_mod_.CopyOnWrite();
// Remove function from the de-duplication map.
if (ctx_func_dedup_map_ != nullptr) {
auto it = context_mod_->functions.find(gv);
if (it != context_mod_->functions.end()) {
BaseFunc old_func = (*it).second;
auto ptr = ctx_func_dedup_map_->find(old_func);
ICHECK(ptr != ctx_func_dedup_map_->end())
<< "BlockBuilder::UpdateFunction is updating " << gv
<< ", which appears in the BlockBuilder's context_mod_, "
<< "but does not appear in the de-duplication map";
ICHECK(ptr->second.count(gv))
<< "BlockBuilder::UpdateFunction is updating " << gv
<< ", but the de-duplication map for the previous value of this function "
<< "does not include " << gv;
ptr->second.erase(gv);
if (ptr->second.empty()) {
ctx_func_dedup_map_->erase(ptr);
}
}
}
context_mod_->Update(gv, function);
// add new dedup map item.
if (ctx_func_dedup_map_ != nullptr) {
(*ctx_func_dedup_map_)[function].insert(gv);
}
}
[[noreturn]] void ReportFatal(const Diagnostic& diagnostic) final {
// TODO(relax-team): Print more context information by looking
// into the diagnostic->loc and surrounding IRModule.
    // We do not materialize DiagnosticContext, to avoid doubly referencing the
    // changed IRModule under copy-on-write. Additionally, we need to be able to
    // continue using the builder after an error is thrown, to avoid state
    // building up in an interactive environment.
LOG(FATAL) << diagnostic->message;
}
//-------------------------------
// Scope management
//-------------------------------
Optional<Expr> LookupBinding(const Var& var) final {
auto it = binding_table_.find(var->vid);
if (it == binding_table_.end()) return NullOpt;
return it->second;
}
void BeginDataflowBlock() final { block_stack_.emplace_back(BlockFrame{{}, true}); }
void BeginBindingBlock() final { block_stack_.emplace_back(BlockFrame{{}, false}); }
void BeginScope(Optional<Array<Var>> params) final {
// The current implementation handles the collection of shape var
// defined in parameter struct info annotations. The implementation
// is correct (since we will simply erase all relax Vars in EraseToWellDefined),
// but can be further improved.
//
// TODO(relax-team): Add support for relax Var in struct info annotations.
scope_stack_.emplace_back(ScopeFrame());
if (params.defined()) {
for (const auto& param : params.value()) {
AddDefinitionToScope(param);
}
}
}
void BeginInnerScope() final {
if (scope_stack_.size()) {
scope_stack_.emplace_back(scope_stack_.back());
} else {
scope_stack_.emplace_back(ScopeFrame());
}
}
void AddDefinitionToScope(Var var) final {
if (scope_stack_.empty()) {
return;
}
auto& shape_var_map = CurrentScopeFrame()->shape_var_map;
// The current implementation handles the collection of shape var
// defined in parameter struct info annotations. The implementation
// is correct (since we will simply erase all relax Vars in EraseToWellDefined),
// but can be further improved.
Map<tir::Var, PrimExpr> var_map = StructInfoVarCollector::Collect(GetStructInfo(var));
for (const auto& kv : var_map) {
const tir::Var& shape_var = kv.first;
const PrimExpr& shape_expr = kv.second;
auto it = shape_var_map.find(shape_var);
if (it == shape_var_map.end()) {
shape_var_map.Set(shape_var, shape_expr);
// Expose the shape variable as non-negative, for purposes
        // of shape inference. In many cases, knowing that the
// shape variable is non-negative allows for simpler
// expressions for dynamic shapes.
analyzer_.MarkGlobalNonNegValue(shape_var);
} else {
const PrimExpr& old_shape_expr = (*it).second;
CHECK(old_shape_expr.same_as(shape_expr) ||
analyzer_.CanProveEqual(old_shape_expr, shape_expr))
<< "Inconsistent shape var " << shape_var << " in scope: " << old_shape_expr << " vs "
<< shape_expr;
}
}
}
void EndScope() final { scope_stack_.pop_back(); }
BindingBlock EndBlock() final {
BlockFrame* cur_frame = CurrentBlockFrame();
BindingBlock ret = cur_frame->is_dataflow ? DataflowBlock(cur_frame->bindings)
: BindingBlock(cur_frame->bindings);
block_stack_.pop_back();
return ret;
}
bool CurrentBlockIsDataFlow() final { return CurrentBlockFrame()->is_dataflow; }
Var Emit(Expr expr, String name_hint) final {
return this->Emit(expr, CurrentBlockFrame()->is_dataflow, name_hint);
}
Var EmitMatchCast(Expr value, StructInfo struct_info, String name_hint) final {
value = this->Normalize(value);
CHECK(StructInfoBaseCheck(GetStructInfo(value), struct_info) != BaseCheckResult::kFailL0)
<< "It is impossible to match cast any value into the target struct_info. "
"But got value struct info: "
<< GetStructInfo(value) << ", given struct info: " << struct_info;
// NOTE: do match cast checking later in a pass.
BlockFrame* cur_frame = CurrentBlockFrame();
Var var = CreateVar(cur_frame->is_dataflow, name_hint);
UpdateStructInfo(var, struct_info);
MatchCast match_cast(var, value, struct_info);
cur_frame->bindings.push_back(match_cast);
    // NOTE: match cast does not follow the simple binding rule,
    // so it should not appear in the binding table.
AddDefinitionToScope(var);
return var;
}
Var EmitOutput(Expr output, String name_hint) final {
BlockFrame* cur_frame = CurrentBlockFrame();
ICHECK(cur_frame->is_dataflow) << "EmitOutput has to be called inside dataflow block.";
return Emit(output, false, name_hint);
}
void EmitNormalized(Binding binding) final {
BlockFrame* cur_frame = CurrentBlockFrame();
if (const auto* var_binding = binding.as<VarBindingNode>()) {
if (!cur_frame->is_dataflow) {
ICHECK(!var_binding->var.as<DataflowVarNode>())
<< "Cannot emit dataflow var in non-dataflow block";
}
// normalized check
ICHECK(var_binding->var->struct_info_.defined());
ICHECK(var_binding->value->struct_info_.defined());
cur_frame->bindings.push_back(binding);
binding_table_[var_binding->var->vid] = var_binding->value;
} else if (const auto* match_cast = binding.as<MatchCastNode>()) {
if (!cur_frame->is_dataflow) {
ICHECK(!match_cast->var.as<DataflowVarNode>())
<< "Cannot emit dataflow var in non-dataflow block";
}
// normalized check
ICHECK(match_cast->var->struct_info_.defined());
ICHECK(match_cast->value->struct_info_.defined());
      // NOTE: match cast does not follow the simple binding rule,
      // so it should not appear in the binding table.
cur_frame->bindings.push_back(binding);
AddDefinitionToScope(match_cast->var);
} else {
LOG(FATAL) << "Unsupported binding type: " << binding->GetTypeKey();
}
}
arith::Analyzer* GetAnalyzer() final { return &analyzer_; }
protected:
/*!
* \brief A representation of a block frame.
*
* A block frame is a record containing the bindings needed
* to build a binding block, and a boolean to indicate if the
* block being built is a DataflowBlock or not.
*/
struct BlockFrame {
/*!
* \brief List of bindings
*/
Array<Binding> bindings;
/*! \brief Whether current block is dataflow block. */
bool is_dataflow;
/*!
* \brief Binding map used by normalizer.
*
* \note The normalizer only caches reuse in the current block scope
* and will not cache bindings from parent scope.
*/
std::unordered_map<Expr, Var, ObjectPtrHash, ObjectPtrEqual> normalize_binding_map;
};
/*!
* \brief A representation of a scope frame.
*
   * A scope frame tracks the context of the current scope.
*/
struct ScopeFrame {
// NOTE: for simplicity, only tracks symbolic var for now
// the scope is only used for erasure, so less information means
// more conservative analysis.
// Consider impl alternative: merge with block frame if we have more frame kinds.
//
// TODO(relax-team) tracks the var defined also through match-cast.
    /*! \brief Set of defined symbolic vars, each mapped to itself. */
Map<tir::Var, PrimExpr> shape_var_map;
};
/*! \brief A stack to store block frames. */
std::vector<BlockFrame> block_stack_;
/*! \brief A stack to store scope frames. */
std::vector<ScopeFrame> scope_stack_;
/*! \brief A binding table that maps var to value. */
std::unordered_map<Id, Expr, ObjectPtrHash, ObjectPtrEqual> binding_table_;
/*! \brief A name supply to get unique names for IR construction. */
NameSupply name_supply_;
/*! \brief The IRModule being built by the BlockBuilder. */
IRModule context_mod_;
  /*! \brief Internal analyzer */
arith::Analyzer analyzer_;
/*!
   * \return The current block frame.
   * \note Never hold the returned pointer across Normalize or other scope
   * calls: the block stack may be updated, reallocating the underlying
   * vector and invalidating the frame pointer.
*/
BlockFrame* CurrentBlockFrame() {
ICHECK(!block_stack_.empty()) << "no block is being built";
return &block_stack_.back();
}
/*!
* \return The current scope frame.
   * \note Do not hold the returned pointer across calls that modify the scope stack.
*/
ScopeFrame* CurrentScopeFrame() {
ICHECK(!scope_stack_.empty()) << "no scope is being opened";
return &scope_stack_.back();
}
/*!
* \brief Emits an Expr, and returns the variable it is bound to.
* \param expr The Expr to be emitted.
   * \param is_dataflow Whether the bound variable is a DataflowVar (otherwise a plain Var).
* \param name_hint Name hint for the bound variable.
* \note This Emit function normalizes the \p expr,
* and performs shape/type deductions by calling Normalize.
* \return The new variable that \p expr is bound to.
*/
Var Emit(Expr expr, bool is_dataflow, String name_hint) {
expr = this->Normalize(expr);
Var var = CreateVar(is_dataflow, name_hint);
// set the values
UpdateStructInfo(var, Downcast<StructInfo>(expr->struct_info_.value()));
CurrentBlockFrame()->bindings.push_back(VarBinding(var, expr));
// update the binding table
binding_table_[var->vid] = expr;
return var;
}
/*!
* \brief Create var for bindings
   * \param is_dataflow Whether the bound variable is a DataflowVar (otherwise a plain Var).
* \param name_hint Name hint for the bound variable.
* \return The created var.
*/
Var CreateVar(bool is_dataflow, String name_hint) {
if (name_hint.empty()) {
name_hint = is_dataflow ? "lv" : "gv";
}
Id vid = Id(GetUniqueName(name_hint));
return is_dataflow ? DataflowVar(vid, /*struct_info_annotation=*/NullOpt)
: Var(vid, /*struct_info_annotation=*/NullOpt);
}
private:
std::string GetUniqueName(const std::string& prefix) {
return name_supply_->FreshName(prefix, /*add_prefix*/ false, /*add_underscore*/ false);
}
/*! \brief A custom structural hashing that ignores NDArray raw data. */
class StructuralHashIgnoreNDarray : public BaseValueHash {
public:
using BaseValueHash::operator();
uint64_t operator()(const ObjectRef& key) const {
return SHashHandlerIgnoreNDArray().Hash(key, false);
}
};
/*!
* \brief A hashmap to store the mapping of Relax functions and TIR PrimFuncs
* in context_mod to their GlobalVar to avoid generating duplicated functions.
* We use a custom hash to avoid hashing constants that may be bound to each BaseFunc.
*/
std::unique_ptr<
std::unordered_map<BaseFunc, std::unordered_set<GlobalVar, ObjectPtrHash, ObjectPtrEqual>,
StructuralHashIgnoreNDarray, StructuralEqual>>
ctx_func_dedup_map_ = nullptr;
/*!
   * \brief Lazily initialize the function dedup map.
*/
void LazyInitCtxFuncDedupMap() {
if (ctx_func_dedup_map_ != nullptr) return;
ctx_func_dedup_map_ = std::make_unique<
std::unordered_map<BaseFunc, std::unordered_set<GlobalVar, ObjectPtrHash, ObjectPtrEqual>,
StructuralHashIgnoreNDarray, StructuralEqual>>();
for (const auto& kv : context_mod_->functions) {
const GlobalVar gv = kv.first;
const BaseFunc func = kv.second;
(*ctx_func_dedup_map_)[func].insert(gv);
}
}
// Collect all the variables that a parameter var can define.
  // The collector is used to make sure that we record the
  // shape vars as defined when calling BeginScope(params).
class StructInfoVarCollector : public StructInfoVisitor {
public:
static Map<tir::Var, PrimExpr> Collect(const StructInfo& struct_info) {
StructInfoVarCollector collector;
collector(struct_info);
return collector.shape_var_map_;
}
private:
void VisitStructInfo_(const TensorStructInfoNode* op) final {
if (const auto* shape_expr = op->shape.as<ShapeExprNode>()) {
for (const PrimExpr& s : shape_expr->values) {
        // Only collect single-var-defined shapes. Ignore something like `R.Tensor((m + 1, n + 1))`
if (const auto* var = s.as<tir::VarNode>()) {
shape_var_map_.Set(GetRef<tir::Var>(var), s);
}
}
}
}
void VisitStructInfo_(const ShapeStructInfoNode* op) final {
for (const PrimExpr& s : op->values.value_or(Array<PrimExpr>())) {
      // Only collect single-var-defined shapes. Ignore something like `R.Shape((m + 1, n + 1))`
if (const auto* var = s.as<tir::VarNode>()) {
shape_var_map_.Set(GetRef<tir::Var>(var), s);
}
}
}
void VisitStructInfo_(const PrimStructInfoNode* op) final {
// Only collect single var defined shape. Ignore something like `R.Prim(value=m + 1)`
if (op->value.defined()) {
if (auto var = op->value.as<tir::Var>()) {
shape_var_map_.Set(var.value(), op->value.value());
}
}
}
private:
Map<tir::Var, PrimExpr> shape_var_map_;
};
};
//---------------------------------------
// Normalization
//---------------------------------------
#define RELAX_EXPR_NORMALIZER_LEAF(OP) \
Expr VisitExpr_(const OP* op) final { return GetRef<Expr>(op); }
// TODO(relax-team): Check normalize logic after struct info.
// Normalizer on struct info:
//
// We take advantage of the following invariants (checked in the constructor):
// - If an expr appears in StructInfo, then it is already normalized.
// As a result, we do not need to peek into StructInfo in Normalization.
// - Constant, ShapeExpr, already have their StructInfo populated in constructing time.
class Normalizer : public BlockBuilderImpl, private ExprFunctor<Expr(const Expr&)> {
public:
explicit Normalizer(IRModule context_mod) : BlockBuilderImpl(context_mod) {}
explicit Normalizer(IRModule context_mod,
BlockBuilder::DisableOperatorSpecificNormalizationForTVMScript)
: BlockBuilderImpl(context_mod), apply_f_normalize_(false) {}
Expr Normalize(const Expr& expr) final {
Expr normalized = this->VisitExpr(expr);
// Invariant:
    //   After Normalize: an Expr always has
// struct_info (with the exception of Op).
if (!normalized->IsInstance<OpNode>()) {
ICHECK(normalized->struct_info_.defined())
<< "The struct_info_ of an Expr except OpNode after "
"normalization must not be nullptr. However, this Expr does not have struct_info_: "
<< normalized;
}
return normalized;
}
/*!
   * \brief Normalize argument values of calls and other IR sub-fields.
   * \param arg The argument.
   * \return The normalized value.
   *
   * \note This function creates a new binding for non-leaf expressions, except for tuples.
*/
Expr NormalizeArgument(const Expr& arg) final {
if (!block_stack_.empty()) {
// cache lookup
BlockFrame* cur_frame = CurrentBlockFrame();
auto it = cur_frame->normalize_binding_map.find(arg);
if (it != cur_frame->normalize_binding_map.end()) {
return it->second;
}
}
// skip visit expr's cache, normalize arg
Expr post = ExprFunctor::VisitExpr(arg);
if (!IsLeafOrTuple(arg)) {
ICHECK(!block_stack_.empty()) << "Cannot normalize non-leaf without a scope";
Var var = this->Emit(post, "");
// NOTE: current frame addr can change due to underlying vector
// re-allocation, redo lookup
CurrentBlockFrame()->normalize_binding_map[arg] = var;
return var;
} else {
return post;
}
}
RELAX_EXPR_NORMALIZER_LEAF(ExternFuncNode);
RELAX_EXPR_NORMALIZER_LEAF(GlobalVarNode);
RELAX_EXPR_NORMALIZER_LEAF(OpNode);
RELAX_EXPR_NORMALIZER_LEAF(ConstantNode);
RELAX_EXPR_NORMALIZER_LEAF(ShapeExprNode);
RELAX_EXPR_NORMALIZER_LEAF(PrimValueNode);
RELAX_EXPR_NORMALIZER_LEAF(StringImmNode);
RELAX_EXPR_NORMALIZER_LEAF(DataTypeImmNode);
template <typename T>
Expr VisitVar_(const typename T::ContainerType* var) {
    // Parameters and free vars must come with struct info.
    // Other vars must have already been normalized through binding.
ICHECK(var->struct_info_.defined())
<< "Var " << var->name_hint() << " does not have struct info.";
return GetRef<Var>(var);
}
Expr VisitExpr_(const VarNode* var_ptr) final {
auto var = VisitVar_<Var>(var_ptr);
if (HasVoidStructInfo(var)) {
return VisitExpr(Tuple(Array<Expr>{}));
} else {
return var;
}
}
Expr VisitExpr_(const DataflowVarNode* var) final { return VisitVar_<DataflowVar>(var); }
Expr VisitExpr(const Expr& expr) final {
// lookup normalize map
if (!block_stack_.empty()) {
BlockFrame* cur_frame = CurrentBlockFrame();
auto it = cur_frame->normalize_binding_map.find(expr);
if (it != cur_frame->normalize_binding_map.end()) {
return it->second;
}
}
return ExprFunctor::VisitExpr(expr);
}
Expr VisitExpr_(const TupleNode* op) final {
bool unchanged = true;
Array<Expr> new_fields;
for (const Expr& field : op->fields) {
Expr new_field = this->NormalizeArgument(field);
new_fields.push_back(new_field);
unchanged &= new_field.same_as(field);
}
Tuple tuple = unchanged ? GetRef<Tuple>(op) : Tuple(new_fields, op->span);
// Update tuple fields.
if (!tuple->struct_info_.defined()) {
Array<StructInfo> tuple_sinfo;
for (Expr field : tuple->fields) {
tuple_sinfo.push_back(GetStructInfo(field));
}
UpdateStructInfo(tuple, TupleStructInfo(tuple_sinfo, op->span));
}
return tuple;
}
Expr VisitExpr_(const FunctionNode* op) final {
Expr new_body = this->VisitWithNewScope(op->body, op->params);
if (new_body.same_as(op->body)) {
return GetRef<Function>(op);
} else {
return Function(op->params, new_body, op->ret_struct_info, op->is_pure, op->attrs);
}
}
Expr VisitExpr_(const CallNode* op) final {
Expr new_op = this->NormalizeArgument(op->op);
Array<Expr> new_args = op->args.Map([this](const Expr& arg) { return NormalizeArgument(arg); });
Call call;
if (new_op.same_as(op->op) && new_args.same_as(op->args)) {
call = GetRef<Call>(op);
} else {
call = Call(new_op, new_args, op->attrs, op->sinfo_args);
}
if (!call->struct_info_.defined()) {
auto inferred_sinfo = InferStructInfo(call);
UpdateStructInfo(call, inferred_sinfo);
}
// If the operation has defined a custom normalization
// function using the FNormalize attribute, apply it. If the
// normalization modified the expression, re-visit in case it
// produced a nested expression.
if (apply_f_normalize_) {
if (auto func_normalize = op_map_normalize_.get(op->op, nullptr); func_normalize != nullptr) {
Expr normalized = func_normalize(GetRef<BlockBuilder>(this), call);
if (!normalized.same_as(call)) {
return VisitExpr(normalized);
}
}
}
return call;
}
Expr VisitExpr_(const SeqExprNode* op) final {
bool unchanged = true;
Array<BindingBlock> new_blocks;
for (BindingBlock block : op->blocks) {
BindingBlock new_block = this->VisitBindingBlock(block);
new_blocks.push_back(new_block);
unchanged &= new_block.same_as(block);
}
// Because the input may not be normalized, the SeqExpr may occur
// nested within another SeqExpr. In that case, we want to use
    // whatever binding-block type the parent uses, so that any
// bindings collected into the prologue will be compatible with
// the parent block.
if (block_stack_.size() && CurrentBlockIsDataFlow()) {
this->BeginDataflowBlock();
} else {
this->BeginBindingBlock();
}
// the body may not be a leaf expression, so check for that
Expr new_body = this->NormalizeArgument(op->body);
unchanged &= new_body.same_as(op->body);
BindingBlock prologue = this->EndBlock();
if (!prologue->bindings.empty()) {
new_blocks.push_back(prologue);
unchanged = false;
}
// Combine nearby blocks if possible
Array<BindingBlock> normalized_blocks = NormalizeBlocks(new_blocks);
unchanged &= normalized_blocks.same_as(new_blocks);
SeqExpr seq_expr;
if (unchanged) {
seq_expr = GetRef<SeqExpr>(op);
} else {
seq_expr = SeqExpr(normalized_blocks, new_body, op->span);
}
// only do shape/type inference if the SeqExpr does not have shape/type
if (!seq_expr->struct_info_.defined()) {
UpdateStructInfo(seq_expr, EraseToWellDefinedInScope(GetStructInfo(seq_expr->body)));
}
return seq_expr;
}
Expr VisitExpr_(const IfNode* op) final {
Expr new_cond = this->NormalizeArgument(op->cond);
Expr new_true = this->VisitWithNewScope(op->true_branch);
Expr new_false = this->VisitWithNewScope(op->false_branch);
If if_node;
if (new_cond.same_as(op->cond) && new_true.same_as(op->true_branch) &&
new_false.same_as(op->false_branch)) {
if_node = GetRef<If>(op);
} else {
if_node = If(new_cond, new_true, new_false, op->span);
}
if (!if_node->struct_info_.defined()) {
auto true_info = EraseToWellDefinedInScope(GetStructInfo(new_true));
auto false_info = EraseToWellDefinedInScope(GetStructInfo(new_false));
UpdateStructInfo(if_node, StructInfoLCA(true_info, false_info));
}
return if_node;
}
Expr VisitExpr_(const TupleGetItemNode* op) final {
Expr new_tuple = this->NormalizeArgument(op->tuple);
TupleGetItem node = new_tuple.same_as(op->tuple) ? GetRef<TupleGetItem>(op)
: TupleGetItem(new_tuple, op->index);
if (!node->struct_info_.defined()) {
auto opt = MatchStructInfo<TupleStructInfo>(node->tuple);
ICHECK(opt) << "The struct info of Tuple must be TupleStructInfo, "
<< "but expression " << node->tuple << " has struct info "
<< node->tuple->struct_info_;
UpdateStructInfo(node, opt.value()->fields[node->index]);
}
return node;
}
Binding VisitBinding(const Binding& binding) {
if (auto* var_binding = binding.as<VarBindingNode>()) {
return this->VisitVarBinding(GetRef<VarBinding>(var_binding));
} else {
auto* match_cast = binding.as<MatchCastNode>();
ICHECK(match_cast) << "Unsupported binding type: " << binding->GetTypeKey();
return this->VisitMatchCast(GetRef<MatchCast>(match_cast));
}
}
VarBinding VisitVarBinding(VarBinding binding) {
Expr new_value = this->VisitExpr(binding->value);
if (!new_value.same_as(binding->value)) {
binding = VarBinding(binding->var, new_value, binding->span);
}
if (!binding->var->struct_info_.defined()) {
UpdateStructInfo(binding->var, GetStructInfo(new_value));
}
return binding;
}
MatchCast VisitMatchCast(MatchCast binding) {
Expr new_value = this->VisitExpr(binding->value);
if (!new_value.same_as(binding->value)) {
binding = MatchCast(binding->var, new_value, binding->struct_info, binding->span);
}
if (!binding->var->struct_info_.defined()) {
UpdateStructInfo(binding->var, binding->struct_info);
}
return binding;
}
BindingBlock VisitBindingBlock(const BindingBlock& block) {
if (block.as<DataflowBlockNode>()) {
this->BeginDataflowBlock();
} else {
this->BeginBindingBlock();
}
bool unchanged = true;
for (const Binding& binding : block->bindings) {
Binding new_binding = this->VisitBinding(binding);
unchanged &= new_binding.same_as(binding);
this->EmitNormalized(new_binding);
}
BindingBlock new_block = this->EndBlock();
unchanged &= new_block->bindings.size() == block->bindings.size();
if (unchanged) {
return block;
}
return new_block;
}
private:
// Helper function to infer the type of a Call.
StructInfo InferStructInfo(const Call& call) {
if (auto* op_ptr = call->op.as<OpNode>()) {
// Case 1: the op field is a primitive op, look up FInferStructInfo attribute
Op op = GetRef<Op>(op_ptr);
bool is_dist_op = false;
for (const auto& arg : call->args) {
if (arg->struct_info_.as<distributed::DTensorStructInfoNode>()) {
is_dist_op = true;
break;
}
}
if (is_dist_op) {
for (const auto& arg : call->args) {
ICHECK(!arg->struct_info_.as<TensorStructInfoNode>())
<< "Distributed operator must take DTensor instead of Tensor as input";
}
ICHECK(op_map_dist_infer_struct_info_.count(op))
<< " Cannot find the dist.FInferStructInfo attribute registered to op: " << op->name;
return op_map_dist_infer_struct_info_[op](call, GetRef<BlockBuilder>(this));
}
ICHECK(op_map_infer_struct_info_.count(op))
<< " Cannot find the FInferStructInfo attribute registered to op: " << op->name;
return op_map_infer_struct_info_[op](call, GetRef<BlockBuilder>(this));
} else {
// derive using function parameters
ICHECK(call->op->struct_info_.defined());
auto opt = MatchStructInfo<FuncStructInfo>(call->op);
      ICHECK(opt) << "Call->op must contain a function struct info";
FuncStructInfo finfo = opt.value();
return DeriveCallRetStructInfo(finfo, call, GetRef<BlockBuilder>(this), &analyzer_);
}
}
// erase to well defined within current scope.
StructInfo EraseToWellDefinedInScope(StructInfo info) {
if (scope_stack_.empty()) {
// If no scopes are active, then this fragment does not require
// any normalization.
return info;
}
auto* curr_scope = CurrentScopeFrame();
auto f_shape_var_map = [curr_scope](tir::Var var) -> Optional<PrimExpr> {
auto it = curr_scope->shape_var_map.find(var);
if (it != curr_scope->shape_var_map.end()) return (*it).second;
return NullOpt;
};
return EraseToWellDefined(info, f_shape_var_map);
}
Expr VisitWithNewScope(const Expr& expr, Optional<Array<Var>> params = NullOpt) {
if (params.defined()) {
this->BeginScope(params.value());
} else {
this->BeginInnerScope();
}
Expr ret;
    // A SeqExpr does not need extra preparation for normalization.
if (expr.as<SeqExprNode>()) {
ret = this->VisitExpr(expr);
} else {
this->BeginBindingBlock();
Expr post = this->NormalizeArgument(expr);
BindingBlock prologue = this->EndBlock();
// "New scopes" (function bodies, if/else clauses) must be wrapped in seq exprs.
// Don't wrap if it's already a seq and there are no bindings to add
if (post.as<SeqExprNode>() && prologue->bindings.empty()) {
return post;
}
Array<BindingBlock> bindings;
if (!prologue->bindings.empty()) {
bindings.push_back(prologue);
}
SeqExpr seq(bindings, post);
UpdateStructInfo(seq, EraseToWellDefinedInScope(GetStructInfo(seq->body)));
ret = seq;
}
this->EndScope();
return ret;
}
Array<BindingBlock> FlattenBlocks(const Array<BindingBlock>& blocks) {
// If there is a binding that is a seq expr, split the current block,
// add the nested blocks prior to the seq expr, and bind the seq expr body
// to the var
Array<BindingBlock> ret;
bool changed = false;
for (const BindingBlock& block : blocks) {
bool is_dataflow = block->IsInstance<DataflowBlockNode>();
Array<Binding> current;
for (const Binding& binding : block->bindings) {
Expr value;
if (const auto* var_binding = binding.as<VarBindingNode>()) {
value = var_binding->value;
} else if (const auto* match_cast = binding.as<MatchCastNode>()) {
value = match_cast->value;
} else {
LOG(FATAL) << "Unknown binding type: " << binding->GetTypeKey();
}
// if we encounter a nested seq, we have to flatten it:
// 1. Append the binding block we've accumulated so far
// 2. Reset the current block
// 3. Append the inner blocks
// 4. Add a binding of the current var to the seq expr's body to the current block
// then continue
if (auto seq = value.as<SeqExprNode>()) {
changed = true;
ret.push_back(is_dataflow ? DataflowBlock(current) : BindingBlock(current));
current = {};
// We do not need to flatten recursively because the normalizer will have normalized
// and thus flattened the inner SeqExprs already
for (const BindingBlock& block : seq->blocks) {
if (is_dataflow && !block->IsInstance<DataflowBlockNode>()) {
// A DataflowBlock occurring within a non-DataflowBlock
// usually is an error, resulting from return of a
// `BindingBlock`. However, it may still be well-formed
// if there are no relax::DataflowVar instances used by
// the non-DataflowBlock. This would result in multiple
// dataflow sections, split by non-dataflow portions,
// but would still be valid.
//
// Since the most common occurrence is due to mis-use,
// explicitly check for it here rather than waiting for a
// WellFormed check later on.
auto free_vars = FreeVars(SeqExpr({block}, Tuple(Array<Expr>{})));
Array<DataflowVar> free_dataflow_vars;
for (const auto& var : free_vars) {
if (auto opt = var.as<DataflowVar>()) {
free_dataflow_vars.push_back(opt.value());
}
}
if (free_dataflow_vars.size()) {
LOG(FATAL)
<< "Malformed AST: "
<< "A DataflowVar may only be used within a DataflowBlock. "
<< "The variable " << binding->var << " is defined within a DataflowBlock, "
<< "but is bound to a SeqExpr that contains non-dataflow BindingBlocks. "
<< "These non-dataflow BindingBlocks use the DataflowVars "
<< free_dataflow_vars << ", which is invalid.";
}
}
ret.push_back(block);
}
if (const auto* var_binding = binding.as<VarBindingNode>()) {
current.push_back(VarBinding(var_binding->var, seq->body));
} else if (const auto* match_cast = binding.as<MatchCastNode>()) {
current.push_back(MatchCast(match_cast->var, seq->body, match_cast->struct_info));
} else {
LOG(FATAL) << "Unknown binding type: " << binding->GetTypeKey();
}
} else {
current.push_back(binding);
}
}
ret.push_back(is_dataflow ? DataflowBlock(current) : BindingBlock(current));
}
return changed ? ret : blocks;
}
Array<BindingBlock> NormalizeBlocks(const Array<BindingBlock>& blocks) {
bool changed = false;
Array<BindingBlock> ret;
auto flattened = FlattenBlocks(blocks);
if (!flattened.same_as(blocks)) {
changed = true;
}
for (const BindingBlock& block : flattened) {
if (block->bindings.empty()) {
// Case 1. Skip empty blocks
changed = true;
} else if (!ret.empty() && ret.back()->type_index() == block->type_index()) {
// Case 2. Merge with previous block if possible
BindingBlock merged;
// NOTE: should check DataflowBlockNode first.
if (const auto* dataflow_block = ret.back().as<DataflowBlockNode>()) {
auto n = make_object<DataflowBlockNode>(*dataflow_block);
n->bindings.insert(n->bindings.end(), block->bindings.begin(), block->bindings.end());
merged = DataflowBlock(n);
} else if (const auto* binding_block = ret.back().as<BindingBlockNode>()) {
auto n = make_object<BindingBlockNode>(*binding_block);
n->bindings.insert(n->bindings.end(), block->bindings.begin(), block->bindings.end());
merged = BindingBlock(n);
} else {
LOG(FATAL) << "Unknown block type: " << ret.back()->GetTypeKey();
}
ret.pop_back();
ret.push_back(merged);
changed = true;
} else {
// Case 3. Add to the result
ret.push_back(block);
}
}
return changed ? ret : blocks;
}
/*! \brief Operator struct info inference map. */
tvm::OpAttrMap<FInferStructInfo> op_map_infer_struct_info_ =
Op::GetAttrMap<FInferStructInfo>("FInferStructInfo");
tvm::OpAttrMap<FInferStructInfo> op_map_dist_infer_struct_info_ =
Op::GetAttrMap<FInferStructInfo>("dist.FInferStructInfo");
/*! \brief Operator normalization function */
tvm::OpAttrMap<FNormalize> op_map_normalize_ = Op::GetAttrMap<FNormalize>("FNormalize");
/*! \brief Whether the FNormalize function should be applied */
bool apply_f_normalize_{true};
};
BlockBuilder BlockBuilder::Create(Optional<IRModule> mod) {
ObjectPtr<BlockBuilderNode> n = make_object<Normalizer>(mod.value_or(IRModule()));
return BlockBuilder(n);
}
BlockBuilder BlockBuilder::Create(Optional<IRModule> mod,
BlockBuilder::DisableOperatorSpecificNormalizationForTVMScript) {
ObjectPtr<BlockBuilderNode> n = make_object<Normalizer>(
mod.value_or(IRModule()), BlockBuilder::DisableOperatorSpecificNormalizationForTVMScript());
return BlockBuilder(n);
}
//---------------------------------------
// User facing function registration.
//---------------------------------------
TVM_REGISTER_OBJECT_TYPE(BlockBuilderNode);
TVM_REGISTER_GLOBAL("relax.BlockBuilderCreate").set_body_typed([](Optional<IRModule> mod) {
return BlockBuilder::Create(mod);
});
TVM_REGISTER_GLOBAL("relax.BlockBuilderBeginDataflowBlock")
.set_body_method<BlockBuilder>(&BlockBuilderNode::BeginDataflowBlock);
TVM_REGISTER_GLOBAL("relax.BlockBuilderBeginBindingBlock")
.set_body_method<BlockBuilder>(&BlockBuilderNode::BeginBindingBlock);
TVM_REGISTER_GLOBAL("relax.BlockBuilderEndBlock")
.set_body_method<BlockBuilder>(&BlockBuilderNode::EndBlock);
TVM_REGISTER_GLOBAL("relax.BlockBuilderNormalize")
.set_body_method<BlockBuilder>(&BlockBuilderNode::Normalize);
TVM_REGISTER_GLOBAL("relax.BlockBuilderEmit")
.set_body_typed([](BlockBuilder builder, Expr expr, String name_hint) {
return builder->Emit(expr, name_hint);
});
TVM_REGISTER_GLOBAL("relax.BlockBuilderEmitMatchCast")
.set_body_typed([](BlockBuilder builder, Expr value, StructInfo struct_info, String name_hint) {
return builder->EmitMatchCast(value, struct_info, name_hint);
});
TVM_REGISTER_GLOBAL("relax.BlockBuilderEmitOutput")
.set_body_typed([](BlockBuilder builder, const Expr& output, String name_hint) {
return builder->EmitOutput(output, name_hint);
});
TVM_REGISTER_GLOBAL("relax.BlockBuilderEmitNormalized")
.set_body_typed([](BlockBuilder builder, Binding binding) {
return builder->EmitNormalized(binding);
});
TVM_REGISTER_GLOBAL("relax.BlockBuilderGetUniqueName")
.set_body_typed([](BlockBuilder builder, String name_hint) {
return builder->name_supply()->FreshName(name_hint, /*add_prefix*/ false,
/*add_underscore*/ false);
});
TVM_REGISTER_GLOBAL("relax.BlockBuilderAddFunction")
.set_body_method<BlockBuilder>(&BlockBuilderNode::AddFunction);
TVM_REGISTER_GLOBAL("relax.BlockBuilderUpdateFunction")
.set_body_method<BlockBuilder>(&BlockBuilderNode::UpdateFunction);
TVM_REGISTER_GLOBAL("relax.BlockBuilderGetContextIRModule")
.set_body_method<BlockBuilder>(&BlockBuilderNode::GetContextIRModule);
TVM_REGISTER_GLOBAL("relax.BlockBuilderFinalize")
.set_body_method<BlockBuilder>(&BlockBuilderNode::Finalize);
TVM_REGISTER_GLOBAL("relax.BlockBuilderCurrentBlockIsDataFlow")
.set_body_method<BlockBuilder>(&BlockBuilderNode::CurrentBlockIsDataFlow);
TVM_REGISTER_GLOBAL("relax.BlockBuilderLookupBinding")
.set_body_method<BlockBuilder>(&BlockBuilderNode::LookupBinding);
TVM_REGISTER_GLOBAL("relax.BlockBuilderBeginScope")
.set_body_method<BlockBuilder>(&BlockBuilderNode::BeginScope);
TVM_REGISTER_GLOBAL("relax.BlockBuilderEndScope")
.set_body_method<BlockBuilder>(&BlockBuilderNode::EndScope);
} // namespace relax
} // namespace tvm
```
|
```php
<?php
/*
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not
 * use this file except in compliance with the License. You may obtain a copy of
 * the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations under
 * the License.
 */
namespace Google\Service\Playdeveloperreporting;
class GooglePlayDeveloperReportingV1beta1QuerySlowRenderingRateMetricSetRequest extends \Google\Collection
{
protected $collection_key = 'metrics';
/**
* @var string[]
*/
public $dimensions;
/**
* @var string
*/
public $filter;
/**
* @var string[]
*/
public $metrics;
/**
* @var int
*/
public $pageSize;
/**
* @var string
*/
public $pageToken;
protected $timelineSpecType = GooglePlayDeveloperReportingV1beta1TimelineSpec::class;
protected $timelineSpecDataType = '';
/**
* @var string
*/
public $userCohort;
/**
* @param string[]
*/
public function setDimensions($dimensions)
{
$this->dimensions = $dimensions;
}
/**
* @return string[]
*/
public function getDimensions()
{
return $this->dimensions;
}
/**
* @param string
*/
public function setFilter($filter)
{
$this->filter = $filter;
}
/**
* @return string
*/
public function getFilter()
{
return $this->filter;
}
/**
* @param string[]
*/
public function setMetrics($metrics)
{
$this->metrics = $metrics;
}
/**
* @return string[]
*/
public function getMetrics()
{
return $this->metrics;
}
/**
* @param int
*/
public function setPageSize($pageSize)
{
$this->pageSize = $pageSize;
}
/**
* @return int
*/
public function getPageSize()
{
return $this->pageSize;
}
/**
* @param string
*/
public function setPageToken($pageToken)
{
$this->pageToken = $pageToken;
}
/**
* @return string
*/
public function getPageToken()
{
return $this->pageToken;
}
/**
* @param GooglePlayDeveloperReportingV1beta1TimelineSpec
*/
public function setTimelineSpec(GooglePlayDeveloperReportingV1beta1TimelineSpec $timelineSpec)
{
$this->timelineSpec = $timelineSpec;
}
/**
* @return GooglePlayDeveloperReportingV1beta1TimelineSpec
*/
public function getTimelineSpec()
{
return $this->timelineSpec;
}
/**
* @param string
*/
public function setUserCohort($userCohort)
{
$this->userCohort = $userCohort;
}
/**
* @return string
*/
public function getUserCohort()
{
return $this->userCohort;
}
}
// Adding a class alias for backwards compatibility with the previous class name.
class_alias(GooglePlayDeveloperReportingV1beta1QuerySlowRenderingRateMetricSetRequest::class, 'Google_Service_Playdeveloperreporting_GooglePlayDeveloperReportingV1beta1QuerySlowRenderingRateMetricSetRequest');
```
|
```objective-c
/*
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name of Intel Corporation nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*/
#ifndef _SGX_UAE_QUOTE_EX_H_
#define _SGX_UAE_QUOTE_EX_H_
#include <stdint.h>
#include "sgx_quote.h"
#include "sgx_error.h"
#include "sgx_urts.h"
#ifdef __cplusplus
extern "C" {
#endif
/**
 * Function used to select the attestation key. The function will select an attestation key from the list of
 * attestation keys supported by the quote verifier, if the platform can support one in the list. If the platform
 * cannot support any key in the list, the API will return the error SGX_ERROR_UNSUPPORTED_ATT_KEY_ID. If multiple
 * attestation keys are supported by both the quote verifier and the platform software, the "default quoting type"
 * in the config file will be used. Alternatively, if the quote verifier doesn't supply a list of the attestation
 * keys it supports (p_att_key_id_list == NULL), then the platform software will select the attestation key by its
 * internal logic.
*
*
* @param p_att_key_id_list [In] List of the supported attestation key IDs provided by the quote verifier. Can be
* NULL, in such case a default att key supported by PSW will be returned.
* @param att_key_id_list_size The size of attestation key ID list.
* @param p_selected_key_id [In, Out] Pointer to the selected attestation key. This should be used by the
* application as input to the quoting and remote attestation APIs. Must not be NULL.
*
* @return SGX_SUCCESS Successfully return an attestation key. The p_selected_key_id will be filled with selected
* attestation key ID.
 * @return SGX_ERROR_INVALID_PARAMETER Invalid parameter if p_selected_key_id is NULL,
 * the list header is incorrect, or the number of key IDs in the list exceeds the maximum.
* @return SGX_ERROR_UNSUPPORTED_ATT_KEY_ID The platform quoting infrastructure does not support any of the keys in the
* list. This can be because it doesn't carry the QE that owns the attestation key or the platform is in a
 * mode that doesn't allow any of the listed keys, for example for privacy reasons. It also returns this error
* if the platform software only supports a key that is not supported by the current launch control policy.
* @return SGX_ERROR_UNEXPECTED Unexpected internal error.
*/
sgx_status_t SGXAPI sgx_select_att_key_id(const uint8_t *p_att_key_id_list, uint32_t att_key_id_list_size,
sgx_att_key_id_t *p_selected_key_id);
/**
* The application calls this API to request the selected platform's attestation key owner to generate or obtain
* the attestation key. Once called, the QE that owns the attestation key described by the inputted attestation
 * key id will do what is required to get this platform's attestation key, including getting any certification data
* required from the PCE. Depending on the type of attestation key and the attestation key owner, this API will
* return the same attestation key public ID or generate a new one. The caller can request that the attestation
* key owner "refresh" the key. This will cause the owner to either re-get the key or generate a new one. The
* platform's attestation key owner is expected to store the key in persistent memory and use it in the
* subsequent quote generation APIs described below.
*
* In an environment where attestation key provisioning and certification needs to take place during a platform
* deployment phase, an application can generate the attestation key, certify it with the PCK Cert and register
 * it with the attestation owner's cloud infrastructure. That way, the key is available during the run-time
 * phase to generate quotes without requiring re-certification.
*
* The QE's target info is also returned by this API that will allow the application's enclave to generate a
* REPORT that the attestation key owner's QE will verify using local REPORT-based attestation when generating a
* quote.
*
* In order to allow the application to allocate the public key id buffer first, the application can call this
* function with the p_pub_key_id set to NULL and the p_pub_key_id_size to a valid size_t pointer. In this
 * case, the function will return the required buffer size in p_pub_key_id_size and ignore the other
 * parameters. The application can then call this API again with the correct p_pub_key_id_size and the pointer to
 * the allocated buffer in p_pub_key_id.
*
*
* @param p_att_key_id The selected att_key_id from the quote verifier's list. It includes the QE identity as
* well as the attestation key's algorithm type. It cannot be NULL.
* @param p_qe_target_info Pointer to QE's target info required by the application to generate an enclave REPORT
* targeting the selected QE. Must not be NULL when p_pub_key_id is not NULL.
* @param p_pub_key_id_size This parameter can be used in 2 ways. When p_pub_key_id is NULL, the API will
* return the buffer size required to hold the attestation's public key ID. The
* application can then allocate the buffer and call it again with p_pub_key_id not set
* to NULL and the other parameters valid. If p_pub_key_id is not NULL, p_pub_key_id_size
* must be large enough to hold the return attestation's public key ID. Must not be
* NULL.
* @param p_pub_key_id This parameter can be used in 2 ways. When it is passed in as NULL and p_pub_key_id_size
* is not NULL, the API will return the buffer size required to hold the attestation's
* public key ID. The other parameters will be ignored. When it is not NULL, it must point
 * to a buffer which is at least as long as the value passed in by p_pub_key_id_size. The API will
 * return the attestation key's public identifier if no error occurred.
* @return SGX_SUCCESS Successfully selected an attestation key. Either returns the required attestation's
* public key ID size in p_pub_key_id_size when p_pub_key_id is passed in as NULL. When p_pub_key_id is
* not NULL, p_qe_target_info will contain the attestation key's QE target info for REPORT generation
* and p_pub_key_id will contain the attestation's public key ID.
 * @return SGX_ERROR_INVALID_PARAMETER Invalid parameter if p_pub_key_id_size or p_att_key_id is NULL.
* If p_pub_key_id_size is not NULL, the other parameters must be valid.
* @return SGX_ERROR_UNSUPPORTED_ATT_KEY_ID The platform quoting infrastructure does not support the key described
* in p_att_key_id.
* @return SGX_ERROR_ATT_KEY_CERTIFICATION_FAILURE Failed to generate and certify the attestation key.
*
*/
sgx_status_t SGXAPI sgx_init_quote_ex(const sgx_att_key_id_t* p_att_key_id,
sgx_target_info_t *p_qe_target_info,
size_t* p_pub_key_id_size,
uint8_t* p_pub_key_id);
/**
* The application needs to call this function before generating a quote. The quote size is variable
* depending on the type of attestation key selected and other platform or key data required to generate the
* quote. Once the application calls this API, it will use the returned p_quote_size to allocate the buffer
* required to hold the generated quote. A pointer to this buffer is provided to the sgx_get_quote_ex() API.
*
* If the key is not available, this API may return an error (SGX_ERROR_ATT_KEY_UNINITIALIZED) depending on
* the algorithm. In this case, the caller must call sgx_init_quote_ex() to re-generate and certify the attestation key.
*
* @param p_att_key_id The selected attestation key ID from the quote verifier's list. It includes the QE
* identity as well as the attestation key's algorithm type. It cannot be NULL.
* @param p_quote_size Pointer to the location where the required quote buffer size will be returned. Must
* not be NULL.
*
* @return SGX_SUCCESS Successfully calculated the required quote size. The required size in bytes is returned in the
* memory pointed to by p_quote_size.
* @return SGX_ERROR_INVALID_PARAMETER Invalid parameter. p_quote_size and p_att_key_id must not be NULL.
* @return SGX_ERROR_ATT_KEY_UNINITIALIZED The platform quoting infrastructure does not have the attestation
* key available to generate quotes. sgx_init_quote_ex() must be called again.
* @return SGX_ERROR_UNSUPPORTED_ATT_KEY_ID The platform quoting infrastructure does not support the key
* described in p_att_key_id.
*/
sgx_status_t SGXAPI sgx_get_quote_size_ex(const sgx_att_key_id_t *p_att_key_id,
uint32_t* p_quote_size);
/**
* The function will take the application enclave's REPORT that will be converted into a quote after the QE verifies
* the REPORT. Once verified it will sign it with platform's attestation key matching the selected attestation key ID.
* If the key is not available, this API may return an error (SGX_ERROR_ATT_KEY_UNINITIALIZED) depending on the algorithm.
 * In this case, the caller must call sgx_init_quote_ex() to re-generate and certify the attestation key.
*
*
* The caller can request a REPORT from the QE using a supplied nonce. This will allow the enclave requesting the quote
* to verify the QE used to generate the quote. This makes it more difficult for something to spoof a QE and allows the
* app enclave to catch it earlier. But since the authenticity of the QE lies in knowledge of the Quote signing key,
* such spoofing will ultimately be detected by the quote verifier. QE REPORT.ReportData =
* SHA256(*p_nonce||*p_quote)||32-0x00's.
*
* @param p_app_report Pointer to the enclave report that needs the quote. The report needs to be generated using the
* QE's target info returned by the sgx_init_quote_ex() API. Must not be NULL.
* @param p_att_key_id The selected attestation key ID from the quote verifier's list. It includes the QE identity as
* well as the attestation key's algorithm type. It cannot be NULL.
* @param p_qe_report_info Pointer to a data structure that will contain the information required for the QE to generate
* a REPORT that can be verified by the application enclave. The inputted data structure
* contains the application's TARGET_INFO, a nonce and a buffer to hold the generated report.
* The QE Report will be generated using the target information and the QE's REPORT.ReportData =
* SHA256(*p_nonce||*p_quote)||32-0x00's. This parameter is used when the application wants to
* verify the QE's REPORT to provide earlier detection that the QE is not being spoofed by
* untrusted code. A spoofed QE will ultimately be rejected by the remote verifier. This
* parameter is optional and will be ignored when NULL.
* @param p_quote Pointer to the buffer that will contain the quote.
* @param quote_size Size of the buffer pointed to by p_quote.
*
* @return SGX_SUCCESS Successfully generated the quote.
 * @return SGX_ERROR_INVALID_PARAMETER If p_app_report, p_quote, or p_att_key_id is NULL, or if quote_size
 * isn't large enough.
* @return SGX_ERROR_ATT_KEY_UNINITIALIZED The platform quoting infrastructure does not have the attestation key
* available to generate quotes. sgx_init_quote_ex() must be called again.
* @return SGX_ERROR_UNSUPPORTED_ATT_KEY_ID The platform quoting infrastructure does not support the key described in
* p_att_key_id.
* @return SGX_ERROR_MAC_MISMATCH Report MAC check failed on application report.
*/
sgx_status_t SGXAPI sgx_get_quote_ex(const sgx_report_t *p_app_report,
const sgx_att_key_id_t *p_att_key_id,
sgx_qe_report_info_t *p_qe_report_info,
uint8_t *p_quote,
uint32_t quote_size);
/**
 * The application needs to call this function before getting the supported attestation key IDs. The number is variable
 * depending on the platform. Once the application calls this API, it will use the returned p_att_key_id_num to allocate
 * the buffer required to hold the supported attestation key IDs. A pointer to this buffer is provided to the
 * sgx_get_supported_att_key_ids() API.
*
* @param p_att_key_id_num Pointer to the location where the required number will be returned. Must not be NULL.
*
* @return SGX_SUCCESS Successfully calculated the required number.
* @return SGX_ERROR_INVALID_PARAMETER Invalid parameter. p_att_key_id_num must not be NULL.
*/
sgx_status_t SGXAPI sgx_get_supported_att_key_id_num(uint32_t *p_att_key_id_num);
/**
* The function will generate an array of all supported attestation key IDs.
* The application needs to call sgx_get_supported_att_key_id_num() API first to get the number of key IDs. Then
 * the application needs to allocate a buffer whose size is sizeof(sgx_att_key_id_ext_t) * att_key_id_num.
*
 * @param p_att_key_id_list Pointer to the buffer that will contain the supported attestation key IDs. It cannot be NULL.
 * @param att_key_id_num The number of supported key IDs.
*
* @return SGX_SUCCESS Successfully generated the supported attestation key IDs.
* @return SGX_ERROR_INVALID_PARAMETER Invalid parameter. p_att_key_id_list must not be NULL.
*/
sgx_status_t SGXAPI sgx_get_supported_att_key_ids(sgx_att_key_id_ext_t *p_att_key_id_list,
uint32_t att_key_id_num);
#ifdef __cplusplus
}
#endif
#endif
```
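The size-query convention documented for sgx_init_quote_ex() — call once with a NULL output buffer to learn the required size, then call again with an allocated buffer — can be sketched with stub types. Everything below is a stand-in for illustration only; the real types and function live in the SGX SDK headers above, and `stub_init_quote_ex` is a hypothetical mock, not the SDK implementation:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Stub stand-ins (hypothetical; the real definitions live in sgx_quote.h / sgx_error.h). */
typedef int sgx_status_t;
#define SGX_SUCCESS                 0
#define SGX_ERROR_INVALID_PARAMETER 2
typedef struct { uint8_t id[32]; } sgx_att_key_id_t;

/* Mirrors the documented two-call pattern: a first call with
 * p_pub_key_id == NULL returns only the required buffer size;
 * a second call with an allocated buffer returns the key's public ID. */
sgx_status_t stub_init_quote_ex(const sgx_att_key_id_t *p_att_key_id,
                                size_t *p_pub_key_id_size,
                                uint8_t *p_pub_key_id)
{
    if (p_att_key_id == NULL || p_pub_key_id_size == NULL)
        return SGX_ERROR_INVALID_PARAMETER;
    if (p_pub_key_id == NULL) {                 /* first call: size query only */
        *p_pub_key_id_size = sizeof(sgx_att_key_id_t);
        return SGX_SUCCESS;
    }
    if (*p_pub_key_id_size < sizeof(sgx_att_key_id_t))
        return SGX_ERROR_INVALID_PARAMETER;     /* caller's buffer is too small */
    memset(p_pub_key_id, 0xAB, sizeof(sgx_att_key_id_t)); /* fake public key ID */
    return SGX_SUCCESS;
}
```

An application would make the same pair of calls against the real sgx_init_quote_ex(), allocating the p_pub_key_id buffer between the two calls.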
|
Zemacies simulacrum is an extinct species of sea snail, a marine gastropod mollusk in the family Borsoniidae.
Description
Distribution
This extinct marine species is endemic to New Zealand and was found in Middle Miocene strata.
References
Laws, Trans. Roy. Soc. N. Z., vol. 65, p. 35, pl. 5, fig. 12.
Maxwell, P.A. (2009). Cenozoic Mollusca. pp. 232–254 in Gordon, D.P. (ed.) New Zealand inventory of biodiversity. Volume one. Kingdom Animalia: Radiata, Lophotrochozoa, Deuterostomia. Canterbury University Press, Christchurch.
simulacrum
Gastropods of New Zealand
Gastropods described in 1935
|
```yaml
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
name: kubevirt-cluster-critical
value: 1000000000
globalDefault: false
description: "This priority class should be used for core kubevirt components only."
```
|
Demulsifiers, or emulsion breakers, are a class of specialty chemicals used to separate emulsions, for example, water in oil. They are commonly used in the processing of crude oil, which is typically produced along with significant quantities of saline water. This water (and salt) must be removed from the crude oil prior to refining. If the majority of the water and salt are not removed, significant corrosion problems can occur in the refining process.
Demulsifiers are typically based on the following chemistry:
Acid catalysed phenol-formaldehyde resins
Base catalysed phenol-formaldehyde resins
Epoxy resins
Polyethyleneimines
Polyamines
Di-epoxides
Polyols
Dendrimers
The above are usually ethoxylated (and/or propoxylated) to provide the desired degree of water/oil solubility.
The addition of ethylene oxide increases water solubility, propylene oxide decreases it.
Commercially available demulsifier formulations are typically a mixture of two to four different chemistries, in carrier solvent(s) such as xylene, heavy aromatic naphtha (HAN), isopropanol, methanol, 2-ethylhexanol or diesel.
Demulsifiers are manufactured by chemical manufacturers including:
Arkema
Baker Hughes
BASF
ChampionX
Clariant
Dow Chemical Company
Lubrizol
Nouryon
PureChem Services (CES)
SI Group
Chembiotec Additives
Solvay
Stepan
Starborn Chemical
References
Chemical mixtures
|
Asilika Sevutia (born 15 July 1988) is a Fijian netball player and current vice captain of the Fiji national team who plays in the positions of center, wing attack or wing defense.
Sevutia was identified as a promising junior netballer by Fiji netball team coach Melissa Walker in 2009.
She was included in the Fijian squad as the vice captain for the 2019 Netball World Cup, which was also her maiden appearance at a Netball World Cup.
References
1988 births
Living people
Fijian netball players
2019 Netball World Cup players
|
Kamalabai Hospet, also known as Kamalatai Hospet (1896–1981), was a co-founder of Matru Sewa Sangh, a non-profit social organisation based in Nagpur, Maharashtra, India.
Together with Venutai Nene (1896–1973), she founded Matru Sewa Sangh in 1921. In 1971, she also co-founded Vidya Shikshan Prasarak Mandal, which now has more than fifty educational institutes.
Early life
Kamalabai Hospet was born on 23 May 1896 in Shirpur village in Dhule district of Maharashtra State in India. Her maiden name was Yamuna Krushna Mohoni. She was the seventh child of Krushnaji Tatya Mohoni and Radhabai Mohoni. As was the custom in those days, she was married at the age of 12 to Mr. Gururao Hospet. Mr. Gururao died when Kamalabai was a mere 15 years old. The Hospet family was very conservative, and as per the custom of the time Kamalabai's head was to be tonsured. Kamalabai's brother, Mr. D. K. Mohoni, learned of this and returned her to her maternal home in Nagpur with the help of the local police.
Later life
For further education she was sent to Hingana, Pune, to the institute founded by Maharshi Dhondo Keshav Karve. However, she did not complete her education there, so as to avoid placing the financial burden of her education on her brothers' families. With the aim of becoming financially independent, she joined a nursing course at the Dufferin Hospital in Nagpur, where she was a student from 1918 to 1920.
While undergoing training at the Dufferin Hospital she was reprimanded by a British doctor for offering a bed pan to a poor Indian patient. This incident made a deep impact on her, and she decided to start a maternity home that would serve all Indian women without discrimination. As a result, as soon as she completed her training she founded such a maternity home, called Matru Sewa Sangh, at Sitabuldi, Nagpur. The maternity home started with four beds in one location and, during her lifetime, expanded to 21 branches across multiple states, including Maharashtra, Madhya Pradesh and Rajasthan. She paid personal care and attention to her patients as well as their families. Apart from running the maternity home, she started a Nurses Training School, an Institute of Social Work, a school for mentally disabled students (Nandanwan), a senior citizens' home (Panchavati), a foundling home and an orphanage, all under the aegis of Matru Sewa Sangh.
Kamalabai also rendered great service in the Indian freedom struggle. She worked as a volunteer at the All India Congress Session at Nagpur in 1920.
Death
Kamalabai Hospet died on 15 November 1981, leaving behind a retinue of dedicated workers, including members of her own family.
Recognition
In 1959, Kamalabai was conferred the Nalawa Medal by the Indian Red Cross Society, an award for excellence in nursing in India. In 1961, the Government of India honoured her social work with the Padma Shri, and in 1980 she received the Jamnalal Bajaj Award.
References
External links
Matru Sewa Sangh, Official website
1896 births
1981 deaths
Recipients of the Padma Shri in social work
Scholars from Nagpur
20th-century Indian educational theorists
20th-century Indian women educational theorists
Women educators from Maharashtra
Educators from Maharashtra
20th-century Indian women scientists
Social workers from Maharashtra
20th-century Indian women educators
20th-century Indian educators
|
Rubix is a British multinational company based in London, specialising in the distribution of industrial products and services for industrial engineering, technical maintenance and operations. Rubix is the largest company in Europe's maintenance, repair and overhaul (MRO) sector, with 750 locations across 22 countries and a turnover of 3 billion euros in 2022.
The company serves over 220,000 customers and distributes over 2 million products, including bearings, mechanical power transmission components, flow technology and fluid power products, machining, cutting tool materials, personal protective equipment and general maintenance products, as well as logistics and technical services.
Rubix has five exclusive brands: Cutline (high-performance rotary cutting tools), GISS (PPE and safety products), Mecaline (mechanical power transmission products), Roebuck (hand tools), and Spartex (industrial essentials).
History
Rubix was created on 26 June 2018, when the IPH-Brammer Group was renamed. That group had been formed in September 2017 through the merger of the Brammer group and the IPH group, both acquired by US investment fund Advent International in 2017.
Rubix has been ranked Number 28 on the 2020 Sunday Times HSBC Top Track 100. In August 2022, Rubix achieved a Gold EcoVadis sustainability rating, placing them in the top 2% of companies in their industry. In 2021, Rubix reduced Scope 1 emissions by 26%, Scope 2 emissions by 8%, and Scope 3 emissions by 25%.
Corporate affairs
Executive management
In February 2023, the executive board members are:
Franck Voisin, Chief Executive officer
Katy Phillips, Chief Financial Officer
Gatien Gillon, Chief Operating Officer
Helen Ebert, Group General Counsel
David Morkeberg, Group HR Director
Lee Pruitt, Chief Digital & Marketing Officer
while country general managers are:
Tiziano Biasoli, Italy
Jesus Martinez Planas, Spain
Alexandre Labasse, France
Paul van der Rest, Benelux and Nordics
Vince McGurk, UK
Pinaki Banerjee, Central and Eastern Europe
André Thönes, DACH
Countries and Subsidiaries
Rubix provides industrial products and services in 22 countries: Austria, Belgium, Czech Republic, Denmark, Finland, France, Germany, Hungary, Iceland, Ireland, Italy, Luxembourg, Netherlands, Norway, Portugal, Poland, Romania, Slovakia, Spain, Sweden, Switzerland and the United Kingdom.
Advanced Industrial Rewinds (AIR) (United Kingdom)
AKN (Germany)
Barlotti (Italy)
Bedu (Netherlands)
Brammer (multiple locations)
BT Brammer (Netherlands)
Brammer Buck & Hickman (United Kingdom)
Buenaventura Giner (Spain)
Cañellas Protecció S.L.U. (Spain)
Casa das Correias (Portugal)
CompCare (United Kingdom)
C.Plüss (Switzerland)
Deritend (United Kingdom)
DHSF (Spain)
EFC (Netherlands)
Escudier (France)
Fluidmec (Italy)
FIPA (France)
GeeveHydraulics (Netherlands)
Gondrom (Germany)
Hafner (Poland)
Holding Europeo de Compresores (Spain)
Hydraflow Hydraulics (United Kingdom)
Julsa (Spain)
Kistenpfennig (multiple locations)
Knowlton & Newman (United Kingdom)
LERBS Group (Germany)
Rubix Spa (Minetti) (Italy)
Magema (Netherlands)
Martin Depner (Germany)
Matara (United Kingdom)
Matrix (United Kingdom)
MCA (Netherlands)
Montalpina (Switzerland)
Motronic Service Sabadell (Spain)
Nova Modet (Italy)
Novo Tech (Romania)
NT Transmissions (France)
Orexad Brammer (France)
Outilacier (France)
Peter Campbell Sales (PCS) (United Kingdom)
PePe (Poland)
Petean (Italy)
RCDE France (Master Outillage) (France)
Robod (Poland)
Rubix Benelux (Benelux)
Rubix Iceland (Iceland)
Schäfer Technik (Germany)
SEALL (Czech Republic)
Sistemas De Manipulación Asistida (SMA) (Spain)
Solyro (France)
Stop Fluid (Spain)
Suministros Navarro (multiple locations)
Syresa (Spain)
TCB (Netherlands)
Technidis (France)
TEST SEALING SYSTEMS (Poland)
Uniseals (Italy)
Zitec (Germany)
References
External links
Companies based in London
Engineering companies of the United Kingdom
British companies established in 2018
2018 establishments in England
|
```typescript
import { Point } from "../../entities/geometry/Point"
import { NoteEvent } from "../../track"
import { NotePoint } from "./NotePoint"
import { TickTransform } from "./TickTransform"
export class NoteCoordTransform implements TickTransform {
constructor(
private readonly pixelsPerTick: number,
readonly pixelsPerKey: number,
private readonly maxNoteNumber: number,
) {}
// pixels
getX(tick: number) {
return tick * this.pixelsPerTick
}
getY(noteNumber: number) {
return (this.maxNoteNumber - noteNumber) * this.pixelsPerKey
}
// ticks
getTick(pixels: number) {
return pixels / this.pixelsPerTick
}
getNoteNumber(pixels: number) {
return Math.ceil(this.getNoteNumberFractional(pixels))
}
getNoteNumberFractional(pixels: number) {
return this.maxNoteNumber - pixels / this.pixelsPerKey
}
getDeltaNoteNumber(deltaPixels: number) {
return -deltaPixels / this.pixelsPerKey
}
get numberOfKeys() {
return this.maxNoteNumber + 1
}
//
getMaxY() {
return this.numberOfKeys * this.pixelsPerKey
}
getRect(note: NoteEvent) {
return {
x: this.getX(note.tick),
y: this.getY(note.noteNumber),
width: this.getX(note.duration),
height: this.pixelsPerKey,
}
}
getDrumRect(note: NoteEvent) {
return {
x: this.getX(note.tick) - this.pixelsPerKey / 2,
y: this.getY(note.noteNumber),
width: this.pixelsPerKey,
height: this.pixelsPerKey,
}
}
getNotePoint(pos: Point): NotePoint {
return {
tick: this.getTick(pos.x),
noteNumber: this.getNoteNumber(pos.y),
}
}
getNotePointFractional(pos: Point): NotePoint {
return {
tick: this.getTick(pos.x),
noteNumber: this.getNoteNumberFractional(pos.y),
}
}
equals(t: NoteCoordTransform) {
return (
this.pixelsPerKey === t.pixelsPerKey &&
this.pixelsPerTick === t.pixelsPerTick &&
this.maxNoteNumber === t.maxNoteNumber
)
}
// Unique integer representing the horizontal transformation
get horizontalId(): number {
return this.pixelsPerTick
}
}
```
|
Hypatima subdentata is a moth in the family Gelechiidae. It was described by Alexey Diakonoff in 1954. It is found in New Guinea.
References
Hypatima
Moths described in 1954
|
```python
from office365.runtime.client_value import ClientValue
from office365.runtime.types.collections import StringCollection
class QueryContext(ClientValue):
"""This object contains the query context properties."""
def __init__(self, group_object_ids=None, site_id=None, tenant_instance_id=None):
"""
:param str site_id: This property contains the site identification.
"""
self.GroupObjectIds = StringCollection(group_object_ids)
self.SpSiteId = site_id
self.TenantInstanceId = tenant_instance_id
@property
def entity_type_name(self):
return "Microsoft.Office.Server.Search.REST.QueryContext"
```
|