Chestnut Hill Avenue station is a light rail surface stop on the MBTA Green Line B branch, located in the median of Commonwealth Avenue just east of Chestnut Hill Avenue in the Brighton neighborhood of Boston, Massachusetts. Chestnut Hill Avenue has two low-level platforms, serving the B branch's two tracks; the stop is not accessible.
Just to the west of the station, there is a wye connecting the B branch to non-revenue tracks that run along Chestnut Hill Avenue to Reservoir Carhouse at Cleveland Circle. The tracks are used to supply the B branch with cars before rush hour, as the carhouse at Boston College has limited storage area. The leg of the wye leading from the westbound B branch to the non-revenue tracks is out of service and paved over.
History
On May 14, 2008, an outbound train derailed at Chestnut Hill Avenue. It struck a nearby utility pole, which brought down the overhead wires, causing the trolley to catch fire. No injuries were reported, but the trolley suffered significant damage.
Track work in 2018–19, which included replacement of platform edges at several stops, triggered requirements for accessibility modifications at those stops. Design for Chestnut Hill Avenue and four other B branch stops was 30% complete by December 2022, with construction expected to take place during 2024.
References
External links
MBTA – Chestnut Hill Avenue
Station viewed from Chestnut Hill Avenue on Google Maps Street View
Brighton, Boston
Green Line (MBTA) stations
Railway stations in Boston
|
```php
<?php
/*
* This file is part of the Symfony package.
*
* (c) Fabien Potencier <fabien@symfony.com>
*
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*/
namespace Symfony\Component\HttpKernel\Exception;
/**
* UnprocessableEntityHttpException.
*
* @author Steve Hutchins <hutchinsteve@gmail.com>
*/
class UnprocessableEntityHttpException extends HttpException
{
/**
* Constructor.
*
* @param string $message The internal exception message
* @param \Exception $previous The previous exception
* @param int $code The internal exception code
*/
public function __construct($message = null, \Exception $previous = null, $code = 0)
{
parent::__construct(422, $message, $previous, array(), $code);
}
}
```
|
Rissani is a town in Errachidia Province in eastern Morocco, located near Erfoud. It is the closest town of significant size to the Erg Chebbi, the largest sand desert in Morocco. Its population in 2004 was 20,469.
The mausoleum of Moulay Ali Cherif, third great-grandfather of Moulay Cherif, founder of the Alaouite Dynasty of Morocco, is located on the southern edge of town.
History
Rissani is the ancient capital of Tafilalet. Its location as a crossroads between north and south gave the city a certain importance in previous times.
A former major caravan center, Rissani remains a major commercial center in the region, with a large souk, particularly lively on Tuesdays, Thursdays and Sundays. It is noted for its leather and goat skin trading.
Things to Do
Rissani offers a great opportunity to shop the Moroccan way:
Haggle for traditional items like rugs, spices, and jewelry at the souk.
Browse fossil vendors near the souk entrance selling marble, stone, and mineral pieces.
Wander through the morning livestock souk as donkeys are traded, and examine intricately decorated saddles.
The buzzing markets embody Rissani's enduring commercial tradition.
References
Populated places in Errachidia Province
Burial sites of the 'Alawi dynasty
|
```csharp
using System.Text;
using System.Xml.Linq;
namespace Xamarin.Android.Tools.ManifestAttributeCodeGenerator;
class MetadataSource
{
public Dictionary<string, MetadataType> Types { get; } = [];
public Dictionary<string, MetadataAttribute> Elements { get; } = [];
public MetadataSource (string filename)
{
var xml = XElement.Load (filename);
foreach (var element in xml.Elements ("element")) {
var me = new MetadataAttribute (element);
Elements.Add (me.Path, me);
}
foreach (var element in xml.Elements ("type")) {
var el = new MetadataType (element);
Types.Add (el.Name, el);
}
}
public MetadataAttribute GetMetadata (string path)
{
if (Elements.TryGetValue (path, out var element))
return element;
throw new InvalidOperationException ($"No MetadataElement found for path '{path}'.");
}
public void EnsureAllElementsAccountedFor (List<ElementDefinition> elements)
{
var missing = new List<string> ();
foreach (var e in elements) {
if (!Types.TryGetValue (e.ActualElementName, out var t)) {
missing.Add ($"- Type: <{e.ActualElementName}>");
continue;
}
if (t.Ignore)
continue;
foreach (var a in e.Attributes) {
var name = $"{e.ActualElementName}.{a.Name}";
if (!Elements.TryGetValue (name, out _))
missing.Add ($"- Element: {name}");
}
}
if (missing.Count == 0)
return;
var sb = new StringBuilder ();
sb.AppendLine ("The following manifest elements are not specified in the metadata:");
foreach (var m in missing)
sb.AppendLine (m);
throw new InvalidOperationException (sb.ToString ());
}
public void EnsureAllMetadataElementsExistInManifest (List<ElementDefinition> elements)
{
var missing = new List<string> ();
foreach (var type in Types) {
var type_def = elements.FirstOrDefault (e => e.ActualElementName == type.Key);
if (type_def is null) {
missing.Add ($"- Type: {type.Key}");
continue;
}
}
foreach (var type in Elements) {
var type_name = type.Key.FirstSubset ('.');
var elem_name = type.Key.LastSubset ('.');
var type_def = elements.FirstOrDefault (e => e.ActualElementName == type_name);
if (type_def is null) {
missing.Add ($"- Element: {type.Key}");
continue;
}
var elem_def = type_def.Attributes.FirstOrDefault (e => e.Name == elem_name);
if (elem_def is null) {
missing.Add ($"- Element: {type.Key}");
continue;
}
}
if (missing.Count == 0)
return;
var sb = new StringBuilder ();
sb.AppendLine ("The following elements specified in the metadata were not found in the manifest:");
foreach (var e in missing)
sb.AppendLine (e);
throw new InvalidOperationException (sb.ToString ());
}
}
class MetadataAttribute
{
public string Path { get; set; }
public bool Visible { get; set; } = true;
public string? Type { get; set; }
public string? Name { get; set; }
public string? Obsolete { get; set; }
public bool ReadOnly { get; set; }
public bool ManualMap { get; set; }
public MetadataAttribute (XElement element)
{
Path = element.Attribute ("path")?.Value ?? throw new InvalidDataException ("Missing 'path' attribute.");
if (!Path.Contains ('.'))
throw new InvalidDataException ($"Invalid 'path' attribute value: {Path}");
Visible = element.GetAttributeBoolOrDefault ("visible", true);
Type = element.Attribute ("type")?.Value;
Name = element.Attribute ("name")?.Value;
Obsolete = element.Attribute ("obsolete")?.Value;
ReadOnly = element.GetAttributeBoolOrDefault ("readonly", false);
ManualMap = element.GetAttributeBoolOrDefault ("manualMap", false);
}
}
public class MetadataType
{
public string Name { get; set; }
public string ManagedName { get; set; } = string.Empty;
public string Namespace { get; set; } = string.Empty;
public bool Ignore { get; set; }
public string OutputFile { get; set; } = string.Empty;
public string Usage { get; set; } = string.Empty;
public bool AllowMultiple { get; set; }
public bool IsJniNameProvider { get; set; }
public bool HasDefaultConstructor { get; set; }
public bool IsSealed { get; set; }
public bool GenerateMapping { get; set; }
public MetadataType (XElement element)
{
Name = element.GetRequiredAttributeString ("name");
Ignore = element.GetAttributeBoolOrDefault ("ignore", false);
if (Ignore)
return;
Namespace = element.GetRequiredAttributeString ("namespace");
OutputFile = element.GetRequiredAttributeString ("outputFile");
Usage = element.GetRequiredAttributeString ("usage");
AllowMultiple = element.GetAttributeBoolOrDefault ("allowMultiple", false);
IsJniNameProvider = element.GetAttributeBoolOrDefault ("jniNameProvider", false);
HasDefaultConstructor = element.GetAttributeBoolOrDefault ("defaultConstructor", true);
IsSealed = element.GetAttributeBoolOrDefault ("sealed", true);
ManagedName = element.Attribute ("managedName")?.Value ?? Name.Unhyphenate ().Capitalize () + "Attribute";
GenerateMapping = element.GetAttributeBoolOrDefault ("generateMapping", true);
}
}
```
|
16th Avenue may refer to:
16th Avenue, a street forming the Music Row district of Nashville, Tennessee
"16th Avenue" (song), a 1982 song by Lacy J. Dalton
16 Avenue N, in Calgary, Alberta, Canada
16th Avenue (York Region), Ontario, Canada
16th Avenue Records, a defunct record label
|
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"ReplicationController","metadata":{"annotations":{},"labels":{"app":"test-app"},"name":"test-app-replicationcontroller","namespace":"test-local-deploy-all"},"spec":{"replicas":2,"selector":{"app":"test-app"},"template":{"metadata":{"labels":{"app":"test-app"},"name":"test-app"},"spec":{"containers":[{"image":"gcr.io/cbd-test/test-app:latest","name":"test-app","ports":[{"containerPort":80}]}]}}}}
creationTimestamp: 2019-06-11T15:29:16Z
generation: 2
labels:
app: test-app
name: test-app-replicationcontroller
namespace: test-local-deploy-all
resourceVersion: "6040056"
selfLink: /api/v1/namespaces/test-local-deploy-all/replicationcontrollers/test-app-replicationcontroller
uid: ac5b7d26-8c5d-11e9-8840-42010a8e00dc
spec:
replicas: 2
selector:
app: test-app
template:
metadata:
creationTimestamp: null
labels:
app: test-app
name: test-app
spec:
containers:
- image: gcr.io/cbd-test/test-app:latest
imagePullPolicy: Always
name: test-app
ports:
- containerPort: 80
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 2
fullyLabeledReplicas: 2
observedGeneration: 2
readyReplicas: 1
replicas: 2
```
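The `kubectl.kubernetes.io/last-applied-configuration` annotation in the manifest above stores the last manifest passed to `kubectl apply` as a JSON string, which kubectl later uses for its three-way merge. As a minimal sketch of how that annotation can be read programmatically, the snippet below parses a shortened stand-in for the annotation value (not the full string from the manifest) and compares the desired replica count against the reported status:

```python
import json

# Shortened stand-in for the last-applied-configuration annotation value
# shown in the manifest above; the real value carries the full spec.
last_applied = (
    '{"apiVersion":"v1","kind":"ReplicationController",'
    '"metadata":{"name":"test-app-replicationcontroller"},'
    '"spec":{"replicas":2,"selector":{"app":"test-app"}}}'
)

applied = json.loads(last_applied)

# Status fields as reported in the manifest's status block above.
status = {"replicas": 2, "readyReplicas": 1}

desired = applied["spec"]["replicas"]
ready = status.get("readyReplicas", 0)

# With readyReplicas: 1 against replicas: 2, the controller has not
# yet converged on the desired state.
print(f"desired={desired} ready={ready} converged={ready >= desired}")
```

This mirrors what the status block records: two replicas are requested and fully labeled, but only one is ready at the observed generation.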
|
```java
package io.debezium.connector.postgresql;
import static io.debezium.connector.postgresql.TestHelper.PK_FIELD;
import static io.debezium.connector.postgresql.TestHelper.TYPE_LENGTH_PARAMETER_KEY;
import static io.debezium.connector.postgresql.TestHelper.TYPE_NAME_PARAMETER_KEY;
import static io.debezium.connector.postgresql.TestHelper.TYPE_SCALE_PARAMETER_KEY;
import static io.debezium.connector.postgresql.TestHelper.topicName;
import static io.debezium.connector.postgresql.junit.SkipWhenDecoderPluginNameIs.DecoderPluginName.PGOUTPUT;
import static io.debezium.junit.EqualityCheck.LESS_THAN;
import static junit.framework.TestCase.assertEquals;
import static junit.framework.TestCase.assertTrue;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.entry;
import static org.assertj.core.api.Assertions.fail;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotNull;
import java.math.BigDecimal;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.time.Instant;
import java.time.LocalTime;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.LongStream;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;
import org.apache.commons.lang3.RandomStringUtils;
import org.apache.kafka.connect.data.Decimal;
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.header.Header;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.storage.MemoryOffsetBackingStore;
import org.assertj.core.api.Assertions;
import org.awaitility.Awaitility;
import org.awaitility.core.ConditionTimeoutException;
import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestRule;
import org.postgresql.util.PSQLException;
import io.debezium.config.CommonConnectorConfig;
import io.debezium.config.CommonConnectorConfig.BinaryHandlingMode;
import io.debezium.config.Configuration;
import io.debezium.connector.SnapshotRecord;
import io.debezium.connector.postgresql.PostgresConnectorConfig.IntervalHandlingMode;
import io.debezium.connector.postgresql.PostgresConnectorConfig.SchemaRefreshMode;
import io.debezium.connector.postgresql.PostgresConnectorConfig.SnapshotMode;
import io.debezium.connector.postgresql.connection.PostgresConnection;
import io.debezium.connector.postgresql.connection.ReplicationConnection;
import io.debezium.connector.postgresql.junit.SkipTestDependingOnDecoderPluginNameRule;
import io.debezium.connector.postgresql.junit.SkipWhenDecoderPluginNameIs;
import io.debezium.connector.postgresql.junit.SkipWhenDecoderPluginNameIsNot;
import io.debezium.data.Bits;
import io.debezium.data.Enum;
import io.debezium.data.Envelope;
import io.debezium.data.SpecialValueDecimal;
import io.debezium.data.VariableScaleDecimal;
import io.debezium.data.VerifyRecord;
import io.debezium.data.geometry.Point;
import io.debezium.doc.FixFor;
import io.debezium.embedded.EmbeddedEngineConfig;
import io.debezium.heartbeat.DatabaseHeartbeatImpl;
import io.debezium.heartbeat.Heartbeat;
import io.debezium.jdbc.JdbcConnection;
import io.debezium.jdbc.JdbcValueConverters.DecimalMode;
import io.debezium.jdbc.TemporalPrecisionMode;
import io.debezium.junit.ConditionalFail;
import io.debezium.junit.EqualityCheck;
import io.debezium.junit.SkipWhenDatabaseVersion;
import io.debezium.junit.logging.LogInterceptor;
import io.debezium.relational.RelationalChangeRecordEmitter;
import io.debezium.relational.RelationalDatabaseConnectorConfig.DecimalHandlingMode;
import io.debezium.relational.Table;
import io.debezium.relational.TableId;
import io.debezium.relational.Tables;
import io.debezium.relational.Tables.TableFilter;
import io.debezium.time.MicroTime;
import io.debezium.time.MicroTimestamp;
import io.debezium.time.ZonedTime;
import io.debezium.time.ZonedTimestamp;
import io.debezium.util.HexConverter;
import io.debezium.util.Stopwatch;
import io.debezium.util.Testing;
/**
* Integration test for the {@link RecordsStreamProducer} class. This also tests indirectly the PG plugin functionality for
* different use cases.
*
* @author Horia Chiorean (hchiorea@redhat.com)
*/
public class RecordsStreamProducerIT extends AbstractRecordsProducerTest {
private TestConsumer consumer;
@Rule
public final TestRule skip = new SkipTestDependingOnDecoderPluginNameRule();
@Rule
public TestRule conditionalFail = new ConditionalFail();
@Before
public void before() throws Exception {
// ensure the slot is deleted for each test
TestHelper.dropAllSchemas();
TestHelper.executeDDL("init_postgis.ddl");
String statements = "CREATE SCHEMA IF NOT EXISTS public;" +
"DROP TABLE IF EXISTS test_table;" +
"CREATE TABLE test_table (pk SERIAL, text TEXT, PRIMARY KEY(pk));" +
"CREATE TABLE table_with_interval (id SERIAL PRIMARY KEY, title VARCHAR(512) NOT NULL, time_limit INTERVAL DEFAULT '60 days'::INTERVAL NOT NULL);" +
"INSERT INTO test_table(text) VALUES ('insert');";
TestHelper.execute(statements);
Configuration.Builder configBuilder = TestHelper.defaultConfig()
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, false)
.with(PostgresConnectorConfig.SCHEMA_EXCLUDE_LIST, "postgis");
// todo DBZ-766 are these really needed?
if (TestHelper.decoderPlugin() == PostgresConnectorConfig.LogicalDecoder.PGOUTPUT) {
configBuilder = configBuilder.with("database.replication", "database")
.with("database.preferQueryMode", "simple")
.with("assumeMinServerVersion.set", "9.4");
}
// Testing.Print.enable();
}
private void startConnector(Function<Configuration.Builder, Configuration.Builder> customConfig, boolean waitForSnapshot, Predicate<SourceRecord> isStopRecord)
throws InterruptedException {
start(PostgresConnector.class, new PostgresConnectorConfig(customConfig.apply(TestHelper.defaultConfig()
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, false)
.with(PostgresConnectorConfig.SCHEMA_EXCLUDE_LIST, "postgis")
.with(PostgresConnectorConfig.SNAPSHOT_MODE, waitForSnapshot ? SnapshotMode.INITIAL : SnapshotMode.NO_DATA))
.build()).getConfig(), isStopRecord);
assertConnectorIsRunning();
waitForStreamingToStart();
if (waitForSnapshot) {
// Wait for snapshot to be in progress
consumer = testConsumer(1);
consumer.await(TestHelper.waitTimeForRecords(), TimeUnit.SECONDS);
consumer.remove();
}
}
private void startConnector(Function<Configuration.Builder, Configuration.Builder> customConfig, boolean waitForSnapshot) throws InterruptedException {
startConnector(customConfig, waitForSnapshot, (x) -> false);
}
private void startConnector(Function<Configuration.Builder, Configuration.Builder> customConfig) throws InterruptedException {
startConnector(customConfig, true);
}
private void startConnector() throws InterruptedException {
startConnector(Function.identity(), true);
}
@Test
public void shouldReceiveChangesForInsertsWithDifferentDataTypes() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector();
consumer = testConsumer(1);
// numerical types
consumer.expects(1);
assertInsert(INSERT_NUMERIC_TYPES_STMT, 1, schemasAndValuesForNumericType());
// numerical decimal types
consumer.expects(1);
assertInsert(INSERT_NUMERIC_DECIMAL_TYPES_STMT_NO_NAN, 1, schemasAndValuesForBigDecimalEncodedNumericTypes());
// string types
consumer.expects(1);
assertInsert(INSERT_STRING_TYPES_STMT, 1, schemasAndValuesForStringTypes());
// monetary types
consumer.expects(1);
assertInsert(INSERT_CASH_TYPES_STMT, 1, schemaAndValuesForMoneyTypes());
// negative monetary types
consumer.expects(1);
assertInsert(INSERT_NEGATIVE_CASH_TYPES_STMT, 2, schemaAndValuesForNegativeMoneyTypes());
// bits and bytes
consumer.expects(1);
assertInsert(INSERT_BIN_TYPES_STMT, 1, schemaAndValuesForBinTypes());
// date and time
consumer.expects(1);
assertInsert(INSERT_DATE_TIME_TYPES_STMT, 1, schemaAndValuesForDateTimeTypes());
// text
consumer.expects(1);
assertInsert(INSERT_TEXT_TYPES_STMT, 1, schemasAndValuesForTextTypes());
// geom types
consumer.expects(1);
assertInsert(INSERT_GEOM_TYPES_STMT, 1, schemaAndValuesForGeomTypes());
// range types
consumer.expects(1);
assertInsert(INSERT_RANGE_TYPES_STMT, 1, schemaAndValuesForRangeTypes());
}
@Test
@FixFor("DBZ-5014")
public void shouldReceiveDeletesWithInfinityDate() throws Exception {
// Testing.Print.enable();
TestHelper.executeDDL("postgres_create_tables.ddl");
TestHelper.execute("ALTER TABLE time_table REPLICA IDENTITY FULL");
startConnector();
executeAndWait(INSERT_DATE_TIME_TYPES_STMT);
consumer = testConsumer(1);
assertDelete(DELETE_DATE_TIME_TYPES_STMT, 1, schemaAndValuesForDateTimeTypes());
}
@Test
@FixFor("DBZ-1498")
public void shouldReceiveChangesForIntervalAsString() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config
.with(PostgresConnectorConfig.INTERVAL_HANDLING_MODE, IntervalHandlingMode.STRING));
consumer = testConsumer(1);
// date and time
consumer.expects(1);
assertInsert(INSERT_DATE_TIME_TYPES_STMT, 1, schemaAndValuesForIntervalAsString());
}
@Test
@FixFor("DBZ-766")
public void shouldReceiveChangesAfterConnectionRestart() throws Exception {
TestHelper.dropDefaultReplicationSlot();
TestHelper.dropPublication();
startConnector(config -> config
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(PostgresConnectorConfig.DROP_SLOT_ON_STOP, Boolean.FALSE)
.with(PostgresConnectorConfig.SCHEMA_EXCLUDE_LIST, "postgis"));
TestHelper.execute("CREATE TABLE t0 (pk SERIAL, d INTEGER, PRIMARY KEY(pk));");
consumer = testConsumer(1);
waitForStreamingToStart();
// Insert new row and verify inserted
executeAndWait("INSERT INTO t0 (pk,d) VALUES(1,1);");
assertRecordInserted("public.t0", PK_FIELD, 1);
// simulate the connector is stopped
stopConnector();
// Alter schema offline
TestHelper.execute("ALTER TABLE t0 ADD COLUMN d2 INTEGER;");
TestHelper.execute("ALTER TABLE t0 ALTER COLUMN d SET NOT NULL;");
// Start the producer and wait; the wait is to guarantee the stream thread is polling
// This appears to be a potential race condition problem
startConnector(config -> config
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(PostgresConnectorConfig.SCHEMA_EXCLUDE_LIST, "postgis"),
false);
consumer = testConsumer(1);
waitForStreamingToStart();
// Insert new row and verify inserted
executeAndWait("INSERT INTO t0 (pk,d,d2) VALUES (2,1,3);");
assertRecordInserted("public.t0", PK_FIELD, 2);
}
@Test
@FixFor("DBZ-1698")
public void shouldReceiveUpdateSchemaAfterConnectionRestart() throws Exception {
TestHelper.dropDefaultReplicationSlot();
TestHelper.dropPublication();
startConnector(config -> config
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(PostgresConnectorConfig.SCHEMA_EXCLUDE_LIST, "postgis")
.with(PostgresConnectorConfig.DROP_SLOT_ON_STOP, false)
.with(PostgresConnectorConfig.SCHEMA_REFRESH_MODE, SchemaRefreshMode.COLUMNS_DIFF_EXCLUDE_UNCHANGED_TOAST));
TestHelper.execute("CREATE TABLE t0 (pk SERIAL, d INTEGER, PRIMARY KEY(pk));");
consumer = testConsumer(1);
waitForStreamingToStart();
// Insert new row and verify inserted
executeAndWait("INSERT INTO t0 (pk,d) VALUES(1,1);");
assertRecordInserted("public.t0", PK_FIELD, 1);
// simulate the connector is stopped
stopConnector();
Thread.sleep(3000);
// Add record offline
TestHelper.execute("INSERT INTO t0 (pk,d) VALUES(2,2);");
// Alter schema offline
TestHelper.execute("ALTER TABLE t0 ADD COLUMN d2 NUMERIC(10,6) DEFAULT 0 NOT NULL;");
TestHelper.execute("ALTER TABLE t0 ALTER COLUMN d SET NOT NULL;");
// Start the producer and wait; the wait is to guarantee the stream thread is polling
// This appears to be a potential race condition problem
startConnector(config -> config
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(PostgresConnectorConfig.SCHEMA_EXCLUDE_LIST, "postgis")
.with(PostgresConnectorConfig.DROP_SLOT_ON_STOP, false)
.with(PostgresConnectorConfig.SCHEMA_REFRESH_MODE, SchemaRefreshMode.COLUMNS_DIFF_EXCLUDE_UNCHANGED_TOAST),
false);
consumer = testConsumer(2);
waitForStreamingToStart();
// Insert new row and verify inserted
executeAndWait("INSERT INTO t0 (pk,d,d2) VALUES (3,1,3);");
assertRecordInserted("public.t0", PK_FIELD, 2);
assertRecordInserted("public.t0", PK_FIELD, 3);
stopConnector();
TestHelper.dropDefaultReplicationSlot();
TestHelper.dropPublication();
}
@Test
public void shouldReceiveChangesForInsertsCustomTypes() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true));
// custom types + null value
assertInsert(INSERT_CUSTOM_TYPES_STMT, 1, schemasAndValuesForCustomTypes());
}
@Test
public void shouldReceiveChangesForInsertsCustomTypesWithIncludeUnknownFalse() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, false));
// custom types + null value
assertInsert(INSERT_CUSTOM_TYPES_STMT, 1, schemasAndValuesForCustomTypes());
}
@Test
@FixFor("DBZ-1141")
public void shouldProcessNotNullColumnsConnectDateTypes() throws Exception {
final Struct before = testProcessNotNullColumns(TemporalPrecisionMode.CONNECT);
if (before != null) {
assertThat(before.get("created_at")).isEqualTo(new java.util.Date(0));
assertThat(before.get("created_at_tz")).isEqualTo("1970-01-01T00:00:00Z");
assertThat(before.get("ctime")).isEqualTo(new java.util.Date(0));
assertThat(before.get("ctime_tz")).isEqualTo("00:00:00Z");
assertThat(before.get("cdate")).isEqualTo(new java.util.Date(0));
assertThat(before.get("cmoney")).isEqualTo(new BigDecimal("0.00"));
assertThat(before.get("cbits")).isEqualTo(new byte[0]);
}
}
@Test
@FixFor("DBZ-1141")
public void shouldProcessNotNullColumnsAdaptiveDateTypes() throws Exception {
final Struct before = testProcessNotNullColumns(TemporalPrecisionMode.ADAPTIVE);
if (before != null) {
assertThat(before.get("created_at")).isEqualTo(0L);
assertThat(before.get("created_at_tz")).isEqualTo("1970-01-01T00:00:00Z");
assertThat(before.get("ctime")).isEqualTo(0L);
assertThat(before.get("ctime_tz")).isEqualTo("00:00:00Z");
assertThat(before.get("cdate")).isEqualTo(0);
assertThat(before.get("cmoney")).isEqualTo(new BigDecimal("0.00"));
assertThat(before.get("cbits")).isEqualTo(new byte[0]);
}
}
@Test
@FixFor("DBZ-1141")
public void shouldProcessNotNullColumnsAdaptiveMsDateTypes() throws Exception {
final Struct before = testProcessNotNullColumns(TemporalPrecisionMode.ADAPTIVE_TIME_MICROSECONDS);
if (before != null) {
assertThat(before.get("created_at")).isEqualTo(0L);
assertThat(before.get("created_at_tz")).isEqualTo("1970-01-01T00:00:00Z");
assertThat(before.get("ctime")).isEqualTo(0L);
assertThat(before.get("ctime_tz")).isEqualTo("00:00:00Z");
assertThat(before.get("cdate")).isEqualTo(0);
assertThat(before.get("cmoney")).isEqualTo(new BigDecimal("0.00"));
assertThat(before.get("cbits")).isEqualTo(new byte[0]);
}
}
@Test
@FixFor("DBZ-1158")
public void shouldProcessNotNullColumnsFallbacksReplicaIdentity() throws Exception {
// Use adaptive here as its the connector default
final Struct before = testProcessNotNullColumns(TemporalPrecisionMode.ADAPTIVE);
if (before != null) {
assertThat(before.get("csmallint")).isEqualTo((short) 0);
assertThat(before.get("cinteger")).isEqualTo(0);
assertThat(before.get("cbigint")).isEqualTo(0L);
assertThat(before.get("creal")).isEqualTo(0.f);
assertThat(before.get("cbool")).isEqualTo(false);
assertThat(before.get("cfloat8")).isEqualTo(0.0);
assertThat(before.get("cnumeric")).isEqualTo(new BigDecimal("0.00"));
assertThat(before.get("cvarchar")).isEqualTo("");
assertThat(before.get("cbox")).isEqualTo(new byte[0]);
assertThat(before.get("ccircle")).isEqualTo(new byte[0]);
assertThat(before.get("cinterval")).isEqualTo(0L);
assertThat(before.get("cline")).isEqualTo(new byte[0]);
assertThat(before.get("clseg")).isEqualTo(new byte[0]);
assertThat(before.get("cpath")).isEqualTo(new byte[0]);
assertThat(before.get("cpoint")).isEqualTo(Point.createValue(Point.builder().build(), 0, 0));
assertThat(before.get("cpolygon")).isEqualTo(new byte[0]);
assertThat(before.get("cchar")).isEqualTo("");
assertThat(before.get("ctext")).isEqualTo("");
assertThat(before.get("cjson")).isEqualTo("");
assertThat(before.get("cxml")).isEqualTo("");
assertThat(before.get("cuuid")).isEqualTo("");
assertThat(before.get("cvarbit")).isEqualTo(new byte[0]);
assertThat(before.get("cinet")).isEqualTo("");
assertThat(before.get("ccidr")).isEqualTo("");
assertThat(before.get("cmacaddr")).isEqualTo("");
}
}
private Struct testProcessNotNullColumns(TemporalPrecisionMode temporalMode) throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(PostgresConnectorConfig.SCHEMA_EXCLUDE_LIST, "postgis")
.with(PostgresConnectorConfig.TIME_PRECISION_MODE, temporalMode));
consumer.expects(1);
executeAndWait("INSERT INTO not_null_table VALUES (default, 30, '2019-02-10 11:34:58', '2019-02-10 11:35:00', "
+ "'10:20:11', '10:20:12', '2019-02-01', '$20', B'101', 32766, 2147483646, 9223372036854775806, 3.14, "
+ "true, 3.14768, 1234.56, 'Test', '(0,0),(1,1)', '<(0,0),1>', '01:02:03', '{0,1,2}', '((0,0),(1,1))', "
+ "'((0,0),(0,1),(0,2))', '(1,1)', '((0,0),(0,1),(1,1))', 'a', 'hello world', '{\"key\": 123}', "
+ "'<doc><item>abc</item></doc>', 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11', B'101', '192.168.1.100', "
+ "'192.168.1', '08:00:2b:01:02:03');");
consumer.remove();
consumer.expects(1);
executeAndWait("UPDATE not_null_table SET val=40");
final SourceRecord record = consumer.remove();
VerifyRecord.isValidUpdate(record, "pk", 1);
VerifyRecord.isValid(record);
return ((Struct) record.value()).getStruct("before");
}
@Test(timeout = 30000)
public void shouldReceiveChangesForInsertsWithPostgisTypes() throws Exception {
TestHelper.executeDDL("postgis_create_tables.ddl");
startConnector();
consumer = testConsumer(1, "public"); // spatial_ref_sys produces a ton of records in the postgis schema
consumer.setIgnoreExtraRecords(true);
// need to wait for all the spatial_ref_sys to flow through and be ignored.
// this exceeds the normal 2s timeout.
TestHelper.execute("INSERT INTO public.dummy_table DEFAULT VALUES;");
consumer.await(TestHelper.waitTimeForRecords() * 10, TimeUnit.SECONDS);
while (true) {
if (!consumer.isEmpty()) {
SourceRecord record = consumer.remove();
if (record.topic().endsWith(".public.dummy_table")) {
break;
}
}
}
// now do it for actual testing
// postgis types
consumer.expects(1);
assertInsert(INSERT_POSTGIS_TYPES_STMT, 1, schemaAndValuesForPostgisTypes());
}
@Test(timeout = 30000)
public void shouldReceiveChangesForInsertsWithPostgisArrayTypes() throws Exception {
TestHelper.executeDDL("postgis_create_tables.ddl");
startConnector();
consumer = testConsumer(1, "public"); // spatial_ref_sys produces a ton of records in the postgis schema
consumer.setIgnoreExtraRecords(true);
// need to wait for all the spatial_ref_sys to flow through and be ignored.
// this exceeds the normal 2s timeout.
TestHelper.execute("INSERT INTO public.dummy_table DEFAULT VALUES;");
consumer.await(TestHelper.waitTimeForRecords() * 10, TimeUnit.SECONDS);
while (true) {
if (!consumer.isEmpty()) {
SourceRecord record = consumer.remove();
if (record.topic().endsWith(".public.dummy_table")) {
break;
}
}
}
// now do it for actual testing
// postgis types
consumer.expects(1);
assertInsert(INSERT_POSTGIS_ARRAY_TYPES_STMT, 1, schemaAndValuesForPostgisArrayTypes());
}
@Test
public void shouldReceiveChangesForInsertsWithQuotedNames() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector();
// Quoted column name
assertInsert(INSERT_QUOTED_TYPES_STMT, 1, schemasAndValuesForQuotedTypes());
}
@Test
public void shouldReceiveChangesForInsertsWithArrayTypes() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector();
assertInsert(INSERT_ARRAY_TYPES_STMT, 1, schemasAndValuesForArrayTypes());
}
@Test
@FixFor("DBZ-1029")
@SkipWhenDecoderPluginNameIs(value = PGOUTPUT, reason = "Decoder synchronizes all schema columns when processing relation messages")
public void shouldReceiveChangesForInsertsIndependentOfReplicaIdentity() throws Exception {
// insert statement should not be affected by replica identity settings in any way
startConnector();
TestHelper.execute("ALTER TABLE test_table REPLICA IDENTITY DEFAULT;");
String statement = "INSERT INTO test_table (text) VALUES ('pk_and_default');";
assertInsert(statement, 2, Collections.singletonList(new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "pk_and_default")));
consumer.expects(1);
TestHelper.execute("ALTER TABLE test_table REPLICA IDENTITY FULL;");
statement = "INSERT INTO test_table (text) VALUES ('pk_and_full');";
assertInsert(statement, 3, Collections.singletonList(new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "pk_and_full")));
consumer.expects(1);
TestHelper.execute("ALTER TABLE test_table DROP CONSTRAINT test_table_pkey CASCADE;");
statement = "INSERT INTO test_table (pk, text) VALUES (4, 'no_pk_and_full');";
assertInsert(statement, 4, Collections.singletonList(new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "no_pk_and_full")));
consumer.expects(1);
TestHelper.execute("ALTER TABLE test_table REPLICA IDENTITY DEFAULT;");
statement = "INSERT INTO test_table (pk, text) VALUES (5, 'no_pk_and_default');";
assertInsert(statement, 5, Collections.singletonList(new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "no_pk_and_default")));
}
@Test
@FixFor("DBZ-1029")
@SkipWhenDecoderPluginNameIsNot(value = SkipWhenDecoderPluginNameIsNot.DecoderPluginName.PGOUTPUT, reason = "Decoder synchronizes all schema columns when processing relation messages")
public void shouldReceiveChangesForInsertsIndependentOfReplicaIdentityWhenSchemaChanged() throws Exception {
// insert statement should not be affected by replica identity settings in any way
startConnector();
TestHelper.execute("ALTER TABLE test_table REPLICA IDENTITY DEFAULT;");
String statement = "INSERT INTO test_table (text) VALUES ('pk_and_default');";
assertInsert(statement, 2, Collections.singletonList(new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "pk_and_default")));
consumer.expects(1);
TestHelper.execute("ALTER TABLE test_table REPLICA IDENTITY FULL;");
statement = "INSERT INTO test_table (text) VALUES ('pk_and_full');";
assertInsert(statement, 3, Collections.singletonList(new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "pk_and_full")));
consumer.expects(1);
TestHelper.execute("ALTER TABLE test_table DROP CONSTRAINT test_table_pkey CASCADE;");
statement = "INSERT INTO test_table (pk, text) VALUES (4, 'no_pk_and_full');";
assertInsert(statement, Arrays.asList(new SchemaAndValueField("pk", SchemaBuilder.int32().defaultValue(0).build(), 4),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "no_pk_and_full")));
consumer.expects(1);
TestHelper.execute("ALTER TABLE test_table REPLICA IDENTITY DEFAULT;");
statement = "INSERT INTO test_table (pk, text) VALUES (5, 'no_pk_and_default');";
assertInsert(statement, Arrays.asList(new SchemaAndValueField("pk", SchemaBuilder.int32().defaultValue(0).build(), 5),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "no_pk_and_default")));
}
@Test
@FixFor("DBZ-478")
public void shouldReceiveChangesForNullInsertsWithArrayTypes() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector();
assertInsert(INSERT_ARRAY_TYPES_WITH_NULL_VALUES_STMT, 1, schemasAndValuesForArrayTypesWithNullValues());
}
@Test
public void shouldReceiveChangesForNewTable() throws Exception {
String statement = "CREATE SCHEMA s1;" +
"CREATE TABLE s1.a (pk SERIAL, aa integer, PRIMARY KEY(pk));" +
"INSERT INTO s1.a (aa) VALUES (11);";
startConnector();
executeAndWait(statement);
assertRecordInserted("s1.a", PK_FIELD, 1);
}
@Test
public void shouldReceiveChangesForRenamedTable() throws Exception {
String statement = "DROP TABLE IF EXISTS renamed_test_table;" +
"ALTER TABLE test_table RENAME TO renamed_test_table;" +
"INSERT INTO renamed_test_table (text) VALUES ('new');";
startConnector();
executeAndWait(statement);
assertRecordInserted("public.renamed_test_table", PK_FIELD, 2);
}
@Test
@SkipWhenDecoderPluginNameIs(value = PGOUTPUT, reason = "An update on a table with no primary key and default replica throws PSQLException as tables must have a PK")
public void shouldReceiveChangesForUpdates() throws Exception {
startConnector();
executeAndWait("UPDATE test_table set text='update' WHERE pk=1");
// the update record should be the last record
SourceRecord updatedRecord = consumer.remove();
String topicName = topicName("public.test_table");
assertEquals(topicName, updatedRecord.topic());
VerifyRecord.isValidUpdate(updatedRecord, PK_FIELD, 1);
// default replica identity only fires previous values for PK changes
List<SchemaAndValueField> expectedAfter = Collections.singletonList(
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "update"));
assertRecordSchemaAndValues(expectedAfter, updatedRecord, Envelope.FieldName.AFTER);
// alter the table, set its replica identity to full, then issue another update
consumer.expects(1);
TestHelper.setReplicaIdentityForTable("test_table", "FULL");
executeAndWait("UPDATE test_table set text='update2' WHERE pk=1");
updatedRecord = consumer.remove();
assertEquals(topicName, updatedRecord.topic());
VerifyRecord.isValidUpdate(updatedRecord, PK_FIELD, 1);
// now we should get both old and new values
List<SchemaAndValueField> expectedBefore = Collections.singletonList(new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "update"));
assertRecordSchemaAndValues(expectedBefore, updatedRecord, Envelope.FieldName.BEFORE);
expectedAfter = Collections.singletonList(new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "update2"));
assertRecordSchemaAndValues(expectedAfter, updatedRecord, Envelope.FieldName.AFTER);
// without PK and with REPLICA IDENTITY FULL we still get all fields in 'before' and all fields in 'after'
TestHelper.execute("ALTER TABLE test_table DROP CONSTRAINT test_table_pkey CASCADE;");
consumer.expects(1);
executeAndWait("UPDATE test_table SET text = 'update3' WHERE pk = 1;");
updatedRecord = consumer.remove();
assertEquals(topicName, updatedRecord.topic());
expectedBefore = Collections.singletonList(new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "update2"));
assertRecordSchemaAndValues(expectedBefore, updatedRecord, Envelope.FieldName.BEFORE);
expectedAfter = Collections.singletonList(new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "update3"));
assertRecordSchemaAndValues(expectedAfter, updatedRecord, Envelope.FieldName.AFTER);
// without PK and with REPLICA IDENTITY DEFAULT we will get nothing
TestHelper.setReplicaIdentityForTable("test_table", "DEFAULT");
consumer.expects(0);
executeAndWaitForNoRecords("UPDATE test_table SET text = 'no_pk_and_default' WHERE pk = 1;");
assertThat(consumer.isEmpty()).isTrue();
}
@Test
public void shouldReceiveChangesForUpdatesWithColumnChanges() throws Exception {
// add a new column
String statements = "ALTER TABLE test_table ADD COLUMN uvc VARCHAR(2);" +
"ALTER TABLE test_table REPLICA IDENTITY FULL;" +
"UPDATE test_table SET uvc ='aa' WHERE pk = 1;";
startConnector();
consumer = testConsumer(1);
executeAndWait(statements);
// the update should be the last record
SourceRecord updatedRecord = consumer.remove();
String topicName = topicName("public.test_table");
assertEquals(topicName, updatedRecord.topic());
VerifyRecord.isValidUpdate(updatedRecord, PK_FIELD, 1);
// now check we got the updated value (the old value should be null, the new one whatever we set)
List<SchemaAndValueField> expectedBefore = Collections.singletonList(new SchemaAndValueField("uvc", null, null));
assertRecordSchemaAndValues(expectedBefore, updatedRecord, Envelope.FieldName.BEFORE);
List<SchemaAndValueField> expectedAfter = Collections.singletonList(new SchemaAndValueField("uvc", SchemaBuilder.OPTIONAL_STRING_SCHEMA,
"aa"));
assertRecordSchemaAndValues(expectedAfter, updatedRecord, Envelope.FieldName.AFTER);
// rename a column
statements = "ALTER TABLE test_table RENAME COLUMN uvc to xvc;" +
"UPDATE test_table SET xvc ='bb' WHERE pk = 1;";
consumer.expects(1);
executeAndWait(statements);
updatedRecord = consumer.remove();
VerifyRecord.isValidUpdate(updatedRecord, PK_FIELD, 1);
// now check we got the updated value under the renamed column (the old value is 'aa', the new one 'bb')
expectedBefore = Collections.singletonList(new SchemaAndValueField("xvc", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "aa"));
assertRecordSchemaAndValues(expectedBefore, updatedRecord, Envelope.FieldName.BEFORE);
expectedAfter = Collections.singletonList(new SchemaAndValueField("xvc", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "bb"));
assertRecordSchemaAndValues(expectedAfter, updatedRecord, Envelope.FieldName.AFTER);
// drop a column
statements = "ALTER TABLE test_table DROP COLUMN xvc;" +
"UPDATE test_table SET text ='update' WHERE pk = 1;";
consumer.expects(1);
executeAndWait(statements);
updatedRecord = consumer.remove();
VerifyRecord.isValidUpdate(updatedRecord, PK_FIELD, 1);
// change a column type
statements = "ALTER TABLE test_table ADD COLUMN modtype INTEGER;" +
"INSERT INTO test_table (pk,modtype) VALUES (2,1);";
consumer.expects(1);
executeAndWait(statements);
updatedRecord = consumer.remove();
VerifyRecord.isValidInsert(updatedRecord, PK_FIELD, 2);
assertRecordSchemaAndValues(
Collections.singletonList(new SchemaAndValueField("modtype", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 1)), updatedRecord, Envelope.FieldName.AFTER);
statements = "ALTER TABLE test_table ALTER COLUMN modtype TYPE SMALLINT;"
+ "UPDATE test_table SET modtype = 2 WHERE pk = 2;";
consumer.expects(1);
executeAndWait(statements);
updatedRecord = consumer.remove();
VerifyRecord.isValidUpdate(updatedRecord, PK_FIELD, 2);
assertRecordSchemaAndValues(
Collections.singletonList(new SchemaAndValueField("modtype", SchemaBuilder.OPTIONAL_INT16_SCHEMA, (short) 1)), updatedRecord, Envelope.FieldName.BEFORE);
assertRecordSchemaAndValues(
Collections.singletonList(new SchemaAndValueField("modtype", SchemaBuilder.OPTIONAL_INT16_SCHEMA, (short) 2)), updatedRecord, Envelope.FieldName.AFTER);
}
private Header getPKUpdateNewKeyHeader(SourceRecord record) {
return this.getHeaderField(record, RelationalChangeRecordEmitter.PK_UPDATE_NEWKEY_FIELD);
}
private Header getPKUpdateOldKeyHeader(SourceRecord record) {
return this.getHeaderField(record, RelationalChangeRecordEmitter.PK_UPDATE_OLDKEY_FIELD);
}
// returns the first header on the record whose key matches the given field name
private Header getHeaderField(SourceRecord record, String fieldName) {
return StreamSupport.stream(record.headers().spliterator(), false)
.filter(header -> fieldName.equals(header.key()))
.findFirst()
.get();
}
@Test
public void shouldReceiveChangesForUpdatesWithPKChanges() throws Exception {
startConnector();
consumer = testConsumer(3);
executeAndWait("UPDATE test_table SET text = 'update', pk = 2");
String topicName = topicName("public.test_table");
// first should be a delete of the old pk
SourceRecord deleteRecord = consumer.remove();
assertEquals(topicName, deleteRecord.topic());
VerifyRecord.isValidDelete(deleteRecord, PK_FIELD, 1);
Header keyPKUpdateHeader = getPKUpdateNewKeyHeader(deleteRecord);
assertEquals(Integer.valueOf(2), ((Struct) keyPKUpdateHeader.value()).getInt32("pk"));
// followed by a tombstone of the old pk
SourceRecord tombstoneRecord = consumer.remove();
assertEquals(topicName, tombstoneRecord.topic());
VerifyRecord.isValidTombstone(tombstoneRecord, PK_FIELD, 1);
// and finally insert of the new value
SourceRecord insertRecord = consumer.remove();
assertEquals(topicName, insertRecord.topic());
VerifyRecord.isValidInsert(insertRecord, PK_FIELD, 2);
keyPKUpdateHeader = getPKUpdateOldKeyHeader(insertRecord);
assertEquals(Integer.valueOf(1), ((Struct) keyPKUpdateHeader.value()).getInt32("pk"));
}
@Test
@FixFor("DBZ-582")
public void shouldReceiveChangesForUpdatesWithPKChangesWithoutTombstone() throws Exception {
startConnector(config -> config
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(CommonConnectorConfig.TOMBSTONES_ON_DELETE, false));
consumer = testConsumer(2);
executeAndWait("UPDATE test_table SET text = 'update', pk = 2");
String topicName = topicName("public.test_table");
// first should be a delete of the old pk
SourceRecord deleteRecord = consumer.remove();
assertEquals(topicName, deleteRecord.topic());
VerifyRecord.isValidDelete(deleteRecord, PK_FIELD, 1);
Header keyPKUpdateHeader = getPKUpdateNewKeyHeader(deleteRecord);
assertEquals(Integer.valueOf(2), ((Struct) keyPKUpdateHeader.value()).getInt32("pk"));
// followed by insert of the new value
SourceRecord insertRecord = consumer.remove();
assertEquals(topicName, insertRecord.topic());
VerifyRecord.isValidInsert(insertRecord, PK_FIELD, 2);
keyPKUpdateHeader = getPKUpdateOldKeyHeader(insertRecord);
assertEquals(Integer.valueOf(1), ((Struct) keyPKUpdateHeader.value()).getInt32("pk"));
}
@Test
public void shouldReceiveChangesForDefaultValues() throws Exception {
String statements = "ALTER TABLE test_table REPLICA IDENTITY FULL;" +
"ALTER TABLE test_table ADD COLUMN default_column TEXT DEFAULT 'default';" +
"INSERT INTO test_table (text) VALUES ('update');";
startConnector();
consumer = testConsumer(1);
executeAndWait(statements);
SourceRecord insertRecord = consumer.remove();
assertEquals(topicName("public.test_table"), insertRecord.topic());
VerifyRecord.isValidInsert(insertRecord, PK_FIELD, 2);
List<SchemaAndValueField> expectedSchemaAndValues = Arrays.asList(
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "update"),
new SchemaAndValueField("default_column", SchemaBuilder.string().optional().defaultValue("default").build(), "default"));
assertRecordSchemaAndValues(expectedSchemaAndValues, insertRecord, Envelope.FieldName.AFTER);
}
@Test
public void shouldReceiveChangesForTypeConstraints() throws Exception {
// add a new column
String statements = "ALTER TABLE test_table ADD COLUMN num_val NUMERIC(5,2);" +
"ALTER TABLE test_table REPLICA IDENTITY FULL;" +
"UPDATE test_table SET num_val = 123.45 WHERE pk = 1;";
startConnector();
consumer = testConsumer(1);
executeAndWait(statements);
// the update should be the last record
SourceRecord updatedRecord = consumer.remove();
String topicName = topicName("public.test_table");
assertEquals(topicName, updatedRecord.topic());
VerifyRecord.isValidUpdate(updatedRecord, PK_FIELD, 1);
// now check we got the updated value (the old value should be null, the new one whatever we set)
List<SchemaAndValueField> expectedBefore = Collections.singletonList(new SchemaAndValueField("num_val", null, null));
assertRecordSchemaAndValues(expectedBefore, updatedRecord, Envelope.FieldName.BEFORE);
List<SchemaAndValueField> expectedAfter = Collections.singletonList(
new SchemaAndValueField("num_val", Decimal.builder(2).parameter(TestHelper.PRECISION_PARAMETER_KEY, "5").optional().build(), new BigDecimal("123.45")));
assertRecordSchemaAndValues(expectedAfter, updatedRecord, Envelope.FieldName.AFTER);
// change a constraint
statements = "ALTER TABLE test_table ALTER COLUMN num_val TYPE NUMERIC(6,1);" +
"INSERT INTO test_table (pk,num_val) VALUES (2,123.41);";
consumer.expects(1);
executeAndWait(statements);
updatedRecord = consumer.remove();
VerifyRecord.isValidInsert(updatedRecord, PK_FIELD, 2);
assertRecordSchemaAndValues(
Collections.singletonList(new SchemaAndValueField("num_val", Decimal.builder(1).parameter(TestHelper.PRECISION_PARAMETER_KEY, "6").optional().build(),
new BigDecimal("123.4"))),
updatedRecord, Envelope.FieldName.AFTER);
statements = "ALTER TABLE test_table ALTER COLUMN num_val TYPE NUMERIC;" +
"INSERT INTO test_table (pk,num_val) VALUES (3,123.4567);";
consumer.expects(1);
executeAndWait(statements);
updatedRecord = consumer.remove();
final Struct dvs = new Struct(VariableScaleDecimal.schema());
dvs.put("scale", 4).put("value", new BigDecimal("123.4567").unscaledValue().toByteArray());
VerifyRecord.isValidInsert(updatedRecord, PK_FIELD, 3);
assertRecordSchemaAndValues(
Collections.singletonList(new SchemaAndValueField("num_val", VariableScaleDecimal.builder().optional().build(), dvs)), updatedRecord,
Envelope.FieldName.AFTER);
statements = "ALTER TABLE test_table ALTER COLUMN num_val TYPE DECIMAL(12,4);" +
"INSERT INTO test_table (pk,num_val) VALUES (4,2.48);";
consumer.expects(1);
executeAndWait(statements);
updatedRecord = consumer.remove();
VerifyRecord.isValidInsert(updatedRecord, PK_FIELD, 4);
assertRecordSchemaAndValues(
Collections.singletonList(new SchemaAndValueField("num_val", Decimal.builder(4).parameter(TestHelper.PRECISION_PARAMETER_KEY, "12").optional().build(),
new BigDecimal("2.4800"))),
updatedRecord, Envelope.FieldName.AFTER);
statements = "ALTER TABLE test_table ALTER COLUMN num_val TYPE DECIMAL(12);" +
"INSERT INTO test_table (pk,num_val) VALUES (5,1238);";
consumer.expects(1);
executeAndWait(statements);
updatedRecord = consumer.remove();
VerifyRecord.isValidInsert(updatedRecord, PK_FIELD, 5);
assertRecordSchemaAndValues(
Collections.singletonList(new SchemaAndValueField("num_val", Decimal.builder(0).parameter(TestHelper.PRECISION_PARAMETER_KEY, "12").optional().build(),
new BigDecimal("1238"))),
updatedRecord, Envelope.FieldName.AFTER);
statements = "ALTER TABLE test_table ALTER COLUMN num_val TYPE DECIMAL;" +
"INSERT INTO test_table (pk,num_val) VALUES (6,1225.1);";
consumer.expects(1);
executeAndWait(statements);
updatedRecord = consumer.remove();
final Struct dvs2 = new Struct(VariableScaleDecimal.schema());
dvs2.put("scale", 1).put("value", new BigDecimal("1225.1").unscaledValue().toByteArray());
VerifyRecord.isValidInsert(updatedRecord, PK_FIELD, 6);
assertRecordSchemaAndValues(
Collections.singletonList(new SchemaAndValueField("num_val", VariableScaleDecimal.builder().optional().build(), dvs2)), updatedRecord,
Envelope.FieldName.AFTER);
statements = "ALTER TABLE test_table ALTER COLUMN num_val SET NOT NULL;" +
"INSERT INTO test_table (pk,num_val) VALUES (7,1976);";
consumer.expects(1);
executeAndWait(statements);
updatedRecord = consumer.remove();
dvs2.put("scale", 0).put("value", new BigDecimal("1976").unscaledValue().toByteArray());
VerifyRecord.isValidInsert(updatedRecord, PK_FIELD, 7);
assertRecordSchemaAndValues(
Collections.singletonList(new SchemaAndValueField("num_val", VariableScaleDecimal.builder().build(), dvs2)), updatedRecord, Envelope.FieldName.AFTER);
}
@Test
public void shouldReceiveChangesForDeletes() throws Exception {
// add a new entry and remove both
String statements = "INSERT INTO test_table (text) VALUES ('insert2');" +
"DELETE FROM test_table WHERE pk > 0;";
startConnector();
consumer = testConsumer(5);
executeAndWait(statements);
String topicPrefix = "public.test_table";
String topicName = topicName(topicPrefix);
assertRecordInserted(topicPrefix, PK_FIELD, 2);
// first entry removed
SourceRecord record = consumer.remove();
assertEquals(topicName, record.topic());
VerifyRecord.isValidDelete(record, PK_FIELD, 1);
// followed by a tombstone
record = consumer.remove();
assertEquals(topicName, record.topic());
VerifyRecord.isValidTombstone(record, PK_FIELD, 1);
// second entry removed
record = consumer.remove();
assertEquals(topicName, record.topic());
VerifyRecord.isValidDelete(record, PK_FIELD, 2);
// followed by a tombstone
record = consumer.remove();
assertEquals(topicName, record.topic());
VerifyRecord.isValidTombstone(record, PK_FIELD, 2);
}
@Test
@FixFor("DBZ-582")
public void shouldReceiveChangesForDeletesWithoutTombstone() throws Exception {
// add a new entry and remove both
String statements = "INSERT INTO test_table (text) VALUES ('insert2');" +
"DELETE FROM test_table WHERE pk > 0;";
startConnector(config -> config
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(CommonConnectorConfig.TOMBSTONES_ON_DELETE, false));
consumer = testConsumer(3);
executeAndWait(statements);
String topicPrefix = "public.test_table";
String topicName = topicName(topicPrefix);
assertRecordInserted(topicPrefix, PK_FIELD, 2);
// first entry removed
SourceRecord record = consumer.remove();
assertEquals(topicName, record.topic());
VerifyRecord.isValidDelete(record, PK_FIELD, 1);
// second entry removed
record = consumer.remove();
assertEquals(topicName, record.topic());
VerifyRecord.isValidDelete(record, PK_FIELD, 2);
}
@Test
@SkipWhenDecoderPluginNameIs(value = PGOUTPUT, reason = "A delete on a table with no primary key and default replica throws PSQLException as tables must have a PK")
public void shouldReceiveChangesForDeletesDependingOnReplicaIdentity() throws Exception {
String topicName = topicName("public.test_table");
// With a PK we should get a delete event with the default level of replica identity
String statement = "ALTER TABLE test_table REPLICA IDENTITY DEFAULT;" +
"DELETE FROM test_table WHERE pk = 1;";
startConnector(config -> config
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(CommonConnectorConfig.TOMBSTONES_ON_DELETE, false));
consumer = testConsumer(1);
executeAndWait(statement);
SourceRecord record = consumer.remove();
assertEquals(topicName, record.topic());
VerifyRecord.isValidDelete(record, PK_FIELD, 1);
// Without a PK we should still get a delete event with REPLICA IDENTITY FULL
statement = "ALTER TABLE test_table DROP CONSTRAINT test_table_pkey CASCADE;" +
"INSERT INTO test_table (pk, text) VALUES (2, 'insert2');" +
"DELETE FROM test_table WHERE pk = 2;";
consumer.expects(2);
TestHelper.setReplicaIdentityForTable("test_table", "FULL");
executeAndWait(statement);
assertRecordInserted("public.test_table", PK_FIELD, 2);
record = consumer.remove();
assertEquals(topicName, record.topic());
VerifyRecord.isValidDelete(record, PK_FIELD, 2);
// Without a PK and without REPLICA IDENTITY FULL we will not get a delete event
statement = "INSERT INTO test_table (pk, text) VALUES (3, 'insert3');" +
"DELETE FROM test_table WHERE pk = 3;";
consumer.expects(1);
TestHelper.setReplicaIdentityForTable("test_table", "DEFAULT");
executeAndWait(statement);
assertRecordInserted("public.test_table", PK_FIELD, 3);
assertThat(consumer.isEmpty()).isTrue();
}
@Test
@FixFor("DBZ-4137")
public void shouldReceiveNumericTypeAsDoubleWithNullDefaults() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS numeric_table_with_n_defaults;",
"CREATE TABLE numeric_table_with_n_defaults (\n" +
" r int4 NOT NULL,\n" +
" r_numeric numeric(19, 4) NULL DEFAULT NULL,\n" +
" r_int int4 NULL DEFAULT NULL);",
"ALTER TABLE numeric_table_with_n_defaults REPLICA IDENTITY FULL");
startConnector(config -> config.with(PostgresConnectorConfig.DECIMAL_HANDLING_MODE, DecimalHandlingMode.DOUBLE),
false);
consumer = testConsumer(1);
// INSERT
String statement = "INSERT INTO numeric_table_with_n_defaults (r) VALUES (1);";
assertInsert(
statement,
Arrays.asList(
new SchemaAndValueField("r", Schema.INT32_SCHEMA, 1),
new SchemaAndValueField("r_numeric",
new SchemaBuilder(Schema.Type.FLOAT64)
.name(Schema.FLOAT64_SCHEMA.name())
.version(Schema.FLOAT64_SCHEMA.version())
.optional()
.defaultValue(null)
.build(),
null),
new SchemaAndValueField("r_int",
new SchemaBuilder(Schema.Type.INT32)
.name(Schema.INT32_SCHEMA.name())
.version(Schema.INT32_SCHEMA.version())
.optional()
.defaultValue(null)
.build(),
null)));
}
@Test
@FixFor("DBZ-4137")
public void shouldReceiveNumericTypeAsDoubleWithDefaults() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS numeric_table_with_defaults;",
"CREATE TABLE numeric_table_with_defaults (\n" +
" r int4 NOT NULL,\n" +
" r_numeric numeric(19, 4) NOT NULL DEFAULT 1,\n" +
" r_int int4 NOT NULL DEFAULT 2);",
"ALTER TABLE numeric_table_with_defaults REPLICA IDENTITY FULL");
startConnector(config -> config.with(PostgresConnectorConfig.DECIMAL_HANDLING_MODE, DecimalHandlingMode.DOUBLE),
false);
consumer = testConsumer(1);
// INSERT
String statement = "INSERT INTO numeric_table_with_defaults (r) VALUES (1);";
assertInsert(
statement,
Arrays.asList(
new SchemaAndValueField("r", Schema.INT32_SCHEMA, 1),
new SchemaAndValueField("r_numeric",
new SchemaBuilder(Schema.Type.FLOAT64)
.name(Schema.FLOAT64_SCHEMA.name())
.version(Schema.FLOAT64_SCHEMA.version())
.defaultValue(1.0d)
.build(),
1.0d),
new SchemaAndValueField("r_int",
new SchemaBuilder(Schema.Type.INT32)
.name(Schema.INT32_SCHEMA.name())
.version(Schema.INT32_SCHEMA.version())
.defaultValue(2)
.build(),
2)));
}
@Test
public void shouldReceiveNumericTypeAsDouble() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.DECIMAL_HANDLING_MODE, DecimalHandlingMode.DOUBLE));
assertInsert(INSERT_NUMERIC_DECIMAL_TYPES_STMT, 1, schemasAndValuesForDoubleEncodedNumericTypes());
}
@Test
@FixFor("DBZ-611")
public void shouldReceiveNumericTypeAsString() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.DECIMAL_HANDLING_MODE, DecimalHandlingMode.STRING));
assertInsert(INSERT_NUMERIC_DECIMAL_TYPES_STMT, 1, schemasAndValuesForStringEncodedNumericTypes());
}
@Test
@FixFor("DBZ-6758")
@SkipWhenDatabaseVersion(check = LESS_THAN, major = 14, reason = "Infinity support for numeric type was added in Postgres 14")
public void shouldReceiveChangesForInfinityNumericWithInfinity() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.DECIMAL_HANDLING_MODE, "string"));
assertInsert(INSERT_NUMERIC_DECIMAL_TYPES_STMT_WITH_INFINITY, 1, schemasAndValuesForStringEncodedNumericTypesWithInfinity());
}
@Test
@FixFor("DBZ-898")
public void shouldReceiveHStoreTypeWithSingleValueAsMap() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.HSTORE_HANDLING_MODE, PostgresConnectorConfig.HStoreHandlingMode.MAP));
assertInsert(INSERT_HSTORE_TYPE_STMT, 1, schemaAndValueFieldForMapEncodedHStoreType());
}
@Test
@FixFor("DBZ-898")
public void shouldReceiveHStoreTypeWithMultipleValuesAsMap() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.HSTORE_HANDLING_MODE, PostgresConnectorConfig.HStoreHandlingMode.MAP));
assertInsert(INSERT_HSTORE_TYPE_WITH_MULTIPLE_VALUES_STMT, 1, schemaAndValueFieldForMapEncodedHStoreTypeWithMultipleValues());
}
@Test
@FixFor("DBZ-898")
public void shouldReceiveHStoreTypeWithNullValuesAsMap() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.HSTORE_HANDLING_MODE, PostgresConnectorConfig.HStoreHandlingMode.MAP));
assertInsert(INSERT_HSTORE_TYPE_WITH_NULL_VALUES_STMT, 1, schemaAndValueFieldForMapEncodedHStoreTypeWithNullValues());
}
@Test
@FixFor("DBZ-898")
public void shouldReceiveHStoreTypeWithSpecialCharactersInValuesAsMap() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.HSTORE_HANDLING_MODE, PostgresConnectorConfig.HStoreHandlingMode.MAP));
assertInsert(INSERT_HSTORE_TYPE_WITH_SPECIAL_CHAR_STMT, 1, schemaAndValueFieldForMapEncodedHStoreTypeWithSpecialCharacters());
}
@Test
@FixFor("DBZ-898")
public void shouldReceiveHStoreTypeAsJsonString() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
consumer = testConsumer(1);
startConnector(config -> config.with(PostgresConnectorConfig.HSTORE_HANDLING_MODE, PostgresConnectorConfig.HStoreHandlingMode.JSON));
assertInsert(INSERT_HSTORE_TYPE_STMT, 1, schemaAndValueFieldForJsonEncodedHStoreType());
}
@Test
@FixFor("DBZ-898")
public void shouldReceiveHStoreTypeWithMultipleValuesAsJsonString() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.HSTORE_HANDLING_MODE, PostgresConnectorConfig.HStoreHandlingMode.JSON));
assertInsert(INSERT_HSTORE_TYPE_WITH_MULTIPLE_VALUES_STMT, 1, schemaAndValueFieldForJsonEncodedHStoreTypeWithMultipleValues());
}
@Test
@FixFor("DBZ-898")
public void shouldReceiveHStoreTypeWithSpecialValuesInJsonString() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.HSTORE_HANDLING_MODE, PostgresConnectorConfig.HStoreHandlingMode.JSON));
assertInsert(INSERT_HSTORE_TYPE_WITH_SPECIAL_CHAR_STMT, 1, schemaAndValueFieldForJsonEncodedHStoreTypeWithSpcialCharacters());
}
@Test
@FixFor("DBZ-898")
public void shouldReceiveHStoreTypeWithNullValuesAsJsonString() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.HSTORE_HANDLING_MODE, PostgresConnectorConfig.HStoreHandlingMode.JSON));
assertInsert(INSERT_HSTORE_TYPE_WITH_NULL_VALUES_STMT, 1, schemaAndValueFieldForJsonEncodedHStoreTypeWithNullValues());
}
@Test
@FixFor("DBZ-1814")
public void shouldReceiveByteaBytes() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.BINARY_HANDLING_MODE, PostgresConnectorConfig.BinaryHandlingMode.BYTES));
assertInsert(INSERT_BYTEA_BINMODE_STMT, 1, schemaAndValueForByteaBytes());
}
@Test
@FixFor("DBZ-1814")
public void shouldReceiveByteaBase64String() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.BINARY_HANDLING_MODE, PostgresConnectorConfig.BinaryHandlingMode.BASE64));
assertInsert(INSERT_BYTEA_BINMODE_STMT, 1, schemaAndValueForByteaBase64());
}
@Test
@FixFor("DBZ-5544")
public void shouldReceiveByteaBase64UrlSafeString() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.BINARY_HANDLING_MODE, PostgresConnectorConfig.BinaryHandlingMode.BASE64_URL_SAFE));
assertInsert(INSERT_BYTEA_BINMODE_STMT, 1, schemaAndValueForByteaBase64UrlSafe());
}
@Test
@FixFor("DBZ-1814")
public void shouldReceiveByteaHexString() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.BINARY_HANDLING_MODE, PostgresConnectorConfig.BinaryHandlingMode.HEX));
assertInsert(INSERT_BYTEA_BINMODE_STMT, 1, schemaAndValueForByteaHex());
}
@Test
@FixFor("DBZ-1814")
public void shouldReceiveUnknownTypeAsBytes() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true));
assertInsert(INSERT_CIRCLE_STMT, 1, schemaAndValueForUnknownColumnBytes());
}
@Test
@FixFor("DBZ-1814")
public void shouldReceiveUnknownTypeAsBase64() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(PostgresConnectorConfig.BINARY_HANDLING_MODE, BinaryHandlingMode.BASE64));
assertInsert(INSERT_CIRCLE_STMT, 1, schemaAndValueForUnknownColumnBase64());
}
@Test
@FixFor("DBZ-5544")
public void shouldReceiveUnknownTypeAsBase64UrlSafe() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(PostgresConnectorConfig.BINARY_HANDLING_MODE, BinaryHandlingMode.BASE64_URL_SAFE));
assertInsert(INSERT_CIRCLE_STMT, 1, schemaAndValueForUnknownColumnBase64UrlSafe());
}
@Test
@FixFor("DBZ-1814")
public void shouldReceiveUnknownTypeAsHex() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(PostgresConnectorConfig.BINARY_HANDLING_MODE, BinaryHandlingMode.HEX));
assertInsert(INSERT_CIRCLE_STMT, 1, schemaAndValueForUnknownColumnHex());
}
@Test
@FixFor("DBZ-259")
public void shouldProcessIntervalDelete() throws Exception {
final String statements = "INSERT INTO table_with_interval VALUES (default, 'Foo', default);" +
"INSERT INTO table_with_interval VALUES (default, 'Bar', default);" +
"DELETE FROM table_with_interval WHERE id = 1;";
startConnector();
consumer.expects(4);
executeAndWait(statements);
final String topicPrefix = "public.table_with_interval";
final String topicName = topicName(topicPrefix);
final String pk = "id";
assertRecordInserted(topicPrefix, pk, 1);
assertRecordInserted(topicPrefix, pk, 2);
// first entry removed
SourceRecord record = consumer.remove();
assertEquals(topicName, record.topic());
VerifyRecord.isValidDelete(record, pk, 1);
// followed by a tombstone
record = consumer.remove();
assertEquals(topicName, record.topic());
VerifyRecord.isValidTombstone(record, pk, 1);
}
@Test
@FixFor("DBZ-644")
public void shouldPropagateSourceColumnTypeToSchemaParameter() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config.with("column.propagate.source.type", ".*vc.*"));
assertInsert(INSERT_STRING_TYPES_STMT, 1, schemasAndValuesForStringTypesWithSourceColumnTypeInfo());
}
@Test
@FixFor("DBZ-1073")
public void shouldPropagateSourceColumnTypeScaleToSchemaParameter() throws Exception {
TestHelper.executeDDL("postgres_create_tables.ddl");
startConnector(config -> config
.with("column.propagate.source.type", ".*(d|dzs)")
.with(PostgresConnectorConfig.DECIMAL_HANDLING_MODE, PostgresConnectorConfig.DecimalHandlingMode.DOUBLE));
assertInsert(INSERT_NUMERIC_DECIMAL_TYPES_STMT, 1, schemasAndValuesForNumericTypesWithSourceColumnTypeInfo());
}
@Test
@FixFor("DBZ-800")
public void shouldReceiveHeartbeatAlsoWhenChangingNonWhitelistedTable() throws Exception {
// Testing.Print.enable();
startConnector(config -> config
.with(Heartbeat.HEARTBEAT_INTERVAL, "100")
.with(PostgresConnectorConfig.POLL_INTERVAL_MS, "50")
.with(PostgresConnectorConfig.TABLE_INCLUDE_LIST, "s1\\.b")
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.NO_DATA),
false);
waitForStreamingToStart();
String statement = "CREATE SCHEMA s1;" +
"CREATE TABLE s1.a (pk SERIAL, aa integer, PRIMARY KEY(pk));" +
"CREATE TABLE s1.b (pk SERIAL, bb integer, PRIMARY KEY(pk));" +
"INSERT INTO s1.b (bb) VALUES (22);";
Testing.print("Executing test statements");
TestHelper.execute(statement);
try {
final AtomicInteger heartbeatCount = new AtomicInteger();
final AtomicBoolean receivedInsert = new AtomicBoolean();
Awaitility.await().atMost(TestHelper.waitTimeForRecords() * 5, TimeUnit.SECONDS).until(() -> {
final SourceRecord record = consumeRecord();
Testing.print("Arrived record " + record);
if (record != null) {
if (record.topic().endsWith("s1.b")) {
assertRecordInserted(record, "s1.b", PK_FIELD, 1);
receivedInsert.set(true);
}
else {
assertHeartBeatRecord(record);
heartbeatCount.incrementAndGet();
}
}
return receivedInsert.get() && heartbeatCount.get() > 0;
});
}
catch (ConditionTimeoutException e) {
fail("Failed to receive insert and at least 1 heartbeat message", e);
}
final Set<Long> lsn = new HashSet<>();
TestHelper.execute("INSERT INTO s1.a (aa) VALUES (11);");
try {
Awaitility.await().atMost(TestHelper.waitTimeForRecords() * 5, TimeUnit.SECONDS).until(() -> {
final SourceRecord record = consumeRecord();
if (record != null) {
lsn.add((Long) record.sourceOffset().get("lsn"));
return lsn.size() >= 2;
}
return false;
});
}
catch (ConditionTimeoutException e) {
fail("Failed to detect at least 2 LSN changes", e);
}
Testing.print("Done");
}
@Test
@FixFor("DBZ-1565")
public void shouldWarnOnMissingHeartbeatForFilteredEvents() throws Exception {
final LogInterceptor logInterceptor = new LogInterceptor(PostgresStreamingChangeEventSource.class);
startConnector(config -> config
.with(PostgresConnectorConfig.POLL_INTERVAL_MS, "50")
.with(PostgresConnectorConfig.TABLE_INCLUDE_LIST, "s1\\.b")
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.NO_DATA),
false);
waitForStreamingToStart();
String statement = "CREATE SCHEMA s1;" +
"CREATE TABLE s1.a (pk SERIAL, aa integer, PRIMARY KEY(pk));" +
"CREATE TABLE s1.b (pk SERIAL, bb integer, PRIMARY KEY(pk));" +
"INSERT INTO s1.a (aa) VALUES (11);" +
"INSERT INTO s1.b (bb) VALUES (22);";
consumer = testConsumer(1);
executeAndWait(statement);
final int filteredCount = 10_100;
TestHelper.execute(
IntStream.range(0, filteredCount)
.mapToObj(x -> "INSERT INTO s1.a (pk) VALUES (default);")
.collect(Collectors.joining()));
Awaitility.await().alias("WAL growing log message").pollInterval(1, TimeUnit.SECONDS).atMost(5 * TestHelper.waitTimeForRecords(), TimeUnit.SECONDS)
.until(() -> logInterceptor.containsWarnMessage(
"Received 10001 events which were all filtered out, so no offset could be committed. This prevents the replication slot from acknowledging the processed WAL offsets, causing a growing backlog of non-removeable WAL segments on the database server. Consider to either adjust your filter configuration or enable heartbeat events (via the heartbeat.interval.ms option) to avoid this situation."));
}
@Test
@FixFor("DBZ-911")
@SkipWhenDecoderPluginNameIs(value = PGOUTPUT, reason = "Decoder synchronizes all schema columns when processing relation messages")
public void shouldNotRefreshSchemaOnUnchangedToastedData() throws Exception {
startConnector(config -> config
.with(PostgresConnectorConfig.SCHEMA_REFRESH_MODE, PostgresConnectorConfig.SchemaRefreshMode.COLUMNS_DIFF_EXCLUDE_UNCHANGED_TOAST));
String toastedValue = RandomStringUtils.randomAlphanumeric(10000);
// inserting a toasted value should /always/ produce a correct record
String statement = "ALTER TABLE test_table ADD COLUMN not_toast integer; INSERT INTO test_table (not_toast, text) values (10, '" + toastedValue + "')";
consumer = testConsumer(1);
executeAndWait(statement);
SourceRecord record = consumer.remove();
// after record should contain the toasted value
List<SchemaAndValueField> expectedAfter = Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, toastedValue));
assertRecordSchemaAndValues(expectedAfter, record, Envelope.FieldName.AFTER);
        // now we remove the toast column and update the not_toast column to see that our unchanged toast data
        // does not trigger a table schema refresh. The after schema should look the same as before.
statement = "ALTER TABLE test_table DROP COLUMN text; update test_table set not_toast = 5 where not_toast = 10";
consumer.expects(1);
executeAndWait(statement);
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_table", false));
assertEquals(Arrays.asList("pk", "text", "not_toast"), tbl.retrieveColumnNames());
});
TestHelper.assertNoOpenTransactions();
}
@Test
@FixFor("DBZ-911")
@SkipWhenDecoderPluginNameIsNot(value = SkipWhenDecoderPluginNameIsNot.DecoderPluginName.PGOUTPUT, reason = "Decoder synchronizes all schema columns when processing relation messages")
public void shouldRefreshSchemaOnUnchangedToastedDataWhenSchemaChanged() throws Exception {
startConnector(config -> config
.with(PostgresConnectorConfig.SCHEMA_REFRESH_MODE, PostgresConnectorConfig.SchemaRefreshMode.COLUMNS_DIFF_EXCLUDE_UNCHANGED_TOAST));
String toastedValue = RandomStringUtils.randomAlphanumeric(10000);
// inserting a toasted value should /always/ produce a correct record
String statement = "ALTER TABLE test_table ADD COLUMN not_toast integer; INSERT INTO test_table (not_toast, text) values (10, '" + toastedValue + "')";
consumer = testConsumer(1);
executeAndWait(statement);
SourceRecord record = consumer.remove();
// after record should contain the toasted value
List<SchemaAndValueField> expectedAfter = Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, toastedValue));
assertRecordSchemaAndValues(expectedAfter, record, Envelope.FieldName.AFTER);
        // now we remove the toast column and update the not_toast column to see that our unchanged toast data
        // does trigger a table schema refresh. The after schema should reflect the changes.
statement = "ALTER TABLE test_table DROP COLUMN text; update test_table set not_toast = 5 where not_toast = 10";
consumer.expects(1);
executeAndWait(statement);
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_table", false));
assertEquals(Arrays.asList("pk", "not_toast"), tbl.retrieveColumnNames());
});
}
@Test
@FixFor("DBZ-842")
public void shouldNotPropagateUnchangedToastedData() throws Exception {
startConnector(config -> config
.with(PostgresConnectorConfig.SCHEMA_REFRESH_MODE, PostgresConnectorConfig.SchemaRefreshMode.COLUMNS_DIFF_EXCLUDE_UNCHANGED_TOAST));
final String toastedValue1 = RandomStringUtils.randomAlphanumeric(10000);
final String toastedValue2 = RandomStringUtils.randomAlphanumeric(10000);
final String toastedValue3 = RandomStringUtils.randomAlphanumeric(10000);
// inserting a toasted value should /always/ produce a correct record
String statement = "ALTER TABLE test_table ADD COLUMN not_toast integer;"
+ "ALTER TABLE test_table ADD COLUMN mandatory_text TEXT NOT NULL DEFAULT '';"
+ "ALTER TABLE test_table ALTER COLUMN mandatory_text SET STORAGE EXTENDED;"
+ "ALTER TABLE test_table ALTER COLUMN mandatory_text SET DEFAULT '" + toastedValue3 + "';"
+ "INSERT INTO test_table (not_toast, text, mandatory_text) values (10, '" + toastedValue1 + "', '" + toastedValue1 + "');"
+ "INSERT INTO test_table (not_toast, text, mandatory_text) values (10, '" + toastedValue2 + "', '" + toastedValue2 + "');";
consumer = testConsumer(2);
executeAndWait(statement);
// after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, toastedValue1),
new SchemaAndValueField("mandatory_text", SchemaBuilder.string().defaultValue(toastedValue3).build(), toastedValue1)), consumer.remove(),
Envelope.FieldName.AFTER);
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, toastedValue2),
new SchemaAndValueField("mandatory_text", SchemaBuilder.string().defaultValue(toastedValue3).build(), toastedValue2)), consumer.remove(),
Envelope.FieldName.AFTER);
statement = "UPDATE test_table SET not_toast = 2;"
+ "UPDATE test_table SET not_toast = 3;";
consumer.expects(6);
executeAndWait(statement);
consumer.process(record -> {
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_table", false));
assertEquals(Arrays.asList("pk", "text", "not_toast", "mandatory_text"), tbl.retrieveColumnNames());
});
});
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 2),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "insert"),
new SchemaAndValueField("mandatory_text", SchemaBuilder.string().defaultValue(toastedValue3).build(), "")), consumer.remove(), Envelope.FieldName.AFTER);
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 2),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, DecoderDifferences.optionalToastedValuePlaceholder()),
new SchemaAndValueField("mandatory_text", SchemaBuilder.string().defaultValue(toastedValue3).build(),
DecoderDifferences.mandatoryToastedValuePlaceholder())),
consumer.remove(),
Envelope.FieldName.AFTER);
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 2),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, DecoderDifferences.optionalToastedValuePlaceholder()),
new SchemaAndValueField("mandatory_text", SchemaBuilder.string().defaultValue(toastedValue3).build(),
DecoderDifferences.mandatoryToastedValuePlaceholder())),
consumer.remove(),
Envelope.FieldName.AFTER);
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 3),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "insert"),
new SchemaAndValueField("mandatory_text", SchemaBuilder.string().defaultValue(toastedValue3).build(), "")), consumer.remove(), Envelope.FieldName.AFTER);
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 3),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, DecoderDifferences.optionalToastedValuePlaceholder()),
new SchemaAndValueField("mandatory_text", SchemaBuilder.string().defaultValue(toastedValue3).build(),
DecoderDifferences.mandatoryToastedValuePlaceholder())),
consumer.remove(),
Envelope.FieldName.AFTER);
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 3),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, DecoderDifferences.optionalToastedValuePlaceholder()),
new SchemaAndValueField("mandatory_text", SchemaBuilder.string().defaultValue(toastedValue3).build(),
DecoderDifferences.mandatoryToastedValuePlaceholder())),
consumer.remove(),
Envelope.FieldName.AFTER);
}
@Test
@FixFor("DBZ-4941")
public void shouldHandleToastedArrayColumn() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_toast_table;",
"CREATE TABLE test_toast_table (id SERIAL PRIMARY KEY, text TEXT);");
startConnector(Function.identity(), false);
final String toastedValue = RandomStringUtils.randomAlphanumeric(10000);
String statement = "ALTER TABLE test_toast_table ADD COLUMN not_toast integer;"
+ "ALTER TABLE test_toast_table ADD COLUMN mandatory_text_array TEXT[] NOT NULL;"
+ "ALTER TABLE test_toast_table ALTER COLUMN mandatory_text_array SET STORAGE EXTENDED;"
+ "INSERT INTO test_toast_table (not_toast, text, mandatory_text_array) values (10, 'text', ARRAY ['" + toastedValue + "']);";
consumer = testConsumer(1);
executeAndWait(statement);
// after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "text"),
new SchemaAndValueField("mandatory_text_array", SchemaBuilder.array(Schema.OPTIONAL_STRING_SCHEMA).build(), Arrays.asList(toastedValue))),
consumer.remove(),
Envelope.FieldName.AFTER);
statement = "UPDATE test_toast_table SET not_toast = 2;";
consumer.expects(1);
executeAndWait(statement);
consumer.process(record -> {
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_toast_table", false));
assertEquals(Arrays.asList("id", "text", "not_toast", "mandatory_text_array"), tbl.retrieveColumnNames());
});
});
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 2),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "text"),
new SchemaAndValueField("mandatory_text_array", SchemaBuilder.array(Schema.OPTIONAL_STRING_SCHEMA).build(),
Arrays.asList(DecoderDifferences.mandatoryToastedValuePlaceholder()))),
consumer.remove(),
Envelope.FieldName.AFTER);
}
@Test
@FixFor("DBZ-6122")
public void shouldHandleToastedArrayColumnCharacterVarying() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_toast_table;",
"CREATE TABLE test_toast_table (id SERIAL PRIMARY KEY, text character varying(255));");
startConnector(Function.identity(), false);
final String toastedValue = RandomStringUtils.randomAlphanumeric(10000);
String statement = "ALTER TABLE test_toast_table ADD COLUMN not_toast integer;"
+ "ALTER TABLE test_toast_table ADD COLUMN mandatory_text_array character varying(20000)[] NOT NULL;"
+ "ALTER TABLE test_toast_table ALTER COLUMN mandatory_text_array SET STORAGE EXTENDED;"
+ "INSERT INTO test_toast_table (not_toast, text, mandatory_text_array) values (10, 'text', ARRAY ['" + toastedValue + "']);";
consumer = testConsumer(1);
executeAndWait(statement);
// after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "text"),
new SchemaAndValueField("mandatory_text_array", SchemaBuilder.array(Schema.OPTIONAL_STRING_SCHEMA).build(), Arrays.asList(toastedValue))),
consumer.remove(),
Envelope.FieldName.AFTER);
statement = "UPDATE test_toast_table SET not_toast = 2;";
consumer.expects(1);
executeAndWait(statement);
consumer.process(record -> {
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_toast_table", false));
assertEquals(Arrays.asList("id", "text", "not_toast", "mandatory_text_array"), tbl.retrieveColumnNames());
});
});
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 2),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "text"),
new SchemaAndValueField("mandatory_text_array", SchemaBuilder.array(Schema.OPTIONAL_STRING_SCHEMA).build(),
Arrays.asList(DecoderDifferences.mandatoryToastedValuePlaceholder()))),
consumer.remove(),
Envelope.FieldName.AFTER);
}
@Test
@FixFor("DBZ-6122")
public void shouldHandleToastedDateArrayColumn() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_toast_table;",
"CREATE TABLE test_toast_table (id SERIAL PRIMARY KEY);");
startConnector(Function.identity(), false);
        // 19338 is the number of days since the epoch for 2022-12-12, matching the date literal below
        List<Integer> intList = IntStream.range(1, 100000).boxed().map((x) -> 19338).collect(Collectors.toList());
        final String toastedValue = intList.stream().map((x) -> "'2022-12-12'::date").collect(Collectors.joining(","));
String statement = "ALTER TABLE test_toast_table ADD COLUMN not_toast integer;"
+ "ALTER TABLE test_toast_table ADD COLUMN date_array date[];"
+ "ALTER TABLE test_toast_table ALTER COLUMN date_array SET STORAGE EXTENDED;"
+ "INSERT INTO test_toast_table (not_toast, date_array) values (10, ARRAY [" + toastedValue + "]);";
consumer = testConsumer(1);
executeAndWait(statement);
// after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("date_array",
SchemaBuilder.array(SchemaBuilder.int32().name("io.debezium.time.Date").optional().version(1).build()).optional().build(),
intList)),
consumer.remove(),
Envelope.FieldName.AFTER);
statement = "UPDATE test_toast_table SET not_toast = 2;";
consumer.expects(1);
executeAndWait(statement);
consumer.process(record -> {
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_toast_table", false));
assertEquals(Arrays.asList("id", "not_toast", "date_array"), tbl.retrieveColumnNames());
});
});
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 2),
new SchemaAndValueField("date_array",
SchemaBuilder.array(SchemaBuilder.int32().name("io.debezium.time.Date").optional().version(1).build()).optional().build(),
DecoderDifferences.toastedValueIntPlaceholder())),
consumer.remove(),
Envelope.FieldName.AFTER);
}
@Test
@FixFor("DBZ-6122")
public void shouldHandleToastedByteArrayColumn() throws Exception {
// Testing.Print.enable();
TestHelper.execute(
"DROP TABLE IF EXISTS test_toast_table;",
"CREATE TABLE test_toast_table (id SERIAL PRIMARY KEY);");
startConnector(Function.identity(), false);
final String toastedValue = RandomStringUtils.randomNumeric(10000);
String statement = "ALTER TABLE test_toast_table ADD COLUMN not_toast integer;"
+ "ALTER TABLE test_toast_table ADD COLUMN bytea_array bytea[];"
+ "ALTER TABLE test_toast_table ALTER COLUMN bytea_array SET STORAGE EXTENDED;"
+ "INSERT INTO test_toast_table (not_toast, bytea_array) values (10, ARRAY ['" + toastedValue + "'::bytea]);";
consumer = testConsumer(1);
executeAndWait(statement);
// after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("bytea_array",
SchemaBuilder.array(Schema.OPTIONAL_BYTES_SCHEMA).optional().build(), Arrays.asList(ByteBuffer.wrap(toastedValue.getBytes())))),
consumer.remove(),
Envelope.FieldName.AFTER);
statement = "UPDATE test_toast_table SET not_toast = 2;";
consumer.expects(1);
executeAndWait(statement);
consumer.process(record -> {
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_toast_table", false));
assertEquals(Arrays.asList("id", "not_toast", "bytea_array"), tbl.retrieveColumnNames());
});
});
final var record = consumer.remove();
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 2)),
record,
Envelope.FieldName.AFTER);
final var after = ((Struct) record.value()).getStruct(Envelope.FieldName.AFTER);
final var byteaArray = after.getArray("bytea_array");
Assertions.assertThat(byteaArray).hasSize(1);
Assertions.assertThat(byteaArray.get(0)).isEqualTo(DecoderDifferences.mandatoryToastedValueBinaryPlaceholder());
Assertions.assertThat(after.schema().field("bytea_array").schema())
.isEqualTo(SchemaBuilder.array(Schema.OPTIONAL_BYTES_SCHEMA).optional().build());
}
@Test
public void shouldHandleToastedByteaColumnInHexMode() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_toast_table;",
"CREATE TABLE test_toast_table (id SERIAL PRIMARY KEY);");
startConnector(config -> config.with(CommonConnectorConfig.BINARY_HANDLING_MODE, BinaryHandlingMode.HEX), false);
final String toastedValue = RandomStringUtils.randomNumeric(10000);
String statement = "ALTER TABLE test_toast_table ADD COLUMN not_toast integer;"
+ "ALTER TABLE test_toast_table ADD COLUMN bytea_ bytea;"
+ "ALTER TABLE test_toast_table ALTER COLUMN bytea_ SET STORAGE EXTENDED;"
+ "INSERT INTO test_toast_table (not_toast, bytea_) values (10, '" + toastedValue + "'::bytea);";
consumer = testConsumer(1);
executeAndWait(statement);
// after record should contain the toasted value
assertValueField(consumer.remove(), "after/bytea_", HexConverter.convertToHexString(toastedValue.getBytes(StandardCharsets.UTF_8)));
statement = "UPDATE test_toast_table SET not_toast = 2;";
consumer.expects(1);
executeAndWait(statement);
// after update of toasted value record should contain the placeholder
assertValueField(consumer.remove(), "after/bytea_", DecoderDifferences.mandatoryToastedValuePlaceholder());
}
@Test
@FixFor("DBZ-5936")
public void shouldHandleToastedIntegerArrayColumn() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_toast_table;",
"CREATE TABLE test_toast_table (id SERIAL PRIMARY KEY);");
startConnector(Function.identity(), false);
List<Integer> intList = IntStream.range(1, 10000).boxed().collect(Collectors.toList());
final String toastedValue = intList.stream().map(String::valueOf)
.collect(Collectors.joining(","));
String statement = "ALTER TABLE test_toast_table ADD COLUMN not_toast integer;"
+ "ALTER TABLE test_toast_table ADD COLUMN int_array int[];"
+ "ALTER TABLE test_toast_table ALTER COLUMN int_array SET STORAGE EXTENDED;"
+ "INSERT INTO test_toast_table (not_toast, int_array) values (10, ARRAY [" + toastedValue + "]);";
consumer = testConsumer(1);
executeAndWait(statement);
// after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("int_array", SchemaBuilder.array(Schema.OPTIONAL_INT32_SCHEMA).optional().build(), intList)),
consumer.remove(),
Envelope.FieldName.AFTER);
statement = "UPDATE test_toast_table SET not_toast = 2;";
consumer.expects(1);
executeAndWait(statement);
consumer.process(record -> {
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_toast_table", false));
assertEquals(Arrays.asList("id", "not_toast", "int_array"), tbl.retrieveColumnNames());
});
});
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 2),
new SchemaAndValueField("int_array", SchemaBuilder.array(Schema.OPTIONAL_INT32_SCHEMA).optional().build(),
DecoderDifferences.toastedValueIntPlaceholder())),
consumer.remove(),
Envelope.FieldName.AFTER);
}
@Test
@FixFor("DBZ-5936")
public void shouldHandleToastedBigIntArrayColumn() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_toast_table;",
"CREATE TABLE test_toast_table (id SERIAL PRIMARY KEY);");
startConnector(Function.identity(), false);
List<Long> bigintList = LongStream.range(1, 10000).boxed().collect(Collectors.toList());
final String toastedValue = bigintList.stream().map(String::valueOf)
.collect(Collectors.joining(","));
String statement = "ALTER TABLE test_toast_table ADD COLUMN not_toast integer;"
+ "ALTER TABLE test_toast_table ADD COLUMN bigint_array bigint[];"
+ "ALTER TABLE test_toast_table ALTER COLUMN bigint_array SET STORAGE EXTENDED;"
+ "INSERT INTO test_toast_table (not_toast, bigint_array) values (10, ARRAY [" + toastedValue + "]);";
consumer = testConsumer(1);
executeAndWait(statement);
// after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("bigint_array", SchemaBuilder.array(Schema.OPTIONAL_INT64_SCHEMA).optional().build(), bigintList)),
consumer.remove(),
Envelope.FieldName.AFTER);
statement = "UPDATE test_toast_table SET not_toast = 2;";
consumer.expects(1);
executeAndWait(statement);
consumer.process(record -> {
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_toast_table", false));
assertEquals(Arrays.asList("id", "not_toast", "bigint_array"), tbl.retrieveColumnNames());
});
});
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 2),
new SchemaAndValueField("bigint_array", SchemaBuilder.array(Schema.OPTIONAL_INT64_SCHEMA).optional().build(),
DecoderDifferences.toastedValueBigintPlaceholder())),
consumer.remove(),
Envelope.FieldName.AFTER);
}
@Test
@FixFor("DBZ-5936")
public void shouldHandleToastedJsonArrayColumn() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_toast_table;",
"CREATE TABLE test_toast_table (id SERIAL PRIMARY KEY, text TEXT);");
startConnector(Function.identity(), false);
final String toastedValue = RandomStringUtils.randomAlphanumeric(10000);
String statement = "ALTER TABLE test_toast_table ADD COLUMN not_toast integer;"
+ "ALTER TABLE test_toast_table ADD COLUMN json_array json[];"
+ "ALTER TABLE test_toast_table ALTER COLUMN json_array SET STORAGE EXTENDED;"
+ "INSERT INTO test_toast_table (not_toast, text, json_array) "
+ "VALUES (10, 'text', ARRAY [ '{\"key\": \"" + toastedValue + "\" }'::json ]);";
consumer = testConsumer(1);
executeAndWait(statement);
// after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "text"),
new SchemaAndValueField("json_array", SchemaBuilder.array(
io.debezium.data.Json.builder().optional().build()).optional().build(),
Arrays.asList("{\"key\": \"" + toastedValue + "\" }"))),
consumer.remove(),
Envelope.FieldName.AFTER);
statement = "UPDATE test_toast_table SET not_toast = 2;";
consumer.expects(1);
executeAndWait(statement);
consumer.process(record -> {
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_toast_table", false));
assertEquals(Arrays.asList("id", "text", "not_toast", "json_array"), tbl.retrieveColumnNames());
});
});
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 2),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "text"),
new SchemaAndValueField("json_array", SchemaBuilder.array(
io.debezium.data.Json.builder().optional().build()).optional().build(),
Arrays.asList(DecoderDifferences.mandatoryToastedValuePlaceholder()))),
consumer.remove(),
Envelope.FieldName.AFTER);
}
@Test
@FixFor("DBZ-6379")
public void shouldHandleToastedHstoreInHstoreMapMode() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_toast_table;",
"CREATE TABLE test_toast_table (id SERIAL PRIMARY KEY, text TEXT);");
startConnector(config -> config.with(PostgresConnectorConfig.HSTORE_HANDLING_MODE, PostgresConnectorConfig.HStoreHandlingMode.MAP));
final String toastedValue = RandomStringUtils.randomAlphanumeric(100000);
String statement = "ALTER TABLE test_toast_table ADD COLUMN col hstore;"
+ "INSERT INTO test_toast_table (id, col) values (10, 'a=>" + toastedValue + "');";
consumer = testConsumer(1);
executeAndWait(statement);
// after record should contain the toasted value
        HashMap<String, String> colValue = new HashMap<>();
colValue.put("a", toastedValue);
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("col", SchemaBuilder.map(SchemaBuilder.STRING_SCHEMA,
SchemaBuilder.OPTIONAL_STRING_SCHEMA).optional().build(), colValue)),
consumer.remove(),
Envelope.FieldName.AFTER);
statement = "UPDATE test_toast_table SET text = 'text';";
consumer.expects(1);
executeAndWait(statement);
consumer.process(record -> {
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_toast_table", false));
assertEquals(Arrays.asList("id", "text", "col"), tbl.retrieveColumnNames());
});
});
colValue.clear();
colValue.put(DecoderDifferences.optionalToastedValuePlaceholder(), DecoderDifferences.optionalToastedValuePlaceholder());
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "text"),
new SchemaAndValueField("col", SchemaBuilder.map(SchemaBuilder.STRING_SCHEMA,
SchemaBuilder.OPTIONAL_STRING_SCHEMA).optional().build(), colValue)),
consumer.remove(),
Envelope.FieldName.AFTER);
}
@Test
@FixFor("DBZ-6720")
public void shouldHandleToastedUuidArrayColumn() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_toast_table;",
"CREATE TABLE test_toast_table (id SERIAL PRIMARY KEY, text TEXT);");
startConnector(Function.identity(), false);
final List<String> toastedValueList = Stream.generate(UUID::randomUUID).map(String::valueOf).limit(10000).collect(Collectors.toList());
final String[] toastedValueArray = toastedValueList.toArray(new String[toastedValueList.size()]);
        final String toastedValueQuotedString = toastedValueList.stream().map(uuidStr -> ("'" + uuidStr + "'")).collect(Collectors.joining(","));
String statement = "ALTER TABLE test_toast_table ADD COLUMN not_toast integer;"
+ "ALTER TABLE test_toast_table ADD COLUMN uuid_array uuid[];"
+ "ALTER TABLE test_toast_table ALTER COLUMN uuid_array SET STORAGE EXTENDED;"
+ "INSERT INTO test_toast_table (not_toast, text, uuid_array) "
+ "VALUES (10, 'text', ARRAY [" + toastedValueQuotedString + "]::uuid[]);";
consumer = testConsumer(1);
executeAndWait(statement);
// after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "text"),
new SchemaAndValueField("uuid_array", SchemaBuilder.array(
io.debezium.data.Uuid.builder().optional().build()).optional().build(),
Arrays.asList(toastedValueArray))),
consumer.remove(),
Envelope.FieldName.AFTER);
statement = "UPDATE test_toast_table SET not_toast = 2;";
consumer.expects(1);
executeAndWait(statement);
consumer.process(record -> {
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_toast_table", false));
assertEquals(Arrays.asList("id", "text", "not_toast", "uuid_array"), tbl.retrieveColumnNames());
});
});
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 2),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "text"),
new SchemaAndValueField("uuid_array", SchemaBuilder.array(
io.debezium.data.Uuid.builder().optional().build()).optional().build(),
Arrays.asList(DecoderDifferences.mandatoryToastedValueUuidPlaceholder()))),
consumer.remove(),
Envelope.FieldName.AFTER);
}
@Test
@FixFor("DBZ-7193")
public void shouldHandleToastedArrayColumnForReplicaIdentityFullTable() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_toast_table;",
"CREATE TABLE test_toast_table (id SERIAL PRIMARY KEY);");
startConnector(Function.identity(), false);
assertConnectorIsRunning();
final String toastedValue = RandomStringUtils.randomAlphanumeric(10000);
// INSERT
String statement = "ALTER TABLE test_toast_table ADD COLUMN not_toast integer;"
+ "ALTER TABLE test_toast_table ADD COLUMN mandatory_text_array TEXT[] NOT NULL;"
+ "ALTER TABLE test_toast_table ALTER COLUMN mandatory_text_array SET STORAGE EXTENDED;"
+ "ALTER TABLE test_toast_table REPLICA IDENTITY FULL;"
+ "INSERT INTO test_toast_table (not_toast, mandatory_text_array) values (10, ARRAY ['" + toastedValue + "']);";
consumer = testConsumer(1);
executeAndWait(statement);
// after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("mandatory_text_array", SchemaBuilder.array(Schema.OPTIONAL_STRING_SCHEMA).build(), Arrays.asList(toastedValue))),
consumer.remove(),
Envelope.FieldName.AFTER);
// UPDATE
statement = "UPDATE test_toast_table SET not_toast = 20;";
consumer.expects(1);
executeAndWait(statement);
consumer.process(record -> {
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_toast_table", false));
assertEquals(Arrays.asList("id", "not_toast", "mandatory_text_array"), tbl.retrieveColumnNames());
});
});
SourceRecord updatedRecord = consumer.remove();
// before and after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("mandatory_text_array", SchemaBuilder.array(Schema.OPTIONAL_STRING_SCHEMA).build(), Arrays.asList(toastedValue))),
updatedRecord, Envelope.FieldName.BEFORE);
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 20),
new SchemaAndValueField("mandatory_text_array", SchemaBuilder.array(Schema.OPTIONAL_STRING_SCHEMA).build(), Arrays.asList(toastedValue))),
updatedRecord, Envelope.FieldName.AFTER);
}
@Test
@FixFor("DBZ-7193")
    public void shouldHandleToastedVarcharArrayColumnForReplicaIdentityFullTable() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_toast_table;",
"CREATE TABLE test_toast_table (id SERIAL PRIMARY KEY);");
startConnector(Function.identity(), false);
assertConnectorIsRunning();
final String toastedValue = RandomStringUtils.randomAlphanumeric(10000);
// INSERT
String statement = "ALTER TABLE test_toast_table ADD COLUMN not_toast integer;"
+ "ALTER TABLE test_toast_table ADD COLUMN mandatory_text_array character varying(20000)[] NOT NULL;"
+ "ALTER TABLE test_toast_table ALTER COLUMN mandatory_text_array SET STORAGE EXTENDED;"
+ "ALTER TABLE test_toast_table REPLICA IDENTITY FULL;"
+ "INSERT INTO test_toast_table (not_toast, mandatory_text_array) values (10, ARRAY ['" + toastedValue + "']);";
consumer = testConsumer(1);
executeAndWait(statement);
// after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("mandatory_text_array", SchemaBuilder.array(Schema.OPTIONAL_STRING_SCHEMA).build(), Arrays.asList(toastedValue))),
consumer.remove(),
Envelope.FieldName.AFTER);
// UPDATE
statement = "UPDATE test_toast_table SET not_toast = 20;";
consumer.expects(1);
executeAndWait(statement);
consumer.process(record -> {
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_toast_table", false));
assertEquals(Arrays.asList("id", "not_toast", "mandatory_text_array"), tbl.retrieveColumnNames());
});
});
SourceRecord updatedRecord = consumer.remove();
// before and after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("mandatory_text_array", SchemaBuilder.array(Schema.OPTIONAL_STRING_SCHEMA).build(), Arrays.asList(toastedValue))),
updatedRecord, Envelope.FieldName.BEFORE);
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 20),
new SchemaAndValueField("mandatory_text_array", SchemaBuilder.array(Schema.OPTIONAL_STRING_SCHEMA).build(), Arrays.asList(toastedValue))),
updatedRecord, Envelope.FieldName.AFTER);
}
@Test
@FixFor("DBZ-7193")
public void shouldHandleToastedDateArrayColumnForReplicaIdentityFullTable() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_toast_table;",
"CREATE TABLE test_toast_table (id SERIAL PRIMARY KEY);");
startConnector(Function.identity(), false);
assertConnectorIsRunning();
List<Integer> intList = IntStream.range(1, 100000).boxed().map((x) -> 19338).collect(Collectors.toList());
final String toastedValue = intList.stream().map((x) -> "'2022-12-12'::date").collect(Collectors.joining(","));
// INSERT
String statement = "ALTER TABLE test_toast_table ADD COLUMN not_toast integer;"
+ "ALTER TABLE test_toast_table ADD COLUMN date_array date[];"
+ "ALTER TABLE test_toast_table ALTER COLUMN date_array SET STORAGE EXTENDED;"
+ "ALTER TABLE test_toast_table REPLICA IDENTITY FULL;"
+ "INSERT INTO test_toast_table (not_toast, date_array) values (10, ARRAY [" + toastedValue + "]);";
consumer = testConsumer(1);
executeAndWait(statement);
// after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("date_array",
SchemaBuilder.array(SchemaBuilder.int32().name("io.debezium.time.Date").optional().version(1).build()).optional().build(),
intList)),
consumer.remove(),
Envelope.FieldName.AFTER);
// UPDATE
statement = "UPDATE test_toast_table SET not_toast = 20;";
consumer.expects(1);
executeAndWait(statement);
consumer.process(record -> {
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_toast_table", false));
assertEquals(Arrays.asList("id", "not_toast", "date_array"), tbl.retrieveColumnNames());
});
});
SourceRecord updatedRecord = consumer.remove();
// before and after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("date_array",
SchemaBuilder.array(SchemaBuilder.int32().name("io.debezium.time.Date").optional().version(1).build()).optional().build(),
intList)),
updatedRecord, Envelope.FieldName.BEFORE);
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 20),
new SchemaAndValueField("date_array",
SchemaBuilder.array(SchemaBuilder.int32().name("io.debezium.time.Date").optional().version(1).build()).optional().build(),
intList)),
updatedRecord, Envelope.FieldName.AFTER);
}
@Test
@FixFor("DBZ-7193")
public void shouldHandleToastedByteArrayColumnForReplicaIdentityFullTable() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_toast_table;",
"CREATE TABLE test_toast_table (id SERIAL PRIMARY KEY);");
startConnector(Function.identity(), false);
assertConnectorIsRunning();
final String toastedValue = RandomStringUtils.randomNumeric(10000);
// INSERT
String statement = "ALTER TABLE test_toast_table ADD COLUMN not_toast integer;"
+ "ALTER TABLE test_toast_table ADD COLUMN bytea_array bytea[];"
+ "ALTER TABLE test_toast_table ALTER COLUMN bytea_array SET STORAGE EXTENDED;"
+ "ALTER TABLE test_toast_table REPLICA IDENTITY FULL;"
+ "INSERT INTO test_toast_table (not_toast, bytea_array) values (10, ARRAY ['" + toastedValue + "'::bytea]);";
consumer = testConsumer(1);
executeAndWait(statement);
// after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("bytea_array",
SchemaBuilder.array(Schema.OPTIONAL_BYTES_SCHEMA).optional().build(), Arrays.asList(ByteBuffer.wrap(toastedValue.getBytes())))),
consumer.remove(),
Envelope.FieldName.AFTER);
// UPDATE
statement = "UPDATE test_toast_table SET not_toast = 20;";
consumer.expects(1);
executeAndWait(statement);
consumer.process(record -> {
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_toast_table", false));
assertEquals(Arrays.asList("id", "not_toast", "bytea_array"), tbl.retrieveColumnNames());
});
});
SourceRecord updatedRecord = consumer.remove();
// before and after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("bytea_array",
SchemaBuilder.array(Schema.OPTIONAL_BYTES_SCHEMA).optional().build(),
Arrays.asList(ByteBuffer.wrap(toastedValue.getBytes())))),
updatedRecord, Envelope.FieldName.BEFORE);
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 20),
new SchemaAndValueField("bytea_array",
SchemaBuilder.array(Schema.OPTIONAL_BYTES_SCHEMA).optional().build(),
Arrays.asList(ByteBuffer.wrap(toastedValue.getBytes())))),
updatedRecord, Envelope.FieldName.AFTER);
}
@Test
@FixFor("DBZ-7193")
    public void shouldHandleToastedIntArrayColumnForReplicaIdentityFullTable() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_toast_table;",
"CREATE TABLE test_toast_table (id SERIAL PRIMARY KEY);");
startConnector(Function.identity(), false);
assertConnectorIsRunning();
List<Integer> intList = IntStream.range(1, 10000).boxed().collect(Collectors.toList());
final String toastedValue = intList.stream().map(String::valueOf)
.collect(Collectors.joining(","));
// INSERT
String statement = "ALTER TABLE test_toast_table ADD COLUMN not_toast integer;"
+ "ALTER TABLE test_toast_table ADD COLUMN int_array int[];"
+ "ALTER TABLE test_toast_table ALTER COLUMN int_array SET STORAGE EXTENDED;"
+ "ALTER TABLE test_toast_table REPLICA IDENTITY FULL;"
+ "INSERT INTO test_toast_table (not_toast, int_array) values (10, ARRAY [" + toastedValue + "]);";
consumer = testConsumer(1);
executeAndWait(statement);
// after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("int_array", SchemaBuilder.array(Schema.OPTIONAL_INT32_SCHEMA).optional().build(), intList)),
consumer.remove(),
Envelope.FieldName.AFTER);
// UPDATE
statement = "UPDATE test_toast_table SET not_toast = 20;";
consumer.expects(1);
executeAndWait(statement);
consumer.process(record -> {
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_toast_table", false));
assertEquals(Arrays.asList("id", "not_toast", "int_array"), tbl.retrieveColumnNames());
});
});
SourceRecord updatedRecord = consumer.remove();
// before and after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("int_array", SchemaBuilder.array(Schema.OPTIONAL_INT32_SCHEMA).optional().build(), intList)),
updatedRecord, Envelope.FieldName.BEFORE);
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 20),
new SchemaAndValueField("int_array", SchemaBuilder.array(Schema.OPTIONAL_INT32_SCHEMA).optional().build(), intList)),
updatedRecord, Envelope.FieldName.AFTER);
}
@Test
@FixFor("DBZ-7193")
public void shouldHandleToastedBigIntArrayColumnForReplicaIdentityFullTable() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_toast_table;",
"CREATE TABLE test_toast_table (id SERIAL PRIMARY KEY);");
startConnector(Function.identity(), false);
assertConnectorIsRunning();
List<Long> bigintList = LongStream.range(1, 10000).boxed().collect(Collectors.toList());
final String toastedValue = bigintList.stream().map(String::valueOf)
.collect(Collectors.joining(","));
// INSERT
String statement = "ALTER TABLE test_toast_table ADD COLUMN not_toast integer;"
+ "ALTER TABLE test_toast_table ADD COLUMN bigint_array bigint[];"
+ "ALTER TABLE test_toast_table ALTER COLUMN bigint_array SET STORAGE EXTENDED;"
+ "ALTER TABLE test_toast_table REPLICA IDENTITY FULL;"
+ "INSERT INTO test_toast_table (not_toast, bigint_array) values (10, ARRAY [" + toastedValue + "]);";
consumer = testConsumer(1);
executeAndWait(statement);
// after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("bigint_array", SchemaBuilder.array(Schema.OPTIONAL_INT64_SCHEMA).optional().build(), bigintList)),
consumer.remove(),
Envelope.FieldName.AFTER);
// UPDATE
statement = "UPDATE test_toast_table SET not_toast = 20;";
consumer.expects(1);
executeAndWait(statement);
consumer.process(record -> {
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_toast_table", false));
assertEquals(Arrays.asList("id", "not_toast", "bigint_array"), tbl.retrieveColumnNames());
});
});
SourceRecord updatedRecord = consumer.remove();
// before and after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("bigint_array", SchemaBuilder.array(Schema.OPTIONAL_INT64_SCHEMA).optional().build(), bigintList)),
updatedRecord, Envelope.FieldName.BEFORE);
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 20),
new SchemaAndValueField("bigint_array", SchemaBuilder.array(Schema.OPTIONAL_INT64_SCHEMA).optional().build(), bigintList)),
updatedRecord, Envelope.FieldName.AFTER);
}
@Test
@FixFor("DBZ-7193")
public void shouldHandleToastedUuidArrayColumnForReplicaIdentityFullTable() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_toast_table;",
"CREATE TABLE test_toast_table (id SERIAL PRIMARY KEY);");
startConnector(Function.identity(), false);
assertConnectorIsRunning();
final List<String> toastedValueList = Stream.generate(UUID::randomUUID).map(String::valueOf).limit(10000).collect(Collectors.toList());
final String[] toastedValueArray = toastedValueList.toArray(new String[toastedValueList.size()]);
final String toastedValueQuotedString = toastedValueList.stream().map(uuid_str -> ("'" + uuid_str + "'")).collect(Collectors.joining(","));
// INSERT
String statement = "ALTER TABLE test_toast_table ADD COLUMN not_toast integer;"
+ "ALTER TABLE test_toast_table ADD COLUMN uuid_array uuid[];"
+ "ALTER TABLE test_toast_table ALTER COLUMN uuid_array SET STORAGE EXTENDED;"
+ "ALTER TABLE test_toast_table REPLICA IDENTITY FULL;"
+ "INSERT INTO test_toast_table (not_toast, uuid_array) "
+ "VALUES (10, ARRAY [" + toastedValueQuotedString + "]::uuid[]);";
consumer = testConsumer(1);
executeAndWait(statement);
// after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("uuid_array",
SchemaBuilder.array(io.debezium.data.Uuid.builder().optional().build()).optional().build(),
Arrays.asList(toastedValueArray))),
consumer.remove(),
Envelope.FieldName.AFTER);
// UPDATE
statement = "UPDATE test_toast_table SET not_toast = 20;";
consumer.expects(1);
executeAndWait(statement);
consumer.process(record -> {
assertWithTask(task -> {
Table tbl = ((PostgresConnectorTask) task).getTaskContext().schema().tableFor(TableId.parse("public.test_toast_table", false));
assertEquals(Arrays.asList("id", "not_toast", "uuid_array"), tbl.retrieveColumnNames());
});
});
SourceRecord updatedRecord = consumer.remove();
// before and after record should contain the toasted value
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("uuid_array",
SchemaBuilder.array(io.debezium.data.Uuid.builder().optional().build()).optional().build(),
Arrays.asList(toastedValueArray))),
updatedRecord, Envelope.FieldName.BEFORE);
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 20),
new SchemaAndValueField("uuid_array",
SchemaBuilder.array(io.debezium.data.Uuid.builder().optional().build()).optional().build(),
Arrays.asList(toastedValueArray))),
updatedRecord, Envelope.FieldName.AFTER);
}
@Test
@FixFor("DBZ-1029")
public void shouldReceiveChangesForTableWithoutPrimaryKey() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_table;",
"CREATE TABLE test_table (id SERIAL, text TEXT);",
"ALTER TABLE test_table REPLICA IDENTITY FULL");
startConnector(Function.identity(), false);
consumer = testConsumer(1);
// INSERT
String statement = "INSERT INTO test_table (text) VALUES ('a');";
assertInsert(
statement,
Arrays.asList(
new SchemaAndValueField("id", SchemaBuilder.int32().defaultValue(0).build(), 1), // SERIAL is NOT NULL implicitly
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "a")));
// UPDATE
consumer.expects(1);
executeAndWait("UPDATE test_table set text='b' WHERE id=1");
SourceRecord updatedRecord = consumer.remove();
VerifyRecord.isValidUpdate(updatedRecord);
List<SchemaAndValueField> expectedBefore = Arrays.asList(
new SchemaAndValueField("id", SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "a"));
assertRecordSchemaAndValues(expectedBefore, updatedRecord, Envelope.FieldName.BEFORE);
List<SchemaAndValueField> expectedAfter = Arrays.asList(
new SchemaAndValueField("id", SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "b"));
assertRecordSchemaAndValues(expectedAfter, updatedRecord, Envelope.FieldName.AFTER);
// DELETE
consumer.expects(2);
executeAndWait("DELETE FROM test_table WHERE id=1");
SourceRecord deletedRecord = consumer.remove();
VerifyRecord.isValidDelete(deletedRecord);
expectedBefore = Arrays.asList(
new SchemaAndValueField("id", SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "b"));
assertRecordSchemaAndValues(expectedBefore, deletedRecord, Envelope.FieldName.BEFORE);
expectedAfter = null;
assertRecordSchemaAndValues(expectedAfter, deletedRecord, Envelope.FieldName.AFTER);
}
@Test
@FixFor("DBZ-1146")
    public void shouldReceiveChangesForReplicaIdentityFullTableWithToastedValueTableFromSnapshot() throws Exception {
testReceiveChangesForReplicaIdentityFullTableWithToastedValue(SchemaRefreshMode.COLUMNS_DIFF_EXCLUDE_UNCHANGED_TOAST, true);
}
@Test
@FixFor("DBZ-1146")
    public void shouldReceiveChangesForReplicaIdentityFullTableWithToastedValueTableFromStreaming() throws Exception {
testReceiveChangesForReplicaIdentityFullTableWithToastedValue(SchemaRefreshMode.COLUMNS_DIFF_EXCLUDE_UNCHANGED_TOAST, false);
}
@Test
@FixFor("DBZ-1146")
    public void shouldReceiveChangesForReplicaIdentityFullTableWithToastedValueTableFromSnapshotFullDiff() throws Exception {
testReceiveChangesForReplicaIdentityFullTableWithToastedValue(SchemaRefreshMode.COLUMNS_DIFF, true);
}
@Test
@FixFor("DBZ-1146")
    public void shouldReceiveChangesForReplicaIdentityFullTableWithToastedValueTableFromStreamingFullDiff() throws Exception {
testReceiveChangesForReplicaIdentityFullTableWithToastedValue(SchemaRefreshMode.COLUMNS_DIFF, false);
}
@Test()
@FixFor("DBZ-1181")
@SkipWhenDecoderPluginNameIs(value = PGOUTPUT, reason = "Pgoutput does not dispatch events on schema changes alone")
public void testEmptyChangesProducesHeartbeat() throws Exception {
// the low heartbeat interval should make sure that a heartbeat message is emitted after each change record
// received from Postgres
startConnector(config -> config.with(Heartbeat.HEARTBEAT_INTERVAL, "100"));
waitForStreamingToStart();
TestHelper.execute(
"DROP TABLE IF EXISTS test_table;" +
"CREATE TABLE test_table (id SERIAL, text TEXT);" +
"INSERT INTO test_table (text) VALUES ('mydata');");
// Expecting 1 data change
Awaitility.await().atMost(TestHelper.waitTimeForRecords() * 10, TimeUnit.SECONDS).until(() -> {
final SourceRecord record = consumeRecord();
return record != null && Envelope.isEnvelopeSchema(record.valueSchema());
});
// Wait for heartbeat that is emitted after the data change
// This is necessary to make sure that timing does not influence the lsn count check
final Set<Long> lsns = new HashSet<>();
Awaitility.await().atMost(TestHelper.waitTimeForRecords() * 10, TimeUnit.SECONDS).until(() -> {
final SourceRecord record = consumeRecord();
if (record == null) {
return false;
}
assertThat(record.valueSchema().name()).endsWith(".Heartbeat");
lsns.add((Long) record.sourceOffset().get("lsn"));
return true;
});
// Expecting one empty DDL change
String statement = "CREATE SCHEMA s1;";
TestHelper.execute(statement);
// Expecting changes for the empty DDL change
Awaitility.await().atMost(TestHelper.waitTimeForRecords() * 10, TimeUnit.SECONDS).until(() -> {
final SourceRecord record = consumeRecord();
assertThat(record.valueSchema().name()).endsWith(".Heartbeat");
lsns.add((Long) record.sourceOffset().get("lsn"));
// CREATE SCHEMA should change LSN
return lsns.size() == 2;
});
assertThat(consumer.isEmpty()).isTrue();
}
@Test
@FixFor("DBZ-1082")
public void shouldHaveNoXminWhenNotEnabled() throws Exception {
startConnector(config -> config.with(PostgresConnectorConfig.XMIN_FETCH_INTERVAL, "0"));
TestHelper.execute("ALTER TABLE test_table REPLICA IDENTITY DEFAULT;");
String statement = "INSERT INTO test_table (text) VALUES ('no_xmin');";
executeAndWait(statement);
// Verify the record that made it does not have an xmin
SourceRecord rec = assertRecordInserted("public.test_table", PK_FIELD, 2);
assertSourceInfo(rec, "postgres", "public", "test_table");
Struct source = ((Struct) rec.value()).getStruct("source");
assertThat(source.getInt64("xmin")).isNull();
assertThat(consumer.isEmpty()).isTrue();
}
@Test
@FixFor("DBZ-1082")
public void shouldHaveXminWhenEnabled() throws Exception {
startConnector(config -> config.with(PostgresConnectorConfig.XMIN_FETCH_INTERVAL, "10"));
TestHelper.execute("ALTER TABLE test_table REPLICA IDENTITY DEFAULT;");
String statement = "INSERT INTO test_table (text) VALUES ('with_xmin');";
executeAndWait(statement);
        // Verify the record that made it does have an xmin
SourceRecord rec = assertRecordInserted("public.test_table", PK_FIELD, 2);
assertSourceInfo(rec, "postgres", "public", "test_table");
Struct source = ((Struct) rec.value()).getStruct("source");
assertThat(source.getInt64("xmin")).isGreaterThan(0L);
assertThat(consumer.isEmpty()).isTrue();
}
@Test
public void shouldProcessLargerTx() throws Exception {
Testing.Print.disable();
final int numberOfEvents = 1000;
startConnector();
waitForStreamingToStart();
final String topicPrefix = "public.test_table";
final String topicName = topicName(topicPrefix);
final Stopwatch stopwatch = Stopwatch.reusable();
consumer = testConsumer(numberOfEvents);
        // This measurement is not precise, as it also includes the time spent loading the data,
        // but it is sufficient to confirm there is no large difference
        // in runtime between the cases
stopwatch.start();
executeAndWait(IntStream.rangeClosed(2, numberOfEvents + 1)
.boxed()
.map(x -> "INSERT INTO test_table (text) VALUES ('insert" + x + "')")
.collect(Collectors.joining(";")));
stopwatch.stop();
final long firstRun = stopwatch.durations().statistics().getTotal().toMillis();
logger.info("Single tx duration = {} ms", firstRun);
for (int i = 0; i < numberOfEvents; i++) {
SourceRecord record = consumer.remove();
assertEquals(topicName, record.topic());
VerifyRecord.isValidInsert(record, PK_FIELD, i + 2);
}
consumer.expects(numberOfEvents);
IntStream.rangeClosed(2, numberOfEvents + 1).forEach(x -> TestHelper.execute("INSERT INTO test_table (text) VALUES ('insert" + x + "')"));
stopwatch.start();
// There should be no significant difference between many TX runtime and single large TX
// We still add generous limits as the runtime is in seconds and we cannot provide
// a stable scheduling environment
consumer.await(3 * firstRun, TimeUnit.MILLISECONDS);
stopwatch.stop();
for (int i = 0; i < numberOfEvents; i++) {
SourceRecord record = consumer.remove();
assertEquals(topicName, record.topic());
VerifyRecord.isValidInsert(record, PK_FIELD, i + 1002);
}
logger.info("Many tx duration = {} ms", stopwatch.durations().statistics().getTotal().toMillis());
}
@Test
@FixFor("DBZ-1824")
public void stopInTheMiddleOfTxAndResume() throws Exception {
// Testing.Print.enable();
final int numberOfEvents = 50;
final int STOP_ID = 20;
startConnector(config -> config.with(PostgresConnectorConfig.DROP_SLOT_ON_STOP, false), true, record -> {
if (!"test_server.public.test_table.Envelope".equals(record.valueSchema().name())) {
return false;
}
final Struct envelope = (Struct) record.value();
final Struct after = envelope.getStruct("after");
final Integer pk = after.getInt32("pk");
return pk == STOP_ID;
});
waitForStreamingToStart();
final String topicPrefix = "public.test_table";
final String topicName = topicName(topicPrefix);
final int expectFirstRun = STOP_ID - 2;
final int expectSecondRun = numberOfEvents - STOP_ID;
consumer = testConsumer(expectFirstRun);
executeAndWait(IntStream.rangeClosed(2, numberOfEvents + 1)
.boxed()
.map(x -> "INSERT INTO test_table (text) VALUES ('insert" + x + "')")
.collect(Collectors.joining(";")));
// 2..19, 1 is from snapshot
for (int i = 0; i < expectFirstRun; i++) {
SourceRecord record = consumer.remove();
assertEquals(topicName, record.topic());
VerifyRecord.isValidInsert(record, PK_FIELD, i + 2);
}
stopConnector();
startConnector(Function.identity(), false);
consumer.expects(expectSecondRun);
consumer.await(TestHelper.waitTimeForRecords() * 30, TimeUnit.SECONDS);
// 20..51
for (int i = 0; i < expectSecondRun; i++) {
SourceRecord record = consumer.remove();
assertEquals(topicName, record.topic());
VerifyRecord.isValidInsert(record, PK_FIELD, STOP_ID + i);
}
}
@Test
@FixFor("DBZ-2397")
public void restartConnectorInTheMiddleOfUncommittedTx() throws Exception {
// Testing.Print.enable();
final PostgresConnection tx1Connection = TestHelper.create();
tx1Connection.setAutoCommit(false);
final PostgresConnection tx2Connection = TestHelper.create();
tx2Connection.setAutoCommit(true);
startConnector(config -> config.with(PostgresConnectorConfig.DROP_SLOT_ON_STOP, false), true);
waitForStreamingToStart();
tx1Connection.executeWithoutCommitting("INSERT INTO test_table (text) VALUES ('tx-1-1')");
tx2Connection.execute("INSERT INTO test_table (text) VALUES ('tx-2-1')");
consumer = testConsumer(1);
consumer.await(TestHelper.waitTimeForRecords(), TimeUnit.SECONDS);
assertThat(((Struct) consumer.remove().value()).getStruct("after").getString("text")).isEqualTo("tx-2-1");
stopConnector();
startConnector(Function.identity(), false);
waitForStreamingToStart();
tx1Connection.executeWithoutCommitting("INSERT INTO test_table (text) VALUES ('tx-1-2')");
tx2Connection.execute("INSERT INTO test_table (text) VALUES ('tx-2-2')");
tx1Connection.executeWithoutCommitting("INSERT INTO test_table (text) VALUES ('tx-1-3')");
tx2Connection.execute("INSERT INTO test_table (text) VALUES ('tx-2-3')");
tx1Connection.commit();
consumer = testConsumer(5);
consumer.await(TestHelper.waitTimeForRecords(), TimeUnit.SECONDS);
assertThat(((Struct) consumer.remove().value()).getStruct("after").getString("text")).isEqualTo("tx-2-2");
assertThat(((Struct) consumer.remove().value()).getStruct("after").getString("text")).isEqualTo("tx-2-3");
assertThat(((Struct) consumer.remove().value()).getStruct("after").getString("text")).isEqualTo("tx-1-1");
assertThat(((Struct) consumer.remove().value()).getStruct("after").getString("text")).isEqualTo("tx-1-2");
assertThat(((Struct) consumer.remove().value()).getStruct("after").getString("text")).isEqualTo("tx-1-3");
}
@Test
@FixFor("DBZ-1730")
public void shouldStartConsumingFromSlotLocation() throws Exception {
// Testing.Print.enable();
startConnector(config -> config
.with(PostgresConnectorConfig.DROP_SLOT_ON_STOP, false)
.with(EmbeddedEngineConfig.OFFSET_STORAGE, MemoryOffsetBackingStore.class), true);
waitForStreamingToStart();
consumer = testConsumer(1);
executeAndWait("INSERT INTO test_table (text) VALUES ('insert2')");
consumer.remove();
stopConnector();
TestHelper.execute(
"INSERT INTO test_table (text) VALUES ('insert3');",
"INSERT INTO test_table (text) VALUES ('insert4')");
startConnector(config -> config
.with(PostgresConnectorConfig.DROP_SLOT_ON_STOP, true)
.with(PostgresConnectorConfig.SNAPSHOT_MODE, PostgresConnectorConfig.SnapshotMode.NO_DATA)
.with(EmbeddedEngineConfig.OFFSET_STORAGE, MemoryOffsetBackingStore.class), false);
consumer.expects(3);
consumer.await(TestHelper.waitTimeForRecords() * 5, TimeUnit.SECONDS);
        // After loss of offsets, and without doing a snapshot, we always stream the first record available in the replication slot,
        // even if we have already seen it, as this case cannot be distinguished from plain snapshot 'never' mode
assertThat(((Struct) consumer.remove().value()).getStruct("after").getString("text")).isEqualTo("insert2");
assertThat(((Struct) consumer.remove().value()).getStruct("after").getString("text")).isEqualTo("insert3");
assertThat(((Struct) consumer.remove().value()).getStruct("after").getString("text")).isEqualTo("insert4");
stopConnector();
}
@Test
@SkipWhenDatabaseVersion(check = EqualityCheck.LESS_THAN, major = 11, reason = "TRUNCATE events only supported in PG11+ PGOUTPUT Plugin")
@SkipWhenDecoderPluginNameIsNot(value = SkipWhenDecoderPluginNameIsNot.DecoderPluginName.PGOUTPUT, reason = "Tests specifically that pgoutput handles TRUNCATE messages")
public void shouldProcessTruncateMessages() throws Exception {
startConnector(builder -> builder
.with(PostgresConnectorConfig.SKIPPED_OPERATIONS, "none"));
waitForStreamingToStart();
consumer = testConsumer(1);
executeAndWait("INSERT INTO test_table (text) values ('TRUNCATE TEST');");
SourceRecord record = consumer.remove();
assertEquals(TestHelper.topicName("public.test_table"), record.topic());
VerifyRecord.isValidInsert(record, PK_FIELD, 2);
consumer.expects(1);
TestHelper.execute("TRUNCATE TABLE public.test_table RESTART IDENTITY CASCADE;");
consumer.await(TestHelper.waitTimeForRecords(), TimeUnit.SECONDS);
assertFalse(consumer.isEmpty());
SourceRecord truncateRecord = consumer.remove();
assertNotNull(truncateRecord);
VerifyRecord.isValidTruncate(truncateRecord);
assertTrue(consumer.isEmpty());
}
@Test
@SkipWhenDatabaseVersion(check = EqualityCheck.LESS_THAN, major = 11, reason = "TRUNCATE events only supported in PG11+ PGOUTPUT Plugin")
@SkipWhenDecoderPluginNameIsNot(value = SkipWhenDecoderPluginNameIsNot.DecoderPluginName.PGOUTPUT, reason = "Tests specifically that pgoutput handles TRUNCATE messages")
    public void shouldProcessTruncateMessagesWithSkippedOperationsWithoutTruncate() throws Exception {
startConnector(builder -> builder
.with(PostgresConnectorConfig.SKIPPED_OPERATIONS, "u"));
waitForStreamingToStart();
consumer = testConsumer(1);
executeAndWait("INSERT INTO test_table (text) values ('TRUNCATE TEST');");
SourceRecord record = consumer.remove();
assertEquals(TestHelper.topicName("public.test_table"), record.topic());
VerifyRecord.isValidInsert(record, PK_FIELD, 2);
consumer.expects(1);
TestHelper.execute("TRUNCATE TABLE public.test_table RESTART IDENTITY CASCADE;");
consumer.await(TestHelper.waitTimeForRecords(), TimeUnit.SECONDS);
assertFalse(consumer.isEmpty());
SourceRecord truncateRecord = consumer.remove();
assertNotNull(truncateRecord);
VerifyRecord.isValidTruncate(truncateRecord);
assertTrue(consumer.isEmpty());
}
@Test
@SkipWhenDatabaseVersion(check = EqualityCheck.LESS_THAN, major = 11, reason = "TRUNCATE events only supported in PG11+ PGOUTPUT Plugin")
@SkipWhenDecoderPluginNameIsNot(value = SkipWhenDecoderPluginNameIsNot.DecoderPluginName.PGOUTPUT, reason = "Tests specifically that pgoutput handles TRUNCATE messages")
public void shouldSkipTruncateMessagesWithSkipped() throws Exception {
startConnector(builder -> builder.with(PostgresConnectorConfig.SKIPPED_OPERATIONS, "t"));
waitForStreamingToStart();
consumer = testConsumer(1);
executeAndWait("INSERT INTO test_table (text) values ('TRUNCATE TEST');");
SourceRecord record = consumer.remove();
assertEquals(TestHelper.topicName("public.test_table"), record.topic());
VerifyRecord.isValidInsert(record, PK_FIELD, 2);
consumer.expects(0);
TestHelper.execute("TRUNCATE TABLE public.test_table RESTART IDENTITY CASCADE;");
consumer.await(TestHelper.waitTimeForRecords(), TimeUnit.SECONDS);
assertTrue(consumer.isEmpty());
}
@Test
@SkipWhenDatabaseVersion(check = EqualityCheck.LESS_THAN, major = 11, reason = "TRUNCATE events only supported in PG11+ PGOUTPUT Plugin")
    @SkipWhenDecoderPluginNameIsNot(value = SkipWhenDecoderPluginNameIsNot.DecoderPluginName.PGOUTPUT, reason = "Tests specifically that pgoutput handles TRUNCATE messages")
public void shouldProcessTruncateMessagesForMultipleTableTruncateStatement() throws Exception {
TestHelper.execute("CREATE TABLE test_table_2 (pk SERIAL, text TEXT, PRIMARY KEY(pk));");
startConnector(builder -> builder.with(PostgresConnectorConfig.SKIPPED_OPERATIONS, "none"));
waitForStreamingToStart();
consumer = testConsumer(1);
executeAndWait("INSERT INTO test_table (text) values ('TRUNCATE TEST');");
SourceRecord record = consumer.remove();
assertEquals(TestHelper.topicName("public.test_table"), record.topic());
VerifyRecord.isValidInsert(record, PK_FIELD, 2);
executeAndWait("INSERT INTO test_table_2 (text) values ('TRUNCATE TEST 2');");
SourceRecord record_2 = consumer.remove();
assertEquals(TestHelper.topicName("public.test_table_2"), record_2.topic());
VerifyRecord.isValidInsert(record_2, PK_FIELD, 1);
consumer.expects(2);
TestHelper.execute("TRUNCATE TABLE public.test_table, public.test_table_2;");
consumer.await(TestHelper.waitTimeForRecords(), TimeUnit.SECONDS);
assertFalse(consumer.isEmpty());
SourceRecord truncateRecord = consumer.remove();
assertNotNull(truncateRecord);
VerifyRecord.isValidTruncate(truncateRecord);
SourceRecord truncateRecord_2 = consumer.remove();
assertNotNull(truncateRecord_2);
VerifyRecord.isValidTruncate(truncateRecord_2);
assertTrue(consumer.isEmpty());
assertEquals(truncateRecord.sourceOffset().get("lsn_commit"), truncateRecord_2.sourceOffset().get("lsn_commit"));
assertEquals(truncateRecord.sourceOffset().get("lsn"), truncateRecord_2.sourceOffset().get("lsn"));
assertEquals(truncateRecord.sourceOffset().get("txId"), truncateRecord_2.sourceOffset().get("txId"));
consumer = testConsumer(1);
executeAndWait("INSERT INTO test_table (text) values ('TRUNCATE TEST');");
}
@Test
@FixFor("DBZ-1413")
public void shouldStreamChangesForDataTypeAlias() throws Exception {
TestHelper.execute("CREATE DOMAIN money2 AS money DEFAULT 0.0;");
TestHelper.execute("CREATE TABLE alias_table (pk SERIAL, data VARCHAR(50), salary money, salary2 money2, PRIMARY KEY(pk));");
startConnector(config -> config
.with(PostgresConnectorConfig.DECIMAL_HANDLING_MODE, DecimalHandlingMode.PRECISE)
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.INITIAL)
.with(PostgresConnectorConfig.TABLE_INCLUDE_LIST, "public.alias_table"),
false);
waitForStreamingToStart();
consumer = testConsumer(1);
executeAndWait("INSERT INTO alias_table (data, salary, salary2) values ('hello', 7.25, 8.25);");
SourceRecord rec = assertRecordInserted("public.alias_table", PK_FIELD, 1);
assertSourceInfo(rec, "postgres", "public", "alias_table");
List<SchemaAndValueField> expected = Arrays.asList(
new SchemaAndValueField("pk", SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("data", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "hello"),
                new SchemaAndValueField("salary", Decimal.builder(2).optional().build(), new BigDecimal("7.25")),
                new SchemaAndValueField("salary2", Decimal.builder(2).optional().build(), new BigDecimal("8.25")));
assertRecordSchemaAndValues(expected, rec, Envelope.FieldName.AFTER);
assertThat(consumer.isEmpty()).isTrue();
}
@Test
@FixFor("DBZ-1413")
public void shouldStreamChangesForDomainAliasAlterTable() throws Exception {
TestHelper.execute("CREATE TABLE alias_table (pk SERIAL, data VARCHAR(50), salary money, PRIMARY KEY(pk));");
startConnector(config -> config
.with(PostgresConnectorConfig.DECIMAL_HANDLING_MODE, DecimalHandlingMode.DOUBLE)
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.NO_DATA)
.with(PostgresConnectorConfig.TABLE_INCLUDE_LIST, "public.alias_table")
.with("column.propagate.source.type", "public.alias_table.salary3"),
false);
waitForStreamingToStart();
// Now that streaming has started, alter the table schema
TestHelper.execute("CREATE DOMAIN money2 AS money DEFAULT 0.0;");
TestHelper.execute("CREATE DOMAIN money3 AS numeric(8,3) DEFAULT 0.0;");
TestHelper.execute("ALTER TABLE alias_table ADD COLUMN salary2 money2 NOT NULL;");
TestHelper.execute("ALTER TABLE alias_table ADD COLUMN salary3 money3 NOT NULL;");
consumer = testConsumer(1);
executeAndWait("INSERT INTO alias_table (data, salary, salary2, salary3) values ('hello', 7.25, 8.25, 123.456);");
SourceRecord rec = assertRecordInserted("public.alias_table", PK_FIELD, 1);
assertSourceInfo(rec, "postgres", "public", "alias_table");
List<SchemaAndValueField> expected = Arrays.asList(
new SchemaAndValueField("pk", SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("data", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "hello"),
new SchemaAndValueField("salary", SchemaBuilder.OPTIONAL_FLOAT64_SCHEMA, 7.25),
new SchemaAndValueField("salary2", SchemaBuilder.FLOAT64_SCHEMA, 8.25),
new SchemaAndValueField("salary3", SchemaBuilder.float64()
.parameter(TestHelper.TYPE_NAME_PARAMETER_KEY, "MONEY3")
.parameter(TestHelper.TYPE_LENGTH_PARAMETER_KEY, "8")
.parameter(TestHelper.TYPE_SCALE_PARAMETER_KEY, "3")
.parameter(TestHelper.COLUMN_NAME_PARAMETER_KEY, "salary3")
.build(), 123.456));
assertRecordSchemaAndValues(expected, rec, Envelope.FieldName.AFTER);
assertThat(consumer.isEmpty()).isTrue();
}
@Test
@FixFor("DBZ-1413")
public void shouldStreamDomainAliasWithProperModifiers() throws Exception {
TestHelper.execute("CREATE TABLE alias_table (pk SERIAL, PRIMARY KEY(pk));");
startConnector(config -> config
.with(PostgresConnectorConfig.DECIMAL_HANDLING_MODE, DecimalHandlingMode.DOUBLE)
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.NO_DATA)
.with(PostgresConnectorConfig.TABLE_INCLUDE_LIST, "public.alias_table"),
false);
waitForStreamingToStart();
TestHelper.execute("CREATE DOMAIN varbit2 AS varbit(3);");
TestHelper.execute("ALTER TABLE public.alias_table ADD COLUMN value varbit2 NOT NULL;");
consumer = testConsumer(1);
executeAndWait("INSERT INTO public.alias_table (value) VALUES (B'101');");
SourceRecord rec = assertRecordInserted("public.alias_table", PK_FIELD, 1);
assertSourceInfo(rec, "postgres", "public", "alias_table");
List<SchemaAndValueField> expected = Arrays.asList(
new SchemaAndValueField(PK_FIELD, SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("value", Bits.builder(3).build(), new byte[]{ 5 }));
assertRecordSchemaAndValues(expected, rec, Envelope.FieldName.AFTER);
assertThat(consumer.isEmpty()).isTrue();
}
@Test
@FixFor("DBZ-1413")
public void shouldStreamValuesForDomainTypeOfDomainType() throws Exception {
TestHelper.execute("CREATE DOMAIN numeric82 as numeric(8,2);");
TestHelper.execute("CREATE DOMAIN numericex as numeric82;");
TestHelper.execute("CREATE TABLE alias_table (pk SERIAL, value numericex, PRIMARY KEY (pk));");
startConnector(config -> config
.with(PostgresConnectorConfig.DECIMAL_HANDLING_MODE, DecimalHandlingMode.DOUBLE)
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.NO_DATA)
.with(PostgresConnectorConfig.TABLE_INCLUDE_LIST, "public.alias_table")
.with("column.propagate.source.type", "public.alias_table.value"), false);
waitForStreamingToStart();
consumer = testConsumer(1);
executeAndWait("INSERT INTO alias_table (value) values (123.45);");
SourceRecord rec = assertRecordInserted("public.alias_table", PK_FIELD, 1);
assertSourceInfo(rec, "postgres", "public", "alias_table");
List<SchemaAndValueField> expected = Arrays.asList(
new SchemaAndValueField(PK_FIELD, SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("value", SpecialValueDecimal.builder(DecimalMode.DOUBLE, 8, 2)
.optional()
.parameter(TestHelper.TYPE_NAME_PARAMETER_KEY, "NUMERICEX")
.parameter(TestHelper.TYPE_LENGTH_PARAMETER_KEY, "8")
.parameter(TestHelper.TYPE_SCALE_PARAMETER_KEY, "2")
.parameter(TestHelper.COLUMN_NAME_PARAMETER_KEY, "value")
.build(), 123.45));
assertRecordSchemaAndValues(expected, rec, Envelope.FieldName.AFTER);
assertThat(consumer.isEmpty()).isTrue();
}
@Test
@FixFor("DBZ-1413")
public void shouldStreamValuesForAliasLikeBaseTypes() throws Exception {
TestHelper.execute("CREATE TABLE alias_table (pk SERIAL, PRIMARY KEY (pk));");
startConnector(config -> config
.with(PostgresConnectorConfig.DECIMAL_HANDLING_MODE, DecimalHandlingMode.DOUBLE)
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.NO_DATA)
.with(PostgresConnectorConfig.TABLE_INCLUDE_LIST, "public.alias_table"),
false);
waitForStreamingToStart();
// note: skipped macaddr8 as that is only supported on PG10+ but was manually tested
TestHelper.execute("CREATE DOMAIN bit2 AS BIT(3);");
TestHelper.execute("CREATE DOMAIN smallint2 AS smallint;");
TestHelper.execute("CREATE DOMAIN integer2 as integer;");
TestHelper.execute("CREATE DOMAIN bigint2 as bigint;");
TestHelper.execute("CREATE DOMAIN real2 as real;");
TestHelper.execute("CREATE DOMAIN bool2 AS BOOL DEFAULT false;");
TestHelper.execute("CREATE DOMAIN float82 as float8;");
TestHelper.execute("CREATE DOMAIN numeric2 as numeric(6,2);");
TestHelper.execute("CREATE DOMAIN string2 AS varchar(25) DEFAULT NULL;");
TestHelper.execute("CREATE DOMAIN date2 AS date;");
TestHelper.execute("CREATE DOMAIN time2 as time;");
TestHelper.execute("CREATE DOMAIN timetz2 as timetz;");
TestHelper.execute("CREATE DOMAIN timestamp2 as timestamp;");
TestHelper.execute("CREATE DOMAIN timestamptz2 AS timestamptz;");
TestHelper.execute("CREATE DOMAIN timewotz2 as time without time zone;");
TestHelper.execute("CREATE DOMAIN box2 as box;");
TestHelper.execute("CREATE DOMAIN circle2 as circle;");
TestHelper.execute("CREATE DOMAIN interval2 as interval;");
TestHelper.execute("CREATE DOMAIN line2 as line;");
TestHelper.execute("CREATE DOMAIN lseg2 as lseg;");
TestHelper.execute("CREATE DOMAIN path2 as path;");
TestHelper.execute("CREATE DOMAIN point2 as point;");
TestHelper.execute("CREATE DOMAIN polygon2 as polygon;");
TestHelper.execute("CREATE DOMAIN char2 as char;");
TestHelper.execute("CREATE DOMAIN text2 as text;");
TestHelper.execute("CREATE DOMAIN json2 as json;");
TestHelper.execute("CREATE DOMAIN xml2 as xml;");
TestHelper.execute("CREATE DOMAIN uuid2 as uuid;");
TestHelper.execute("CREATE DOMAIN varbit2 as varbit(3);");
TestHelper.execute("CREATE DOMAIN inet2 as inet;");
TestHelper.execute("CREATE DOMAIN cidr2 as cidr;");
TestHelper.execute("CREATE DOMAIN macaddr2 as macaddr;");
TestHelper.execute("ALTER TABLE alias_table "
+ "ADD COLUMN bit_base bit(3) NOT NULL, ADD COLUMN bit_alias bit2 NOT NULL, "
+ "ADD COLUMN smallint_base smallint NOT NULL, ADD COLUMN smallint_alias smallint2 NOT NULL, "
+ "ADD COLUMN integer_base integer NOT NULL, ADD COLUMN integer_alias integer2 NOT NULL, "
+ "ADD COLUMN bigint_base bigint NOT NULL, ADD COLUMN bigint_alias bigint2 NOT NULL, "
+ "ADD COLUMN real_base real NOT NULL, ADD COLUMN real_alias real2 NOT NULL, "
+ "ADD COLUMN float8_base float8 NOT NULL, ADD COLUMN float8_alias float82 NOT NULL, "
+ "ADD COLUMN numeric_base numeric(6,2) NOT NULL, ADD COLUMN numeric_alias numeric2 NOT NULL, "
+ "ADD COLUMN bool_base bool NOT NULL, ADD COLUMN bool_alias bool2 NOT NULL, "
+ "ADD COLUMN string_base varchar(25) NOT NULL, ADD COLUMN string_alias string2 NOT NULL, "
+ "ADD COLUMN date_base date NOT NULL, ADD COLUMN date_alias date2 NOT NULL, "
+ "ADD COLUMN time_base time NOT NULL, ADD COLUMN time_alias time2 NOT NULL, "
+ "ADD COLUMN timetz_base timetz NOT NULL, ADD COLUMN timetz_alias timetz2 NOT NULL, "
+ "ADD COLUMN timestamp_base timestamp NOT NULL, ADD COLUMN timestamp_alias timestamp2 NOT NULL, "
+ "ADD COLUMN timestamptz_base timestamptz NOT NULL, ADD COLUMN timestamptz_alias timestamptz2 NOT NULL, "
+ "ADD COLUMN timewottz_base time without time zone NOT NULL, ADD COLUMN timewottz_alias timewotz2 NOT NULL, "
+ "ADD COLUMN box_base box NOT NULL, ADD COLUMN box_alias box2 NOT NULL, "
+ "ADD COLUMN circle_base circle NOT NULL, ADD COLUMN circle_alias circle2 NOT NULL, "
+ "ADD COLUMN interval_base interval NOT NULL, ADD COLUMN interval_alias interval2 NOT NULL, "
+ "ADD COLUMN line_base line NOT NULL, ADD COLUMN line_alias line2 NOT NULL, "
+ "ADD COLUMN lseg_base lseg NOT NULL, ADD COLUMN lseg_alias lseg2 NOT NULL, "
+ "ADD COLUMN path_base path NOT NULL, ADD COLUMN path_alias path2 NOT NULL, "
+ "ADD COLUMN point_base point NOT NULL, ADD COLUMN point_alias point2 NOT NULL, "
+ "ADD COLUMN polygon_base polygon NOT NULL, ADD COLUMN polygon_alias polygon2 NOT NULL, "
+ "ADD COLUMN char_base char NOT NULL, ADD COLUMN char_alias char2 NOT NULL, "
+ "ADD COLUMN text_base text NOT NULL, ADD COLUMN text_alias text2 NOT NULL, "
+ "ADD COLUMN json_base json NOT NULL, ADD COLUMN json_alias json2 NOT NULL, "
+ "ADD COLUMN xml_base xml NOT NULL, ADD COLUMN xml_alias xml2 NOT NULL, "
+ "ADD COLUMN uuid_base UUID NOT NULL, ADD COLUMN uuid_alias uuid2 NOT NULL, "
+ "ADD COLUMN varbit_base varbit(3) NOT NULL, ADD COLUMN varbit_alias varbit2 NOT NULL,"
+ "ADD COLUMN inet_base inet NOT NULL, ADD COLUMN inet_alias inet2 NOT NULL, "
+ "ADD COLUMN cidr_base cidr NOT NULL, ADD COLUMN cidr_alias cidr2 NOT NULL, "
+ "ADD COLUMN macaddr_base macaddr NOT NULL, ADD COLUMN macaddr_alias macaddr2 NOT NULL");
consumer = testConsumer(1);
executeAndWait("INSERT INTO alias_table ("
+ "bit_base, bit_alias, "
+ "smallint_base, smallint_alias, "
+ "integer_base, integer_alias, "
+ "bigint_base, bigint_alias, "
+ "real_base, real_alias, "
+ "float8_base, float8_alias, "
+ "numeric_base, numeric_alias, "
+ "bool_base, bool_alias, "
+ "string_base, string_alias, "
+ "date_base, date_alias, "
+ "time_base, time_alias, "
+ "timetz_base, timetz_alias, "
+ "timestamp_base, timestamp_alias, "
+ "timestamptz_base, timestamptz_alias, "
+ "timewottz_base, timewottz_alias, "
+ "box_base, box_alias, "
+ "circle_base, circle_alias, "
+ "interval_base, interval_alias, "
+ "line_base, line_alias, "
+ "lseg_base, lseg_alias, "
+ "path_base, path_alias, "
+ "point_base, point_alias, "
+ "polygon_base, polygon_alias, "
+ "char_base, char_alias, "
+ "text_base, text_alias, "
+ "json_base, json_alias, "
+ "xml_base, xml_alias, "
+ "uuid_base, uuid_alias, "
+ "varbit_base, varbit_alias, "
+ "inet_base, inet_alias, "
+ "cidr_base, cidr_alias, "
+ "macaddr_base, macaddr_alias "
+ ") VALUES ("
+ "B'101', B'101', "
+ "1, 1, "
+ "1, 1, "
+ "1000, 1000, "
+ "3.14, 3.14, "
+ "3.14, 3.14, "
+ "1234.12, 1234.12, "
+ "true, true, "
+ "'hello', 'hello', "
+ "'2019-10-02', '2019-10-02', "
+ "'01:02:03', '01:02:03', "
+ "'01:02:03.123789Z', '01:02:03.123789Z', "
+ "'2019-10-02T01:02:03.123456', '2019-10-02T01:02:03.123456', "
+ "'2019-10-02T13:51:30.123456+02:00'::TIMESTAMPTZ, '2019-10-02T13:51:30.123456+02:00'::TIMESTAMPTZ, "
+ "'01:02:03', '01:02:03', "
+ "'(0,0),(1,1)', '(0,0),(1,1)', "
+ "'10,4,10', '10,4,10', "
+ "'1 year 2 months 3 days 4 hours 5 minutes 6 seconds', '1 year 2 months 3 days 4 hours 5 minutes 6 seconds', "
+ "'(0,0),(0,1)', '(0,0),(0,1)', "
+ "'((0,0),(0,1))', '((0,0),(0,1))', "
+ "'((0,0),(0,1),(0,2))', '((0,0),(0,1),(0,2))', "
+ "'(1,1)', '(1,1)', "
+ "'((0,0),(0,1),(1,0),(0,0))', '((0,0),(0,1),(1,0),(0,0))', "
+ "'a', 'a', "
+ "'Hello World', 'Hello World', "
+ "'{\"key\": \"value\"}', '{\"key\": \"value\"}', "
+ "XML('<foo>Hello</foo>'), XML('<foo>Hello</foo>'), "
+ "'40e6215d-b5c6-4896-987c-f30f3678f608', '40e6215d-b5c6-4896-987c-f30f3678f608', "
+ "B'101', B'101', "
+ "'192.168.0.1', '192.168.0.1', "
+ "'192.168/24', '192.168/24', "
+ "'08:00:2b:01:02:03', '08:00:2b:01:02:03' "
+ ");");
SourceRecord rec = assertRecordInserted("public.alias_table", PK_FIELD, 1);
assertSourceInfo(rec, "postgres", "public", "alias_table");
assertRecordSchemaAndValues(schemasAndValuesForDomainAliasTypes(true), rec, Envelope.FieldName.AFTER);
assertThat(consumer.isEmpty()).isTrue();
}
@Test
@FixFor("DBZ-920")
public void shouldStreamEnumAsKnownType() throws Exception {
// Specifically enable `column.propagate.source.type` here to validate later that the actual
// type, length, and scale values are resolved correctly when paired with Enum types.
TestHelper.execute("CREATE TABLE enum_table (pk SERIAL, PRIMARY KEY (pk));");
startConnector(config -> config
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.NO_DATA)
.with("column.propagate.source.type", "public.enum_table.value")
.with(PostgresConnectorConfig.TABLE_INCLUDE_LIST, "public.enum_table"), false);
waitForStreamingToStart();
// We create the enum type after streaming started to simulate some future schema change
TestHelper.execute("CREATE TYPE test_type AS ENUM ('V1','V2');");
TestHelper.execute("ALTER TABLE enum_table ADD COLUMN value test_type NOT NULL");
consumer = testConsumer(1);
executeAndWait("INSERT INTO enum_table (value) VALUES ('V1');");
SourceRecord rec = assertRecordInserted("public.enum_table", PK_FIELD, 1);
assertSourceInfo(rec, "postgres", "public", "enum_table");
List<SchemaAndValueField> expected = Arrays.asList(
new SchemaAndValueField(PK_FIELD, SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("value", Enum.builder("V1,V2")
.parameter(TestHelper.TYPE_NAME_PARAMETER_KEY, "TEST_TYPE")
.parameter(TestHelper.TYPE_LENGTH_PARAMETER_KEY, String.valueOf(Integer.MAX_VALUE))
.parameter(TestHelper.TYPE_SCALE_PARAMETER_KEY, "0")
.parameter(TestHelper.COLUMN_NAME_PARAMETER_KEY, "value")
.build(), "V1"));
assertRecordSchemaAndValues(expected, rec, Envelope.FieldName.AFTER);
assertThat(consumer.isEmpty()).isTrue();
}
@Test
@FixFor("DBZ-5038")
public void shouldEmitEnumColumnDefaultValuesInSchema() throws Exception {
// Specifically enable `column.propagate.source.type` here to validate later that the actual
// type, length, and scale values are resolved correctly when paired with Enum types.
TestHelper.execute("CREATE TABLE enum_table (pk SERIAL, PRIMARY KEY (pk));");
startConnector(config -> config
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, true)
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.NO_DATA)
.with("column.propagate.source.type", "public.enum_table.value")
.with(PostgresConnectorConfig.TABLE_INCLUDE_LIST, "public.enum_table"), false);
waitForStreamingToStart();
// We create the enum type after streaming started to simulate some future schema change
TestHelper.execute("CREATE TYPE test_type AS ENUM ('V1','V2');");
TestHelper.execute("ALTER TABLE enum_table ADD COLUMN data varchar(50) NOT NULL");
TestHelper.execute("ALTER TABLE enum_table ADD COLUMN value test_type NOT NULL DEFAULT 'V2'::test_type");
consumer = testConsumer(1);
executeAndWait("INSERT INTO enum_table (data) VALUES ('V1');");
SourceRecord rec = assertRecordInserted("public.enum_table", PK_FIELD, 1);
assertSourceInfo(rec, "postgres", "public", "enum_table");
List<SchemaAndValueField> expected = Arrays.asList(
new SchemaAndValueField(PK_FIELD, SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("data", SchemaBuilder.string().build(), "V1"),
new SchemaAndValueField("value", Enum.builder("V1,V2")
.parameter(TestHelper.TYPE_NAME_PARAMETER_KEY, "TEST_TYPE")
.parameter(TestHelper.TYPE_LENGTH_PARAMETER_KEY, String.valueOf(Integer.MAX_VALUE))
.parameter(TestHelper.TYPE_SCALE_PARAMETER_KEY, "0")
.parameter(TestHelper.COLUMN_NAME_PARAMETER_KEY, "value")
.defaultValue("V2")
.build(), "V2"));
assertRecordSchemaAndValues(expected, rec, Envelope.FieldName.AFTER);
assertThat(consumer.isEmpty()).isTrue();
}
@Test
public void shouldStreamEnumArrayAsKnownType() throws Exception {
// Specifically enable `column.propagate.source.type` here to validate later that the actual
// type, length, and scale values are resolved correctly when paired with Enum types.
TestHelper.execute("CREATE TABLE enum_array_table (pk SERIAL, PRIMARY KEY (pk));");
startConnector(config -> config
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, false)
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.NO_DATA)
.with("column.propagate.source.type", "public.enum_array_table.value")
.with(PostgresConnectorConfig.TABLE_INCLUDE_LIST, "public.enum_array_table"), false);
waitForStreamingToStart();
// We create the enum type after streaming started to simulate some future schema change
TestHelper.execute("CREATE TYPE test_type AS ENUM ('V1','V2');");
TestHelper.execute("ALTER TABLE enum_array_table ADD COLUMN value test_type[] NOT NULL;");
consumer = testConsumer(1);
// INSERT
executeAndWait("INSERT INTO enum_array_table (value) VALUES ('{V1, V2}');");
SourceRecord insertRec = assertRecordInserted("public.enum_array_table", PK_FIELD, 1);
assertSourceInfo(insertRec, "postgres", "public", "enum_array_table");
List<SchemaAndValueField> expectedInsert = Arrays.asList(
new SchemaAndValueField(PK_FIELD, SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("value", SchemaBuilder.array(Enum.builder("V1,V2"))
.parameter(TestHelper.TYPE_NAME_PARAMETER_KEY, "_TEST_TYPE")
.parameter(TestHelper.TYPE_LENGTH_PARAMETER_KEY, String.valueOf(Integer.MAX_VALUE))
.parameter(TestHelper.TYPE_SCALE_PARAMETER_KEY, "0")
.parameter(TestHelper.COLUMN_NAME_PARAMETER_KEY, "value")
.build(), Arrays.asList("V1", "V2")));
assertRecordSchemaAndValues(expectedInsert, insertRec, Envelope.FieldName.AFTER);
assertThat(consumer.isEmpty()).isTrue();
// UPDATE
executeAndWait("UPDATE enum_array_table set value = '{V1}';");
SourceRecord updateRec = consumer.remove();
assertSourceInfo(updateRec, "postgres", "public", "enum_array_table");
List<SchemaAndValueField> expectedUpdate = Arrays.asList(
new SchemaAndValueField(PK_FIELD, SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("value", SchemaBuilder.array(Enum.builder("V1,V2"))
.parameter(TestHelper.TYPE_NAME_PARAMETER_KEY, "_TEST_TYPE")
.parameter(TestHelper.TYPE_LENGTH_PARAMETER_KEY, String.valueOf(Integer.MAX_VALUE))
.parameter(TestHelper.TYPE_SCALE_PARAMETER_KEY, "0")
.parameter(TestHelper.COLUMN_NAME_PARAMETER_KEY, "value")
.build(), Arrays.asList("V1")));
assertRecordSchemaAndValues(expectedUpdate, updateRec, Envelope.FieldName.AFTER);
assertThat(consumer.isEmpty()).isTrue();
// DELETE
executeAndWait("DELETE FROM enum_array_table;");
SourceRecord deleteRec = consumer.remove();
VerifyRecord.isValidDelete(deleteRec, PK_FIELD, 1);
        assertSourceInfo(deleteRec, "postgres", "public", "enum_array_table");
assertThat(consumer.isEmpty()).isTrue();
}
@Test
@FixFor("DBZ-1969")
public void shouldStreamTimeArrayTypesAsKnownTypes() throws Exception {
TestHelper.execute("CREATE TABLE time_array_table (pk SERIAL, "
+ "timea time[] NOT NULL, "
+ "timetza timetz[] NOT NULL, "
+ "timestampa timestamp[] NOT NULL, "
+ "timestamptza timestamptz[] NOT NULL, primary key(pk));");
startConnector(config -> config
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, false)
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.NO_DATA)
.with(PostgresConnectorConfig.TABLE_INCLUDE_LIST, "public.time_array_table"), false);
waitForStreamingToStart();
consumer = testConsumer(1);
// INSERT
executeAndWait("INSERT INTO time_array_table (timea, timetza, timestampa, timestamptza) "
+ "values ("
+ "'{00:01:02,01:02:03}', "
+ "'{13:51:02+0200,14:51:03+0200}', "
+ "'{2020-04-01 00:01:02,2020-04-01 01:02:03}', "
+ "'{2020-04-01 13:51:02+02,2020-04-01 14:51:03+02}')");
SourceRecord insert = assertRecordInserted("public.time_array_table", PK_FIELD, 1);
assertSourceInfo(insert, "postgres", "public", "time_array_table");
assertRecordSchemaAndValues(schemaAndValuesForTimeArrayTypes(), insert, Envelope.FieldName.AFTER);
assertThat(consumer.isEmpty()).isTrue();
// UPDATE
executeAndWait("UPDATE time_array_table SET "
+ "timea = '{00:01:02,02:03:04}', "
+ "timetza = '{00:01:02-0400,01:03:04-0400}', "
+ "timestampa = '{2020-04-01 00:01:02,2020-04-25 03:04:05}', "
+ "timestamptza = '{2020-04-01 00:01:02-04,2020-04-25 03:04:05-04}'");
SourceRecord update = consumer.remove();
assertSourceInfo(update, "postgres", "public", "time_array_table");
List<SchemaAndValueField> expectedUpdate = Arrays.asList(
new SchemaAndValueField("timea",
SchemaBuilder.array(MicroTime.builder().optional().build()).build(),
Arrays.asList(LocalTime.parse("00:01:02").toNanoOfDay() / 1_000,
LocalTime.parse("02:03:04").toNanoOfDay() / 1_000)),
new SchemaAndValueField("timetza",
SchemaBuilder.array(ZonedTime.builder().optional().build()).build(),
Arrays.asList("04:01:02Z", "05:03:04Z")),
new SchemaAndValueField("timestampa",
SchemaBuilder.array(MicroTimestamp.builder().optional().build()).build(),
Arrays.asList(OffsetDateTime.of(2020, 4, 1, 0, 1, 2, 0, ZoneOffset.UTC).toInstant().toEpochMilli() * 1_000,
OffsetDateTime.of(2020, 4, 25, 3, 4, 5, 0, ZoneOffset.UTC).toInstant().toEpochMilli() * 1_000)),
new SchemaAndValueField("timestamptza",
SchemaBuilder.array(ZonedTimestamp.builder().optional().build()).build(),
Arrays.asList("2020-04-01T04:01:02.000000Z", "2020-04-25T07:04:05.000000Z")));
assertRecordSchemaAndValues(expectedUpdate, update, Envelope.FieldName.AFTER);
assertThat(consumer.isEmpty()).isTrue();
// DELETE
executeAndWait("DELETE FROM time_array_table;");
SourceRecord deleteRec = consumer.remove();
VerifyRecord.isValidDelete(deleteRec, PK_FIELD, 1);
assertSourceInfo(deleteRec, "postgres", "public", "time_array_table");
assertThat(consumer.isEmpty()).isTrue();
}
@Test
@FixFor({ "DBZ-1680", "DBZ-5038" })
public void shouldStreamEnumsWhenIncludeUnknownDataTypesDisabled() throws Exception {
// Specifically enable `column.propagate.source.type` here to validate later that the actual
// type, length, and scale values are resolved correctly when paired with Enum types.
TestHelper.execute("CREATE TYPE test_type AS ENUM ('V1','V2');");
TestHelper.execute("CREATE TABLE enum_table (pk SERIAL, data varchar(25) NOT NULL, value test_type NOT NULL DEFAULT 'V1', PRIMARY KEY (pk));");
startConnector(config -> config
.with(PostgresConnectorConfig.INCLUDE_UNKNOWN_DATATYPES, false)
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.NO_DATA)
.with("column.propagate.source.type", "public.enum_table.value")
.with(PostgresConnectorConfig.TABLE_INCLUDE_LIST, "public.enum_table"), false);
waitForStreamingToStart();
consumer = testConsumer(1);
executeAndWait("INSERT INTO enum_table (data) VALUES ('hello');");
SourceRecord rec = assertRecordInserted("public.enum_table", PK_FIELD, 1);
assertSourceInfo(rec, "postgres", "public", "enum_table");
List<SchemaAndValueField> expected = Arrays.asList(
new SchemaAndValueField(PK_FIELD, SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("data", Schema.STRING_SCHEMA, "hello"),
new SchemaAndValueField("value", Enum.builder("V1,V2")
.parameter(TestHelper.TYPE_NAME_PARAMETER_KEY, "TEST_TYPE")
.parameter(TestHelper.TYPE_LENGTH_PARAMETER_KEY, String.valueOf(Integer.MAX_VALUE))
.parameter(TestHelper.TYPE_SCALE_PARAMETER_KEY, "0")
.parameter(TestHelper.COLUMN_NAME_PARAMETER_KEY, "value")
.defaultValue("V1")
.build(), "V1"));
assertRecordSchemaAndValues(expected, rec, Envelope.FieldName.AFTER);
assertThat(consumer.isEmpty()).isTrue();
}
private void testReceiveChangesForReplicaIdentityFullTableWithToastedValue(PostgresConnectorConfig.SchemaRefreshMode mode, boolean tablesBeforeStart)
throws Exception {
if (tablesBeforeStart) {
TestHelper.execute(
"DROP TABLE IF EXISTS test_table;",
"CREATE TABLE test_table (id SERIAL, not_toast int, text TEXT);",
"ALTER TABLE test_table REPLICA IDENTITY FULL");
awaitTableMetaDataIsQueryable(new TableId(null, "public", "test_table"));
}
startConnector(config -> config.with(PostgresConnectorConfig.SCHEMA_REFRESH_MODE, mode), false);
assertConnectorIsRunning();
consumer = testConsumer(1);
final String toastedValue = RandomStringUtils.randomAlphanumeric(10000);
if (!tablesBeforeStart) {
waitForStreamingToStart();
TestHelper.execute(
"DROP TABLE IF EXISTS test_table;",
"CREATE TABLE test_table (id SERIAL, not_toast int, text TEXT);",
"ALTER TABLE test_table REPLICA IDENTITY FULL");
awaitTableMetaDataIsQueryable(new TableId(null, "public", "test_table"));
}
// INSERT
String statement = "INSERT INTO test_table (not_toast, text) VALUES (10,'" + toastedValue + "');";
assertInsert(
statement,
Arrays.asList(
new SchemaAndValueField("id", SchemaBuilder.int32().defaultValue(0).build(), 1), // SERIAL is NOT NULL implicitly
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, toastedValue)));
// UPDATE
consumer.expects(1);
executeAndWait("UPDATE test_table set not_toast = 20");
SourceRecord updatedRecord = consumer.remove();
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("id", SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 10),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, toastedValue)), updatedRecord, Envelope.FieldName.BEFORE);
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("id", SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 20),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, toastedValue)), updatedRecord, Envelope.FieldName.AFTER);
// DELETE
consumer.expects(2);
executeAndWait("DELETE FROM test_table");
SourceRecord deletedRecord = consumer.remove();
SourceRecord tombstoneRecord = consumer.remove();
assertThat(tombstoneRecord.value()).isNull();
assertThat(tombstoneRecord.valueSchema()).isNull();
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("id", SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 20),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, toastedValue)), deletedRecord, Envelope.FieldName.BEFORE);
// INSERT null
consumer.expects(1);
statement = "INSERT INTO test_table (not_toast, text) VALUES (100, null);";
assertInsert(
statement,
Arrays.asList(
new SchemaAndValueField("id", SchemaBuilder.int32().defaultValue(0).build(), 2), // SERIAL is NOT NULL implicitly
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 100),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, null)));
// UPDATE null
consumer.expects(1);
executeAndWait("UPDATE test_table set not_toast = 200 WHERE id=2");
updatedRecord = consumer.remove();
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("id", SchemaBuilder.int32().defaultValue(0).build(), 2),
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 100),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, null)), updatedRecord, Envelope.FieldName.BEFORE);
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("id", SchemaBuilder.int32().defaultValue(0).build(), 2),
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 200),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, null)), updatedRecord, Envelope.FieldName.AFTER);
// DELETE null
consumer.expects(2);
executeAndWait("DELETE FROM test_table WHERE id=2");
deletedRecord = consumer.remove();
tombstoneRecord = consumer.remove();
assertThat(tombstoneRecord.value()).isNull();
assertThat(tombstoneRecord.valueSchema()).isNull();
assertRecordSchemaAndValues(Arrays.asList(
new SchemaAndValueField("id", SchemaBuilder.int32().defaultValue(0).build(), 2),
new SchemaAndValueField("not_toast", SchemaBuilder.OPTIONAL_INT32_SCHEMA, 200),
new SchemaAndValueField("text", SchemaBuilder.OPTIONAL_STRING_SCHEMA, null)), deletedRecord, Envelope.FieldName.BEFORE);
}
/**
* It appears in some cases retrieving column metadata "too quickly" raises
* a PSQLException: ERROR: could not open relation with OID xyz.
* This causes intermittent failures during schema refresh.
* This is an attempt to avoid that situation by making sure the metadata can be retrieved
* before proceeding.
*/
private void awaitTableMetaDataIsQueryable(TableId tableId) {
Awaitility.await()
.atMost(TestHelper.waitTimeForRecords() * 10, TimeUnit.SECONDS)
.ignoreException(PSQLException.class)
.until(() -> {
try (PostgresConnection connection = TestHelper.createWithTypeRegistry()) {
Tables tables = new Tables();
connection.readSchema(tables, null, "public", TableFilter.fromPredicate(t -> t.equals(tableId)), null, false);
return tables.forTable(tableId) != null;
}
});
}
@Test()
@FixFor("DBZ-1815")
public void testHeartbeatActionQueryExecuted() throws Exception {
TestHelper.execute(
"DROP TABLE IF EXISTS test_table;" +
"CREATE TABLE test_table (id SERIAL, text TEXT);" +
"INSERT INTO test_table (text) VALUES ('mydata');");
TestHelper.execute(
"DROP TABLE IF EXISTS test_heartbeat_table;" +
"CREATE TABLE test_heartbeat_table (text TEXT);");
// A low heartbeat interval should make sure that a heartbeat message is emitted at least once during the test.
startConnector(config -> config
.with(Heartbeat.HEARTBEAT_INTERVAL, "100")
.with(DatabaseHeartbeatImpl.HEARTBEAT_ACTION_QUERY,
"INSERT INTO test_heartbeat_table (text) VALUES ('test_heartbeat');"));
// Expecting 1 data change
Awaitility.await().atMost(TestHelper.waitTimeForRecords() * 10, TimeUnit.SECONDS).until(() -> {
final SourceRecord record = consumeRecord();
return record != null && Envelope.isEnvelopeSchema(record.valueSchema());
});
// Confirm that the heartbeat.action.query was executed with the heartbeat. It is difficult to determine the
// exact amount of times the heartbeat will fire because the run time of the test will vary, but if there is
// anything in test_heartbeat_table then this test is confirmed.
int numOfHeartbeatActions;
final String slotQuery = "SELECT COUNT(*) FROM test_heartbeat_table;";
final JdbcConnection.ResultSetMapper<Integer> slotQueryMapper = rs -> {
rs.next();
return rs.getInt(1);
};
try (PostgresConnection connection = TestHelper.create()) {
numOfHeartbeatActions = connection.queryAndMap(slotQuery, slotQueryMapper);
}
assertTrue(numOfHeartbeatActions > 0);
}
@Test
@FixFor({ "DBZ-1916", "DBZ-1830" })
public void shouldPropagateSourceTypeByDatatype() throws Exception {
TestHelper.execute("DROP TABLE IF EXISTS test_table;");
TestHelper.execute("CREATE TABLE test_table (id SERIAL, c1 INT, c2 INT, c3a NUMERIC(5,2), c3b VARCHAR(128), f1 float(10), f2 decimal(8,4), primary key (id));");
startConnector(config -> config
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.NO_DATA)
.with("datatype.propagate.source.type", ".+\\.NUMERIC,.+\\.VARCHAR,.+\\.FLOAT4"), false);
waitForStreamingToStart();
consumer = testConsumer(1);
executeAndWait("INSERT INTO test_table (id,c1,c2,c3a,c3b,f1,f2) values (1, 123, 456, 789.01, 'test', 1.228, 234.56);");
final SourceRecord record = assertRecordInserted("public.test_table", "id", 1);
final Field before = record.valueSchema().field("before");
// no type info requested as per given data types
assertThat(before.schema().field("id").schema().parameters()).isNull();
assertThat(before.schema().field("c1").schema().parameters()).isNull();
assertThat(before.schema().field("c2").schema().parameters()).isNull();
assertThat(before.schema().field("c3a").schema().parameters()).contains(
entry(TYPE_NAME_PARAMETER_KEY, "NUMERIC"),
entry(TYPE_LENGTH_PARAMETER_KEY, "5"),
entry(TYPE_SCALE_PARAMETER_KEY, "2"));
// variable width, name and length info
assertThat(before.schema().field("c3b").schema().parameters()).contains(
entry(TYPE_NAME_PARAMETER_KEY, "VARCHAR"),
entry(TYPE_LENGTH_PARAMETER_KEY, "128"));
assertThat(before.schema().field("f2").schema().parameters()).contains(
entry(TYPE_NAME_PARAMETER_KEY, "NUMERIC"),
entry(TYPE_LENGTH_PARAMETER_KEY, "8"),
entry(TYPE_SCALE_PARAMETER_KEY, "4"));
assertThat(before.schema().field("f1").schema().parameters()).contains(
entry(TYPE_NAME_PARAMETER_KEY, "FLOAT4"),
entry(TYPE_LENGTH_PARAMETER_KEY, "8"),
entry(TYPE_SCALE_PARAMETER_KEY, "8"));
}
@Test
@FixFor({ "DBZ-3074" })
public void shouldMaintainPrimaryKeyOrderOnSchemaChange() throws Exception {
startConnector();
consumer = testConsumer(1);
executeAndWait("CREATE TABLE test_should_maintain_primary_key_order(b INTEGER, d INTEGER, c INTEGER, a INTEGER, val INTEGER, PRIMARY KEY (b, d, c, a));" +
"INSERT INTO test_should_maintain_primary_key_order VALUES (1, 2, 3, 4, 5);");
SourceRecord record = consumer.remove();
assertEquals(1, ((Struct) record.value()).getStruct("after").getInt32("b").intValue());
List<Field> fields = record.keySchema().fields();
String[] expectedFieldOrder = new String[]{ "b", "d", "c", "a" };
for (int i = 0; i < fields.size(); i++) {
assertEquals("Key field names should be in order", expectedFieldOrder[i], fields.get(i).name());
}
// Alter the table to trigger a schema change event. Validate that the new schema maintains the primary key order.
consumer.expects(1);
executeAndWait("ALTER TABLE test_should_maintain_primary_key_order ADD COLUMN val2 INTEGER;" +
"INSERT INTO test_should_maintain_primary_key_order VALUES (10, 11, 12, 13, 14, 15);");
record = consumer.remove();
assertEquals(10, ((Struct) record.value()).getStruct("after").getInt32("b").intValue());
fields = record.keySchema().fields();
for (int i = 0; i < fields.size(); i++) {
assertEquals("Key field names should be in order", expectedFieldOrder[i], fields.get(i).name());
}
}
@Test
@FixFor("DBZ-1931")
public void testStreamMoneyAsDefaultPrecise() throws Exception {
TestHelper.execute("CREATE TABLE salary (pk SERIAL, name VARCHAR(50), salary money, PRIMARY KEY(pk));");
startConnector(config -> config
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.INITIAL)
.with(PostgresConnectorConfig.TABLE_INCLUDE_LIST, "public.salary"),
false);
waitForStreamingToStart();
consumer = testConsumer(1);
executeAndWait("INSERT INTO salary (name, salary) values ('Joe', 123.45);");
SourceRecord rec = assertRecordInserted("public.salary", PK_FIELD, 1);
assertSourceInfo(rec, "postgres", "public", "salary");
List<SchemaAndValueField> expected = Arrays.asList(
new SchemaAndValueField("pk", SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("name", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "Joe"),
new SchemaAndValueField("salary", Decimal.builder(2).optional().build(), BigDecimal.valueOf(123.45)));
assertRecordSchemaAndValues(expected, rec, Envelope.FieldName.AFTER);
assertThat(consumer.isEmpty()).isTrue();
}
@Test
@FixFor("DBZ-1931")
public void testStreamMoneyAsString() throws Exception {
TestHelper.execute("CREATE TABLE salary (pk SERIAL, name VARCHAR(50), salary money, PRIMARY KEY(pk));");
startConnector(config -> config
.with(PostgresConnectorConfig.DECIMAL_HANDLING_MODE, DecimalHandlingMode.STRING)
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.INITIAL)
.with(PostgresConnectorConfig.TABLE_INCLUDE_LIST, "public.salary"),
false);
waitForStreamingToStart();
consumer = testConsumer(1);
executeAndWait("INSERT INTO salary (name, salary) values ('Joe', 123.45);");
SourceRecord rec = assertRecordInserted("public.salary", PK_FIELD, 1);
assertSourceInfo(rec, "postgres", "public", "salary");
List<SchemaAndValueField> expected = Arrays.asList(
new SchemaAndValueField("pk", SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("name", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "Joe"),
new SchemaAndValueField("salary", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "123.45"));
assertRecordSchemaAndValues(expected, rec, Envelope.FieldName.AFTER);
assertThat(consumer.isEmpty()).isTrue();
}
@Test
@FixFor("DBZ-1931")
public void testStreamMoneyAsDouble() throws Exception {
TestHelper.execute("CREATE TABLE salary (pk SERIAL, name VARCHAR(50), salary money, PRIMARY KEY(pk));");
startConnector(config -> config
.with(PostgresConnectorConfig.DECIMAL_HANDLING_MODE, DecimalHandlingMode.DOUBLE)
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.INITIAL)
.with(PostgresConnectorConfig.TABLE_INCLUDE_LIST, "public.salary"),
false);
waitForStreamingToStart();
consumer = testConsumer(1);
executeAndWait("INSERT INTO salary (name, salary) values ('Joe', 123.45);");
SourceRecord rec = assertRecordInserted("public.salary", PK_FIELD, 1);
assertSourceInfo(rec, "postgres", "public", "salary");
List<SchemaAndValueField> expected = Arrays.asList(
new SchemaAndValueField("pk", SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("name", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "Joe"),
new SchemaAndValueField("salary", SchemaBuilder.OPTIONAL_FLOAT64_SCHEMA, 123.45));
assertRecordSchemaAndValues(expected, rec, Envelope.FieldName.AFTER);
assertThat(consumer.isEmpty()).isTrue();
}
@Test
@FixFor("DBZ-1931")
public void testStreamMoneyPreciseDecimalFraction() throws Exception {
TestHelper.execute("CREATE TABLE salary (pk SERIAL, name VARCHAR(50), salary money, PRIMARY KEY(pk));");
startConnector(config -> config
.with(PostgresConnectorConfig.DECIMAL_HANDLING_MODE, DecimalHandlingMode.PRECISE)
.with(PostgresConnectorConfig.MONEY_FRACTION_DIGITS, 1)
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.INITIAL)
.with(PostgresConnectorConfig.TABLE_INCLUDE_LIST, "public.salary"),
false);
waitForStreamingToStart();
consumer = testConsumer(1);
executeAndWait("INSERT INTO salary (name, salary) values ('Joe', 123.4567);");
SourceRecord rec = assertRecordInserted("public.salary", PK_FIELD, 1);
assertSourceInfo(rec, "postgres", "public", "salary");
List<SchemaAndValueField> expected = Arrays.asList(
new SchemaAndValueField("pk", SchemaBuilder.int32().defaultValue(0).build(), 1),
new SchemaAndValueField("name", SchemaBuilder.OPTIONAL_STRING_SCHEMA, "Joe"),
new SchemaAndValueField("salary", Decimal.builder(1).optional().build(), BigDecimal.valueOf(123.5)));
assertRecordSchemaAndValues(expected, rec, Envelope.FieldName.AFTER);
assertThat(consumer.isEmpty()).isTrue();
}
@Test
@FixFor("DBZ-6648")
public void shouldHandleNonNullIntervalFiledDelete() throws Exception {
TestHelper.execute("CREATE TABLE test_interval (pk SERIAL, i interval NOT NULL, PRIMARY KEY(pk));");
// add a new entry and remove both
String statements = "INSERT INTO test_interval (pk, i) VALUES (1, '2 Months 3 Days');" +
"DELETE FROM test_interval WHERE pk = 1;";
startConnector(config -> config.with(PostgresConnectorConfig.INTERVAL_HANDLING_MODE, IntervalHandlingMode.STRING));
waitForStreamingToStart();
consumer = testConsumer(3);
executeAndWait(statements);
String topicPrefix = "public.test_interval";
String topicName = topicName(topicPrefix);
assertRecordInserted(topicPrefix, PK_FIELD, 1);
// entry removed
SourceRecord record = consumer.remove();
assertEquals(topicName, record.topic());
VerifyRecord.isValidDelete(record, PK_FIELD, 1);
// followed by a tombstone
record = consumer.remove();
assertEquals(topicName, record.topic());
VerifyRecord.isValidTombstone(record, PK_FIELD, 1);
}
@Test()
@FixFor({ "DBZ-6635", "DBZ-7316" })
public void testSendingHeartbeatsWithoutWalUpdates() throws Exception {
Function<Configuration.Builder, Configuration.Builder> configMapper = config -> config
.with(PostgresConnectorConfig.SNAPSHOT_MODE, SnapshotMode.NO_DATA)
.with(Heartbeat.HEARTBEAT_INTERVAL, "100")
.with(PostgresConnectorConfig.DROP_SLOT_ON_STOP, false);
// Start and stop the connector to ensure that the replication slot is created, and also
// to test that some initial heartbeats are created (DBZ-6635). Note that even though we
// aren't explicitly making any database changes to stream here, PG often will have some
// WAL activity to consume anyway that the searchWalPosition function will find.
startConnector(configMapper);
waitForStreamingToStart();
waitForSeveralHeartbeats();
stopConnector();
// Completely drain any remaining records from the first connector start, so that we are
// starting completely fresh with the connector restart.
consumeAvailableRecords(null);
// Also make sure that the PG replication slot is COMPLETELY CLEARED. (While manually reproducing DBZ-7316
// it was noted that sometimes there is still WAL to consume after the connector is stopped. So this is just
// being extra-safe that the next portion of the test really tests with an empty WAL to consume).
TestHelper.execute(getReplicationSlotChangesQuery());
try (PostgresConnection connection = TestHelper.create()) {
// Assert that the previous statement did indeed clear out the pending changes in the replication slot
String query = String.format("SELECT count(*) AS change_count FROM (%s) AS changes", getReplicationSlotChangesQuery());
long changeCount = connection.queryAndMap(
query,
rs -> {
assertThat(rs.next()).isTrue();
return rs.getLong(1);
});
assertThat(changeCount).isEqualTo(0);
}
// Start the connector again. This time, we're resuming from an existing replication slot,
// so the searchWalPosition function will be looking for a place to resume. We need to
// test that the loop in searchWalPosition is emitting heartbeats (DBZ-7316) when there
// is no WAL to consume (since we just cleared it out and asserted there was none).
startConnector(configMapper);
waitForSeveralHeartbeats();
// Manual cleanup, since DROP_SLOT_ON_STOP is false
stopConnector();
TestHelper.dropDefaultReplicationSlot();
TestHelper.dropPublication();
}
private void assertHeartBeatRecord(SourceRecord heartbeat) {
assertEquals("__debezium-heartbeat." + TestHelper.TEST_SERVER, heartbeat.topic());
Struct key = (Struct) heartbeat.key();
assertThat(key.get("serverName")).isEqualTo(TestHelper.TEST_SERVER);
Struct value = (Struct) heartbeat.value();
assertThat(value.getInt64("ts_ms")).isLessThanOrEqualTo(Instant.now().toEpochMilli());
}
private void waitForSeveralHeartbeats() {
final AtomicInteger heartbeatCount = new AtomicInteger();
Awaitility.await().atMost(10, TimeUnit.SECONDS).until(() -> {
final SourceRecord record = consumeRecord();
if (record != null) {
if (record.topic().equalsIgnoreCase("__debezium-heartbeat.test_server")) {
assertHeartBeatRecord(record);
heartbeatCount.incrementAndGet();
}
}
return heartbeatCount.get() > 10;
});
}
private String getReplicationSlotChangesQuery() {
switch (TestHelper.decoderPlugin()) {
case DECODERBUFS:
return "SELECT pg_logical_slot_get_binary_changes('" + ReplicationConnection.Builder.DEFAULT_SLOT_NAME + "', " +
"NULL, NULL)";
case PGOUTPUT:
return "SELECT pg_logical_slot_get_binary_changes('" + ReplicationConnection.Builder.DEFAULT_SLOT_NAME + "', " +
"NULL, NULL, 'proto_version', '1', 'publication_names', '" + ReplicationConnection.Builder.DEFAULT_PUBLICATION_NAME + "')";
}
throw new UnsupportedOperationException("Test must be updated for new logical decoder type.");
}
private void assertInsert(String statement, List<SchemaAndValueField> expectedSchemaAndValuesByColumn) {
assertInsert(statement, null, expectedSchemaAndValuesByColumn);
}
private void assertInsert(String statement, Integer pk, List<SchemaAndValueField> expectedSchemaAndValuesByColumn) {
TableId table = tableIdFromInsertStmt(statement);
String expectedTopicName = table.schema() + "." + table.table();
expectedTopicName = expectedTopicName.replaceAll("[ \"]", "_");
try {
executeAndWait(statement);
SourceRecord record = assertRecordInserted(expectedTopicName, pk != null ? PK_FIELD : null, pk);
assertRecordOffsetAndSnapshotSource(record, SnapshotRecord.FALSE);
assertSourceInfo(record, "postgres", table.schema(), table.table());
assertRecordSchemaAndValues(expectedSchemaAndValuesByColumn, record, Envelope.FieldName.AFTER);
}
catch (Exception e) {
throw new RuntimeException(e);
}
}
private void assertDelete(String statement, Integer pk,
List<SchemaAndValueField> expectedSchemaAndValuesByColumn) {
TableId table = tableIdFromDeleteStmt(statement);
String expectedTopicName = table.schema() + "." + table.table();
expectedTopicName = expectedTopicName.replaceAll("[ \"]", "_");
try {
executeAndWait(statement);
SourceRecord record = assertRecordDeleted(expectedTopicName, pk != null ? PK_FIELD : null, pk);
assertRecordOffsetAndSnapshotSource(record, SnapshotRecord.FALSE);
assertSourceInfo(record, "postgres", table.schema(), table.table());
assertRecordSchemaAndValues(expectedSchemaAndValuesByColumn, record, Envelope.FieldName.BEFORE);
assertRecordSchemaAndValues(null, record, Envelope.FieldName.AFTER);
}
catch (Exception e) {
throw new RuntimeException(e);
}
}
private SourceRecord assertRecordInserted(SourceRecord insertedRecord, String expectedTopicName, String pkColumn, Integer pk) throws InterruptedException {
assertEquals(topicName(expectedTopicName), insertedRecord.topic());
if (pk != null) {
VerifyRecord.isValidInsert(insertedRecord, pkColumn, pk);
}
else {
VerifyRecord.isValidInsert(insertedRecord);
}
return insertedRecord;
}
private SourceRecord assertRecordDeleted(String expectedTopicName, String pkColumn, Integer pk) throws InterruptedException {
assertFalse("records not generated", consumer.isEmpty());
SourceRecord deletedRecord = consumer.remove();
return assertRecordDeleted(deletedRecord, expectedTopicName, pkColumn, pk);
}
private SourceRecord assertRecordDeleted(SourceRecord deletedRecord, String expectedTopicName, String pkColumn, Integer pk) throws InterruptedException {
assertEquals(topicName(expectedTopicName), deletedRecord.topic());
if (pk != null) {
VerifyRecord.isValidDelete(deletedRecord, pkColumn, pk);
}
else {
VerifyRecord.isValidDelete(deletedRecord);
}
return deletedRecord;
}
private SourceRecord assertRecordInserted(String expectedTopicName, String pkColumn, Integer pk) throws InterruptedException {
assertFalse("records not generated", consumer.isEmpty());
SourceRecord insertedRecord = consumer.remove();
return assertRecordInserted(insertedRecord, expectedTopicName, pkColumn, pk);
}
private void executeAndWait(String statements) throws Exception {
TestHelper.execute(statements);
consumer.await(TestHelper.waitTimeForRecords() * 30, TimeUnit.SECONDS);
}
private void executeAndWaitForNoRecords(String statements) throws Exception {
TestHelper.execute(statements);
consumer.await(5, TimeUnit.SECONDS);
}
}
```
|
```csharp
/* ****************************************************************************
 *
 * Copyright (c) Microsoft Corporation.
 *
 * This source code is subject to terms and conditions of the Microsoft Public License. A
 * copy of the license can be found in the License.html file at the root of this distribution. If
 * you cannot locate the Microsoft Public License, please send an email to
 * dlr@microsoft.com. By using this source code in any fashion, you are agreeing to be bound
 * by the terms of the Microsoft Public License.
 *
 * You must not remove this notice, or any other, from this software.
 *
 * ***************************************************************************/
using System.Collections.Generic;
using System.Globalization;
namespace System.Management.Automation.Interpreter
{
internal sealed class LoadObjectInstruction : Instruction
{
private readonly object _value;
internal LoadObjectInstruction(object value)
{
_value = value;
}
public override int ProducedStack { get { return 1; } }
public override int Run(InterpretedFrame frame)
{
frame.Data[frame.StackIndex++] = _value;
return +1;
}
public override string ToString()
{
return "LoadObject(" + (_value ?? "null") + ")";
}
}
internal sealed class LoadCachedObjectInstruction : Instruction
{
private readonly uint _index;
internal LoadCachedObjectInstruction(uint index)
{
_index = index;
}
public override int ProducedStack { get { return 1; } }
public override int Run(InterpretedFrame frame)
{
frame.Data[frame.StackIndex++] = frame.Interpreter._objects[_index];
return +1;
}
public override string ToDebugString(int instructionIndex, object cookie, Func<int, int> labelIndexer, IList<object> objects)
{
return string.Format(CultureInfo.InvariantCulture, "LoadCached({0}: {1})", _index, objects[(int)_index]);
}
public override string ToString()
{
return "LoadCached(" + _index + ")";
}
}
internal sealed class PopInstruction : Instruction
{
internal static readonly PopInstruction Instance = new PopInstruction();
private PopInstruction() { }
public override int ConsumedStack { get { return 1; } }
public override int Run(InterpretedFrame frame)
{
frame.Pop();
return +1;
}
public override string ToString()
{
return "Pop()";
}
}
internal sealed class DupInstruction : Instruction
{
internal static readonly DupInstruction Instance = new DupInstruction();
private DupInstruction() { }
public override int ConsumedStack { get { return 0; } }
public override int ProducedStack { get { return 1; } }
public override int Run(InterpretedFrame frame)
{
frame.Data[frame.StackIndex++] = frame.Peek();
return +1;
}
public override string ToString()
{
return "Dup()";
}
}
}
```
|
Thakin Soe Myint was a Burmese politician and a leader of the National League for Democracy. Born in the Irrawaddy delta region in 1923, he first entered politics by joining the Dobama Asiayone branch at Myaungmya Township. He was a member of several political parties, including the People's Revolutionary Party, Myaungmya District Socialist Party, Anti-Fascist People's Freedom League, Socialist Party and People's Youth League. Most recently, he served as a member of the National League for Democracy's Central Executive Committee, joining in 1988, during the 8888 Uprising.
He died at his home in Yangon's South Okkalapa Township on 20 May 2010, of a heart attack. Soe Myint was cremated at Yangon's Yayway Cemetery on 22 May 2010.
References
National League for Democracy politicians
People from Ayeyarwady Region
1923 births
2010 deaths
|
```c++
//
// 2.6.auto.cpp
// chapter 2 language usability
// modern cpp tutorial
//
// created by changkun at changkun.de
// path_to_url
//
#include <initializer_list>
#include <vector>
#include <iostream>
class MagicFoo {
public:
std::vector<int> vec;
MagicFoo(std::initializer_list<int> list) {
for (auto it = list.begin(); it != list.end(); ++it) {
vec.push_back(*it);
}
}
};
int add(auto x, auto y) { // Supported in C++20
return x+y;
}
int main() {
MagicFoo magicFoo = {1, 2, 3, 4, 5};
std::cout << "magicFoo: ";
for (auto it = magicFoo.vec.begin(); it != magicFoo.vec.end(); ++it) {
std::cout << *it << ", ";
}
std::cout << std::endl;
auto i = 5; // type int
auto j = 6; // type int
std::cout << add(i, j) << std::endl;
auto arr = new auto(10); // type int*
// auto auto_arr2[10] = {arr}; // invalid
return 0;
}
```
|
```go
//
//
// path_to_url
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
package istioagent
import (
"context"
"fmt"
"strings"
"time"
discovery "github.com/envoyproxy/go-control-plane/envoy/service/discovery/v3"
"go.uber.org/atomic"
google_rpc "google.golang.org/genproto/googleapis/rpc/status"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/metadata"
anypb "google.golang.org/protobuf/types/known/anypb"
"istio.io/istio/pilot/pkg/features"
"istio.io/istio/pkg/channels"
"istio.io/istio/pkg/istio-agent/metrics"
"istio.io/istio/pkg/log"
"istio.io/istio/pkg/model"
"istio.io/istio/pkg/slices"
"istio.io/istio/pkg/wasm"
)
// sendDeltaRequest is a small wrapper around sending to con.deltaRequestsChan. This ensures that we do not
// block forever on send, since the underlying channel is unbounded.
func (con *ProxyConnection) sendDeltaRequest(req *discovery.DeltaDiscoveryRequest) {
con.deltaRequestsChan.Put(req)
}
// DeltaAggregatedResources is an implementation of Delta XDS API used for proxying between Istiod and Envoy.
// Every time envoy makes a fresh connection to the agent, we reestablish a new connection to the upstream xds
// This ensures that a new connection between istiod and agent doesn't end up consuming pending messages from envoy
// as the new connection may not go to the same istiod. Vice versa case also applies.
func (p *XdsProxy) DeltaAggregatedResources(downstream DeltaDiscoveryStream) error {
proxyLog.Debugf("accepted delta xds connection from envoy, forwarding to upstream")
con := &ProxyConnection{
conID: connectionNumber.Inc(),
upstreamError: make(chan error), // can be produced by recv and send
downstreamError: make(chan error), // can be produced by recv and send
deltaRequestsChan: channels.NewUnbounded[*discovery.DeltaDiscoveryRequest](),
// Allow a buffer of 1. This ensures we queue up at most 2 (one in process, 1 pending) responses before forwarding.
deltaResponsesChan: make(chan *discovery.DeltaDiscoveryResponse, 1),
stopChan: make(chan struct{}),
downstreamDeltas: downstream,
}
p.registerStream(con)
defer p.unregisterStream(con)
ctx, cancel := context.WithTimeout(context.Background(), time.Second*5)
defer cancel()
upstreamConn, err := p.buildUpstreamConn(ctx)
if err != nil {
proxyLog.Errorf("failed to connect to upstream %s: %v", p.istiodAddress, err)
metrics.IstiodConnectionFailures.Increment()
return err
}
defer upstreamConn.Close()
xds := discovery.NewAggregatedDiscoveryServiceClient(upstreamConn)
ctx = metadata.AppendToOutgoingContext(context.Background(), "ClusterID", p.clusterID)
for k, v := range p.xdsHeaders {
ctx = metadata.AppendToOutgoingContext(ctx, k, v)
}
// We must propagate upstream termination to Envoy. This ensures that we resume the full XDS sequence on new connection
return p.handleDeltaUpstream(ctx, con, xds)
}
func (p *XdsProxy) handleDeltaUpstream(ctx context.Context, con *ProxyConnection, xds discovery.AggregatedDiscoveryServiceClient) error {
log := proxyLog.WithLabels("id", con.conID)
deltaUpstream, err := xds.DeltaAggregatedResources(ctx,
grpc.MaxCallRecvMsgSize(defaultClientMaxReceiveMessageSize))
if err != nil {
// Envoy logs errors again, so no need to log beyond debug level
log.Debugf("failed to create delta upstream grpc client: %v", err)
// Increase metric when xds connection error, for example: forgot to restart ingressgateway or sidecar after changing root CA.
metrics.IstiodConnectionErrors.Increment()
return err
}
log.Infof("connected to delta upstream XDS server: %s", p.istiodAddress)
defer log.Debugf("disconnected from delta XDS server: %s", p.istiodAddress)
con.upstreamDeltas = deltaUpstream
// handle responses from istiod
go func() {
for {
resp, err := con.upstreamDeltas.Recv()
if err != nil {
upstreamErr(con, err)
return
}
select {
case con.deltaResponsesChan <- resp:
case <-con.stopChan:
}
}
}()
go p.handleUpstreamDeltaRequest(con)
go p.handleUpstreamDeltaResponse(con)
for {
select {
case err := <-con.upstreamError:
return err
case err := <-con.downstreamError:
// On downstream error, we will return. This propagates the error to downstream envoy which will trigger reconnect
return err
case <-con.stopChan:
log.Debugf("upstream stopped")
return nil
}
}
}
func (p *XdsProxy) handleUpstreamDeltaRequest(con *ProxyConnection) {
log := proxyLog.WithLabels("id", con.conID)
initialRequestsSent := atomic.NewBool(false)
go func() {
for {
// recv delta xds requests from envoy
req, err := con.downstreamDeltas.Recv()
if err != nil {
downstreamErr(con, err)
return
}
// forward to istiod
con.sendDeltaRequest(req)
if !initialRequestsSent.Load() && req.TypeUrl == model.ListenerType {
// fire off an initial NDS request
if _, f := p.handlers[model.NameTableType]; f {
con.sendDeltaRequest(&discovery.DeltaDiscoveryRequest{
TypeUrl: model.NameTableType,
})
}
// fire off an initial PCDS request
if _, f := p.handlers[model.ProxyConfigType]; f {
con.sendDeltaRequest(&discovery.DeltaDiscoveryRequest{
TypeUrl: model.ProxyConfigType,
})
}
// set flag before sending the initial request to prevent race.
initialRequestsSent.Store(true)
// Fire off a configured initial request, if there is one
p.connectedMutex.RLock()
initialRequest := p.initialDeltaHealthRequest
if initialRequest != nil {
con.sendDeltaRequest(initialRequest)
}
p.connectedMutex.RUnlock()
}
}
}()
defer func() {
_ = con.upstreamDeltas.CloseSend()
}()
for {
select {
case req := <-con.deltaRequestsChan.Get():
con.deltaRequestsChan.Load()
if req.TypeUrl == model.HealthInfoType && !initialRequestsSent.Load() {
// only send healthcheck probe after LDS request has been sent
continue
}
log.WithLabels(
"type", model.GetShortType(req.TypeUrl),
"sub", len(req.ResourceNamesSubscribe),
"unsub", len(req.ResourceNamesUnsubscribe),
"nonce", req.ResponseNonce,
"initial", len(req.InitialResourceVersions),
).Debugf("delta request")
metrics.XdsProxyRequests.Increment()
if req.TypeUrl == model.ExtensionConfigurationType {
p.ecdsLastNonce.Store(req.ResponseNonce)
}
if err := con.upstreamDeltas.Send(req); err != nil {
err = fmt.Errorf("send error for type url %s: %v", req.TypeUrl, err)
upstreamErr(con, err)
return
}
case <-con.stopChan:
return
}
}
}
func (p *XdsProxy) handleUpstreamDeltaResponse(con *ProxyConnection) {
forwardEnvoyCh := make(chan *discovery.DeltaDiscoveryResponse, 1)
for {
select {
case resp := <-con.deltaResponsesChan:
// TODO: separate upstream response handling from requests sending, which are both time costly
proxyLog.WithLabels(
"id", con.conID,
"type", model.GetShortType(resp.TypeUrl),
"nonce", resp.Nonce,
"resources", len(resp.Resources),
"removes", len(resp.RemovedResources),
).Debugf("upstream response")
metrics.XdsProxyResponses.Increment()
if h, f := p.handlers[resp.TypeUrl]; f {
if len(resp.Resources) == 0 {
// Empty response, nothing to do
// This assumes internal types are always singleton
break
}
err := h(resp.Resources[0].Resource)
var errorResp *google_rpc.Status
if err != nil {
errorResp = &google_rpc.Status{
Code: int32(codes.Internal),
Message: err.Error(),
}
}
// Send ACK/NACK
con.sendDeltaRequest(&discovery.DeltaDiscoveryRequest{
TypeUrl: resp.TypeUrl,
ResponseNonce: resp.Nonce,
ErrorDetail: errorResp,
})
continue
}
switch resp.TypeUrl {
case model.ExtensionConfigurationType:
if features.WasmRemoteLoadConversion {
// If Wasm remote load conversion feature is enabled, rewrite and send.
go p.deltaRewriteAndForward(con, resp, func(resp *discovery.DeltaDiscoveryResponse) {
// Forward the response using the thread of `handleUpstreamResponse`
// to prevent concurrent access to forwardToEnvoy
select {
case forwardEnvoyCh <- resp:
case <-con.stopChan:
}
})
} else {
// Otherwise, forward ECDS resource update directly to Envoy.
forwardDeltaToEnvoy(con, resp)
}
default:
if strings.HasPrefix(resp.TypeUrl, model.DebugType) {
p.forwardDeltaToTap(resp)
} else {
forwardDeltaToEnvoy(con, resp)
}
}
case resp := <-forwardEnvoyCh:
forwardDeltaToEnvoy(con, resp)
case <-con.stopChan:
return
}
}
}
func (p *XdsProxy) deltaRewriteAndForward(con *ProxyConnection, resp *discovery.DeltaDiscoveryResponse, forward func(resp *discovery.DeltaDiscoveryResponse)) {
resources := make([]*anypb.Any, 0, len(resp.Resources))
for i := range resp.Resources {
resources = append(resources, resp.Resources[i].Resource)
}
if err := wasm.MaybeConvertWasmExtensionConfig(resources, p.wasmCache); err != nil {
proxyLog.Debugf("sending NACK for ECDS resources %+v, err: %+v", resp.Resources, err)
con.sendDeltaRequest(&discovery.DeltaDiscoveryRequest{
TypeUrl: resp.TypeUrl,
ResponseNonce: resp.Nonce,
ErrorDetail: &google_rpc.Status{
Code: int32(codes.Internal),
Message: err.Error(),
},
})
return
}
for i := range resources {
resp.Resources[i].Resource = resources[i]
}
proxyLog.WithLabels("resources", slices.Map(resp.Resources, (*discovery.Resource).GetName), "removes", resp.RemovedResources).Debugf("forward ECDS")
forward(resp)
}
func forwardDeltaToEnvoy(con *ProxyConnection, resp *discovery.DeltaDiscoveryResponse) {
if !model.IsEnvoyType(resp.TypeUrl) && resp.TypeUrl != model.WorkloadType {
		proxyLog.Errorf("Skipping forwarding type url %s to Envoy as it is not a valid Envoy type", resp.TypeUrl)
return
}
if con.isClosed() {
proxyLog.WithLabels("id", con.conID).Errorf("downstream dropped delta xds push to Envoy, connection already closed")
return
}
if err := sendDownstreamDelta(con.downstreamDeltas, resp); err != nil {
err = fmt.Errorf("send error for type url %s: %v", resp.TypeUrl, err)
downstreamErr(con, err)
return
}
}
func sendDownstreamDelta(deltaDownstream DeltaDiscoveryStream, res *discovery.DeltaDiscoveryResponse) error {
tStart := time.Now()
defer func() {
// This is a hint to help debug slow responses.
if time.Since(tStart) > 10*time.Second {
proxyLog.Warnf("sendDownstreamDelta took %v", time.Since(tStart))
}
}()
return deltaDownstream.Send(res)
}
func (p *XdsProxy) sendDeltaHealthRequest(req *discovery.DeltaDiscoveryRequest) {
p.connectedMutex.Lock()
// Immediately send if we are currently connected.
if p.connected != nil && p.connected.deltaRequestsChan != nil {
p.connected.deltaRequestsChan.Put(req)
}
// Otherwise place it as our initial request for new connections
p.initialDeltaHealthRequest = req
p.connectedMutex.Unlock()
}
func (p *XdsProxy) forwardDeltaToTap(resp *discovery.DeltaDiscoveryResponse) {
select {
// Convert back to a SotW response
case p.tapResponseChannel <- &discovery.DiscoveryResponse{
VersionInfo: resp.SystemVersionInfo,
Resources: slices.Map(resp.Resources, (*discovery.Resource).GetResource),
Canary: false,
TypeUrl: resp.TypeUrl,
Nonce: resp.Nonce,
ControlPlane: resp.ControlPlane,
}:
default:
log.Infof("tap response %q arrived too late; discarding", resp.TypeUrl)
}
}
```
|
Sir Lancelot is a platform game published in 1984 by Melbourne House for the Amstrad CPC and ZX Spectrum home computers.
Gameplay
Sir Lancelot, controlled by the player, must explore the 24 rooms of the castle and collect all the objects (which come in many forms but glow to make them identifiable) in each room before making his way to the exit to the next. His task is made more difficult by the presence of various guardians (including animals and soldiers) whom he must avoid in each room. He also has a time limit in which to complete each room. Control is very simple, with only three keys needed: left, right and jump. A joystick can also be used.
In the ZX Spectrum version the rooms are visited in a fixed order, whilst the Amstrad CPC version allows the rooms to be completed in any order. The Amstrad version also has a high-score table, which the Spectrum version lacks.
Reception
See also
Manic Miner
References
External links
Review of the game from CRASH magazine.
1984 video games
Amstrad CPC games
Video games based on Arthurian legend
Video games developed in the United Kingdom
ZX Spectrum games
|
Wikstroemia alternifolia is a shrub of the family Thymelaeaceae. It is native to China, specifically Gansu and northern Sichuan.
Description
The shrub has pale white branches. It is found on open bushy slopes and rocks at altitudes below 2500 m.
References
alternifolia
|
```c++
// Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
// path_to_url
#if !defined(BOOST_VMD_DETAIL_DATA_EQUAL_HEADERS_HPP)
#define BOOST_VMD_DETAIL_DATA_EQUAL_HEADERS_HPP
#include <boost/preprocessor/array/size.hpp>
#include <boost/preprocessor/comparison/equal.hpp>
#include <boost/preprocessor/control/iif.hpp>
#include <boost/preprocessor/control/while.hpp>
#include <boost/preprocessor/list/size.hpp>
#include <boost/preprocessor/seq/size.hpp>
#include <boost/preprocessor/tuple/elem.hpp>
#include <boost/preprocessor/tuple/size.hpp>
#include <boost/vmd/equal.hpp>
#include <boost/vmd/identity.hpp>
#include <boost/vmd/detail/data_equal_common.hpp>
#endif /* BOOST_VMD_DETAIL_DATA_EQUAL_HEADERS_HPP */
```
|
In mathematical logic, the disjunction and existence properties are the "hallmarks" of constructive theories such as Heyting arithmetic and constructive set theories (Rathjen 2005).
Definitions
The disjunction property is satisfied by a theory if, whenever a sentence A ∨ B is a theorem, then either A is a theorem, or B is a theorem.
The existence property or witness property is satisfied by a theory if, whenever a sentence (∃x)A(x) is a theorem, where A(x) has no other free variables, then there is some term t such that the theory proves A(t).
Related properties
Rathjen (2005) lists five properties that a theory may possess. These include the disjunction property (DP), the existence property (EP), and three additional properties:
The numerical existence property (NEP) states that if the theory proves (∃x)φ(x), where φ has no other free variables and x ranges over the natural numbers, then the theory proves φ(n̄) for some natural number n. Here n̄ is a term in the theory representing the number n (the numeral for n).
Church's rule (CR) states that if the theory proves (∀x)(∃y)φ(x,y), with x and y ranging over the natural numbers, then there is a natural number e such that, letting f_e be the computable function with index e, the theory proves (∀x)φ(x, f_e(x)).
A variant of Church's rule, CR1, states that if the theory proves (∃f)φ(f), where f ranges over functions from ℕ to ℕ, then there is a natural number e such that the theory proves that f_e is total and proves φ(f_e).
These properties can only be directly expressed for theories that have the ability to quantify over natural numbers and, for CR1, to quantify over functions from ℕ to ℕ. In practice, one may say that a theory has one of these properties if a definitional extension of the theory has the property stated above (Rathjen 2005).
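Writing ⊢ for provability in a theory T, the first four properties can be summarized schematically (here n̄ is the numeral for n and f_e informally denotes the computable function with index e; CR1 is analogous to CR with a function quantifier):

```latex
\begin{align*}
\text{DP:}  \quad & T \vdash A \lor B                         &&\Longrightarrow\; T \vdash A \ \text{ or } \ T \vdash B \\
\text{EP:}  \quad & T \vdash \exists x\,A(x)                  &&\Longrightarrow\; T \vdash A(t) \ \text{ for some term } t \\
\text{NEP:} \quad & T \vdash \exists x\,\varphi(x)            &&\Longrightarrow\; T \vdash \varphi(\bar{n}) \ \text{ for some } n \in \mathbb{N} \\
\text{CR:}  \quad & T \vdash \forall x\,\exists y\,\varphi(x,y) &&\Longrightarrow\; T \vdash \forall x\,\varphi(x, f_e(x)) \ \text{ for some } e \in \mathbb{N}
\end{align*}
```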
Results
Non-examples and examples
Almost by definition, a theory that accepts excluded middle while having independent statements does not have the disjunction property. So, by the incompleteness theorems, no consistent, recursively axiomatized classical theory interpreting Robinson arithmetic has it. Most classical theories, such as Peano arithmetic and ZFC, do not validate the existence property either, e.g. because they prove least-number-principle existence claims whose witnesses need not be definable by any term. But some classical theories, such as ZFC plus the axiom of constructibility, do have a weaker form of the existence property (Rathjen 2005).
Heyting arithmetic is well known for having the disjunction property and the (numerical) existence property.
While the earliest results were for constructive theories of arithmetic, many results are also known for constructive set theories (Rathjen 2005). John Myhill (1973) showed that IZF with the axiom of replacement eliminated in favor of the axiom of collection has the disjunction property, the numerical existence property, and the existence property. Michael Rathjen (2005) proved that CZF has the disjunction property and the numerical existence property.
Freyd and Scedrov (1990) observed that the disjunction property holds in free Heyting algebras and free topoi. In categorical terms, in the free topos, that corresponds to the fact that the terminal object, 1, is not the join of two proper subobjects. Together with the existence property it translates to the assertion that 1 is an indecomposable projective object: the functor it represents (the global-sections functor) preserves epimorphisms and coproducts.
Relationship between properties
There are several relationships between the five properties discussed above.
In the setting of arithmetic, the numerical existence property implies the disjunction property. The proof uses the fact that a disjunction can be rewritten as an existential formula quantifying over natural numbers:
A ∨ B is equivalent to (∃n)[(n = 0 → A) ∧ (n ≠ 0 → B)].
Therefore, if
A ∨ B is a theorem of the theory, so is (∃n)[(n = 0 → A) ∧ (n ≠ 0 → B)].
Thus, assuming the numerical existence property, there exists some natural number s such that
(s̄ = 0 → A) ∧ (s̄ ≠ 0 → B) is a theorem. Since s̄ is a numeral, one may concretely check the value of s: if s = 0 then A is a theorem and if s ≠ 0 then B is a theorem.
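In summary, the chain of implications (writing ⊢ for provability in T) is:

```latex
\[
T \vdash A \lor B
\;\Longrightarrow\;
T \vdash \exists n\,\bigl[(n = 0 \to A) \land (n \neq 0 \to B)\bigr]
\;\overset{\text{NEP}}{\Longrightarrow}\;
T \vdash (\bar{s} = 0 \to A) \land (\bar{s} \neq 0 \to B)
\;\Longrightarrow\;
T \vdash A \ \text{ or } \ T \vdash B .
\]
```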
Harvey Friedman (1975) proved that in any recursively enumerable extension of intuitionistic arithmetic, the disjunction property implies the numerical existence property. The proof uses self-referential sentences in a way similar to the proof of Gödel's incompleteness theorems. The key step is to find a bound n on the existential quantifier in a formula (∃x)A(x), producing the bounded existential formula (∃x<n)A(x). The bounded formula may then be written as a finite disjunction A(1)∨A(2)∨...∨A(n). Finally, disjunction elimination may be used to show that one of the disjuncts is provable.
History
Kurt Gödel (1932) stated without proof that intuitionistic propositional logic (with no additional axioms) has the disjunction property; this result was proven and extended to intuitionistic predicate logic by Gerhard Gentzen (1934, 1935). Stephen Cole Kleene (1945) proved that Heyting arithmetic has the disjunction property and the existence property. Kleene's method introduced the technique of realizability, which is now one of the main methods in the study of constructive theories (Kohlenbach 2008; Troelstra 1973).
See also
Constructive set theory
Heyting arithmetic
Law of excluded middle
Realizability
Existential quantifier
References
Peter J. Freyd and Andre Scedrov, 1990, Categories, Allegories. North-Holland.
Harvey Friedman, 1975, The disjunction property implies the numerical existence property, State University of New York at Buffalo.
Gerhard Gentzen, 1934, "Untersuchungen über das logische Schließen. I", Mathematische Zeitschrift v. 39 n. 2, pp. 176–210.
Gerhard Gentzen, 1935, "Untersuchungen über das logische Schließen. II", Mathematische Zeitschrift v. 39 n. 3, pp. 405–431.
Kurt Gödel, 1932, "Zum intuitionistischen Aussagenkalkül", Anzeiger der Akademie der Wissenschaften in Wien, v. 69, pp. 65–66.
Stephen Cole Kleene, 1945, "On the interpretation of intuitionistic number theory," Journal of Symbolic Logic, v. 10, pp. 109–124.
Ulrich Kohlenbach, 2008, Applied proof theory, Springer.
John Myhill, 1973, "Some properties of Intuitionistic Zermelo-Fraenkel set theory", in A. Mathias and H. Rogers, Cambridge Summer School in Mathematical Logic, Lecture Notes in Mathematics v. 337, pp. 206–231, Springer.
Michael Rathjen, 2005, "The Disjunction and Related Properties for Constructive Zermelo-Fraenkel Set Theory", Journal of Symbolic Logic, v. 70 n. 4, pp. 1233–1254.
Anne S. Troelstra, ed. (1973), Metamathematical investigation of intuitionistic arithmetic and analysis, Springer.
External links
Intuitionistic Logic by Joan Moschovakis, Stanford Encyclopedia of Philosophy
Proof theory
Constructivism (mathematics)
|
```smarty
{{/* vim: set filetype=mustache: */}}
{{/*
Return a soft nodeAffinity definition
{{ include "common.affinities.nodes.soft" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}}
*/}}
{{- define "common.affinities.nodes.soft" -}}
preferredDuringSchedulingIgnoredDuringExecution:
- preference:
matchExpressions:
- key: {{ .key }}
operator: In
values:
{{- range .values }}
- {{ . | quote }}
{{- end }}
weight: 1
{{- end -}}
{{/*
Return a hard nodeAffinity definition
{{ include "common.affinities.nodes.hard" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}}
*/}}
{{- define "common.affinities.nodes.hard" -}}
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: {{ .key }}
operator: In
values:
{{- range .values }}
- {{ . | quote }}
{{- end }}
{{- end -}}
{{/*
Return a nodeAffinity definition
{{ include "common.affinities.nodes" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}}
*/}}
{{- define "common.affinities.nodes" -}}
{{- if eq .type "soft" }}
{{- include "common.affinities.nodes.soft" . -}}
{{- else if eq .type "hard" }}
{{- include "common.affinities.nodes.hard" . -}}
{{- end -}}
{{- end -}}
{{/*
Return a topologyKey definition
{{ include "common.affinities.topologyKey" (dict "topologyKey" "BAR") -}}
*/}}
{{- define "common.affinities.topologyKey" -}}
{{ .topologyKey | default "kubernetes.io/hostname" -}}
{{- end -}}
{{/*
Return a soft podAffinity/podAntiAffinity definition
{{ include "common.affinities.pods.soft" (dict "component" "FOO" "customLabels" .Values.podLabels "extraMatchLabels" .Values.extraMatchLabels "topologyKey" "BAR" "extraPodAffinityTerms" .Values.extraPodAffinityTerms "context" $) -}}
*/}}
{{- define "common.affinities.pods.soft" -}}
{{- $component := default "" .component -}}
{{- $customLabels := default (dict) .customLabels -}}
{{- $extraMatchLabels := default (dict) .extraMatchLabels -}}
{{- $extraPodAffinityTerms := default (list) .extraPodAffinityTerms -}}
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels: {{- (include "common.labels.matchLabels" ( dict "customLabels" $customLabels "context" .context )) | nindent 10 }}
{{- if not (empty $component) }}
{{ printf "app.kubernetes.io/component: %s" $component }}
{{- end }}
{{- range $key, $value := $extraMatchLabels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
topologyKey: {{ include "common.affinities.topologyKey" (dict "topologyKey" .topologyKey) }}
weight: 1
{{- range $extraPodAffinityTerms }}
- podAffinityTerm:
labelSelector:
matchLabels: {{- (include "common.labels.matchLabels" ( dict "customLabels" $customLabels "context" $.context )) | nindent 10 }}
{{- if not (empty $component) }}
{{ printf "app.kubernetes.io/component: %s" $component }}
{{- end }}
{{- range $key, $value := .extraMatchLabels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
topologyKey: {{ include "common.affinities.topologyKey" (dict "topologyKey" .topologyKey) }}
weight: {{ .weight | default 1 -}}
{{- end -}}
{{- end -}}
{{/*
Return a hard podAffinity/podAntiAffinity definition
{{ include "common.affinities.pods.hard" (dict "component" "FOO" "customLabels" .Values.podLabels "extraMatchLabels" .Values.extraMatchLabels "topologyKey" "BAR" "extraPodAffinityTerms" .Values.extraPodAffinityTerms "context" $) -}}
*/}}
{{- define "common.affinities.pods.hard" -}}
{{- $component := default "" .component -}}
{{- $customLabels := default (dict) .customLabels -}}
{{- $extraMatchLabels := default (dict) .extraMatchLabels -}}
{{- $extraPodAffinityTerms := default (list) .extraPodAffinityTerms -}}
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels: {{- (include "common.labels.matchLabels" ( dict "customLabels" $customLabels "context" .context )) | nindent 8 }}
{{- if not (empty $component) }}
{{ printf "app.kubernetes.io/component: %s" $component }}
{{- end }}
{{- range $key, $value := $extraMatchLabels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
topologyKey: {{ include "common.affinities.topologyKey" (dict "topologyKey" .topologyKey) }}
{{- range $extraPodAffinityTerms }}
- labelSelector:
matchLabels: {{- (include "common.labels.matchLabels" ( dict "customLabels" $customLabels "context" $.context )) | nindent 8 }}
{{- if not (empty $component) }}
{{ printf "app.kubernetes.io/component: %s" $component }}
{{- end }}
{{- range $key, $value := .extraMatchLabels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
topologyKey: {{ include "common.affinities.topologyKey" (dict "topologyKey" .topologyKey) }}
{{- end -}}
{{- end -}}
{{/*
Return a podAffinity/podAntiAffinity definition
{{ include "common.affinities.pods" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}}
*/}}
{{- define "common.affinities.pods" -}}
{{- if eq .type "soft" }}
{{- include "common.affinities.pods.soft" . -}}
{{- else if eq .type "hard" }}
{{- include "common.affinities.pods.hard" . -}}
{{- end -}}
{{- end -}}
```
|
```python
import numpy as np
import torch
from pyannote.audio.utils.permutation import permutate
def test_permutate_torch():
num_frames, num_speakers = 10, 3
actual_permutations = [
(0, 1, 2),
(0, 2, 1),
(1, 0, 2),
(1, 2, 0),
(2, 0, 1),
(2, 1, 0),
]
batch_size = len(actual_permutations)
y2 = torch.randn((num_frames, num_speakers))
y1 = torch.zeros((batch_size, num_frames, num_speakers))
for p, permutation in enumerate(actual_permutations):
y1[p] = y2[:, permutation]
permutated_y2, permutations = permutate(y1, y2)
assert actual_permutations == permutations
for p, permutation in enumerate(actual_permutations):
np.testing.assert_allclose(permutated_y2[p], y2[:, permutation])
def test_permutate_numpy():
num_frames, num_speakers = 10, 3
actual_permutations = [
(0, 1, 2),
(0, 2, 1),
(1, 0, 2),
(1, 2, 0),
(2, 0, 1),
(2, 1, 0),
]
batch_size = len(actual_permutations)
y2 = np.random.randn(num_frames, num_speakers)
y1 = np.zeros((batch_size, num_frames, num_speakers))
for p, permutation in enumerate(actual_permutations):
y1[p] = y2[:, permutation]
permutated_y2, permutations = permutate(y1, y2)
assert actual_permutations == permutations
for p, permutation in enumerate(actual_permutations):
np.testing.assert_allclose(permutated_y2[p], y2[:, permutation])
def test_permutate_less_speakers():
num_frames = 10
actual_permutations = [
(0, 1, None),
(0, None, 1),
(1, 0, None),
(1, None, 0),
(None, 0, 1),
(None, 1, 0),
]
batch_size = len(actual_permutations)
y2 = np.random.randn(num_frames, 2)
y1 = np.zeros((batch_size, num_frames, 3))
for p, permutation in enumerate(actual_permutations):
for i, j in enumerate(permutation):
if j is not None:
y1[p, :, i] = y2[:, j]
permutated_y2, permutations = permutate(y1, y2)
assert permutations == actual_permutations
def test_permutate_more_speakers():
num_frames = 10
actual_permutations = [
(0, 1),
(0, 2),
(1, 0),
(1, 2),
(2, 0),
(2, 1),
]
batch_size = len(actual_permutations)
y2 = np.random.randn(num_frames, 3)
y1 = np.zeros((batch_size, num_frames, 2))
for p, permutation in enumerate(actual_permutations):
for i, j in enumerate(permutation):
y1[p, :, i] = y2[:, j]
permutated_y2, permutations = permutate(y1, y2)
assert permutations == actual_permutations
np.testing.assert_allclose(permutated_y2, y1)
```
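The behaviour exercised by these tests can be illustrated with a brute-force reference implementation. This is only a sketch of the underlying idea, not pyannote's actual algorithm: for each batch item, try every column permutation of `y2` and keep the one minimizing mean absolute error.

```python
from itertools import permutations

import numpy as np


def brute_force_permutate(y1, y2):
    """For each batch item of y1, find the column permutation of y2
    (num_frames, num_speakers) minimizing mean absolute error, and
    return the permuted copies of y2 with the chosen permutations."""
    batch_size, num_frames, num_speakers = y1.shape
    best_perms, permutated = [], []
    for b in range(batch_size):
        best_perm, best_cost = None, np.inf
        for perm in permutations(range(num_speakers)):
            cost = np.abs(y1[b] - y2[:, list(perm)]).mean()
            if cost < best_cost:
                best_perm, best_cost = perm, cost
        best_perms.append(best_perm)
        permutated.append(y2[:, list(best_perm)])
    return np.stack(permutated), best_perms


# Mirror the test setup: each batch item is an exact permutation of y2,
# so the optimal cost is zero and the chosen permutations are recovered.
np.random.seed(0)
actual = [(0, 1, 2), (2, 1, 0)]
y2 = np.random.randn(10, 3)
y1 = np.stack([y2[:, list(p)] for p in actual])
out, perms = brute_force_permutate(y1, y2)
assert perms == actual
np.testing.assert_allclose(out, y1)
```

The quadratic-in-permutations search is only feasible for small speaker counts; for larger problems a linear-assignment solver over the pairwise cost matrix is the usual choice.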
|
National Airways Corporation is a commercial aviation company with its head office on the grounds of Lanseria Airport in Johannesburg, South Africa. The company offers a range of products and services for fixed-wing aircraft and helicopter markets, including aircraft sales, maintenance, parts, value-added products, aircraft charter, international operations, and pilot training. NAC Operations is the flight operations and charter division. NAC operates a South African network of offices; its main base is Lanseria Airport, with office hubs at Cape Town's V&A Waterfront, Durban, Ultimate Heliport in Midrand and Rand Airport. NAC also has shareholding in Discovery Jets, based in Fort Lauderdale, USA.
History
The general aviation company was established in 1946 and started operations on 25 May 1946 as National Air Charter. In 1962 it acquired United Air Services, and the following year National Air Charter was renamed National Airways Corporation (NAC).
In 1986 NAC acquired Wings Airways and merged with Namakwaland Lugdiens.
On 1 July 1999, Imperial Holdings, a mobility group listed on the Johannesburg Stock Exchange, purchased a 62% share of NAC.
In 2011, Naturelink Aviation was merged into National Airways Corporation.
The company is owned by a consortium of shareholders, including the Directors of NAC.
Dealerships and representation
NAC is an authorised dealer and representative for a variety of manufacturers and service providers in the aviation industry.
Aircraft manufacturers
Dassault Aviation
Bell Helicopter
Robinson Helicopter
Piper Aircraft
Quest Kodiak
Charter Fleet
As of February 2023 the National Airways charter fleet includes:
{| class="toccolours" border="1" cellpadding="3" style="white-space:nowrap; border-collapse:collapse; margin:auto;"
|+ National Airways Fleet
|- bgcolor=lightblue
! Aircraft
! In Fleet
! Notes
|-
|Boeing 737-500
|align="center"|
|
|-
|Global XRS
|align="center"|
|
|-
|Gulfstream G550
|align="center"|
|
|-
|Gulfstream 650ER
|align="center"|
|
|-
|Falcon 7X
|align="center"|
|
|-
|Hawker 4000
|align="center"|
|
|-
|Challenger 350
|align="center"|
|
|-
|Bombardier Learjet 60
|align="center"|
|
|-
|Bombardier Learjet 45XR
|align="center"|
|
|-
|Hawker 800XP
|align="center"|
|
|-
|Citation Mustang
|align="center"|
|
|-
|Learjet 35A
|align="center"|
|
|-
|King Air 200
|align="center"|
|
|-
|B1900D
|align="center"|
|
|-
|Embraer 120
|align="center"|
|
|-
|PC12
|align="center"|
|
|-
|Cessna 208B
|align="center"|
|
|-
|Cessna 402C
|align="center"|
|
|}
References
Aviation companies
Companies of South Africa
Commercial aviation
|
```javascript
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
import {range} from 'd3-array';
import {scaleLinear} from 'd3-scale';
export const ORIENTATION = {
TOP: 'top',
LEFT: 'left',
RIGHT: 'right',
BOTTOM: 'bottom',
VERTICAL: 'vertical',
HORIZONTAL: 'horizontal'
};
export const DIRECTION = {
VERTICAL: 'vertical',
HORIZONTAL: 'horizontal'
};
/**
* Get total amount of ticks from a given size in pixels.
* @param {number} size Size of the axis in pixels.
* @returns {number} Total amount of ticks.
*/
export function getTicksTotalFromSize(size) {
if (size < 700) {
if (size > 300) {
return 10;
}
return 5;
}
return 20;
}
/**
* Get the tick values from a given d3 scale.
* @param {d3.scale} scale Scale function.
* @param {number} tickTotal Total number of ticks
* @param {Array} tickValues Array of tick values if they exist.
* @returns {Array} Array of tick values.
*/
export function getTickValues(scale, tickTotal, tickValues) {
return !tickValues
? scale.ticks
? scale.ticks(tickTotal)
: scale.domain()
: tickValues;
}
/**
* Generate a description of a decorative axis in terms of a linear equation
* y = slope * x + offset in coordinates
* @param {Object} axisStart Object of format {x, y} describing in coordinates
* the start position of the decorative axis
* @param {Object} axisEnd Object of format {x, y} describing in coordinates
 * the end position of the decorative axis
 * @returns {Object} Object describing the line in coordinates
*/
export function generateFit(axisStart, axisEnd) {
// address the special case when the slope is infinite
if (axisStart.x === axisEnd.x) {
return {
left: axisStart.y,
right: axisEnd.y,
slope: 0,
offset: axisStart.x
};
}
const slope = (axisStart.y - axisEnd.y) / (axisStart.x - axisEnd.x);
return {
left: axisStart.x,
right: axisEnd.x,
// generate the linear projection of the axis direction
slope,
offset: axisStart.y - slope * axisStart.x
};
}
/**
* Generate a description of a decorative axis in terms of a linear equation
* y = slope * x + offset in coordinates
 * @param {Object} props
 * @param {Object} props.axisStart Object of format {x, y} describing in coordinates
 * the start position of the decorative axis
 * @param {Object} props.axisEnd Object of format {x, y} describing in coordinates
 * the end position of the decorative axis
 * @param {Number} props.numberOfTicks The number of ticks on the axis
 * @param {Array.<Number>} props.axisDomain The values to be interpolated across for the axis
 * @returns {Object} Object describing the slope and the specific coordinates of the points
*/
export function generatePoints({
axisStart,
axisEnd,
numberOfTicks,
axisDomain
}) {
const {left, right, slope, offset} = generateFit(axisStart, axisEnd);
// construct a linear band of points, then map them
const pointSlope = (right - left) / numberOfTicks;
const axisScale = scaleLinear()
.domain([left, right])
.range(axisDomain);
const slopeVertical = axisStart.x === axisEnd.x;
return {
slope: slopeVertical ? Infinity : slope,
points: range(left, right + pointSlope, pointSlope)
.map(val => {
if (slopeVertical) {
return {y: val, x: slope * val + offset, text: axisScale(val)};
}
return {x: val, y: slope * val + offset, text: axisScale(val)};
})
.slice(0, numberOfTicks + 1)
};
}
/**
* Compute the angle (in radians) of a decorative axis
* @param {Object} axisStart Object of format {x, y} describing in coordinates
* the start position of the decorative axis
* @param {Object} axisEnd Object of format {x, y} describing in coordinates
 * the end position of the decorative axis
 * @returns {Number} Angle in radians
*/
export function getAxisAngle(axisStart, axisEnd) {
if (axisStart.x === axisEnd.x) {
return axisEnd.y > axisStart.y ? Math.PI / 2 : (3 * Math.PI) / 2;
}
return Math.atan((axisEnd.y - axisStart.y) / (axisEnd.x - axisStart.x));
}
export default {
DIRECTION,
ORIENTATION,
getTicksTotalFromSize,
getTickValues
};
```
|
```csharp
using System;
using System.Collections.Generic;
using Certify.Models;
using Certify.Models.Config;
namespace Certify.Config
{
public enum TaskTriggerType
{
/// <summary>
/// Task will not run
/// </summary>
NOT_ENABLED = 0,
/// <summary>
/// Task will run for any status
/// </summary>
ANY_STATUS = 1,
/// <summary>
/// Task will run if the primary request succeeded
/// </summary>
ON_SUCCESS = 2,
/// <summary>
/// Task will run if the primary request failed
/// </summary>
ON_ERROR = 4,
/// <summary>
/// Manual tasks don't run automatically and are only started by the user via the UI or via the command line
/// </summary>
MANUAL = 8
}
public class DeploymentTaskTypes
{
public static Dictionary<string, string> TargetTypes { get; set; } = new Dictionary<string, string>
{
{ StandardAuthTypes.STANDARD_AUTH_LOCAL,"Local (as current service user)"},
{ StandardAuthTypes.STANDARD_AUTH_LOCAL_AS_USER,"Local (as specific user)"},
{ StandardAuthTypes.STANDARD_AUTH_WINDOWS,"Windows (Network)"},
{ StandardAuthTypes.STANDARD_AUTH_SSH,"SSH (Remote)"}
};
public static Dictionary<TaskTriggerType, string> TriggerTypes { get; set; } = new Dictionary<TaskTriggerType, string>
{
{ TaskTriggerType.NOT_ENABLED,"Disabled (Will Not Run)"},
{ TaskTriggerType.ANY_STATUS,"Run On Success or On Error"},
{ TaskTriggerType.ON_SUCCESS,"Run On Success"},
{ TaskTriggerType.ON_ERROR,"Run On Error"},
{ TaskTriggerType.MANUAL,"Manual (run using UI or command line)"}
};
}
public class DeploymentTaskConfig
{
public string? Id { get; set; }
/// <summary>
/// id of task provider to instantiate
/// </summary>
public string? TaskTypeId { get; set; }
/// <summary>
/// Unique task name (id) used in logs and to invoke this deployment task manually
/// </summary>
public string? TaskName { get; set; } = string.Empty;
/// <summary>
/// Optional description for this deployment tasks (i.e. what it does and why)
/// </summary>
public string? Description { get; set; } = string.Empty;
/// <summary>
/// if true, deployment will stop at this step and report as an error, deployment is not considered complete
/// if false, error will be logged as a warning, next deployment step will continue and overall deployment will be marked as successful (depending on other deployment steps)
/// </summary>
public bool IsFatalOnError { get; set; }
/// <summary>
/// If greater than 0, attempt up to N retries before failing
/// </summary>
public int RetriesAllowed { get; set; }
/// <summary>
/// Time to wait between retry attempts
/// </summary>
public int RetryDelaySeconds { get; set; } = 10;
/// <summary>
/// The challenge provider is the authentication type required (Local, Network, SSH etc)
/// </summary>
public string? ChallengeProvider { get; set; } = string.Empty;
public string? ChallengeCredentialKey { get; set; } = string.Empty;
/// <summary>
/// hostname or IP of target (if required)
/// </summary>
public string? TargetHost { get; set; } = string.Empty;
/// <summary>
        /// List of provider parameter values
/// </summary>
public List<ProviderParameterSetting>? Parameters { get; set; } = new();
public DateTimeOffset? DateLastExecuted { get; set; }
public string? LastResult { get; set; }
public RequestState? LastRunStatus { get; set; }
/// <summary>
/// The request result state which triggers the task (All, Success, Error)
/// </summary>
public TaskTriggerType TaskTrigger { get; set; } = TaskTriggerType.ANY_STATUS;
/// <summary>
/// If true, this task will run even if the last task in the sequence failed (default=false)
/// </summary>
public bool RunIfLastStepFailed { get; set; }
}
}
```
|
The White House Political Director, formally the Director of the Office of Political Affairs (OPA) or Director of the Office of Political Strategy and Outreach (OPSO), is a political appointee of the President of the United States and a senior member of the Executive Office of the President of the United States.
History
The White House Office of Political Affairs was first formally established in 1981 under Ronald Reagan, though Jimmy Carter had been the first to designate a political director, in 1978.
Subsequent administrations have rebranded the office. During his second term, President Obama renamed it the Office of Political Strategy and Outreach, though the roles and responsibilities of the office and its director remained unchanged.
List
Political and Intergovernmental Affairs
During the second term of the Reagan administration, there was a director of political and intergovernmental affairs who sat above the political director and intergovernmental affairs director.
In popular culture
Paulo Costanzo portrays Lyor Boone, the fictional White House Political Director, in Designated Survivor, a political thriller television series.
References
Executive Office of the President of the United States
White House Directors of Speechwriting
White House
White House Office
|
```java
/**
* SusiIdea
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
*
* along with this program in the file lgpl21.txt
* If not, see <path_to_url
*/
package ai.susi.mind;
import java.util.LinkedHashSet;
import java.util.regex.PatternSyntaxException;
import ai.susi.DAO;
import ai.susi.mind.SusiPattern.SusiMatcher;
import ai.susi.server.ClientIdentity;
/**
 * An idea is the application of an intent on a specific input. This matches the everyday notion of an idea, where
 * an idea is the 'sudden' solution to a problem with the hint how to apply the idea's core concept
 * on the given input details. That is what this class does: it combines an intent with the pattern
 * from the input that matched the intent.
*/
public class SusiIdea {
private SusiIntent intent;
private LinkedHashSet<SusiMatcher> matchers;
/**
     * create an idea based on an intent
* @param intent the intent that matched
* @throws PatternSyntaxException
*/
public SusiIdea(SusiIntent intent) throws PatternSyntaxException {
this.intent = intent;
this.matchers = null;
}
public SusiIntent getIntent() {
return this.intent;
}
/**
     * Set matchers for an idea. Having matchers makes the idea 'valid': it means
     * that the idea can be instantiated with a query.
     * @param matchers the matchers to attach to this idea
* @return
*/
public SusiIdea setMatchers(LinkedHashSet<SusiMatcher> matchers) {
this.matchers = matchers;
return this;
}
/**
* Generate a proof that the idea is correct!
     * Several intents can be candidates for answer computation. Each such intent is expressed as
     * a SusiIdea object. They are combined with a recall (data objects from past answer computations)
* and tested by construction of an answer as the result of a causality chain that is described in the
* idea. If the chain can be constructed by finding instances of variables, then this is a kind of
* proof that the answer is correct. That answer is returned in the SusiArgument object.
* @param recall the data objects from past computations
* @param identity the identity of the user
* @param userLanguage the language of the user
* @param minds the hierarchy of mind layers that may be used for reflection within the argument
* @return the result of the application of the intent, a thought argument containing the thoughts which terminated into a final mindstate or NULL if the consideration should be rejected
*/
public SusiArgument consideration(
SusiThought recall,
ClientIdentity identity,
SusiLanguage userLanguage,
SusiMind... minds) {
// the argument is filled with an idea which consists of the query from which we extract the identified data entities
if (this.matchers != null) alternatives: for (SusiMatcher matcher: this.matchers) {
// initialize keynote (basic data for unification) for flow
SusiThought keynote = new SusiThought(matcher);
// we deduced thoughts from the inferences in the intents. The keynote also carries these actions
this.intent.getActionsClone().forEach(action -> keynote.addAction(action));
DAO.log("Susi has an idea: on " + keynote.toString() + " apply " + this.intent.toJSON());
// we start with the recall from previous interactions as new flow
final SusiArgument flow = new SusiArgument(identity, userLanguage, minds) // empty flow
.think(recall) // the past
.think(keynote); // the idea, including actions (also: the "now")
// lets apply the intents that belong to this specific consideration
for (SusiInference inference: this.intent.getInferences()) {
SusiThought implication = inference.applyProcedures(flow);
DAO.log("Susi is thinking about: " + implication.toString());
// make sure that we are not stuck:
// in case that we are stuck (== no progress was made) we consider the next alternative matcher
if (implication.isFailed() || flow.mindstate().equals(implication)) continue alternatives; // TODO: do this only if specific marker is in intent
// think
flow.think(implication); // the future
}
// add skill source
flow.addSkill(this.intent.getSkillID(), this.intent.getUtterances().iterator().next().getLine());
return flow;
}
// fail, no alternative was successful
return null;
}
@Override
public int hashCode() {
// we compare ideas only by the intent
return this.intent.hashCode();
}
@Override
public boolean equals(Object o) {
// we compare ideas only by the intent
if (!(o instanceof SusiIdea)) return false;
SusiIdea si = (SusiIdea) o;
return this.intent.equals(si.intent);
}
@Override
public String toString() {
return this.intent.toString();
}
}
```
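The `consideration` method above follows a try-alternatives pattern: each matcher seeds a flow, the inferences are applied in order, and a whole alternative is abandoned as soon as an inference fails or makes no progress. A minimal sketch of that control flow, using hypothetical list-based flows and inference callables rather than the SUSI API:

```python
def consideration(matchers, inferences, recall):
    """Try each matcher as an alternative; return the first flow that
    survives every inference, or None if all alternatives get stuck."""
    for matcher in matchers:           # the 'alternatives' loop
        flow = [recall, matcher]       # keynote: recall (the past) + matched idea (the "now")
        stuck = False
        for inference in inferences:
            implication = inference(flow)
            # no progress (failed, or mindstate unchanged) -> try the next alternative
            if implication is None or implication == flow[-1]:
                stuck = True
                break
            flow.append(implication)   # think: extend the causality chain
        if not stuck:
            return flow                # the chain was constructed: a 'proof'
    return None                        # fail, no alternative was successful
```

The key design choice mirrored here is that a stuck inference does not fail the whole idea; it only discards the current matcher and moves on to the next alternative.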
|
```javascript
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// Flags: --allow-natives-syntax
function f(get, ...a) {
for (let i = 0; i < 1000; i++) {
if (i === 999) %OptimizeOsr();
a.map(f);
}
return get();
}
assertThrows(f);
```
|
Lucy C. Laney Comprehensive High School (Laney High School) is a public high school in the Laney-Walker district of Augusta, Georgia, United States. It was formed in 1949 through the merger of A. R. Johnson and the Haines Normal and Industrial Institute. From that merger, Laney derived its mascot, the "Wildcat," and its school colors of red and grey.
In the summer of 1951, the old building was torn down on the Haines site, and the new building was started. During the construction, classes were held at another site. In September 1953, Lucy Laney High School moved into its new building with Dr. C.W. Johnson as principal. In 1964, the music building was added with spacious new choral and band rooms. In 1981, renovations were made to the building to update the library facilities and the main offices. Air conditioning was installed.
During the 1996–97 school year, work started on a renovation for school improvements costing approximately seven million dollars. This added ten new classrooms, a technology lab, a new media center, an expansion of the gym with a concession area, and new restrooms and furnishings. The new facilities were completed during the 1997–98 school year. In the fall of 2007, a new 12 million-dollar athletic complex was opened, which included a 9000-seat football stadium. The school had previously been without a home field for over 30 years.
In 2014, Laney High School began another major renovation project. Students attended the nearby Tubman Education Center while the school was given a complete 23 million-dollar overhaul. The new facility opened in the fall of 2016 and includes 23 new classrooms, a fine arts building, a cosmetology lab, a mock courtroom for the law and justice program, a rifle range for the Reserve Officers' Training Corps, upgraded cafeteria with outdoor seating for seniors, and a new gymnasium.
Laney High School now offers two magnet programs open to all students of Richmond County. The Academy for Advanced Placement Studies enables students to pursue college-level studies while still in high school by offering numerous AP courses. Beginning in the fall of 2017, the Early College Academy will admit qualified students to take accelerated courses in grades 9 and 10 and then enroll full-time at Augusta University during grades 11 and 12.
Among graduates, the literacy rate is 63% for women and 42% for men, comparatively high figures for a high school in a deprived area.
Notable alumni
Chip Banks, NFL linebacker
Kendrell Bell, NFL linebacker
Emerson Boozer, NFL running back
Corvey Irvin, NFL defensive tackle
Jessye Norman, opera singer, soprano
Curtis Rouse, NFL offensive lineman
Jermaine Smith, NFL defensive tackle
Jaylen Watson, Super Bowl Champion NFL defensive back
Bob Wells, NFL offensive tackle
See also
Richmond County School System
References
External links
Lucy C. Laney High School - official website
Lucy C. Laney High School Alumni Association - alumni website
Educational institutions established in 1949
High schools in Richmond County, Georgia
Public high schools in Georgia (U.S. state)
1949 establishments in Georgia (U.S. state)
|
```javascript
var fs = require('fs')
var path = require('path')
var mr = require('npm-registry-mock')
var test = require('tap').test
var common = require('../common-tap.js')
var pkg = common.pkg
var opts = [
'--cache', common.cache,
'--registry', common.registry
]
var desired = {
name: 'npm-test-shrinkwrap-dev-dependency',
version: '0.0.0',
dependencies: {
request: {
version: '0.9.0',
resolved: common.registry + '/request/-/request-0.9.0.tgz',
integrity: 'sha1-EEn1mm9GWI5tAwkh+7hMovDCcU4='
},
underscore: {
version: '1.3.1',
resolved: common.registry + '/underscore/-/underscore-1.3.1.tgz',
integrity: 'sha1-bLiq0Od+tdu/tUsivNhpcwnPlkE='
}
}
}
var json = {
author: 'Domenic Denicola',
name: 'npm-test-shrinkwrap-dev-dependency',
version: '0.0.0',
dependencies: {
request: '0.9.0',
underscore: '1.3.1'
},
devDependencies: {
underscore: '1.5.1'
}
}
test("shrinkwrap doesn't strip out the dependency", function (t) {
t.plan(3)
fs.writeFileSync(path.join(pkg, 'package.json'), JSON.stringify(json, null, 2))
process.chdir(pkg)
mr({port: common.port}, function (er, s) {
common.npm(opts.concat(['install', '.']), {stdio: [0, 'pipe', 2]}, function (err, code) {
if (err) throw err
if (!t.is(code, 0)) return (s.close(), t.end())
common.npm(opts.concat(['shrinkwrap']), {stdio: [0, 2, 2]}, function (err, code) {
if (err) throw err
t.is(code, 0)
var results
try {
results = JSON.parse(fs.readFileSync(path.join(pkg, 'npm-shrinkwrap.json')))
} catch (ex) {
t.comment(ex)
}
t.deepEqual(results.dependencies, desired.dependencies)
s.close()
t.end()
})
})
})
})
```
|
```kotlin
package expo.modules.updates.loader
import android.net.Uri
import androidx.test.internal.runner.junit4.AndroidJUnit4ClassRunner
import androidx.test.platform.app.InstrumentationRegistry
import expo.modules.updates.TestUtils.asJSONResponse
import expo.modules.updates.TestUtils.asResponse
import expo.modules.updates.UpdatesConfiguration
import expo.modules.updates.codesigning.CODE_SIGNING_METADATA_KEY_ID_KEY
import expo.modules.updates.codesigning.CertificateFixtures
import expo.modules.updates.codesigning.TestCertificateType
import expo.modules.updates.codesigning.getTestCertificate
import expo.modules.updates.manifest.Update
import okhttp3.Headers.Companion.toHeaders
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.MediaType.Companion.toMediaTypeOrNull
import okhttp3.MultipartBody
import okhttp3.Protocol
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody
import okhttp3.Response
import okhttp3.ResponseBody.Companion.toResponseBody
import org.junit.Assert
import org.junit.Test
import org.junit.runner.RunWith
@RunWith(AndroidJUnit4ClassRunner::class)
class FileDownloaderManifestParsingTest {
@Test
fun testManifestParsing_JSONBody() {
val context = InstrumentationRegistry.getInstrumentation().targetContext
val response = CertificateFixtures.testExpoUpdatesManifestBody.asJSONResponse(mapOf("expo-protocol-version" to "0").toHeaders())
val configuration = UpdatesConfiguration(
null,
mapOf(
UpdatesConfiguration.UPDATES_CONFIGURATION_UPDATE_URL_KEY to Uri.parse("path_to_url")
)
)
var errorOccurred = false
var resultUpdate: Update? = null
FileDownloader(context, configuration).parseRemoteUpdateResponse(
response,
object : FileDownloader.RemoteUpdateDownloadCallback {
override fun onFailure(message: String, e: Exception) {
errorOccurred = true
}
override fun onSuccess(updateResponse: UpdateResponse) {
resultUpdate = updateResponse.manifestUpdateResponsePart?.update
}
}
)
Assert.assertFalse(errorOccurred)
Assert.assertNotNull(resultUpdate)
Assert.assertFalse(resultUpdate!!.manifest.isVerified())
}
@Test
fun testManifestParsing_MultipartBody() {
val context = InstrumentationRegistry.getInstrumentation().targetContext
val boundary = "blah"
val extensions = "{}"
val directive = CertificateFixtures.testDirectiveNoUpdateAvailable
val response = MultipartBody.Builder(boundary)
.setType(MultipartBody.MIXED)
.addFormDataPart("extraneous", "hello1", "hello".toRequestBody("text/plain; charset=utf-8".toMediaTypeOrNull()))
.addFormDataPart("manifest", "hello2", CertificateFixtures.testExpoUpdatesManifestBody.toRequestBody("application/json; charset=utf-8".toMediaTypeOrNull()))
.addFormDataPart("extensions", "hello3", extensions.toRequestBody("application/json; charset=utf-8".toMediaTypeOrNull()))
.addFormDataPart("directive", "hello3", directive.toRequestBody("application/json; charset=utf-8".toMediaTypeOrNull()))
.build()
.asResponse(mapOf("expo-protocol-version" to "0").toHeaders())
val configuration = UpdatesConfiguration(
null,
mapOf(
UpdatesConfiguration.UPDATES_CONFIGURATION_UPDATE_URL_KEY to Uri.parse("path_to_url")
)
)
var errorOccurred = false
var resultUpdateResponse: UpdateResponse? = null
FileDownloader(context, configuration).parseRemoteUpdateResponse(
response,
object : FileDownloader.RemoteUpdateDownloadCallback {
override fun onFailure(message: String, e: Exception) {
errorOccurred = true
}
override fun onSuccess(updateResponse: UpdateResponse) {
resultUpdateResponse = updateResponse
}
}
)
Assert.assertFalse(errorOccurred)
Assert.assertNotNull(resultUpdateResponse)
Assert.assertNotNull(resultUpdateResponse!!.manifestUpdateResponsePart)
Assert.assertFalse(resultUpdateResponse!!.manifestUpdateResponsePart!!.update.manifest.isVerified())
Assert.assertNotNull(resultUpdateResponse!!.directiveUpdateResponsePart)
Assert.assertTrue(resultUpdateResponse!!.directiveUpdateResponsePart!!.updateDirective is UpdateDirective.NoUpdateAvailableUpdateDirective)
}
@Test
fun testManifestParsing_MultipartBodyOnlyDirective() {
val context = InstrumentationRegistry.getInstrumentation().targetContext
val boundary = "blah"
val directive = CertificateFixtures.testDirectiveNoUpdateAvailable
val response = MultipartBody.Builder(boundary)
.setType(MultipartBody.MIXED)
.addFormDataPart("directive", "hello3", directive.toRequestBody("application/json; charset=utf-8".toMediaTypeOrNull()))
.build()
.asResponse()
val configuration = UpdatesConfiguration(
null,
mapOf(
UpdatesConfiguration.UPDATES_CONFIGURATION_UPDATE_URL_KEY to Uri.parse("path_to_url")
)
)
var errorOccurred = false
var resultUpdateResponse: UpdateResponse? = null
FileDownloader(context, configuration).parseRemoteUpdateResponse(
response,
object : FileDownloader.RemoteUpdateDownloadCallback {
override fun onFailure(message: String, e: Exception) {
errorOccurred = true
}
override fun onSuccess(updateResponse: UpdateResponse) {
resultUpdateResponse = updateResponse
}
}
)
Assert.assertFalse(errorOccurred)
Assert.assertNotNull(resultUpdateResponse)
Assert.assertNull(resultUpdateResponse!!.manifestUpdateResponsePart)
Assert.assertNotNull(resultUpdateResponse!!.directiveUpdateResponsePart)
Assert.assertTrue(resultUpdateResponse!!.directiveUpdateResponsePart!!.updateDirective is UpdateDirective.NoUpdateAvailableUpdateDirective)
}
@Test
fun testManifestParsing_MultipartBodyOnlyDirectiveV0CompatibilityMode() {
val context = InstrumentationRegistry.getInstrumentation().targetContext
val boundary = "blah"
val directive = CertificateFixtures.testDirectiveNoUpdateAvailable
val response = MultipartBody.Builder(boundary)
.setType(MultipartBody.MIXED)
.addFormDataPart("directive", "hello3", directive.toRequestBody("application/json; charset=utf-8".toMediaTypeOrNull()))
.build()
.asResponse()
val configuration = UpdatesConfiguration(
null,
mapOf(
UpdatesConfiguration.UPDATES_CONFIGURATION_UPDATE_URL_KEY to Uri.parse("path_to_url"),
UpdatesConfiguration.UPDATES_CONFIGURATION_ENABLE_EXPO_UPDATES_PROTOCOL_V0_COMPATIBILITY_MODE to true
)
)
var errorOccurred: Exception? = null
var resultUpdate: Update? = null
FileDownloader(context, configuration).parseRemoteUpdateResponse(
response,
object : FileDownloader.RemoteUpdateDownloadCallback {
override fun onFailure(message: String, e: Exception) {
errorOccurred = e
}
override fun onSuccess(updateResponse: UpdateResponse) {
resultUpdate = updateResponse.manifestUpdateResponsePart?.update
}
}
)
Assert.assertEquals("Multipart response missing manifest part. Manifest is required in version 0 of the expo-updates protocol. This may be due to the update being a rollback or other directive.", errorOccurred!!.message)
Assert.assertNull(resultUpdate)
}
@Test
fun testManifestParsing_MultipartBodyNoRelevantParts() {
val context = InstrumentationRegistry.getInstrumentation().targetContext
val boundary = "blah"
val response = MultipartBody.Builder(boundary)
.setType(MultipartBody.MIXED)
.addFormDataPart("fake", " filename", "".toRequestBody("application/json; charset=utf-8".toMediaTypeOrNull()))
.build()
.asResponse()
val configuration = UpdatesConfiguration(
null,
mapOf(
UpdatesConfiguration.UPDATES_CONFIGURATION_UPDATE_URL_KEY to Uri.parse("path_to_url")
)
)
var errorOccurred = false
var resultUpdateResponse: UpdateResponse? = null
FileDownloader(context, configuration).parseRemoteUpdateResponse(
response,
object : FileDownloader.RemoteUpdateDownloadCallback {
override fun onFailure(message: String, e: Exception) {
errorOccurred = true
}
override fun onSuccess(updateResponse: UpdateResponse) {
resultUpdateResponse = updateResponse
}
}
)
Assert.assertFalse(errorOccurred)
Assert.assertNotNull(resultUpdateResponse)
Assert.assertNull(resultUpdateResponse!!.manifestUpdateResponsePart)
Assert.assertNull(resultUpdateResponse!!.directiveUpdateResponsePart)
}
@Test
fun testManifestParsing_MultipartBodyEmpty() {
val context = InstrumentationRegistry.getInstrumentation().targetContext
val boundary = "blah"
val response = Response.Builder()
.request(Request.Builder().url("path_to_url").build())
.protocol(Protocol.HTTP_2)
.message("")
.code(200)
.body("".toResponseBody("${MultipartBody.MIXED}; boundary=$boundary".toMediaType()))
.build()
val configuration = UpdatesConfiguration(
null,
mapOf(
UpdatesConfiguration.UPDATES_CONFIGURATION_UPDATE_URL_KEY to Uri.parse("path_to_url")
)
)
var errorOccurred = false
var resultUpdateResponse: UpdateResponse? = null
FileDownloader(context, configuration).parseRemoteUpdateResponse(
response,
object : FileDownloader.RemoteUpdateDownloadCallback {
override fun onFailure(message: String, e: Exception) {
errorOccurred = true
}
override fun onSuccess(updateResponse: UpdateResponse) {
resultUpdateResponse = updateResponse
}
}
)
Assert.assertFalse(errorOccurred)
Assert.assertNotNull(resultUpdateResponse)
Assert.assertNull(resultUpdateResponse!!.manifestUpdateResponsePart)
Assert.assertNull(resultUpdateResponse!!.directiveUpdateResponsePart)
}
@Test
fun testManifestParsing_NullBodyResponseProtocol1() {
val context = InstrumentationRegistry.getInstrumentation().targetContext
val response = Response.Builder()
.request(Request.Builder().url("path_to_url").build())
.protocol(Protocol.HTTP_2)
.message("")
.code(200)
.header("expo-protocol-version", "1")
.body(null)
.build()
val configuration = UpdatesConfiguration(
null,
mapOf(
UpdatesConfiguration.UPDATES_CONFIGURATION_UPDATE_URL_KEY to Uri.parse("path_to_url")
)
)
var errorOccurred = false
var resultUpdateResponse: UpdateResponse? = null
FileDownloader(context, configuration).parseRemoteUpdateResponse(
response,
object : FileDownloader.RemoteUpdateDownloadCallback {
override fun onFailure(message: String, e: Exception) {
errorOccurred = true
}
override fun onSuccess(updateResponse: UpdateResponse) {
resultUpdateResponse = updateResponse
}
}
)
Assert.assertFalse(errorOccurred)
Assert.assertNotNull(resultUpdateResponse)
Assert.assertNull(resultUpdateResponse!!.manifestUpdateResponsePart)
Assert.assertNull(resultUpdateResponse!!.directiveUpdateResponsePart)
}
@Test
fun testManifestParsing_204ResponseProtocol1() {
val context = InstrumentationRegistry.getInstrumentation().targetContext
val response = Response.Builder()
.request(Request.Builder().url("path_to_url").build())
.protocol(Protocol.HTTP_2)
.message("")
.code(204)
.header("expo-protocol-version", "1")
.body(null)
.build()
val configuration = UpdatesConfiguration(
null,
mapOf(
UpdatesConfiguration.UPDATES_CONFIGURATION_UPDATE_URL_KEY to Uri.parse("path_to_url")
)
)
var errorOccurred = false
var resultUpdateResponse: UpdateResponse? = null
FileDownloader(context, configuration).parseRemoteUpdateResponse(
response,
object : FileDownloader.RemoteUpdateDownloadCallback {
override fun onFailure(message: String, e: Exception) {
errorOccurred = true
}
override fun onSuccess(updateResponse: UpdateResponse) {
resultUpdateResponse = updateResponse
}
}
)
Assert.assertFalse(errorOccurred)
Assert.assertNotNull(resultUpdateResponse)
Assert.assertNull(resultUpdateResponse!!.manifestUpdateResponsePart)
Assert.assertNull(resultUpdateResponse!!.directiveUpdateResponsePart)
}
@Test
fun testManifestParsing_204ResponseNoProtocol() {
val context = InstrumentationRegistry.getInstrumentation().targetContext
val response = Response.Builder()
.request(Request.Builder().url("path_to_url").build())
.protocol(Protocol.HTTP_2)
.message("")
.code(204)
.body(null)
.build()
val configuration = UpdatesConfiguration(
null,
mapOf(
UpdatesConfiguration.UPDATES_CONFIGURATION_UPDATE_URL_KEY to Uri.parse("path_to_url")
)
)
var errorOccurred: Exception? = null
var resultUpdate: Update? = null
FileDownloader(context, configuration).parseRemoteUpdateResponse(
response,
object : FileDownloader.RemoteUpdateDownloadCallback {
override fun onFailure(message: String, e: Exception) {
errorOccurred = e
}
override fun onSuccess(updateResponse: UpdateResponse) {
resultUpdate = updateResponse.manifestUpdateResponsePart?.update
}
}
)
Assert.assertEquals("Missing body in remote update", errorOccurred!!.message)
Assert.assertNull(resultUpdate)
}
@Test
fun testManifestParsing_JSONBodySigned() {
val headersMap = mapOf(
"expo-protocol-version" to "0",
"expo-sfv-version" to "0",
"expo-signature" to CertificateFixtures.testExpoUpdatesManifestBodySignature
)
val context = InstrumentationRegistry.getInstrumentation().targetContext
val response = CertificateFixtures.testExpoUpdatesManifestBody.asJSONResponse(headersMap.toHeaders())
val testCertificate = getTestCertificate(TestCertificateType.VALID)
val configuration = UpdatesConfiguration(
null,
mapOf(
UpdatesConfiguration.UPDATES_CONFIGURATION_UPDATE_URL_KEY to Uri.parse("path_to_url"),
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_CERTIFICATE to testCertificate,
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_METADATA to mapOf<String, String>()
)
)
var errorOccurred = false
var resultUpdate: Update? = null
FileDownloader(context, configuration).parseRemoteUpdateResponse(
response,
object : FileDownloader.RemoteUpdateDownloadCallback {
override fun onFailure(message: String, e: Exception) {
errorOccurred = true
}
override fun onSuccess(updateResponse: UpdateResponse) {
resultUpdate = updateResponse.manifestUpdateResponsePart?.update
}
}
)
Assert.assertFalse(errorOccurred)
Assert.assertNotNull(resultUpdate)
Assert.assertTrue(resultUpdate!!.manifest.isVerified())
}
@Test
fun testManifestParsing_MultipartBodySigned() {
val context = InstrumentationRegistry.getInstrumentation().targetContext
val boundary = "blah"
val headersMap = mapOf(
"expo-protocol-version" to "0",
"expo-sfv-version" to "0"
)
val extensions = "{}"
val directive = CertificateFixtures.testDirectiveNoUpdateAvailable
val response = MultipartBody.Builder(boundary)
.setType(MultipartBody.MIXED)
.addFormDataPart("extraneous", "hello1", "hello".toRequestBody("text/plain; charset=utf-8".toMediaTypeOrNull()))
.addPart(
mapOf(
"Content-Disposition" to "form-data; name=\"manifest\"; filename=\"hello2\"",
"expo-signature" to CertificateFixtures.testExpoUpdatesManifestBodySignature
)
.toHeaders(),
CertificateFixtures.testExpoUpdatesManifestBody.toRequestBody("application/json; charset=utf-8".toMediaTypeOrNull())
)
.addFormDataPart("extensions", "hello3", extensions.toRequestBody("application/json; charset=utf-8".toMediaTypeOrNull()))
.addPart(
mapOf(
"Content-Disposition" to "form-data; name=\"directive\"; filename=\"hello3\"",
"expo-signature" to CertificateFixtures.testDirectiveNoUpdateAvailableSignature
)
.toHeaders(),
directive.toRequestBody("application/json; charset=utf-8".toMediaTypeOrNull())
)
.build()
.asResponse(headersMap.toHeaders())
val testCertificate = getTestCertificate(TestCertificateType.VALID)
val configuration = UpdatesConfiguration(
null,
mapOf(
UpdatesConfiguration.UPDATES_CONFIGURATION_UPDATE_URL_KEY to Uri.parse("path_to_url"),
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_CERTIFICATE to testCertificate,
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_METADATA to mapOf<String, String>()
)
)
var errorOccurred = false
var resultUpdateResponse: UpdateResponse? = null
FileDownloader(context, configuration).parseRemoteUpdateResponse(
response,
object : FileDownloader.RemoteUpdateDownloadCallback {
override fun onFailure(message: String, e: Exception) {
errorOccurred = true
}
override fun onSuccess(updateResponse: UpdateResponse) {
resultUpdateResponse = updateResponse
}
}
)
Assert.assertFalse(errorOccurred)
Assert.assertNotNull(resultUpdateResponse)
Assert.assertTrue(resultUpdateResponse!!.manifestUpdateResponsePart!!.update.manifest.isVerified())
Assert.assertNotNull(resultUpdateResponse!!.directiveUpdateResponsePart)
Assert.assertTrue(resultUpdateResponse!!.directiveUpdateResponsePart!!.updateDirective is UpdateDirective.NoUpdateAvailableUpdateDirective)
}
@Test
fun testManifestParsing_JSONBodySigned_UnsignedRequest() {
val headersMap = mapOf(
"expo-protocol-version" to "0",
"expo-sfv-version" to "0"
)
val context = InstrumentationRegistry.getInstrumentation().targetContext
val response = CertificateFixtures.testExpoUpdatesManifestBody.asJSONResponse(headersMap.toHeaders())
val testCertificate = getTestCertificate(TestCertificateType.VALID)
val configuration = UpdatesConfiguration(
null,
mapOf(
UpdatesConfiguration.UPDATES_CONFIGURATION_UPDATE_URL_KEY to Uri.parse("path_to_url"),
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_CERTIFICATE to testCertificate,
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_METADATA to mapOf<String, String>()
)
)
var errorOccurred: Exception? = null
var resultUpdate: Update? = null
FileDownloader(context, configuration).parseRemoteUpdateResponse(
response,
object : FileDownloader.RemoteUpdateDownloadCallback {
override fun onFailure(message: String, e: Exception) {
errorOccurred = e
}
override fun onSuccess(updateResponse: UpdateResponse) {
resultUpdate = updateResponse.manifestUpdateResponsePart?.update
}
}
)
Assert.assertEquals("No expo-signature header specified", errorOccurred!!.message)
Assert.assertNull(resultUpdate)
}
@Test
fun testManifestParsing_MultipartBodySignedCertificateParticularExperience() {
val context = InstrumentationRegistry.getInstrumentation().targetContext
val boundary = "blah"
val headersMap = mapOf(
"expo-protocol-version" to "0",
"expo-sfv-version" to "0"
)
val extensions = "{}"
val directive = CertificateFixtures.testDirectiveNoUpdateAvailable
val leafCert = getTestCertificate(TestCertificateType.CHAIN_LEAF)
val intermediateCert = getTestCertificate(TestCertificateType.CHAIN_INTERMEDIATE)
val rootCert = getTestCertificate(TestCertificateType.CHAIN_ROOT)
val response = MultipartBody.Builder(boundary)
.setType(MultipartBody.MIXED)
.addFormDataPart("extraneous", "hello1", "hello".toRequestBody("text/plain; charset=utf-8".toMediaTypeOrNull()))
.addPart(
mapOf(
"Content-Disposition" to "form-data; name=\"manifest\"; filename=\"hello2\"",
"expo-signature" to CertificateFixtures.testExpoUpdatesManifestBodyValidChainLeafSignature
)
.toHeaders(),
CertificateFixtures.testExpoUpdatesManifestBody.toRequestBody("application/json; charset=utf-8".toMediaTypeOrNull())
)
.addFormDataPart("extensions", "hello3", extensions.toRequestBody("application/json; charset=utf-8".toMediaTypeOrNull()))
.addFormDataPart("certificate_chain", "toHeaders", (leafCert + intermediateCert).toRequestBody("application/x-pem-file; charset=utf-8".toMediaTypeOrNull()))
.addPart(
mapOf(
"Content-Disposition" to "form-data; name=\"directive\"; filename=\"hello3\"",
"expo-signature" to CertificateFixtures.testDirectiveNoUpdateAvailableValidChainLeafSignature
)
.toHeaders(),
directive.toRequestBody("application/json; charset=utf-8".toMediaTypeOrNull())
)
.build()
.asResponse(headersMap.toHeaders())
val configuration = UpdatesConfiguration(
null,
mapOf(
UpdatesConfiguration.UPDATES_CONFIGURATION_UPDATE_URL_KEY to Uri.parse("path_to_url"),
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_CERTIFICATE to rootCert,
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_METADATA to mapOf(
CODE_SIGNING_METADATA_KEY_ID_KEY to "ca-root"
),
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_INCLUDE_MANIFEST_RESPONSE_CERTIFICATE_CHAIN to true
)
)
var errorOccurred = false
var resultUpdateResponse: UpdateResponse? = null
FileDownloader(context, configuration).parseRemoteUpdateResponse(
response,
object : FileDownloader.RemoteUpdateDownloadCallback {
override fun onFailure(message: String, e: Exception) {
errorOccurred = true
}
override fun onSuccess(updateResponse: UpdateResponse) {
resultUpdateResponse = updateResponse
}
}
)
Assert.assertFalse(errorOccurred)
Assert.assertNotNull(resultUpdateResponse!!.manifestUpdateResponsePart?.update)
Assert.assertTrue(resultUpdateResponse!!.manifestUpdateResponsePart?.update!!.manifest.isVerified())
Assert.assertNotNull(resultUpdateResponse!!.directiveUpdateResponsePart)
Assert.assertTrue(resultUpdateResponse!!.directiveUpdateResponsePart!!.updateDirective is UpdateDirective.NoUpdateAvailableUpdateDirective)
}
@Test
fun testManifestParsing_MultipartBodySignedCertificateParticularExperience_IncorrectExperienceInManifest() {
val context = InstrumentationRegistry.getInstrumentation().targetContext
val boundary = "blah"
val contentType = "multipart/mixed; boundary=$boundary"
val headersMap = mapOf(
"expo-protocol-version" to "0",
"expo-sfv-version" to "0",
"content-type" to contentType
)
val extensions = "{}"
val leafCert = getTestCertificate(TestCertificateType.CHAIN_LEAF)
val intermediateCert = getTestCertificate(TestCertificateType.CHAIN_INTERMEDIATE)
val rootCert = getTestCertificate(TestCertificateType.CHAIN_ROOT)
val response = MultipartBody.Builder(boundary)
.setType(MultipartBody.MIXED)
.addFormDataPart("extraneous", "hello1", "hello".toRequestBody("text/plain; charset=utf-8".toMediaTypeOrNull()))
.addPart(
mapOf(
"Content-Disposition" to "form-data; name=\"manifest\"; filename=\"hello2\"",
"expo-signature" to CertificateFixtures.testExpoUpdatesManifestBodyValidChainLeafSignatureIncorrectProjectId
)
.toHeaders(),
CertificateFixtures.testExpoUpdatesManifestBodyIncorrectProjectId.toRequestBody("application/json; charset=utf-8".toMediaTypeOrNull())
)
.addFormDataPart("extensions", "hello3", extensions.toRequestBody("application/json; charset=utf-8".toMediaTypeOrNull()))
.addFormDataPart("certificate_chain", "toHeaders", (leafCert + intermediateCert).toRequestBody("application/x-pem-file; charset=utf-8".toMediaTypeOrNull()))
.build()
.asResponse(headersMap.toHeaders())
val configuration = UpdatesConfiguration(
null,
mapOf(
UpdatesConfiguration.UPDATES_CONFIGURATION_UPDATE_URL_KEY to Uri.parse("path_to_url"),
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_CERTIFICATE to rootCert,
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_METADATA to mapOf(
CODE_SIGNING_METADATA_KEY_ID_KEY to "ca-root"
),
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_INCLUDE_MANIFEST_RESPONSE_CERTIFICATE_CHAIN to true
)
)
var errorOccurred: Exception? = null
var resultUpdate: Update? = null
FileDownloader(context, configuration).parseRemoteUpdateResponse(
response,
object : FileDownloader.RemoteUpdateDownloadCallback {
override fun onFailure(message: String, e: Exception) {
errorOccurred = e
}
override fun onSuccess(updateResponse: UpdateResponse) {
resultUpdate = updateResponse.manifestUpdateResponsePart?.update
}
}
)
Assert.assertEquals("Invalid certificate for manifest project ID or scope key", errorOccurred!!.message)
Assert.assertNull(resultUpdate)
}
@Test
fun testManifestParsing_MultipartBodySignedCertificateParticularExperience_IncorrectExperienceInDirective() {
val context = InstrumentationRegistry.getInstrumentation().targetContext
val boundary = "blah"
val headersMap = mapOf(
"expo-protocol-version" to "0",
"expo-sfv-version" to "0"
)
val directive = CertificateFixtures.testDirectiveNoUpdateAvailableIncorrectProjectId
val leafCert = getTestCertificate(TestCertificateType.CHAIN_LEAF)
val intermediateCert = getTestCertificate(TestCertificateType.CHAIN_INTERMEDIATE)
val rootCert = getTestCertificate(TestCertificateType.CHAIN_ROOT)
val response = MultipartBody.Builder(boundary)
.setType(MultipartBody.MIXED)
.addFormDataPart("extraneous", "hello1", "hello".toRequestBody("text/plain; charset=utf-8".toMediaTypeOrNull()))
.addFormDataPart("certificate_chain", "toHeaders", (leafCert + intermediateCert).toRequestBody("application/x-pem-file; charset=utf-8".toMediaTypeOrNull()))
.addPart(
mapOf(
"Content-Disposition" to "form-data; name=\"directive\"; filename=\"hello3\"",
"expo-signature" to CertificateFixtures.testDirectiveNoUpdateAvailableValidChainLeafSignatureIncorrectProjectId
)
.toHeaders(),
directive.toRequestBody("application/json; charset=utf-8".toMediaTypeOrNull())
)
.build()
.asResponse(headersMap.toHeaders())
val configuration = UpdatesConfiguration(
null,
mapOf(
UpdatesConfiguration.UPDATES_CONFIGURATION_UPDATE_URL_KEY to Uri.parse("path_to_url"),
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_CERTIFICATE to rootCert,
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_METADATA to mapOf(
CODE_SIGNING_METADATA_KEY_ID_KEY to "ca-root"
),
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_INCLUDE_MANIFEST_RESPONSE_CERTIFICATE_CHAIN to true
)
)
var errorOccurred: Exception? = null
var resultUpdateResponse: UpdateResponse? = null
FileDownloader(context, configuration).parseRemoteUpdateResponse(
response,
object : FileDownloader.RemoteUpdateDownloadCallback {
override fun onFailure(message: String, e: Exception) {
errorOccurred = e
}
override fun onSuccess(updateResponse: UpdateResponse) {
resultUpdateResponse = updateResponse
}
}
)
Assert.assertEquals("Invalid certificate for directive project ID or scope key", errorOccurred!!.message)
Assert.assertNull(resultUpdateResponse)
}
@Test
fun testManifestParsing_JSONBodySigned_UnsignedRequest_ManifestSignatureOptional() {
val headersMap = mapOf(
"expo-protocol-version" to "0",
"expo-sfv-version" to "0"
)
val context = InstrumentationRegistry.getInstrumentation().targetContext
val response = CertificateFixtures.testExpoUpdatesManifestBody.asJSONResponse(headersMap.toHeaders())
val configuration = UpdatesConfiguration(
null,
mapOf(
UpdatesConfiguration.UPDATES_CONFIGURATION_UPDATE_URL_KEY to Uri.parse("path_to_url"),
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_CERTIFICATE to getTestCertificate(TestCertificateType.VALID),
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_METADATA to mapOf<String, String>(),
UpdatesConfiguration.UPDATES_CONFIGURATION_CODE_SIGNING_ALLOW_UNSIGNED_MANIFESTS to true
)
)
var errorOccurred = false
var resultUpdate: Update? = null
FileDownloader(context, configuration).parseRemoteUpdateResponse(
response,
object : FileDownloader.RemoteUpdateDownloadCallback {
override fun onFailure(message: String, e: Exception) {
errorOccurred = true
}
override fun onSuccess(updateResponse: UpdateResponse) {
resultUpdate = updateResponse.manifestUpdateResponsePart?.update
}
}
)
Assert.assertFalse(errorOccurred)
Assert.assertNotNull(resultUpdate)
}
}
```
|
```cpp
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef V8_COMPILER_BYTECODE_LIVENESS_MAP_H_
#define V8_COMPILER_BYTECODE_LIVENESS_MAP_H_
#include "src/base/hashmap.h"
#include "src/utils/bit-vector.h"
#include "src/zone/zone.h"
namespace v8 {
namespace internal {
class Zone;
namespace compiler {
class BytecodeLivenessState : public ZoneObject {
public:
BytecodeLivenessState(int register_count, Zone* zone)
: bit_vector_(register_count + 1, zone) {}
const BitVector& bit_vector() const { return bit_vector_; }
BitVector& bit_vector() { return bit_vector_; }
bool RegisterIsLive(int index) const {
DCHECK_GE(index, 0);
DCHECK_LT(index, bit_vector_.length() - 1);
return bit_vector_.Contains(index);
}
bool AccumulatorIsLive() const {
return bit_vector_.Contains(bit_vector_.length() - 1);
}
bool Equals(const BytecodeLivenessState& other) const {
return bit_vector_.Equals(other.bit_vector_);
}
void MarkRegisterLive(int index) {
DCHECK_GE(index, 0);
DCHECK_LT(index, bit_vector_.length() - 1);
bit_vector_.Add(index);
}
void MarkRegisterDead(int index) {
DCHECK_GE(index, 0);
DCHECK_LT(index, bit_vector_.length() - 1);
bit_vector_.Remove(index);
}
void MarkAccumulatorLive() { bit_vector_.Add(bit_vector_.length() - 1); }
void MarkAccumulatorDead() { bit_vector_.Remove(bit_vector_.length() - 1); }
void MarkAllLive() { bit_vector_.AddAll(); }
void Union(const BytecodeLivenessState& other) {
bit_vector_.Union(other.bit_vector_);
}
bool UnionIsChanged(const BytecodeLivenessState& other) {
return bit_vector_.UnionIsChanged(other.bit_vector_);
}
void CopyFrom(const BytecodeLivenessState& other) {
bit_vector_.CopyFrom(other.bit_vector_);
}
private:
BitVector bit_vector_;
DISALLOW_COPY_AND_ASSIGN(BytecodeLivenessState);
};
struct BytecodeLiveness {
BytecodeLivenessState* in;
BytecodeLivenessState* out;
BytecodeLiveness(int register_count, Zone* zone);
};
class V8_EXPORT_PRIVATE BytecodeLivenessMap {
public:
BytecodeLivenessMap(int size, Zone* zone);
BytecodeLiveness& InitializeLiveness(int offset, int register_count,
Zone* zone);
BytecodeLiveness& GetLiveness(int offset);
const BytecodeLiveness& GetLiveness(int offset) const;
BytecodeLivenessState* GetInLiveness(int offset) {
return GetLiveness(offset).in;
}
const BytecodeLivenessState* GetInLiveness(int offset) const {
return GetLiveness(offset).in;
}
BytecodeLivenessState* GetOutLiveness(int offset) {
return GetLiveness(offset).out;
}
const BytecodeLivenessState* GetOutLiveness(int offset) const {
return GetLiveness(offset).out;
}
private:
base::TemplateHashMapImpl<int, BytecodeLiveness,
base::KeyEqualityMatcher<int>, ZoneAllocationPolicy>
liveness_map_;
};
} // namespace compiler
} // namespace internal
} // namespace v8
#endif // V8_COMPILER_BYTECODE_LIVENESS_MAP_H_
```
|
Garwacz is a village in the administrative district of Gmina Bodzanów, within Płock County, Masovian Voivodeship, in east-central Poland. It lies approximately north-west of Bodzanów, east of Płock, and north-west of Warsaw.
References
Garwacz
|
All Saints' Church, Norwich is a Grade I listed redundant parish church in the Church of England in Norwich.
History
The church was largely built in the 15th century, when the nave and north aisle were added, but the chancel dates back to the 13th century. The un-buttressed tower was also built in the 15th century but had extensive repair work done in the 19th century, with the top stage of the tower being added in 1913.
There is an anchorhold attached to the church that served religious hermits who chose to live their lives separate from secular society. The city records from 1287 to 1288 show that servants of the anchoress were charged with ‘stopping up the Cockey’ (blocking the common drain) ‘so that no one can pass by there’. It has been suggested that this was done in an attempt to conceal that either the anchoress or her servants were engaged in trade, something that was forbidden to any anchoress.
It used to house a spectacular ornate font that featured carvings of saints arranged around the bowl and base. This was moved to St Julian's Church, also in Norwich, following All Saints' being made redundant in the parochial reorganisation in 1973.
Post redundancy
On being made redundant in 1973, Norwich Historic Churches Trust took it over and immediately spent £8000 on making it watertight. From 1979 it housed the All Saints Centre, a community centre set up by Jo Cook. It was used as a place to serve the community and to provide Christian hospitality for the less advantaged. The church was improved during this tenancy to include a commercial standard kitchen and a first-floor room in the aisle. This was originally designed to house the Diocesan Mothers' Union, who moved out in 2003.
The church suffered an arson attack in 1992, after which it was extensively cleaned and redecorated.
The All Saints Centre closed in 2015, when the building reopened as an antiques centre and tea room, the All Saints Antiques Centre; these tenants are still occupying the building.
A gallery was installed at the base of the tower to provide a platform for bell ringers to practise. The Norwich Diocesan Association of Ringers use this as one of their churches on the first Tuesday of every month.
Organ
The church contained an organ dating from 1861 by Corps. A specification of the organ can be found on the National Pipe Organ Register.
References
All Saints
15th-century church buildings in England
Grade I listed churches in Norfolk
|
Ingomar is an unincorporated community in northwestern Rosebud County, Montana, United States, along the route of U.S. Route 12. The town was established in 1908, as a station stop on the Chicago, Milwaukee, St. Paul and Pacific Railroad, then under construction in Montana. Although the land around Ingomar attracted numerous homesteaders during the decade following the railroad's completion, the region proved to be far too arid and inhospitable for intensive agricultural use, and by the 1920s the town was in decline. The railroad through the area was abandoned in 1980, and only a handful of people remain in Ingomar today.
Three of the town's surviving buildings (the Ingomar Public School, the J. A. Bookman General Store, and the Wiley, Clark & Greening Bank) have been listed on the National Register of Historic Places.
Climate
According to the Köppen Climate Classification system, Ingomar has a cold semi-arid climate, abbreviated "BSk" on climate maps.
Notes
Unincorporated communities in Rosebud County, Montana
Unincorporated communities in Montana
Ghost towns in Montana
1908 establishments in Montana
Populated places established in 1908
|
```xml
<?xml version="1.0" encoding="UTF-8"?>
<definitions id="definitions"
xmlns="path_to_url"
xmlns:activiti="path_to_url"
targetNamespace="org.flowable.engine.test.api.runtime">
<process id="nestedSubProcessQueryTest">
<startEvent id="theStart" />
<sequenceFlow id="flow1" sourceRef="theStart" targetRef="fork" />
<parallelGateway id="fork" />
<sequenceFlow sourceRef="fork" targetRef="callSubProcess1" />
<sequenceFlow sourceRef="fork" targetRef="callSubProcess2" />
<callActivity id="callSubProcess1" calledElement="nestedSimpleSubProcess" />
<callActivity id="callSubProcess2" calledElement="nestedSimpleSubProcess" />
<sequenceFlow sourceRef="callSubProcess1" targetRef="join" />
<sequenceFlow sourceRef="callSubProcess2" targetRef="join" />
<parallelGateway id="join" />
<sequenceFlow id="flow3" sourceRef="join" targetRef="theEnd" />
<endEvent id="theEnd" />
</process>
</definitions>
```
|
```yaml
commonfields:
id: 3b260f00-772c-4d4e-84ea-e47226637497
version: -1
name: VerifyHumanReadableEquals
script: >
var entryRes = executeCommand('getEntry', {'id': args.humanReadableEntryId});
if (entryRes && Array.isArray(entryRes)) {
if (entryRes[0].Type !== entryTypes.error) {
var outputString = entryRes[0].Contents;
}
} else {
throw 'Unexpected entry result: {0}'.format(entryRes);
}
  if (outputString != args.string) {
throw 'Output string is not equal to the expected string.\n\nOutput string:\n{0}\nstring:\n{1}'.format(outputString, args.string);
}
type: javascript
tags: []
comment: Verify that given entry is equal to given string
enabled: true
args:
- name: humanReadableEntryId
required: true
default: true
description: Entry ID of the last task.
- name: string
required: true
description: The string to compare to
scripttarget: 0
runonce: false
fromversion: 5.0.0
```
|
```csharp
#nullable disable
using System.Collections.Generic;
namespace ClosedXML.Excel
{
internal class XLCharts: IXLCharts
{
private List<IXLChart> charts = new List<IXLChart>();
public IEnumerator<IXLChart> GetEnumerator()
{
return charts.GetEnumerator();
}
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
public void Add(IXLChart chart)
{
charts.Add(chart);
}
}
}
```
|
```text
Use `box-sizing` to define an element's `width` and `height` properties
The `nth-child` Property
Styling elements using `::before` and `::after`
How to flip an image
Multiple borders with pseudo elements
```
|
The second USS Sovereign (SP-170) was an armed yacht that served in the United States Navy as a patrol vessel from 1918 to 1919.
Sovereign was built as a civilian yacht of the same name in 1911 by Charles L. Seabury and Company at Morris Heights in the Bronx, New York, for private use as a pleasure and commuting vessel. Prior to the entry of the United States into World War I in April 1917, Sovereign was registered with the U.S. Navy for potential service in time of war, and the Navy acquired her from the estate of M. C. D. Borden on 14 June 1918 for World War I service as a patrol vessel. She was commissioned as USS Sovereign (SP-170).
Sovereign served the 3rd Naval District as a patrol craft in the New York City area for ten months.
On 23 April 1919, Sovereign was stricken from the Navy List, and soon thereafter she was returned to her owner's estate.
Notes
References
Department of the Navy: Naval Historical Center: Online Library of Selected Images: U.S. Navy Ships: Sovereign (American Steam Yacht, 1911). Served in 1918-1919 as USS Sovereign (SP-170)
NavSource Online: Section Patrol Craft Photo Archive USS Sovereign (SP-170)
Patrol vessels of the United States Navy
World War I patrol vessels of the United States
Steam yachts
Ships built in Morris Heights, Bronx
1911 ships
|
```text
Finding a tag
The three states in git
Make your log output pretty
Limiting log output by time
Ignore files in git
```
|
Dr. Andor László (24 December 1914 – 8 August 1993) was a Hungarian economist who served as Governor of the Hungarian National Bank from 1 November 1961 to 10 July 1975.
See also
National Bank of Hungary
References
1914 births
1993 deaths
Governors of the Hungarian National Bank
Writers from Budapest
20th-century Hungarian economists
|
Quincy City Hall is the seat of government for the City of Quincy, Massachusetts. The historic town hall building at 1305 Hancock Street in Quincy Center was built in 1844. It is a somewhat monumental example of Greek Revival architecture, featuring a temple front with two-story Ionic pilasters and a triangular pediment. Elements of the main facade were significantly altered when the town was converted to a city in 1888. It has been the seat of local government since its construction.
The building was listed on the National Register of Historic Places in 1980 (as "Quincy Town Hall").
See also
National Register of Historic Places listings in Quincy, Massachusetts
References
External links
Official City of Quincy website
City and town halls on the National Register of Historic Places in Massachusetts
Government buildings completed in 1844
Greek Revival architecture in Massachusetts
Buildings and structures in Quincy, Massachusetts
National Register of Historic Places in Quincy, Massachusetts
|
```typescript
import { useState } from 'react';
// eslint-disable-next-line @typescript-eslint/explicit-function-return-type
export default function useObject<T>(InitialValue?: T) {
const [value, setValue] = useState<T>(InitialValue ?? {} as T);
const updateValue: (Updates: Partial<T>) => void = (Updates: Partial<T>) => setValue((prev) => ({ ...prev, ...Updates }))
return {
value, updateValue, overwriteData: setValue
};
}
```
|
Temnin el-Foka is a village located approximately 28 kilometers southwest of Baalbek in the Baalbek District, in the Beqaa Valley of Lebanon, at an altitude of 1,100 meters above sea level. The village is famous for its Roman nymphaeum, which is close to the spring of Ain el-Jobb.
History
Temnin has been settled since Roman times, but its original name is unknown. The town is divided into two municipalities, the other being Temnine Et Tahta.
Ottoman tax registers from 1533–1548 indicate the village had 64 households, 11 bachelors and one imam, all Muslims.
In 1838, Eli Smith noted Temnin el-Foka's (or "Temnin the upper") population as being predominantly Metawileh.
The Roman nymphaeum
The nymphaeum is an arched watercourse built of large stones that has been constructed deep into a hill. It leads to a cistern underground. A gully has formed at the outflow, where a boundary pillar is carved with the image of a goddess. It resembles a similar cippus at Kafr Zabad.
The inner walls consist of four layers of massive, roughly hewn blocks up to the vault. The top layer is completed by an unfinished cornice. At the rear end there is a slightly raised platform serving as an adyton. The small semicircular niche on the back wall must once have held an image of the deity, probably a local deity of the flowing water, which can still be seen, heavily weathered, on a stone slab.
A porch in antis was attached to the vaulted room; it ended with an architrave with three fasciae (horizontal bands) and an upper bead. A staircase leads up in the middle between two columns with Corinthian capitals that bear the architrave. The porch is heavily restored; the rectangular portal was reconstructed in concrete.
Only the vaulted arch and two rows of stones on the side walls were preserved before the restoration. The stone blocks of the side walls were piled up again, the pillars and capitals are largely new. Grooves can be seen in the longitudinal direction on the top of the vault. They may have served as a support for a wooden roof.
See also
Roman gardens
Temples of the Beqaa Valley
References
Bibliography
External links
Temnine El Faouqa, Localiban
Archaeological sites in Lebanon
Roman sites in Lebanon
Tourist attractions in Lebanon
Populated places in Baalbek District
Shia Muslim communities in Lebanon
|
The Michael government was formed by Alun Michael following the 1999 National Assembly for Wales election and was a Labour minority government.
Cabinet
References
Welsh governments
Ministries of Elizabeth II
|
```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="path_to_url"
xmlns:app="path_to_url"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:animateLayoutChanges="true"
android:orientation="vertical">
<razerdp.demo.widget.TitleBarView
android:layout_width="match_parent"
android:layout_height="wrap_content"
app:title_text="Test" />
<androidx.appcompat.widget.LinearLayoutCompat
android:layout_width="match_parent"
android:layout_height="match_parent"
android:gravity="center"
android:orientation="vertical">
<razerdp.demo.widget.DPTextView
android:id="@+id/tv_show"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_horizontal"
android:layout_marginTop="@dimen/default_padding"
android:paddingLeft="@dimen/default_padding"
android:paddingTop="8dp"
android:paddingRight="@dimen/default_padding"
android:paddingBottom="8dp"
android:text=""
android:textColor="@color/white"
android:textSize="@dimen/text_normal"
app:backgroundColor="@color/color_blue"
app:corner_radius="4dp" />
<razerdp.demo.widget.DPTextView
android:id="@+id/tv_setting"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_horizontal"
android:layout_marginTop="@dimen/default_padding"
android:paddingLeft="@dimen/default_padding"
android:paddingTop="8dp"
android:paddingRight="@dimen/default_padding"
android:paddingBottom="8dp"
android:text=""
android:textColor="@color/white"
android:textSize="@dimen/text_normal"
app:backgroundColor="@color/common_red"
app:corner_radius="4dp" />
</androidx.appcompat.widget.LinearLayoutCompat>
</LinearLayout>
```
|
F1 Circus is a Formula One-based racing game developed by Nihon Bussan and published by Nichibutsu for the PC Engine.
Reception
On release, Famicom Tsūshin scored the PC Engine version of the game a 31 out of 40.
References
External links
1990 video games
Formula One video games
Japan-exclusive video games
Nihon Bussan games
Nintendo Entertainment System games
TurboGrafx-16 games
Video games developed in Japan
Make Software games
ja:F1サーカス
|
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="csbig5"> <!-- test breaks if the server overrides this -->
<title>csbig5 encoding (form)</title>
<meta name="timeout" content="long">
<meta name="variant" content="?1-1000">
<meta name="variant" content="?1001-2000">
<meta name="variant" content="?2001-3000">
<meta name="variant" content="?3001-4000">
<meta name="variant" content="?4001-5000">
<meta name="variant" content="?5001-6000">
<meta name="variant" content="?6001-7000">
<meta name="variant" content="?7001-8000">
<meta name="variant" content="?8001-9000">
<meta name="variant" content="?9001-10000">
<meta name="variant" content="?10001-11000">
<meta name="variant" content="?11001-12000">
<meta name="variant" content="?12001-13000">
<meta name="variant" content="?13001-14000">
<meta name="variant" content="?14001-last">
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script src="/common/subset-tests.js"></script>
<script src="big5_index.js"></script>
<script src="big5-encoder.js"></script>
<link rel="author" title="Richard Ishida" href="mailto:ishida@w3.org">
<link rel="help" href="path_to_url#names-and-labels">
<meta name="assert" content="The browser produces the same encoding behavior for a document labeled 'csbig5' as for a document labeled 'big5' .">
<style>
iframe { display:none }
form { display:none }
</style>
</head>
<body>
<div id="log"></div>
<script src="../../resources/ranges.js"></script>
<script>
var errors = false;
var encoder = big5Encoder;
var ranges = rangesAll;
var separator = ",";
function expect(result, codepoint) {
return "%" + result.replace(/ /g, "%");
}
</script>
<script src="../../resources/encode-form-common.js"></script>
</body>
</html>
```
|
Qulbəndə (also Gülbəndə and Gyul’benda) is a village and municipality in the Agdash Rayon of Azerbaijan. It has a population of 1,315. The municipality consists of the villages of Qulbəndə, Orta Qəsil, Bəylik, and Aşağı Qəsil.
References
Populated places in Agdash District
|
The Khairatabad metro station is located on the Red Line of the Hyderabad Metro. The station was opened to the public in 2017. It is near the Cement Corporation of India, Khairatabad railway station, ICICI Bank, the Institute of Engineers, the Dr Babasaheb Ambedkar statue, Prasads IMAX Road, the vegetable market, Raj Bhavan Road, the Administrative Staff College and the Hanuman temple.
History
It was opened on 24 September 2018.
The station
Structure
Khairatabad is an elevated metro station situated on the Red Line of the Hyderabad Metro.
Facilities
The station has staircases, elevators and escalators from street level to platform level, providing easy and comfortable access. Operation panels inside the elevators are installed at a height that can be conveniently reached by all passengers, including differently-abled and elderly citizens.
Station layout
Street level – the first level, where passengers may park their vehicles and view the local area map.
Concourse level – the ticket office and ticket vending machines (TVMs) are located here. Retail outlets and other facilities such as washrooms, ATMs and first aid are available in this area.
Platform level – this level consists of two platforms, from which trains take passengers.
Entry/exit
See also
References
External links
Hyderabad Metro Rail Ltd
UrbanRail.Net – descriptions of all metro systems in the world, each with a schematic map showing all stations.
Hyderabad Metro stations
2017 establishments in Telangana
Railway stations in India opened in 2017
|
```python
from office365.runtime.client_result import ClientResult
from office365.runtime.queries.service_operation import ServiceOperationQuery
from office365.sharepoint.entity_collection import EntityCollection
from office365.sharepoint.publishing.pages.reposts.repost import RepostPage
class RepostPageCollection(EntityCollection):
def __init__(self, context, resource_path=None):
super(RepostPageCollection, self).__init__(context, RepostPage, resource_path)
def is_content_type_available(self):
return_type = ClientResult(self.context, bool())
qry = ServiceOperationQuery(
self, "IsContentTypeAvailable", None, None, None, return_type
)
self.context.add_query(qry)
return return_type
```
|
```kotlin
package net.grandcentrix.thirtyinch.lint
import com.android.tools.lint.detector.api.Category
import com.android.tools.lint.detector.api.Detector
import com.android.tools.lint.detector.api.Implementation
import com.android.tools.lint.detector.api.Issue
import com.android.tools.lint.detector.api.Scope
import com.android.tools.lint.detector.api.Severity
private val CATEGORY_TI = Category.create("ThirtyInch", 90)
sealed class TiIssue(
val id: String,
val briefDescription: String,
val category: Category,
val priority: Int,
val severity: Severity
) {
object MissingView : TiIssue(
id = "MissingTiViewImplementation",
briefDescription = "TiView Implementation missing in class",
category = CATEGORY_TI,
priority = 8,
severity = Severity.ERROR
)
fun asLintIssue(detectorCls: Class<out Detector>, description: String = briefDescription): Issue =
Issue.create(
id,
briefDescription,
description,
category,
priority,
severity,
Implementation(
detectorCls,
Scope.JAVA_FILE_SCOPE
)
)
}
```
|
Airplay40 is a syndicated Top 40 chart radio show broadcast around the globe on English-speaking radio stations. It is based on the UK Singles Chart format and is derived from airplay on subscriber English-language radio stations across Europe and the Middle East. The programme is aimed at English expatriates and tourists visiting popular holiday destinations across Europe and the Middle East.
The programme is broadcast to over ten regions including Spain, Greece, Cyprus, Italy, Gibraltar, Oman, Dubai, Malta, Aruba and New Zealand, and appears on selected internet radio stations.
The programme is hosted by Spencer James, and is produced by Fourway Media.
Airplay 40 is broadcast every Sunday, with the website updated as the programme airs. The website also carries the showbiz news headlines, as well as forums for discussing the songs in the chart and life in the expat community.
There is also a weekly showbiz news bulletin covering the main news in the world of entertainment, and a look back at previous hits over the years in the "Rewind" section of the programme.
History
The programme was originally called The eXpat Chart and was specifically targeted towards an English-speaking audience on radio stations broadcasting outside the UK. The programme was launched in June 2008 and grew rapidly; however, an increase in UK-based stations and a shift in marketing plans meant a re-brand was needed.
In 2010, eXpat Party was launched, hosted at first by presenter Danny Looker, and then in 2011 by comedy duo Mabbs and Justice. This was broadcast until the spring of 2012, when eXpat Chart Rewind was launched.
In January 2013, The eXpat Chart was rebranded as Airplay40, with a new website.
Broadcast stations
The programme is broadcast on over 40 radio stations across the globe. All times are local to the place of broadcast.
The stations include:
Radio Napa in Cyprus – 106.3fm and online – Sundays from 4pm.
Ace FM in Alhaurin & Coin in mainland Spain – 106.8fm and online – Sundays from 5pm.
Bay Radio in the Costa Blanca – 88.4, 88.8, 89.2, 89.4 & 98.5fm and online – Sundays from 6pm.
Coast FM in Tenerife – 89.2 & 100.8fm and online – Sundays from 4pm.
UK Away FM in Lanzarote – 99.4 & 99.9fm and online – Saturdays from 5pm.
Central FM in Spain – 105.5 & 92.6fm and online – Sundays from 4pm.
Radio Effedue in Italy – On fm and online – Sundays from 4pm.
HiFM in Oman – 95.9fm and online – Thursdays & Fridays from 4pm.
Sea FM Radio in Finland – 88.8 MHz and online – Sundays from 6pm.
88.6 Island FM in Zante – 88.6fm and online – Sundays from 8pm.
Energy FM in Malta – 96.4fm and online – Thursdays from 6pm.
www.myexpatradio.com in Dubai – Online – Sundays from 7pm.
Paul FM Radio – Global – Online – Every Sunday from 8:00am.
Miskin Radio – North West Kent and global – Online – Every Friday from 4pm.
A full listing of stations that broadcast the programme is available on the programme website.
Spin-off programmes
In December 2008 and December 2009, Fourway Radio, the producers of The eXpat Chart, presented a Christmas special called The eXmas Chart. It was hosted by Spencer James and co-hosted by Martin Jefferies, the showbiz news editor for the programme, and Adam Williams, one of the journalists in the showbiz newsroom. It has been announced that the programme will return for another broadcast in 2010, with a slight change to the format of the programme.
Following the death of Michael Jackson in June 2009, there was a special commemorating Jackson's musical history, playing the top 40 songs of his career.
References
External links
Official website
Music chart shows
|
```python
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# This file is (possibly, depending on python version) imported by
# gyp_v8 when GYP_PARALLEL=1 and it creates sub-processes through the
# multiprocessing library.
# Importing in Python 2.6 (fixed in 2.7) on Windows doesn't search for imports
# that don't end in .py (and aren't directories with an __init__.py). This
# wrapper makes "import gyp_v8" work with those old versions and makes it
# possible to execute gyp_v8.py directly on Windows where the extension is
# useful.
import os
path = os.path.abspath(os.path.split(__file__)[0])
execfile(os.path.join(path, 'gyp_v8'))
```
|
The Bremische Bürgerschaft is the legislative branch of the Free Hanseatic City of Bremen in Germany. The state parliament elects the members of the Senate (the executive), exercises oversight of the executive, and passes legislation. It currently consists of 87 members from six parties. The current majority is a coalition of the Social Democratic Party, Alliance 90/The Greens and The Left, supporting Mayor and Senate President Andreas Bovenschulte. The 72 delegates of the city of Bremen also form the Stadtbürgerschaft (the local parliament of the city), while Bremerhaven has its own local parliament.
Current composition
After the elections of 14 May 2023, the composition of the Bürgerschaft is as follows:
After the elections of 26 May 2019, the composition of the Bürgerschaft is as follows:
Composition (June 2018)
After the elections of 10 May 2015, the composition of the Bürgerschaft is as follows:
Elections are conducted using proportional representation in both voting districts, Bremen (68 seats) and Bremerhaven (15 seats), with a minimum vote share of 5% per voting district required to receive any seats. Because the 5% rule is applied separately in each district, the German People's Union was able to join the Bürgerschaft by winning 5.7% of the votes in Bremerhaven while winning only 2.75% in the state of Bremen as a whole.
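The per-district application of the 5% hurdle can be sketched as a short check. Only the DVU's 5.7% Bremerhaven and 2.75% statewide shares come from the text above; the helper function and the DVU's Bremen-district figure are illustrative assumptions.

```python
# Sketch of Bremen's per-district 5% hurdle: a party takes part in seat
# allocation for any district where it reaches 5%, even when its statewide
# share falls short. The 5.7% (Bremerhaven) and 2.75% (statewide) figures
# are from the 2003 DVU example; the 2.1% Bremen-district share is a
# hypothetical placeholder, since the text only gives the statewide share.

THRESHOLD = 0.05

def districts_clearing_hurdle(shares_by_district):
    """Return the districts in which a party clears the 5% hurdle."""
    return [district for district, share in shares_by_district.items()
            if share >= THRESHOLD]

dvu_2003 = {"Bremen": 0.021, "Bremerhaven": 0.057}  # Bremen share hypothetical
statewide_share = 0.0275  # below 5%, but not applied as a statewide bar

print(districts_clearing_hurdle(dvu_2003))  # -> ['Bremerhaven']
```

Applying the threshold statewide instead would have excluded the DVU entirely, which is why the separate per-district rule mattered in 2003.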
The 68 members from Bremen also form the Stadtbürgerschaft (city council for the City of Bremen only), which is elected by an extended electorate: the minimum voting age is 16 instead of 18, and all citizens of the European Union are allowed to vote. It is the only German state parliament with a four-year, rather than five-year, term. These additional votes created a Green Stadtbürgerschaft-only member and an SPD non-Stadtbürgerschaft member from Bremen (city) after the 2003 elections.
In 1979, the Bremer Grüne Liste entered the Bürgerschaft, becoming the first Green party ever to enter a German Landtag.
Presidents of the Bürgerschaft
So far, the presidents of the Landtag of Bremen have been:
1946–1966 August Hagedorn, Social Democratic Party (SPD)
1966–1971 Hermann Engel, SPD
1971–1995 Dieter Klink, SPD
1995–1999 Reinhard Metz, Christian Democratic Union (CDU)
1999–2019 Christian Weber, SPD
2019 Antje Grotheer, SPD
2019–2023 Frank Imhoff, CDU
since 2023 Antje Grotheer, SPD
See also
1999 Bremen state election
2003 Bremen state election
2007 Bremen state election
The House of the Parliament
The House of the Parliament officially opened in September 1966. Bremen's parliament building is called the ‘Haus der Bürgerschaft’. The building has a frame construction of iron-reinforced concrete, in front of which a sheathing of glass has been hung. The height of the building is approximately that of the eaves of both the Town Hall and the house ‘Schütting’. The folded roof was a compromise solution conceived as a means of linking the building with the older buildings surrounding the historic market square. The facade of the parliament building reflects the old buildings in the mirror-like surface of the glass sheathing. Reliefs made of aluminum highlight the window sills.
External links
Politics of Bremen (state)
Bremen
History of Bremen (state)
|
```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq.Expressions;
using System.Threading.Tasks;
namespace Chloe
{
internal class DbContextProviderDecorator : IDbContextProvider
{
bool _disposed = false;
public DbContextProviderDecorator(IDbContextProvider dbContextProvider)
{
this.HoldDbContextProvider = dbContextProvider;
}
public virtual IDbContextProvider HoldDbContextProvider { get; private set; }
public virtual IDbSessionProvider Session => this.HoldDbContextProvider.Session;
public void Dispose()
{
if (this._disposed)
return;
this.Dispose(true);
this._disposed = true;
}
protected virtual void Dispose(bool disposing)
{
this.HoldDbContextProvider.Dispose();
}
public virtual void TrackEntity(object entity)
{
this.HoldDbContextProvider.TrackEntity(entity);
}
public void HasQueryFilter<TEntity>(Expression<Func<TEntity, bool>> filter)
{
this.HoldDbContextProvider.HasQueryFilter<TEntity>(filter);
}
public void HasQueryFilter(Type entityType, LambdaExpression filter)
{
this.HoldDbContextProvider.HasQueryFilter(entityType, filter);
}
public virtual IQuery<TEntity> Query<TEntity>(string table, LockType @lock)
{
return this.HoldDbContextProvider.Query<TEntity>(table, @lock);
}
public virtual List<T> SqlQuery<T>(string sql, CommandType cmdType, params DbParam[] parameters)
{
return this.HoldDbContextProvider.SqlQuery<T>(sql, cmdType, parameters);
}
public virtual Task<List<T>> SqlQueryAsync<T>(string sql, CommandType cmdType, params DbParam[] parameters)
{
return this.HoldDbContextProvider.SqlQueryAsync<T>(sql, cmdType, parameters);
}
public virtual List<T> SqlQuery<T>(string sql, CommandType cmdType, object parameter)
{
return this.HoldDbContextProvider.SqlQuery<T>(sql, cmdType, parameter);
}
public virtual Task<List<T>> SqlQueryAsync<T>(string sql, CommandType cmdType, object parameter)
{
return this.HoldDbContextProvider.SqlQueryAsync<T>(sql, cmdType, parameter);
}
public virtual TEntity Save<TEntity>(TEntity entity)
{
return this.HoldDbContextProvider.Save<TEntity>(entity);
}
public virtual Task<TEntity> SaveAsync<TEntity>(TEntity entity)
{
return this.HoldDbContextProvider.SaveAsync<TEntity>(entity);
}
public virtual TEntity Insert<TEntity>(TEntity entity, string table)
{
return this.HoldDbContextProvider.Insert<TEntity>(entity, table);
}
public virtual object Insert<TEntity>(Expression<Func<TEntity>> content, string table)
{
return this.HoldDbContextProvider.Insert<TEntity>(content, table);
}
public virtual Task<TEntity> InsertAsync<TEntity>(TEntity entity, string table)
{
return this.HoldDbContextProvider.InsertAsync<TEntity>(entity, table);
}
public virtual Task<object> InsertAsync<TEntity>(Expression<Func<TEntity>> content, string table)
{
return this.HoldDbContextProvider.InsertAsync<TEntity>(content, table);
}
public virtual void InsertRange<TEntity>(List<TEntity> entities, int? batchSize, string table)
{
this.HoldDbContextProvider.InsertRange<TEntity>(entities, batchSize, table);
}
public virtual Task InsertRangeAsync<TEntity>(List<TEntity> entities, int? batchSize, string table)
{
return this.HoldDbContextProvider.InsertRangeAsync<TEntity>(entities, batchSize, table);
}
public virtual int Update<TEntity>(TEntity entity, string table)
{
return this.HoldDbContextProvider.Update<TEntity>(entity, table);
}
public virtual int Update<TEntity>(Expression<Func<TEntity, bool>> condition, Expression<Func<TEntity, TEntity>> content, string table)
{
return this.HoldDbContextProvider.Update<TEntity>(condition, content, table);
}
public virtual Task<int> UpdateAsync<TEntity>(TEntity entity, string table)
{
return this.HoldDbContextProvider.UpdateAsync<TEntity>(entity, table);
}
public virtual Task<int> UpdateAsync<TEntity>(Expression<Func<TEntity, bool>> condition, Expression<Func<TEntity, TEntity>> content, string table)
{
return this.HoldDbContextProvider.UpdateAsync<TEntity>(condition, content, table);
}
public virtual int Delete<TEntity>(TEntity entity, string table)
{
return this.HoldDbContextProvider.Delete<TEntity>(entity, table);
}
public virtual int Delete<TEntity>(Expression<Func<TEntity, bool>> condition, string table)
{
return this.HoldDbContextProvider.Delete<TEntity>(condition, table);
}
public virtual Task<int> DeleteAsync<TEntity>(TEntity entity, string table)
{
return this.HoldDbContextProvider.DeleteAsync<TEntity>(entity, table);
}
public virtual Task<int> DeleteAsync<TEntity>(Expression<Func<TEntity, bool>> condition, string table)
{
return this.HoldDbContextProvider.DeleteAsync<TEntity>(condition, table);
}
}
}
```
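The C# class above is a textbook decorator: it holds a reference to another `IDbContextProvider` and forwards every member to it, so subclasses can override single methods while inheriting pass-through behavior for the rest. A minimal sketch of the same pattern (class and method names here are illustrative, not part of Chloe):

```python
class Provider:
    """The wrapped component."""
    def query(self, table):
        return f"rows from {table}"

class ProviderDecorator:
    """Forwards every call to the wrapped provider; subclasses override selectively."""
    def __init__(self, inner):
        self.inner = inner

    def query(self, table):
        return self.inner.query(table)

class LoggingProvider(ProviderDecorator):
    def query(self, table):
        print(f"querying {table}")   # added behavior
        return super().query(table)  # then delegate to the wrapped provider

p = LoggingProvider(Provider())
assert p.query("users") == "rows from users"
```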
|
```cmake
vcpkg_from_github(
OUT_SOURCE_PATH SOURCE_PATH
REPO SpriteOvO/sigmatch
REF v0.2.0
SHA512 your_sha256_hashyour_sha256_hash
HEAD_REF main
)
set(VCPKG_BUILD_TYPE release) # header-only
vcpkg_cmake_configure(
SOURCE_PATH "${SOURCE_PATH}"
OPTIONS
-DSIGMATCH_BUILD_TESTS=OFF
)
vcpkg_cmake_install()
vcpkg_cmake_config_fixup(CONFIG_PATH lib/cmake/sigmatch)
vcpkg_install_copyright(FILE_LIST "${SOURCE_PATH}/LICENSE")
file(REMOVE_RECURSE "${CURRENT_PACKAGES_DIR}/lib")
```
|
```javascript
/**
* @license Apache-2.0
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
*    path_to_url
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
// MODULES //
var resolve = require( 'path' ).resolve;
var bench = require( '@stdlib/bench' );
var randu = require( '@stdlib/random/base/randu' );
var isnan = require( '@stdlib/math/base/assert/is-nan' );
var tryRequire = require( '@stdlib/utils/try-require' );
var pkg = require( './../package.json' ).name;
// VARIABLES //
var tand = tryRequire( resolve( __dirname, './../lib/native.js' ) );
var opts = {
'skip': ( tand instanceof Error )
};
// MAIN //
bench( pkg+'::native', opts, function benchmark( b ) {
var x;
var y;
var i;
b.tic();
for ( i = 0; i < b.iterations; i++ ) {
x = ( randu() * 10.0 ) - 5.0;
y = tand( x );
if ( isnan( y ) ) {
b.fail( 'should not return NaN' );
}
}
b.toc();
if ( isnan( y ) ) {
b.fail( 'should not return NaN' );
}
b.pass( 'benchmark finished' );
b.end();
});
```
|
Drowningman is an American hardcore punk band from Burlington, Vermont, which was active from 1995 to 2005. Formed in the fall of 1995 by Simon Brody, Denny Donovan, Javin Leonard, Dave Barnett and Todd Tomlinson, the band was heavily influenced by a variety of bands including Deadguy, Unbroken, Shotmaker, Unwound, Sunny Day Real Estate and Promise Ring. This musical amalgamation influenced the modern metalcore and mathcore musical subgenres.
History
Formation (1995–1996)
Drowningman was formed in Burlington, Vermont, and played its first show in a basement on Hungerford Terrace on New Year's Eve of 1996. A first demo recording from early 1996 is included on the Learn to Let It Go retrospective released by ReIgnition Recordings in 2004. Hungerford Terrace is known for its connection to Harriet Tubman and the Underground Railroad during the era of slavery.
Hydra Head Records-era (1997–1998)
By 1997 the band had begun playing throughout the Northeast. Frequently sharing bills with bands on the Boston-based Hydra Head Records (Converge, Cave In, Cable and Piebald), Drowningman soon joined the roster and their debut 7-inch EP Weighted and Weighed Down was released in 1997, and was followed by the LP Busy Signal at the Suicide Hotline in 1998 and a split EP with frequent tour-mates The Dillinger Escape Plan on the same label.
Revelation Records-era (1999–2001)
By early 1999 the band was being courted by Revelation Records and was also talking to friends at Equal Vision Records. The band returned to the studio soon after the addition of Joe Villemaire, Matt Roy and Zach Martin. Hydra Head had grown concerned about the band's revolving-door lineup, and when the EP How They Light Cigarettes In Prison was presented to them, the reaction was underwhelming. Revelation Records expressed enthusiasm for the record and released it in early 2000, initially shipping more copies than any previous Revelation EP.
Simon Brody described the emerging band on the Revelation Records website in the following fashion:
The band embarked on a first full U.S. tour with The Dillinger Escape Plan to support the release. Soon after returning, they began recording Rock And Roll Killing Machine at the Salad Days Studio in Washington, D.C. The sessions ran into a great deal of technical difficulty; Simon Brody claimed in interviews that the stressful working environment caused the tempos of many of the songs to rush, and that the record lost some of the earlier efforts' melodic counterpoint. Still, it was well received, earning a 10/10 in the respected extreme-music magazine Terrorizer and finding its way into many publications' top-ten lists for 2001.
Aside from making regular appearances at Hellfest, Krazy Fest 4, Monster Fest and The New England Metal and Hardcore Festival, Drowningman began touring extensively in support of this latest record. They toured with hardcore and metal bands as varied as Earth Crisis, Glassjaw, Shadows Fall, Darkest Hour and Twelve Tribes. However, projected gigs for early 2001 were curtailed when the group lost its drummer. By May 2001 road action was resumed with regular partners Darkest Hour on the "Bro-Down 2001" tour.
Later Recordings and First Break-Up (2002)
The band then recorded an EP for Equal Vision Records which was released in 2002 entitled Drowningman Still Loves You.
Several tours followed, first with Thursday and Waterdown, later with Atreyu and Vaux. By September the group announced they were to hook up with Converge and Playing Enemy for East Coast and Midwest gigs, but backed out to prioritise songwriting.
Embroiled in contract disputes with Revelation Records, the band went into God City Studios in early 2002 and recorded a series of improvised tracks for a final Revelation release, tentatively and sarcastically entitled Best Record Ever. The instrumental tracks briefly circulated, minus a 20-minute "meditation on a single riff" (an homage to the burgeoning stoner rock trend), and according to band members were never actually intended for release.
Shortly after a particularly rowdy final performance on June 22, 2002, at Krazy Fest 5 in Louisville, Kentucky, the members of Drowningman parted ways.
Reunion and Don't Push Us When We're Hot (2004–2005)
Denny Donovan and Simon Brody revived Drowningman briefly, beginning with a 2005 trek with The Dillinger Escape Plan, Misery Signals, Every Time I Die and Zao.
Drowningman announced a summer 2005 nationwide U.S. trek with The Minor Times. Following the tour they recorded what would be the band's final album, which was moderately well received by some critics but judged not up to par with the band's earlier albums.
A promotional video for the track "White People Are Stupid" was directed by Joseph Patisall, which MTV aired with the abbreviated title "WPAS".
In October 2005, before embarking on a tour to promote the new album, former members guitarist Frank Smecker and drummer Dave Joyal rejoined the band, the latter having previously been involved during the Still Loves You EP era. The band also recorded a version of Black Flag's "Loose Nut" for the ReIgnition Recordings reissue of the tribute album Black On Black.
When the tour for Don't Push Us When We're Hot commenced, the band became disgruntled by the poor organization of the tour, and only made it three or four shows into the tour before breaking up the band for the second time.
In 2014, Drowningman reformed for three shows in the Northeast with the Rock and Roll Killing Machine line-up.
Members
Final line-up
Simon Brody - vocals (1998–2002, 2004–2005, 2014)
Javin Leonard - guitar (1996–2002, 2014, 2021)
Matt Roy - guitar (1999–2002, 2014, 2021)
Dave Barnett - bass (1996–2000, 2000–2002, 2014)
Jackson Jacques - EVP (2021)
Joe Villemaire - drums (1998–2000, 2014)
Former members
Teej Maynard - Junior Guitar (2021–present)
Little Baby Kevin - guitar (2021)
Hans Olsen - guitar (2004–2005)
Jamie Durivage - bass (2004–2005)
Brian Curry - drums (2004–2005)
Dave Joyal - drums (2000–2002)
Zack Martin - bass (1999)
Daryl Rabidoux - guitar (1997–1999)
Todd Tomlinson - drums (1996–1998)
Denny Donovan - guitar (1996–1997)
Josh Levy - bass (1996–1997)
Timeline
Discography
Studio albums
Busy Signal at the Suicide Hotline (1998, Hydra Head)
Rock and Roll Killing Machine (2000, Revelation)
Don't Push Us When We're Hot (2004, Thorp)
Singles and EPs
Weighted and Weighed Down 7-inch (1997, Hydra Head)
Jim Fear/My First Restraining Order split 7-inch with The Dillinger Escape Plan (1999, Hydra Head)
How They Light Cigarettes In Prison 7-inch/CDep (2000, Revelation)
Drowningman Still Loves You 10-inch/CDep (2001, Equal Vision)
Compilation albums
Learn to Let It Go: the Demos (2004, Law of Inertia)
References
External links
Simon Brody's blog, including postings of The Scheme's unreleased tracks
Simon's New Band
Musical groups established in 1997
Musical groups disestablished in 2005
Equal Vision Records artists
Metalcore musical groups from Vermont
|
```c++
// Use, modification and distribution is subject to the Boost Software
// License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
// path_to_url)
//
// For more information, see path_to_url
//
#include <boost/range/algorithm/set_algorithm.hpp>
#include <boost/test/test_tools.hpp>
#include <boost/test/unit_test.hpp>
#include <boost/assign.hpp>
#include <boost/bind.hpp>
#include <algorithm>
#include <functional>
#include <list>
#include <numeric>
#include <deque>
#include <vector>
namespace boost
{
namespace
{
template<class Container1, class Iterator, class Container2>
void check_result(
Container1& reference,
Iterator reference_result,
Container2& test_cont,
Iterator test_result
)
{
BOOST_CHECK_EQUAL(
std::distance<Iterator>(reference.begin(), reference_result),
std::distance<Iterator>(test_cont.begin(), test_result)
);
BOOST_CHECK_EQUAL_COLLECTIONS(
reference.begin(), reference.end(),
test_cont.begin(), test_cont.end()
);
}
template<class Container1, class Container2>
void test(Container1& cont1, Container2& cont2)
{
typedef BOOST_DEDUCED_TYPENAME Container1::value_type value_t;
typedef BOOST_DEDUCED_TYPENAME std::vector<value_t>::iterator iterator_t;
std::vector<value_t> reference(cont1.size() + cont2.size());
std::vector<value_t> test_cont(reference);
iterator_t reference_result
= std::set_difference(cont1.begin(), cont1.end(),
cont2.begin(), cont2.end(),
reference.begin());
iterator_t test_result
= boost::set_difference(cont1, cont2, test_cont.begin());
check_result(reference, reference_result,
test_cont, test_result);
test_result = boost::set_difference(
boost::make_iterator_range(cont1), cont2,
test_cont.begin());
check_result(reference, reference_result,
test_cont, test_result);
test_result = boost::set_difference(
cont1, boost::make_iterator_range(cont2),
test_cont.begin());
check_result(reference, reference_result,
test_cont, test_result);
test_result = boost::set_difference(
boost::make_iterator_range(cont1),
boost::make_iterator_range(cont2),
test_cont.begin());
check_result(reference, reference_result,
test_cont, test_result);
}
template<class Container, class BinaryPredicate>
void sort_container(Container& cont, BinaryPredicate pred)
{
typedef BOOST_DEDUCED_TYPENAME Container::value_type value_t;
std::vector<value_t> temp(cont.begin(), cont.end());
std::sort(temp.begin(), temp.end(), pred);
cont.assign(temp.begin(), temp.end());
}
template<class Container1,
class Container2,
class BinaryPredicate>
void test_pred(Container1 cont1, Container2 cont2,
BinaryPredicate pred)
{
typedef BOOST_DEDUCED_TYPENAME Container1::value_type value_t;
typedef BOOST_DEDUCED_TYPENAME std::vector<value_t>::iterator iterator_t;
sort_container(cont1, pred);
sort_container(cont2, pred);
std::vector<value_t> reference(cont1.size() + cont2.size());
std::vector<value_t> test_cont(reference);
iterator_t reference_result
= std::set_difference(cont1.begin(), cont1.end(),
cont2.begin(), cont2.end(),
reference.begin(),
pred);
iterator_t test_result
= boost::set_difference(cont1, cont2, test_cont.begin(), pred);
check_result(reference, reference_result,
test_cont, test_result);
test_result = boost::set_difference(
boost::make_iterator_range(cont1), cont2,
test_cont.begin(), pred);
check_result(reference, reference_result,
test_cont, test_result);
test_result = boost::set_difference(
cont1, boost::make_iterator_range(cont2),
test_cont.begin(), pred);
check_result(reference, reference_result,
test_cont, test_result);
test_result = boost::set_difference(
boost::make_iterator_range(cont1),
boost::make_iterator_range(cont2),
test_cont.begin(), pred);
check_result(reference, reference_result,
test_cont, test_result);
}
template<class Container1, class Container2>
void test_set_difference_impl(
Container1& cont1,
Container2& cont2
)
{
test(cont1, cont2);
test_pred(cont1, cont2, std::less<int>());
test_pred(cont1, cont2, std::greater<int>());
}
template<class Container1, class Container2>
void test_set_difference_impl()
{
using namespace boost::assign;
Container1 cont1;
Container2 cont2;
test_set_difference_impl(cont1, cont2);
cont1.clear();
cont2.clear();
cont1 += 1;
test_set_difference_impl(cont1, cont2);
cont1.clear();
cont2.clear();
cont2 += 1;
test_set_difference_impl(cont1, cont2);
cont1.clear();
cont2.clear();
cont1 += 1,2,3,4,5,6,7,8,9;
cont2 += 2,3,4;
test_set_difference_impl(cont1, cont2);
cont1.clear();
cont2.clear();
cont1 += 2,3,4;
cont2 += 1,2,3,4,5,6,7,8,9;
test_set_difference_impl(cont1, cont2);
}
void test_set_difference()
{
test_set_difference_impl< std::vector<int>, std::vector<int> >();
test_set_difference_impl< std::list<int>, std::list<int> >();
test_set_difference_impl< std::deque<int>, std::deque<int> >();
test_set_difference_impl< std::vector<int>, std::list<int> >();
test_set_difference_impl< std::list<int>, std::vector<int> >();
}
}
}
boost::unit_test::test_suite*
init_unit_test_suite(int argc, char* argv[])
{
boost::unit_test::test_suite* test
= BOOST_TEST_SUITE( "RangeTestSuite.algorithm.set_difference" );
test->add( BOOST_TEST_CASE( &boost::test_set_difference ) );
return test;
}
```
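The Boost range overloads exercised above all wrap `std::set_difference`, which assumes both input ranges are sorted and writes out the elements of the first range that do not appear in the second. The same merge-style algorithm, sketched in Python for reference:

```python
def set_difference(a, b):
    """Difference of two sorted lists, mirroring std::set_difference."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:          # only in a: keep it
            out.append(a[i]); i += 1
        elif b[j] < a[i]:        # only in b: skip it
            j += 1
        else:                    # in both: skip one occurrence from each
            i += 1; j += 1
    out.extend(a[i:])            # remainder of a is not in b
    return out

# Same inputs as the test cases above.
assert set_difference([1, 2, 3, 4, 5, 6, 7, 8, 9], [2, 3, 4]) == [1, 5, 6, 7, 8, 9]
assert set_difference([2, 3, 4], [1, 2, 3, 4, 5, 6, 7, 8, 9]) == []
```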
|
```python
import datetime

import pyodbc
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

server = 'your_server.database.windows.net'
database = 'your_database'
username = 'your_user'

# Fetch the database password from Azure Key Vault.
# NOTE: replace "AppSecret" with the name of the secret in your vault.
credential = DefaultAzureCredential()
secret_client = SecretClient(vault_url="path_to_url", credential=credential)
secret = secret_client.get_secret("AppSecret")
password = secret.value

cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=' + server +
                      ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password)
cursor = cnxn.cursor()

tsql = "SELECT SUM(Price) as sum FROM Table_with_3M_rows"
a = datetime.datetime.now()
with cursor.execute(tsql):
    b = datetime.datetime.now()
    c = b - a
    for row in cursor:
        print('Sum:', str(row[0]))
    print('QueryTime:', c.total_seconds() * 1000, 'ms')
```
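One pitfall when timing with `datetime` subtraction: `timedelta.microseconds` holds only the sub-second fraction of the interval (0–999999), so printing it as an elapsed time silently drops whole seconds and mislabels the unit. `total_seconds()` gives the full duration:

```python
from datetime import timedelta

d = timedelta(seconds=2, microseconds=500)

assert d.microseconds == 500          # fractional part only: NOT the elapsed time
assert d.total_seconds() == 2.0005    # full elapsed time, in seconds

elapsed_ms = d.total_seconds() * 1000  # convert to milliseconds for display
assert abs(elapsed_ms - 2000.5) < 1e-6
```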
|
```php
<?php
/*
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not
 * use this file except in compliance with the License. You may obtain a copy of
 * the License at
 *
 * path_to_url
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations under
 * the License.
 */
namespace Google\Service\DisplayVideo;
class ActivateManualTriggerRequest extends \Google\Model
{
}
// Adding a class alias for backwards compatibility with the previous class name.
class_alias(ActivateManualTriggerRequest::class, 'Google_Service_DisplayVideo_ActivateManualTriggerRequest');
```
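The `class_alias` call above keeps code written against the old `Google_Service_*` naming scheme working after the class was moved into a namespace. In Python, the same backwards-compatibility trick is a plain name binding (names here mirror the PHP example for illustration only):

```python
class ActivateManualTriggerRequest:
    """Current class name."""

# Old flat name kept working as an alias for backwards compatibility:
# both names refer to the very same class object.
Google_Service_DisplayVideo_ActivateManualTriggerRequest = ActivateManualTriggerRequest

obj = Google_Service_DisplayVideo_ActivateManualTriggerRequest()
assert isinstance(obj, ActivateManualTriggerRequest)
assert Google_Service_DisplayVideo_ActivateManualTriggerRequest is ActivateManualTriggerRequest
```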
|
```c
/* -*- C -*-
* main.c -- the bare scullv char module
*
*
* The source code in this file can be freely used, adapted,
* and redistributed in source or binary form, so long as an
* acknowledgment appears in derived source files. The citation
* should list that the code comes from the book "Linux Device
* Drivers" by Alessandro Rubini and Jonathan Corbet, published
* by O'Reilly & Associates. No warranty is attached;
* we cannot take responsibility for errors or fitness for use.
*
* $Id: _main.c.in,v 1.21 2004/10/14 20:11:39 corbet Exp $
*/
#include <linux/config.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/init.h>
#include <linux/kernel.h> /* printk() */
#include <linux/slab.h> /* kmalloc() */
#include <linux/fs.h> /* everything... */
#include <linux/errno.h> /* error codes */
#include <linux/types.h> /* size_t */
#include <linux/proc_fs.h>
#include <linux/fcntl.h> /* O_ACCMODE */
#include <linux/aio.h>
#include <asm/uaccess.h>
#include <linux/vmalloc.h>
#include "scullv.h" /* local definitions */
int scullv_major = SCULLV_MAJOR;
int scullv_devs = SCULLV_DEVS; /* number of bare scullv devices */
int scullv_qset = SCULLV_QSET;
int scullv_order = SCULLV_ORDER;
module_param(scullv_major, int, 0);
module_param(scullv_devs, int, 0);
module_param(scullv_qset, int, 0);
module_param(scullv_order, int, 0);
MODULE_AUTHOR("Alessandro Rubini");
MODULE_LICENSE("Dual BSD/GPL");
struct scullv_dev *scullv_devices; /* allocated in scullv_init */
int scullv_trim(struct scullv_dev *dev);
void scullv_cleanup(void);
#ifdef SCULLV_USE_PROC /* don't waste space if unused */
/*
* The proc filesystem: function to read and entry
*/
void scullv_proc_offset(char *buf, char **start, off_t *offset, int *len)
{
if (*offset == 0)
return;
if (*offset >= *len) {
/* Not there yet */
*offset -= *len;
*len = 0;
} else {
/* We're into the interesting stuff now */
*start = buf + *offset;
*offset = 0;
}
}
/* FIXME: Do we need this here?? It be ugly */
int scullv_read_procmem(char *buf, char **start, off_t offset,
int count, int *eof, void *data)
{
int i, j, order, qset, len = 0;
int limit = count - 80; /* Don't print more than this */
struct scullv_dev *d;
*start = buf;
for(i = 0; i < scullv_devs; i++) {
d = &scullv_devices[i];
if (down_interruptible (&d->sem))
return -ERESTARTSYS;
qset = d->qset; /* retrieve the features of each device */
order = d->order;
len += sprintf(buf+len,"\nDevice %i: qset %i, order %i, sz %li\n",
i, qset, order, (long)(d->size));
for (; d; d = d->next) { /* scan the list */
len += sprintf(buf+len," item at %p, qset at %p\n",d,d->data);
scullv_proc_offset (buf, start, &offset, &len);
if (len > limit)
goto out;
if (d->data && !d->next) /* dump only the last item - save space */
for (j = 0; j < qset; j++) {
if (d->data[j])
len += sprintf(buf+len," % 4i:%8p\n",j,d->data[j]);
scullv_proc_offset (buf, start, &offset, &len);
if (len > limit)
goto out;
}
}
out:
up (&scullv_devices[i].sem);
if (len > limit)
break;
}
*eof = 1;
return len;
}
#endif /* SCULLV_USE_PROC */
/*
* Open and close
*/
int scullv_open (struct inode *inode, struct file *filp)
{
struct scullv_dev *dev; /* device information */
/* Find the device */
dev = container_of(inode->i_cdev, struct scullv_dev, cdev);
/* now trim to 0 the length of the device if open was write-only */
if ( (filp->f_flags & O_ACCMODE) == O_WRONLY) {
if (down_interruptible (&dev->sem))
return -ERESTARTSYS;
scullv_trim(dev); /* ignore errors */
up (&dev->sem);
}
/* and use filp->private_data to point to the device data */
filp->private_data = dev;
return 0; /* success */
}
int scullv_release (struct inode *inode, struct file *filp)
{
return 0;
}
/*
* Follow the list
*/
struct scullv_dev *scullv_follow(struct scullv_dev *dev, int n)
{
while (n--) {
if (!dev->next) {
dev->next = kmalloc(sizeof(struct scullv_dev), GFP_KERNEL);
memset(dev->next, 0, sizeof(struct scullv_dev));
}
dev = dev->next;
continue;
}
return dev;
}
/*
* Data management: read and write
*/
ssize_t scullv_read (struct file *filp, char __user *buf, size_t count,
loff_t *f_pos)
{
struct scullv_dev *dev = filp->private_data; /* the first listitem */
struct scullv_dev *dptr;
int quantum = PAGE_SIZE << dev->order;
int qset = dev->qset;
int itemsize = quantum * qset; /* how many bytes in the listitem */
int item, s_pos, q_pos, rest;
ssize_t retval = 0;
if (down_interruptible (&dev->sem))
return -ERESTARTSYS;
if (*f_pos > dev->size)
goto nothing;
if (*f_pos + count > dev->size)
count = dev->size - *f_pos;
/* find listitem, qset index, and offset in the quantum */
item = ((long) *f_pos) / itemsize;
rest = ((long) *f_pos) % itemsize;
s_pos = rest / quantum; q_pos = rest % quantum;
/* follow the list up to the right position (defined elsewhere) */
dptr = scullv_follow(dev, item);
if (!dptr->data)
goto nothing; /* don't fill holes */
if (!dptr->data[s_pos])
goto nothing;
if (count > quantum - q_pos)
count = quantum - q_pos; /* read only up to the end of this quantum */
if (copy_to_user (buf, dptr->data[s_pos]+q_pos, count)) {
retval = -EFAULT;
goto nothing;
}
up (&dev->sem);
*f_pos += count;
return count;
nothing:
up (&dev->sem);
return retval;
}
ssize_t scullv_write (struct file *filp, const char __user *buf, size_t count,
loff_t *f_pos)
{
struct scullv_dev *dev = filp->private_data;
struct scullv_dev *dptr;
int quantum = PAGE_SIZE << dev->order;
int qset = dev->qset;
int itemsize = quantum * qset;
int item, s_pos, q_pos, rest;
ssize_t retval = -ENOMEM; /* our most likely error */
if (down_interruptible (&dev->sem))
return -ERESTARTSYS;
/* find listitem, qset index and offset in the quantum */
item = ((long) *f_pos) / itemsize;
rest = ((long) *f_pos) % itemsize;
s_pos = rest / quantum; q_pos = rest % quantum;
/* follow the list up to the right position */
dptr = scullv_follow(dev, item);
if (!dptr->data) {
dptr->data = kmalloc(qset * sizeof(void *), GFP_KERNEL);
if (!dptr->data)
goto nomem;
memset(dptr->data, 0, qset * sizeof(char *));
}
/* Allocate a quantum using virtual addresses */
if (!dptr->data[s_pos]) {
dptr->data[s_pos] = (void *)vmalloc(PAGE_SIZE << dptr->order);
if (!dptr->data[s_pos])
goto nomem;
memset(dptr->data[s_pos], 0, PAGE_SIZE << dptr->order);
}
if (count > quantum - q_pos)
count = quantum - q_pos; /* write only up to the end of this quantum */
if (copy_from_user (dptr->data[s_pos]+q_pos, buf, count)) {
retval = -EFAULT;
goto nomem;
}
*f_pos += count;
/* update the size */
if (dev->size < *f_pos)
dev->size = *f_pos;
up (&dev->sem);
return count;
nomem:
up (&dev->sem);
return retval;
}
/*
* The ioctl() implementation
*/
int scullv_ioctl (struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg)
{
int err = 0, ret = 0, tmp;
/* don't even decode wrong cmds: better returning ENOTTY than EFAULT */
if (_IOC_TYPE(cmd) != SCULLV_IOC_MAGIC) return -ENOTTY;
if (_IOC_NR(cmd) > SCULLV_IOC_MAXNR) return -ENOTTY;
/*
* the type is a bitmask, and VERIFY_WRITE catches R/W
* transfers. Note that the type is user-oriented, while
* verify_area is kernel-oriented, so the concept of "read" and
* "write" is reversed
*/
if (_IOC_DIR(cmd) & _IOC_READ)
err = !access_ok(VERIFY_WRITE, (void __user *)arg, _IOC_SIZE(cmd));
else if (_IOC_DIR(cmd) & _IOC_WRITE)
err = !access_ok(VERIFY_READ, (void __user *)arg, _IOC_SIZE(cmd));
if (err)
return -EFAULT;
switch(cmd) {
case SCULLV_IOCRESET:
scullv_qset = SCULLV_QSET;
scullv_order = SCULLV_ORDER;
break;
case SCULLV_IOCSORDER: /* Set: arg points to the value */
ret = __get_user(scullv_order, (int __user *) arg);
break;
case SCULLV_IOCTORDER: /* Tell: arg is the value */
scullv_order = arg;
break;
case SCULLV_IOCGORDER: /* Get: arg is pointer to result */
ret = __put_user (scullv_order, (int __user *) arg);
break;
case SCULLV_IOCQORDER: /* Query: return it (it's positive) */
return scullv_order;
case SCULLV_IOCXORDER: /* eXchange: use arg as pointer */
tmp = scullv_order;
ret = __get_user(scullv_order, (int __user *) arg);
if (ret == 0)
ret = __put_user(tmp, (int __user *) arg);
break;
case SCULLV_IOCHORDER: /* sHift: like Tell + Query */
tmp = scullv_order;
scullv_order = arg;
return tmp;
case SCULLV_IOCSQSET:
ret = __get_user(scullv_qset, (int __user *) arg);
break;
case SCULLV_IOCTQSET:
scullv_qset = arg;
break;
case SCULLV_IOCGQSET:
ret = __put_user(scullv_qset, (int __user *)arg);
break;
case SCULLV_IOCQQSET:
return scullv_qset;
case SCULLV_IOCXQSET:
tmp = scullv_qset;
ret = __get_user(scullv_qset, (int __user *)arg);
if (ret == 0)
ret = __put_user(tmp, (int __user *)arg);
break;
case SCULLV_IOCHQSET:
tmp = scullv_qset;
scullv_qset = arg;
return tmp;
default: /* redundant, as cmd was checked against MAXNR */
return -ENOTTY;
}
return ret;
}
/*
* The "extended" operations
*/
loff_t scullv_llseek (struct file *filp, loff_t off, int whence)
{
struct scullv_dev *dev = filp->private_data;
long newpos;
switch(whence) {
case 0: /* SEEK_SET */
newpos = off;
break;
case 1: /* SEEK_CUR */
newpos = filp->f_pos + off;
break;
case 2: /* SEEK_END */
newpos = dev->size + off;
break;
default: /* can't happen */
return -EINVAL;
}
if (newpos<0) return -EINVAL;
filp->f_pos = newpos;
return newpos;
}
/*
* A simple asynchronous I/O implementation.
*/
struct async_work {
struct kiocb *iocb;
int result;
struct work_struct work;
};
/*
* "Complete" an asynchronous operation.
*/
static void scullv_do_deferred_op(void *p)
{
struct async_work *stuff = (struct async_work *) p;
aio_complete(stuff->iocb, stuff->result, 0);
kfree(stuff);
}
static int scullv_defer_op(int write, struct kiocb *iocb, char __user *buf,
size_t count, loff_t pos)
{
struct async_work *stuff;
int result;
/* Copy now while we can access the buffer */
if (write)
result = scullv_write(iocb->ki_filp, buf, count, &pos);
else
result = scullv_read(iocb->ki_filp, buf, count, &pos);
/* If this is a synchronous IOCB, we return our status now. */
if (is_sync_kiocb(iocb))
return result;
/* Otherwise defer the completion for a few milliseconds. */
stuff = kmalloc (sizeof (*stuff), GFP_KERNEL);
if (stuff == NULL)
return result; /* No memory, just complete now */
stuff->iocb = iocb;
stuff->result = result;
INIT_WORK(&stuff->work, scullv_do_deferred_op, stuff);
schedule_delayed_work(&stuff->work, HZ/100);
return -EIOCBQUEUED;
}
static ssize_t scullv_aio_read(struct kiocb *iocb, char __user *buf, size_t count,
loff_t pos)
{
return scullv_defer_op(0, iocb, buf, count, pos);
}
static ssize_t scullv_aio_write(struct kiocb *iocb, const char __user *buf,
size_t count, loff_t pos)
{
return scullv_defer_op(1, iocb, (char __user *) buf, count, pos);
}
/*
* Mmap *is* available, but confined in a different file
*/
extern int scullv_mmap(struct file *filp, struct vm_area_struct *vma);
/*
* The fops
*/
struct file_operations scullv_fops = {
.owner = THIS_MODULE,
.llseek = scullv_llseek,
.read = scullv_read,
.write = scullv_write,
.ioctl = scullv_ioctl,
.mmap = scullv_mmap,
.open = scullv_open,
.release = scullv_release,
.aio_read = scullv_aio_read,
.aio_write = scullv_aio_write,
};
int scullv_trim(struct scullv_dev *dev)
{
struct scullv_dev *next, *dptr;
int qset = dev->qset; /* "dev" is not-null */
int i;
if (dev->vmas) /* don't trim: there are active mappings */
return -EBUSY;
for (dptr = dev; dptr; dptr = next) { /* all the list items */
if (dptr->data) {
/* Release the quantum-set */
for (i = 0; i < qset; i++)
if (dptr->data[i])
vfree(dptr->data[i]);
kfree(dptr->data);
dptr->data=NULL;
}
next=dptr->next;
if (dptr != dev) kfree(dptr); /* all of them but the first */
}
dev->size = 0;
dev->qset = scullv_qset;
dev->order = scullv_order;
dev->next = NULL;
return 0;
}
static void scullv_setup_cdev(struct scullv_dev *dev, int index)
{
int err, devno = MKDEV(scullv_major, index);
cdev_init(&dev->cdev, &scullv_fops);
dev->cdev.owner = THIS_MODULE;
dev->cdev.ops = &scullv_fops;
err = cdev_add (&dev->cdev, devno, 1);
/* Fail gracefully if need be */
if (err)
printk(KERN_NOTICE "Error %d adding scull%d", err, index);
}
/*
* Finally, the module stuff
*/
int scullv_init(void)
{
int result, i;
dev_t dev = MKDEV(scullv_major, 0);
/*
* Register your major, and accept a dynamic number.
*/
if (scullv_major)
result = register_chrdev_region(dev, scullv_devs, "scullv");
else {
result = alloc_chrdev_region(&dev, 0, scullv_devs, "scullv");
scullv_major = MAJOR(dev);
}
if (result < 0)
return result;
/*
* allocate the devices -- we can't have them static, as the number
* can be specified at load time
*/
scullv_devices = kmalloc(scullv_devs*sizeof (struct scullv_dev), GFP_KERNEL);
if (!scullv_devices) {
result = -ENOMEM;
goto fail_malloc;
}
memset(scullv_devices, 0, scullv_devs*sizeof (struct scullv_dev));
for (i = 0; i < scullv_devs; i++) {
scullv_devices[i].order = scullv_order;
scullv_devices[i].qset = scullv_qset;
sema_init (&scullv_devices[i].sem, 1);
scullv_setup_cdev(scullv_devices + i, i);
}
#ifdef SCULLV_USE_PROC /* only when available */
create_proc_read_entry("scullvmem", 0, NULL, scullv_read_procmem, NULL);
#endif
return 0; /* succeed */
fail_malloc:
unregister_chrdev_region(dev, scullv_devs);
return result;
}
void scullv_cleanup(void)
{
int i;
#ifdef SCULLV_USE_PROC
remove_proc_entry("scullvmem", NULL);
#endif
for (i = 0; i < scullv_devs; i++) {
cdev_del(&scullv_devices[i].cdev);
scullv_trim(scullv_devices + i);
}
kfree(scullv_devices);
unregister_chrdev_region(MKDEV (scullv_major, 0), scullv_devs);
}
module_init(scullv_init);
module_exit(scullv_cleanup);
```
|
```go
// Code generated by smithy-go-codegen DO NOT EDIT.
package sqs
import (
"context"
"fmt"
awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
"github.com/aws/aws-sdk-go-v2/service/sqs/types"
"github.com/aws/smithy-go/middleware"
smithyhttp "github.com/aws/smithy-go/transport/http"
)
// Gets the most recent message movement tasks (up to 10) under a specific source
// queue.
//
// - This action is currently limited to supporting message redrive from [dead-letter queues (DLQs)]only.
// In this context, the source queue is the dead-letter queue (DLQ), while the
// destination queue can be the original source queue (from which the messages were
// driven to the dead-letter-queue), or a custom destination queue.
//
// - Only one active message movement task is supported per queue at any given
// time.
//
// [dead-letter queues (DLQs)]: path_to_url
func (c *Client) ListMessageMoveTasks(ctx context.Context, params *ListMessageMoveTasksInput, optFns ...func(*Options)) (*ListMessageMoveTasksOutput, error) {
if params == nil {
params = &ListMessageMoveTasksInput{}
}
result, metadata, err := c.invokeOperation(ctx, "ListMessageMoveTasks", params, optFns, c.addOperationListMessageMoveTasksMiddlewares)
if err != nil {
return nil, err
}
out := result.(*ListMessageMoveTasksOutput)
out.ResultMetadata = metadata
return out, nil
}
type ListMessageMoveTasksInput struct {
// The ARN of the queue whose message movement tasks are to be listed.
//
// This member is required.
SourceArn *string
// The maximum number of results to include in the response. The default is 1,
// which provides the most recent message movement task. The upper limit is 10.
MaxResults *int32
noSmithyDocumentSerde
}
type ListMessageMoveTasksOutput struct {
// A list of message movement tasks and their attributes.
Results []types.ListMessageMoveTasksResultEntry
// Metadata pertaining to the operation's result.
ResultMetadata middleware.Metadata
noSmithyDocumentSerde
}
func (c *Client) addOperationListMessageMoveTasksMiddlewares(stack *middleware.Stack, options Options) (err error) {
if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
return err
}
err = stack.Serialize.Add(&awsAwsjson10_serializeOpListMessageMoveTasks{}, middleware.After)
if err != nil {
return err
}
err = stack.Deserialize.Add(&awsAwsjson10_deserializeOpListMessageMoveTasks{}, middleware.After)
if err != nil {
return err
}
if err := addProtocolFinalizerMiddlewares(stack, options, "ListMessageMoveTasks"); err != nil {
return fmt.Errorf("add protocol finalizers: %v", err)
}
if err = addlegacyEndpointContextSetter(stack, options); err != nil {
return err
}
if err = addSetLoggerMiddleware(stack, options); err != nil {
return err
}
if err = addClientRequestID(stack); err != nil {
return err
}
if err = addComputeContentLength(stack); err != nil {
return err
}
if err = addResolveEndpointMiddleware(stack, options); err != nil {
return err
}
if err = addComputePayloadSHA256(stack); err != nil {
return err
}
if err = addRetry(stack, options); err != nil {
return err
}
if err = addRawResponseToMetadata(stack); err != nil {
return err
}
if err = addRecordResponseTiming(stack); err != nil {
return err
}
if err = addClientUserAgent(stack, options); err != nil {
return err
}
if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
return err
}
if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
return err
}
if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
return err
}
if err = addTimeOffsetBuild(stack, c); err != nil {
return err
}
if err = addUserAgentRetryMode(stack, options); err != nil {
return err
}
if err = addOpListMessageMoveTasksValidationMiddleware(stack); err != nil {
return err
}
if err = stack.Initialize.Add(newServiceMetadataMiddleware_opListMessageMoveTasks(options.Region), middleware.Before); err != nil {
return err
}
if err = addRecursionDetection(stack); err != nil {
return err
}
if err = addRequestIDRetrieverMiddleware(stack); err != nil {
return err
}
if err = addResponseErrorMiddleware(stack); err != nil {
return err
}
if err = addRequestResponseLogging(stack, options); err != nil {
return err
}
if err = addDisableHTTPSMiddleware(stack, options); err != nil {
return err
}
return nil
}
func newServiceMetadataMiddleware_opListMessageMoveTasks(region string) *awsmiddleware.RegisterServiceMetadata {
return &awsmiddleware.RegisterServiceMetadata{
Region: region,
ServiceID: ServiceID,
OperationName: "ListMessageMoveTasks",
}
}
```
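The doc comment above spells out the operation's constraints: the source queue must be a DLQ, only one active movement task per queue, and at most 10 results. A hedged usage sketch follows, assuming default credential resolution; the region, account number, and queue ARN are placeholders, not real values:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

func main() {
	// Resolve credentials and region from the environment / shared config.
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := sqs.NewFromConfig(cfg)

	// SourceArn must be the ARN of the dead-letter queue; MaxResults caps at 10.
	out, err := client.ListMessageMoveTasks(context.TODO(), &sqs.ListMessageMoveTasksInput{
		SourceArn:  aws.String("arn:aws:sqs:us-east-1:123456789012:my-dlq"),
		MaxResults: aws.Int32(10),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, task := range out.Results {
		fmt.Printf("%+v\n", task)
	}
}
```

Each entry in `out.Results` is a `types.ListMessageMoveTasksResultEntry` describing one movement task.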
|
Marian Petrescu (born 1970, Bucharest, Romania) is a Romanian jazz pianist. Playing since the age of 4, he has been on the jazz scene since the age of 15, after appearing at Pori Jazz. He is known for his work with the renowned jazz guitarist Andreas Öberg, among many others.
Petrescu moved to Sweden in the 1980s where he attended the conservatory in Stockholm. He now resides in Finland where he studied at the Sibelius Academy, Helsinki.
Discography
1986: Pianist (Kompass Records)
2006: Body and Soul (Hot Club Records)
2009: Resonance Big Band Pays Tribute to Oscar Peterson (Resonance Records)
2010: Marian Petrescu Quartet with Andreas Öberg – Thrivin' – Live at Jazz Standard (Resonance Records)
References
Romanian jazz pianists
1970 births
Living people
21st-century pianists
Resonance Records artists
|
Caernarfonshire, sometimes spelled Caernarvonshire or Carnarvonshire, is one of the thirteen historic counties, a vice-county and a former administrative county of Wales.
Geography
The county is bounded to the north by the Irish Sea, to the east by Denbighshire, to the south by Cardigan Bay and Merionethshire, and to the west by Caernarfon Bay and the Menai Strait, separating it from Anglesey.
The county has a largely mountainous terrain. A large part of the Snowdonian Range lies in the centre and south-east of the county, including Snowdon itself, the highest mountain in Wales at 1,085 m (3,560 ft). The south-west of the county is formed by the Llŷn peninsula, with Bardsey Island lying off its western end. The north of the county, between the mountains and the Menai Strait, has much more subdued relief. The east of the county is part of the Vale of Conwy, with the River Conwy forming much of the eastern boundary. Llandudno and Creuddyn form a small peninsula to the north-east across the Conwy estuary.
The county includes the city of Bangor and the towns and villages of Betws-y-Coed, Caernarfon, Conwy, Llandudno, Porthmadog and Pwllheli.
History
Creation
The county was originally created under the terms of the Statute of Rhuddlan in 1284 following Edward I of England's conquest of the Principality of Wales and included the cantrefi of: Llŷn, Arfon, Arllechwedd and the commote of Eifionydd (the northern portion of Dunoding).
The county was divided into ten hundreds based on the existing Welsh commotes: Cymydmaen (anglicised as Commitmaen), Creuddyn, Dinllaen, Eifionydd (Evionydd), Cafflogion (Gaflogion), Llechwedd Isaf (...Isav), Llechwedd Uchaf (...Uchav), Nant Conwy (Nant-Conway), Is Gwyrfai (Isgorvai) and Uwch Gwyrfai (Uchgorvai).
19th and 20th centuries
During the 19th century the population increased steadily, from 46,000 in the 1801 census, to 81,093 in 1841, and up to 137,000 in the 1901 census (figures given for the registration county).
Governance
Under the Local Government Act 1888, an elected Carnarvonshire County Council took over functions from the county's quarter sessions. The administrative county covered by the county council had identical borders to the geographic county. The administrative county was formally renamed Caernarvonshire on 1 July 1926. The county council was based at County Hall, Caernarfon.
The county contained five ancient boroughs. Two of these (Caernarfon and Pwllheli) were reformed in 1835 by the Municipal Corporations Act. Criccieth established a special body of commissioners in 1873. Conwy (then called Conway in English) was reformed to become a municipal borough in 1877. The remaining borough, the City of Bangor was not reformed until 1883.
Under the Public Health Act 1848 and the Local Government Act 1858 a number of towns were created local board districts or local government districts respectively, with local boards to govern their areas. Other towns became improvement commissioners' districts by private act of parliament. In 1872 these, along with the municipal boroughs, became urban sanitary districts. At the same time the remainder of the county was divided into rural sanitary districts, some of which crossed county boundaries. The Local Government Act 1894 redesignated these as urban and rural districts. A county review order in 1934 made changes to the county's districts.
The civil parish of Llysfaen was a detached exclave of the county. On 1 April 1923 Llysfaen was transferred to the county of Denbighshire.
Under the Local Government Act 1972 the administrative county of Caernarvonshire was abolished on 1 April 1974. It was largely split between the three districts of Aberconwy, Arfon and Dwyfor of Gwynedd (along with Merionethshire and Anglesey). The administrative entity of Caernarfonshire was very briefly revived in 1996, when the unitary area of Caernarfonshire and Merionethshire was created. It was, however, renamed Gwynedd almost immediately. Since then Caernarfonshire has been divided between the unitary authorities of Gwynedd to the west and Conwy to the east.
Coat of arms
Caernarvonshire County Council received a grant of armorial bearings from the College of Arms in 1949. The shield was a combination of the arms of two great native Princes of Wales. The gold and red quarters bearing lions were the arms of Llewelyn the Last – now used as the arms of the Principality of Wales. Across this was placed a green fess or horizontal band, on which were three gold eagles, from the arms of Owain Gwynedd. According to the poet Michael Drayton, the eagles formed the device on the banner of the Caernarvonshire soldiers at the Battle of Agincourt. The crest above the shield was a generic castle, representing Caernarfon, Conwy and Criccieth Castles. Behind the castle was the badge of the heir apparent: three ostrich feathers. The supporters were Welsh dragons with fish tails to show that Caernarvonshire was a Welsh maritime county. The supporter stood on a compartment of rocks for the rugged coast and mountains of the county. The motto Cadernid Gwynedd was adopted by the county council. This was derived from the Mabinogion, and can be translated as "The Strength of Gwynedd".
Flag
The Flag of Caernarfonshire was registered with the Flag Institute in March 2012. The pattern of three gold eagles on a green background is a design with a long association with the county, having reputedly been flown by Caernarfonshire soldiers at the Battle of Agincourt in 1415.
Places of interest
Bangor Cathedral
Ynys Enlli / Bardsey Island
Caernarfon Castle
Conwy Castle
Criccieth Castle
Great Orme Tramway
Gwydir Castle, nr. Llanrwst
Penrhyn Castle
Swallow Falls, Betws-y-Coed
Snowdon Mountain Railway, Llanberis
Ty Mawr Wybrnant
See also
Lord Lieutenant of Carnarvonshire – chronological list of Lords Lieutenant of Caernarvonshire
List of High Sheriffs of Caernarvonshire
Custos Rotulorum of Caernarvonshire – chronological list of Custodes rotulorum of Caernarvonshire
Sheriff of Caernarvonshire – chronological list of Sheriffs of Caenarvonshire
Caernarvonshire (UK Parliament constituency) – chronological list of MPs for former Caernarvonshire constituency
Unitary Authorities of Wales
References
Bibliography
A.H. Dodd, The History of Caernarvonshire (Caernarfonshire Historical Society, 1968).
John Jones, Enwau Lleoedd Sir Gaernarfon (Caernarfon, 1913). Origin and meanings of place names in the county.
External links
Map of Caernarfonshire on Wikishire
The Caernarfonshire Association
Caernarfonshire
History of Gwynedd
Historic counties of Wales
States and territories established in 1284
1280s establishments in Europe
13th century in Wales
|
The 3 arrondissements of the Val-d'Oise department are:
Arrondissement of Argenteuil, (subprefecture: Argenteuil) with 17 communes. The population of the arrondissement was 412,334 in 2016.
Arrondissement of Pontoise, (prefecture of the Val-d'Oise department: Pontoise) with 105 communes. The population of the arrondissement was 338,425 in 2016.
Arrondissement of Sarcelles, (subprefecture: Sarcelles) with 62 communes. The population of the arrondissement was 471,164 in 2016.
History
As parts of the department of Seine-et-Oise, the arrondissement of Pontoise was established in 1800, the arrondissement of Montmorency in 1962 and the arrondissement of Argenteuil in 1966. In 1968 the department of Val-d'Oise was created from part of the former department of Seine-et-Oise, and the arrondissements of Pontoise, Argenteuil and Montmorency became part of it. In March 2000 Sarcelles replaced Montmorency as subprefecture.
The borders of the arrondissements of Val-d'Oise were modified in January 2017:
ten communes passed from the arrondissement of Pontoise to the arrondissement of Argenteuil
two communes passed from the arrondissement of Pontoise to the arrondissement of Sarcelles
one commune passed from the arrondissement of Sarcelles to the arrondissement of Pontoise
References
Val-d'Oise
|
Park Ji-yeon (known mononymously as Jiyeon) is a South Korean singer, actress and model. She is a member of girl group T-ara and its subgroup T-ara N4.
Park has starred in multiple Korean and international movies and television series since 2007. Her first movie offer was for the second installment of "Gas Station Raid"; however, due to scheduling conflicts, she had to drop the role shortly after being confirmed as lead.
Film
Television series
Web series
Theatre / musical
Television shows
Music videos appearances
Hosting
References
Actress filmographies
South Korean filmographies
|
Daniel Dean Bruce (May 18, 1950 – March 1, 1969) was a United States Marine who posthumously received the Medal of Honor for heroism in Vietnam.
Bruce joined the Marines in 1968, and was deployed to Vietnam in January 1969. Two months later, on March 1, 1969, Bruce was on night watch at Firebase Tomahawk in Quang Nam Province when an enemy explosive charge was thrown at his position. The private first class caught it, held it close to his body, and ran from his position; the charge exploded, killing him. This action saved the lives of three other Marines.
Biography
Daniel Bruce was born on May 18, 1950, in Michigan City, Indiana, where he attended Garfield Grammar School, Barker Jr. High School, and Elston Sr. High School.
He enlisted in the U.S. Marine Corps Reserve in Chicago, Illinois on May 20, 1968, and was discharged to enlist in the regular Marine Corps on July 17, 1968.
Upon completion of recruit training with the 2nd Recruit Training Battalion, Recruit Training Regiment, Marine Corps Recruit Depot San Diego, California in September 1968, he was transferred to the Marine Corps Base Camp Pendleton, California. He completed individual combat training with Company U, 3rd Battalion, 2nd Infantry Training Regiment in November, and basic infantry training with Weapons Company, Basic Infantry Training Battalion, 2nd Infantry Training Regiment in December.
On January 1, 1969, Bruce was promoted to private first class, and later that month was ordered to the Republic of Vietnam. He was assigned duty as anti-tank assault man with Headquarters and Service Company, 3rd Battalion, 5th Marines, 1st Marine Division.
While participating in combat at Firebase Tomahawk, Quang Nam Province, on March 1, 1969, he was killed in action. For his gallantry on this occasion, which saved the lives of three fellow Marines, he was posthumously awarded the Medal of Honor. He was on night watch when an enemy explosive was thrown at his position. He caught the charge, held it to his body, and ran from his position, away from fellow Marines who would have been killed by the explosion. Seconds later, the charge exploded and the full force of the blast was absorbed by Bruce, killing him instantly.
Decorations
A complete list of his medals and decorations includes: the Medal of Honor, the Purple Heart, the National Defense Service Medal, the Vietnam Service Medal with one bronze star, and the Republic of Vietnam Campaign Medal.
Medal of Honor citation
The President of the United States in the name of The Congress takes pride in presenting the MEDAL OF HONOR posthumously to
for service as set forth in the following CITATION:
For conspicuous gallantry and intrepidity at the risk of his life above and beyond the call of duty while serving as a Mortar Man with Headquarters and Service Company, Third Battalion, Fifth Marines, First Marine Division, against the enemy in the Republic of Vietnam. Early on the morning of March 1, 1969, Private First Class Bruce was on watch in his night defensive position at Fire Support Base Tomahawk in Quang Nam Province when he heard movements ahead of him. An enemy explosive charge was thrown toward his position and he reacted instantly, catching the device and shouting to alert his companions. Realizing the danger to the adjacent position with its two occupants, Private First Class Bruce held the device to his body and attempted to carry it from the vicinity of the entrenched Marines. As he moved away, the device detonated and he absorbed the full force of the explosion. Private First Class Bruce's indomitable courage, inspiring valor and selfless devotion to duty saved the lives of three of his fellow Marines and upheld the highest traditions of the Marine Corps and the United States Naval Service. He gallantly gave his life for his country.
/S/RICHARD M. NIXON
The Wall
Daniel Dean Bruce has his name inscribed on the Vietnam Veterans Memorial on panel 31W, line 099.
See also
List of Medal of Honor recipients
List of Medal of Honor recipients for the Vietnam War
References
Inline
General
1950 births
1969 deaths
United States Marine Corps Medal of Honor recipients
United States Marines
People from Michigan City, Indiana
People from Indiana in the Vietnam War
Vietnam War recipients of the Medal of Honor
United States Marine Corps personnel killed in the Vietnam War
|
```go
/*
path_to_url
Unless required by applicable law or agreed to in writing, software
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*/
package stubs
import "k8s.io/kops/dnsprovider/pkg/dnsprovider/providers/google/clouddns/internal/interfaces"
// Compile time check for interface adherence
var _ interfaces.ManagedZone = ManagedZone{}
type ManagedZone struct {
Service *ManagedZonesService
Name_ string
Id_ uint64
Rrsets []ResourceRecordSet
}
func (m ManagedZone) Name() string {
return m.Name_
}
func (m ManagedZone) Id() uint64 {
return m.Id_
}
func (m ManagedZone) DnsName() string {
return m.Name_ // Don't bother storing a separate DNS name
}
```
|
Alcohol dehydrogenase class-3 is an enzyme that in humans is encoded by the ADH5 gene.
This gene encodes glutathione-dependent formaldehyde dehydrogenase or the class III alcohol dehydrogenase chi subunit, which is a member of the alcohol dehydrogenase family. Members of this family metabolize a wide variety of substrates, including ethanol, retinol, other aliphatic alcohols, hydroxysteroids, and lipid peroxidation products. Class III alcohol dehydrogenase is a homodimer composed of 2 chi subunits. It has virtually no activity for ethanol oxidation, but exhibits high activity for oxidation of long-chain primary alcohols and for oxidation of S-hydroxymethyl-glutathione, a spontaneous adduct between formaldehyde and glutathione.
This enzyme is an important component of cellular metabolism for the elimination of formaldehyde, a potent irritant and sensitizing agent that causes lacrimation, rhinitis, pharyngitis, and contact dermatitis.
Clinical significance
Mutations of the ADH5 gene cause AMED syndrome, an autosomal recessive digenic multisystem disorder characterized by global developmental delay with impaired intellectual development. The syndrome was first described in 2020.
References
Further reading
External links
|
La Rochegiron is a commune in the Alpes-de-Haute-Provence department in southeastern France.
Population
See also
Communes of the Alpes-de-Haute-Provence department
References
Communes of Alpes-de-Haute-Provence
Alpes-de-Haute-Provence communes articles needing translation from French Wikipedia
|
```python
"""
Creates a modern site
path_to_url#create-a-modern-site
"""
from office365.sharepoint.client_context import ClientContext
from tests import (
create_unique_name,
test_admin_credentials,
test_team_site_url,
test_user_principal_name_alt,
)
client = ClientContext(test_team_site_url).with_credentials(test_admin_credentials)
owner = client.web.site_users.get_by_email(test_user_principal_name_alt)
site_alias = create_unique_name("commsite")
print("Creating a modern site: {0} ...".format(site_alias))
site = client.create_modern_site("Comm Site", site_alias, owner).execute_query()
print("Site has been created at url: {0}".format(site.url))
print("Cleaning up resources...")
site.delete_object().execute_query()
```
|
Hobbieville is an unincorporated community in Center Township, Greene County, Indiana.
History
Hobbieville was originally called Jonesboro, and under the latter name was founded in 1837. A post office was established as Hobbieville in 1840, and it remained in operation until it was discontinued in 1935.
Geography
Hobbieville is located at .
References
Unincorporated communities in Greene County, Indiana
Unincorporated communities in Indiana
Bloomington metropolitan area, Indiana
|
Tigard Transit Center, formally Thomas M. Brian Tigard Transit Center, is a transport hub in Tigard, Oregon, United States, that is owned and operated by TriMet. It is a transfer facility for bus routes mainly serving the westside communities of the Portland metropolitan area and the third southbound station from Beaverton Transit Center on WES Commuter Rail. The transit center is located in downtown Tigard just south of Oregon Route 99W (OR 99W) on Commercial Street. It recorded 1,627 average weekday boardings in fall 2019. The facility opened in 1988 as a bus transit center, and a platform for WES was added in 2009.
History
Tigard Transit Center was designed by Skidmore, Owings & Merrill and opened for buses in 1988, served by about 200 bus trips per day. The design received a commendation from the local chapter of the American Institute of Architects in 1988. The site already had a Greyhound bus station (located in an adjacent storefront), which remained there after the transit center's opening but moved to a location on Main Street in the 1990s.
Plans for a rail connection started as early as 1991, when a proposal for a light rail line with the transit center as its southern terminus was studied. As of 2009, that line had not been built, but it was still planned, with studies to begin as early as 2013.
Plans for the commuter rail service between Beaverton and Wilsonville began as early as 1996. In 2001, the Federal Transit Administration authorized the project, and in 2004 it gave final approval. Construction began in October 2006. The line is the first suburb-to-suburb commuter rail line in the United States, and the first commuter rail line in Oregon.
Groundbreaking for the rail station at the center was in December 2006, and was led by Oregon senators Gordon Smith and Ron Wyden. The public artwork at the station was installed on September 3, 2008. The line was opened on February 2, 2009. In 2009, TriMet announced they would add additional bike lockers at the transit center using federal stimulus funds. In May 2011, the transit center was dedicated as the Thomas M. Brian Tigard Transit Center in honor of former Tigard mayor and county commissioner Thomas M. Brian, who had helped make the WES rail line a reality.
Station details
The WES station is one of five on the rail line that utilizes Portland and Western Railroad's freight rail line. Located in downtown Tigard on Commercial Street south of Oregon Route 99W, the station and line are only in operation during the morning and evening commute times from Monday through Friday. The station has 100 parking spaces at its park-and-ride lot and is served by seven bus lines. The city allocated $100,000 for refurbishing the existing TriMet-operated bus transit center at the site, which opened in 1988. At the northern terminus, the Beaverton Transit Center, passengers can connect to MAX Light Rail.
Public art at the station consists of an interactive sculpture created by Frank Boyden and Brad Rude. The sculpture features bronze heads and a vehicle designed to represent the train and the variety of people who ride the line. The vehicle moves along a track and has an animal figure displayed in a scene atop the piece. Additionally, the station has a mural along one of the walls.
Services
As of October 2023, Tigard Transit Center is served by the following bus lines:
12 – Barbur/Sandy Blvd
43 – Taylors Ferry/Marquam Hill
45 – Garden Home
76 – Hall/Greenburg
78 – Denney/Kerr Parkway
94 – Pacific Hwy/Sherwood
Yamhill County Transit (YCTA) service to McMinnville (routes 44 and 44X on weekdays; route 44 on Saturdays)
See also
List of TriMet transit centers
References
External links
Tigard Transit Center – TriMet page
Tigard, Oregon
Railway stations in Oregon
WES Commuter Rail
Railway stations in the United States opened in 2009
Railway stations in Washington County, Oregon
TriMet transit centers
1988 establishments in Oregon
|
```typescript
import { ICustomerService } from './ICustomerService';
import { ICustomer } from '../../model/ICustomer';
export default class CustomerServiceMock implements ICustomerService {
// US customers from Northwind database
private mockItems: ICustomer[] =
[
{
"CustomerID": "GREAL",
"CompanyName": "Great Lakes Food Market",
"ContactName": "Howard Snyder",
"ContactTitle": "Marketing Manager",
"Address": "2732 Baker Blvd.",
"City": "Eugene",
"Region": "OR",
"PostalCode": "97403",
"Country": "USA",
"Phone": "(503) 555-7555",
"Fax": ""
},
{
"CustomerID": "HUNGC",
"CompanyName": "Hungry Coyote Import Store",
"ContactName": "Yoshi Latimer",
"ContactTitle": "Sales Representative",
"Address": "City Center Plaza 516 Main St.",
"City": "Elgin",
"Region": "OR",
"PostalCode": "97827",
"Country": "USA",
"Phone": "(503) 555-6874",
"Fax": "(503) 555-2376"
},
{
"CustomerID": "LAZYK",
"CompanyName": "Lazy K Kountry Store",
"ContactName": "John Steel",
"ContactTitle": "Marketing Manager",
"Address": "12 Orchestra Terrace",
"City": "Walla Walla",
"Region": "WA",
"PostalCode": "99362",
"Country": "USA",
"Phone": "(509) 555-7969",
"Fax": "(509) 555-6221"
},
{
"CustomerID": "LETSS",
"CompanyName": "Let's Stop N Shop",
"ContactName": "Jaime Yorres",
"ContactTitle": "Owner",
"Address": "87 Polk St. Suite 5",
"City": "San Francisco",
"Region": "CA",
"PostalCode": "94117",
"Country": "USA",
"Phone": "(415) 555-5938",
"Fax": ""
},
{
"CustomerID": "LONEP",
"CompanyName": "Lonesome Pine Restaurant",
"ContactName": "Fran Wilson",
"ContactTitle": "Sales Manager",
"Address": "89 Chiaroscuro Rd.",
"City": "Portland",
"Region": "OR",
"PostalCode": "97219",
"Country": "USA",
"Phone": "(503) 555-9573",
"Fax": "(503) 555-9646"
},
{
"CustomerID": "OLDWO",
"CompanyName": "Old World Delicatessen",
"ContactName": "Rene Phillips",
"ContactTitle": "Sales Representative",
"Address": "2743 Bering St.",
"City": "Anchorage",
"Region": "AK",
"PostalCode": "99508",
"Country": "USA",
"Phone": "(907) 555-7584",
"Fax": "(907) 555-2880"
},
{
"CustomerID": "RATTC",
"CompanyName": "Rattlesnake Canyon Grocery",
"ContactName": "Paula Wilson",
"ContactTitle": "Assistant Sales Representative",
"Address": "2817 Milton Dr.",
"City": "Albuquerque",
"Region": "NM",
"PostalCode": "87110",
"Country": "USA",
"Phone": "(505) 555-5939",
"Fax": "(505) 555-3620"
},
{
"CustomerID": "SAVEA",
"CompanyName": "Save-a-lot Markets",
"ContactName": "Jose Pavarotti",
"ContactTitle": "Sales Representative",
"Address": "187 Suffolk Ln.",
"City": "Boise",
"Region": "ID",
"PostalCode": "83720",
"Country": "USA",
"Phone": "(208) 555-8097",
"Fax": ""
},
{
"CustomerID": "SPLIR",
"CompanyName": "Split Rail Beer & Ale",
"ContactName": "Art Braunschweiger",
"ContactTitle": "Sales Manager",
"Address": "P.O. Box 555",
"City": "Lander",
"Region": "WY",
"PostalCode": "82520",
"Country": "USA",
"Phone": "(307) 555-4680",
"Fax": "(307) 555-6525"
},
{
"CustomerID": "THEBI",
"CompanyName": "The Big Cheese",
"ContactName": "Liz Nixon",
"ContactTitle": "Marketing Manager",
"Address": "89 Jefferson Way Suite 2",
"City": "Portland",
"Region": "OR",
"PostalCode": "97201",
"Country": "USA",
"Phone": "(503) 555-3612",
"Fax": ""
},
{
"CustomerID": "THECR",
"CompanyName": "The Cracker Box",
"ContactName": "Liu Wong",
"ContactTitle": "Marketing Assistant",
"Address": "55 Grizzly Peak Rd.",
"City": "Butte",
"Region": "MT",
"PostalCode": "59801",
"Country": "USA",
"Phone": "(406) 555-5834",
"Fax": "(406) 555-8083"
},
{
"CustomerID": "TRAIH",
"CompanyName": "Trail's Head Gourmet Provisioners",
"ContactName": "Helvetius Nagy",
"ContactTitle": "Sales Associate",
"Address": "722 DaVinci Blvd.",
"City": "Kirkland",
"Region": "WA",
"PostalCode": "98034",
"Country": "USA",
"Phone": "(206) 555-8257",
"Fax": "(206) 555-2174"
},
{
"CustomerID": "WHITC",
"CompanyName": "White Clover Markets",
"ContactName": "Karl Jablonski",
"ContactTitle": "Owner",
"Address": "305 - 14th Ave. S. Suite 3B",
"City": "Seattle",
"Region": "WA",
"PostalCode": "98128",
"Country": "USA",
"Phone": "(206) 555-4112",
"Fax": "(206) 555-4115"
}
];
public getCustomer(customerID: string):Promise<ICustomer> {
var result: ICustomer;
result = this.mockItems.filter(c => c.CustomerID == customerID)[0];
return new Promise<ICustomer>((resolve) => {
resolve(result);
});
}
}
```
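`getCustomer()` resolves with the first mock item whose `CustomerID` matches, or `undefined` when there is no match. The filter-then-resolve shape can be reduced to a self-contained sketch, with the interface and data pared down for illustration:

```typescript
interface ICustomer {
  CustomerID: string;
  CompanyName: string;
}

const mockItems: ICustomer[] = [
  { CustomerID: "GREAL", CompanyName: "Great Lakes Food Market" },
  { CustomerID: "LETSS", CompanyName: "Let's Stop N Shop" },
];

// Same shape as CustomerServiceMock.getCustomer: filter, take the first
// match (undefined when absent), wrap in an already-resolved Promise.
function getCustomer(customerID: string): Promise<ICustomer | undefined> {
  return Promise.resolve(mockItems.filter(c => c.CustomerID === customerID)[0]);
}

getCustomer("GREAL").then(c => console.log(c?.CompanyName)); // → "Great Lakes Food Market"
```

Because the data is in memory, the Promise is purely cosmetic here; it keeps the mock's signature compatible with an async service implementation.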
|
```go
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package transform
import (
"bytes"
"errors"
"fmt"
"io/ioutil"
"strconv"
"strings"
"testing"
"time"
"unicode/utf8"
)
type lowerCaseASCII struct{ NopResetter }
func (lowerCaseASCII) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
n := len(src)
if n > len(dst) {
n, err = len(dst), ErrShortDst
}
for i, c := range src[:n] {
if 'A' <= c && c <= 'Z' {
c += 'a' - 'A'
}
dst[i] = c
}
return n, n, err
}
var errYouMentionedX = errors.New("you mentioned X")
type dontMentionX struct{ NopResetter }
func (dontMentionX) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
n := len(src)
if n > len(dst) {
n, err = len(dst), ErrShortDst
}
for i, c := range src[:n] {
if c == 'X' {
return i, i, errYouMentionedX
}
dst[i] = c
}
return n, n, err
}
// doublerAtEOF is a strange Transformer that transforms "this" to "tthhiiss",
// but only if atEOF is true.
type doublerAtEOF struct{ NopResetter }
func (doublerAtEOF) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
if !atEOF {
return 0, 0, ErrShortSrc
}
for i, c := range src {
if 2*i+2 >= len(dst) {
return 2 * i, i, ErrShortDst
}
dst[2*i+0] = c
dst[2*i+1] = c
}
return 2 * len(src), len(src), nil
}
// rleDecode and rleEncode implement a toy run-length encoding: "aabbbbbbbbbb"
// is encoded as "2a10b". The decoding is assumed to not contain any numbers.
type rleDecode struct{ NopResetter }
func (rleDecode) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
loop:
for len(src) > 0 {
n := 0
for i, c := range src {
if '0' <= c && c <= '9' {
n = 10*n + int(c-'0')
continue
}
if i == 0 {
return nDst, nSrc, errors.New("rleDecode: bad input")
}
if n > len(dst) {
return nDst, nSrc, ErrShortDst
}
for j := 0; j < n; j++ {
dst[j] = c
}
dst, src = dst[n:], src[i+1:]
nDst, nSrc = nDst+n, nSrc+i+1
continue loop
}
if atEOF {
return nDst, nSrc, errors.New("rleDecode: bad input")
}
return nDst, nSrc, ErrShortSrc
}
return nDst, nSrc, nil
}
type rleEncode struct {
NopResetter
// allowStutter means that "xxxxxxxx" can be encoded as "5x3x"
// instead of always as "8x".
allowStutter bool
}
func (e rleEncode) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
for len(src) > 0 {
n, c0 := len(src), src[0]
for i, c := range src[1:] {
if c != c0 {
n = i + 1
break
}
}
if n == len(src) && !atEOF && !e.allowStutter {
return nDst, nSrc, ErrShortSrc
}
s := strconv.Itoa(n)
if len(s) >= len(dst) {
return nDst, nSrc, ErrShortDst
}
copy(dst, s)
dst[len(s)] = c0
dst, src = dst[len(s)+1:], src[n:]
nDst, nSrc = nDst+len(s)+1, nSrc+n
}
return nDst, nSrc, nil
}
// trickler consumes all input bytes, but writes a single byte at a time to dst.
type trickler []byte
func (t *trickler) Reset() {
*t = nil
}
func (t *trickler) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
*t = append(*t, src...)
if len(*t) == 0 {
return 0, 0, nil
}
if len(dst) == 0 {
return 0, len(src), ErrShortDst
}
dst[0] = (*t)[0]
*t = (*t)[1:]
if len(*t) > 0 {
err = ErrShortDst
}
return 1, len(src), err
}
// delayedTrickler is like trickler, but delays writing output to dst. This is
// highly unlikely to be relevant in practice, but it seems like a good idea
// to have some tolerance as long as progress can be detected.
type delayedTrickler []byte
func (t *delayedTrickler) Reset() {
*t = nil
}
func (t *delayedTrickler) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
if len(*t) > 0 && len(dst) > 0 {
dst[0] = (*t)[0]
*t = (*t)[1:]
nDst = 1
}
*t = append(*t, src...)
if len(*t) > 0 {
err = ErrShortDst
}
return nDst, len(src), err
}
type testCase struct {
desc string
t Transformer
src string
dstSize int
srcSize int
ioSize int
wantStr string
wantErr error
wantIter int // number of iterations taken; 0 means we don't care.
}
func (t testCase) String() string {
return tstr(t.t) + "; " + t.desc
}
func tstr(t Transformer) string {
if stringer, ok := t.(fmt.Stringer); ok {
return stringer.String()
}
s := fmt.Sprintf("%T", t)
return s[1+strings.Index(s, "."):]
}
func (c chain) String() string {
buf := &bytes.Buffer{}
buf.WriteString("Chain(")
for i, l := range c.link[:len(c.link)-1] {
if i != 0 {
fmt.Fprint(buf, ", ")
}
buf.WriteString(tstr(l.t))
}
buf.WriteString(")")
return buf.String()
}
var testCases = []testCase{
{
desc: "empty",
t: lowerCaseASCII{},
src: "",
dstSize: 100,
srcSize: 100,
wantStr: "",
},
{
desc: "basic",
t: lowerCaseASCII{},
src: "Hello WORLD.",
dstSize: 100,
srcSize: 100,
wantStr: "hello world.",
},
{
desc: "small dst",
t: lowerCaseASCII{},
src: "Hello WORLD.",
dstSize: 3,
srcSize: 100,
wantStr: "hello world.",
},
{
desc: "small src",
t: lowerCaseASCII{},
src: "Hello WORLD.",
dstSize: 100,
srcSize: 4,
wantStr: "hello world.",
},
{
desc: "small buffers",
t: lowerCaseASCII{},
src: "Hello WORLD.",
dstSize: 3,
srcSize: 4,
wantStr: "hello world.",
},
{
desc: "very small buffers",
t: lowerCaseASCII{},
src: "Hello WORLD.",
dstSize: 1,
srcSize: 1,
wantStr: "hello world.",
},
{
desc: "basic",
t: dontMentionX{},
src: "The First Rule of Transform Club: don't mention Mister X, ever.",
dstSize: 100,
srcSize: 100,
wantStr: "The First Rule of Transform Club: don't mention Mister ",
wantErr: errYouMentionedX,
},
{
desc: "small buffers",
t: dontMentionX{},
src: "The First Rule of Transform Club: don't mention Mister X, ever.",
dstSize: 10,
srcSize: 10,
wantStr: "The First Rule of Transform Club: don't mention Mister ",
wantErr: errYouMentionedX,
},
{
desc: "very small buffers",
t: dontMentionX{},
src: "The First Rule of Transform Club: don't mention Mister X, ever.",
dstSize: 1,
srcSize: 1,
wantStr: "The First Rule of Transform Club: don't mention Mister ",
wantErr: errYouMentionedX,
},
{
desc: "only transform at EOF",
t: doublerAtEOF{},
src: "this",
dstSize: 100,
srcSize: 100,
wantStr: "tthhiiss",
},
{
desc: "basic",
t: rleDecode{},
src: "1a2b3c10d11e0f1g",
dstSize: 100,
srcSize: 100,
wantStr: "abbcccddddddddddeeeeeeeeeeeg",
},
{
desc: "long",
t: rleDecode{},
src: "12a23b34c45d56e99z",
dstSize: 100,
srcSize: 100,
wantStr: strings.Repeat("a", 12) +
strings.Repeat("b", 23) +
strings.Repeat("c", 34) +
strings.Repeat("d", 45) +
strings.Repeat("e", 56) +
strings.Repeat("z", 99),
},
{
desc: "tight buffers",
t: rleDecode{},
src: "1a2b3c10d11e0f1g",
dstSize: 11,
srcSize: 3,
wantStr: "abbcccddddddddddeeeeeeeeeeeg",
},
{
desc: "short dst",
t: rleDecode{},
src: "1a2b3c10d11e0f1g",
dstSize: 10,
srcSize: 3,
wantStr: "abbcccdddddddddd",
wantErr: ErrShortDst,
},
{
desc: "short src",
t: rleDecode{},
src: "1a2b3c10d11e0f1g",
dstSize: 11,
srcSize: 2,
ioSize: 2,
wantStr: "abbccc",
wantErr: ErrShortSrc,
},
{
desc: "basic",
t: rleEncode{},
src: "abbcccddddddddddeeeeeeeeeeeg",
dstSize: 100,
srcSize: 100,
wantStr: "1a2b3c10d11e1g",
},
{
desc: "long",
t: rleEncode{},
src: strings.Repeat("a", 12) +
strings.Repeat("b", 23) +
strings.Repeat("c", 34) +
strings.Repeat("d", 45) +
strings.Repeat("e", 56) +
strings.Repeat("z", 99),
dstSize: 100,
srcSize: 100,
wantStr: "12a23b34c45d56e99z",
},
{
desc: "tight buffers",
t: rleEncode{},
src: "abbcccddddddddddeeeeeeeeeeeg",
dstSize: 3,
srcSize: 12,
wantStr: "1a2b3c10d11e1g",
},
{
desc: "short dst",
t: rleEncode{},
src: "abbcccddddddddddeeeeeeeeeeeg",
dstSize: 2,
srcSize: 12,
wantStr: "1a2b3c",
wantErr: ErrShortDst,
},
{
desc: "short src",
t: rleEncode{},
src: "abbcccddddddddddeeeeeeeeeeeg",
dstSize: 3,
srcSize: 11,
ioSize: 11,
wantStr: "1a2b3c10d",
wantErr: ErrShortSrc,
},
{
desc: "allowStutter = false",
t: rleEncode{allowStutter: false},
src: "aaaabbbbbbbbccccddddd",
dstSize: 10,
srcSize: 10,
wantStr: "4a8b4c5d",
},
{
desc: "allowStutter = true",
t: rleEncode{allowStutter: true},
src: "aaaabbbbbbbbccccddddd",
dstSize: 10,
srcSize: 10,
ioSize: 10,
wantStr: "4a6b2b4c4d1d",
},
{
desc: "trickler",
t: &trickler{},
src: "abcdefghijklm",
dstSize: 3,
srcSize: 15,
wantStr: "abcdefghijklm",
},
{
desc: "delayedTrickler",
t: &delayedTrickler{},
src: "abcdefghijklm",
dstSize: 3,
srcSize: 15,
wantStr: "abcdefghijklm",
},
}
func TestReader(t *testing.T) {
for _, tc := range testCases {
r := NewReader(strings.NewReader(tc.src), tc.t)
// Differently sized dst and src buffers are not part of the
// exported API. We override them manually.
r.dst = make([]byte, tc.dstSize)
r.src = make([]byte, tc.srcSize)
got, err := ioutil.ReadAll(r)
str := string(got)
if str != tc.wantStr || err != tc.wantErr {
t.Errorf("%s:\ngot %q, %v\nwant %q, %v", tc, str, err, tc.wantStr, tc.wantErr)
}
}
}
func TestWriter(t *testing.T) {
tests := append(testCases, chainTests()...)
for _, tc := range tests {
sizes := []int{1, 2, 3, 4, 5, 10, 100, 1000}
if tc.ioSize > 0 {
sizes = []int{tc.ioSize}
}
for _, sz := range sizes {
bb := &bytes.Buffer{}
w := NewWriter(bb, tc.t)
// Differently sized dst and src buffers are not part of the
// exported API. We override them manually.
w.dst = make([]byte, tc.dstSize)
w.src = make([]byte, tc.srcSize)
src := make([]byte, sz)
var err error
for b := tc.src; len(b) > 0 && err == nil; {
n := copy(src, b)
b = b[n:]
m := 0
m, err = w.Write(src[:n])
if m != n && err == nil {
t.Errorf("%s:%d: did not consume all bytes %d < %d", tc, sz, m, n)
}
}
if err == nil {
err = w.Close()
}
str := bb.String()
if str != tc.wantStr || err != tc.wantErr {
t.Errorf("%s:%d:\ngot %q, %v\nwant %q, %v", tc, sz, str, err, tc.wantStr, tc.wantErr)
}
}
}
}
func TestNop(t *testing.T) {
testCases := []struct {
str string
dstSize int
err error
}{
{"", 0, nil},
{"", 10, nil},
{"a", 0, ErrShortDst},
{"a", 1, nil},
{"a", 10, nil},
}
for i, tc := range testCases {
dst := make([]byte, tc.dstSize)
nDst, nSrc, err := Nop.Transform(dst, []byte(tc.str), true)
want := tc.str
if tc.dstSize < len(want) {
want = want[:tc.dstSize]
}
if got := string(dst[:nDst]); got != want || err != tc.err || nSrc != nDst {
t.Errorf("%d:\ngot %q, %d, %v\nwant %q, %d, %v", i, got, nSrc, err, want, nDst, tc.err)
}
}
}
func TestDiscard(t *testing.T) {
testCases := []struct {
str string
dstSize int
}{
{"", 0},
{"", 10},
{"a", 0},
{"ab", 10},
}
for i, tc := range testCases {
nDst, nSrc, err := Discard.Transform(make([]byte, tc.dstSize), []byte(tc.str), true)
if nDst != 0 || nSrc != len(tc.str) || err != nil {
t.Errorf("%d:\ngot %q, %d, %v\nwant 0, %d, nil", i, nDst, nSrc, err, len(tc.str))
}
}
}
// mkChain creates a Chain transformer. x must be alternating between transformer
// and bufSize, like T, (sz, T)*
func mkChain(x ...interface{}) *chain {
t := []Transformer{}
for i := 0; i < len(x); i += 2 {
t = append(t, x[i].(Transformer))
}
c := Chain(t...).(*chain)
for i, j := 1, 1; i < len(x); i, j = i+2, j+1 {
c.link[j].b = make([]byte, x[i].(int))
}
return c
}
func chainTests() []testCase {
return []testCase{
{
desc: "nil error",
t: mkChain(rleEncode{}, 100, lowerCaseASCII{}),
src: "ABB",
dstSize: 100,
srcSize: 100,
wantStr: "1a2b",
wantErr: nil,
wantIter: 1,
},
{
desc: "short dst buffer",
t: mkChain(lowerCaseASCII{}, 3, rleDecode{}),
src: "1a2b3c10d11e0f1g",
dstSize: 10,
srcSize: 3,
wantStr: "abbcccdddddddddd",
wantErr: ErrShortDst,
},
{
desc: "short internal dst buffer",
t: mkChain(lowerCaseASCII{}, 3, rleDecode{}, 10, Nop),
src: "1a2b3c10d11e0f1g",
dstSize: 100,
srcSize: 3,
wantStr: "abbcccdddddddddd",
wantErr: errShortInternal,
},
{
desc: "short internal dst buffer from input",
t: mkChain(rleDecode{}, 10, Nop),
src: "1a2b3c10d11e0f1g",
dstSize: 100,
srcSize: 3,
wantStr: "abbcccdddddddddd",
wantErr: errShortInternal,
},
{
desc: "empty short internal dst buffer",
t: mkChain(lowerCaseASCII{}, 3, rleDecode{}, 10, Nop),
src: "4a7b11e0f1g",
dstSize: 100,
srcSize: 3,
wantStr: "aaaabbbbbbb",
wantErr: errShortInternal,
},
{
desc: "empty short internal dst buffer from input",
t: mkChain(rleDecode{}, 10, Nop),
src: "4a7b11e0f1g",
dstSize: 100,
srcSize: 3,
wantStr: "aaaabbbbbbb",
wantErr: errShortInternal,
},
{
desc: "short internal src buffer after full dst buffer",
t: mkChain(Nop, 5, rleEncode{}, 10, Nop),
src: "cccccddddd",
dstSize: 100,
srcSize: 100,
wantStr: "",
wantErr: errShortInternal,
wantIter: 1,
},
{
desc: "short internal src buffer after short dst buffer; test lastFull",
t: mkChain(rleDecode{}, 5, rleEncode{}, 4, Nop),
src: "2a1b4c6d",
dstSize: 100,
srcSize: 100,
wantStr: "2a1b",
wantErr: errShortInternal,
},
{
desc: "short internal src buffer after successful complete fill",
t: mkChain(Nop, 3, rleDecode{}),
src: "123a4b",
dstSize: 4,
srcSize: 3,
wantStr: "",
wantErr: errShortInternal,
wantIter: 1,
},
{
desc: "short internal src buffer after short dst buffer; test lastFull",
t: mkChain(rleDecode{}, 5, rleEncode{}),
src: "2a1b4c6d",
dstSize: 4,
srcSize: 100,
wantStr: "2a1b",
wantErr: errShortInternal,
},
{
desc: "short src buffer",
t: mkChain(rleEncode{}, 5, Nop),
src: "abbcccddddeeeee",
dstSize: 4,
srcSize: 4,
ioSize: 4,
wantStr: "1a2b3c",
wantErr: ErrShortSrc,
},
{
desc: "process all in one go",
t: mkChain(rleEncode{}, 5, Nop),
src: "abbcccddddeeeeeffffff",
dstSize: 100,
srcSize: 100,
wantStr: "1a2b3c4d5e6f",
wantErr: nil,
wantIter: 1,
},
{
desc: "complete processing downstream after error",
t: mkChain(dontMentionX{}, 2, rleDecode{}, 5, Nop),
src: "3a4b5eX",
dstSize: 100,
srcSize: 100,
ioSize: 100,
wantStr: "aaabbbbeeeee",
wantErr: errYouMentionedX,
},
{
desc: "return downstream fatal errors first (followed by short dst)",
t: mkChain(dontMentionX{}, 8, rleDecode{}, 4, Nop),
src: "3a4b5eX",
dstSize: 100,
srcSize: 100,
ioSize: 100,
wantStr: "aaabbbb",
wantErr: errShortInternal,
},
{
desc: "return downstream fatal errors first (followed by short src)",
t: mkChain(dontMentionX{}, 5, Nop, 1, rleDecode{}),
src: "1a5bX",
dstSize: 100,
srcSize: 100,
ioSize: 100,
wantStr: "",
wantErr: errShortInternal,
},
{
desc: "short internal",
t: mkChain(Nop, 11, rleEncode{}, 3, Nop),
src: "abbcccddddddddddeeeeeeeeeeeg",
dstSize: 3,
srcSize: 100,
wantStr: "1a2b3c10d",
wantErr: errShortInternal,
},
}
}
func doTransform(tc testCase) (res string, iter int, err error) {
tc.t.Reset()
dst := make([]byte, tc.dstSize)
out, in := make([]byte, 0, 2*len(tc.src)), []byte(tc.src)
for {
iter++
src, atEOF := in, true
if len(src) > tc.srcSize {
src, atEOF = src[:tc.srcSize], false
}
nDst, nSrc, err := tc.t.Transform(dst, src, atEOF)
out = append(out, dst[:nDst]...)
in = in[nSrc:]
switch {
case err == nil && len(in) != 0:
case err == ErrShortSrc && nSrc > 0:
case err == ErrShortDst && (nDst > 0 || nSrc > 0):
default:
return string(out), iter, err
}
}
}
func TestChain(t *testing.T) {
if c, ok := Chain().(nop); !ok {
t.Errorf("empty chain: %v; want Nop", c)
}
// Test Chain for a single Transformer.
for _, tc := range testCases {
tc.t = Chain(tc.t)
str, _, err := doTransform(tc)
if str != tc.wantStr || err != tc.wantErr {
t.Errorf("%s:\ngot %q, %v\nwant %q, %v", tc, str, err, tc.wantStr, tc.wantErr)
}
}
tests := chainTests()
sizes := []int{1, 2, 3, 4, 5, 7, 10, 100, 1000}
addTest := func(tc testCase, t *chain) {
if t.link[0].t != tc.t && tc.wantErr == ErrShortSrc {
tc.wantErr = errShortInternal
}
if t.link[len(t.link)-2].t != tc.t && tc.wantErr == ErrShortDst {
tc.wantErr = errShortInternal
}
tc.t = t
tests = append(tests, tc)
}
for _, tc := range testCases {
for _, sz := range sizes {
tt := tc
tt.dstSize = sz
addTest(tt, mkChain(tc.t, tc.dstSize, Nop))
addTest(tt, mkChain(tc.t, tc.dstSize, Nop, 2, Nop))
addTest(tt, mkChain(Nop, tc.srcSize, tc.t, tc.dstSize, Nop))
if sz >= tc.dstSize && (tc.wantErr != ErrShortDst || sz == tc.dstSize) {
addTest(tt, mkChain(Nop, tc.srcSize, tc.t))
addTest(tt, mkChain(Nop, 100, Nop, tc.srcSize, tc.t))
}
}
}
for _, tc := range testCases {
tt := tc
tt.dstSize = 1
tt.wantStr = ""
addTest(tt, mkChain(tc.t, tc.dstSize, Discard))
addTest(tt, mkChain(Nop, tc.srcSize, tc.t, tc.dstSize, Discard))
addTest(tt, mkChain(Nop, tc.srcSize, tc.t, tc.dstSize, Nop, tc.dstSize, Discard))
}
for _, tc := range testCases {
tt := tc
tt.dstSize = 100
tt.wantStr = strings.Replace(tc.src, "0f", "", -1)
// Chain encoders and decoders.
if _, ok := tc.t.(rleEncode); ok && tc.wantErr == nil {
addTest(tt, mkChain(tc.t, tc.dstSize, Nop, 1000, rleDecode{}))
addTest(tt, mkChain(tc.t, tc.dstSize, Nop, tc.dstSize, rleDecode{}))
addTest(tt, mkChain(Nop, tc.srcSize, tc.t, tc.dstSize, Nop, 100, rleDecode{}))
// decoding needs larger destinations
addTest(tt, mkChain(Nop, tc.srcSize, tc.t, tc.dstSize, rleDecode{}, 100, Nop))
addTest(tt, mkChain(Nop, tc.srcSize, tc.t, tc.dstSize, Nop, 100, rleDecode{}, 100, Nop))
} else if _, ok := tc.t.(rleDecode); ok && tc.wantErr == nil {
// The internal buffer size may need to be the sum of the maximum segment
// size of the two encoders!
addTest(tt, mkChain(tc.t, 2*tc.dstSize, rleEncode{}))
addTest(tt, mkChain(tc.t, tc.dstSize, Nop, 101, rleEncode{}))
addTest(tt, mkChain(Nop, tc.srcSize, tc.t, tc.dstSize, Nop, 100, rleEncode{}))
addTest(tt, mkChain(Nop, tc.srcSize, tc.t, tc.dstSize, Nop, 200, rleEncode{}, 100, Nop))
}
}
for _, tc := range tests {
str, iter, err := doTransform(tc)
mi := tc.wantIter != 0 && tc.wantIter != iter
if str != tc.wantStr || err != tc.wantErr || mi {
t.Errorf("%s:\ngot iter:%d, %q, %v\nwant iter:%d, %q, %v", tc, iter, str, err, tc.wantIter, tc.wantStr, tc.wantErr)
}
}
}
func TestRemoveFunc(t *testing.T) {
filter := RemoveFunc(func(r rune) bool {
return strings.IndexRune("ab\u0300\u1234,", r) != -1
})
tests := []testCase{
{
src: ",",
wantStr: "",
},
{
src: "c",
wantStr: "c",
},
{
src: "\u2345",
wantStr: "\u2345",
},
{
src: "tsch",
wantStr: "tsch",
},
{
src: ",,,",
wantStr: "",
},
{
src: "a\xbd\xb2=\xbc ",
wantStr: "\uFFFD\uFFFD=\uFFFD ",
},
{
// If we didn't replace illegal bytes with RuneError, the result
// would be \u0300 or the code would need to be more complex.
src: "\xcc\u0300\x80",
wantStr: "\uFFFD\uFFFD",
},
{
src: "\xcc\u0300\x80",
dstSize: 3,
wantStr: "\uFFFD\uFFFD",
wantIter: 2,
},
{
// Test a long buffer greater than the internal buffer size
src: "hello\xcc\xcc\xccworld",
srcSize: 13,
wantStr: "hello\uFFFD\uFFFD\uFFFDworld",
wantIter: 1,
},
{
src: "\u2345",
dstSize: 2,
wantStr: "",
wantErr: ErrShortDst,
},
{
src: "\xcc",
dstSize: 2,
wantStr: "",
wantErr: ErrShortDst,
},
{
src: "\u0300",
dstSize: 2,
srcSize: 1,
wantStr: "",
wantErr: ErrShortSrc,
},
{
t: RemoveFunc(func(r rune) bool {
return r == utf8.RuneError
}),
src: "\xcc\u0300\x80",
wantStr: "\u0300",
},
}
for _, tc := range tests {
tc.desc = tc.src
if tc.t == nil {
tc.t = filter
}
if tc.dstSize == 0 {
tc.dstSize = 100
}
if tc.srcSize == 0 {
tc.srcSize = 100
}
str, iter, err := doTransform(tc)
mi := tc.wantIter != 0 && tc.wantIter != iter
if str != tc.wantStr || err != tc.wantErr || mi {
t.Errorf("%+q:\ngot iter:%d, %+q, %v\nwant iter:%d, %+q, %v", tc.src, iter, str, err, tc.wantIter, tc.wantStr, tc.wantErr)
}
tc.src = str
idem, _, _ := doTransform(tc)
if str != idem {
t.Errorf("%+q: found %+q; want %+q", tc.src, idem, str)
}
}
}
func testString(t *testing.T, f func(Transformer, string) (string, int, error)) {
for _, tt := range append(testCases, chainTests()...) {
if tt.desc == "allowStutter = true" {
// We don't have control over the buffer size, so we eliminate tests
// that depend on a specific buffer size being set.
continue
}
if tt.wantErr == ErrShortDst || tt.wantErr == ErrShortSrc {
// The result string will be different.
continue
}
got, n, err := f(tt.t, tt.src)
if tt.wantErr != err {
t.Errorf("%s:error: got %v; want %v", tt.desc, err, tt.wantErr)
}
if got, want := err == nil, n == len(tt.src); got != want {
t.Errorf("%s:n: got %v; want %v", tt.desc, got, want)
}
if got != tt.wantStr {
t.Errorf("%s:string: got %q; want %q", tt.desc, got, tt.wantStr)
}
}
}
func TestBytes(t *testing.T) {
testString(t, func(z Transformer, s string) (string, int, error) {
b, n, err := Bytes(z, []byte(s))
return string(b), n, err
})
}
func TestAppend(t *testing.T) {
// Create a bunch of subtests for different buffer sizes.
testCases := [][]byte{
nil,
make([]byte, 0, 0),
make([]byte, 0, 1),
make([]byte, 1, 1),
make([]byte, 1, 5),
make([]byte, 100, 100),
make([]byte, 100, 200),
}
for _, tc := range testCases {
testString(t, func(z Transformer, s string) (string, int, error) {
b, n, err := Append(z, tc, []byte(s))
return string(b[len(tc):]), n, err
})
}
}
func TestString(t *testing.T) {
testString(t, String)
// Overrun the internal destination buffer.
for i, s := range []string{
strings.Repeat("a", initialBufSize-1),
strings.Repeat("a", initialBufSize+0),
strings.Repeat("a", initialBufSize+1),
strings.Repeat("A", initialBufSize-1),
strings.Repeat("A", initialBufSize+0),
strings.Repeat("A", initialBufSize+1),
strings.Repeat("A", 2*initialBufSize-1),
strings.Repeat("A", 2*initialBufSize+0),
strings.Repeat("A", 2*initialBufSize+1),
strings.Repeat("a", initialBufSize-2) + "A",
strings.Repeat("a", initialBufSize-1) + "A",
strings.Repeat("a", initialBufSize+0) + "A",
strings.Repeat("a", initialBufSize+1) + "A",
} {
got, _, _ := String(lowerCaseASCII{}, s)
if want := strings.ToLower(s); got != want {
t.Errorf("%d:dst buffer test: got %s (%d); want %s (%d)", i, got, len(got), want, len(want))
}
}
// Overrun the internal source buffer.
for i, s := range []string{
strings.Repeat("a", initialBufSize-1),
strings.Repeat("a", initialBufSize+0),
strings.Repeat("a", initialBufSize+1),
strings.Repeat("a", 2*initialBufSize+1),
strings.Repeat("a", 2*initialBufSize+0),
strings.Repeat("a", 2*initialBufSize+1),
} {
got, _, _ := String(rleEncode{}, s)
if want := fmt.Sprintf("%da", len(s)); got != want {
t.Errorf("%d:src buffer test: got %s (%d); want %s (%d)", i, got, len(got), want, len(want))
}
}
// Test allocations for non-changing strings.
// Note we still need to allocate a single buffer.
for i, s := range []string{
"",
"123",
"123456789",
strings.Repeat("a", initialBufSize),
strings.Repeat("a", 10*initialBufSize),
} {
if n := testing.AllocsPerRun(5, func() { String(&lowerCaseASCII{}, s) }); n > 1 {
t.Errorf("%d: #allocs was %f; want 1", i, n)
}
}
}
// TestBytesAllocation tests that buffer growth stays limited with the trickler
// transformer, which behaves oddly but within spec. In case buffer growth is
// not correctly handled, the test will either panic with a failed allocation or
// thrash. To ensure the tests terminate under the last condition, we time out
// after some sufficiently long period of time.
func TestBytesAllocation(t *testing.T) {
done := make(chan bool)
go func() {
in := bytes.Repeat([]byte{'a'}, 1000)
tr := trickler(make([]byte, 1))
Bytes(&tr, in)
done <- true
}()
select {
case <-done:
case <-time.After(3 * time.Second):
t.Error("time out, likely due to excessive allocation")
}
}
// TestStringAllocation tests that buffer growth stays limited with the trickler
// transformer, which behaves oddly but within spec. In case buffer growth is
// not correctly handled, the test will either panic with a failed allocation or
// thrash. To ensure the tests terminate under the last condition, we time out
// after some sufficiently long period of time.
func TestStringAllocation(t *testing.T) {
done := make(chan bool)
go func() {
in := strings.Repeat("a", 1000)
tr := trickler(make([]byte, 1))
String(&tr, in)
done <- true
}()
select {
case <-done:
case <-time.After(3 * time.Second):
t.Error("time out, likely due to excessive allocation")
}
}
func BenchmarkStringLower(b *testing.B) {
in := strings.Repeat("a", 4096)
for i := 0; i < b.N; i++ {
String(&lowerCaseASCII{}, in)
}
}
```
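The tests above all exercise a single contract: a `Transform(dst, src, atEOF)` call may make only partial progress, signalling `ErrShortDst` or `ErrShortSrc`, and the driver loops while progress is still being made (this is exactly what `doTransform` does). The following standalone sketch shows that chunked-processing loop in miniature; it is independent of the x/text package, and all names in it are illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

var errShortDst = errors.New("short destination buffer")

// lowerASCII mimics the lowerCaseASCII test transformer: it lowercases
// ASCII letters, copying at most len(dst) bytes per call and reporting
// errShortDst when src does not fit.
func lowerASCII(dst, src []byte) (nDst, nSrc int, err error) {
	n := len(src)
	if n > len(dst) {
		n, err = len(dst), errShortDst
	}
	for i, c := range src[:n] {
		if 'A' <= c && c <= 'Z' {
			c += 'a' - 'A'
		}
		dst[i] = c
	}
	return n, n, err
}

// drive mirrors the shape of doTransform: it feeds the transformer
// through a deliberately tiny dst buffer and loops while partial
// progress is being made.
func drive(src []byte, dstSize int) string {
	dst := make([]byte, dstSize)
	var out []byte
	for len(src) > 0 {
		nDst, nSrc, err := lowerASCII(dst, src)
		out = append(out, dst[:nDst]...)
		src = src[nSrc:]
		if err != nil && err != errShortDst {
			break // a fatal error; short-buffer errors just mean "call again"
		}
		if nDst == 0 && nSrc == 0 {
			break // no progress; avoid an infinite loop
		}
	}
	return string(out)
}

func main() {
	fmt.Println(drive([]byte("Hello WORLD."), 3)) // prints: hello world.
}
```

The "small dst" and "very small buffers" test cases above are doing the same thing: the transformer is correct only if repeated calls with tiny buffers still reassemble the full output.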
|
Koda Kumi Driving Hit's 2 is the third remix album released by Japanese singer-songwriter Kumi Koda. It was released a year after Koda Kumi Driving Hit's, on March 31, 2010. It charted higher than its predecessor, reaching #5 on Oricon and staying on the charts for twelve weeks.
Track listing
(Source)
"Lick me♥" [Prog5 BIG BASS Remix]
"Driving" [GROOVE HACKER$ Remix]
"Ecstasy [Caramel Pod Remix]"
44th single 3 SPLASH
"Cutie Honey" [MITOMI TOKOTO Remix]
"Rain" [PLUG in LANGUAGE Remix]
"Shake It Up" [HOUSE NATION Sunset In Ibiza Remix]
"No Regret [FUTURE HOUSE UNITED Remix]
"Last Angel feat. Tohoshinki" [neroDoll Remix]
"UNIVERSE" [Pink Chameleons Remix]
"you" [Floor on the Intelligence Remix]
"1000 no Kotoba" [Shohei Matsumoto & Junichi Matsuda Remix]
"hands" [The Standard Club PIANO DANCE Remix]
"Taisetsu na kimi e" [Ryuzo Remix]
"stay with me" [Tomoharu Moriya Remix]
"Yume no Uta" [Sunset In Ibiza Remix]
"Trust Your Love" [Terminal Vox Remix]
"love across the ocean" [Caramel Pod Remix]
Oricon Charts (Japan)
References
Koda Kumi albums
2010 remix albums
Avex Group remix albums
|
The Planning-gain Supplement (Preparations) Act 2007 (c 2) is an Act of the Parliament of the United Kingdom.
The Treasury may by order repeal this Act.
Section 1 - Preparatory expenditure
This section provides:
"Secretary of State"
This means one of Her Majesty's Principal Secretaries of State.
References
Halsbury's Statutes,
External links
The Planning-gain Supplement (Preparations) Act 2007, as amended from the National Archives.
The Planning-gain Supplement (Preparations) Act 2007, as originally enacted from the National Archives.
Explanatory notes to the Planning-gain Supplement (Preparations) Act 2007.
United Kingdom Acts of Parliament 2007
|
Cégep de Shawinigan is a public college located in Shawinigan, Quebec, Canada.
Originally known as the Cégep de Shawinigan at its founding in 1968, it was renamed to Collège Shawinigan in 1994 before returning to its original name in 2019. In addition to the main campus in Shawinigan, there is a campus in La Tuque (CEC La Tuque).
Student life
The college has competitive teams in sports (Les Électriks), esports, and improvisation (Les Fourches, Le Trident).
See also
Education in Quebec
References
External links
in French
Fourches de Shawinigan
Colleges in Quebec
Shawinigan
Buildings and structures in Shawinigan
Education in Mauricie
|
Wolverine Pass, 2218 m (7277 ft), is a mountain pass in the Chilcotin Ranges of the Pacific Ranges, the southernmost major subdivision of the Coast Mountains of British Columbia, Canada. It lies between the headwaters of Gun Creek, a major north tributary of the Bridge River, and those of Slim Creek, a tributary of Gun Creek, and is part of the trail system within the Spruce Lake Protected Area (a.k.a. the "Southern Chilcotins").
See also
List of mountain passes
Tyoax Pass
Griswold Pass
Warner Pass
Elbow Pass
References
Mountain passes of British Columbia
Bridge River Country
Chilcotin Ranges
|
```javascript
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
function isDate(value) {
return value instanceof Date && !isNaN(+value);
}
exports.isDate = isDate;
//# sourceMappingURL=isDate.js.map
```
|
```javascript
import _ from 'lodash';
export default class LdapUserSearchController {
/* @ngInject */
constructor($async, Notifications) {
Object.assign(this, { $async, Notifications });
this.users = null;
this.showTable = false;
this.onRemoveClick = this.onRemoveClick.bind(this);
this.onAddClick = this.onAddClick.bind(this);
this.search = this.search.bind(this);
}
onAddClick() {
const lastSetting = _.last(this.settings);
this.settings.push({ BaseDN: this.domainSuffix, UserNameAttribute: lastSetting.UserNameAttribute, Filter: this.baseFilter });
}
onRemoveClick(index) {
this.settings.splice(index, 1);
}
search() {
return this.$async(async () => {
try {
this.users = null;
this.showTable = true;
const users = await this.onSearchClick();
this.users = _.compact(users);
} catch (error) {
this.Notifications.error('Failure', error, 'Failed to search users');
this.showTable = false;
}
});
}
}
```
|
Artemis Intelligent Power (AIP) is an engineering and R&D company based in Edinburgh, Scotland. It primarily manufactures hydraulic machines and transmissions that are based on high-speed solenoid valves and digital control technology. The company is noted for developing its digital displacement technology.
History
The company emerged from a University of Edinburgh project initiated in 1994 by Win Rampen and Stephen Salter, focused on producing high-tech machines to generate renewable energy and reduce the fuel consumption of vehicles. The UK Carbon Trust supported the research project as it spun off into a fully operational company to develop its hydraulic transmission system.
AIP stated that its digital displacement hydraulic pump (DDP) technology can deliver greater efficiency and productivity, particularly when applied to off-highway machines. The technology was first used to power wind turbines and increase their efficiency, and it earned the company the Royal Academy of Engineering's MacRobert Award for innovation.
One of the challenges that the technology addressed was wasted energy. AIP's hydraulic pump technology was later adopted by the train operator ScotRail, allowing its trains to save 9,000 liters of diesel per carriage every year. The company claimed that 64 to 73 percent of a train's energy is lost during braking and transmission. The AIP pump reduces this waste through computer-controlled valves that shut off the pump's cylinders when they are not in use.
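The per-cylinder valve decision described above can be illustrated with a toy controller. This is purely an illustrative sketch under stated assumptions, not AIP's actual control algorithm: each pumping stroke is either enabled or idled so that the running average of enabled strokes tracks the demanded displacement fraction (a first-order sigma-delta-style decision), and idled strokes do essentially no work, which is where the energy saving comes from.

```go
package main

import "fmt"

// enablePattern is an illustrative sketch of digital-displacement control
// (not AIP's actual algorithm): an accumulator gathers the demanded
// fraction each stroke, and a stroke is enabled whenever a whole unit of
// displacement has been requested. Idled strokes consume almost no energy.
func enablePattern(demand float64, strokes int) []bool {
	pattern := make([]bool, strokes)
	acc := 0.0
	for i := range pattern {
		acc += demand
		if acc >= 1.0 { // enabling this stroke keeps delivery on target
			pattern[i] = true
			acc -= 1.0
		}
	}
	return pattern
}

func main() {
	p := enablePattern(0.4, 10) // 40% demand over 10 strokes
	enabled := 0
	for _, on := range p {
		if on {
			enabled++
		}
	}
	fmt.Printf("enabled %d of %d strokes\n", enabled, len(p)) // prints: enabled 4 of 10 strokes
}
```

The point of the sketch is only that partial output is achieved by skipping whole strokes rather than by throttling flow, which is the idea behind turning cylinders off when unused.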
Acquisition
Mitsubishi Heavy Industries acquired AIP in December 2010. It became a wholly-owned subsidiary of the Japanese company through Mitsubishi Power Systems Europe (MPSE). MPSE targeted to build an offshore wind park project for a UK Government national initiative.
In 2018, the Danish multinational company Danfoss acquired AIP. This created a joint venture with Mitsubishi. Danfoss completely acquired Artemis Intelligent Power in 2021, effectively retiring the brand. Its products are now available in the market using the name Danfoss Digital Displacement.
Aside from its hydraulic system, AIP also holds several patents, such as those covering high-capacity, high-speed, digitally controlled valves. AIP has partnered with other companies on projects such as infinitely variable hydraulic transmission systems and hydraulic energy storage technology, including the world's first tidal energy research center, constructed by Babcock International and the University of Edinburgh.
References
Manufacturing companies of Scotland
Engineering companies of Scotland
|
```xml
<?xml version="1.0" encoding="UTF-8"?>
<module external.linked.project.id="PassingDataBetweenFragments" external.linked.project.path="$MODULE_DIR$" external.root.project.path="$MODULE_DIR$" external.system.id="GRADLE" type="JAVA_MODULE" version="4">
<component name="FacetManager">
<facet type="java-gradle" name="Java-Gradle">
<configuration>
<option name="BUILD_FOLDER_PATH" value="$MODULE_DIR$/build" />
<option name="BUILDABLE" value="false" />
</configuration>
</facet>
</component>
<component name="NewModuleRootManager" LANGUAGE_LEVEL="JDK_1_7" inherit-compiler-output="true">
<exclude-output />
<content url="file://$MODULE_DIR$">
<excludeFolder url="file://$MODULE_DIR$/.gradle" />
</content>
<orderEntry type="inheritedJdk" />
<orderEntry type="sourceFolder" forTests="false" />
</component>
</module>
```
|
```javascript
// Should not error
import 'server-only'
export function register() {
if (process.env.NEXT_RUNTIME === 'edge') {
console.log('instrumentation hook on the edge')
} else if (process.env.NEXT_RUNTIME === 'nodejs') {
console.log('instrumentation hook on nodejs')
} else {
require('this should fail')
}
}
```
|
Gustav Heinrich Wiedemann (; 2 October 1826 – 24 March 1899) was a German physicist and scientific author.
Life
Wiedemann was born in Berlin the son of a merchant who died two years later. Following the death of his mother in 1842 he lived with his grandparents.
After attending a private school as well as the Cölnische Humanistische Gymnasium, he entered the University of Berlin in 1844, where he took his doctor's degree three years later under the supervision of Heinrich Gustav Magnus. His thesis on that occasion was devoted to a question in organic chemistry, for he held the opinion that the study of chemistry is an indispensable preliminary to the pursuit of physics, which was his ultimate aim. In Berlin he made the acquaintance of Hermann von Helmholtz at the house of Heinrich Gustav Magnus and was one of the founders of the Berlin Physical Society.
In 1854 he left Germany to take on the role of Professor of Physics in Basel, nine years later he moved to Braunschweig and in 1866 to Karlsruhe. In 1871 he accepted the chair of physical chemistry at Leipzig. The attention he had paid to chemistry in the earlier part of his career enabled him to hold his own in this position, but he found his work more congenial when in 1887 he was transferred to the professorship of physics. With Rudolph Franz, Wiedemann developed the Wiedemann–Franz law relating thermal and electrical conductivity in 1853.
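The Wiedemann–Franz law states that, for metals, the ratio of thermal conductivity κ to electrical conductivity σ is (in its later, refined form) proportional to the absolute temperature: κ/σ = L·T, where L is the Lorenz number, about 2.44 × 10⁻⁸ W·Ω·K⁻². A quick numerical illustration — the values below are rounded textbook figures, not from this article:

```go
package main

import "fmt"

// Wiedemann–Franz law in its modern form: kappa/sigma = L * T.
// Given a metal's electrical conductivity and temperature, it predicts
// the thermal conductivity.
const lorenz = 2.44e-8 // Lorenz number, W·Ω/K²

func thermalConductivity(sigma, temperature float64) float64 {
	return lorenz * sigma * temperature
}

func main() {
	// Copper at room temperature: sigma ≈ 5.96e7 S/m, T = 300 K.
	kappa := thermalConductivity(5.96e7, 300)
	fmt.Printf("predicted kappa ≈ %.0f W/(m·K)\n", kappa) // ≈ 436; measured value for copper is ≈ 400
}
```

The rough agreement between the predicted and measured values for good conductors is what made the law such a useful empirical result.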
August Hagenbach was one of his students in Leipzig.
He died in Leipzig on 24 March 1899.
Literary work
His name is probably most widely known for his literary work. In 1877 he undertook the editorship of the Annalen der Physik und Chemie in succession to Johann Christian Poggendorff, thus starting the series of that scientific periodical which is familiarly cited as Wied. Ann. Another monumental work for which he was responsible was Die Lehre von der Elektricität, or, as it was called in the first instance, Lehre vom Galvanismus und Elektromagnetismus, a book that is unsurpassed for accuracy and comprehensiveness. He produced the first edition in 1861, and a fourth, revised and enlarged, was only completed a short time before his death.
Scientific research
His original work was also important. His data for the thermal conductivity of various metals were for long the most trustworthy at the disposal of physicists, and his determination of the ohm in terms of the specific resistance of mercury showed remarkable skill in quantitative research. He carried out a number of magnetic investigations which resulted in the discovery of many interesting phenomena, some of which have been rediscovered by others; they related among other things to the effect of mechanical strain on the magnetic properties of the magnetic metals, to the relation between the chemical composition of compound bodies and their magnetic properties, and to a curious parallelism between the laws of torsion and of magnetism (see Wiedemann effect). He also investigated electrical endosmosis and the electrical resistance of electrolytes.
Family
In 1851 he married Clara Mitscherlich.
Their eldest son, Eilhard Ernst Gustav, born in Berlin on 1 August 1852, became professor of physics at Erlangen in 1886, and his younger son, Alfred, born in Berlin on 18 July 1856, was appointed to the extraordinary professorship of Egyptology at Bonn in 1892.
References
Attribution:
Works
External links
1826 births
1899 deaths
19th-century German physicists
Foreign Members of the Royal Society
German physical chemists
Scientists from Berlin
Members of the Royal Society of Sciences in Uppsala
|
Ian Ross Tomlinson (27 February 1936 – 26 January 1995) was an Olympic athlete from Australia. He specialised in the triple jump and long jump events during his career.
Born in Perth, Western Australia, Tomlinson represented Australia at two consecutive Olympic Games, starting in 1960. He twice claimed the gold medal in the men's triple jump event at the British Empire and Commonwealth Games, in 1958 and 1962.
Tomlinson died in Melbourne, Victoria, aged 58.
References
1936 births
1995 deaths
Athletes from Perth, Western Australia
Sportsmen from Western Australia
Australian male long jumpers
Australian male triple jumpers
Olympic male long jumpers
Olympic male triple jumpers
Olympic athletes for Australia
Athletes (track and field) at the 1960 Summer Olympics
Athletes (track and field) at the 1964 Summer Olympics
Commonwealth Games gold medallists for Australia
Commonwealth Games gold medallists in athletics
Athletes (track and field) at the 1958 British Empire and Commonwealth Games
Athletes (track and field) at the 1962 British Empire and Commonwealth Games
Australian Athletics Championships winners
Japan Championships in Athletics winners
Commonwealth Games competitors for Australia
Medallists at the 1958 British Empire and Commonwealth Games
Medallists at the 1962 British Empire and Commonwealth Games
|
Robert Griffith (1501/1502 – 1568) was an English politician.
Griffith was mayor of Salisbury in 1545. He was a Member of Parliament (MP) for Salisbury in the Parliaments of April 1554 and November 1554.
References
1502 births
1568 deaths
English MPs 1554
English MPs 1554–1555
Mayors of Salisbury
|
```c
#include <stdlib.h>

/* Repeatedly find the largest element of x and zero it; the loop must
 * terminate after at most 10 passes, once every slot is zero. */
int x[10] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };

int
main(void)
{
    int niterations = 0;

    for (;;) {
        int i, mi = 0, max = 0;

        for (i = 0; i < 10; i++) {
            if (x[i] > max) {
                max = x[i];
                mi = i;
            }
        }
        if (max == 0)
            break;
        x[mi] = 0;
        niterations++;
        if (niterations > 10)
            abort();
    }
    exit(0);
}
```
|
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Authentication With Microsoft Graph Sample</title>
<style>
body {
margin: 0px;
padding: 0px;
font-family: Segoe UI;
}
html,
body {
height: 100%;
}
header {
background-image: url("data:image/svg+xml,%3Csvg version='1.1' id='Layer_1' xmlns='path_to_url xmlns:xlink='path_to_url x='0px' y='0px' viewBox='0 0 4638.9 651.6' style='enable-background:new 0 0 4638.9 651.6;' xml:space='preserve'%3E%3Cstyle type='text/css'%3E .st0%7Bfill:%2355A0E0;%7D .st1%7Bfill:none;%7D .st2%7Bfill:%230058A8;%7D .st3%7Bfill:%23328BD8;%7D .st4%7Bfill:%23B6DCF1;%7D .st5%7Bopacity:0.2;fill:url(%23SVGID_1_);enable-background:new ;%7D%0A%3C/style%3E%3Crect y='1.1' class='st0' width='4640' height='646.3'/%3E%3Cpath class='st1' d='M3987.8,323.6L4310.3,1.1h-65.6l-460.1,460.1c-17.5,17.5-46.1,17.5-63.6,0L3260.9,1.1H0v646.3h3660.3 L3889,418.7c17.5-17.5,46.1-17.5,63.6,0l228.7,228.7h66.6l-260.2-260.2C3970.3,369.8,3970.3,341.1,3987.8,323.6z'/%3E%3Cpath class='st2' d='M3784.6,461.2L4244.7,1.1h-983.9l460.1,460.1C3738.4,478.7,3767.1,478.7,3784.6,461.2z'/%3E%3Cpath class='st3' d='M4640,1.1h-329.8l-322.5,322.5c-17.5,17.5-17.5,46.1,0,63.6l260.2,260.2H4640L4640,1.1L4640,1.1z'/%3E%3Cpath class='st4' d='M3889,418.8l-228.7,228.7h521.1l-228.7-228.7C3935.2,401.3,3906.5,401.3,3889,418.8z'/%3E%3ClinearGradient id='SVGID_1_' gradientUnits='userSpaceOnUse' x1='3713.7576' y1='438.1175' x2='3911.4084' y2='14.2535' gradientTransform='matrix(1 0 0 -1 0 641.3969)'%3E%3Cstop offset='0' style='stop-color:%23FFFFFF;stop-opacity:0.5'/%3E%3Cstop offset='1' style='stop-color:%23FFFFFF'/%3E%3C/linearGradient%3E%3Cpath class='st5' d='M3952.7,124.5c-17.5-17.5-46.1-17.5-63.6,0l-523,523h1109.6L3952.7,124.5z'/%3E%3C/svg%3E%0A");
background-repeat: no-repeat;
background-size: 100%;
background-position: right;
background-color: #55A0E0;
width: 100%;
font-size: 44px;
height: 120px;
color: white;
padding: 30px 0 40px 0px;
display: inline-block;
}
.header-icon {
background-image: url("data:image/svg+xml;utf8,%3Csvg%20version%3D%221.1%22%20id%3D%22Layer_1%22%20xmlns%3D%22http%3A//www.w3.org/2000/svg%22%20xmlns%3Axlink%3D%22http%3A//www.w3.org/1999/xlink%22%20x%3D%220px%22%20y%3D%220px%22%0A%09%20viewBox%3D%220%200%20150.2%20125%22%20style%3D%22enable-background%3Anew%200%200%20150.2%20125%3B%22%20xml%3Aspace%3D%22preserve%22%3E%0A%3Cstyle%20type%3D%22text/css%22%3E%0A%09.st0%7Bfill%3Anone%3B%7D%0A%09.st1%7Bfill%3A%23FFFFFF%3B%7D%0A%3C/style%3E%0A%3Crect%20x%3D%220.5%22%20class%3D%22st0%22%20width%3D%22149.7%22%20height%3D%22125%22/%3E%0A%3Cg%3E%0A%09%3Cpath%20class%3D%22st1%22%20d%3D%22M59%2C102.9L21.8%2C66c-3.5-3.5-3.5-9.1%2C0-12.5l37-36.5l2.9%2C3l-37%2C36.4c-1.8%2C1.8-1.8%2C4.7%2C0%2C6.6l37.2%2C37L59%2C102.9z%22%0A%09%09/%3E%0A%3C/g%3E%0A%3Cg%3E%0A%09%3Cpath%20class%3D%22st1%22%20d%3D%22M92.5%2C102.9l-3-3l37.2-37c0.9-0.9%2C1.4-2%2C1.4-3.3c0-1.2-0.5-2.4-1.4-3.3L89.5%2C20l2.9-3l37.2%2C36.4%0A%09%09c1.7%2C1.7%2C2.6%2C3.9%2C2.6%2C6.3s-0.9%2C4.6-2.6%2C6.3L92.5%2C102.9z%22/%3E%0A%3C/g%3E%0A%3Cg%3E%0A%09%3Cpath%20class%3D%22st1%22%20d%3D%22M90.1%2C68.4c-4.5%2C0-8-3.5-8-8.1c0-4.5%2C3.5-8.1%2C8-8.1c4.4%2C0%2C8%2C3.7%2C8%2C8.1C98.1%2C64.7%2C94.4%2C68.4%2C90.1%2C68.4z%0A%09%09%20M90.1%2C56.5c-2.2%2C0-3.8%2C1.7-3.8%2C3.9c0%2C2.2%2C1.7%2C3.9%2C3.8%2C3.9c1.9%2C0%2C3.8-1.6%2C3.8-3.9S91.9%2C56.5%2C90.1%2C56.5z%22/%3E%0A%3C/g%3E%0A%3Cg%3E%0A%09%3Cpath%20class%3D%22st1%22%20d%3D%22M61.4%2C68.4c-4.5%2C0-8-3.5-8-8.1c0-4.5%2C3.5-8.1%2C8-8.1c4.4%2C0%2C8%2C3.7%2C8%2C8.1C69.5%2C64.7%2C65.8%2C68.4%2C61.4%2C68.4z%0A%09%09%20M61.4%2C56.5c-2.2%2C0-3.8%2C1.7-3.8%2C3.9c0%2C2.2%2C1.7%2C3.9%2C3.8%2C3.9c1.9%2C0%2C3.8-1.6%2C3.8-3.9S63.3%2C56.5%2C61.4%2C56.5z%22/%3E%0A%3C/g%3E%0A%3C/svg%3E%0A");
background-repeat: no-repeat;
float: left;
height: 140px;
width: 140px;
display: inline-block;
vertical-align: middle;
}
.header-text {
padding-left: 1%;
color: #FFFFFF;
font-family: "Segoe UI";
font-size: 72px;
font-weight: 300;
letter-spacing: 0.35px;
line-height: 96px;
display: inline-block;
vertical-align: middle;
}
.header-inner-container {
min-width: 480px;
max-width: 1366px;
margin-left: auto;
margin-right: auto;
vertical-align: middle;
}
.header-inner-container::after {
content: "";
clear: both;
display: table;
}
.main-content-area {
padding-left: 30px;
}
.content-title {
color: #000000;
font-family: "Segoe UI";
font-size: 46px;
font-weight: 300;
line-height: 62px;
}
.main-text {
color: #808080;
font-size: 24px;
font-family: "Segoe UI";
font-size: 24px;
font-weight: 200;
line-height: 32px;
}
.main-text-p1{
padding-top: 48px;
padding-bottom: 28px;
}
.endpoint {
height: 32px;
width: 571px;
color: #808080;
font-family: "Segoe UI";
font-size: 24px;
font-weight: 200;
line-height: 32px;
padding-top: 28px;
}
.how-to-build-section {
padding-top: 20px;
padding-left: 30px;
}
.how-to-build-section>h3 {
font-size: 16px;
font-weight: 600;
letter-spacing: 0.35px;
line-height: 22px;
margin: 0 0 24px 0;
text-transform: uppercase;
}
.step-container {
display: flex;
align-items: stretch;
position: relative;
}
.step-container dl {
border-left: 1px solid #A0A0A0;
display: block;
padding: 0 24px;
margin: 0;
}
.step-container dl>dt::before {
background-color: white;
border: 1px solid #A0A0A0;
border-radius: 100%;
content: '';
left: 47px;
height: 11px;
position: absolute;
width: 11px;
}
.step-container dl>.test-bullet::before {
background-color: blue;
}
.step-container dl>dt {
display: block;
font-size: inherit;
font-weight: bold;
line-height: 20px;
}
.step-container dl>dd {
font-size: inherit;
line-height: 20px;
margin-left: 0;
padding-bottom: 32px;
}
.step-container:last-child dl {
border-left: 1px solid transparent;
}
.ctaLink {
background-color: transparent;
border: 1px solid transparent;
color: #006AB1;
cursor: pointer;
font-weight: 600;
padding: 0;
white-space: normal;
}
.ctaLink:focus {
outline: 1px solid #00bcf2;
}
.ctaLink:hover {
text-decoration: underline;
}
.step-icon {
display: flex;
height: 38px;
margin-right: 15px;
width: 38px;
}
.step-icon>div {
height: 30px;
width: 30px;
background-repeat: no-repeat;
}
.ms-logo-container {
min-width: 580px;
max-width: 980px;
margin-left: auto;
margin-right: auto;
left: 0;
right: 0;
transition: bottom 400ms;
}
.ms-logo {
float: right;
background-image: url("data:image/svg+xml;utf8,%0A%3Csvg%20version%3D%221.1%22%20id%3D%22MS-symbol%22%20xmlns%3D%22http%3A//www.w3.org/2000/svg%22%20xmlns%3Axlink%3D%22http%3A//www.w3.org/1999/xlink%22%20x%3D%220px%22%20y%3D%220px%22%0A%09%20viewBox%3D%220%200%20400%20120%22%20style%3D%22enable-background%3Anew%200%200%20400%20120%3B%22%20xml%3Aspace%3D%22preserve%22%3E%0A%3Cstyle%20type%3D%22text/css%22%3E%0A%09.st0%7Bfill%3Anone%3B%7D%0A%09.st1%7Bfill%3A%23737474%3B%7D%0A%09.st2%7Bfill%3A%23D63F26%3B%7D%0A%09.st3%7Bfill%3A%23167D3E%3B%7D%0A%09.st4%7Bfill%3A%232E76BC%3B%7D%0A%09.st5%7Bfill%3A%23FDB813%3B%7D%0A%3C/style%3E%0A%3Crect%20x%3D%220.6%22%20class%3D%22st0%22%20width%3D%22398.7%22%20height%3D%22119%22/%3E%0A%3Cpath%20class%3D%22st1%22%20d%3D%22M171.3%2C38.4v43.2h-7.5V47.7h-0.1l-13.4%2C33.9h-5l-13.7-33.9h-0.1v33.9h-6.9V38.4h10.8l12.4%2C32h0.2l13.1-32H171.3%0A%09z%20M177.6%2C41.7c0-1.2%2C0.4-2.2%2C1.3-3c0.9-0.8%2C1.9-1.2%2C3.1-1.2c1.3%2C0%2C2.4%2C0.4%2C3.2%2C1.3c0.8%2C0.8%2C1.3%2C1.8%2C1.3%2C3c0%2C1.2-0.4%2C2.2-1.3%2C3%0A%09c-0.9%2C0.8-1.9%2C1.2-3.2%2C1.2s-2.3-0.4-3.1-1.2C178%2C43.8%2C177.6%2C42.8%2C177.6%2C41.7z%20M185.7%2C50.6v31h-7.3v-31H185.7z%20M207.8%2C76.3%0A%09c1.1%2C0%2C2.3-0.3%2C3.6-0.8c1.3-0.5%2C2.5-1.2%2C3.6-2v6.8c-1.2%2C0.7-2.5%2C1.2-4%2C1.5c-1.5%2C0.3-3.1%2C0.5-4.9%2C0.5c-4.6%2C0-8.3-1.4-11.1-4.3%0A%09c-2.9-2.9-4.3-6.6-4.3-11c0-5%2C1.5-9.1%2C4.4-12.3c2.9-3.2%2C7-4.8%2C12.4-4.8c1.4%2C0%2C2.7%2C0.2%2C4.1%2C0.5c1.4%2C0.4%2C2.5%2C0.8%2C3.3%2C1.2v7%0A%09c-1.1-0.8-2.3-1.5-3.4-1.9c-1.2-0.5-2.4-0.7-3.6-0.7c-2.9%2C0-5.2%2C0.9-7%2C2.8c-1.8%2C1.9-2.7%2C4.4-2.7%2C7.6c0%2C3.1%2C0.8%2C5.6%2C2.5%2C7.3%0A%09C202.6%2C75.4%2C204.9%2C76.3%2C207.8%2C76.3z%20M235.7%2C50.1c0.6%2C0%2C1.1%2C0%2C1.6%2C0.1s0.9%2C0.2%2C1.2%2C0.3v7.4c-0.4-0.3-0.9-0.5-1.7-0.8%0A%09c-0.7-0.3-1.6-0.4-2.7-0.4c-1.8%2C0-3.3%2C0.8-4.5%2C2.3c-1.2%2C1.5-1.9%2C3.8-1.9%2C7v15.6h-7.3v-31h7.3v4.9h0.1c0.7-1.7%2C1.7-3%2C3-4%0A%09C232.2%2C50.6%2C233.8%2C50.1%2C235.7%2C50.1z%20M238.9%2C66.6c0-5.1%2C1.4-9.2
%2C4.3-12.2c2.9-3%2C6.9-4.5%2C12.1-4.5c4.8%2C0%2C8.6%2C1.4%2C11.3%2C4.3%0A%09c2.7%2C2.9%2C4.1%2C6.8%2C4.1%2C11.7c0%2C5-1.4%2C9-4.3%2C12c-2.9%2C3-6.8%2C4.5-11.8%2C4.5c-4.8%2C0-8.6-1.4-11.4-4.2C240.3%2C75.3%2C238.9%2C71.4%2C238.9%2C66.6z%0A%09%20M246.5%2C66.3c0%2C3.2%2C0.7%2C5.7%2C2.2%2C7.4c1.5%2C1.7%2C3.6%2C2.6%2C6.3%2C2.6c2.7%2C0%2C4.7-0.9%2C6.1-2.6c1.4-1.7%2C2.1-4.2%2C2.1-7.6c0-3.3-0.7-5.8-2.2-7.5%0A%09c-1.4-1.7-3.4-2.5-6-2.5c-2.7%2C0-4.7%2C0.9-6.2%2C2.7C247.2%2C60.5%2C246.5%2C63%2C246.5%2C66.3z%20M281.5%2C58.8c0%2C1%2C0.3%2C1.9%2C1%2C2.5%0A%09c0.7%2C0.6%2C2.1%2C1.3%2C4.4%2C2.2c2.9%2C1.2%2C5%2C2.5%2C6.1%2C3.9c1.2%2C1.5%2C1.8%2C3.2%2C1.8%2C5.3c0%2C2.9-1.1%2C5.3-3.4%2C7c-2.2%2C1.8-5.3%2C2.7-9.1%2C2.7%0A%09c-1.3%2C0-2.7-0.2-4.3-0.5c-1.6-0.3-2.9-0.7-4-1.2v-7.2c1.3%2C0.9%2C2.8%2C1.7%2C4.3%2C2.2c1.5%2C0.5%2C2.9%2C0.8%2C4.2%2C0.8c1.6%2C0%2C2.9-0.2%2C3.6-0.7%0A%09c0.8-0.5%2C1.2-1.2%2C1.2-2.3c0-1-0.4-1.9-1.2-2.5c-0.8-0.7-2.4-1.5-4.6-2.4c-2.7-1.1-4.6-2.4-5.7-3.8c-1.1-1.4-1.7-3.2-1.7-5.4%0A%09c0-2.8%2C1.1-5.1%2C3.3-6.9c2.2-1.8%2C5.1-2.7%2C8.6-2.7c1.1%2C0%2C2.3%2C0.1%2C3.6%2C0.4c1.3%2C0.2%2C2.5%2C0.6%2C3.4%2C0.9v6.9c-1-0.6-2.1-1.2-3.4-1.7%0A%09c-1.3-0.5-2.6-0.7-3.8-0.7c-1.4%2C0-2.5%2C0.3-3.2%2C0.8C281.9%2C57.1%2C281.5%2C57.8%2C281.5%2C58.8z%20M297.9%2C66.6c0-5.1%2C1.4-9.2%2C4.3-12.2%0A%09c2.9-3%2C6.9-4.5%2C12.1-4.5c4.8%2C0%2C8.6%2C1.4%2C11.3%2C4.3c2.7%2C2.9%2C4.1%2C6.8%2C4.1%2C11.7c0%2C5-1.4%2C9-4.3%2C12c-2.9%2C3-6.8%2C4.5-11.8%2C4.5%0A%09c-4.8%2C0-8.6-1.4-11.4-4.2C299.4%2C75.3%2C297.9%2C71.4%2C297.9%2C66.6z%20M305.5%2C66.3c0%2C3.2%2C0.7%2C5.7%2C2.2%2C7.4c1.5%2C1.7%2C3.6%2C2.6%2C6.3%2C2.6%0A%09c2.7%2C0%2C4.7-0.9%2C6.1-2.6c1.4-1.7%2C2.1-4.2%2C2.1-7.6c0-3.3-0.7-5.8-2.2-7.5c-1.4-1.7-3.4-2.5-6-2.5c-2.7%2C0-4.7%2C0.9-6.2%2C2.7%0A%09C306.3%2C60.5%2C305.5%2C63%2C305.5%2C66.3z%20M353.9%2C56.6h-10.9v25h-7.4v-25h-5.2v-6h5.2v-4.3c0-3.3%2C1.1-5.9%2C3.2-8c2.1-2.1%2C4.8-3.1%2C8.1-3.1%0A%09c0.9%2C0%2C1.7%2C0%2C2.4%2C0.1c0.7%2C0.1%2C1.3%2C0.2%2C1.8%2C0.4V42c-0.2-0.1-0.7-0.3-1.3-0.5c-0.6-0.2-1
.3-0.3-2.1-0.3c-1.5%2C0-2.7%2C0.5-3.5%2C1.4%0A%09s-1.2%2C2.4-1.2%2C4.2v3.7h10.9v-7l7.3-2.2v9.2h7.4v6h-7.4v14.5c0%2C1.9%2C0.3%2C3.3%2C1%2C4c0.7%2C0.8%2C1.8%2C1.2%2C3.3%2C1.2c0.4%2C0%2C0.9-0.1%2C1.5-0.3%0A%09c0.6-0.2%2C1.1-0.4%2C1.6-0.7v6c-0.5%2C0.3-1.2%2C0.5-2.3%2C0.7c-1.1%2C0.2-2.1%2C0.3-3.2%2C0.3c-3.1%2C0-5.4-0.8-6.9-2.5c-1.5-1.6-2.3-4.1-2.3-7.4%0A%09V56.6z%22/%3E%0A%3Cg%3E%0A%09%3Crect%20x%3D%2231%22%20y%3D%2224%22%20class%3D%22st2%22%20width%3D%2234.2%22%20height%3D%2234.2%22/%3E%0A%09%3Crect%20x%3D%2268.8%22%20y%3D%2224%22%20class%3D%22st3%22%20width%3D%2234.2%22%20height%3D%2234.2%22/%3E%0A%09%3Crect%20x%3D%2231%22%20y%3D%2261.8%22%20class%3D%22st4%22%20width%3D%2234.2%22%20height%3D%2234.2%22/%3E%0A%09%3Crect%20x%3D%2268.8%22%20y%3D%2261.8%22%20class%3D%22st5%22%20width%3D%2234.2%22%20height%3D%2234.2%22/%3E%0A%3C/g%3E%0A%3C/svg%3E%0A");
}
.ms-logo-container>div {
min-height: 60px;
width: 150px;
background-repeat: no-repeat;
}
.row {
padding: 90px 0px 0 20px;
min-width: 480px;
max-width: 1366px;
margin-left: auto;
margin-right: auto;
}
.column {
float: left;
width: 45%;
padding-right: 20px;
}
.row:after {
content: "";
display: table;
clear: both;
}
a {
text-decoration: none;
}
.download-the-emulator {
height: 20px;
color: #0063B1;
font-size: 15px;
line-height: 20px;
padding-bottom: 70px;
}
.how-to-iframe {
max-width: 700px !important;
min-width: 650px !important;
height: 700px !important;
}
.remove-frame-height {
height: 10px;
}
@media only screen and (max-width: 1300px) {
.ms-logo {
padding-top: 30px;
}
.header-text {
font-size: 40px;
}
.column {
float: none;
padding-top: 30px;
width: 100%;
}
.ms-logo-container {
padding-top: 30px;
min-width: 480px;
max-width: 650px;
margin-left: auto;
margin-right: auto;
}
.row {
padding: 20px 0px 0 20px;
min-width: 480px;
max-width: 650px;
margin-left: auto;
margin-right: auto;
}
}
@media only screen and (max-width: 1370px) {
header {
background-color: #55A0E0;
background-size: auto 200px;
}
}
@media only screen and (max-width: 1230px) {
header {
background-color: #55A0E0;
background-size: auto 200px;
}
.header-text {
font-size: 44px;
}
.header-icon {
height: 120px;
width: 120px;
}
}
@media only screen and (max-width: 1000px) {
header {
background-color: #55A0E0;
background-image: none;
}
}
@media only screen and (max-width: 632px) {
.header-text {
font-size: 32px;
}
.row {
padding: 10px 0px 0 10px;
max-width: 490px !important;
min-width: 410px !important;
}
.endpoint {
font-size: 25px;
}
.main-text {
font-size: 20px;
}
.step-container dl>dd {
font-size: 14px;
}
.column {
padding-right: 5px;
}
.header-icon {
height: 110px;
width: 110px;
}
.how-to-iframe {
max-width: 480px !important;
min-width: 400px !important;
height: 650px !important;
overflow: hidden;
}
}
.remove-frame-height {
max-height: 10px;
}
</style>
<script>
document.addEventListener('DOMContentLoaded', function () {
loadFrame();
});
var loadFrame = function () {
var iframe = document.createElement('iframe');
iframe.setAttribute("id", "iframe");
var offLineHTMLContent = "";
var frameElement = document.getElementById("how-to-iframe");
if (window.navigator.onLine) {
iframe.src = 'path_to_url';
iframe.setAttribute("scrolling", "no");
iframe.setAttribute("frameborder", "0");
iframe.setAttribute("width", "100%");
iframe.setAttribute("height", "100%");
var frameDiv = document.getElementById("how-to-iframe");
frameDiv.appendChild(iframe);
} else {
frameElement.classList.add("remove-frame-height");
}
};
</script>
</head>
<body>
<header class="header">
<div class="header-inner-container">
<div class="header-icon" style="display: inline-block"></div>
<div class="header-text" style="display: inline-block">Authentication With Microsoft Graph Sample</div>
</div>
</header>
<div class="row">
<div class="column main-content-area">
<div class="content-title">Your bot is ready!</div>
<div class="main-text main-text-p1">You can test your bot in the Bot Framework Emulator<br />
by connecting to path_to_url</div>
<div class="main-text download-the-emulator"><a class="ctaLink" href="path_to_url"
target="_blank">Download the Emulator</a></div>
<div class="main-text">Visit <a class="ctaLink" href="path_to_url" target="_blank">Azure
Bot Service</a> to register your bot and add it to<br />
various channels. The bot's endpoint URL typically looks
like this:</div>
<div class="endpoint">path_to_url
</div>
<div class="column how-to-iframe" id="how-to-iframe"></div>
</div>
</div>
<div class="ms-logo-container">
<div class="ms-logo"></div>
</div>
</body>
</html>
```
|
Elisabeth Clay Moreira is an American submission grappler, Brazilian jiu-jitsu practitioner and competitor.
Clay won multiple World and Pan championship titles (Gi and No-Gi) throughout her colored belts, as well as the ADCC West Coast Trials when just a 16-year-old blue belt. She is a black belt 3x World No-Gi champion and a 4x Pan No-Gi champion, as well as a World, European Open and Brazilian Nationals medallist. Clay is currently ranked No. 1 in the 2022–2023 IBJJF No-Gi black belt division.
Early life
Elisabeth Ann Clay Moreira was born on June 10, 2000, in Katy, Texas, USA. When she was still a child her family moved to Oklahoma and then Alaska. From a family of competitive gymnasts, Clay tried gymnastics before joining a local MMA gym at 12, where she started Brazilian jiu-jitsu.
Early career
In 2016, after competing at the IBJJF World Championship in the juvenile blue belt division, winning gold at heavyweight and silver in the absolute, Clay moved to Legacy Jiu Jitsu (an Ares affiliate) in Anchorage, Alaska. Clay trained under coach Jordan Kontra and started competing in major tournaments. As a 16-year-old blue belt, Clay upset the bracket by winning the 2017 ADCC Submission Grappling West Coast Trials. She became known as a "black belt killer" after defeating brown and black belt opponents. In 2018 she won medium heavy and the open class at the Pan Championship, then became absolute blue belt World champion. She was then promoted to purple belt.
Mid-2018, Clay moved to Modesto, California, to train under Samir Chantre and Osvaldo Moizinho at the Ares Jiu-Jitsu Team headquarters. In February 2019 she placed third in the 2nd ADCC North American Trials. In March 2019, Clay won double bronze at purple belt at the 2019 Pan Championship. In 2019, she won the IBJJF World No-Gi Championship at brown belt, submitting all her opponents in the process. At Fight 2 Win 143 in June 2020 she defeated World No-Gi super-heavy and open weight black belt champion Kendall Reusing via split decision. At Fight 2 Win 147 in July 2020 Clay submitted 2x IBJJF world champion Luiza Monteiro via outside heel hook.
Black belt career
2020-2022
In November 2020 she received her black belt from Moizinho and Chantre. In January 2021 FloGrappling chose her as the "2020 Female Grappler of the Year". In February she made her black belt Gi debut at Fight to Win 165 where she submitted no. 2-ranked medium heavyweight Maria Malyjasiak via toehold, winning in the process the welterweight Gi title.
In May 2021, at Subversiv 5 in Miami, Clay won the superfight, submitting Andressa Cintra with a kneebar. A few weeks later Clay won silver at the Pan-American Championship, then double gold at the 2021 Pan No-Gi with a 100% submission rate at both middleweight and in the absolute division, submitting Kendall Reusing in the openweight final. In October Clay won her first black belt world title at the 2021 World No-Gi Championship, also winning silver in openweight.
In April 2022 Clay participated in the ADCC West Coast Trials but lost on points to Amy Campo in the semi-final. In September she was invited to compete at the 2022 ADCC World Championship replacing Carina Santi; Clay again lost on points to Amy Campo. At the 2022 Pan No-Gi in October, Clay won her weight class and the openweight division for the second year in a row after submitting all six of her opponents.
2023
Clay competed in the 2023 IBJJF European Championship, winning a bronze medal in the middleweight division. Clay is currently ranked No. 1 in the IBJJF No-Gi black belt division. She was then invited to compete in the women's under 66kg grand prix at Polaris 23 on March 11, 2023. Clay defeated Joanna Dineva, Ffion Davies, and Amy Campo in one night to win the tournament.
On March 26, 2023, Clay won a gold medal in the middleweight division of the IBJJF Pan Championship 2023. She then competed in the Campeonato Brasileiro de Jiu-Jitsu on May 7, 2023 and won a pair of bronze medals in the middleweight and absolute divisions. Clay then competed at the IBJJF American National Championship 2023, winning a silver medal at middleweight and a bronze medal in the absolute division on July 7. In the no gi edition of the competition on July 8, she won a gold medal at middleweight and a gold medal in the absolute division.
Clay competed against Brianna Ste-Marie for the vacant featherweight Who's Number One title at WNO: Night of Champions on October 1, 2023. She won the match by unanimous decision.
Championships and accomplishments
Main Achievements (black belt level):
4 x IBJJF Pan No-Gi Champion (2022 / 2021)
3 x IBJJF World No-Gi Champion (2022 / 2021)
3 x IBJJF American National (2022 / 2021)
Polaris Under 66kg Grand Prix Champion (2023)
IBJJF American National No-Gi Champion (2022)
IBJJF Dallas International Open Champion (2021)
SUBVERSIV Tournament Champion (2020)
F2W Heavyweight Champion (2021)
2nd place IBJJF World Championship (2021)
2nd place IBJJF World No-Gi Championship (2021)
2nd place IBJJF Pan Championship (2021)
2nd place CBJJ Brazilian Nationals (2022)
2nd place IBJJF American National (2022)
2nd place Abu Dhabi Grand Slam Miami (2022)
3rd place IBJJF European Open Championship (2023)
3rd place CBJJ Brazilian Nationals (2023)
Main Achievements (colored belts):
4 x IBJJF World No-Gi Champion (2018 purple, 2019 brown)
3 x IBJJF World Juvenile Champion (2016 / 2017)
3 x IBJJF Pan No-Gi Champion (2018 blue, 2019 purple)
2 x IBJJF World Champion (2018 blue)
2 x IBJJF Pan Champion (2018 blue)
2 x IBJJF Pan Juvenile Champion (2016)
IBJJF Pan Championship No-Gi Champion (2019 purple)
ADCC American Trials Champion (2017)
2nd place IBJJF World Championship (2018 blue)
2nd place IBJJF World Juvenile Championship (2016)
2nd place IBJJF Pan Championship Juvenile (2017)
3rd place IBJJF Pan Championship (2019 purple)
3rd place ADCC American Trials (2019)
Instructor lineage
Mitsuyo Maeda > Carlos Gracie Sr. > Helio Gracie > Carlos Gracie Junior > Samir Chantre > Elisabeth Clay
Notes
References
2000 births
American practitioners of Brazilian jiu-jitsu
Living people
People awarded a black belt in Brazilian jiu-jitsu
American submission wrestlers
Sportspeople from Alaska
World No-Gi Brazilian Jiu-Jitsu Championship medalists
Female Brazilian jiu-jitsu practitioners
|
```javascript
if (typeof cptable === 'undefined') cptable = {};
// Code page 20838 (EBCDIC-style layout): both lookup tables are built
// from a single definition string, where position == code point.
cptable[20838] = (function () {
  var d = "\u0002\u0003\t\u000b\f\r\u000e\u000f\u0010\u0011\u0012\u0013\b\u0018\u0019\u001c\u001d\u001e\u001f\n\u0017\u001b\u0005\u0006\u0007\u0016\u0004\u0014\u0015\u001a [.<(+|&]!$*);-/^,%_>?`:#@'=\"abcdefghijklmnopqr~stuvwxyz{ABCDEFGHI}JKLMNOPQR\\STUVWXYZ0123456789";
  var D = [];   // decode table: byte -> character
  var e = {};   // encode table: character -> byte
  for (var i = 0; i != d.length; ++i) {
    // U+FFFD marks unmapped slots; they decode but are never encoded.
    if (d.charCodeAt(i) !== 0xFFFD) e[d.charAt(i)] = i;
    D[i] = d.charAt(i);
  }
  return { "enc": e, "dec": D };
})();
```
|
Nabakalebara also spelled as Navakalevara () is the ritualistic recreation of the wooden icons of four Hindu deities (Jagannath, Balabhadra, Subhadra, and Sudarshana) at Jagannath Temple, Puri. The ritual is performed during the 8th, 12th, or 19th year after the previous Nabakalebara.
Nabakalebara is an important festival in the Hindu Odia calendar, observed in the Jagannath Temple, Puri. It was first organised in 1575 A.D. by Yaduvanshi Bhoi King Ramachandra Deva. It marks the symbolic demise and rebirth of Jagannath at Puri. The event involves installation of new images in the Jagannath temple and burial of the old ones in the temple premises at Koili Baikuntha.
Etymology
Nabakalebara is a combination of two Odia words: naba (new) and kalebara (body), translated as "the change of one's physical form."
Timing
A Nabakalebara year is one in which the full moon occurs twice during the month of Ashadha. Roughly every three years in the Hindu calendar, an extra lunar month is inserted to maintain the balance between the lunar and solar years; this intercalary month is called Adhikmasa or Malamasa. A year with an extra month (अधिकमास, मलमास or पुरुषोत्तममास) is considered auspicious for the ceremony, which typically occurs every twelve to nineteen years. The deities undergo the process of Nabakalebara in a year in which the Adhikmasa falls. The deities are carved from a special type of neem wood, known as daru brahma. Preparations for the ceremony begin in the month of Chaitra. The most recent ceremony was in 2015, 19 years after the 1996 ceremony.
Over three million devotees were expected to visit the temple during the Nabakalebara 2015.
Jirna bera parityaga
Jirna bera parityaga () means "the leaving of the old deity and the consecration of the new". As a person puts on new garments and gives up the old, the soul accepts new material bodies and gives up old, useless ones. According to temple rituals, the deities are changed. Made from neem wood, musk, sandalwood and other materials, they undergo a change before the adhika Ashadha ends. Agama shastras followed in other parts of India for Vishnu worship, such as the Vaikhanasas, also prescribe the change of wooden deities under a specific astrological combination. Deities made of stone or metal do not need to be changed (unless they are damaged), but wooden deities must be changed within a specific number of years and their power must be ritually transferred. Nabakalebara is the transformation of the lords of the Puri temple into new bodies. The new wooden idols of Jagannath, Balabhadra, Subhadra and Sudarshan are welcomed to the temple in celebration. The old idols are ritually buried in Koili Baikuntha in accordance with centuries-old Odia scriptures.
Nabakalebara 2015
The Nabakalebara 2015 began with the Bana Jaga Jatra in March. The holy darus were identified and brought to Puri.
Finding the sacred trees
Ordinary neem trees cannot be used to make the deities. For the identification of the tree, conditions and signs are taken into account.
The daru (log) of Sudarshan should have three branches. The skin (bark) of the neem tree should be reddish. The tree should have a chakra (wheel) with a small depression in the middle. The daru of Balabhadra should have seven branches. The bark of the tree should be light-brown or white. It should have the sign of a plow and pestle on it. Near the tree should be a heritage site and a graveyard. The daru of Subhadra should have five branches, and its bark should be yellowish. There should be a lotus flower on the tree. The daru of Jagannath should have four main branches, and its bark should be dark. The tree should have a Shankha and a chakra on it. There should be a cremation ground and an anthill near the tree, and a snake hole at its roots. The tree should be near a river, pond, a three-way crossing or three mountains. There should not be birds' nests on the tree, and no bird should have perched on the tree. The tree should be surrounded by other trees, and there should be a temple to Shiva in the vicinity. The tree should be free of parasitic plants and creepers.
Rituals
The search group announces where the logs are located in order; the last is Jagannath's tree. Security is arranged by the government of Odisha. The trees are ritually cut down, and the logs transported in small carts to the temple in Puri, where they are carved into deities. At midnight on Chaturdashi, the tattva Padārtha is transferred from the old deities to the new. The new deities are worshipped, and the old are buried in sand.
Rituals and mythology are attached to Nabakalebara. The procedure for the transformation of images was mentioned in Sanskrit manuscripts, written on palm leaves and kept in the temple. The temple's three head priests are charged with reading and interpreting them.
The images of Jagannath must be made of wood. Since the deity is dark, the neem tree from which his image is carved should be dark also. The trees used for the images of his brother and sister are lighter in color, since his siblings are fair in complexion.
Jagannath's tree must have four principal branches, symbolizing the four arms of Narayana. No branches are broken or cut. The tree must be located near a three-way intersection or surrounded by three mountains. A hermitage and a temple to Shiva must be nearby, and natural impressions of a conch-shell and chakra (wheel) must be on the trunk.
After the tree is felled, sections are selected for carving and the remainder is buried; the location is then considered sacred. The logs are placed in a wooden six-wheeled oxcart and transported to the temple, where they are kept in the koili vaikuntha (koili means "burial ground", and vaikuntha means "heaven"); the old deities are buried, and the new ones made. After the transfer of essence, the old images are considered lifeless.
Carving of the images begins with the three oldest of the nine main wood carvers working on Jagannath. The images of Lord Balabhadra and Devi are simultaneously carved by two three-person teams. More than 50 carpenters assist the carvers. The work is done in secret, and not even the temple's head priest is allowed to visit the workplace. The carving enclosure is open on the top, but closed with strong doors. The carvers are not supposed to eat, drink or smoke in the enclosure. The carvings are completed in 21 days, during which the carvers are not supposed to leave the temple; they sleep in the temple courtyard, and eat mahaprasad. Devotional songs are sung by devadasis, accompanied by temple musicians, outside the koili vaikuntha during the carving period; shlokas from the Vedas are chanted by Brahmin priests.
After the new deities are made they are brought into the temple's inner sanctum, they are placed in front of (and facing) the old ones. No puja is performed at this time, and no food is offered. The images are life-sized, and very heavy. The transformation ceremony takes place three days before the chariot festival.
At midnight, the old deities are carried out and buried in the koili vaikuntha before dawn. Although the other deities have separate graves, the previous Jagannath deities are buried on top of each other.
On the morning of the second day, the new deities are seated on the altar. The temple's daily routine resumes after nearly 58 days (the search and carving periods). Sweet-smelling flower garlands and new garments are given to the new deities, food is offered, and a puja is performed; devotees can again enter the temple. On the third day, the new deities emerge from the temple for the chariot festival. Nabakalebara attracts millions of people from around the world to Puri, and is one of India's largest festivals.
References
External links
Portal on Nabakalebara 2015
Puri Nabakalebara 2015
More about Nabakalebara 2015
Festivals in Jagannath
Hindu festivals
Religious festivals in India
```c
/*-------------------------------------------------------------------------
*
* multi_logical_replication.c
*
* This file contains functions to use logical replication on the distributed
* tables for moving/replicating shards.
*
*
 *-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "fmgr.h"
#include "libpq-fe.h"
#include "miscadmin.h"
#include "pgstat.h"
#include "access/genam.h"
#include "access/htup_details.h"
#include "access/sysattr.h"
#include "access/xact.h"
#include "catalog/namespace.h"
#include "catalog/pg_constraint.h"
#include "catalog/pg_subscription_rel.h"
#include "commands/dbcommands.h"
#include "common/hashfn.h"
#include "nodes/bitmapset.h"
#include "parser/scansup.h"
#include "postmaster/interrupt.h"
#include "storage/ipc.h"
#include "storage/latch.h"
#include "storage/lock.h"
#include "utils/builtins.h"
#include "utils/fmgroids.h"
#include "utils/fmgrprotos.h"
#include "utils/formatting.h"
#include "utils/guc.h"
#include "utils/inval.h"
#include "utils/lsyscache.h"
#include "utils/pg_lsn.h"
#include "utils/rel.h"
#include "utils/ruleutils.h"
#include "utils/syscache.h"
#include "pg_version_constants.h"
#include "distributed/adaptive_executor.h"
#include "distributed/citus_safe_lib.h"
#include "distributed/colocation_utils.h"
#include "distributed/connection_management.h"
#include "distributed/coordinator_protocol.h"
#include "distributed/distributed_planner.h"
#include "distributed/hash_helpers.h"
#include "distributed/listutils.h"
#include "distributed/metadata_cache.h"
#include "distributed/metadata_sync.h"
#include "distributed/multi_join_order.h"
#include "distributed/multi_logical_replication.h"
#include "distributed/multi_partitioning_utils.h"
#include "distributed/priority.h"
#include "distributed/remote_commands.h"
#include "distributed/resource_lock.h"
#include "distributed/shard_cleaner.h"
#include "distributed/shard_rebalancer.h"
#include "distributed/shard_transfer.h"
#include "distributed/version_compat.h"
#define CURRENT_LOG_POSITION_COMMAND "SELECT pg_current_wal_lsn()"
/* decimal representation of Adler-16 hash value of citus_shard_move_publication */
#define SHARD_MOVE_ADVISORY_LOCK_FIRST_KEY 44000
/* decimal representation of Adler-16 hash value of citus_shard_move_subscription */
#define SHARD_MOVE_ADVISORY_LOCK_SECOND_KEY 55152
static const char *publicationPrefix[] = {
[SHARD_MOVE] = "citus_shard_move_publication_",
[SHARD_SPLIT] = "citus_shard_split_publication_",
};
static const char *replicationSlotPrefix[] = {
[SHARD_MOVE] = "citus_shard_move_slot_",
[SHARD_SPLIT] = "citus_shard_split_slot_",
};
/*
* IMPORTANT: All the subscription names should start with "citus_". Otherwise
* our utility hook does not defend against non-superusers altering or dropping
* them, which is important for security purposes.
*
* We should also keep these in sync with IsCitusShardTransferBackend().
*/
static const char *subscriptionPrefix[] = {
[SHARD_MOVE] = "citus_shard_move_subscription_",
[SHARD_SPLIT] = "citus_shard_split_subscription_",
};
static const char *subscriptionRolePrefix[] = {
[SHARD_MOVE] = "citus_shard_move_subscription_role_",
[SHARD_SPLIT] = "citus_shard_split_subscription_role_",
};
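/*
 * Illustration (not part of the original file; the exact suffix layout is an
 * assumption): for a shard move targeting node 3 of a table owned by role
 * OID 16390, the generated object names would look roughly like:
 *
 *   publication:       citus_shard_move_publication_3_16390
 *   replication slot:  citus_shard_move_slot_3_16390_<operation id>
 *   subscription:      citus_shard_move_subscription_16390
 *   subscription role: citus_shard_move_subscription_role_16390
 *
 * The actual suffixes are produced by PublicationName(), SubscriptionName()
 * and ReplicationSlotNameForNodeAndOwnerForOperation(), which are used later
 * in this file.
 */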
/* GUC variable, defaults to 2 hours */
int LogicalReplicationTimeout = 2 * 60 * 60 * 1000;
/* see the comment in master_move_shard_placement */
bool PlacementMovedUsingLogicalReplicationInTX = false;
/* report in every 10 seconds */
static int logicalReplicationProgressReportTimeout = 10 * 1000;
static List * PrepareReplicationSubscriptionList(List *shardList);
static List * GetReplicaIdentityCommandListForShard(Oid relationId, uint64 shardId);
static List * GetIndexCommandListForShardBackingReplicaIdentity(Oid relationId,
uint64 shardId);
static void CreatePostLogicalReplicationDataLoadObjects(List *logicalRepTargetList,
LogicalRepType type);
static void ExecuteCreateIndexCommands(List *logicalRepTargetList);
static void ExecuteCreateConstraintsBackedByIndexCommands(List *logicalRepTargetList);
static List * ConvertNonExistingPlacementDDLCommandsToTasks(List *shardCommandList,
char *targetNodeName,
int targetNodePort);
static void ExecuteClusterOnCommands(List *logicalRepTargetList);
static void ExecuteCreateIndexStatisticsCommands(List *logicalRepTargetList);
static void ExecuteRemainingPostLoadTableCommands(List *logicalRepTargetList);
static char * escape_param_str(const char *str);
static XLogRecPtr GetRemoteLSN(MultiConnection *connection, char *command);
static void WaitForMiliseconds(long timeout);
static XLogRecPtr GetSubscriptionPosition(
GroupedLogicalRepTargets *groupedLogicalRepTargets);
static void AcquireLogicalReplicationLock(void);
static HTAB * CreateShardMovePublicationInfoHash(WorkerNode *targetNode,
List *shardIntervals);
static List * CreateShardMoveLogicalRepTargetList(HTAB *publicationInfoHash,
List *shardList);
static void WaitForGroupedLogicalRepTargetsToCatchUp(XLogRecPtr sourcePosition,
GroupedLogicalRepTargets *
groupedLogicalRepTargets);
/*
* LogicallyReplicateShards replicates a list of shards from one node to another
* using logical replication. Once replication is reasonably caught up, writes
* are blocked and then the publication and subscription are dropped.
*
* The caller of the function should ensure that logical replication is applicable
* for the given shards, source and target nodes. Also, the caller is responsible
* for ensuring that the input shard list consists of co-located distributed tables
* or a single shard.
*/
void
LogicallyReplicateShards(List *shardList, char *sourceNodeName, int sourceNodePort,
char *targetNodeName, int targetNodePort)
{
AcquireLogicalReplicationLock();
char *superUser = CitusExtensionOwnerName();
char *databaseName = get_database_name(MyDatabaseId);
int connectionFlags = FORCE_NEW_CONNECTION;
List *replicationSubscriptionList = PrepareReplicationSubscriptionList(shardList);
/* no shards to move */
if (list_length(replicationSubscriptionList) == 0)
{
return;
}
MultiConnection *sourceConnection =
GetNodeUserDatabaseConnection(connectionFlags, sourceNodeName, sourceNodePort,
superUser, databaseName);
/*
* Operations on publications and replication slots cannot run in a
* transaction block. We claim the connections exclusively to ensure they
* do not get used for metadata syncing, which does open a transaction
* block.
*/
ClaimConnectionExclusively(sourceConnection);
WorkerNode *sourceNode = FindWorkerNode(sourceNodeName, sourceNodePort);
WorkerNode *targetNode = FindWorkerNode(targetNodeName, targetNodePort);
HTAB *publicationInfoHash = CreateShardMovePublicationInfoHash(
targetNode, replicationSubscriptionList);
List *logicalRepTargetList = CreateShardMoveLogicalRepTargetList(publicationInfoHash,
shardList);
HTAB *groupedLogicalRepTargetsHash = CreateGroupedLogicalRepTargetsHash(
logicalRepTargetList);
CreateGroupedLogicalRepTargetsConnections(groupedLogicalRepTargetsHash, superUser,
databaseName);
MultiConnection *sourceReplicationConnection =
GetReplicationConnection(sourceConnection->hostname, sourceConnection->port);
/* set up the publication on the source and subscription on the target */
CreatePublications(sourceConnection, publicationInfoHash);
char *snapshot = CreateReplicationSlots(
sourceConnection,
sourceReplicationConnection,
logicalRepTargetList,
"pgoutput");
CreateSubscriptions(
sourceConnection,
sourceConnection->database,
logicalRepTargetList);
/* only useful for isolation testing, see the function comment for the details */
ConflictWithIsolationTestingBeforeCopy();
/*
* We have to create the primary key (or any other replica identity)
* before the update/delete operations that are queued will be
* replicated. Because if the replica identity does not exist on the
* target, the replication would fail.
*
 * So the latest possible moment we could do this is right after the
 * initial data COPY, but before enabling the subscriptions. It might
 * seem like a good idea to do it after the initial data COPY, since
 * it's generally the rule that it's cheaper to build an index at once
 * than to create it incrementally. This general rule is why we create
 * all the regular indexes as late during the move as possible.
*
* But as it turns out in practice it's not as clear cut, and we saw a
* speed degradation in the time it takes to move shards when doing the
* replica identity creation after the initial COPY. So, instead we
* keep it before the COPY.
*/
CreateReplicaIdentities(logicalRepTargetList);
UpdatePlacementUpdateStatusForShardIntervalList(
shardList,
sourceNodeName,
sourceNodePort,
PLACEMENT_UPDATE_STATUS_COPYING_DATA);
CopyShardsToNode(sourceNode, targetNode, shardList, snapshot);
/*
* We can close this connection now, because we're done copying the
* data and thus don't need access to the snapshot anymore. The
* replication slot will still be at the same LSN, because the
* subscriptions have not been enabled yet.
*/
CloseConnection(sourceReplicationConnection);
/*
* Start the replication and copy all data
*/
CompleteNonBlockingShardTransfer(shardList,
sourceConnection,
publicationInfoHash,
logicalRepTargetList,
groupedLogicalRepTargetsHash,
SHARD_MOVE);
/*
* We use these connections exclusively for subscription management,
 * because otherwise subsequent metadata changes may inadvertently use
* these connections instead of the connections that were used to
* grab locks in BlockWritesToShardList.
*/
CloseGroupedLogicalRepTargetsConnections(groupedLogicalRepTargetsHash);
CloseConnection(sourceConnection);
}
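/*
 * Illustrative sketch (assumed SQL; the real commands are built by the helper
 * functions called in LogicallyReplicateShards above): the move corresponds
 * roughly to the following commands on the source and target nodes:
 *
 *   -- on the source node:
 *   CREATE PUBLICATION citus_shard_move_publication_... FOR TABLE <shards>;
 *   CREATE_REPLICATION_SLOT citus_shard_move_slot_... LOGICAL pgoutput;
 *
 *   -- on the target node, pointing at the source:
 *   CREATE SUBSCRIPTION citus_shard_move_subscription_...
 *       CONNECTION '...' PUBLICATION citus_shard_move_publication_...
 *       WITH (create_slot = false, copy_data = false, enabled = false,
 *             slot_name = '...');
 *
 * The subscription starts disabled so the initial data COPY can run against
 * the slot's exported snapshot first; it is enabled afterwards in
 * CompleteNonBlockingShardTransfer().
 */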
/*
* CreateGroupedLogicalRepTargetsHash creates a hashmap that groups the subscriptions
* logicalRepTargetList by node. This is useful for cases where we want to
* iterate the subscriptions by node, so we can batch certain operations, such
* as checking subscription readiness.
*/
HTAB *
CreateGroupedLogicalRepTargetsHash(List *logicalRepTargetList)
{
HTAB *logicalRepTargetsHash = CreateSimpleHash(uint32, GroupedLogicalRepTargets);
LogicalRepTarget *target = NULL;
foreach_ptr(target, logicalRepTargetList)
{
bool found = false;
GroupedLogicalRepTargets *groupedLogicalRepTargets =
(GroupedLogicalRepTargets *) hash_search(
logicalRepTargetsHash,
&target->replicationSlot->targetNodeId,
HASH_ENTER,
&found);
if (!found)
{
groupedLogicalRepTargets->logicalRepTargetList = NIL;
groupedLogicalRepTargets->superuserConnection = NULL;
}
groupedLogicalRepTargets->logicalRepTargetList =
lappend(groupedLogicalRepTargets->logicalRepTargetList, target);
}
return logicalRepTargetsHash;
}
/*
* CompleteNonBlockingShardTransfer uses logical replication to apply the changes
* made on the source to the target. It also runs all DDL on the target shards
* that need to be run after the data copy.
*
* For shard splits it skips the partition hierarchy and foreign key creation
* though, since those need to happen after the metadata is updated.
*/
void
CompleteNonBlockingShardTransfer(List *shardList,
MultiConnection *sourceConnection,
HTAB *publicationInfoHash,
List *logicalRepTargetList,
HTAB *groupedLogicalRepTargetsHash,
LogicalRepType type)
{
/* Start applying the changes from the replication slots to catch up. */
EnableSubscriptions(logicalRepTargetList);
UpdatePlacementUpdateStatusForShardIntervalList(
shardList,
sourceConnection->hostname,
sourceConnection->port,
PLACEMENT_UPDATE_STATUS_CATCHING_UP);
/*
* Wait until all the subscriptions are caught up to changes that
* happened after the initial COPY on the shards.
*/
WaitForAllSubscriptionsToCatchUp(sourceConnection, groupedLogicalRepTargetsHash);
UpdatePlacementUpdateStatusForShardIntervalList(
shardList,
sourceConnection->hostname,
sourceConnection->port,
PLACEMENT_UPDATE_STATUS_CREATING_CONSTRAINTS);
/*
* Now lets create the post-load objects, such as the indexes, constraints
* and partitioning hierarchy. Once they are done, wait until the replication
* catches up again. So we don't block writes too long.
*/
CreatePostLogicalReplicationDataLoadObjects(logicalRepTargetList, type);
UpdatePlacementUpdateStatusForShardIntervalList(
shardList,
sourceConnection->hostname,
sourceConnection->port,
PLACEMENT_UPDATE_STATUS_FINAL_CATCH_UP);
WaitForAllSubscriptionsToCatchUp(sourceConnection, groupedLogicalRepTargetsHash);
/* only useful for isolation testing, see the function comment for the details */
ConflictWithIsolationTestingAfterCopy();
/*
* We're almost done, we'll block the writes to the shards that we're
* replicating and expect all the subscription to catch up quickly
* afterwards.
*
* Notice that although shards in partitioned relation are excluded from
* logical replication, they are still locked against modification, and
* foreign constraints are created on them too.
*/
BlockWritesToShardList(shardList);
WaitForAllSubscriptionsToCatchUp(sourceConnection, groupedLogicalRepTargetsHash);
if (type != SHARD_SPLIT)
{
UpdatePlacementUpdateStatusForShardIntervalList(
shardList,
sourceConnection->hostname,
sourceConnection->port,
PLACEMENT_UPDATE_STATUS_CREATING_FOREIGN_KEYS);
/*
* We're creating the foreign constraints to reference tables after the
* data is already replicated and all the necessary locks are acquired.
*
* We prefer to do it here because the placements of reference tables
* are always valid, and any modification during the shard move would
* cascade to the hash distributed tables' shards if we had created
* the constraints earlier. The same is true for foreign keys between
* tables owned by different users.
*/
CreateUncheckedForeignKeyConstraints(logicalRepTargetList);
}
UpdatePlacementUpdateStatusForShardIntervalList(
shardList,
sourceConnection->hostname,
sourceConnection->port,
PLACEMENT_UPDATE_STATUS_COMPLETING);
}
/*
* CreateShardMovePublicationInfoHash creates hashmap of PublicationInfos for a
* shard move. Even though we only support moving a shard to a single target
* node, the resulting hashmap can have multiple PublicationInfos in it.
* The reason for that is that we need a separate publication for each
* distributed table owning user in the shard group.
*/
static HTAB *
CreateShardMovePublicationInfoHash(WorkerNode *targetNode, List *shardIntervals)
{
HTAB *publicationInfoHash = CreateSimpleHash(NodeAndOwner, PublicationInfo);
ShardInterval *shardInterval = NULL;
foreach_ptr(shardInterval, shardIntervals)
{
NodeAndOwner key;
key.nodeId = targetNode->nodeId;
key.tableOwnerId = TableOwnerOid(shardInterval->relationId);
bool found = false;
PublicationInfo *publicationInfo =
(PublicationInfo *) hash_search(publicationInfoHash, &key,
HASH_ENTER,
&found);
if (!found)
{
publicationInfo->name = PublicationName(SHARD_MOVE, key.nodeId,
key.tableOwnerId);
publicationInfo->shardIntervals = NIL;
}
publicationInfo->shardIntervals =
lappend(publicationInfo->shardIntervals, shardInterval);
}
return publicationInfoHash;
}
/*
* CreateShardMoveLogicalRepTargetList creates the list containing all the
* subscriptions that should be connected to the publications in the given
* publicationHash.
*/
static List *
CreateShardMoveLogicalRepTargetList(HTAB *publicationInfoHash, List *shardList)
{
List *logicalRepTargetList = NIL;
HASH_SEQ_STATUS status;
hash_seq_init(&status, publicationInfoHash);
Oid nodeId = InvalidOid;
PublicationInfo *publication = NULL;
while ((publication = (PublicationInfo *) hash_seq_search(&status)) != NULL)
{
Oid ownerId = publication->key.tableOwnerId;
nodeId = publication->key.nodeId;
LogicalRepTarget *target = palloc0(sizeof(LogicalRepTarget));
target->subscriptionName = SubscriptionName(SHARD_MOVE, ownerId);
target->tableOwnerId = ownerId;
target->publication = publication;
publication->target = target;
target->newShards = NIL;
target->subscriptionOwnerName = SubscriptionRoleName(SHARD_MOVE, ownerId);
target->replicationSlot = palloc0(sizeof(ReplicationSlotInfo));
target->replicationSlot->name =
ReplicationSlotNameForNodeAndOwnerForOperation(SHARD_MOVE,
nodeId,
ownerId,
CurrentOperationId);
target->replicationSlot->targetNodeId = nodeId;
target->replicationSlot->tableOwnerId = ownerId;
logicalRepTargetList = lappend(logicalRepTargetList, target);
}
ShardInterval *shardInterval = NULL;
foreach_ptr(shardInterval, shardList)
{
NodeAndOwner key;
key.nodeId = nodeId;
key.tableOwnerId = TableOwnerOid(shardInterval->relationId);
bool found = false;
publication = (PublicationInfo *) hash_search(
publicationInfoHash,
&key,
HASH_FIND,
&found);
if (!found)
{
ereport(ERROR, errmsg("Could not find publication matching a split"));
}
publication->target->newShards = lappend(
publication->target->newShards, shardInterval);
}
return logicalRepTargetList;
}
/*
* AcquireLogicalReplicationLock tries to acquire a lock for logical
* replication. We need this lock, because at the start of logical replication
* we clean up old subscriptions and publications. Because of this cleanup it's
* not safe to run multiple logical replication based shard moves at the same
* time. If multiple logical replication moves would run at the same time, the
* second move might clean up subscriptions and publications that are in use by
* another move.
*/
static void
AcquireLogicalReplicationLock(void)
{
LOCKTAG tag;
SET_LOCKTAG_LOGICAL_REPLICATION(tag);
LockAcquire(&tag, ExclusiveLock, false, false);
}
/*
* PrepareReplicationSubscriptionList returns list of shards to be logically
* replicated from given shard list. This is needed because Postgres does not
 * allow logical replication on partitioned tables; therefore, shards belonging
 * to partitioned tables should be excluded from the logical replication
 * subscription list.
*/
static List *
PrepareReplicationSubscriptionList(List *shardList)
{
List *replicationSubscriptionList = NIL;
ListCell *shardCell = NULL;
foreach(shardCell, shardList)
{
ShardInterval *shardInterval = (ShardInterval *) lfirst(shardCell);
if (!PartitionedTable(shardInterval->relationId))
{
/* only add regular and child tables to subscription */
replicationSubscriptionList = lappend(replicationSubscriptionList,
shardInterval);
}
}
return replicationSubscriptionList;
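/*
 * Illustration (assumed schema, not part of the original file): for a
 * distributed partitioned table "events" with leaf partitions "events_2023"
 * and "events_2024", only the shards of the leaf partitions end up in the
 * returned subscription list; shards of the parent "events" are filtered out
 * above, because (as the comment on PrepareReplicationSubscriptionList notes)
 * the targeted PostgreSQL versions do not allow publishing a partitioned
 * parent relation for logical replication. The parent shards are instead
 * recreated later via CreatePartitioningHierarchy().
 */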
}
/*
* CreateReplicaIdentities creates replica identities for all the shards that
* are part of the given subscriptions.
*/
void
CreateReplicaIdentities(List *logicalRepTargetList)
{
LogicalRepTarget *target = NULL;
foreach_ptr(target, logicalRepTargetList)
{
MultiConnection *superuserConnection = target->superuserConnection;
CreateReplicaIdentitiesOnNode(
target->newShards,
superuserConnection->hostname,
superuserConnection->port);
}
}
/*
* CreateReplicaIdentitiesOnNode gets a shardList and creates all the replica
* identities on the shards in the given node.
*/
void
CreateReplicaIdentitiesOnNode(List *shardList, char *nodeName, int32 nodePort)
{
MemoryContext localContext = AllocSetContextCreate(CurrentMemoryContext,
"CreateReplicaIdentitiesOnNode",
ALLOCSET_DEFAULT_SIZES);
MemoryContext oldContext = MemoryContextSwitchTo(localContext);
ShardInterval *shardInterval;
foreach_ptr(shardInterval, shardList)
{
uint64 shardId = shardInterval->shardId;
Oid relationId = shardInterval->relationId;
List *backingIndexCommandList =
GetIndexCommandListForShardBackingReplicaIdentity(relationId, shardId);
List *replicaIdentityShardCommandList =
GetReplicaIdentityCommandListForShard(relationId, shardId);
List *commandList =
list_concat(backingIndexCommandList, replicaIdentityShardCommandList);
if (commandList != NIL)
{
ereport(DEBUG1, (errmsg("Creating replica identity for shard %ld on "
"target node %s:%d", shardId, nodeName, nodePort)));
SendCommandListToWorkerOutsideTransaction(nodeName, nodePort,
TableOwner(relationId),
commandList);
}
MemoryContextReset(localContext);
}
MemoryContextSwitchTo(oldContext);
}
/*
* GetIndexCommandListForShardBackingReplicaIdentity returns all the create index
* commands that are needed to create replica identity. If the table doesn't have
* a replica identity, the function returns NIL.
*/
static List *
GetIndexCommandListForShardBackingReplicaIdentity(Oid relationId, uint64 shardId)
{
List *commandList = NIL;
Relation relation = table_open(relationId, AccessShareLock);
Oid replicaIdentityIndex = GetRelationIdentityOrPK(relation);
table_close(relation, NoLock);
if (OidIsValid(replicaIdentityIndex))
{
/*
* The replica identity is backed by an index or primary key,
* so get the index/pkey definition first.
*/
HeapTuple indexTuple =
SearchSysCache1(INDEXRELID, ObjectIdGetDatum(replicaIdentityIndex));
if (!HeapTupleIsValid(indexTuple))
{
/* should not happen */
elog(ERROR, "cache lookup failed for index %u", replicaIdentityIndex);
}
Form_pg_index indexForm = ((Form_pg_index) GETSTRUCT(indexTuple));
List *indexCommandTableDDLList = NIL;
int indexFlags = INCLUDE_INDEX_ALL_STATEMENTS;
GatherIndexAndConstraintDefinitionList(indexForm, &indexCommandTableDDLList,
indexFlags);
List *indexCommandShardDDLList =
WorkerApplyShardDDLCommandList(indexCommandTableDDLList, shardId);
commandList = list_concat(commandList, indexCommandShardDDLList);
ReleaseSysCache(indexTuple);
}
return commandList;
}
/*
* GetReplicaIdentityCommandListForShard returns the create replica identity
 * commands that are needed to create the replica identity. If the table
 * doesn't have a replica identity, the function returns NIL.
*/
static List *
GetReplicaIdentityCommandListForShard(Oid relationId, uint64 shardId)
{
List *replicaIdentityTableDDLCommand =
GetTableReplicaIdentityCommand(relationId);
List *replicaIdentityShardCommandList =
WorkerApplyShardDDLCommandList(replicaIdentityTableDDLCommand, shardId);
return replicaIdentityShardCommandList;
}
/*
* CreatePostLogicalReplicationDataLoadObjects gets a shardList and creates all
* the objects that can be created after the data is moved with logical replication.
*/
static void
CreatePostLogicalReplicationDataLoadObjects(List *logicalRepTargetList,
LogicalRepType type)
{
/*
* We create indexes in 4 steps.
* - CREATE INDEX statements
* - CREATE CONSTRAINT statements that are backed by
* indexes (unique and exclude constraints)
* - ALTER TABLE %s CLUSTER ON %s
* - ALTER INDEX %s ALTER COLUMN %d SET STATISTICS %d
*
 * At each step, we can execute commands in parallel. For example,
* multiple indexes on the shard table or indexes for the colocated shards
* can be created in parallel. However, the latter two steps, clustering the
* table and setting the statistics of indexes, depends on the indexes being
* created. That's why the execution is divided into four distinct stages.
*/
ExecuteCreateIndexCommands(logicalRepTargetList);
ExecuteCreateConstraintsBackedByIndexCommands(logicalRepTargetList);
ExecuteClusterOnCommands(logicalRepTargetList);
ExecuteCreateIndexStatisticsCommands(logicalRepTargetList);
/*
* Once the indexes are created, there are few more objects like triggers and table
* statistics that should be created after the data move.
*/
ExecuteRemainingPostLoadTableCommands(logicalRepTargetList);
/*
* Creating the partitioning hierarchy errors out in shard splits when
*/
if (type != SHARD_SPLIT)
{
/* create partitioning hierarchy, if any */
CreatePartitioningHierarchy(logicalRepTargetList);
}
}
/*
* ExecuteCreateIndexCommands gets a shardList and creates all the indexes
* for the given shardList in the given target node.
*
* The execution is done in parallel, and throws an error if any of the
* commands fail.
*/
static void
ExecuteCreateIndexCommands(List *logicalRepTargetList)
{
List *taskList = NIL;
LogicalRepTarget *target = NULL;
foreach_ptr(target, logicalRepTargetList)
{
ShardInterval *shardInterval = NULL;
foreach_ptr(shardInterval, target->newShards)
{
Oid relationId = shardInterval->relationId;
List *tableCreateIndexCommandList =
GetTableIndexAndConstraintCommandsExcludingReplicaIdentity(relationId,
INCLUDE_CREATE_INDEX_STATEMENTS);
List *shardCreateIndexCommandList =
WorkerApplyShardDDLCommandList(tableCreateIndexCommandList,
shardInterval->shardId);
List *taskListForShard =
ConvertNonExistingPlacementDDLCommandsToTasks(
shardCreateIndexCommandList,
target->superuserConnection->hostname,
target->superuserConnection->port);
taskList = list_concat(taskList, taskListForShard);
}
}
/*
* We are going to create indexes and constraints using the current user. That is
* alright because an index/constraint always belongs to the owner of the table,
* and Citus already ensures that the current user owns all the tables that are
* moved.
*
* CREATE INDEX commands acquire ShareLock on a relation. So, it is
* allowed to run multiple CREATE INDEX commands concurrently on a table
* and across different tables (e.g., shards).
*/
ereport(DEBUG1, (errmsg("Creating post logical replication objects "
"(indexes)")));
ExecuteTaskListOutsideTransaction(ROW_MODIFY_NONE, taskList,
MaxAdaptiveExecutorPoolSize,
NIL);
}
/*
* ExecuteCreateConstraintsBackedByIndexCommands gets a shardList and creates all the constraints
* that are backed by indexes for the given shardList in the given target node.
*
* The execution is done in sequential mode, and throws an error if any of the
* commands fail.
*/
static void
ExecuteCreateConstraintsBackedByIndexCommands(List *logicalRepTargetList)
{
ereport(DEBUG1, (errmsg("Creating post logical replication objects "
"(constraints backed by indexes)")));
MemoryContext localContext = AllocSetContextCreate(CurrentMemoryContext,
"CreateConstraintsBackedByIndexContext",
ALLOCSET_DEFAULT_SIZES);
MemoryContext oldContext = MemoryContextSwitchTo(localContext);
LogicalRepTarget *target = NULL;
foreach_ptr(target, logicalRepTargetList)
{
ShardInterval *shardInterval = NULL;
foreach_ptr(shardInterval, target->newShards)
{
Oid relationId = shardInterval->relationId;
List *tableCreateConstraintCommandList =
GetTableIndexAndConstraintCommandsExcludingReplicaIdentity(relationId,
INCLUDE_CREATE_CONSTRAINT_STATEMENTS);
if (tableCreateConstraintCommandList == NIL)
{
/* no constraints backed by indexes, skip */
MemoryContextReset(localContext);
continue;
}
List *shardCreateConstraintCommandList =
WorkerApplyShardDDLCommandList(tableCreateConstraintCommandList,
shardInterval->shardId);
char *tableOwner = TableOwner(shardInterval->relationId);
SendCommandListToWorkerOutsideTransaction(
target->superuserConnection->hostname,
target->superuserConnection->port,
tableOwner,
shardCreateConstraintCommandList);
MemoryContextReset(localContext);
}
}
MemoryContextSwitchTo(oldContext);
}
/*
 * ConvertNonExistingPlacementDDLCommandsToTasks generates one task per input
* element in shardCommandList.
*
* The generated tasks' placements do not exist (yet). We are generating
* fake placements for the tasks.
*/
static List *
ConvertNonExistingPlacementDDLCommandsToTasks(List *shardCommandList,
char *targetNodeName,
int targetNodePort)
{
WorkerNode *workerNode = FindWorkerNodeOrError(targetNodeName, targetNodePort);
List *taskList = NIL;
uint64 jobId = INVALID_JOB_ID;
ListCell *commandCell = NULL;
int taskId = 1;
foreach(commandCell, shardCommandList)
{
char *command = (char *) lfirst(commandCell);
Task *task = CreateBasicTask(jobId, taskId, DDL_TASK, command);
/* this placement currently does not exist */
ShardPlacement *taskPlacement = CitusMakeNode(ShardPlacement);
SetPlacementNodeMetadata(taskPlacement, workerNode);
task->taskPlacementList = list_make1(taskPlacement);
taskList = lappend(taskList, task);
taskId++;
}
return taskList;
}
/*
* ExecuteClusterOnCommands gets a shardList and creates all the CLUSTER ON commands
* for the given shardList in the given target node.
*
* The execution is done in parallel, and in case of any failure, the transaction
* is aborted.
*/
static void
ExecuteClusterOnCommands(List *logicalRepTargetList)
{
List *taskList = NIL;
LogicalRepTarget *target = NULL;
foreach_ptr(target, logicalRepTargetList)
{
ShardInterval *shardInterval = NULL;
foreach_ptr(shardInterval, target->newShards)
{
Oid relationId = shardInterval->relationId;
List *tableAlterTableClusterOnCommandList =
GetTableIndexAndConstraintCommandsExcludingReplicaIdentity(relationId,
INCLUDE_INDEX_CLUSTERED_STATEMENTS);
List *shardAlterTableClusterOnCommandList =
WorkerApplyShardDDLCommandList(tableAlterTableClusterOnCommandList,
shardInterval->shardId);
List *taskListForShard =
ConvertNonExistingPlacementDDLCommandsToTasks(
shardAlterTableClusterOnCommandList,
target->superuserConnection->hostname,
target->superuserConnection->port);
taskList = list_concat(taskList, taskListForShard);
}
}
ereport(DEBUG1, (errmsg("Creating post logical replication objects "
"(CLUSTER ON)")));
ExecuteTaskListOutsideTransaction(ROW_MODIFY_NONE, taskList,
MaxAdaptiveExecutorPoolSize,
NIL);
}
/*
* ExecuteCreateIndexStatisticsCommands gets a shardList and creates
* all the statistics objects for the indexes in the given target node.
*
 * The execution is done sequentially, and in case of any failure, the transaction
* is aborted.
*/
static void
ExecuteCreateIndexStatisticsCommands(List *logicalRepTargetList)
{
ereport(DEBUG1, (errmsg("Creating post logical replication objects "
"(index statistics)")));
MemoryContext localContext = AllocSetContextCreate(CurrentMemoryContext,
"CreateIndexStatisticsContext",
ALLOCSET_DEFAULT_SIZES);
MemoryContext oldContext = MemoryContextSwitchTo(localContext);
LogicalRepTarget *target = NULL;
foreach_ptr(target, logicalRepTargetList)
{
ShardInterval *shardInterval = NULL;
foreach_ptr(shardInterval, target->newShards)
{
Oid relationId = shardInterval->relationId;
List *tableAlterIndexSetStatisticsCommandList =
GetTableIndexAndConstraintCommandsExcludingReplicaIdentity(relationId,
INCLUDE_INDEX_STATISTICS_STATEMENTTS);
List *shardAlterIndexSetStatisticsCommandList =
WorkerApplyShardDDLCommandList(tableAlterIndexSetStatisticsCommandList,
shardInterval->shardId);
if (shardAlterIndexSetStatisticsCommandList == NIL)
{
/* no index statistics exist, skip */
MemoryContextReset(localContext);
continue;
}
/*
* These remaining operations do not require significant resources, so no
* need to create them in parallel.
*/
char *tableOwner = TableOwner(shardInterval->relationId);
SendCommandListToWorkerOutsideTransaction(
target->superuserConnection->hostname,
target->superuserConnection->port,
tableOwner,
shardAlterIndexSetStatisticsCommandList);
MemoryContextReset(localContext);
}
}
MemoryContextSwitchTo(oldContext);
}
/*
* ExecuteRemainingPostLoadTableCommands gets a shardList and creates
* all the remaining post load objects other than the indexes
* in the given target node.
*/
static void
ExecuteRemainingPostLoadTableCommands(List *logicalRepTargetList)
{
ereport(DEBUG1, (errmsg("Creating post logical replication objects "
"(triggers and table statistics)"
)));
MemoryContext localContext = AllocSetContextCreate(CurrentMemoryContext,
"CreateTableStatisticsContext",
ALLOCSET_DEFAULT_SIZES);
MemoryContext oldContext = MemoryContextSwitchTo(localContext);
LogicalRepTarget *target = NULL;
foreach_ptr(target, logicalRepTargetList)
{
ShardInterval *shardInterval = NULL;
foreach_ptr(shardInterval, target->newShards)
{
Oid relationId = shardInterval->relationId;
bool includeIndexes = false;
bool includeReplicaIdentity = false;
List *tablePostLoadTableCommandList =
GetPostLoadTableCreationCommands(relationId, includeIndexes,
includeReplicaIdentity);
List *shardPostLoadTableCommandList =
WorkerApplyShardDDLCommandList(tablePostLoadTableCommandList,
shardInterval->shardId);
if (shardPostLoadTableCommandList == NIL)
{
/* no post load table commands exist, skip */
continue;
}
/*
* These remaining operations do not require significant resources, so no
* need to create them in parallel.
*/
char *tableOwner = TableOwner(shardInterval->relationId);
SendCommandListToWorkerOutsideTransaction(
target->superuserConnection->hostname,
target->superuserConnection->port,
tableOwner,
shardPostLoadTableCommandList);
MemoryContextReset(localContext);
}
}
MemoryContextSwitchTo(oldContext);
}
/*
 * CreatePartitioningHierarchy takes a list of logical replication targets and
 * creates the partitioning hierarchy between their shards, if any.
*/
void
CreatePartitioningHierarchy(List *logicalRepTargetList)
{
ereport(DEBUG1, (errmsg("Creating post logical replication objects "
"(partitioning hierarchy)")));
MemoryContext localContext = AllocSetContextCreate(CurrentMemoryContext,
"CreatePartitioningHierarchy",
ALLOCSET_DEFAULT_SIZES);
MemoryContext oldContext = MemoryContextSwitchTo(localContext);
LogicalRepTarget *target = NULL;
foreach_ptr(target, logicalRepTargetList)
{
ShardInterval *shardInterval = NULL;
foreach_ptr(shardInterval, target->newShards)
{
if (PartitionTable(shardInterval->relationId))
{
char *attachPartitionCommand =
GenerateAttachShardPartitionCommand(shardInterval);
char *tableOwner = TableOwner(shardInterval->relationId);
/*
 * Attaching a partition may acquire conflicting locks when done in
 * parallel, so attach them sequentially. Attaching a partition is also a
 * quick operation, so it is fine to execute sequentially.
*/
MultiConnection *connection =
GetNodeUserDatabaseConnection(OUTSIDE_TRANSACTION,
target->superuserConnection->hostname,
target->superuserConnection->port,
tableOwner, NULL);
ExecuteCriticalRemoteCommand(connection, attachPartitionCommand);
MemoryContextReset(localContext);
}
}
}
MemoryContextSwitchTo(oldContext);
}
/*
* CreateUncheckedForeignKeyConstraints is used to create the foreign
* constraints on the logical replication target without checking that they are
* actually valid.
*
 * We skip the validation phase of foreign keys during a shard
 * move/copy/split because the validation is pretty costly and, given that the
 * source placements are already valid, re-validating on the target nodes is
 * unnecessary.
*/
void
CreateUncheckedForeignKeyConstraints(List *logicalRepTargetList)
{
MemoryContext localContext =
AllocSetContextCreate(CurrentMemoryContext,
"CreateKeyForeignConstraints",
ALLOCSET_DEFAULT_SIZES);
MemoryContext oldContext = MemoryContextSwitchTo(localContext);
/*
* Iterate over all the shards in the shard group.
*/
LogicalRepTarget *target = NULL;
foreach_ptr(target, logicalRepTargetList)
{
ShardInterval *shardInterval = NULL;
/*
 * Iterate over the new shards of the given target and create constraints.
*/
foreach_ptr(shardInterval, target->newShards)
{
List *commandList = CopyShardForeignConstraintCommandList(
shardInterval);
commandList = list_concat(
list_make1("SET LOCAL citus.skip_constraint_validation TO ON;"),
commandList);
SendCommandListToWorkerOutsideTransactionWithConnection(
target->superuserConnection,
commandList);
MemoryContextReset(localContext);
}
}
MemoryContextSwitchTo(oldContext);
}
/*
* ConflictWithIsolationTestingBeforeCopy is only useful to test
* get_rebalance_progress by pausing before doing the actual copy. This way we
* can see the state of the tables at that point. This should not be called by
 * any code path except for the code paths that move and split shards.
*
* Note that since the cost of calling this function is pretty low, we prefer
 * to use it in non-assert builds as well, so the behaviour does not diverge.
*/
extern void
ConflictWithIsolationTestingBeforeCopy(void)
{
LOCKTAG tag;
const bool sessionLock = false;
const bool dontWait = false;
if (RunningUnderCitusTestSuite)
{
SET_LOCKTAG_ADVISORY(tag, MyDatabaseId,
SHARD_MOVE_ADVISORY_LOCK_SECOND_KEY,
SHARD_MOVE_ADVISORY_LOCK_FIRST_KEY, 2);
/* uses ShareLock so concurrent moves don't conflict with each other */
(void) LockAcquire(&tag, ShareLock, sessionLock, dontWait);
}
}
/*
* ConflictWithIsolationTestingAfterCopy is only useful for two types of tests.
* 1. Testing the output of get_rebalance_progress after the copy is completed,
* but before the move is completely finished. Because finishing the move
* will clear the contents of get_rebalance_progress.
* 2. To test that our non-blocking shard moves/splits actually don't block
* writes. Since logically replicating shards does eventually block
* modifications, it becomes tricky to use isolation tester to show
* concurrent behaviour of online shard rebalancing and modification
* queries. So, during logical replication we call this function at
* the end of the catchup, right before blocking writes.
*
* Note that since the cost of calling this function is pretty low, we prefer
 * to use it in non-assert builds as well, so the behaviour does not diverge.
*/
extern void
ConflictWithIsolationTestingAfterCopy(void)
{
LOCKTAG tag;
const bool sessionLock = false;
const bool dontWait = false;
if (RunningUnderCitusTestSuite)
{
SET_LOCKTAG_ADVISORY(tag, MyDatabaseId,
SHARD_MOVE_ADVISORY_LOCK_FIRST_KEY,
SHARD_MOVE_ADVISORY_LOCK_SECOND_KEY, 2);
/* uses ShareLock so concurrent moves don't conflict with each other */
(void) LockAcquire(&tag, ShareLock, sessionLock, dontWait);
}
}
/*
* PublicationName returns the name of the publication for the given node and
* table owner.
*/
char *
PublicationName(LogicalRepType type, uint32_t nodeId, Oid ownerId)
{
return psprintf("%s%u_%u_%lu", publicationPrefix[type],
nodeId, ownerId, CurrentOperationId);
}
/*
* ReplicationSlotNameForNodeAndOwnerForOperation returns the name of the
* replication slot for the given node, table owner and operation id.
*
* Note that PG15 introduced a new ReplicationSlotName function that caused name conflicts
* and we renamed this function.
*/
char *
ReplicationSlotNameForNodeAndOwnerForOperation(LogicalRepType type, uint32_t nodeId,
Oid ownerId, OperationId operationId)
{
StringInfo slotName = makeStringInfo();
appendStringInfo(slotName, "%s%u_%u_%lu", replicationSlotPrefix[type], nodeId,
ownerId, operationId);
if (slotName->len > NAMEDATALEN)
{
ereport(ERROR,
(errmsg(
"replication slot name \"%s\" with length %d exceeds the maximum allowed length %d",
slotName->data, slotName->len, NAMEDATALEN)));
}
return slotName->data;
}
/*
* SubscriptionName returns the name of the subscription for the given owner.
*/
char *
SubscriptionName(LogicalRepType type, Oid ownerId)
{
return psprintf("%s%u_%lu", subscriptionPrefix[type],
ownerId, CurrentOperationId);
}
/*
* SubscriptionRoleName returns the name of the role used by the
* subscription that subscribes to the tables of the given owner.
*/
char *
SubscriptionRoleName(LogicalRepType type, Oid ownerId)
{
return psprintf("%s%u_%lu", subscriptionRolePrefix[type], ownerId,
CurrentOperationId);
}
/*
* GetQueryResultStringList expects a query that returns a single column of
* strings. This query is executed on the connection and the function then
* returns the results of the query in a List.
*/
List *
GetQueryResultStringList(MultiConnection *connection, char *query)
{
bool raiseInterrupts = true;
int querySent = SendRemoteCommand(connection, query);
if (querySent == 0)
{
ReportConnectionError(connection, ERROR);
}
PGresult *result = GetRemoteCommandResult(connection, raiseInterrupts);
if (!IsResponseOK(result))
{
ReportResultError(connection, result, ERROR);
}
int rowCount = PQntuples(result);
int columnCount = PQnfields(result);
if (columnCount != 1)
{
ereport(ERROR, (errmsg("unexpected number of columns returned by query")));
}
List *resultList = NIL;
for (int rowIndex = 0; rowIndex < rowCount; rowIndex++)
{
int columnIndex = 0;
StringInfo resultStringInfo = makeStringInfo();
char *resultString = PQgetvalue(result, rowIndex, columnIndex);
/* we're using the stringinfo to copy the data into the current memory context */
appendStringInfoString(resultStringInfo, resultString);
resultList = lappend(resultList, resultStringInfo->data);
}
PQclear(result);
ForgetResults(connection);
return resultList;
}
/*
 * CreatePublications creates the publications defined in the
* publicationInfoHash over the given connection.
*/
void
CreatePublications(MultiConnection *connection,
HTAB *publicationInfoHash)
{
HASH_SEQ_STATUS status;
hash_seq_init(&status, publicationInfoHash);
PublicationInfo *entry = NULL;
while ((entry = (PublicationInfo *) hash_seq_search(&status)) != NULL)
{
StringInfo createPublicationCommand = makeStringInfo();
bool prefixWithComma = false;
appendStringInfo(createPublicationCommand, "CREATE PUBLICATION %s FOR TABLE ",
quote_identifier(entry->name));
ShardInterval *shard = NULL;
foreach_ptr(shard, entry->shardIntervals)
{
char *shardName = ConstructQualifiedShardName(shard);
if (prefixWithComma)
{
appendStringInfoString(createPublicationCommand, ",");
}
appendStringInfoString(createPublicationCommand, shardName);
prefixWithComma = true;
}
WorkerNode *worker = FindWorkerNode(connection->hostname,
connection->port);
InsertCleanupRecordOutsideTransaction(CLEANUP_OBJECT_PUBLICATION,
entry->name,
worker->groupId,
CLEANUP_ALWAYS);
ExecuteCriticalRemoteCommand(connection, DISABLE_DDL_PROPAGATION);
ExecuteCriticalRemoteCommand(connection, createPublicationCommand->data);
ExecuteCriticalRemoteCommand(connection, ENABLE_DDL_PROPAGATION);
pfree(createPublicationCommand->data);
pfree(createPublicationCommand);
}
}
/*
* GetReplicationConnection opens a new replication connection to this node.
* This connection can be used to send replication commands, such as
* CREATE_REPLICATION_SLOT.
*/
MultiConnection *
GetReplicationConnection(char *nodeName, int nodePort)
{
int connectionFlags = FORCE_NEW_CONNECTION;
connectionFlags |= REQUIRE_REPLICATION_CONNECTION_PARAM;
MultiConnection *connection = GetNodeUserDatabaseConnection(
connectionFlags,
nodeName,
nodePort,
CitusExtensionOwnerName(),
get_database_name(MyDatabaseId));
/*
* Replication connections are special and don't support all of SQL, so we
 * don't want it to be used for purposes other than what we create it for.
*/
ClaimConnectionExclusively(connection);
return connection;
}
/*
* CreateReplicationSlot creates a replication slot with the given slot name
* over the given connection. The given connection should be a replication
* connection. This function returns the name of the snapshot that is used for
* this replication slot. When using this snapshot name for other transactions
* you need to keep the given replication connection open until you have used
* the snapshot name.
*/
static char *
CreateReplicationSlot(MultiConnection *connection, char *slotname, char *outputPlugin)
{
StringInfo createReplicationSlotCommand = makeStringInfo();
appendStringInfo(createReplicationSlotCommand,
"CREATE_REPLICATION_SLOT %s LOGICAL %s EXPORT_SNAPSHOT;",
quote_identifier(slotname), quote_identifier(outputPlugin));
PGresult *result = NULL;
int response = ExecuteOptionalRemoteCommand(connection,
createReplicationSlotCommand->data,
&result);
if (response != RESPONSE_OKAY || !IsResponseOK(result) || PQntuples(result) != 1)
{
ReportResultError(connection, result, ERROR);
}
/*
 * 'snapshot_name' is the third column (columnIndex 2, zero-based). We use
 * pstrdup to copy the data into the current memory context.
 */
char *snapShotName = pstrdup(PQgetvalue(result, 0, 2 /* columnIndex */));
PQclear(result);
ForgetResults(connection);
return snapShotName;
}
/*
* CreateReplicationSlots creates the replication slots that the subscriptions
* in the logicalRepTargetList can use.
*
* This function returns the snapshot name of the replication slots that are
* used by the subscription. When using this snapshot name for other
* transactions you need to keep the given replication connection open until
* you are finished using the snapshot.
*/
char *
CreateReplicationSlots(MultiConnection *sourceConnection,
MultiConnection *sourceReplicationConnection,
List *logicalRepTargetList,
char *outputPlugin)
{
ReplicationSlotInfo *firstReplicationSlot = NULL;
char *snapshot = NULL;
LogicalRepTarget *target = NULL;
foreach_ptr(target, logicalRepTargetList)
{
ReplicationSlotInfo *replicationSlot = target->replicationSlot;
WorkerNode *worker = FindWorkerNode(sourceConnection->hostname,
sourceConnection->port);
InsertCleanupRecordOutsideTransaction(CLEANUP_OBJECT_REPLICATION_SLOT,
replicationSlot->name,
worker->groupId,
CLEANUP_ALWAYS);
if (!firstReplicationSlot)
{
firstReplicationSlot = replicationSlot;
snapshot = CreateReplicationSlot(
sourceReplicationConnection,
replicationSlot->name,
outputPlugin
);
}
else
{
ExecuteCriticalRemoteCommand(
sourceConnection,
psprintf("SELECT pg_catalog.pg_copy_logical_replication_slot(%s, %s)",
quote_literal_cstr(firstReplicationSlot->name),
quote_literal_cstr(replicationSlot->name)));
}
}
return snapshot;
}
/*
* CreateSubscriptions creates the subscriptions according to their definition
* in the logicalRepTargetList. The remote node(s) needs to have appropriate
* pg_dist_authinfo rows for the superuser such that the apply process can
* connect. Because the generated CREATE SUBSCRIPTION statements use the host
* and port names directly (rather than looking up any relevant
* pg_dist_poolinfo rows), all such connections remain direct and will not
* route through any configured poolers.
*
* The subscriptions created by this function are created in the disabled
* state. This is done so a data copy can be done manually afterwards. To
* enable the subscriptions you can use EnableSubscriptions().
*/
void
CreateSubscriptions(MultiConnection *sourceConnection,
char *databaseName,
List *logicalRepTargetList)
{
LogicalRepTarget *target = NULL;
foreach_ptr(target, logicalRepTargetList)
{
int ownerId = target->tableOwnerId;
WorkerNode *worker = FindWorkerNode(target->superuserConnection->hostname,
target->superuserConnection->port);
/*
* The CREATE USER command should not propagate, so we temporarily
* disable DDL propagation.
*
* Subscription workers have SUPERUSER permissions. Hence we temporarily
* create a user with SUPERUSER permissions and then alter it to NOSUPERUSER.
* This prevents permission escalations.
*/
SendCommandListToWorkerOutsideTransactionWithConnection(
target->superuserConnection,
list_make2(
"SET LOCAL citus.enable_ddl_propagation TO OFF;",
psprintf(
"CREATE USER %s SUPERUSER IN ROLE %s;",
quote_identifier(target->subscriptionOwnerName),
quote_identifier(GetUserNameFromId(ownerId, false))
)));
InsertCleanupRecordOutsideTransaction(CLEANUP_OBJECT_USER,
target->subscriptionOwnerName,
worker->groupId,
CLEANUP_ALWAYS);
StringInfo conninfo = makeStringInfo();
appendStringInfo(conninfo, "host='%s' port=%d user='%s' dbname='%s' "
"connect_timeout=20",
escape_param_str(sourceConnection->hostname),
sourceConnection->port,
escape_param_str(sourceConnection->user), escape_param_str(
databaseName));
if (CpuPriorityLogicalRepSender != CPU_PRIORITY_INHERIT &&
list_length(logicalRepTargetList) <= MaxHighPriorityBackgroundProcesess)
{
appendStringInfo(conninfo,
" options='-c citus.cpu_priority=%d'",
CpuPriorityLogicalRepSender);
}
StringInfo createSubscriptionCommand = makeStringInfo();
appendStringInfo(createSubscriptionCommand,
"CREATE SUBSCRIPTION %s CONNECTION %s PUBLICATION %s "
"WITH (citus_use_authinfo=true, create_slot=false, "
#if PG_VERSION_NUM >= PG_VERSION_16
/*
* password_required specifies whether connections to the publisher
* made as a result of this subscription must use password authentication.
* However, this setting is ignored when the subscription is owned
* by a superuser.
* Given that this command is executed below with superuser
* ExecuteCriticalRemoteCommand(target->superuserConnection,
* createSubscriptionCommand->data);
* We are safe to pass password_required as false because
* it will be ignored anyway
*/
"copy_data=false, enabled=false, slot_name=%s, password_required=false",
#else
"copy_data=false, enabled=false, slot_name=%s",
#endif
quote_identifier(target->subscriptionName),
quote_literal_cstr(conninfo->data),
quote_identifier(target->publication->name),
quote_identifier(target->replicationSlot->name));
if (EnableBinaryProtocol)
{
appendStringInfoString(createSubscriptionCommand, ", binary=true)");
}
else
{
appendStringInfoString(createSubscriptionCommand, ")");
}
ExecuteCriticalRemoteCommand(target->superuserConnection,
createSubscriptionCommand->data);
pfree(createSubscriptionCommand->data);
pfree(createSubscriptionCommand);
InsertCleanupRecordOutsideTransaction(CLEANUP_OBJECT_SUBSCRIPTION,
target->subscriptionName,
worker->groupId,
CLEANUP_ALWAYS);
ExecuteCriticalRemoteCommand(target->superuserConnection, psprintf(
"ALTER SUBSCRIPTION %s OWNER TO %s",
quote_identifier(target->subscriptionName),
quote_identifier(target->subscriptionOwnerName)
));
/*
* The ALTER ROLE command should not propagate, so we temporarily
* disable DDL propagation.
*/
SendCommandListToWorkerOutsideTransactionWithConnection(
target->superuserConnection,
list_make2(
"SET LOCAL citus.enable_ddl_propagation TO OFF;",
psprintf(
"ALTER ROLE %s NOSUPERUSER;",
quote_identifier(target->subscriptionOwnerName)
)));
}
}
/*
 * EnableSubscriptions enables all the subscriptions in the
* logicalRepTargetList. This means the replication slot will start to be read
* and the catchup phase begins.
*/
void
EnableSubscriptions(List *logicalRepTargetList)
{
LogicalRepTarget *target = NULL;
foreach_ptr(target, logicalRepTargetList)
{
ExecuteCriticalRemoteCommand(target->superuserConnection, psprintf(
"ALTER SUBSCRIPTION %s ENABLE",
target->subscriptionName
));
}
}
/* *INDENT-OFF* */
/*
* Escaping libpq connect parameter strings.
*
* Replaces "'" with "\'" and "\" with "\\".
*
* Copied from dblink.c to escape libpq params
*/
static char *
escape_param_str(const char *str)
{
StringInfoData buf;
initStringInfo(&buf);
for (const char *cp = str; *cp; cp++)
{
if (*cp == '\\' || *cp == '\'')
appendStringInfoChar(&buf, '\\');
appendStringInfoChar(&buf, *cp);
}
return buf.data;
}
/* *INDENT-ON* */
/*
* GetRemoteLogPosition gets the current WAL log position over the given connection.
*/
XLogRecPtr
GetRemoteLogPosition(MultiConnection *connection)
{
return GetRemoteLSN(connection, CURRENT_LOG_POSITION_COMMAND);
}
/*
* GetRemoteLSN executes a command that returns a single LSN over the given connection
* and returns it as an XLogRecPtr (uint64).
*/
static XLogRecPtr
GetRemoteLSN(MultiConnection *connection, char *command)
{
bool raiseInterrupts = false;
XLogRecPtr remoteLogPosition = InvalidXLogRecPtr;
int querySent = SendRemoteCommand(connection, command);
if (querySent == 0)
{
ReportConnectionError(connection, ERROR);
}
PGresult *result = GetRemoteCommandResult(connection, raiseInterrupts);
if (!IsResponseOK(result))
{
ReportResultError(connection, result, ERROR);
}
int rowCount = PQntuples(result);
if (rowCount != 1)
{
PQclear(result);
ForgetResults(connection);
return InvalidXLogRecPtr;
}
int colCount = PQnfields(result);
if (colCount != 1)
{
ereport(ERROR, (errmsg("unexpected number of columns returned by: %s",
command)));
}
if (!PQgetisnull(result, 0, 0))
{
char *resultString = PQgetvalue(result, 0, 0);
Datum remoteLogPositionDatum = DirectFunctionCall1Coll(pg_lsn_in, InvalidOid,
CStringGetDatum(
resultString));
remoteLogPosition = DatumGetLSN(remoteLogPositionDatum);
}
PQclear(result);
ForgetResults(connection);
return remoteLogPosition;
}
/*
* CreateGroupedLogicalRepTargetsConnections creates connections for all of the nodes
* in the groupedLogicalRepTargetsHash.
*/
void
CreateGroupedLogicalRepTargetsConnections(HTAB *groupedLogicalRepTargetsHash,
char *user,
char *databaseName)
{
int connectionFlags = FORCE_NEW_CONNECTION;
HASH_SEQ_STATUS status;
GroupedLogicalRepTargets *groupedLogicalRepTargets = NULL;
foreach_htab(groupedLogicalRepTargets, &status, groupedLogicalRepTargetsHash)
{
WorkerNode *targetWorkerNode = FindNodeWithNodeId(
groupedLogicalRepTargets->nodeId,
false);
MultiConnection *superuserConnection =
GetNodeUserDatabaseConnection(connectionFlags, targetWorkerNode->workerName,
targetWorkerNode->workerPort,
user,
databaseName);
/*
* Operations on subscriptions cannot run in a transaction block. We
* claim the connections exclusively to ensure they do not get used for
* metadata syncing, which does open a transaction block.
*/
ClaimConnectionExclusively(superuserConnection);
groupedLogicalRepTargets->superuserConnection = superuserConnection;
LogicalRepTarget *target = NULL;
foreach_ptr(target, groupedLogicalRepTargets->logicalRepTargetList)
{
target->superuserConnection = superuserConnection;
}
}
}
/*
 * CloseGroupedLogicalRepTargetsConnections closes the connections for all of the
* nodes in the groupedLogicalRepTargetsHash.
*/
void
CloseGroupedLogicalRepTargetsConnections(HTAB *groupedLogicalRepTargetsHash)
{
HASH_SEQ_STATUS status;
GroupedLogicalRepTargets *groupedLogicalRepTargets = NULL;
foreach_htab(groupedLogicalRepTargets, &status, groupedLogicalRepTargetsHash)
{
CloseConnection(groupedLogicalRepTargets->superuserConnection);
}
}
/*
* SubscriptionNamesValueList returns a SQL value list containing the
* subscription names from the logicalRepTargetList. This value list can
* be used in a query by using the IN operator.
*/
static char *
SubscriptionNamesValueList(List *logicalRepTargetList)
{
StringInfo subscriptionValueList = makeStringInfo();
appendStringInfoString(subscriptionValueList, "(");
bool first = true;
LogicalRepTarget *target = NULL;
foreach_ptr(target, logicalRepTargetList)
{
if (!first)
{
appendStringInfoString(subscriptionValueList, ",");
}
else
{
first = false;
}
appendStringInfoString(subscriptionValueList, quote_literal_cstr(
target->subscriptionName));
}
appendStringInfoString(subscriptionValueList, ")");
return subscriptionValueList->data;
}
/*
 * WaitForAllSubscriptionsToCatchUp waits until the last LSN reported by each
 * subscription has caught up with the current LSN of the source.
 *
 * The function errors out if the target LSN doesn't increase within
 * LogicalReplicationErrorTimeout. The function also reports its progress
 * every logicalReplicationProgressReportTimeout.
*/
void
WaitForAllSubscriptionsToCatchUp(MultiConnection *sourceConnection,
HTAB *groupedLogicalRepTargetsHash)
{
XLogRecPtr sourcePosition = GetRemoteLogPosition(sourceConnection);
HASH_SEQ_STATUS status;
GroupedLogicalRepTargets *groupedLogicalRepTargets = NULL;
foreach_htab(groupedLogicalRepTargets, &status, groupedLogicalRepTargetsHash)
{
WaitForGroupedLogicalRepTargetsToCatchUp(sourcePosition,
groupedLogicalRepTargets);
}
}
/*
 * WaitForGroupedLogicalRepTargetsToCatchUp waits until the last LSN reported
 * by the subscriptions on the given grouped targets has caught up with the
 * given source position.
 *
 * The function errors out if the target LSN doesn't increase within
 * LogicalReplicationErrorTimeout. The function also reports its progress
 * every logicalReplicationProgressReportTimeout.
*/
static void
WaitForGroupedLogicalRepTargetsToCatchUp(XLogRecPtr sourcePosition,
GroupedLogicalRepTargets *
groupedLogicalRepTargets)
{
XLogRecPtr previousTargetPosition = 0;
TimestampTz previousLSNIncrementTime = GetCurrentTimestamp();
/* report in the first iteration as well */
TimestampTz previousReportTime = 0;
MultiConnection *superuserConnection = groupedLogicalRepTargets->superuserConnection;
/*
* We might be in the loop for a while. Since we don't need to preserve
* any memory beyond this function, we can simply switch to a child context
* and reset it on every iteration to make sure we don't slowly build up
* a lot of memory.
*/
MemoryContext loopContext = AllocSetContextCreateInternal(CurrentMemoryContext,
"WaitForShardSubscriptionToCatchUp",
ALLOCSET_DEFAULT_MINSIZE,
ALLOCSET_DEFAULT_INITSIZE,
ALLOCSET_DEFAULT_MAXSIZE);
MemoryContext oldContext = MemoryContextSwitchTo(loopContext);
while (true)
{
XLogRecPtr targetPosition = GetSubscriptionPosition(groupedLogicalRepTargets);
if (targetPosition >= sourcePosition)
{
ereport(LOG, (errmsg(
"The LSN of the target subscriptions on node %s:%d has "
"caught up with the source LSN",
superuserConnection->hostname,
superuserConnection->port)));
break;
}
/*
 * The following logic ensures that the subscription's LSN continues to
 * advance within the LogicalReplicationErrorTimeout duration. Otherwise, we
 * error out since we suspect that there is a problem on the target. It also
 * handles the progress reporting.
*/
if (targetPosition > previousTargetPosition)
{
/* variable is only used for the log message */
uint64 previousTargetBeforeThisLoop = previousTargetPosition;
previousTargetPosition = targetPosition;
previousLSNIncrementTime = GetCurrentTimestamp();
if (TimestampDifferenceExceeds(previousReportTime,
GetCurrentTimestamp(),
logicalReplicationProgressReportTimeout))
{
ereport(LOG, (errmsg("The LSN of the target subscriptions on node %s:%d "
"has increased from %X/%X to %X/%X at %s where the "
"source LSN is %X/%X ",
superuserConnection->hostname,
superuserConnection->port,
LSN_FORMAT_ARGS(previousTargetBeforeThisLoop),
LSN_FORMAT_ARGS(targetPosition),
timestamptz_to_str(previousLSNIncrementTime),
LSN_FORMAT_ARGS(sourcePosition))));
previousReportTime = GetCurrentTimestamp();
}
}
else
{
if (TimestampDifferenceExceeds(previousLSNIncrementTime,
GetCurrentTimestamp(),
LogicalReplicationTimeout))
{
ereport(ERROR, (errmsg("The logical replication waiting timeout "
"of %d msec is exceeded",
LogicalReplicationTimeout),
errdetail("The LSN on the target subscription hasn't "
"caught up on the target node %s:%d",
superuserConnection->hostname,
superuserConnection->port),
errhint(
"Problems might have occurred on the target "
"node. If not, consider using a higher value for "
"citus.logical_replication_error_timeout")));
}
}
/* sleep for 1 second (1000 milliseconds) and try again */
WaitForMiliseconds(1000);
MemoryContextReset(loopContext);
}
MemoryContextSwitchTo(oldContext);
}
/*
 * WaitForMiliseconds waits for the given timeout and then checks for
 * interrupts.
*/
static void
WaitForMiliseconds(long timeout)
{
int latchFlags = WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH;
/* wait until timeout, or until somebody wakes us up */
int rc = WaitLatch(MyLatch, latchFlags, timeout, PG_WAIT_EXTENSION);
/* emergency bailout if postmaster has died */
if (rc & WL_POSTMASTER_DEATH)
{
proc_exit(1);
}
if (rc & WL_LATCH_SET)
{
ResetLatch(MyLatch);
CHECK_FOR_INTERRUPTS();
}
if (ConfigReloadPending)
{
ConfigReloadPending = false;
ProcessConfigFile(PGC_SIGHUP);
}
}
/*
 * GetSubscriptionPosition gets the minimum WAL log position of the given
 * subscriptions: that is, the WAL log position on the source node up to which
 * the subscriptions completed replication.
*/
static XLogRecPtr
GetSubscriptionPosition(GroupedLogicalRepTargets *groupedLogicalRepTargets)
{
char *subscriptionValueList = SubscriptionNamesValueList(
groupedLogicalRepTargets->logicalRepTargetList);
return GetRemoteLSN(groupedLogicalRepTargets->superuserConnection, psprintf(
"SELECT min(latest_end_lsn) FROM pg_stat_subscription "
"WHERE subname IN %s", subscriptionValueList));
}
```
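The `escape_param_str` helper above doubles up backslashes and single quotes so a value can be embedded in a libpq conninfo string such as `host='%s'` (as done when building the `CREATE SUBSCRIPTION` connection info). A minimal standalone sketch of the same logic, using plain `malloc` instead of PostgreSQL's `StringInfo` (the `escape_param_str_sketch` name is hypothetical):

```c
#include <stdlib.h>
#include <string.h>

/*
 * Standalone sketch of the escape_param_str logic: double up backslashes
 * and single quotes so the result can be embedded between single quotes
 * in a libpq conninfo string. Caller frees the returned buffer.
 */
static char *
escape_param_str_sketch(const char *str)
{
	/* worst case: every character escaped, plus the terminating NUL */
	char *buf = malloc(2 * strlen(str) + 1);
	char *out = buf;

	for (const char *cp = str; *cp; cp++)
	{
		if (*cp == '\\' || *cp == '\'')
		{
			*out++ = '\\';
		}
		*out++ = *cp;
	}
	*out = '\0';
	return buf;
}
```

For example, `o'brien\x` becomes `o\'brien\\x`, which can then be wrapped in single quotes inside the conninfo string.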
```java
/*
 * Copyright Debezium Authors.
 *
 * Licensed under the Apache Software License version 2.0, available at http://www.debezium.io/license/
 */
package io.debezium.connector.jdbc.transforms;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertThrows;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.config.ConfigException;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.sink.SinkRecord;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;
import io.debezium.connector.jdbc.util.DebeziumSinkRecordFactory;
import io.debezium.connector.jdbc.util.SinkRecordFactory;
import io.debezium.converters.spi.SerializerType;
import io.debezium.doc.FixFor;
/**
* Unit tests for {@link ConvertCloudEventToSaveableForm}
*
* @author Roman Kudryashov
*/
class ConvertCloudEventToSaveableFormTest {
@Test
@FixFor({ "DBZ-7065", "DBZ-7130" })
void testConvertCloudEventRecordWithEmptyConfig() {
try (ConvertCloudEventToSaveableForm transform = new ConvertCloudEventToSaveableForm()) {
final Map<String, String> config = new HashMap<>();
Exception exception = assertThrows(ConfigException.class, () -> transform.configure(config));
assertThat(exception.getMessage())
.isEqualTo("Invalid value null for configuration serializer.type: Serialization/deserialization type of CloudEvents converter is required");
}
}
@ParameterizedTest
@ValueSource(strings = { "json", "avro" })
@FixFor({ "DBZ-7065", "DBZ-7130" })
void testConvertNotCloudEventRecord(String serializerType) {
try (ConvertCloudEventToSaveableForm transform = new ConvertCloudEventToSaveableForm()) {
final Map<String, String> config = new HashMap<>();
config.put("serializer.type", serializerType);
transform.configure(config);
final SinkRecordFactory factory = new DebeziumSinkRecordFactory();
final SinkRecord createRecord = factory.createRecord("test.topic");
assertThat(createRecord.valueSchema().name()).doesNotEndWith(".CloudEvents.Envelope");
final SinkRecord convertedRecord = transform.apply(createRecord);
assertThat(convertedRecord).isEqualTo(createRecord);
}
}
@ParameterizedTest
@ValueSource(strings = { "json", "avro" })
@FixFor({ "DBZ-7065", "DBZ-7130" })
void testConvertCloudEventRecordWithEmptyMapping(String serializerType) {
try (ConvertCloudEventToSaveableForm transform = new ConvertCloudEventToSaveableForm()) {
final Map<String, String> config = new HashMap<>();
config.put("serializer.type", serializerType);
transform.configure(config);
final SinkRecordFactory factory = new DebeziumSinkRecordFactory();
final SinkRecord cloudEventRecord = factory.cloudEventRecord("test.topic", SerializerType.withName(serializerType), null);
if (serializerType.equals("avro")) {
assertThat(cloudEventRecord.valueSchema().name()).endsWith(".CloudEvents.Envelope");
assertThat(cloudEventRecord.valueSchema().fields().size()).isEqualTo(7);
assertThat(cloudEventRecord.valueSchema().field("id").schema()).isEqualTo(Schema.STRING_SCHEMA);
}
final SinkRecord convertedRecord = transform.apply(cloudEventRecord);
assertThat(convertedRecord).isEqualTo(cloudEventRecord);
}
}
@ParameterizedTest
@ValueSource(strings = { "json", "avro" })
@FixFor({ "DBZ-7065", "DBZ-7130" })
void testConvertCloudEventRecordWithMappingOfIdField(String serializerType) {
try (ConvertCloudEventToSaveableForm transform = new ConvertCloudEventToSaveableForm()) {
final Map<String, String> config = new HashMap<>();
config.put("fields.mapping", "id");
config.put("serializer.type", serializerType);
transform.configure(config);
final SinkRecordFactory factory = new DebeziumSinkRecordFactory();
final SinkRecord cloudEventRecord = factory.cloudEventRecord("test.topic", SerializerType.withName(serializerType), null);
if (serializerType.equals("avro")) {
assertThat(cloudEventRecord.valueSchema().name()).endsWith(".CloudEvents.Envelope");
assertThat(cloudEventRecord.valueSchema().fields().size()).isEqualTo(7);
assertThat(cloudEventRecord.valueSchema().field("id").schema()).isEqualTo(Schema.STRING_SCHEMA);
}
final SinkRecord convertedRecord = transform.apply(cloudEventRecord);
assertThat(convertedRecord).isNotNull();
assertThat(convertedRecord).isNotEqualTo(cloudEventRecord);
assertThat(convertedRecord.valueSchema().type()).isEqualTo(Schema.Type.STRUCT);
assertThat(convertedRecord.valueSchema().name()).isNull();
assertThat(convertedRecord.valueSchema().fields().size()).isEqualTo(1);
assertThat(convertedRecord.valueSchema().field("id").schema()).isEqualTo(Schema.STRING_SCHEMA);
assertThat(convertedRecord.value()).isInstanceOf(Struct.class);
assertThat(((Struct) convertedRecord.value()).getString("id")).isNotBlank();
checkParamsOfOriginalAndConvertedRecordsAreEqual(cloudEventRecord, convertedRecord);
}
}
@Test
@FixFor("DBZ-7235")
void testConvertCloudEventRecordWithNotConfiguredCustomNameAndMappingOfIdField() {
try (ConvertCloudEventToSaveableForm transform = new ConvertCloudEventToSaveableForm()) {
final Map<String, String> config = new HashMap<>();
config.put("fields.mapping", "id");
// the test is not applicable to `json` because in that case the schema name is not checked by CloudEventsValidator
config.put("serializer.type", "avro");
transform.configure(config);
final SinkRecordFactory factory = new DebeziumSinkRecordFactory();
final SinkRecord cloudEventRecord = factory.cloudEventRecord("test.topic", SerializerType.withName("avro"), "TestCESchemaCustomName");
assertThat(cloudEventRecord.valueSchema().name()).isEqualTo("TestCESchemaCustomName");
assertThat(cloudEventRecord.valueSchema().fields().size()).isEqualTo(7);
assertThat(cloudEventRecord.valueSchema().field("id").schema()).isEqualTo(Schema.STRING_SCHEMA);
final SinkRecord convertedRecord = transform.apply(cloudEventRecord);
assertThat(convertedRecord).isNotNull();
// main check: the record was not converted. This is because the transform was not configured with a custom CloudEvents schema name
// but the incoming record had a custom name so CloudEventsValidator decided it is not a valid CloudEvent record
assertThat(convertedRecord).isEqualTo(cloudEventRecord);
}
}
@Test
@FixFor("DBZ-7235")
void testSkipConversionOfCloudEventRecordWithDefaultNameWhenTransformHasCustomName() {
try (ConvertCloudEventToSaveableForm transform = new ConvertCloudEventToSaveableForm()) {
final Map<String, String> config = new HashMap<>();
config.put("fields.mapping", "id");
// the test is not applicable to `json` because in that case the schema name is not checked by CloudEventsValidator
config.put("serializer.type", "avro");
config.put("schema.cloudevents.name", "TestCESchemaCustomName");
transform.configure(config);
final SinkRecordFactory factory = new DebeziumSinkRecordFactory();
final SinkRecord cloudEventRecord = factory.cloudEventRecord("test.topic", SerializerType.withName("avro"), null);
assertThat(cloudEventRecord.valueSchema().name()).isEqualTo("test.test.CloudEvents.Envelope");
assertThat(cloudEventRecord.valueSchema().fields().size()).isEqualTo(7);
assertThat(cloudEventRecord.valueSchema().field("id").schema()).isEqualTo(Schema.STRING_SCHEMA);
final SinkRecord convertedRecord = transform.apply(cloudEventRecord);
assertThat(convertedRecord).isNotNull();
// main check: the record was not converted. This is because the transform was configured with a custom CloudEvents schema name
// but the incoming record had a generated by default name so CloudEventsValidator decided it is not a valid CloudEvent record
assertThat(convertedRecord).isEqualTo(cloudEventRecord);
}
}
@Test
@FixFor("DBZ-7235")
void testConvertCloudEventRecordWithMatchingCustomNameAndMappingOfIdField() {
try (ConvertCloudEventToSaveableForm transform = new ConvertCloudEventToSaveableForm()) {
final Map<String, String> config = new HashMap<>();
config.put("fields.mapping", "id");
// the test is not applicable to `json` because in that case the schema name is not checked by CloudEventsValidator
config.put("serializer.type", "avro");
config.put("schema.cloudevents.name", "TestCESchemaCustomName");
transform.configure(config);
final SinkRecordFactory factory = new DebeziumSinkRecordFactory();
final SinkRecord cloudEventRecord = factory.cloudEventRecord("test.topic", SerializerType.withName("avro"), "TestCESchemaCustomName");
assertThat(cloudEventRecord.valueSchema().name()).isEqualTo("TestCESchemaCustomName");
assertThat(cloudEventRecord.valueSchema().fields().size()).isEqualTo(7);
assertThat(cloudEventRecord.valueSchema().field("id").schema()).isEqualTo(Schema.STRING_SCHEMA);
final SinkRecord convertedRecord = transform.apply(cloudEventRecord);
assertThat(convertedRecord).isNotNull();
// main check: the record was converted. This is because the transform was configured with a custom CloudEvents schema name
// and the incoming record had the same custom name so CloudEventsValidator decided it is a valid CloudEvent record
assertThat(convertedRecord).isNotEqualTo(cloudEventRecord);
assertThat(convertedRecord.valueSchema().type()).isEqualTo(Schema.Type.STRUCT);
assertThat(convertedRecord.valueSchema().name()).isNull();
checkParamsOfOriginalAndConvertedRecordsAreEqual(cloudEventRecord, convertedRecord);
}
}
@ParameterizedTest
@ValueSource(strings = { "json", "avro" })
@FixFor({ "DBZ-7065", "DBZ-7130" })
void testConvertCloudEventRecordWithMappingOfDataField(String serializerType) {
try (ConvertCloudEventToSaveableForm transform = new ConvertCloudEventToSaveableForm()) {
final Map<String, String> config = new HashMap<>();
config.put("fields.mapping", "data");
config.put("serializer.type", serializerType);
transform.configure(config);
final SinkRecordFactory factory = new DebeziumSinkRecordFactory();
final SinkRecord cloudEventRecord = factory.cloudEventRecord("test.topic", SerializerType.withName(serializerType), null);
if (serializerType.equals("avro")) {
assertThat(cloudEventRecord.valueSchema().name()).endsWith(".CloudEvents.Envelope");
assertThat(cloudEventRecord.valueSchema().fields().size()).isEqualTo(7);
assertThat(cloudEventRecord.valueSchema().field("data").schema().type()).isEqualTo(Schema.Type.STRUCT);
}
final SinkRecord convertedRecord = transform.apply(cloudEventRecord);
assertThat(convertedRecord).isNotNull();
assertThat(convertedRecord).isNotEqualTo(cloudEventRecord);
assertThat(convertedRecord.valueSchema().type()).isEqualTo(Schema.Type.STRUCT);
assertThat(convertedRecord.valueSchema().name()).isNull();
assertThat(convertedRecord.valueSchema().fields().size()).isEqualTo(1);
assertThat(convertedRecord.valueSchema().field("data").schema()).isEqualTo(Schema.STRING_SCHEMA);
assertThat(convertedRecord.value()).isInstanceOf(Struct.class);
assertThat(((Struct) convertedRecord.value()).getString("data")).isNotBlank();
checkParamsOfOriginalAndConvertedRecordsAreEqual(cloudEventRecord, convertedRecord);
}
}
@ParameterizedTest
@ValueSource(strings = { "json", "avro" })
@FixFor({ "DBZ-7065", "DBZ-7130" })
void testConvertCloudEventRecordWithMappingOfAllFields(String serializerType) {
try (ConvertCloudEventToSaveableForm transform = new ConvertCloudEventToSaveableForm()) {
final Map<String, String> config = new HashMap<>();
config.put("fields.mapping", "id,source:created_by,specversion:ce_spec_number,type,time:created_at,datacontenttype:payload_format,data:payload");
config.put("serializer.type", serializerType);
transform.configure(config);
final SinkRecordFactory factory = new DebeziumSinkRecordFactory();
final SinkRecord cloudEventRecord = factory.cloudEventRecord("test.topic", SerializerType.withName(serializerType), null);
if (serializerType.equals("avro")) {
assertThat(cloudEventRecord.valueSchema().name()).endsWith(".CloudEvents.Envelope");
assertThat(cloudEventRecord.valueSchema().fields().size()).isEqualTo(7);
assertThat(cloudEventRecord.valueSchema().field("data").schema().type()).isEqualTo(Schema.Type.STRUCT);
}
final SinkRecord convertedRecord = transform.apply(cloudEventRecord);
assertThat(convertedRecord).isNotNull();
assertThat(convertedRecord).isNotEqualTo(cloudEventRecord);
assertThat(convertedRecord.valueSchema().type()).isEqualTo(Schema.Type.STRUCT);
assertThat(convertedRecord.valueSchema().name()).isNull();
assertThat(convertedRecord.valueSchema().fields().size()).isEqualTo(7);
assertThat(convertedRecord.value()).isInstanceOf(Struct.class);
Struct convertedRecordValue = (Struct) convertedRecord.value();
assertThat(convertedRecordValue.getString("id")).isNotBlank();
assertThat(convertedRecordValue.getString("created_by")).isNotBlank();
assertThat(convertedRecordValue.getString("ce_spec_number")).isNotBlank();
assertThat(convertedRecordValue.getString("type")).isNotBlank();
assertThat(convertedRecordValue.getString("created_at")).isNotBlank();
assertThat(convertedRecordValue.getString("payload_format")).isNotBlank();
assertThat(convertedRecordValue.getString("payload")).isNotBlank();
checkParamsOfOriginalAndConvertedRecordsAreEqual(cloudEventRecord, convertedRecord);
}
}
private void checkParamsOfOriginalAndConvertedRecordsAreEqual(SinkRecord original, SinkRecord converted) {
assertThat(converted.topic()).isEqualTo(original.topic());
assertThat(converted.kafkaPartition()).isEqualTo(original.originalKafkaPartition());
assertThat(converted.kafkaOffset()).isEqualTo(original.originalKafkaOffset());
assertThat(converted.keySchema()).isEqualTo(original.keySchema());
assertThat(converted.key()).isEqualTo(original.key());
assertThat(converted.headers()).isEqualTo(original.headers());
assertThat(converted.timestamp()).isEqualTo(original.timestamp());
}
}
```
Homes England is the non-departmental public body that funds new affordable housing in England. It was founded on 1 January 2018 to replace the Homes and Communities Agency (HCA).
HCA in turn was established by the Housing and Regeneration Act 2008 as one of the successor bodies to the Housing Corporation, and became operational on 1 December 2008.
History
On 17 January 2007, Ruth Kelly announced proposals to bring together the investment functions of the Housing Corporation, English Partnerships and parts of the Department for Communities and Local Government to form a new unified housing and regeneration agency. It would also incorporate the functions of the Academy for Sustainable Communities and the government's advisory team for large applications.
In the following months, Martin Cave, Director of the Centre for Management under Regulation at University of Warwick, led the most comprehensive review of English housing regulation for 30 years. Reporting in June, the Cave Review recommended that a new regulator be set up, separating the regulation and investment responsibilities of the Housing Corporation.
On 15 October 2007, Yvette Cooper announced that the Government accepted the recommendation of the Cave Review to transfer the Corporation's regulatory powers to an independent body, subsequently named as the Tenant Services Authority (TSA). The new investment body was initially announced as "Communities England", and later renamed as the Homes and Communities Agency.
The Chief Executive for the body was announced as Bob Kerslake in December 2007. Kerslake had led the regeneration of Sheffield as chief executive of the City Council since 1997.
On 17 October 2008 the Housing Minister Iain Wright announced the Board members of the HCA including Robert Napier (chair), Kate Barker, Candy Atherton, and Shaukat Moledina (previously Vice-Chair of the Housing Corporation).
Kerslake was appointed as a Permanent Secretary at the agency's parent Department for Communities and Local Government in September 2010. The HCA announced that it would appoint an interim Chief Executive from existing staff.
Housing minister Grant Shapps announced early on that the TSA would be abolished as part of the cull of quangos by the coalition government after the 2010 general election. In June 2010, he said that the HCA would be retained but become "smaller, more strategic - with the HCA's functions being delivered under local leadership."
In September 2010, the HCA was also included on a list of organisations being considered for closure. However, Shapps announced in October that the TSA would be merged into the HCA. In November, he confirmed that the HCA would be retained, but reformed to cut running costs.
New initiatives
The HCA's Kickstart programme provided grants to developers in order to rescue stalled projects during the recession, helping to maintain employment and output of new homes. One of the most groundbreaking Kickstart projects was a £45.6 million investment in Berkeley Homes to provide 555 new homes for rent on the open market, located in London, the south east and south west.
However, after a campaign for disclosure by Building Design magazine, the agency revealed that many Kickstart projects failed to meet CABE's standards of good design.
Sale of ransom strips
The pilot sale of micro plots was compared to driveway ransoms when Homes England wrote to householders in Birmingham warning that it owned micro plots between the household and the public road. Homes England said it had written to 90 householders; however, a freedom of information request found over 500 micro plots for sale in the Redditch and Bromsgrove boroughs. Homes England said that if householders did not purchase the micro plots, they could be sold to third parties. Homeowners expected that a third-party sale would result in a micro plot being used as a ransom strip.
Social Housing Regulator
The Homes and Communities Agency acted as the government's Social Housing Regulator. It provided regular reports on each registered social housing agency in England. In March 2014, it made its first ruling that a housing association had breached its "serious detriment" threshold for harm to consumers for its home repairs against Circle 33, due to "chronic and long standing difficulties in the delivery of the repairs service".
In Scotland this function is performed by the Scottish Housing Regulator. In Wales, the function is carried out by the Welsh government.
References
External links
Housing in England
Public housing in England
Non-departmental public bodies of the United Kingdom government
Department for Levelling Up, Housing and Communities
Interested parties in planning in England
Governance of England
2008 establishments in England
Regulators of England
Government agencies established in 2008
Housing organisations based in London
```java
/**
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     path_to_url
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 */
package io.pravega.cli.admin.cluster;
import io.pravega.cli.admin.AdminCommand;
import io.pravega.cli.admin.CommandArgs;
/**
* Base for any Cluster-related commands.
*/
public abstract class ClusterCommand extends AdminCommand {
protected static final String COMPONENT = "cluster";
ClusterCommand(CommandArgs args) {
super(args);
}
}
```
Parliamentary elections were held in Slovakia on 8 and 9 June 1990 alongside federal elections. They were the first elections after the Velvet Revolution, and the first free elections since 1946. The Public Against Violence (VPN) party emerged as the largest in the Slovak National Council, winning 48 of the 150 seats. In the aftermath of the election, Vladimír Mečiar of the VPN formed a grand coalition with the Christian Democratic Movement (KDH). After a conflict leading to the dissolution of the VPN, the first Mečiar cabinet was brought down by a vote of non-confidence in the parliament. Ján Čarnogurský of the KDH became the new Prime Minister in April 1991.
Electoral system
These were the only elections with a 3% electoral threshold; it was raised to 5% for the 1992 elections.
Participating parties
Results
References
Parliamentary elections in Slovakia
Legislative elections in Czechoslovakia
Slovakia
1990 in Slovakia
June 1990 events in Europe
Katharine Gatty (11 June 1870 – 1 May 1952) was a British nurse, journalist, lecturer and militant suffragette. As a prominent member of the Women's Social and Political Union (WSPU), she received from them the Hunger Strike Medal after going on a hunger strike in prison during which she was force-fed. In her later years she resided in California in the United States before emigrating to Australia, where she spent her last years.
Early years
Of Irish descent through her mother, Emma Katharine Gatty was born in Ferozopur in Bengal in India in 1870 to Emma Rebecca née Collum (1844-1929) and Captain Edward Gatty (1837-1872) of the 39th (Dorsetshire) Regiment of Foot. By 1881 she and her widowed mother were living in Hammersmith in London. Her career as a Liberal started at age 18, when she took part in the Great Dock Strike of 1889.
In 1908 Gatty was a delegate to the International Congress of Women in Amsterdam.
Activism
After joining the Ealing branch of the Women's Social and Political Union Gatty became a militant suffragette, on one occasion chaining herself to the gates at Hyde Park. In the suffragette publication Votes For Women Gatty was described as a journalist and lecturer. She was first imprisoned in Holloway Prison in 1909 for one month. In 1911 she was a salaried member of the Women's Tax Resistance League in London. In November 1911 Gatty was sentenced to three weeks imprisonment in Holloway Prison after taking part in a campaign of window smashing after the government 'torpedoed' the anticipated Conciliation Bill which was seen as a progressive step towards achieving women's suffrage. In Holloway she went on hunger strike for which action she received a Hunger Strike Medal from the leadership of the WSPU.
In January 1912 she was again arrested for causing a disturbance when women had been excluded from the trial of Emily Davison, but this time she was released without charge. Gatty was a close friend of fellow suffragette Davison (in May 1913 Gatty invited Davison for tea), who was killed when she ran in front of the horse of King George V at the 1913 Derby.
In March 1912 Gatty took part in a further campaign of window smashing on behalf of the WSPU, for which she was sentenced to six months imprisonment for smashing glass valued at £42. At her trial she said that men were allowed to break women's hearts and homes without punishment, and contrasted her six-month sentence for minor property damage with the two-month sentence an Edinburgh man received for breaking his wife's skull. In her opinion, property was worth more in the eyes of the law than a person. Her signature was among those embroidered on The Suffragette Handkerchief in Holloway Prison, which was kept afterwards by fellow prisoner Mary Ann Hilliard. In prison in April 1912 Gatty again went on hunger strike, which "lasted from dinner time on Sat. 13 till breakfast time on Friday 19 inclusive (6 days). Doctors began F.f. on the Wed. and they tried to feed me on Thursday 3 times but failed." In June of the same year she again went on hunger strike and was force-fed 13 times. On her release from prison in August 1912 she was immediately rearrested for smashing a window at the post office in Abergavenny in Wales, stating that she had done so to protest against the exclusion of women from such official lists as the electoral register. On this occasion Gatty received one month in prison with hard labour, which had a serious effect on her health. In the latter stages of 1912 Gatty became Secretary to the Suffrage Atelier (SA), an organisation of suffrage artists in London who created and printed postcards, posters and banners for the women's suffrage movement.
By 1913 Gatty was an organiser for the National Amalgamated Union of Shop Assistants, Warehousemen and Clerks while in her later years she had links to the Communist Party, regularly corresponding with the journalist and activist Anna Louise Strong. In total she was imprisoned nine times for her activities on behalf of women's suffrage and the movement to abolish capital punishment. Gatty was an active member of the International Coordination Committee for Aid to Republican Spain, and was one of the organisers of the Co-operative Party in England in addition to being a lifelong advocate of Irish Home Rule. Gatty trained as a nurse in the early 1920s, qualifying in 1924. It was at this time she began a correspondence with the activist for socialism and sexual revolution Hildegart Rodríguez Carballeira. Gatty was still on the Nursing Register in 1934.
In January 1893 in St Mary’s church in Islington she married William Lewis Reid (1858–1923) of the Reid & Sons family of silversmiths. Their daughter Eve Lewis Reid was born in December 1893. In 1911 Reid divorced her following her adultery with a John Manson between 1897 and 1910. She and Manson lived together as husband and wife and Reid alleged she had given birth to Manson's child in 1898. In 1915 she married Ernest Lucas Gillett (1882-1954), a clerk in the Civil Service. The couple adopted the surname Gillett-Gatty.
Later life
In September 1934 Gatty, representing Action Feministe Internationale, attended a conference on 'Ethiopia and Justice' organised by Sylvia Pankhurst at the Central Hall, Westminster. In the mid-1930s she lived for a period in Greece. In 1937 Gatty, describing herself as an author "writing a book" and as a widow despite the fact that her husband was still alive, moved to California in the United States, residing there during the 1940s. In 1938 the "humorous, witty Irishwoman, Mrs. Gillett-Gatty" spoke at a meeting of the American Student Union at Stanford University in California on "Fascism in Italy and Its Threat to the Democratic Ideal", in which she related her own experiences in Italy before and after Mussolini's rise to power.
Emma Katharine Gillett-Gatty emigrated to Strathfield in New South Wales Australia in 1947, and here she died in 1952 aged 81. In her will she donated her eyes to two blind people, in it stating: "About my own carcase, first, that both my eyes be enucleated if possible within eight hours of my demise, so that corneally blind persons may each receive one. Next, I be cremated or buried at sea."
The Archive of the Women's Library at the London School of Economics holds 12 of her letters sent from prison.
References
1870 births
1952 deaths
Irish suffragettes
British feminists
British women's rights activists
Women's suffrage in the United Kingdom
Women's Social and Political Union
Prisoners and detainees of England and Wales
People of Anglo-Irish descent
English tax resisters
British people in colonial India
Meriwether Lewis (August 18, 1774 – October 11, 1809) was an American explorer, soldier, politician, and public administrator, best known for his role as the leader of the Lewis and Clark Expedition, also known as the Corps of Discovery, with William Clark.
Their mission was to explore the territory of the Louisiana Purchase, establish trade with, and sovereignty over, the natives near the Missouri River, and claim the Pacific Northwest and Oregon Country for the United States before European nations could. They also collected scientific data and information on indigenous nations. President Thomas Jefferson appointed him Governor of Upper Louisiana in 1806. He died in 1809 of gunshot wounds sustained in what was either a murder or a suicide.
Life and work
Meriwether Lewis was born August 18, 1774, on Locust Hill Plantation in Albemarle County, Colony of Virginia, in the present-day community of Ivy. He was the son of William Lewis, of Welsh ancestry, and Lucy Meriwether, of English ancestry. After his father died of pneumonia in November 1779, he moved with his mother and stepfather Captain John Marks to Georgia. They settled along the Broad River in the Goosepond Community within the Broad River Valley in Wilkes County (now Oglethorpe County). He was also the great-great-grandson of David Crawford, a prominent Virginia Burgess and militia colonel.
Lewis had no formal education until he was 13 years of age, but during his time in Georgia he enhanced his skills as a hunter and an outdoorsman. He would often venture out in the middle of the night in the dead of winter with only his dog to go hunting. Even at an early age, he was interested in natural history, which would develop into a lifelong passion. His mother taught him how to gather wild herbs for medicinal purposes.
In the Broad River Valley, Lewis first dealt with Native Americans. This was the traditional territory of the Cherokee, who resented encroachment by the colonists. Lewis seems to have been a champion for them among his own people. While in Georgia, he met Eric Parker, who encouraged him to travel. At age 13, Lewis was sent back to Virginia for education by private tutors. His father's older brother Nicholas Lewis became his guardian. One of his tutors was Parson Matthew Maury, an uncle of Matthew Fontaine Maury.
He joined the Virginia militia, and in 1794 he was sent as part of a detachment that was involved in putting down the Whiskey Rebellion. In 1795, Lewis joined the United States Army, commissioned as an ensign—an army rank that was later abolished and was equivalent to a modern-day second lieutenant. By 1800 he rose to captain, and ended his service there in 1801. Among his commanding officers was William Clark, who would later become his companion in the Corps of Discovery.
On April 1, 1801, Lewis was appointed as Secretary to the President by President Thomas Jefferson, whom he knew through Virginia society in Albemarle County. Lewis resided in the presidential mansion, and frequently conversed with various prominent figures in politics, the arts and other circles. He compiled information on the personnel and politics of the United States Army, which had seen an influx of Federalist officers as a result of "midnight appointments" made by outgoing president John Adams in 1801. Lewis was elected a member of the American Philosophical Society in 1802.
When Jefferson began to plan for an expedition across the continent, he chose Lewis to lead the expedition. Meriwether Lewis recruited Clark, then aged 33, to share command of the expedition.
Expedition west
After the Louisiana Purchase in 1803, Thomas Jefferson wanted to get an accurate sense of the new land and its resources. The president also hoped to find a "direct and practicable water communication across this continent, for the purposes of commerce with Asia". In addition, Jefferson placed special importance on declaring U.S. sovereignty over the Native Americans along the Missouri River.
The two-year exploration by Lewis and Clark was the first transcontinental expedition to the Pacific Coast by the United States. They reached the Pacific twelve years after Sir Alexander Mackenzie did overland in Canada. When they left Fort Mandan in April 1805 they were accompanied by the 16-year-old Shoshone woman, Sacagawea, the wife of the French-Canadian fur trader, Toussaint Charbonneau. The Corps of Discovery made contact with many Native Americans in the Trans-Mississippi West and found them accustomed to dealing with European traders and already connected to global markets.
After crossing the Rocky Mountains, the expedition reached the Oregon Country (which was disputed land beyond the Louisiana Purchase) and the Pacific Ocean in November 1805. They returned in 1806, bringing with them an immense amount of information about the region as well as numerous plant and animal specimens. They demonstrated the possibility of overland travel to the Pacific Coast. The success of their journey helped to strengthen the American concept of "manifest destiny" – the idea that the United States was destined to reach all the way across North America from Atlantic to Pacific.
Return and gubernatorial duties
After returning from the expedition, Lewis received a reward of land. He also initially made arrangements to publish the Corps of Discovery journals, but had difficulty completing his writing. In 1807, Jefferson appointed him governor of the Louisiana Territory; he settled in St. Louis.
Lewis's record as an administrator is mixed. He published the first laws in the Upper Louisiana Territory, established roads and furthered Jefferson's mission as a strong proponent of the fur trade. He negotiated peace among several quarreling Indian tribes. His duty to enforce Indian treaties was to protect the western Indian lands from encroachment, which was opposed by the rush of settlers looking to open new lands for settlements. But due to his quarreling with local political leaders, controversy over his approvals of trading licenses, land grant politics, and Indian depredations, some historians have argued that Lewis was a poor administrator.
That view has been reconsidered in recent biographies. Lewis's primary quarrels were with his territorial secretary Frederick Bates. Bates was accused of undermining Lewis to seek Lewis's dismissal and his own appointment as governor. Because of the slow-moving mail system, former president Jefferson and Lewis's superiors in Washington got the impression that Lewis did not adequately keep in touch with them.
Bates wrote letters to Lewis's superiors accusing Lewis of profiting from a mission to return a Mandan chief to his tribe. Because of Bates' accusation, the War Department refused to reimburse Lewis for a large sum he personally advanced for the mission. When Lewis's creditors heard that Lewis would not be reimbursed for the expenses, they called Lewis's notes, forcing him to liquidate his assets, including land he was granted for the Lewis and Clark Expedition. One of the primary reasons Lewis set out for Washington on this final trip was to clear up questions raised by Bates and to seek a reimbursement of the money he had advanced for the territorial government.
The U.S. government finally reimbursed the expenses to Lewis's estate two years after his death. Bates eventually became governor of Missouri. Though some historians have speculated that Lewis abused alcohol or opiates based upon an account attributed to Gilbert C. Russell at Fort Pickering on Lewis's final journey, others have argued that Bates never alleged that Lewis suffered from such addictions and that Bates certainly would have used them against Lewis if Lewis suffered from those conditions.
Freemasonry
See List of Notable Freemasons
Lewis was a Freemason, initiated, passed and raised in the "Door To Virtue Lodge No. 44" in Albemarle, Virginia, between 1796 and 1797. On August 2, 1808, Lewis and several of his acquaintances submitted a petition to the Grand Lodge of Pennsylvania requesting dispensation to establish a lodge in St. Louis. Lewis was nominated and recommended to serve as the first Master of the proposed Lodge, which was warranted as Lodge No. 111 on September 16, 1808.
Lewis and slavery
Although Lewis attempted to supervise enslaved people while running his mother's plantation before the westward expedition, he left that post and had no valet during the expedition, unlike William Clark, who brought his slave York. Lewis made assignments to York but allowed Clark to supervise him; Lewis also granted York and Sacagawea votes during expedition meetings. Later, Lewis hired a free African-American man as his valet, John Pernia. Pernia accompanied Lewis during his final journey, although his wages were considerably in arrears. After Lewis's death, Pernia continued to Monticello and asked Jefferson to pay the $240 owed him, but was refused. Pernia later committed suicide.
Death
On September 3, 1809, Lewis set out for Washington, D.C. He hoped to resolve issues regarding the denied payment of drafts he had drawn against the War Department while serving as governor of the Upper Louisiana Territory, leaving him in potentially ruinous debt. Lewis carried his journals with him for delivery to his publisher. He intended to travel to Washington by ship from New Orleans, but changed his plans while floating down the Mississippi River from St. Louis. He disembarked and decided instead to make an overland journey via the Natchez Trace and then east to Washington (the Natchez Trace was the old pioneer road between Natchez, Mississippi, and Nashville, Tennessee). Robbers preyed on travelers on that road and sometimes killed their victims. Lewis had written his will before his journey and also attempted suicide on this journey, but was restrained.
Circumstances
According to a lost letter from October 19, 1809, to Thomas Jefferson, Lewis stopped at an inn on the Natchez Trace called Grinder's Stand, southwest of Nashville, on October 10. After dinner, he retired to his one-room cabin. In the predawn hours of October 11, the innkeeper's wife, Priscilla Griner, heard gunshots. Servants found Lewis badly injured from multiple gunshot wounds, one each to the head and gut. He bled out on his buffalo hide robe and died shortly after sunrise. The Nashville Democratic Clarion published the account, which newspapers across the country repeated and embellished. The Nashville newspaper also reported that Lewis's throat was cut. Money that Lewis had borrowed from Major Gilbert Russell at Fort Pickering to complete the journey was missing.
While Lewis's friend Thomas Jefferson and some modern historians have generally accepted Lewis's death as a suicide, debate continues, as discussed below. No one reported seeing Lewis shoot himself. Three inconsistent, somewhat contemporary accounts are attributed to Mrs. Griner, who left no written account or testimony. Some thus believe her testimony was fabricated, while others point to it as proof of suicide. Mrs. Griner claimed Lewis acted strangely the night before his death: standing and pacing during dinner and talking to himself in the way one would speak to a lawyer, with face flushed as if it had come on him in a fit. She continued to hear him talking to himself after he retired, and then at some point in the night, she heard multiple gunshots, a scuffle, and someone calling for help.
She claimed to have seen Lewis through the slit in the door, crawling back to his room. She did not explain why she stopped investigating at that point, or why she decided the next morning to send her children to look for his servants. Another account claims the servants found him in the cabin, wounded and bloody, with part of his skull gone, and that he survived for several hours. In her last account, three men followed him up the Natchez Trace, where he pulled his pistols and challenged them to a duel. She heard voices and gunfire in his cabin about 1:00 am. She then found it empty, with a large amount of gunpowder on the floor.
Lewis's relatives maintained it was murder. A coroner's inquest held immediately after his death as provided by local law did not charge anyone with any crime. The jury foreman kept a pocket diary of the proceedings, which disappeared in the early 1900s. When William Clark and Thomas Jefferson were informed of Lewis's death, both accepted the conclusion of suicide. Based on their positions and the lost Lewis letter of mid-September 1809, historian Stephen Ambrose dismissed the murder theory as "not convincing".
Later analysis
The only doctor to examine Lewis's body did not do so until nearly 40 years later, in 1848. The Tennessee State Commission, including Dr. Samuel B. Moore, charged with locating Lewis's grave and erecting a monument over it, opened Lewis's grave. The commission wrote in its official report that though the impression had long prevailed that Lewis died by his own hand, "it seems to be more probable that he died by the hands of an assassin." In the book The History of the Lewis and Clark Expedition, first printed in 1893, the editor Elliott Coues expressed doubt about Thomas Jefferson's conclusion that Lewis committed suicide, despite including the former president's Memoir of Meriwether Lewis in his book.
From 1993 to 2010, about 200 of Lewis's kin (through his sister Jane, as he had no children) sought to have the body exhumed for forensic analysis, to try to determine whether his death was a suicide or murder. A Tennessee coroner's jury in 1996 recommended exhumation. Since his gravesite is in a national monument, the National Park Service must approve. The agency refused the request in 1998, citing possible disturbance to the bodies of more than 100 pioneers buried nearby. In 2008, the Department of the Interior approved the exhumation, but rescinded that approval in 2010, stating that the decision was final. It is nonetheless improving the grave site and visitor facility.
Historian Paul Russell Cutright wrote a detailed rebuttal of the murder/robbery theory, concluding that it "lacks legs to stand on". He stressed Lewis's debts, heavy drinking, possible morphine and opium use, failure to prepare the expedition's journals for publication, repeated failure to find a wife, and the deterioration of his friendship with Thomas Jefferson. This refutation was countered by Dr. Eldon G. Chuinard, a physician, who argued for the murder hypothesis on the basis that Lewis's reported wounds were inconsistent with his reported two-hour survival after the shooting. This theory, along with other medical and psychological theories often cited by Lewis authors (syphilis, malaria, alcohol abuse, mercury poisoning, PTSD, depression, et al.), has been explored by Dr. David J. Peck, a physician, and psychologist Marti Peck in their book So Hard to Die. Leading Lewis scholars Donald Jackson, Jay H. Buckley, Clay S. Jenkinson, and others have stated that, regardless of their leanings or beliefs, the facts of his death are not known, there are no eyewitnesses, and the reliability of the reports of those in or near the place cannot be considered certain. Author Peter Stark believes that post-traumatic stress disorder may have contributed to Meriwether Lewis's condition after he spent months traversing hostile Indian territory, particularly because travelers coming afterward exhibited the same symptoms.
Memorials
Lewis was buried near present-day Hohenwald, Tennessee, near his place of death. His grave was located about 200 yards from Grinder's Stand, alongside the Natchez Trace (that section of the 1801 Natchez Trace was built by the U.S. Army under the direction of Lewis's mentor Thomas Jefferson, during Lewis's lifetime).
At first, the grave was unmarked. Alexander Wilson, an ornithologist and friend of Lewis who visited the grave in May 1810 during a trip to New Orleans to sell his drawings, wrote that he gave the innkeeper Robert Griner money to erect a fence around the grave to protect it from animals.
The State of Tennessee erected a monument over Lewis's grave in 1848. Lemuel Kirby, a stonemason from Columbia, Tennessee, chose the design of a broken column, commonly used at the time to symbolize a life cut short.
An iron fence erected around the base of the monument was partially dismantled during the Civil War by Confederate detachments under General John Bell Hood marching from Shiloh toward Franklin; they forged the iron into horseshoes.
A September 1905 article in Everybody's Magazine called attention to Lewis's abandoned and overgrown grave. A county road worker, Teen Cothran, took the initiative to open a road to the cemetery, and a local Tennessee Meriwether Lewis Monument Committee was soon formed to push for restoring Lewis's gravesite. In 1925, in response to the committee's work, President Calvin Coolidge designated Lewis's grave as the fifth National Monument in the South.
In 2009, the Lewis and Clark Trail Heritage Foundation organized a commemoration for Lewis in conjunction with their 41st annual meeting from October 3–7, 2009. It included the first national memorial service at his grave site. On October 7, 2009, near the 200th anniversary of Lewis's death, about 2,500 people (National Park Service estimate) from more than 25 states gathered at his grave to acknowledge Lewis's life and achievements. Speakers included William Clark's descendant Peyton "Bud" Clark, Lewis's collateral descendants Howell Bowen and Tom McSwain, and Stephanie Ambrose Tubbs (daughter of Stephen Ambrose, who wrote Undaunted Courage, an award-winning book about the Lewis and Clark Expedition). A bronze bust of Lewis was dedicated at the Natchez Trace Parkway for a planned visitor center at the gravesite. The District of Columbia and governors of 20 states associated with the Lewis and Clark Trail sent flags flown over state capital buildings to be carried to Lewis's grave by residents of the states, acknowledging the significance of Lewis's contribution in the creation of their states.
The 2009 ceremony at Lewis's grave was the final bicentennial event honoring the Lewis and Clark Expedition. Re-enactors from the Lewis and Clark Bicentennial participated, and official attendees included representatives from Jefferson's Monticello. Lewis and Clark descendants and family members, along with representatives of St. Louis Lodge #1, past presidents of the Lewis and Clark Trail Heritage Foundation, and the Daughters of the American Revolution, carried wreaths and led a formal procession to Lewis's grave. Samples of plants which Lewis discovered on the expedition were brought from the Trail states and laid on his grave. The U.S. Army was represented by the 101st Airborne Infantry Band and its Army chaplain. The National Park Service announced that it would rehabilitate the site.
Legacy
For many years, Lewis's legacy was overlooked, inaccurately assessed, and somewhat tarnished by his alleged suicide. Yet his contributions to science, the exploration of the western United States, and the lore of great world explorers are considered incalculable.
Four years after Lewis's death, Thomas Jefferson wrote:
"Of courage undaunted, possessing a firmness & perseverance of purpose which nothing but impossibilities could divert from its direction, careful as a father of those committed to his charge, yet steady in the maintenance of order & discipline, intimate with the Indian character, customs & principles, habituated to the hunting life, guarded by exact observation of the vegetables & animals of his own country, against losing time in the description of objects already possessed, honest, disinterested, liberal, of sound understanding and a fidelity to truth so scrupulous that whatever he should report would be as certain as if seen by ourselves, with all these qualifications as if selected and implanted by nature in one body, for this express purpose, I could have no hesitation in confiding the enterprise to him.
Jefferson wrote that Lewis had a "luminous and discriminating intellect". William Clark's first son Meriwether Lewis Clark was named after Lewis; the senior Meriwether Clark passed the name on to his son, Meriwether Lewis Clark, Jr.
Captain Meriwether Lewis and Lieutenant William Clark (de facto co-captain, and posthumously, officially promoted to captain in advance of the bicentennial) commanded the Corps of Discovery to map the course of the Missouri River to its source and the Pacific Northwest overland and water routes to and from the mouth of the Columbia River. They were honored with a 3-cent stamp issued July 24, 1954, on the 150th anniversary of the expedition's departure. The 1803 Louisiana Purchase doubled the size of the United States. Lewis and Clark described and sketched its flora and fauna and described the native inhabitants they encountered before returning to St. Louis in 1806.
Coins
Both Lewis and Clark appear on the gold Lewis and Clark Exposition dollars minted for the Lewis and Clark Centennial Exposition. Among the early United States commemorative coins, they were produced in both 1904 and 1905 and survive in relatively small numbers.
Postage stamps
The Lewis and Clark Expedition was celebrated on May 14, 2004, the 200th anniversary of its outset, with two companion 37-cent USPS stamps depicting the pair on a hilltop outlook and showing portraits of Meriwether Lewis and William Clark. A special 32-page booklet accompanied the issue in eleven cities along the route taken by the Corps of Discovery. An image of the stamp can be found on Arago online at the link in the footnote.
Flora and fauna
The plant genus Lewisia (family Portulacaceae), popular in rock gardens and including the bitterroot (Lewisia rediviva), the state flower of Montana, is named after Lewis, as are Lewis's woodpecker (Melanerpes lewis) and the westslope cutthroat trout (Oncorhynchus clarkii lewisi), a subspecies of the cutthroat trout. Lewisiopsis tweedyi, a flowering plant and the sole species in the genus Lewisiopsis (family Montiaceae), was also named after him in 1999.
In 2004, the American elm cultivar Ulmus americana 'Lewis & Clark' was released by the North Dakota State University Research Foundation in commemoration of the Lewis & Clark expedition's bicentenary; the tree is resistant to Dutch elm disease.
Geographic names
Geographic names that honor him include:
Lewis County, Kentucky
Lewis County, Tennessee
Lewis County, Missouri
Lewis County, Idaho
Lewis County, Washington
Lewisburg, Tennessee
Lewiston, Idaho
Fort Lewis, Washington, home of the U.S. Army's I Corps
Lewis and Clark County, Montana, the home of the capital city, Helena
Lewis and Clark Pass (Montana)
Lewis and Clark National Forest
Lewistown, Montana
the Lewis Range of Montana's Glacier National Park
Lewis Avenue in Billings, Montana
the Meriwether Picnic site, a day-use area in the Gates of the Mountains Wilderness north of Helena, Montana
Lewis and Clark Caverns, a cave between Three Forks and Whitehall, Montana
Seaside, Oregon, has numerous landmarks, museums, and a "Lewis and Clark Avenue" devoted to both explorers. The small city is also known as the end of their journey to the Pacific Coast.
Fort Clatsop was the encampment of the Lewis and Clark Expedition in the Oregon Country near the mouth of the Columbia River during the winter of 1805–1806. Located along the Lewis and Clark River at the north end of the Clatsop Plains approximately 5 miles (8.0 km) southwest of Astoria, the fort was the last encampment of the Corps of Discovery, before embarking on their return trip east to St. Louis.
Lewis and Clark State Park, a state park located in Williams County, North Dakota near Williston which is a part of the North Dakota Parks and Recreation Department system.
Vessels
Three U.S. Navy vessels have been named in honor of Lewis: the Liberty ship SS Meriwether Lewis, the Polaris-armed nuclear submarine USS Lewis and Clark, and the supply ship USNS Lewis and Clark.
Academic institutions
Lewis & Clark College, Portland, Oregon, was named for Meriwether Lewis and William Clark.
Lewis-Clark State College, Lewiston, Idaho, was named for Meriwether Lewis and William Clark.
Lewis and Clark Community College, Godfrey, Illinois, was named for Meriwether Lewis and William Clark. The campus lies about 11 miles upstream from the Corps of Discovery's departure point.
Lewis & Clark High School, Spokane, Washington, was named for Meriwether Lewis and William Clark.
Meriwether Lewis Elementary School, Albemarle County, Virginia was named for Meriwether Lewis, who was born nearby. The school board voted to rename the school in early 2023, despite 85% of community members voting to retain the name.
Meriwether Lewis Elementary School, Portland, Oregon was named for Meriwether Lewis.
Lewis and Clark Elementary School, Missoula, Montana was named for Meriwether Lewis and William Clark.
Popular culture
Meriwether Lewis's relationship with Thomas Jefferson; Lewis's multiple expeditions, journals, and discoveries; and details surrounding Lewis's death play major roles in James Rollins' seventh Sigma Force novel, The Devil Colony.
The mystery surrounding Meriwether Lewis's death played a role in the 2016 book, The Secret History of Twin Peaks, by author Mark Frost and in the 1998 novel by Malcolm Shuman, The Meriwether Murder.
In 2013, on the "Nashville" episode of the Comedy Central series Drunk History, Alie Ward and Georgia Hardstark retold the story of Lewis and Clark's expedition and Lewis's death, with Tony Hale portraying Lewis and Taran Killam as Clark.
In 2015, Link Neal, alongside long-time collaborator Rhett McLaughlin, portrayed Meriwether Lewis and William Clark respectively in the popular web series Epic Rap Battles of History, in the Season 4 episode "Lewis and Clark vs Bill and Ted".
Halls of fame
In 1965, he was inducted into the Hall of Great Westerners of the National Cowboy & Western Heritage Museum.
Descendants
The Arquette acting family claims descent from Meriwether Lewis. Lewis never married or had children, but he has numerous collateral descendants via his siblings. As of 2004, there were around 774 documented collateral descendants of Lewis.
See also
List of unsolved deaths
Seaman (dog)
Footnotes
References
Further reading
External links
Meriwether Lewis at nps.gov
Meriwether Lewis on The History Channel
"Writings of Lewis and Clark" from C-SPAN's American Writers: A Journey Through History
Lewis and Clark Expedition Maps and Receipt. Yale Collection of Western Americana, Beinecke Rare Book and Manuscript Library.
1774 births
1809 deaths
American explorers
American Freemasons
American naturalists
American people of English descent
American people of Welsh descent
Burials in Tennessee
Deaths by firearm in Tennessee
Explorers of Montana
Explorers of Oregon
Governors of Louisiana Territory
History of Lancaster, Pennsylvania
Lewis and Clark Expedition people
Lewis family
Military aides to the President of the United States
Multiple gunshot suicides
People from Ivy, Virginia
Personal secretaries to the President of the United States
Scientists from Virginia
United States Army officers
Unsolved deaths in the United States
```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.Console;

namespace ConsoleAppEF2.Database
{
    // Minimal entity stubs so the context compiles; the real classes
    // presumably carry more properties.
    public class Car { public int Id { get; set; } }
    public class Brand { public int Id { get; set; } }

    public class TestContext : DbContext
    {
#if EF3 || EF5
        // EF Core 3+/5: build the logger factory with the LoggerFactory.Create API.
        public static readonly ILoggerFactory MyLoggerFactory = LoggerFactory.Create(builder =>
        {
            builder
                //.AddFilter("Default", LogLevel.Information)
                .AddFilter("Microsoft", LogLevel.Information)
                //.AddFilter("System", LogLevel.Information)
                //.AddDebug()
                .AddConsole();
        });
#else
        // EF Core 2.x: the older ConsoleLoggerProvider constructor.
        public static readonly LoggerFactory MyLoggerFactory =
            new LoggerFactory(new[] { new ConsoleLoggerProvider((filter, includeScopes) => true, true) });
#endif

        public virtual DbSet<Car> Cars { get; set; }
        public virtual DbSet<Brand> Brands { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            // Warning: do not create a new ILoggerFactory instance each time.
            optionsBuilder.UseLoggerFactory(MyLoggerFactory);
            optionsBuilder.EnableSensitiveDataLogging();
            optionsBuilder.UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=CarsEF20;Trusted_Connection=True;");
        }

        // Wrapper around EF.Functions.Like, which translates to SQL LIKE in queries.
        public static bool Like(string matchExpression, string pattern) => EF.Functions.Like(matchExpression, pattern);
    }
}
```
Presidential elections were held in Iran on 23 May 1997, resulting in an unexpected win for the reformist candidate Mohammad Khatami. The election was notable not only for the winner's lopsided majority (70%) but also for the high turnout: 80% of those eligible voted, compared to 50% in the previous presidential election.
At the time of the election, the voting age was 15, and more than half of Iran's population was younger than 25.
Candidates
The Council of Guardians blocked 234 candidates from running for the presidency on the grounds that they lacked the required religious and political qualifications. Only four candidates were permitted to run for office:
Mohammad Khatami, Former Minister of Culture and Islamic Guidance
Mohammad Reyshahri, Former Minister of Intelligence and National Security
Reza Zavare'i, Member of Guardian Council
Ali Akbar Nategh-Nouri, Incumbent Speaker of the Parliament of Iran
Disqualified candidates
Ebrahim Yazdi, secretary-general of Freedom Movement of Iran
Habibollah Payman, leader of Movement of Militant Muslims
Ezzatollah Sahabi, leading Nationalist-Religious figure
Azam Taleghani, former member of the Iranian parliament
Declined to run
Mir-Hossein Mousavi, former Prime Minister
Issues
The candidates were asked for their opinions on the fatwa against Salman Rushdie. Ali Akbar Nateq-Nouri said that "a good Muslim" would carry out the fatwa; Mohammad Khatami avoided the issue. Khatami's supporters called Nateq-Nouri the "Taliban" of Iran.
Khatami ran on a platform of political liberalization at home and détente abroad and expressed support for easing Islamic regulations "from women's dress to whether TV satellite dishes should be allowed."
Endorsements
Media
During the elections, the neutrality of Islamic Republic of Iran Broadcasting (IRIB) became a subject of dispute, as the organization was accused of supporting Nateq-Nouri and promoting a conservative agenda.
Salam supported Khatami
Hamshahri supported Khatami
Resalat supported Nateq-Nouri
Kayhan supported Nateq-Nouri
Results
References
External links
Moderate triumphs in Iranian elections
Mohammad Khatami's background
Iran Elections: An Overview
Presidential elections in Iran
1997 elections in Iran
May 1997 events in Asia
Iran