Belle Prairie Township is located in Livingston County, Illinois in the United States. As of the 2010 census, its population was 135 and it contained 55 housing units.
History
Hoffman, M. M., & Hieronymus, D. J. (2002). Potosi, a ghost town and the Fairview community. Place of publication not identified: publisher not identified. A history of communities in the Belle Prairie township.
Geography
According to the 2010 census, the township has a total area of , all land.
Demographics
References
External links
US Census
City-data.com
Illinois State Archives
Townships in Livingston County, Illinois
Populated places established in 1857
Townships in Illinois
1857 establishments in Illinois
```xml
<Project Sdk="Microsoft.NET.Sdk">
<Import Project="..\..\..\..\configureawait.props" />
<Import Project="..\..\..\..\common.props" />
<PropertyGroup>
<TargetFrameworks>netstandard2.0;netstandard2.1;net8.0</TargetFrameworks>
<AssemblyName>Volo.Docs.Application.Contracts</AssemblyName>
<PackageId>Volo.Docs.Application.Contracts</PackageId>
<RootNamespace />
</PropertyGroup>
<ItemGroup>
<ProjectReference Include="..\Volo.Docs.Domain.Shared\Volo.Docs.Domain.Shared.csproj" />
<ProjectReference Include="..\..\..\..\framework\src\Volo.Abp.Ddd.Application.Contracts\Volo.Abp.Ddd.Application.Contracts.csproj" />
</ItemGroup>
</Project>
```
The Story of Chaim Rumkowski and the Jews of Łódź is a 1982 documentary film that uses archival film footage and photographs to narrate the story of one of the Holocaust's most controversial figures, Chaim Rumkowski, a Jew put in charge of the Łódź ghetto by the German occupation authorities during World War II.
Summary
Following the 1939 invasion of Poland by Nazi Germany, Rumkowski, a childless sixty-two-year-old man with billowy white hair and black circular glasses, was appointed Elder of the Judenrat in the Łódź Ghetto, where 230,000 Polish Jews were confined during the Holocaust in occupied Poland. Rumkowski created an industry within which Jews could work and make themselves useful to the Nazis, hoping to spare them the slaughter of the Holocaust. But his record of establishing a temporary refuge for the Jews is overshadowed by the fact that, to appease the Nazis, he handed over almost the entire population to Nazi extermination camps. Old photographs and the very rare surviving film footage of the Łódź ghetto serve as the visuals for a documentary that asks how much people should be willing to compromise in order to survive.
It seems that at the dawn of his power, Rumkowski was full of good intentions. He established hospitals, organized a fire department, set up a government, and cleaned the ghetto streets. Factory work gave the inhabitants a sense of purpose, and social welfare programs and institutions provided order and a feeling of community. At Łódź, Jews didn't die in the streets; instead, they died with dignity in hospitals. Of the industry he built there, he said, "Our children and our grandchildren will recall with pride the names of those who gave us the opportunity to work and the right to live." In that speech to ghetto inhabitants, he continued, "We have only our production to thank for our survival."
But when the Nazis began to demand that Łódź inhabitants be relocated to extermination camps, Rumkowski selected who would be sent away and asked that they leave without hostility. He began by sending away the ghetto's criminals but eventually pleaded with parents to let him send away their children.
Chaim Rumkowski's story raises difficult moral questions regarding power and compliance. "We must cut off the legs to save the body," Rumkowski asserted. "I must stretch out my hands and beg," he declared to the Łódź ghetto inhabitants: "Brothers and sisters—Hand them over to me! Fathers and mothers—Give me your children!" With these words, a man who once directed an orphanage pleaded for Jewish parents to peacefully surrender their children to extermination camps.
While Rumkowski assured his Jewish followers that he was fighting to save their lives, bargaining with the Nazis led him to a crucial error: like the Nazis, he began to see people as numbers rather than as individuals. The remarkable film footage of Łódź that the documentary offers brings to life the individuals behind the numbers Rumkowski decided to spare or send away.
The same complexity found in Rumkowski's character is manifest in the Łódź ghetto itself. The thriving textile industry in the ghetto may have kept its inhabitants alive, but the goods they produced were a tremendous service to the Nazis. The Jews in the ghetto unknowingly helped build and supply Nazi concentration camps throughout Europe. They fed the mouth that bit them.
Reception
The New York Times gave the documentary a positive review, describing the film as "fine" and "moving" and saying it "should be seen."
Awards
In 1982, The Story of Chaim Rumkowski and the Jews of Łódź won an Interfilm Award Honorable Mention at the International Filmfestival Mannheim-Heidelberg.
See also
Other Holocaust documentaries:
Luboml
Paradise Camp
The Sixth Battalion
They Were Not Silent
Pola's March
Marion's Triumph
Shoah
Auschwitz: The Nazis and the Final Solution
References
The Story of Chaim Rumkowski and the Jews of Lodz reviewed by The Jewish Channel
1982 films
Swedish biographical films
Documentary films about the Holocaust
Swedish documentary films
Łódź Ghetto
1982 documentary films
1980s English-language films
1980s Swedish films
```typescript
import type { DriveFileRevision } from '../../store';
import { getCategorizedRevisions } from './getCategorizedRevisions';
describe('getCategorizedRevisions', () => {
beforeEach(() => {
jest.useFakeTimers().setSystemTime(new Date('2023-03-17T20:00:00'));
});
afterEach(() => {
jest.useRealTimers();
});
it('categorizes revisions correctly', () => {
const revisions = [
{ createTime: 1679058000 }, // March 17, 2023 at 2:00 PM
{ createTime: 1679036400 }, // March 17, 2023 at 7:00 AM
{ createTime: 1678968000 }, // March 16, 2023 at 12:00 PM
{ createTime: 1678986000 }, // March 16, 2023 at 5:00 PM
{ createTime: 1678950000 }, // March 16, 2023 at 7:00 AM
{ createTime: 1678777200 }, // March 14, 2023 at 7:00 AM
{ createTime: 1678431600 }, // March 10, 2023 at 7:00 AM
{ createTime: 1678172400 }, // March 7, 2023 at 7:00 AM
{ createTime: 1675753200 }, // February 7, 2023 at 7:00 AM
{ createTime: 1675234800 }, // February 1, 2023 at 7:00 AM
{ createTime: 1640415600 }, // December 25, 2021 at 7:00 AM
{ createTime: 1621926000 }, // May 25, 2021 at 7:00 AM
{ createTime: 1593500400 }, // June 30, 2020 at 7:00 AM
{ createTime: 1559372400 }, // June 1, 2019 at 7:00 AM
] as DriveFileRevision[];
const result = getCategorizedRevisions(revisions, 'en-US');
expect([...result.entries()]).toStrictEqual([
['today', { title: 'Today', list: [revisions[0], revisions[1]] }],
['yesterday', { title: 'Yesterday', list: [revisions[2], revisions[3], revisions[4]] }],
['d2', { title: 'Tuesday', list: [revisions[5]] }],
['last-week', { title: 'Last week', list: [revisions[6], revisions[7]] }],
// eslint-disable-next-line custom-rules/deprecate-spacing-utility-classes
['m1', { title: 'February', list: [revisions[8], revisions[9]] }],
['2021', { title: '2021', list: [revisions[10], revisions[11]] }],
['2020', { title: '2020', list: [revisions[12]] }],
['2019', { title: '2019', list: [revisions[13]] }],
]);
});
});
```
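The unit test above pins a fake clock to March 17, 2023 and expects revisions to be bucketed into today, yesterday, a named weekday earlier in the current week, last week, a named month earlier in the current year, and previous years. A stdlib-only Python sketch of that bucketing idea follows; the function name and the exact bucket rules here are illustrative assumptions, not the actual `getCategorizedRevisions` implementation:

```python
# Hedged sketch of date-bucket categorization: given a fixed "now", map a
# Unix timestamp to a bucket label. Rules are illustrative approximations
# of what the test above asserts, not the real library logic.
from datetime import datetime, timedelta

def categorize(ts: float, now: datetime) -> str:
    d = datetime.fromtimestamp(ts)
    today = now.date()
    if d.date() == today:
        return "today"
    if d.date() == today - timedelta(days=1):
        return "yesterday"
    # start of the current week (Monday)
    week_start = today - timedelta(days=today.weekday())
    if d.date() >= week_start:
        return d.strftime("%A")      # weekday name, e.g. "Tuesday"
    if d.date() >= week_start - timedelta(days=7):
        return "last-week"
    if d.year == now.year:
        return d.strftime("%B")      # month name, e.g. "February"
    return str(d.year)               # older: bucket by year

now = datetime(2023, 3, 17, 20, 0)   # matches the fake timer in the test
print(categorize(datetime(2023, 3, 14, 7, 0).timestamp(), now))  # Tuesday
print(categorize(datetime(2023, 3, 10, 7, 0).timestamp(), now))  # last-week
```

Note that `%A`/`%B` yield English names under the default C locale; a real implementation would localize these via the locale argument the test passes (`'en-US'`).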
```python
"""Add node elevations from raster files or web APIs, and calculate edge grades."""
from __future__ import annotations
import logging as lg
import multiprocessing as mp
import time
from hashlib import sha1
from pathlib import Path
from typing import TYPE_CHECKING
from typing import Any
import networkx as nx
import numpy as np
import pandas as pd
import requests
from . import _http
from . import convert
from . import settings
from . import utils
from ._errors import InsufficientResponseError
if TYPE_CHECKING:
from collections.abc import Iterable
# rasterio and rio-vrt are optional dependencies for raster querying
try:
import rasterio
from rio_vrt import build_vrt
except ImportError: # pragma: no cover
rasterio = None
build_vrt = None
def add_edge_grades(G: nx.MultiDiGraph, *, add_absolute: bool = True) -> nx.MultiDiGraph:
"""
Calculate and add `grade` attributes to all graph edges.
Vectorized function to calculate the directed grade (i.e., rise over run)
for each edge in the graph and add it to the edge as an attribute. Nodes
must already have `elevation` and `length` attributes before using this
function.
See also the `add_node_elevations_raster` and `add_node_elevations_google`
functions.
Parameters
----------
G
Graph with `elevation` node attributes.
add_absolute
If True, also add absolute value of grade as `grade_abs` attribute.
Returns
-------
G
Graph with `grade` (and optionally `grade_abs`) attributes on the
edges.
"""
elev_lookup = G.nodes(data="elevation")
u, v, k, lengths = zip(*G.edges(keys=True, data="length"))
uvk = tuple(zip(u, v, k))
# calculate edges' elevation changes from u to v then divide by lengths
elevs = np.array([(elev_lookup[u], elev_lookup[v]) for u, v, k in uvk])
grades = (elevs[:, 1] - elevs[:, 0]) / np.array(lengths)
nx.set_edge_attributes(G, dict(zip(uvk, grades)), name="grade")
# optionally add grade absolute value to the edge attributes
if add_absolute:
nx.set_edge_attributes(G, dict(zip(uvk, np.abs(grades))), name="grade_abs")
msg = "Added grade attributes to all edges"
utils.log(msg, level=lg.INFO)
return G
def _query_raster(
nodes: pd.DataFrame,
filepath: str | Path,
band: int,
) -> Iterable[tuple[int, Any]]:
"""
Query a raster file for values at coordinates in DataFrame x/y columns.
Parameters
----------
nodes
DataFrame indexed by node ID and with two columns representing x and y
coordinates.
filepath
Path to the raster file or VRT to query.
band
Which raster band to query.
Returns
-------
nodes_values
Zip of node IDs and corresponding raster values.
"""
# must open raster file here: cannot pickle it to pass in multiprocessing
with rasterio.open(filepath) as raster:
values = np.array(tuple(raster.sample(nodes.to_numpy(), band)), dtype=float).squeeze()
values[values == raster.nodata] = np.nan
return zip(nodes.index, values)
def add_node_elevations_raster(
G: nx.MultiDiGraph,
filepath: str | Path | Iterable[str | Path],
*,
band: int = 1,
cpus: int | None = None,
) -> nx.MultiDiGraph:
"""
Add `elevation` attributes to all nodes from local raster file(s).
If `filepath` is an iterable of paths, this will generate a virtual raster
composed of the files at those paths as an intermediate step.
See also the `add_edge_grades` function.
Parameters
----------
G
Graph in same CRS as raster.
filepath
The path(s) to the raster file(s) to query.
band
Which raster band to query.
cpus
How many CPU cores to use. If None, use all available.
Returns
-------
G
Graph with `elevation` attributes on the nodes.
"""
if rasterio is None or build_vrt is None: # pragma: no cover
msg = "rasterio and rio-vrt must be installed as optional dependencies to query rasters."
raise ImportError(msg)
if cpus is None:
cpus = mp.cpu_count()
cpus = min(cpus, mp.cpu_count())
msg = f"Attaching elevations with {cpus} CPUs..."
utils.log(msg, level=lg.INFO)
# if multiple filepaths are passed in, compose them as a virtual raster
# use the sha1 hash of the filepaths object as the vrt filename
if not isinstance(filepath, (str, Path)):
filepaths = [str(p) for p in filepath]
checksum = sha1(str(filepaths).encode("utf-8")).hexdigest() # noqa: S324
filepath = f"./.osmnx_{checksum}.vrt"
build_vrt(filepath, filepaths)
nodes = convert.graph_to_gdfs(G, edges=False, node_geometry=False)[["x", "y"]]
if cpus == 1:
elevs = dict(_query_raster(nodes, filepath, band))
else:
# divide nodes into equal-sized chunks for multiprocessing
size = int(np.ceil(len(nodes) / cpus))
args = ((nodes.iloc[i : i + size], filepath, band) for i in range(0, len(nodes), size))
with mp.get_context("spawn").Pool(cpus) as pool:
results = pool.starmap_async(_query_raster, args).get()
elevs = {k: v for kv in results for k, v in kv}
nx.set_node_attributes(G, elevs, name="elevation")
msg = "Added elevation data from raster to all nodes"
utils.log(msg, level=lg.INFO)
return G
def add_node_elevations_google(
G: nx.MultiDiGraph,
*,
api_key: str | None = None,
batch_size: int = 512,
pause: float = 0,
) -> nx.MultiDiGraph:
"""
Add `elevation` (meters) attributes to all nodes using a web API.
By default this uses the Google Maps Elevation API, but you could instead
use any equivalent API with the same interface and response format (such
as the Open Topo Data API or the Open-Elevation API) via the `settings`
module's `elevation_url_template`. Adjust the `batch_size` and `pause`
arguments as needed for the provider. The Google Maps Elevation API
requires an API key but other providers may not. You can find more
information about the Google Maps Elevation API interface and format at:
path_to_url
For a free local alternative see the `add_node_elevations_raster`
function. See also the `add_edge_grades` function.
Parameters
----------
G
Graph to add elevation data to.
api_key
A valid API key. Can be None if the API does not require a key.
batch_size
Max number of coordinate pairs to submit in each request (depends on
provider's limits). Google's limit is 512.
pause
How long to pause in seconds between API calls, which can be increased
if you get rate limited.
Returns
-------
G
Graph with `elevation` attributes on the nodes.
"""
# make a pandas series of all the nodes' coordinates as "lat,lon" and
# round coordinates to 6 decimal places (approx 5 to 10 cm resolution)
node_points = pd.Series({n: f"{d['y']:.6f},{d['x']:.6f}" for n, d in G.nodes(data=True)})
n_calls = int(np.ceil(len(node_points) / batch_size))
domain = _http._hostname_from_url(settings.elevation_url_template)
msg = f"Requesting node elevations from {domain!r} in {n_calls} request(s)"
utils.log(msg, level=lg.INFO)
# break the series of coordinates into chunks of batch_size
# API format is locations=lat,lon|lat,lon|lat,lon|lat,lon...
results = []
for i in range(0, len(node_points), batch_size):
chunk = node_points.iloc[i : i + batch_size]
locations = "|".join(chunk)
url = settings.elevation_url_template.format(locations=locations, key=api_key)
# download and append these elevation results to list of all results
response_json = _elevation_request(url, pause)
if "results" in response_json and len(response_json["results"]) > 0:
results.extend(response_json["results"])
else:
raise InsufficientResponseError(str(response_json))
# sanity check that all our vectors have the same number of elements
msg = f"Graph has {len(G):,} nodes and we received {len(results):,} results from {domain!r}"
utils.log(msg, level=lg.INFO)
if not (len(results) == len(G) == len(node_points)): # pragma: no cover
err_msg = f"{msg}\n{response_json}"
raise InsufficientResponseError(err_msg)
# add elevation as an attribute to the nodes
df_elev = pd.DataFrame(node_points, columns=["node_points"])
df_elev["elevation"] = [result["elevation"] for result in results]
nx.set_node_attributes(G, name="elevation", values=df_elev["elevation"].to_dict())
msg = f"Added elevation data from {domain!r} to all nodes."
utils.log(msg, level=lg.INFO)
return G
def _elevation_request(url: str, pause: float) -> dict[str, Any]:
"""
Send a HTTP GET request to a Google Maps-style Elevation API.
Parameters
----------
url
URL of API endpoint, populated with request data.
pause
How long to pause in seconds before request.
Returns
-------
response_json
"""
# check if request already exists in cache
cached_response_json = _http._retrieve_from_cache(url)
if isinstance(cached_response_json, dict):
return cached_response_json
# pause then request this URL
domain = _http._hostname_from_url(url)
msg = f"Pausing {pause} second(s) before making HTTP GET request to {domain!r}"
utils.log(msg, level=lg.INFO)
time.sleep(pause)
# transmit the HTTP GET request
msg = f"Get {url} with timeout={settings.requests_timeout}"
utils.log(msg, level=lg.INFO)
response = requests.get(
url,
timeout=settings.requests_timeout,
headers=_http._get_http_headers(),
**settings.requests_kwargs,
)
response_json = _http._parse_response(response)
if not isinstance(response_json, dict): # pragma: no cover
msg = "Elevation API did not return a dict of results."
raise InsufficientResponseError(msg)
_http._save_to_cache(url, response_json, response.ok)
return response_json
```
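The `add_edge_grades` function above computes each edge's directed grade as rise over run: the elevation change from node `u` to node `v` divided by the edge length. A minimal stdlib-only sketch of that calculation on toy data (the node IDs, elevations, and lengths are made up for illustration):

```python
# Directed rise-over-run grade calculation, as in add_edge_grades but
# without numpy/networkx: grade = (elevation[v] - elevation[u]) / length.
elevations = {1: 10.0, 2: 15.0, 3: 12.0}             # node -> elevation (m)
edges = [(1, 2, 100.0), (2, 3, 60.0), (3, 1, 40.0)]  # (u, v, length in m)

grades = {}
for u, v, length in edges:
    rise = elevations[v] - elevations[u]  # elevation change from u to v
    grades[(u, v)] = rise / length        # directed grade (rise over run)

print(grades[(1, 2)])  # 0.05  -> 5% uphill
print(grades[(2, 3)])  # -0.05 -> 5% downhill
```

The grade is signed per direction, which is why the real function also offers `grade_abs` for undirected analyses.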
```c++
#include <Columns/ColumnDynamic.h>
#include <DataTypes/DataTypeFactory.h>
#include <DataTypes/DataTypeVariant.h>
#include <DataTypes/Serializations/SerializationObject.h>
#include <DataTypes/Serializations/SerializationObjectTypedPath.h>
#include <IO/ReadHelpers.h>
namespace DB
{
namespace ErrorCodes
{
extern const int NOT_IMPLEMENTED;
}
void SerializationObjectTypedPath::enumerateStreams(
DB::ISerialization::EnumerateStreamsSettings & settings,
const DB::ISerialization::StreamCallback & callback,
const DB::ISerialization::SubstreamData & data) const
{
settings.path.push_back(Substream::ObjectData);
settings.path.push_back(Substream::ObjectTypedPath);
settings.path.back().object_path_name = path;
auto path_data = SubstreamData(nested_serialization)
.withType(data.type)
.withColumn(data.column)
.withSerializationInfo(data.serialization_info)
.withDeserializeState(data.deserialize_state);
nested_serialization->enumerateStreams(settings, callback, path_data);
settings.path.pop_back();
settings.path.pop_back();
}
void SerializationObjectTypedPath::serializeBinaryBulkStatePrefix(const IColumn &, SerializeBinaryBulkSettings &, SerializeBinaryBulkStatePtr &) const
{
throw Exception(
ErrorCodes::NOT_IMPLEMENTED, "Method serializeBinaryBulkStatePrefix is not implemented for SerializationObjectTypedPath");
}
void SerializationObjectTypedPath::serializeBinaryBulkStateSuffix(SerializeBinaryBulkSettings &, SerializeBinaryBulkStatePtr &) const
{
throw Exception(
ErrorCodes::NOT_IMPLEMENTED, "Method serializeBinaryBulkStateSuffix is not implemented for SerializationObjectTypedPath");
}
void SerializationObjectTypedPath::deserializeBinaryBulkStatePrefix(
DeserializeBinaryBulkSettings & settings, DeserializeBinaryBulkStatePtr & state, SubstreamsDeserializeStatesCache * cache) const
{
settings.path.push_back(Substream::ObjectData);
settings.path.push_back(Substream::ObjectTypedPath);
settings.path.back().object_path_name = path;
nested_serialization->deserializeBinaryBulkStatePrefix(settings, state, cache);
settings.path.pop_back();
settings.path.pop_back();
}
void SerializationObjectTypedPath::serializeBinaryBulkWithMultipleStreams(const IColumn &, size_t, size_t, SerializeBinaryBulkSettings &, SerializeBinaryBulkStatePtr &) const
{
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Method serializeBinaryBulkWithMultipleStreams is not implemented for SerializationObjectTypedPath");
}
void SerializationObjectTypedPath::deserializeBinaryBulkWithMultipleStreams(
ColumnPtr & result_column,
size_t limit,
DeserializeBinaryBulkSettings & settings,
DeserializeBinaryBulkStatePtr & state,
SubstreamsCache * cache) const
{
settings.path.push_back(Substream::ObjectData);
settings.path.push_back(Substream::ObjectTypedPath);
settings.path.back().object_path_name = path;
nested_serialization->deserializeBinaryBulkWithMultipleStreams(result_column, limit, settings, state, cache);
settings.path.pop_back();
settings.path.pop_back();
}
}
```
```perl
# -*-perl-*-
use strict;
use Test qw(:DEFAULT $TESTOUT $TESTERR $ntest);
### This test is crafted in such a way as to prevent Test::Harness from
### seeing the todo tests, otherwise you get people sending in bug reports
### about Test.pm having "UNEXPECTEDLY SUCCEEDED" tests.
open F, ">", "mix";
$TESTOUT = *F{IO};
$TESTERR = *F{IO};
plan tests => 4, todo => [2,3];
# line 15
ok(sub {
my $r = 0;
for (my $x=0; $x < 10; $x++) {
$r += $x*($r+1);
}
$r
}, 3628799);
ok(0);
ok(1);
skip(1,0);
close F;
$TESTOUT = *STDOUT{IO};
$TESTERR = *STDERR{IO};
$ntest = 1;
open F, "<", "mix";
my $out = join '', <F>;
close F;
unlink "mix";
my $expect = <<"EXPECT";
1..4 todo 2 3;
ok 1
not ok 2
# Failed test 2 in $0 at line 23 *TODO*
ok 3 # ($0 at line 24 TODO?!)
ok 4 # skip
EXPECT
sub commentless {
my $in = $_[0];
$in =~ s/^#[^\n]*\n//mg;
$in =~ s/\n#[^\n]*$//mg;
return $in;
}
print "1..1\n";
ok( commentless($out), commentless($expect) );
```
```c
/* GIO - GLib Input, Output and Streaming Library
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, see <http://www.gnu.org/licenses/>.
 *
* Author: Vlad Grecescu <b100dian@gmail.com>
* Author: Chun-wei Fan <fanc999@yahoo.com.tw>
*
*/
#include "config.h"
#include "gwin32fsmonitorutils.h"
#include "gio/gfile.h"
#include <windows.h>
#define MAX_PATH_LONG 32767 /* Support Paths longer than MAX_PATH (260) characters */
static gboolean
g_win32_fs_monitor_handle_event (GWin32FSMonitorPrivate *monitor,
const gchar *filename,
PFILE_NOTIFY_INFORMATION pfni)
{
GFileMonitorEvent fme;
PFILE_NOTIFY_INFORMATION pfni_next;
WIN32_FILE_ATTRIBUTE_DATA attrib_data = {0, };
gchar *renamed_file = NULL;
switch (pfni->Action)
{
case FILE_ACTION_ADDED:
fme = G_FILE_MONITOR_EVENT_CREATED;
break;
case FILE_ACTION_REMOVED:
fme = G_FILE_MONITOR_EVENT_DELETED;
break;
case FILE_ACTION_MODIFIED:
{
gboolean success_attribs = GetFileAttributesExW (monitor->wfullpath_with_long_prefix,
GetFileExInfoStandard,
&attrib_data);
if (monitor->file_attribs != INVALID_FILE_ATTRIBUTES &&
success_attribs &&
attrib_data.dwFileAttributes != monitor->file_attribs)
fme = G_FILE_MONITOR_EVENT_ATTRIBUTE_CHANGED;
else
fme = G_FILE_MONITOR_EVENT_CHANGED;
monitor->file_attribs = attrib_data.dwFileAttributes;
}
break;
case FILE_ACTION_RENAMED_OLD_NAME:
if (pfni->NextEntryOffset != 0)
{
/* If the file was renamed in the same directory, we would get a
* FILE_ACTION_RENAMED_NEW_NAME action in the next FILE_NOTIFY_INFORMATION
* structure.
*/
glong file_name_len = 0;
pfni_next = (PFILE_NOTIFY_INFORMATION) ((BYTE*)pfni + pfni->NextEntryOffset);
renamed_file = g_utf16_to_utf8 (pfni_next->FileName, pfni_next->FileNameLength / sizeof(WCHAR), NULL, &file_name_len, NULL);
if (pfni_next->Action == FILE_ACTION_RENAMED_NEW_NAME)
fme = G_FILE_MONITOR_EVENT_RENAMED;
else
fme = G_FILE_MONITOR_EVENT_MOVED_OUT;
}
else
fme = G_FILE_MONITOR_EVENT_MOVED_OUT;
break;
case FILE_ACTION_RENAMED_NEW_NAME:
if (monitor->pfni_prev != NULL &&
monitor->pfni_prev->Action == FILE_ACTION_RENAMED_OLD_NAME)
{
/* don't bother sending events, was already sent (rename) */
fme = -1;
}
else
fme = G_FILE_MONITOR_EVENT_MOVED_IN;
break;
default:
/* The possible Windows actions are all above, so shouldn't get here */
g_assert_not_reached ();
break;
}
if (fme != -1)
return g_file_monitor_source_handle_event (monitor->fms,
fme,
filename,
renamed_file,
NULL,
g_get_monotonic_time ());
else
return FALSE;
}
static void CALLBACK
g_win32_fs_monitor_callback (DWORD error,
DWORD nBytes,
LPOVERLAPPED lpOverlapped)
{
gulong offset;
PFILE_NOTIFY_INFORMATION pfile_notify_walker;
GWin32FSMonitorPrivate *monitor = (GWin32FSMonitorPrivate *) lpOverlapped;
DWORD notify_filter = monitor->isfile ?
(FILE_NOTIFY_CHANGE_FILE_NAME |
FILE_NOTIFY_CHANGE_ATTRIBUTES |
FILE_NOTIFY_CHANGE_SIZE) :
(FILE_NOTIFY_CHANGE_FILE_NAME |
FILE_NOTIFY_CHANGE_DIR_NAME |
FILE_NOTIFY_CHANGE_ATTRIBUTES |
FILE_NOTIFY_CHANGE_SIZE);
/* If monitor->self is NULL the GWin32FileMonitor object has been destroyed. */
if (monitor->self == NULL ||
g_file_monitor_is_cancelled (monitor->self) ||
monitor->file_notify_buffer == NULL)
{
g_free (monitor->file_notify_buffer);
g_free (monitor);
return;
}
offset = 0;
do
{
pfile_notify_walker = (PFILE_NOTIFY_INFORMATION)((BYTE *)monitor->file_notify_buffer + offset);
if (pfile_notify_walker->Action > 0)
{
glong file_name_len;
gchar *changed_file;
changed_file = g_utf16_to_utf8 (pfile_notify_walker->FileName,
pfile_notify_walker->FileNameLength / sizeof(WCHAR),
NULL, &file_name_len, NULL);
if (monitor->isfile)
{
gint long_filename_length = wcslen (monitor->wfilename_long);
gint short_filename_length = wcslen (monitor->wfilename_short);
enum GWin32FileMonitorFileAlias alias_state;
/* If monitoring a file, check that the changed file
* in the directory matches the file that is to be monitored
* We need to check both the long and short file names for the same file.
*
* We need to send in the name of the monitored file, not its long (or short) variant,
* if they exist.
*/
if (_wcsnicmp (pfile_notify_walker->FileName,
monitor->wfilename_long,
long_filename_length) == 0)
{
if (_wcsnicmp (pfile_notify_walker->FileName,
monitor->wfilename_short,
short_filename_length) == 0)
{
alias_state = G_WIN32_FILE_MONITOR_NO_ALIAS;
}
else
alias_state = G_WIN32_FILE_MONITOR_LONG_FILENAME;
}
else if (_wcsnicmp (pfile_notify_walker->FileName,
monitor->wfilename_short,
short_filename_length) == 0)
{
alias_state = G_WIN32_FILE_MONITOR_SHORT_FILENAME;
}
else
alias_state = G_WIN32_FILE_MONITOR_NO_MATCH_FOUND;
if (alias_state != G_WIN32_FILE_MONITOR_NO_MATCH_FOUND)
{
wchar_t *monitored_file_w;
gchar *monitored_file;
switch (alias_state)
{
case G_WIN32_FILE_MONITOR_NO_ALIAS:
monitored_file = g_strdup (changed_file);
break;
case G_WIN32_FILE_MONITOR_LONG_FILENAME:
case G_WIN32_FILE_MONITOR_SHORT_FILENAME:
monitored_file_w = wcsrchr (monitor->wfullpath_with_long_prefix, L'\\');
monitored_file = g_utf16_to_utf8 (monitored_file_w + 1, -1, NULL, NULL, NULL);
break;
default:
g_assert_not_reached ();
break;
}
g_win32_fs_monitor_handle_event (monitor, monitored_file, pfile_notify_walker);
g_free (monitored_file);
}
}
else
g_win32_fs_monitor_handle_event (monitor, changed_file, pfile_notify_walker);
g_free (changed_file);
}
monitor->pfni_prev = pfile_notify_walker;
offset += pfile_notify_walker->NextEntryOffset;
}
while (pfile_notify_walker->NextEntryOffset);
ReadDirectoryChangesW (monitor->hDirectory,
monitor->file_notify_buffer,
monitor->buffer_allocated_bytes,
FALSE,
notify_filter,
&monitor->buffer_filled_bytes,
&monitor->overlapped,
g_win32_fs_monitor_callback);
}
void
g_win32_fs_monitor_init (GWin32FSMonitorPrivate *monitor,
const gchar *dirname,
const gchar *filename,
gboolean isfile)
{
wchar_t *wdirname_with_long_prefix = NULL;
const gchar LONGPFX[] = "\\\\?\\";
gchar *fullpath_with_long_prefix, *dirname_with_long_prefix;
DWORD notify_filter = isfile ?
(FILE_NOTIFY_CHANGE_FILE_NAME |
FILE_NOTIFY_CHANGE_ATTRIBUTES |
FILE_NOTIFY_CHANGE_SIZE) :
(FILE_NOTIFY_CHANGE_FILE_NAME |
FILE_NOTIFY_CHANGE_DIR_NAME |
FILE_NOTIFY_CHANGE_ATTRIBUTES |
FILE_NOTIFY_CHANGE_SIZE);
gboolean success_attribs;
WIN32_FILE_ATTRIBUTE_DATA attrib_data = {0, };
if (dirname != NULL)
{
dirname_with_long_prefix = g_strconcat (LONGPFX, dirname, NULL);
wdirname_with_long_prefix = g_utf8_to_utf16 (dirname_with_long_prefix, -1, NULL, NULL, NULL);
if (isfile)
{
gchar *fullpath;
wchar_t wlongname[MAX_PATH_LONG];
wchar_t wshortname[MAX_PATH_LONG];
wchar_t *wfullpath, *wbasename_long, *wbasename_short;
fullpath = g_build_filename (dirname, filename, NULL);
fullpath_with_long_prefix = g_strconcat (LONGPFX, fullpath, NULL);
wfullpath = g_utf8_to_utf16 (fullpath, -1, NULL, NULL, NULL);
monitor->wfullpath_with_long_prefix =
g_utf8_to_utf16 (fullpath_with_long_prefix, -1, NULL, NULL, NULL);
/* ReadDirectoryChangesW() can return the normal filename or the
* "8.3" format filename, so we need to keep track of both these names
* so that we can check against them later when it returns
*/
if (GetLongPathNameW (monitor->wfullpath_with_long_prefix, wlongname, MAX_PATH_LONG) == 0)
{
wbasename_long = wcsrchr (monitor->wfullpath_with_long_prefix, L'\\');
monitor->wfilename_long = wbasename_long != NULL ?
wcsdup (wbasename_long + 1) :
wcsdup (wfullpath);
}
else
{
wbasename_long = wcsrchr (wlongname, L'\\');
monitor->wfilename_long = wbasename_long != NULL ?
wcsdup (wbasename_long + 1) :
wcsdup (wlongname);
}
if (GetShortPathNameW (monitor->wfullpath_with_long_prefix, wshortname, MAX_PATH_LONG) == 0)
{
wbasename_short = wcsrchr (monitor->wfullpath_with_long_prefix, L'\\');
monitor->wfilename_short = wbasename_short != NULL ?
wcsdup (wbasename_short + 1) :
wcsdup (wfullpath);
}
else
{
wbasename_short = wcsrchr (wshortname, L'\\');
monitor->wfilename_short = wbasename_short != NULL ?
wcsdup (wbasename_short + 1) :
wcsdup (wshortname);
}
g_free (fullpath);
}
else
{
monitor->wfilename_short = NULL;
monitor->wfilename_long = NULL;
monitor->wfullpath_with_long_prefix = g_utf8_to_utf16 (dirname_with_long_prefix, -1, NULL, NULL, NULL);
}
monitor->isfile = isfile;
}
else
{
dirname_with_long_prefix = g_strconcat (LONGPFX, filename, NULL);
monitor->wfullpath_with_long_prefix = g_utf8_to_utf16 (dirname_with_long_prefix, -1, NULL, NULL, NULL);
monitor->wfilename_long = NULL;
monitor->wfilename_short = NULL;
monitor->isfile = FALSE;
}
success_attribs = GetFileAttributesExW (monitor->wfullpath_with_long_prefix,
GetFileExInfoStandard,
&attrib_data);
if (success_attribs)
monitor->file_attribs = attrib_data.dwFileAttributes; /* Store up original attributes */
else
monitor->file_attribs = INVALID_FILE_ATTRIBUTES;
monitor->pfni_prev = NULL;
monitor->hDirectory = CreateFileW (wdirname_with_long_prefix != NULL ? wdirname_with_long_prefix : monitor->wfullpath_with_long_prefix,
FILE_GENERIC_READ | FILE_GENERIC_WRITE,
FILE_SHARE_DELETE | FILE_SHARE_READ | FILE_SHARE_WRITE,
NULL,
OPEN_EXISTING,
FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OVERLAPPED,
NULL);
g_free (wdirname_with_long_prefix);
g_free (dirname_with_long_prefix);
if (monitor->hDirectory != INVALID_HANDLE_VALUE)
{
ReadDirectoryChangesW (monitor->hDirectory,
monitor->file_notify_buffer,
monitor->buffer_allocated_bytes,
FALSE,
notify_filter,
&monitor->buffer_filled_bytes,
&monitor->overlapped,
g_win32_fs_monitor_callback);
}
}
GWin32FSMonitorPrivate *
g_win32_fs_monitor_create (gboolean isfile)
{
GWin32FSMonitorPrivate *monitor = g_new0 (GWin32FSMonitorPrivate, 1);
monitor->buffer_allocated_bytes = 32784;
monitor->file_notify_buffer = g_new0 (FILE_NOTIFY_INFORMATION, monitor->buffer_allocated_bytes);
return monitor;
}
void
g_win32_fs_monitor_finalize (GWin32FSMonitorPrivate *monitor)
{
g_free (monitor->wfullpath_with_long_prefix);
g_free (monitor->wfilename_long);
g_free (monitor->wfilename_short);
if (monitor->hDirectory == INVALID_HANDLE_VALUE)
{
/* If we don't have a directory handle we can free
* monitor->file_notify_buffer and monitor here. The
* callback won't be called obviously any more (and presumably
* never has been called).
*/
g_free (monitor->file_notify_buffer);
monitor->file_notify_buffer = NULL;
g_free (monitor);
}
else
{
/* If we have a directory handle, the OVERLAPPED struct is
* passed once more to the callback as a result of the
* CloseHandle() done in the cancel method, so monitor has to
* be kept around. The GWin32DirectoryMonitor object is
* disappearing, so can't leave a pointer to it in
* monitor->self.
*/
monitor->self = NULL;
}
}
void
g_win32_fs_monitor_close_handle (GWin32FSMonitorPrivate *monitor)
{
/* This triggers a last callback() with nBytes==0. */
/* Actually I am not so sure about that, it seems to trigger a last
* callback allright, but the way to recognize that it is the final
* one is not to check for nBytes==0, I think that was a
* misunderstanding.
*/
if (monitor->hDirectory != INVALID_HANDLE_VALUE)
CloseHandle (monitor->hDirectory);
}
```
Maghas or Maas, more properly, Mags or Maks, was the capital city of Alania, a medieval kingdom in the Greater Caucasus. It is known from Islamic and Chinese sources, but its location is uncertain, with some authors favouring North Ossetia and others pointing to Arkhyz in modern-day Karachay–Cherkessia, where three 10th-century churches still stand.
Historian John Latham-Sprinkle of Ghent University in Belgium identified Maghas with an archaeological site known as Il'ichevskoye Gorodische in Otradnensky District, Krasnodar Krai.
The destruction of Maghas is ascribed to Batu Khan, a Mongol leader and a grandson of Genghis Khan, in the beginning of 1239. Some Russian geographers, like D. V. Zayats, point to a location in Ingushetia.
The capital of the Russian Republic of Ingushetia, Magas, is named after Maghas.
Name
The name is given in Arabic sources as Maghas or Ma'as, in Persian as Magas or Makas, and in Chinese as Muzashan (木栅山). The name Magas is a homonym of the Persian word magas, meaning "fly", and the medieval writers al-Mas'udi and Juvayni made plays on words about the city's name. The Chinese transcription Muzashan uses the characters for wood (mu, 木) and mountain (shan, 山), which John Latham-Sprinkle interprets as a possible reference to the city's location in rough terrain.
Contemporary documentation
The main historical references for the city of Maghas are al-Mas'udi's Murūj al-Dhahab, written sometime in the 940s; Juvayni's Tarīkh-i Jahāngushāy, from the 1250s; Rashid al-Din Tabib's Jāmi' al-Tawārīkh, written 1310; and the Yuanshi, compiled in Ming China around 1369. Al-Mas'udi, who had travelled through the Caucasus in the 930s, wrote that Maghas was the capital of the Alans, or al-Lān, although their unnamed king periodically travelled from one residence to another. Juvayni's account, written three centuries later, is the earliest to mention the Mongol capture of Maghas, although he does not provide a specific date for this event. His account is fairly imprecise in general, and he was apparently misinformed on the location of Maghas in particular, since he implied it was located in Rus' rather than the northern Caucasus. In his description of the Mongol capture of Maghas, he wrote that it was heavily fortified and located in a heavily wooded area, so that the Mongols had to cut paths through the woods for their heavy siege equipment. He wrote that, after the Mongols captured Maghas, they massacred the population, so that there was nothing left of the city but its namesake flies.
The Yuanshi, which contains biographies of commanders serving the Yuan dynasty, provides the most detailed account of the siege of Maghas. In particular, its biography of Shiri-Gambu, an ethnically Tangut commander fighting for the Mongols, states that the siege began in the 11th lunar month of 1239 (i.e. 27 November through 26 December) and ended in the 2nd lunar month of 1240 (i.e. 6-24 February). The final assault of the city was "conducted by a number of small squads", which were apparently composed of ethnically diverse allied troops rather than Mongols themselves — besides the Tangut Shiri-Gambu, the Yuanshi also mentions that there were Qipchaq troops present at the siege, as well as a group of Alans who were Mongol allies. The biography of Shiri-Gambu also describes the city of Maghas as being "surrounded by a high wall, and in a strong natural position".
Vladimir Minorsky argued that another, garbled reference to the city may be found in the Hudud al-Alam, an anonymous 10th-century Persian geographical text. While it does not directly mention Maghas, it contains a passage claiming that the people of Sarir, the eastern neighbor of Alania, left food out in order to avoid being eaten by giant flies the size of partridges. Minorsky interpreted this as a possible, somewhat confused reference to Sarir sending tribute to the city of Maghas, based on the name being a homophone for "fly" in Persian.
Other historical sources mentioning Maghas either simply copy from earlier works, such as Yaqut al-Hamawi's Mu'jam al-Buldān, or only mention the city in passing, such as the 13th-century Secret History of the Mongols.
References
Sources
Medieval history of the Caucasus
History of the North Caucasus
History of North Ossetia–Alania
Former populated places in Russia
Former national capitals
Alans
|
```smalltalk
Extension { #name : 'Object' }
{ #category : '*Calypso-Browser' }
Object class >> asCalypsoItemContext [
^ClyBrowserItemContext itemType: self
]
```
|
```go
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
package common
type (
// Daemon is the base interface implemented by
// background tasks within cherami
Daemon interface {
Start()
Stop()
}
)
```
|
```c
/*
*
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
/*
* This is an open source non-commercial project. Dear PVS-Studio, please check it.
* PVS-Studio Static Code Analyzer for C, C++ and C#: path_to_url
*/
#include <config.h>
#ifdef HAVE_GSS_KRB5_CCACHE_NAME
# if defined(HAVE_GSSAPI_GSSAPI_KRB5_H)
# include <gssapi/gssapi.h>
# include <gssapi/gssapi_krb5.h>
# elif defined(HAVE_GSSAPI_GSSAPI_H)
# include <gssapi/gssapi.h>
# else
# include <gssapi.h>
# endif
#endif
#include <sudo.h>
#include <sudo_dso.h>
#include <sudo_plugin.h>
#ifdef STATIC_SUDOERS_PLUGIN
extern struct policy_plugin sudoers_policy;
extern struct io_plugin sudoers_io;
extern struct audit_plugin sudoers_audit;
static struct sudo_preload_symbol sudo_rtld_default_symbols[] = {
# ifdef HAVE_GSS_KRB5_CCACHE_NAME
{ "gss_krb5_ccache_name", (void *)&gss_krb5_ccache_name},
# endif
{ (const char *)0, (void *)0 }
};
/* XXX - can we autogenerate these? */
static struct sudo_preload_symbol sudo_sudoers_plugin_symbols[] = {
{ "sudoers_policy", (void *)&sudoers_policy },
{ "sudoers_io", (void *)&sudoers_io },
{ "sudoers_audit", (void *)&sudoers_audit },
{ (const char *)0, (void *)0 }
};
/*
* Statically compiled symbols indexed by handle.
*/
static struct sudo_preload_table sudo_preload_table[] = {
{ (char *)0, SUDO_DSO_DEFAULT, sudo_rtld_default_symbols },
{ _PATH_SUDOERS_PLUGIN, &sudo_sudoers_plugin_symbols, sudo_sudoers_plugin_symbols },
{ (char *)0, (void *)0, (struct sudo_preload_symbol *)0 }
};
void
preload_static_symbols(void)
{
sudo_dso_preload_table(sudo_preload_table);
}
#endif /* STATIC_SUDOERS_PLUGIN */
```
|
The Metropolitan Poor Act 1867 was an Act of Parliament of the United Kingdom, the first in a series of major reforms that led to the gradual separation of the Poor Law's medical functions from its poor relief functions. It also led to the creation of a separate administrative authority, the Metropolitan Asylums Board.
The legislation provided that a single Metropolitan Poor Rate would be levied across the Metropolis: this being defined as the area of the Metropolitan Board of Works. The Poor Law Board (a central government body) was empowered to form the areas of the various parish and poor law unions into districts for the provision of "Asylums for the Sick, Insane, and other Classes of the Poor".
An order was signed on 16 May 1867, combining all the parishes and unions in the Metropolis into a single Metropolitan Asylum District "for the reception and relief of the classes of poor persons chargeable to some union or parish in the said district respectively who may be infected with or suffering from fever, or the disease of small-pox or may be insane." The Metropolitan Asylums Board was established with 60 members: 45 elected by the various poor law boards of guardians and 15 nominated by the Poor Law Board.
The legislation amended the Poor Law Amendment Act 1834 to allow control over parishes that had been excluded from it by local acts. The ten parishes were St James Clerkenwell, St George Hanover Square, St Giles and St George Bloomsbury, St Mary Islington, St James Westminster, St Luke, St Margaret and St John Westminster, St Marylebone, St Mary Newington and St Pancras.
It permitted the employment of probationary nurses who were trained for a year in the sick asylums. These nurses gradually began to replace the employment of untrained paupers.
References
External links
United Kingdom Acts of Parliament 1867
Acts of the Parliament of the United Kingdom concerning London
English Poor Laws
1867 in London
March 1867 events
|
The 1960 Saskatchewan Roughriders finished in fifth place (last) in the W.I.F.U. with a 2–12–2 record. Their six points were six behind the fourth-place BC Lions, and eight points behind the third-place Calgary Stampeders who claimed the third and final playoff spot.
1960 Preseason
On July 28, the Roughriders played the London Lords of the Senior Ontario Rugby Football Union in London, Ontario, and beat their hosts 38–0.
1960 regular season
Season Standings
1960 Season schedule
The Saskatchewan Roughriders failed to make the playoffs.
1960 CFL Schenley Award Nominees
References
Saskatchewan Roughriders seasons
Saskatchewan Roughriders
1960 Canadian Football League season by team
|
```typescript
/*
* This software is released under MIT license.
* The full license information can be found in LICENSE in the root directory of this project.
*/
import { registerElementSafely } from '@cds/core/internal';
import { html, LitElement } from 'lit';
import { createTestElement, removeTestElement } from '@cds/core/test';
import { querySlot, querySlotAll } from './query-slot.js';
/** @element test-element */
class TestElement extends LitElement {
@querySlot('#test') test: HTMLDivElement;
@querySlotAll('.item') testItems: NodeListOf<HTMLDivElement>;
render() {
return html` <slot></slot> `;
}
}
registerElementSafely('test-element', TestElement);
describe('query slot decorator', () => {
let testElement: HTMLElement;
let component: TestElement;
beforeEach(async () => {
testElement = await createTestElement(html`
<test-element>
<div id="test">hi</div>
<div class="item">item 1</div>
<div class="item">item 2</div>
<div class="item">item 3</div>
</test-element>
`);
component = testElement.querySelector<TestElement>('test-element');
});
afterEach(() => {
removeTestElement(testElement);
});
it('should get a single element reference from a slotted element', () => {
expect(component).toBeTruthy();
expect(component.test).toBeTruthy();
expect(component.test.innerText).toBe('hi');
});
it('should get a Node List of element from slotted elements', () => {
expect(component).toBeTruthy();
expect(component.testItems.length).toBe(3);
expect(Array.from(component.testItems)[0].innerText).toBe('item 1');
});
it('should throw if element is required', () => {
class Proto {
@querySlot('cds-error', { required: 'error' }) testError: HTMLDivElement;
tagName = 'test-el';
firstUpdated() {
// do nothing
}
querySelector() {
// do nothing
}
}
const proto = new Proto();
try {
proto.firstUpdated();
} catch (e) {
expect(e.toString()).toBe('Error: The <cds-error> element is required to use <test-el>');
}
});
it('should throw if element is required and contains custom message', () => {
class Proto {
@querySlot('#errorMessage', { required: 'error', requiredMessage: 'test message' })
testErrorWithMessage: HTMLDivElement;
firstUpdated() {
// do nothing
}
querySelector() {
// do nothing
}
}
const proto = new Proto();
try {
proto.firstUpdated();
} catch (e) {
expect(e.toString()).toBe('Error: test message');
}
});
it('should NOT throw if element is required but has exempt callback that returns true', () => {
class Proto {
@querySlot('#errorMessage', {
required: 'error',
requiredMessage: 'test message',
exemptOn: () => {
return true;
},
})
testErrorWithMessage: HTMLDivElement;
firstUpdated() {
// do nothing
}
querySelector() {
// do nothing
}
}
const proto = new Proto();
let errorMessage: string;
try {
proto.firstUpdated();
} catch (e) {
errorMessage = e.toString();
}
expect(errorMessage).toBeUndefined('The exempt condition has been met - thus no errors must be thrown.');
});
it('should throw if element is required but has exempt callback that returns false', () => {
class Proto {
@querySlot('#errorMessage', {
required: 'error',
requiredMessage: 'test message',
exemptOn: () => {
return false;
},
})
testErrorWithMessage: HTMLDivElement;
firstUpdated() {
// do nothing
}
querySelector() {
// do nothing
}
}
const proto = new Proto();
let errorMessage: string;
try {
proto.firstUpdated();
} catch (e) {
errorMessage = e.toString();
}
expect(errorMessage).toBe('Error: test message');
});
it('should support native decorator API proposal', () => {
const proto = { key: 'testEvent' };
const conf = querySlot('#test')(proto, undefined);
expect(conf.key).toBe('testEvent');
});
});
```
|
Ranco is a comune (municipality) on the shore of Lago Maggiore in the Province of Varese in the Italian region Lombardy, located about northwest of Milan and about west of Varese. As of 31 December 2004, it had a population of 1,188 and an area of .
The village has a Michelin Guide-acclaimed restaurant (Il sole di Ranco) and a Transport Museum. The municipality of Ranco contains the frazione (subdivision) Uponne.
Ranco borders the following municipalities: Angera, Ispra, Lesa, Meina.
Demographic evolution
References
Cities and towns in Lombardy
|
```c++
//
//
// path_to_url
//
// Unless required by applicable law or agreed to in writing, software
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#include "paddle/cinn/common/arithmetic.h"
#include <ginac/ginac.h>
#include <glog/logging.h>
#include <gtest/gtest.h>
#include "paddle/cinn/common/ir_util.h"
#include "paddle/cinn/ir/ir.h"
#include "paddle/cinn/ir/ir_printer.h"
#include "paddle/cinn/ir/op/ir_operators.h"
#include "paddle/cinn/utils/string.h"
namespace cinn {
namespace common {
using utils::GetStreamCnt;
using utils::Join;
using utils::Trim;
using namespace ir; // NOLINT
TEST(GiNaC, simplify) {
using namespace GiNaC; // NOLINT
GiNaC::symbol x("x");
GiNaC::symbol y("y");
ex e = x * 0 + 1 + 2 + 3 - 100 + 30 * y - y * 21 + 0 * x;
LOG(INFO) << "e: " << e;
}
TEST(GiNaC, diff) {
using namespace GiNaC; // NOLINT
GiNaC::symbol x("x"), y("y");
ex e = (x + 1);
ex e1 = (y + 1);
e = diff(e, x);
e1 = diff(e1, x);
LOG(INFO) << "e: " << eval(e);
LOG(INFO) << "e1: " << eval(e1);
}
TEST(GiNaC, solve) {
using namespace GiNaC; // NOLINT
GiNaC::symbol x("x"), y("y");
lst eqns{2 * x + 3 == 19};
lst vars{x};
LOG(INFO) << "solve: " << lsolve(eqns, vars);
LOG(INFO) << diff(2 * x + 3, x);
}
TEST(Solve, basic) {
Var i("i", Int(32));
Expr lhs = Expr(i) * 2;
Expr rhs = Expr(2) * Expr(200);
Expr res;
bool is_positive;
std::tie(res, is_positive) = Solve(lhs, rhs, i);
LOG(INFO) << "res: " << res;
EXPECT_TRUE(is_positive);
EXPECT_TRUE(res == Expr(200));
}
TEST(Solve, basic1) {
Var i("i", Int(32));
Expr lhs = Expr(i) * 2;
Expr rhs = Expr(2) * Expr(200) + 3 * Expr(i);
Expr res;
bool is_positive;
std::tie(res, is_positive) = Solve(lhs, rhs, i);
LOG(INFO) << "res " << res;
EXPECT_TRUE(res == Expr(-400));
EXPECT_FALSE(is_positive);
}
} // namespace common
} // namespace cinn
```
|
The pound was the currency of Bermuda until 1970. It was equivalent to sterling, alongside which it circulated, and was similarly divided into 20 shillings each of 12 pence. Bermuda decimalised in 1970, replacing the pound with the Bermudian dollar at a rate of $1 = 8s.4d. (i.e., $1 = 100d), equal to the US dollar.
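The decimalisation rate quoted above is easy to verify with a little arithmetic (a Python sketch; the helper names are illustrative, not from any source):

```python
PENCE_PER_SHILLING = 12  # under the pre-1970 £sd system

def lsd_to_old_pence(shillings, pence):
    """Convert a shillings-and-pence amount to old pence."""
    return shillings * PENCE_PER_SHILLING + pence

# The 1970 conversion rate: $1 = 8s.4d., i.e. one dollar = 100 old pence,
# so one old penny became exactly one cent of the new dollar.
old_pence_per_dollar = lsd_to_old_pence(8, 4)
print(old_pence_per_dollar)  # 100

# A pound of 240 old pence therefore converts to $2.40, matching the
# sterling-area rate of £1 = $2.40 mentioned later in this article.
print(240 / old_pence_per_dollar)  # 2.4
```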
Coins
The first Bermudian currency issue was the so-called "hogge money": 2d, 3d, 6d and 1/– coins issued between 1612 and 1624. Their name derives from the appearance of a pig on the obverse. At this time, Bermuda was known as Somers Island (which is still an official name), and this name appears on the coins. The next coins to be issued were copper pennies in 1793. When Bermuda adopted the sterling currency system in the first half of the nineteenth century, the coinage that circulated was exactly the standard sterling coinage of the United Kingdom; no special varieties were ever issued for general use in Bermuda. However, special silver crowns (five shillings) were issued in 1959 and again in 1964. These commemoratives were similar in appearance to the British crowns but featured Bermudian designs on their reverses: the first issue has a map of the islands to mark the 350th anniversary of their settlement, and the second shows the islands' coat of arms. Because of the rising price of precious metals, the diameter of the 1964 issue was reduced from 38 to 36 millimetres and the silver content dropped from 92.5% to 50%. Their respective mintages were 100,000 and 500,000 (30,000 of the latter being issued in proof). Both coins remain readily available to collectors.
Banknotes
In 1914, the government introduced £1 notes. In 1920, 5/– notes were introduced, followed by 10/– in 1927 and £5 in 1941. The 5/– note ceased production in 1957, with £10 notes introduced in 1964.
History
For nearly four hundred years Spanish dollars, known as pieces of eight, were in widespread use on the world's trading routes, including the Caribbean Sea region. However, following the revolutionary wars in Latin America, the source of these silver trade coins dried up. The last Spanish dollar was minted at the Potosi mint in 1825.
The United Kingdom had adopted a very successful gold standard in 1821, and so the year 1825 was an opportune time to introduce the British sterling coinage into all the British colonies. An imperial order-in-council was passed that year to facilitate this aim, making sterling coinage legal tender in the colonies at the specified rating of $1 = 4s.4d. (one Spanish dollar to four shillings and four pence sterling). As the sterling silver coins were attached to a gold standard, this exchange rate did not realistically represent the value of the silver in the Spanish dollars as compared to the value of the gold in the British gold sovereign, and as such, the order-in-council had the reverse effect in many colonies: it actually drove sterling coinage out rather than encouraging its circulation.
Remedial legislation had to be introduced in 1838 so as to change over to the more realistic rating of $1 = 4s.2d. However, in Jamaica, British Honduras, Bermuda, and later in the Bahamas also, the official rating was set aside in favour of what was known as the 'Maccaroni' tradition in which a sterling shilling, referred to as a 'Maccaroni', was treated as one quarter of a dollar. The common link between these four territories was the Bank of Nova Scotia which brought in the 'Maccaroni' tradition, resulting in the successful introduction of both sterling coinage and sterling accounts.
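The gap between the official 1838 rating and the "Maccaroni" practice is visible with a little arithmetic (a Python sketch; variable names are illustrative):

```python
PENCE_PER_SHILLING = 12  # under the £sd system

# Official 1838 rating: $1 = 4s.2d. = 50 old pence.
official_pence_per_dollar = 4 * PENCE_PER_SHILLING + 2
print(official_pence_per_dollar)  # 50

# 'Maccaroni' practice: a shilling passed as a quarter dollar,
# i.e. $1 = 4s. = 48 old pence -- a slight overvaluation of the shilling
# relative to the official rating.
maccaroni_pence_per_dollar = 4 * PENCE_PER_SHILLING
print(maccaroni_pence_per_dollar)  # 48
```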
It was not until 1 January 1842 that the authorities in Bermuda formally decided to make sterling the official currency of the colony, to circulate concurrently with doubloons (64 shillings) at the rate of $1 = 4s.2d. Contrary to expectations, and unlike in the Bahamas where US dollars circulated concurrently with sterling, Bermuda did not allow itself to be drawn into the U.S. currency area. The Spanish dollars fell away in the 1850s but returned again in the 1870s following the international silver crisis of 1873. In 1874, the Bermuda merchants agreed unanimously to decline to accept the heavy imports of U.S. currency except at a heavy discount, and it was then exported again. In 1876, legislation was passed to demonetize the silver dollars.
In 1882, the local 'legal tender act' demonetized the gold doubloon, which had in effect been the real standard in Bermuda, and this left sterling as the sole legal tender. Sterling then remained the official currency of Bermuda until 1970.
Due to the collapse of sterling as the world's reserve currency and the rise of the US dollar, Bermuda introduced a dollar based currency that was fixed at an equal value to the US dollar. The new Bermuda dollars operated in conjunction with decimal fractional coinage, hence ending the £sd system in that colony in the year before it was ended in the United Kingdom itself. The decision to finally align with the US dollar was at least in part influenced by the devaluation of sterling in 1967 and Bermuda's increasing tendency to keep its reserves in US dollars. Although Bermuda changed to a U.S. based currency and changed the bulk of its reserves from sterling to U.S. dollars in 1970, it still nevertheless remained a member of the sterling area since at that time, sterling and the US dollar had a fixed exchange rate of £1 = $2.40.
Following the US dollar crisis of 1971 which ended the international Bretton Woods agreement of 1944, the US dollar devalued, but the Bermuda dollar maintained its link to sterling. On 22 June 1972, the United Kingdom unilaterally ended its sterling area based exchange control laws, hence excluding Bermuda from its sterling area membership privileges. Bermuda responded on 30 June 1972 by amending its own exchange control laws accordingly, such as to impose exchange control restrictions in relation to Bermuda only. At the same time, Bermuda realigned its dollar back to one-to-one with the US dollar and formally pegged it to the US dollar at that rate. As far as United Kingdom law was concerned, Bermuda still remained a member of the overseas sterling area until exchange controls were abolished altogether in 1979.
References
Bibliography
Chalmers, R., "A History of Currency in the British Colonies" (1893)
Currencies of the British Empire
Currencies of the Commonwealth of Nations
Economy of Bermuda
Currencies of North America
Modern obsolete currencies
1970 disestablishments
|
```javascript
const { TouchBar } = require('electron')
const { TouchBarButton, TouchBarSpacer } = TouchBar
const mainWindow = require('./main-window')
const allNotes = new TouchBarButton({
label: '',
click: () => {
mainWindow.webContents.send('list:navigate', '/home')
}
})
const starredNotes = new TouchBarButton({
label: '',
click: () => {
mainWindow.webContents.send('list:navigate', '/starred')
}
})
const trash = new TouchBarButton({
label: '',
click: () => {
mainWindow.webContents.send('list:navigate', '/trashed')
}
})
const newNote = new TouchBarButton({
label: '',
click: () => {
mainWindow.webContents.send('list:navigate', '/home')
mainWindow.webContents.send('top:new-note')
}
})
module.exports = new TouchBar([
allNotes,
starredNotes,
trash,
new TouchBarSpacer({ size: 'small' }),
newNote
])
```
|
The Independence Medal was instituted by the President of the Republic of Venda in 1979, for award to all ranks in commemoration of the independence of Venda.
The Venda Defence Force
The 900 member Venda Defence Force (VDF) was established upon that country's independence on 13 September 1979. The Republic of Venda ceased to exist on 27 April 1994 and the Venda Defence Force was amalgamated with six other military forces into the South African National Defence Force (SANDF).
Institution
The Independence Medal was instituted by the President of Venda in 1979.
Award criteria
The medal was awarded to all ranks in commemoration of the independence of Venda.
Order of wear
Since the Independence Medal was authorised for wear by one of the statutory forces which came to be part of the South African National Defence Force on 27 April 1994, it was accorded a position in the official South African order of precedence on that date.
Venda Defence Force until 26 April 1994
Official VDF order of precedence:
Preceded by the General Service Medal.
Succeeded by the Long Service Medal, Gold.
Venda official national order of precedence:
Preceded by the Police Medal for Combating Terrorism.
Succeeded by the Police Establishment Medal.
South African National Defence Force from 27 April 1994
Official SANDF order of precedence:
Preceded by the Independence Medal of the Republic of Bophuthatswana.
Succeeded by the Independence Medal of the Republic of Ciskei.
Official national order of precedence:
Preceded by the Independence Police Medal of the Republic of Bophuthatswana.
Succeeded by the Police Establishment Medal of the KwaZulu Homeland.
The position of the Independence Medal in the official order of precedence was revised twice after 1994, to accommodate the inclusion or institution of new decorations and medals, first in April 1996 when decorations and medals were belatedly instituted for the two former non-statutory forces, the Azanian People's Liberation Army and Umkhonto we Sizwe, and again upon the institution of a new set of honours on 27 April 2003, but it remained unchanged on both occasions.
Description
Obverse
The Independence Medal is a medallion struck in copper, 38 millimetres in diameter, depicting an elephant's head on a shield and inscribed "13 9 1979" above, "INDEPENDENCE" to the left and "DUVHA LA VHUDILANG" to the right of the shield.
Reverse
The reverse displays the Coat of Arms of the Republic of Venda.
Ribbon
The ribbon is 32 millimetres wide, with a 5 millimetres wide brown band, a 2 millimetres wide yellow band, a 5 millimetres wide green band and a 1 millimetre wide yellow band, repeated in reverse order and separated by a 6 millimetres wide blue band.
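As a quick arithmetic check, the band widths listed above account for the full 32-millimetre ribbon (a small Python sketch; the band list is just the description transcribed):

```python
# Band widths in millimetres, reading across the ribbon:
# brown 5, yellow 2, green 5, yellow 1, central blue 6,
# then the first four bands repeated in reverse order.
half = [5, 2, 5, 1]
bands = half + [6] + list(reversed(half))
print(sum(bands))  # 32, the stated ribbon width
```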
Discontinuation
Conferment of the Independence Medal was discontinued when the Republic of Venda ceased to exist on 27 April 1994.
References
Military decorations and medals of Venda
Awards established in 1979
Awards disestablished in 1994
1979 establishments in Africa
1994 disestablishments in Africa
|
Hoye-Crest is a summit along Backbone Mountain just inside of Garrett County, Maryland. It is the highest natural point in Maryland at an elevation of .
The location, named for Captain Charles E. Hoye (1876–1951), founder of the Garrett County Historical Society, offers a view of the North Branch Potomac River valley to the east. The Maryland Historical Society placed a historical marker at the summit during a dedication ceremony in September 1952.
Accessing Hoye-Crest
There is no vehicular access to Hoye-Crest. The best route by foot is a hike along the Maryland High Point Trail, from a point along U.S. Route 219 just south of Silver Lake, West Virginia at . The trail ascends Backbone Mountain along an old logging road on Monongahela National Forest property to the West Virginia-Maryland state line. The distance is about one mile each way. The trail then heads north along the state line to the high point. Hoye-Crest sits on private property (Western Pocahontas Properties), though access is permitted.
See also
List of U.S. states by elevation
Backbone Mountain
Monongahela National Forest
Meshach Browning
References
External links
Mountains of Maryland
Landforms of Garrett County, Maryland
Highest points of U.S. states
North American 1000 m summits
|
```javascript
// This file is needed because it is used by vscode and other tools that
// call `jest` directly. However, unless you are doing anything special
// do not edit this file
const standard = require('@grafana/toolkit/src/config/jest.plugin.config');
// This process will use the same config that `yarn test` is using
process.env.TZ = 'GMT';
module.exports = {
...standard.jestConfig(),
snapshotSerializers: ['enzyme-to-json/serializer'],
verbose: true,
collectCoverage: true,
collectCoverageFrom: [
'**/*.{ts,tsx}',
'!**/node_modules/**',
'!**/*styles.{ts,tsx}',
'!**/*constants.{ts,tsx}',
'!**/*module.{ts,tsx}',
'!**/*types.ts',
],
};
```
|
```javascript
// This will run an instance of Chrome via Web Driver locally.
// It will be headed (vs. headless) and is used for development purpose only.
const { Builder, logging } = require('selenium-webdriver');
const { Options: ChromeOptions, ServiceBuilder: ChromeServiceBuilder } = require('selenium-webdriver/chrome');
const AbortController = require('abort-controller');
const expect = require('expect');
const fetch = require('node-fetch');
const createDevProxies = require('./createDevProxies');
const findHostIP = require('./utils/findHostIP');
const findLocalIP = require('./utils/findLocalIP');
const registerProxies = require('../common/registerProxies');
const setAsyncInterval = require('./utils/setAsyncInterval');
const sleep = require('../../common/utils/sleep');
const ONE_DAY = 86400000;
global.expect = expect;
async function main() {
const abortController = new AbortController();
const hostIP = await findHostIP();
const localIP = await findLocalIP();
const service = await new ChromeServiceBuilder('./chromedriver.exe')
.addArguments('--allowed-ips', localIP)
.setHostname(hostIP)
.setStdio(['ignore', 'ignore', 'ignore'])
.build();
// eslint-disable-next-line no-magic-numbers
const webDriverURL = await service.start(10000);
try {
const preferences = new logging.Preferences();
preferences.setLevel(logging.Type.BROWSER, logging.Level.ALL);
const webDriver = await new Builder()
.forBrowser('chrome')
.setChromeOptions(new ChromeOptions().setLoggingPrefs(preferences))
.usingServer(webDriverURL)
.build();
const sessionId = (await webDriver.getSession()).getId();
const terminate = async () => {
abortController.abort();
// WebDriver.quit() will kill all async functions for executeScript().
// HTTP DELETE will kill the session.
// Combining two will forcefully kill the Web Driver session immediately.
try {
webDriver.quit(); // Don't await or Promise.all on quit().
await fetch(new URL(sessionId, webDriverURL), { method: 'DELETE', timeout: 2000 });
// eslint-disable-next-line no-empty
} catch (err) {}
};
process.once('SIGINT', terminate);
process.once('SIGTERM', terminate);
try {
// eslint-disable-next-line no-magic-numbers
await webDriver.get(process.argv[2] || 'path_to_url');
registerProxies(webDriver, createDevProxies(webDriver));
setAsyncInterval(
async () => {
try {
await webDriver.getWindowHandle();
} catch (err) {
abortController.abort();
}
},
// eslint-disable-next-line no-magic-numbers
2000,
abortController.signal
);
await sleep(ONE_DAY, abortController.signal);
} finally {
await terminate();
}
} finally {
await service.kill();
}
}
main().catch(err => {
err.message === 'aborted' || console.error(err);
throw err;
});
```
|
Shlomi Elkabetz (; born 5 December 1972) is an Israeli actor, writer and director. He is known for playing Simon in HBO's Our Boys.
Early life
Elkabetz's parents were Moroccan Jews who immigrated to Israel. His mother was a hairdresser and his father was a postal employee. He was the youngest of four children, and his older sister was the late actress Ronit Elkabetz.
Career
Elkabetz is best known for his Viviane Amsalem trilogy, comprising the films To Take a Wife, Shiva and Gett: The Trial of Viviane Amsalem. Elkabetz co-wrote and co-directed the films with his older sister, Ronit Elkabetz, who also starred in them as Viviane Amsalem, an unhappy Israeli housewife trapped in a marriage with a pious man she cannot stand. The films were loosely based on the relationship between Elkabetz's parents.
Elkabetz also directed the 2011 film Edut, which again starred his sister.
In 2016 he produced the film In Between.
Elkabetz made his acting debut as the lead in the 2019 series, Our Boys.
Personal life
Since the age of 21, Elkabetz has split his time between Tel Aviv and Paris.
He lives in Tel Aviv with his partner Yuval, and is the father of a daughter, Rene Lilian Elkabetz Dori, co-parenting with the singer Dikla.
Awards and recognition
References
External links
1972 births
Living people
Israeli male television actors
Israeli film directors
Israeli male screenwriters
Israeli people of Moroccan-Jewish descent
20th-century Moroccan Jews
Academic staff of Sapir Academic College
Israeli expatriates in France
Israeli gay writers
Israeli gay actors
Israeli gay artists
Israeli LGBT film directors
Israeli LGBT screenwriters
Gay Jews
Gay screenwriters
|
```python
#
# path_to_url
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
"""Constructs simple task head layers."""
from typing import Any, Mapping, Optional, Union
import tensorflow as tf, tf_keras
from official.modeling import tf_utils
from official.vision.modeling.backbones import vit
class AddTemporalPositionEmbs(tf_keras.layers.Layer):
"""Adds learned temporal positional embeddings to the video features."""
def __init__(self,
posemb_init: Optional[tf_keras.initializers.Initializer] = None,
**kwargs):
"""Constructs Postional Embedding module.
Args:
posemb_init: The positional embedding initializer.
**kwargs: other args.
"""
super().__init__(**kwargs)
self.posemb_init = posemb_init
def build(self, inputs_shape: Union[tf.TensorShape, list[int]]) -> None:
pos_emb_length = inputs_shape[1]
pos_emb_shape = (1, pos_emb_length, inputs_shape[-1])
self.pos_embedding = self.add_weight(
'pos_embedding', pos_emb_shape, initializer=self.posemb_init)
def call(self, inputs: tf.Tensor) -> tf.Tensor:
pos_embedding = self.pos_embedding
# inputs.shape is (batch_size, temporal_len, spatial_len, emb_dim).
pos_embedding = tf.cast(pos_embedding, inputs.dtype)
_, t, _, c = inputs.shape
inputs = tf.reshape(pos_embedding, [-1, t, 1, c]) + inputs
return inputs
class MLP(tf_keras.layers.Layer):
"""Constructs the Multi-Layer Perceptron head."""
def __init__(
self,
num_hidden_layers: int,
num_hidden_channels: int,
num_output_channels: int,
use_sync_bn: bool,
norm_momentum: float = 0.99,
norm_epsilon: float = 1e-5,
activation: Optional[str] = None,
normalize_inputs: bool = False,
kernel_regularizer: Optional[tf_keras.regularizers.Regularizer] = None,
bias_regularizer: Optional[tf_keras.regularizers.Regularizer] = None,
**kwargs):
"""Multi-Layer Perceptron initialization.
Args:
num_hidden_layers: the number of hidden layers in the MLP.
num_hidden_channels: the number of hidden nodes.
num_output_channels: the number of final output nodes.
use_sync_bn: whether to use sync batch norm.
norm_momentum: the batch norm momentum.
norm_epsilon: the batch norm epsilon.
activation: the activation function.
normalize_inputs: whether to normalize inputs.
kernel_regularizer: tf_keras.regularizers.Regularizer object.
bias_regularizer: tf_keras.regularizers.Regularizer object.
**kwargs: keyword arguments to be passed.
"""
super().__init__(**kwargs)
self._num_hidden_layers = num_hidden_layers
self._num_hidden_channels = num_hidden_channels
self._num_output_channels = num_output_channels
self._use_sync_bn = use_sync_bn
self._norm_momentum = norm_momentum
self._norm_epsilon = norm_epsilon
self._activation = activation
self._normalize_inputs = normalize_inputs
self._kernel_regularizer = kernel_regularizer
self._bias_regularizer = bias_regularizer
self._layers = []
# MLP hidden layers
for _ in range(num_hidden_layers):
self._layers.append(
tf_keras.layers.Dense(
num_hidden_channels,
use_bias=False,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer))
if use_sync_bn:
self._layers.append(
tf_keras.layers.experimental.SyncBatchNormalization(
momentum=norm_momentum,
epsilon=norm_epsilon))
else:
self._layers.append(
tf_keras.layers.BatchNormalization(
momentum=norm_momentum,
epsilon=norm_epsilon))
if activation is not None:
self._layers.append(tf_utils.get_activation(activation))
# Projection head
self._layers.append(tf_keras.layers.Dense(num_output_channels))
def call(self, inputs: tf.Tensor, training: bool) -> tf.Tensor:
"""Forward calls with N-D inputs tensor."""
if self._normalize_inputs:
inputs = tf.nn.l2_normalize(inputs, axis=-1)
for layer in self._layers:
if isinstance(layer, tf_keras.layers.Layer):
inputs = layer(inputs, training=training)
else: # activation
inputs = layer(inputs)
return inputs
def get_config(self) -> Mapping[str, Any]:
"""Gets class config parameters."""
config_dict = {
'num_hidden_layers': self._num_hidden_layers,
'num_hidden_channels': self._num_hidden_channels,
'num_output_channels': self._num_output_channels,
'use_sync_bn': self._use_sync_bn,
'norm_momentum': self._norm_momentum,
'norm_epsilon': self._norm_epsilon,
'activation': self._activation,
'normalize_inputs': self._normalize_inputs,
'kernel_regularizer': self._kernel_regularizer,
'bias_regularizer': self._bias_regularizer,
}
return config_dict
@classmethod
def from_config(cls, config: Mapping[str, Any]):
"""Factory constructor from config."""
return cls(**config)
class AttentionPoolerClassificationHead(tf_keras.layers.Layer):
"""Head layer for attention pooling classification network.
Applies pooling attention, dropout, and classifier projection. Expects inputs
with shape [batch_size, t, s, num_channels].
"""
def __init__(
self,
num_heads: int,
hidden_size: int,
num_classes: int,
attention_dropout_rate: float = 0.,
dropout_rate: float = 0.,
kernel_initializer: str = 'HeNormal',
kernel_regularizer: Optional[
tf_keras.regularizers.Regularizer] = tf_keras.regularizers.L2(1.5e-5),
bias_regularizer: Optional[tf_keras.regularizers.Regularizer] = None,
add_temporal_pos_embed: bool = False,
**kwargs):
"""Implementation for video model classifier head.
Args:
num_heads: number of heads in attention layer.
hidden_size: hidden size in attention layer.
num_classes: number of output classes for the final logits.
attention_dropout_rate: the dropout rate applied to the attention map.
dropout_rate: the dropout rate applied to the head projection.
kernel_initializer: kernel initializer for the conv operations.
kernel_regularizer: kernel regularizer for the conv operations.
bias_regularizer: bias regularizer for the conv operations.
add_temporal_pos_embed: whether to add temporal position embedding or not.
**kwargs: keyword arguments to be passed to this layer.
"""
super().__init__(**kwargs)
self._num_heads = num_heads
self._num_classes = num_classes
self._dropout_rate = dropout_rate
self._kernel_initializer = kernel_initializer
self._kernel_regularizer = kernel_regularizer
self._bias_regularizer = bias_regularizer
self._add_pooler_token = vit.TokenLayer(name='pooler_token')
self._add_temporal_pos_embed = add_temporal_pos_embed
if self._add_temporal_pos_embed:
self._pos_embed = AddTemporalPositionEmbs(
posemb_init=tf_keras.initializers.RandomNormal(stddev=0.02),
name='posembed_final_learnt',
)
self._pooler_attention_layer_norm = tf_keras.layers.LayerNormalization(
name='pooler_attention_layer_norm',
axis=-1,
epsilon=1e-6,
dtype=tf.float32)
self._pooler_attention_layer = tf_keras.layers.MultiHeadAttention(
num_heads=num_heads,
key_dim=(hidden_size // num_heads),
value_dim=None,
dropout=attention_dropout_rate,
use_bias=True,
kernel_initializer='glorot_uniform',
name='pooler_attention')
self._dropout = tf_keras.layers.Dropout(dropout_rate)
self._classifier = tf_keras.layers.Dense(
num_classes,
kernel_initializer=kernel_initializer,
kernel_regularizer=self._kernel_regularizer,
bias_regularizer=self._bias_regularizer)
def call(self, inputs: tf.Tensor) -> tf.Tensor:
"""Calls the layer with the given inputs."""
# Input Shape: [batch_size, n, input_channels]
x = inputs
tf.assert_rank(x, 4, message='(b, t, s, c) shaped inputs are required.')
if self._add_temporal_pos_embed:
x = self._pos_embed(x)
_, t, s, c = x.shape
x = tf.reshape(x, [-1, t * s, c])
x = self._pooler_attention_layer_norm(x)
x = self._add_pooler_token(x)
pooler_token = x[:, 0:1, :]
x = x[:, 1:, :]
x = self._pooler_attention_layer(query=pooler_token, value=x,
return_attention_scores=False)
if self._dropout_rate and self._dropout_rate > 0:
x = self._dropout(x)
return self._classifier(tf.squeeze(x, axis=1))
```
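The `AddTemporalPositionEmbs.call` above adds a `(1, t, c)` learned table to `(b, t, s, c)` features by reshaping it to `(1, t, 1, c)` so it broadcasts across the batch and spatial axes. A NumPy-only sketch of that broadcast (the random table is a stand-in for the learned weight):

```python
import numpy as np

# Shapes follow the comment in AddTemporalPositionEmbs.call:
# inputs are (batch, temporal, spatial, channels).
b, t, s, c = 2, 4, 3, 8
inputs = np.zeros((b, t, s, c))
# Stand-in for the learned pos_embedding weight of shape (1, t, c).
pos_embedding = np.arange(t * c, dtype=float).reshape(1, t, c)

# Reshape to (1, t, 1, c) so the same per-time-step vector is added to
# every spatial position in every batch element.
out = inputs + pos_embedding.reshape(1, t, 1, c)

# Every spatial position at a given time step receives the same embedding.
assert np.allclose(out[:, :, 0, :], out[:, :, 1, :])
```

This is why only `inputs_shape[1]` (the temporal length) and `inputs_shape[-1]` (the channel count) are needed in `build`: the spatial axis never appears in the weight shape.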
|
Alexander Sheldon (October 23, 1766 in Suffield, Hartford County, Connecticut – September 10, 1836 in Montgomery County, New York) was an American physician and politician.
Life
He was the son of Phineas Sheldon (1717–1807) and Ruth Harmon Sheldon (1733–1805). He graduated from Yale College in 1787. He was a member of the New York State Assembly from 1800 to 1808, in 1812 and in 1826, and was Speaker in 1804, 1805, 1806, 1808 and 1812. He was the last of the Speakers that wore the Cocked Hat as the badge of office.
He graduated from the New York College of Physicians and Surgeons in 1812.
He was a regent of the University of the State of New York. He was a delegate to the New York State Constitutional Convention of 1821.
He was married to Miriam King (b. 1770 Suffield). Their son, Smith Sheldon (1811–1884), established the publishing-house of Sheldon and Company. Their daughter Delia Sheldon was the mother of the Presbyterian missionary, Sheldon Jackson, who established more than a hundred churches, mostly in the Western United States.
Sources
Bio at Famous Americans
Ancestry
Members of NYSA from Montg. Co.
1766 births
1836 deaths
Speakers of the New York State Assembly
People from Suffield, Connecticut
Regents of the University of the State of New York
Yale College alumni
People of colonial Connecticut
New York College of Physicians and Surgeons alumni
|
```cpp
//===-- RuntimeDyldELF.h - Run-time dynamic linker for MC-JIT ---*- C++ -*-===//
//
// See path_to_url for license information.
//
//===your_sha256_hash------===//
//
// ELF support for MC-JIT runtime dynamic linker.
//
//===your_sha256_hash------===//
#ifndef LLVM_LIB_EXECUTIONENGINE_RUNTIMEDYLD_RUNTIMEDYLDELF_H
#define LLVM_LIB_EXECUTIONENGINE_RUNTIMEDYLD_RUNTIMEDYLDELF_H
#include "RuntimeDyldImpl.h"
#include "llvm/ADT/DenseMap.h"
using namespace llvm;
namespace llvm {
namespace object {
class ELFObjectFileBase;
}
class RuntimeDyldELF : public RuntimeDyldImpl {
void resolveRelocation(const SectionEntry &Section, uint64_t Offset,
uint64_t Value, uint32_t Type, int64_t Addend,
uint64_t SymOffset = 0, SID SectionID = 0);
void resolveX86_64Relocation(const SectionEntry &Section, uint64_t Offset,
uint64_t Value, uint32_t Type, int64_t Addend,
uint64_t SymOffset);
void resolveX86Relocation(const SectionEntry &Section, uint64_t Offset,
uint32_t Value, uint32_t Type, int32_t Addend);
void resolveAArch64Relocation(const SectionEntry &Section, uint64_t Offset,
uint64_t Value, uint32_t Type, int64_t Addend);
bool resolveAArch64ShortBranch(unsigned SectionID, relocation_iterator RelI,
const RelocationValueRef &Value);
void resolveAArch64Branch(unsigned SectionID, const RelocationValueRef &Value,
relocation_iterator RelI, StubMap &Stubs);
void resolveARMRelocation(const SectionEntry &Section, uint64_t Offset,
uint32_t Value, uint32_t Type, int32_t Addend);
void resolvePPC32Relocation(const SectionEntry &Section, uint64_t Offset,
uint64_t Value, uint32_t Type, int64_t Addend);
void resolvePPC64Relocation(const SectionEntry &Section, uint64_t Offset,
uint64_t Value, uint32_t Type, int64_t Addend);
void resolveSystemZRelocation(const SectionEntry &Section, uint64_t Offset,
uint64_t Value, uint32_t Type, int64_t Addend);
void resolveBPFRelocation(const SectionEntry &Section, uint64_t Offset,
uint64_t Value, uint32_t Type, int64_t Addend);
unsigned getMaxStubSize() const override {
if (Arch == Triple::aarch64 || Arch == Triple::aarch64_be)
return 20; // movz; movk; movk; movk; br
if (Arch == Triple::arm || Arch == Triple::thumb)
return 8; // 32-bit instruction and 32-bit address
else if (IsMipsO32ABI || IsMipsN32ABI)
return 16;
else if (IsMipsN64ABI)
return 32;
else if (Arch == Triple::ppc64 || Arch == Triple::ppc64le)
return 44;
else if (Arch == Triple::x86_64)
return 6; // 2-byte jmp instruction + 32-bit relative address
else if (Arch == Triple::systemz)
return 16;
else
return 0;
}
Align getStubAlignment() override {
if (Arch == Triple::systemz)
return Align(8);
else
return Align(1);
}
void setMipsABI(const ObjectFile &Obj) override;
Error findPPC64TOCSection(const object::ELFObjectFileBase &Obj,
ObjSectionToIDMap &LocalSections,
RelocationValueRef &Rel);
Error findOPDEntrySection(const object::ELFObjectFileBase &Obj,
ObjSectionToIDMap &LocalSections,
RelocationValueRef &Rel);
protected:
size_t getGOTEntrySize() override;
private:
SectionEntry &getSection(unsigned SectionID) { return Sections[SectionID]; }
// Allocate no GOT entries for use in the given section.
uint64_t allocateGOTEntries(unsigned no);
// Find GOT entry corresponding to relocation or create new one.
uint64_t findOrAllocGOTEntry(const RelocationValueRef &Value,
unsigned GOTRelType);
// Resolve the relative address of GOTOffset in Section ID and place
// it at the given Offset
void resolveGOTOffsetRelocation(unsigned SectionID, uint64_t Offset,
uint64_t GOTOffset, uint32_t Type);
// For a GOT entry referenced from SectionID, compute a relocation entry
// that will place the final resolved value in the GOT slot
RelocationEntry computeGOTOffsetRE(uint64_t GOTOffset, uint64_t SymbolOffset,
unsigned Type);
// Compute the address in memory where we can find the placeholder
void *computePlaceholderAddress(unsigned SectionID, uint64_t Offset) const;
// Split out common case for creating the RelocationEntry for when the relocation requires
// no particular advanced processing.
void processSimpleRelocation(unsigned SectionID, uint64_t Offset, unsigned RelType, RelocationValueRef Value);
// Return matching *LO16 relocation (Mips specific)
uint32_t getMatchingLoRelocation(uint32_t RelType,
bool IsLocal = false) const;
// The tentative ID for the GOT section
unsigned GOTSectionID;
// Records the current number of allocated slots in the GOT
// (This would be equivalent to GOTEntries.size() were it not for relocations
// that consume more than one slot)
unsigned CurrentGOTIndex;
protected:
// A map from section to a GOT section that has entries for section's GOT
// relocations. (Mips64 specific)
DenseMap<SID, SID> SectionToGOTMap;
private:
// A map to avoid duplicate got entries (Mips64 specific)
StringMap<uint64_t> GOTSymbolOffsets;
// *HI16 relocations will be added for resolving when we find matching
// *LO16 part. (Mips specific)
SmallVector<std::pair<RelocationValueRef, RelocationEntry>, 8> PendingRelocs;
// When a module is loaded we save the SectionID of the EH frame section
// in a table until we receive a request to register all unregistered
// EH frame sections with the memory manager.
SmallVector<SID, 2> UnregisteredEHFrameSections;
// Map between GOT relocation value and corresponding GOT offset
std::map<RelocationValueRef, uint64_t> GOTOffsetMap;
/// The ID of the current IFunc stub section
unsigned IFuncStubSectionID = 0;
/// The current offset into the IFunc stub section
uint64_t IFuncStubOffset = 0;
/// A IFunc stub and its original symbol
struct IFuncStub {
/// The offset of this stub in the IFunc stub section
uint64_t StubOffset;
/// The symbol table entry of the original symbol
SymbolTableEntry OriginalSymbol;
};
/// The IFunc stubs
SmallVector<IFuncStub, 2> IFuncStubs;
/// Create the code for the IFunc resolver at the given address. This code
/// works together with the stubs created in createIFuncStub() to call the
/// resolver function and then jump to the real function address.
/// It must not be larger than 64B.
void createIFuncResolver(uint8_t *Addr) const;
/// Create the code for an IFunc stub for the IFunc that is defined in
/// section IFuncSectionID at offset IFuncOffset. The IFunc resolver created
/// by createIFuncResolver() is defined in the section IFuncStubSectionID at
/// offset IFuncResolverOffset. The code should be written into the section
/// with the id IFuncStubSectionID at the offset IFuncStubOffset.
void createIFuncStub(unsigned IFuncStubSectionID,
uint64_t IFuncResolverOffset, uint64_t IFuncStubOffset,
unsigned IFuncSectionID, uint64_t IFuncOffset);
/// Return the maximum size of a stub created by createIFuncStub()
unsigned getMaxIFuncStubSize() const;
void processNewSymbol(const SymbolRef &ObjSymbol,
SymbolTableEntry &Entry) override;
bool relocationNeedsGot(const RelocationRef &R) const override;
bool relocationNeedsStub(const RelocationRef &R) const override;
// Process a GOTTPOFF TLS relocation for x86-64
// NOLINTNEXTLINE(readability-identifier-naming)
void processX86_64GOTTPOFFRelocation(unsigned SectionID, uint64_t Offset,
RelocationValueRef Value,
int64_t Addend);
// Process a TLSLD/TLSGD relocation for x86-64
// NOLINTNEXTLINE(readability-identifier-naming)
void processX86_64TLSRelocation(unsigned SectionID, uint64_t Offset,
uint64_t RelType, RelocationValueRef Value,
int64_t Addend,
const RelocationRef &GetAddrRelocation);
public:
RuntimeDyldELF(RuntimeDyld::MemoryManager &MemMgr,
JITSymbolResolver &Resolver);
~RuntimeDyldELF() override;
static std::unique_ptr<RuntimeDyldELF>
create(Triple::ArchType Arch, RuntimeDyld::MemoryManager &MemMgr,
JITSymbolResolver &Resolver);
std::unique_ptr<RuntimeDyld::LoadedObjectInfo>
loadObject(const object::ObjectFile &O) override;
void resolveRelocation(const RelocationEntry &RE, uint64_t Value) override;
Expected<relocation_iterator>
processRelocationRef(unsigned SectionID, relocation_iterator RelI,
const ObjectFile &Obj,
ObjSectionToIDMap &ObjSectionToID,
StubMap &Stubs) override;
bool isCompatibleFile(const object::ObjectFile &Obj) const override;
void registerEHFrames() override;
Error finalizeLoad(const ObjectFile &Obj,
ObjSectionToIDMap &SectionMap) override;
};
} // end namespace llvm
#endif
```
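The GOT bookkeeping declared above (`allocateGOTEntries`, `findOrAllocGOTEntry`, `GOTOffsetMap`, `CurrentGOTIndex`) amounts to a memoized bump allocator: each distinct relocation value gets one fixed-size slot on first use, and later lookups reuse it. A simplified Python sketch of that idea; the names and the 8-byte entry size are illustrative, not taken from LLVM:

```python
# Memoized bump allocator sketch, mirroring the roles of GOTOffsetMap,
# CurrentGOTIndex and getGOTEntrySize() in the header above.
GOT_ENTRY_SIZE = 8  # illustrative; the real size is target-dependent

class GOTAllocator:
    def __init__(self):
        self.offset_map = {}      # relocation value -> byte offset in the GOT
        self.current_index = 0    # number of slots handed out so far

    def allocate_entries(self, count):
        """Reserve `count` consecutive slots; return the first byte offset."""
        offset = self.current_index * GOT_ENTRY_SIZE
        self.current_index += count
        return offset

    def find_or_alloc(self, value):
        """Return the existing offset for `value`, or allocate a new slot."""
        if value not in self.offset_map:
            self.offset_map[value] = self.allocate_entries(1)
        return self.offset_map[value]


got = GOTAllocator()
got.find_or_alloc("sym_a")   # first slot, offset 0
got.find_or_alloc("sym_b")   # next slot, offset 8
got.find_or_alloc("sym_a")   # reused slot, no new allocation
```

Separating `allocate_entries` from `find_or_alloc` reflects the comment in the header that some relocations consume more than one slot, which is why `CurrentGOTIndex` is not simply the size of the offset map.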
|
```sql
SELECT *
FROM (
SELECT
([toString(number % 2)] :: Array(LowCardinality(String))) AS item_id,
count()
FROM numbers(3)
GROUP BY item_id WITH TOTALS
) AS l FULL JOIN (
SELECT
([toString((number % 2) * 2)] :: Array(String)) AS item_id
FROM numbers(3)
) AS r
ON l.item_id = r.item_id
ORDER BY 1,2,3;
```
|
```php
<?php
declare(strict_types = 1);
namespace Rebing\GraphQL\Tests\Support\Objects;
use GraphQL\Type\Definition\Type;
use Rebing\GraphQL\Support\Field;
class ExampleValidationField extends Field
{
protected $attributes = [
'name' => 'example_validation',
];
public function type(): Type
{
return Type::listOf(Type::string());
}
public function args(): array
{
return [
'index' => [
'name' => 'index',
'type' => Type::int(),
'rules' => ['required'],
],
];
}
public function resolve($root, $args): array
{
return ['test'];
}
}
```
|
Bhatgaon Vidhan Sabha Constituency is one of the 90 Vidhan Sabha (Legislative Assembly) constituencies of Chhattisgarh state in central India. The constituency covers Bhaiyathan, Bishrampur, Bhatgaon, Jarhi and Surajpur.
Members of Legislative Assembly
Election results
2023
2018
See also
Telgaon
References
Assembly constituencies of Chhattisgarh
Surajpur district
|
```xml
<vector xmlns:android="path_to_url"
android:width="128dp"
android:height="128dp"
android:viewportWidth="128"
android:viewportHeight="128">
<path
android:pathData="m70,108 l16,-38 16,38z"
android:fillColor="#15324e"/>
<path
android:pathData="m84,48 l-26,60h12s12,-12 16,-38c4,26 16,38 16,38h12c-0.34,0.32 -20.47,-33.05 -26,-60z"
android:fillColor="#286097"/>
<path
android:pathData="m14,108s24,-32 28,-60c16,4 22,4 42,0 -4,28 -26,60 -26,60z"
android:fillColor="#3279bb"/>
</vector>
```
|
```csharp
using Xamarin.Forms;
using Xamarin.Forms.ControlGallery.WindowsUniversal;
using Xamarin.Forms.Controls;
using Xamarin.Forms.Platform.UWP;
[assembly: Dependency(typeof(RegistrarValidationService))]
namespace Xamarin.Forms.ControlGallery.WindowsUniversal
{
public class RegistrarValidationService : IRegistrarValidationService
{
public bool Validate(VisualElement element, out string message)
{
message = "Success";
if (element == null || element is OpenGLView)
return true;
var renderer = Platform.UWP.Platform.CreateRenderer(element);
if (renderer == null
|| renderer.GetType().Name == "DefaultRenderer"
)
{
message = $"Failed to load proper UWP renderer for {element.GetType().Name}";
return false;
}
return true;
}
}
}
```
|
Francis Oag Hulme-Moir (30 January 1910, Balmain, Sydney, Australia – 10 March 1979, Sydney) was an Australian Anglican bishop and military chaplain, who served as the 7th Anglican Bishop of Nelson from 1954 to 1965, as Bishop to the Armed Forces from 1965 to 1975, as Dean of Sydney from 1965 to 1967 and coadjutor bishop of Sydney from 1965 to 1975.
Hulme-Moir was born on 30 January 1910, educated at Sydney Technical High School and ordained in 1937. He was a Chaplain to the Australian Armed Forces from then until 1947, when he became Archdeacon of Ryde. On 11 June 1954 he was ordained to the episcopate. On 23 February 1965 he was appointed 6th Dean of Sydney, a post he relinquished in late 1966, though he remained coadjutor bishop. Hulme-Moir was particularly noted for his booming bass voice and engaging personality.
Hulme-Moir was appointed an Officer of the Order of Australia in 1976.
He died on 10 March 1979 and his funeral was accorded full military honours.
References
1910 births
1979 deaths
20th-century Anglican bishops in New Zealand
Anglican bishops of Nelson
Anglican bishops to the Australian Defence Force
Assistant bishops in the Anglican Diocese of Sydney
Deans of Sydney
New Zealand military chaplains
Officers of the Order of Australia
People educated at Sydney Technical High School
|
The Dessa Dawn Formation is a geologic formation in Alberta. It preserves fossils dating back to the Carboniferous period.
See also
List of fossiliferous stratigraphic units in Alberta
References
Carboniferous Alberta
Carboniferous southern paleotropical deposits
Geologic formations of Alberta
|
```java
/*
 *
 * path_to_url
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 */
package org.flowable.engine.test.tenant;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatThrownBy;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.commons.lang3.StringUtils;
import org.flowable.common.engine.api.FlowableIllegalArgumentException;
import org.flowable.common.engine.api.delegate.event.AbstractFlowableEventListener;
import org.flowable.common.engine.api.delegate.event.FlowableChangeTenantIdEvent;
import org.flowable.common.engine.api.delegate.event.FlowableEngineEventType;
import org.flowable.common.engine.api.delegate.event.FlowableEvent;
import org.flowable.common.engine.api.delegate.event.FlowableEventType;
import org.flowable.common.engine.api.scope.ScopeTypes;
import org.flowable.common.engine.api.tenant.ChangeTenantIdResult;
import org.flowable.common.engine.impl.history.HistoryLevel;
import org.flowable.engine.BpmnChangeTenantIdEntityTypes;
import org.flowable.engine.impl.test.HistoryTestHelper;
import org.flowable.engine.impl.test.PluggableFlowableTestCase;
import org.flowable.engine.runtime.ProcessInstance;
import org.flowable.engine.runtime.ProcessInstanceBuilder;
import org.flowable.job.api.Job;
import org.flowable.task.api.Task;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
class ChangeTenantIdProcessTest extends PluggableFlowableTestCase {
private static final String TEST_TENANT_A = "test-tenant-a";
private static final String TEST_TENANT_B = "test-tenant-b";
private static final String TEST_TENANT_C = "test-tenant-c";
protected TestEventListener eventListener = new TestEventListener();
@BeforeEach
void setUp() {
deploymentIdsForAutoCleanup.add(repositoryService.createDeployment()
.addClasspathResource("org/flowable/engine/test/tenant/testProcess.bpmn20.xml").tenantId(TEST_TENANT_A)
.deploy().getId());
deploymentIdsForAutoCleanup.add(repositoryService.createDeployment()
.addClasspathResource("org/flowable/engine/test/tenant/testProcess.bpmn20.xml").tenantId(TEST_TENANT_B)
.deploy().getId());
deploymentIdsForAutoCleanup.add(repositoryService.createDeployment()
.addClasspathResource("org/flowable/engine/test/tenant/testProcess.bpmn20.xml").tenantId(TEST_TENANT_C)
.deploy().getId());
deploymentIdsForAutoCleanup.add(repositoryService.createDeployment()
.addClasspathResource("org/flowable/engine/test/tenant/testProcessDup.bpmn20.xml").deploy().getId());
deploymentIdsForAutoCleanup.add(repositoryService.createDeployment()
.addClasspathResource("org/flowable/engine/test/tenant/testProcessForJobsAndEventSubscriptions.bpmn20.xml").tenantId(TEST_TENANT_A)
.deploy().getId());
processEngineConfiguration.getEventDispatcher().addEventListener(eventListener);
}
@AfterEach
void cleanUp() {
processEngineConfiguration.getEventDispatcher().removeEventListener(eventListener);
}
@Test
void testChangeTenantIdProcessInstance() {
//testDeployments() {
assertThat(repositoryService.createDeploymentQuery().count()).isEqualTo(5);
assertThat(repositoryService.createDeploymentQuery().deploymentWithoutTenantId().count()).isEqualTo(1);
assertThat(repositoryService.createDeploymentQuery().deploymentTenantId(TEST_TENANT_A).count()).isEqualTo(2);
assertThat(repositoryService.createDeploymentQuery().deploymentTenantId(TEST_TENANT_B).count()).isEqualTo(1);
assertThat(repositoryService.createDeploymentQuery().deploymentTenantId(TEST_TENANT_C).count()).isEqualTo(1);
//Starting process instances that will be completed
String processInstanceIdACompleted = startProcess(TEST_TENANT_A, "testProcess", "processInstanceIdACompleted", 2);
String processInstanceIdBCompleted = startProcess(TEST_TENANT_B, "testProcess", "processInstanceIdBCompleted", 2);
String processInstanceIdCCompleted = startProcess(TEST_TENANT_C, "testProcess", "processInstanceIdCCompleted", 2);
//Starting process instances that will remain active and moving jobs to different states
String processInstanceIdAActive = startProcess(TEST_TENANT_A, "testProcess", "processInstanceIdAActive", 1);
String processInstanceIdBActive = startProcess(TEST_TENANT_B, "testProcess", "processInstanceIdBActive", 1);
String processInstanceIdCActive = startProcess(TEST_TENANT_C, "testProcess", "processInstanceIdCActive", 1);
String processInstanceIdAAForJobs = startProcess(TEST_TENANT_A, "testProcessForJobsAndEventSubscriptions", "processInstanceIdAAForJobs", 0);
Job jobToBeSentToDeadLetter = managementService.createTimerJobQuery().processInstanceId(processInstanceIdAAForJobs)
.elementName("Timer to create a deadletter job").singleResult();
Job jobInTheDeadLetterQueue = managementService.moveJobToDeadLetterJob(jobToBeSentToDeadLetter.getId());
assertThat(jobInTheDeadLetterQueue).as("We have a job in the deadletter queue.").isNotNull();
String processInstanceIdAForSuspendedJobs = startProcess(TEST_TENANT_A, "testProcessForJobsAndEventSubscriptions", "processInstanceIdAAForJobsActive",
0);
runtimeService.suspendProcessInstanceById(processInstanceIdAForSuspendedJobs);
Job aSuspendedJob = managementService.createSuspendedJobQuery().processInstanceId(processInstanceIdAForSuspendedJobs)
.elementName("Timer to create a suspended job").singleResult();
assertThat(aSuspendedJob).as("We have a suspended job.").isNotNull();
Set<String> processInstancesTenantA = new HashSet<>(
Arrays.asList(processInstanceIdACompleted, processInstanceIdAActive, processInstanceIdAAForJobs, processInstanceIdAForSuspendedJobs));
Set<String> processInstancesTenantB = new HashSet<>(Arrays.asList(processInstanceIdBCompleted, processInstanceIdBActive));
Set<String> processInstancesTenantC = new HashSet<>(Arrays.asList(processInstanceIdCCompleted, processInstanceIdCActive));
// Prior to changing the Tenant Id, all elements are associated to the original tenant
checkTenantIdForAllInstances(processInstancesTenantA, TEST_TENANT_A, "prior to changing to " + TEST_TENANT_B);
checkTenantIdForAllInstances(processInstancesTenantB, TEST_TENANT_B, "prior to changing to " + TEST_TENANT_B);
checkTenantIdForAllInstances(processInstancesTenantC, TEST_TENANT_C, "prior to changing to " + TEST_TENANT_B);
// First, we simulate the change
ChangeTenantIdResult simulationResult = managementService
.createChangeTenantIdBuilder(TEST_TENANT_A, TEST_TENANT_B).simulate();
// All the instances should stay in the original tenant after the simulation
checkTenantIdForAllInstances(processInstancesTenantA, TEST_TENANT_A, "after simulating the change to " + TEST_TENANT_B);
checkTenantIdForAllInstances(processInstancesTenantB, TEST_TENANT_B, "after simulating the change to " + TEST_TENANT_B);
checkTenantIdForAllInstances(processInstancesTenantC, TEST_TENANT_C, "after simulating the change to " + TEST_TENANT_B);
// We now proceed with the changeTenantId operation for all the instances
ChangeTenantIdResult result = managementService
.createChangeTenantIdBuilder(TEST_TENANT_A, TEST_TENANT_B).complete();
// All the instances should now be assigned to the tenant B
checkTenantIdForAllInstances(processInstancesTenantA, TEST_TENANT_B, "after the change to " + TEST_TENANT_B);
checkTenantIdForAllInstances(processInstancesTenantB, TEST_TENANT_B, "after the change to " + TEST_TENANT_B);
// The instances for tenant C remain untouched
checkTenantIdForAllInstances(processInstancesTenantC, TEST_TENANT_C, "after the change to " + TEST_TENANT_B);
//Expected results map
Map<String, Long> resultMap = new HashMap<>();
resultMap.put(BpmnChangeTenantIdEntityTypes.ACTIVITY_INSTANCES, 35L);
resultMap.put(BpmnChangeTenantIdEntityTypes.EXECUTIONS, 16L);
resultMap.put(BpmnChangeTenantIdEntityTypes.EVENT_SUBSCRIPTIONS, 2L);
resultMap.put(BpmnChangeTenantIdEntityTypes.TASKS, 1L);
resultMap.put(BpmnChangeTenantIdEntityTypes.EXTERNAL_WORKER_JOBS, 1L);
resultMap.put(BpmnChangeTenantIdEntityTypes.HISTORIC_ACTIVITY_INSTANCES, 44L);
resultMap.put(BpmnChangeTenantIdEntityTypes.HISTORIC_PROCESS_INSTANCES, 4L);
resultMap.put(BpmnChangeTenantIdEntityTypes.HISTORIC_TASK_LOG_ENTRIES, 7L);
resultMap.put(BpmnChangeTenantIdEntityTypes.HISTORIC_TASK_INSTANCES, 4L);
resultMap.put(BpmnChangeTenantIdEntityTypes.JOBS, 1L);
resultMap.put(BpmnChangeTenantIdEntityTypes.SUSPENDED_JOBS, 5L);
resultMap.put(BpmnChangeTenantIdEntityTypes.TIMER_JOBS, 2L);
resultMap.put(BpmnChangeTenantIdEntityTypes.DEADLETTER_JOBS, 1L);
//Check that all the entities are returned
assertThat(simulationResult.getChangedEntityTypes())
.containsExactlyInAnyOrderElementsOf(resultMap.keySet());
assertThat(result.getChangedEntityTypes())
.containsExactlyInAnyOrderElementsOf(resultMap.keySet());
resultMap.forEach((key, value) -> {
//Check simulation result content
assertThat(simulationResult.getChangedInstances(key))
.as(key)
.isEqualTo(value);
//Check result content
assertThat(result.getChangedInstances(key))
.as(key)
.isEqualTo(value);
});
//Check that we can complete the active instances that we have changed
completeTask(processInstanceIdAActive);
assertProcessEnded(processInstanceIdAActive);
completeTask(processInstanceIdBActive);
assertProcessEnded(processInstanceIdBActive);
completeTask(processInstanceIdCActive);
assertProcessEnded(processInstanceIdCActive);
assertThat(eventListener.events).hasSize(1);
FlowableChangeTenantIdEvent event = eventListener.events.get(0);
assertThat(event.getEngineScopeType()).isEqualTo(ScopeTypes.BPMN);
assertThat(event.getSourceTenantId()).isEqualTo(TEST_TENANT_A);
assertThat(event.getTargetTenantId()).isEqualTo(TEST_TENANT_B);
assertThat(event.getDefinitionTenantId()).isNull();
}
@Test
void testChangeTenantIdProcessInstanceFromEmptyTenant() {
//testDeployments() {
assertThat(repositoryService.createDeploymentQuery().count()).isEqualTo(5);
assertThat(repositoryService.createDeploymentQuery().deploymentWithoutTenantId().count()).isEqualTo(1);
assertThat(repositoryService.createDeploymentQuery().deploymentTenantId(TEST_TENANT_A).count()).isEqualTo(2);
assertThat(repositoryService.createDeploymentQuery().deploymentTenantId(TEST_TENANT_B).count()).isEqualTo(1);
assertThat(repositoryService.createDeploymentQuery().deploymentTenantId(TEST_TENANT_C).count()).isEqualTo(1);
//Starting process instances that will be completed
String processInstanceIdNoTenantCompleted = startProcess(TEST_TENANT_A, "testProcess", "processInstanceIdNoTenantCompleted", 2, "");
String processInstanceIdBCompleted = startProcess(TEST_TENANT_B, "testProcess", "processInstanceIdBCompleted", 2);
String processInstanceIdCCompleted = startProcess(TEST_TENANT_C, "testProcess", "processInstanceIdCCompleted", 2);
//Starting process instances that will remain active and moving jobs to different states
String processInstanceIdNoTenantActive = startProcess(TEST_TENANT_A, "testProcess", "processInstanceIdNoTenantActive", 1, "");
String processInstanceIdBActive = startProcess(TEST_TENANT_B, "testProcess", "processInstanceIdBActive", 1);
String processInstanceIdCActive = startProcess(TEST_TENANT_C, "testProcess", "processInstanceIdCActive", 1);
String processInstanceIdNoTenantForJobs = startProcess(TEST_TENANT_A, "testProcessForJobsAndEventSubscriptions", "processInstanceIdNoTenantForJobs", 0,
"");
Job jobToBeSentToDeadLetter = managementService.createTimerJobQuery().processInstanceId(processInstanceIdNoTenantForJobs)
.elementName("Timer to create a deadletter job").singleResult();
Job jobInTheDeadLetterQueue = managementService.moveJobToDeadLetterJob(jobToBeSentToDeadLetter.getId());
assertThat(jobInTheDeadLetterQueue).as("We have a job in the deadletter queue.").isNotNull();
String processInstanceIdNoTenantForSuspendedJobs = startProcess(TEST_TENANT_A, "testProcessForJobsAndEventSubscriptions",
"processInstanceIdAAForJobsActive", 0, "");
runtimeService.suspendProcessInstanceById(processInstanceIdNoTenantForSuspendedJobs);
Job aSuspendedJob = managementService.createSuspendedJobQuery().processInstanceId(processInstanceIdNoTenantForSuspendedJobs)
.elementName("Timer to create a suspended job").singleResult();
assertThat(aSuspendedJob).as("We have a suspended job.").isNotNull();
Set<String> processInstancesNoTenant = new HashSet<>(
Arrays.asList(processInstanceIdNoTenantCompleted, processInstanceIdNoTenantActive, processInstanceIdNoTenantForJobs,
processInstanceIdNoTenantForSuspendedJobs));
Set<String> processInstancesTenantB = new HashSet<>(Arrays.asList(processInstanceIdBCompleted, processInstanceIdBActive));
Set<String> processInstancesTenantC = new HashSet<>(Arrays.asList(processInstanceIdCCompleted, processInstanceIdCActive));
// Prior to changing the Tenant Id, all elements are associated to the original tenant
checkTenantIdForAllInstances(processInstancesNoTenant, "", "prior to changing to " + TEST_TENANT_B);
checkTenantIdForAllInstances(processInstancesTenantB, TEST_TENANT_B, "prior to changing to " + TEST_TENANT_B);
checkTenantIdForAllInstances(processInstancesTenantC, TEST_TENANT_C, "prior to changing to " + TEST_TENANT_B);
// First, we simulate the change
ChangeTenantIdResult simulationResult = managementService
.createChangeTenantIdBuilder("", TEST_TENANT_B).simulate();
// All the instances should stay in the original tenant after the simulation
checkTenantIdForAllInstances(processInstancesNoTenant, "", "after simulating the change to " + TEST_TENANT_B);
checkTenantIdForAllInstances(processInstancesTenantB, TEST_TENANT_B, "after simulating the change to " + TEST_TENANT_B);
checkTenantIdForAllInstances(processInstancesTenantC, TEST_TENANT_C, "after simulating the change to " + TEST_TENANT_B);
// We now proceed with the changeTenantId operation for all the instances
ChangeTenantIdResult result = managementService
.createChangeTenantIdBuilder("", TEST_TENANT_B).complete();
// All the instances should now be assigned to the tenant B
checkTenantIdForAllInstances(processInstancesNoTenant, TEST_TENANT_B, "after the change to " + TEST_TENANT_B);
checkTenantIdForAllInstances(processInstancesTenantB, TEST_TENANT_B, "after the change to " + TEST_TENANT_B);
// The instances for tenant C remain untouched
checkTenantIdForAllInstances(processInstancesTenantC, TEST_TENANT_C, "after the change to " + TEST_TENANT_B);
//Expected results map
Map<String, Long> resultMap = new HashMap<>();
resultMap.put(BpmnChangeTenantIdEntityTypes.ACTIVITY_INSTANCES, 35L);
resultMap.put(BpmnChangeTenantIdEntityTypes.EXECUTIONS, 16L);
resultMap.put(BpmnChangeTenantIdEntityTypes.EVENT_SUBSCRIPTIONS, 2L);
resultMap.put(BpmnChangeTenantIdEntityTypes.TASKS, 1L);
resultMap.put(BpmnChangeTenantIdEntityTypes.EXTERNAL_WORKER_JOBS, 1L);
resultMap.put(BpmnChangeTenantIdEntityTypes.HISTORIC_ACTIVITY_INSTANCES, 44L);
resultMap.put(BpmnChangeTenantIdEntityTypes.HISTORIC_PROCESS_INSTANCES, 4L);
resultMap.put(BpmnChangeTenantIdEntityTypes.HISTORIC_TASK_LOG_ENTRIES, 7L);
resultMap.put(BpmnChangeTenantIdEntityTypes.HISTORIC_TASK_INSTANCES, 4L);
resultMap.put(BpmnChangeTenantIdEntityTypes.JOBS, 1L);
resultMap.put(BpmnChangeTenantIdEntityTypes.SUSPENDED_JOBS, 5L);
resultMap.put(BpmnChangeTenantIdEntityTypes.TIMER_JOBS, 2L);
resultMap.put(BpmnChangeTenantIdEntityTypes.DEADLETTER_JOBS, 1L);
//Check that all the entities are returned
assertThat(simulationResult.getChangedEntityTypes())
.containsExactlyInAnyOrderElementsOf(resultMap.keySet());
assertThat(result.getChangedEntityTypes())
.containsExactlyInAnyOrderElementsOf(resultMap.keySet());
resultMap.forEach((key, value) -> {
//Check simulation result content
assertThat(simulationResult.getChangedInstances(key))
.as(key)
.isEqualTo(value);
//Check result content
assertThat(result.getChangedInstances(key))
.as(key)
.isEqualTo(value);
});
//Check that we can complete the active instances that we have changed
completeTask(processInstanceIdNoTenantActive);
assertProcessEnded(processInstanceIdNoTenantActive);
completeTask(processInstanceIdBActive);
assertProcessEnded(processInstanceIdBActive);
completeTask(processInstanceIdCActive);
assertProcessEnded(processInstanceIdCActive);
assertThat(eventListener.events).hasSize(1);
FlowableChangeTenantIdEvent event = eventListener.events.get(0);
assertThat(event.getEngineScopeType()).isEqualTo(ScopeTypes.BPMN);
assertThat(event.getSourceTenantId()).isEqualTo("");
assertThat(event.getTargetTenantId()).isEqualTo(TEST_TENANT_B);
assertThat(event.getDefinitionTenantId()).isNull();
}
private void checkTenantIdForAllInstances(Set<String> processInstanceIds, String expectedTenantId, String moment) {
assertThat(runtimeService.createProcessInstanceQuery().processInstanceIds(processInstanceIds).list())
.isNotEmpty()
.allSatisfy(pi -> {
assertThat(StringUtils.defaultIfEmpty(pi.getTenantId(), ""))
.as("Active process instance '%s' %s must belong to %s but belongs to %s.",
pi.getName(), moment, expectedTenantId, pi.getTenantId())
.isEqualTo(expectedTenantId);
assertThat(runtimeService.createActivityInstanceQuery().processInstanceId(pi.getId()).list())
.isNotEmpty()
.allSatisfy(ai -> assertThat(StringUtils.defaultIfEmpty(ai.getTenantId(), ""))
.as("Active activity instance %s from %s %s must belong to %s but belongs to %s.",
ai.getActivityName(), pi.getName(), moment, expectedTenantId, ai.getTenantId())
.isEqualTo(expectedTenantId));
assertThat(runtimeService.createExecutionQuery().processInstanceId(pi.getId()).list())
.isNotEmpty()
.allSatisfy(ex -> assertThat(StringUtils.defaultIfEmpty(ex.getTenantId(), ""))
.as("Execution %s from %s %s must belong to %s but belongs to %s.",
ex.getName(), pi.getName(), moment, expectedTenantId, ex.getTenantId())
.isEqualTo(expectedTenantId));
assertThat(runtimeService.createEventSubscriptionQuery().processInstanceId(pi.getId()).list())
.allSatisfy(eve -> assertThat(StringUtils.defaultIfEmpty(eve.getTenantId(), ""))
.as("Event Subscription %s from %s %s must belong to %s but belongs to %s.",
eve.getId(), pi.getName(), moment, expectedTenantId, eve.getTenantId())
.isEqualTo(expectedTenantId));
assertThat(taskService.createTaskQuery().processInstanceId(pi.getId()).list())
.allSatisfy(task -> assertThat(StringUtils.defaultIfEmpty(task.getTenantId(), ""))
.as("Task %s from %s %s must belong to %s but belongs to %s.",
task.getName(), pi.getName(), moment, expectedTenantId, task.getTenantId())
.isEqualTo(expectedTenantId));
assertThat(managementService.createJobQuery().processInstanceId(pi.getId()).list())
.allSatisfy(job -> assertThat(StringUtils.defaultIfEmpty(job.getTenantId(), ""))
.as("Job %s from %s %s must belong to %s but belongs to %s.",
job.getId(), pi.getName(), moment, expectedTenantId, job.getTenantId())
.isEqualTo(expectedTenantId));
assertThat(managementService.createDeadLetterJobQuery().processInstanceId(pi.getId()).list())
.allSatisfy(job -> assertThat(StringUtils.defaultIfEmpty(job.getTenantId(), ""))
.as("Dead Letter Job %s from %s %s must belong to %s but belongs to %s.",
job.getId(), pi.getName(), moment, expectedTenantId, job.getTenantId())
.isEqualTo(expectedTenantId));
assertThat(managementService.createTimerJobQuery().processInstanceId(pi.getId()).list())
.allSatisfy(job -> assertThat(StringUtils.defaultIfEmpty(job.getTenantId(), ""))
.as("Timer Job %s from %s %s must belong to %s but belongs to %s.",
job.getId(), pi.getName(), moment, expectedTenantId, job.getTenantId())
.isEqualTo(expectedTenantId));
assertThat(managementService.createExternalWorkerJobQuery().processInstanceId(pi.getId()).list())
.allSatisfy(job -> assertThat(StringUtils.defaultIfEmpty(job.getTenantId(), ""))
.as("External Worker Job %s from %s %s must belong to %s but belongs to %s.",
job.getId(), pi.getName(), moment, expectedTenantId, job.getTenantId())
.isEqualTo(expectedTenantId));
});
if (HistoryTestHelper.isHistoryLevelAtLeast(HistoryLevel.ACTIVITY, processEngineConfiguration)) {
assertThat(historyService.createHistoricProcessInstanceQuery().processInstanceIds(processInstanceIds).list())
.hasSize(processInstanceIds.size())
.allSatisfy(hpi -> {
assertThat(StringUtils.defaultIfEmpty(hpi.getTenantId(), ""))
.as("Historic process instance '%s' %s must belong to %s but belongs to %s.",
hpi.getName(), moment, expectedTenantId, hpi.getTenantId())
.isEqualTo(expectedTenantId);
assertThat(historyService.createHistoricActivityInstanceQuery().processInstanceId(hpi.getId()).list())
.isNotEmpty()
.allSatisfy(hai -> assertThat(StringUtils.defaultIfEmpty(hai.getTenantId(), ""))
.as("Historic activity instance %s from %s %s must belong to %s but belongs to %s.",
hai.getActivityName(), hpi.getName(), moment, expectedTenantId, hai.getTenantId())
.isEqualTo(expectedTenantId));
assertThat(historyService.createHistoricTaskInstanceQuery().processInstanceId(hpi.getId()).list())
.allSatisfy(task -> assertThat(StringUtils.defaultIfEmpty(task.getTenantId(), ""))
.as("Historic Task %s from %s %s must belong to %s but belongs to %s.",
task.getName(), hpi.getName(), moment, expectedTenantId, task.getTenantId())
.isEqualTo(expectedTenantId));
assertThat(historyService.createHistoricTaskLogEntryQuery().processInstanceId(hpi.getId()).list())
.allSatisfy(log -> assertThat(StringUtils.defaultIfEmpty(log.getTenantId(), ""))
.as("Historic Task Log Entry %s from %s %s must belong to %s but belongs to %s.",
log.getLogNumber(), hpi.getName(), moment, expectedTenantId, log.getTenantId())
.isEqualTo(expectedTenantId));
});
}
}
@Test
void changeTenantIdWithDefinedDefinitionTenant() {
//Starting process instances that will be completed
String processInstanceIdACompleted = startProcess(TEST_TENANT_A, "testProcess", "processInstanceIdACompleted", 2);
String processInstanceIdADTCompleted = startProcess(TEST_TENANT_A, "testProcessDup", "processInstanceIdADTCompleted", 2,
TEST_TENANT_A); // For this instance we want to override the tenant Id.
String processInstanceIdBCompleted = startProcess(TEST_TENANT_B, "testProcess", "processInstanceIdBCompleted", 2);
String processInstanceIdCCompleted = startProcess(TEST_TENANT_C, "testProcess", "processInstanceIdCCompleted", 2);
//Starting process instances that will remain active
String processInstanceIdAActive = startProcess(TEST_TENANT_A, "testProcess", "processInstanceIdAActive", 1);
String processInstanceIdADTActive = startProcess(TEST_TENANT_A, "testProcessDup", "processInstanceIdADTActive", 1,
TEST_TENANT_A); // For this instance we want to override the tenant Id.
String processInstanceIdBActive = startProcess(TEST_TENANT_B, "testProcess", "processInstanceIdBActive", 1);
String processInstanceIdCActive = startProcess(TEST_TENANT_C, "testProcess", "processInstanceIdCActive", 1);
Set<String> processInstancesTenantADTOnly = new HashSet<>(Arrays.asList(processInstanceIdADTCompleted, processInstanceIdADTActive));
Set<String> processInstancesTenantANonDT = new HashSet<>(Arrays.asList(processInstanceIdACompleted, processInstanceIdAActive));
Set<String> processInstancesTenantAAll = new HashSet<>(
Arrays.asList(processInstanceIdACompleted, processInstanceIdAActive, processInstanceIdADTCompleted, processInstanceIdADTActive));
Set<String> processInstancesTenantB = new HashSet<>(Arrays.asList(processInstanceIdBCompleted, processInstanceIdBActive));
Set<String> processInstancesTenantC = new HashSet<>(Arrays.asList(processInstanceIdCCompleted, processInstanceIdCActive));
// Prior to changing the Tenant Id, all elements are associated to the original tenant
checkTenantIdForAllInstances(processInstancesTenantAAll, TEST_TENANT_A, "prior to changing to " + TEST_TENANT_B);
checkTenantIdForAllInstances(processInstancesTenantB, TEST_TENANT_B, "prior to changing to " + TEST_TENANT_B);
checkTenantIdForAllInstances(processInstancesTenantC, TEST_TENANT_C, "prior to changing to " + TEST_TENANT_B);
// First, we simulate the change
ChangeTenantIdResult simulationResult = managementService
.createChangeTenantIdBuilder(TEST_TENANT_A, TEST_TENANT_B)
.definitionTenantId("")
.simulate();
// All the instances should stay in the original tenant after the simulation
checkTenantIdForAllInstances(processInstancesTenantAAll, TEST_TENANT_A, "after simulating the change to " + TEST_TENANT_B);
checkTenantIdForAllInstances(processInstancesTenantB, TEST_TENANT_B, "after simulating the change to " + TEST_TENANT_B);
checkTenantIdForAllInstances(processInstancesTenantC, TEST_TENANT_C, "after simulating the change to " + TEST_TENANT_B);
// We now proceed with the changeTenantId operation for all the instances
ChangeTenantIdResult result = managementService
.createChangeTenantIdBuilder(TEST_TENANT_A, TEST_TENANT_B)
.definitionTenantId("")
.complete();
// All the instances from the default tenant should now be assigned to the tenant B
checkTenantIdForAllInstances(processInstancesTenantADTOnly, TEST_TENANT_B, "after the change to " + TEST_TENANT_B);
// But the instances that were created with a definition from tenant A must stay in tenant A
checkTenantIdForAllInstances(processInstancesTenantANonDT, TEST_TENANT_A, "after the change to " + TEST_TENANT_B);
// The instances from Tenant B are still associated to tenant B
checkTenantIdForAllInstances(processInstancesTenantB, TEST_TENANT_B, "after the change to " + TEST_TENANT_B);
// The instances for tenant C remain untouched in tenant C
checkTenantIdForAllInstances(processInstancesTenantC, TEST_TENANT_C, "after the change to " + TEST_TENANT_B);
//Expected results map
Map<String, Long> resultMap = new HashMap<>();
resultMap.put(BpmnChangeTenantIdEntityTypes.ACTIVITY_INSTANCES, 7L);
resultMap.put(BpmnChangeTenantIdEntityTypes.EXECUTIONS, 2L);
resultMap.put(BpmnChangeTenantIdEntityTypes.EVENT_SUBSCRIPTIONS, 0L);
resultMap.put(BpmnChangeTenantIdEntityTypes.TASKS, 1L);
resultMap.put(BpmnChangeTenantIdEntityTypes.EXTERNAL_WORKER_JOBS, 0L);
resultMap.put(BpmnChangeTenantIdEntityTypes.HISTORIC_ACTIVITY_INSTANCES, 16L);
resultMap.put(BpmnChangeTenantIdEntityTypes.HISTORIC_PROCESS_INSTANCES, 2L);
resultMap.put(BpmnChangeTenantIdEntityTypes.HISTORIC_TASK_LOG_ENTRIES, 7L);
resultMap.put(BpmnChangeTenantIdEntityTypes.HISTORIC_TASK_INSTANCES, 4L);
resultMap.put(BpmnChangeTenantIdEntityTypes.JOBS, 0L);
resultMap.put(BpmnChangeTenantIdEntityTypes.SUSPENDED_JOBS, 0L);
resultMap.put(BpmnChangeTenantIdEntityTypes.TIMER_JOBS, 0L);
resultMap.put(BpmnChangeTenantIdEntityTypes.DEADLETTER_JOBS, 0L);
//Check that all the entities are returned
assertThat(simulationResult.getChangedEntityTypes())
.containsExactlyInAnyOrderElementsOf(resultMap.keySet());
assertThat(result.getChangedEntityTypes())
.containsExactlyInAnyOrderElementsOf(resultMap.keySet());
resultMap.forEach((key, value) -> {
//Check simulation result content
assertThat(simulationResult.getChangedInstances(key))
.as(key)
.isEqualTo(value);
//Check result content
assertThat(result.getChangedInstances(key))
.as(key)
.isEqualTo(value);
});
//Check that we can complete the active instances that we have changed
completeTask(processInstanceIdAActive);
assertProcessEnded(processInstanceIdAActive);
completeTask(processInstanceIdADTActive);
assertProcessEnded(processInstanceIdADTActive);
completeTask(processInstanceIdBActive);
assertProcessEnded(processInstanceIdBActive);
completeTask(processInstanceIdCActive);
assertProcessEnded(processInstanceIdCActive);
assertThat(eventListener.events).hasSize(1);
FlowableChangeTenantIdEvent event = eventListener.events.get(0);
assertThat(event.getEngineScopeType()).isEqualTo(ScopeTypes.BPMN);
assertThat(event.getSourceTenantId()).isEqualTo(TEST_TENANT_A);
assertThat(event.getTargetTenantId()).isEqualTo(TEST_TENANT_B);
assertThat(event.getDefinitionTenantId()).isEqualTo("");
}
@Test
void changeTenantIdWhenTenantsAreInvalid() {
assertThatThrownBy(() -> managementService.createChangeTenantIdBuilder(TEST_TENANT_A, TEST_TENANT_A).simulate())
.isInstanceOf(FlowableIllegalArgumentException.class)
.hasMessage("The source and the target tenant ids must be different.");
assertThatThrownBy(() -> managementService.createChangeTenantIdBuilder(TEST_TENANT_A, TEST_TENANT_A).complete())
.isInstanceOf(FlowableIllegalArgumentException.class)
.hasMessage("The source and the target tenant ids must be different.");
assertThatThrownBy(() -> managementService.createChangeTenantIdBuilder(null, TEST_TENANT_A).simulate())
.isInstanceOf(FlowableIllegalArgumentException.class)
.hasMessage("The source tenant id must not be null.");
assertThatThrownBy(() -> managementService.createChangeTenantIdBuilder(null, TEST_TENANT_A).complete())
.isInstanceOf(FlowableIllegalArgumentException.class)
.hasMessage("The source tenant id must not be null.");
assertThatThrownBy(() -> managementService.createChangeTenantIdBuilder(TEST_TENANT_A, null).simulate())
.isInstanceOf(FlowableIllegalArgumentException.class)
.hasMessage("The target tenant id must not be null.");
assertThatThrownBy(() -> managementService.createChangeTenantIdBuilder(TEST_TENANT_A, null).complete())
.isInstanceOf(FlowableIllegalArgumentException.class)
.hasMessage("The target tenant id must not be null.");
assertThatThrownBy(() -> managementService.createChangeTenantIdBuilder(TEST_TENANT_A, TEST_TENANT_B).definitionTenantId(null))
.isInstanceOf(FlowableIllegalArgumentException.class)
.hasMessage("definitionTenantId must not be null");
}
private String startProcess(String tenantId, String processDefinitionKey, String processInstanceName, int completeTaskLoops) {
return startProcess(tenantId, processDefinitionKey, processInstanceName, completeTaskLoops, null);
}
private String startProcess(String tenantId, String processDefinitionKey, String processInstanceName, int completeTaskLoops, String overrideTenantId) {
ProcessInstanceBuilder processInstanceBuilder = runtimeService.createProcessInstanceBuilder().processDefinitionKey(processDefinitionKey)
.name(processInstanceName).tenantId(tenantId).fallbackToDefaultTenant();
if (overrideTenantId != null) {
processInstanceBuilder.overrideProcessDefinitionTenantId(overrideTenantId);
}
ProcessInstance processInstance = processInstanceBuilder.start();
for (int i = 0; i < completeTaskLoops; i++) {
completeTask(processInstance);
}
return processInstance.getId();
}
private void completeTask(ProcessInstance processInstance) {
completeTask(processInstance.getId());
}
private void completeTask(String processInstanceId) {
Task task = taskService.createTaskQuery().processInstanceId(processInstanceId).singleResult();
taskService.complete(task.getId());
}
protected static class TestEventListener extends AbstractFlowableEventListener {
protected final List<FlowableChangeTenantIdEvent> events = new ArrayList<>();
@Override
public void onEvent(FlowableEvent event) {
if (event instanceof FlowableChangeTenantIdEvent) {
events.add((FlowableChangeTenantIdEvent) event);
}
}
@Override
public boolean isFailOnException() {
return true;
}
@Override
public Collection<? extends FlowableEventType> getTypes() {
return Collections.singleton(FlowableEngineEventType.CHANGE_TENANT_ID);
}
}
}
```
|
Lennyville is an unincorporated community in Hancock County, West Virginia, United States. It lies at an elevation of 794 feet (242 m).
References
Unincorporated communities in Hancock County, West Virginia
Unincorporated communities in West Virginia
|
- Change the style of the decoration with `text-decoration-style`
- Change the color of the decoration with `text-decoration-color`
- Use the `font-variant` property to transform text to small-caps
- Load custom fonts on a web page using `@font-face`
- Page breaks for printing
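The topics above can be combined in one short stylesheet. This is a minimal sketch, not from the original text: the font family name and file path are placeholders, and the selectors are illustrative only.

```css
/* Load a custom font; "MyCustomFont" and the URL are hypothetical. */
@font-face {
  font-family: "MyCustomFont";
  src: url("fonts/mycustomfont.woff2") format("woff2");
}

h2 {
  font-family: "MyCustomFont", sans-serif;
  font-variant: small-caps;        /* transform text to small-caps */
  text-decoration: underline;
  text-decoration-style: wavy;     /* style of the decoration */
  text-decoration-color: crimson;  /* color of the decoration */
}

/* Start each h2 section on a new page when printing. */
@media print {
  h2 {
    break-before: page;
  }
}
```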
|
```c++
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "src/type-info.h"
#include "src/ast.h"
#include "src/code-stubs.h"
#include "src/compiler.h"
#include "src/ic/ic.h"
#include "src/ic/stub-cache.h"
#include "src/objects-inl.h"
namespace v8 {
namespace internal {
TypeFeedbackOracle::TypeFeedbackOracle(
Isolate* isolate, Zone* zone, Handle<Code> code,
Handle<TypeFeedbackVector> feedback_vector, Handle<Context> native_context)
: native_context_(native_context), isolate_(isolate), zone_(zone) {
BuildDictionary(code);
DCHECK(dictionary_->IsDictionary());
// We make a copy of the feedback vector because a GC could clear
// the type feedback info contained therein.
// TODO(mvstanton): revisit the decision to copy when we weakly
// traverse the feedback vector at GC time.
feedback_vector_ = TypeFeedbackVector::Copy(isolate, feedback_vector);
}
static uint32_t IdToKey(TypeFeedbackId ast_id) {
return static_cast<uint32_t>(ast_id.ToInt());
}
Handle<Object> TypeFeedbackOracle::GetInfo(TypeFeedbackId ast_id) {
int entry = dictionary_->FindEntry(IdToKey(ast_id));
if (entry != UnseededNumberDictionary::kNotFound) {
Object* value = dictionary_->ValueAt(entry);
if (value->IsCell()) {
Cell* cell = Cell::cast(value);
return Handle<Object>(cell->value(), isolate());
} else {
return Handle<Object>(value, isolate());
}
}
return Handle<Object>::cast(isolate()->factory()->undefined_value());
}
Handle<Object> TypeFeedbackOracle::GetInfo(FeedbackVectorSlot slot) {
DCHECK(slot.ToInt() >= 0 && slot.ToInt() < feedback_vector_->length());
Handle<Object> undefined =
Handle<Object>::cast(isolate()->factory()->undefined_value());
Object* obj = feedback_vector_->Get(slot);
// Slots do not embed direct pointers to maps, functions. Instead
// a WeakCell is always used.
if (obj->IsWeakCell()) {
WeakCell* cell = WeakCell::cast(obj);
if (cell->cleared()) return undefined;
obj = cell->value();
}
if (obj->IsJSFunction() || obj->IsAllocationSite() || obj->IsSymbol() ||
obj->IsSimd128Value()) {
return Handle<Object>(obj, isolate());
}
return undefined;
}
InlineCacheState TypeFeedbackOracle::LoadInlineCacheState(TypeFeedbackId id) {
Handle<Object> maybe_code = GetInfo(id);
if (maybe_code->IsCode()) {
Handle<Code> code = Handle<Code>::cast(maybe_code);
if (code->is_inline_cache_stub()) return code->ic_state();
}
// If we can't find an IC, assume we've seen *something*, but we don't know
// what. PREMONOMORPHIC roughly encodes this meaning.
return PREMONOMORPHIC;
}
InlineCacheState TypeFeedbackOracle::LoadInlineCacheState(
FeedbackVectorSlot slot) {
if (!slot.IsInvalid()) {
FeedbackVectorSlotKind kind = feedback_vector_->GetKind(slot);
if (kind == FeedbackVectorSlotKind::LOAD_IC) {
LoadICNexus nexus(feedback_vector_, slot);
return nexus.StateFromFeedback();
} else if (kind == FeedbackVectorSlotKind::KEYED_LOAD_IC) {
KeyedLoadICNexus nexus(feedback_vector_, slot);
return nexus.StateFromFeedback();
}
}
// If we can't find an IC, assume we've seen *something*, but we don't know
// what. PREMONOMORPHIC roughly encodes this meaning.
return PREMONOMORPHIC;
}
bool TypeFeedbackOracle::StoreIsUninitialized(TypeFeedbackId ast_id) {
Handle<Object> maybe_code = GetInfo(ast_id);
if (!maybe_code->IsCode()) return false;
Handle<Code> code = Handle<Code>::cast(maybe_code);
return code->ic_state() == UNINITIALIZED;
}
bool TypeFeedbackOracle::StoreIsUninitialized(FeedbackVectorSlot slot) {
if (!slot.IsInvalid()) {
FeedbackVectorSlotKind kind = feedback_vector_->GetKind(slot);
if (kind == FeedbackVectorSlotKind::STORE_IC) {
StoreICNexus nexus(feedback_vector_, slot);
return nexus.StateFromFeedback() == UNINITIALIZED;
} else if (kind == FeedbackVectorSlotKind::KEYED_STORE_IC) {
KeyedStoreICNexus nexus(feedback_vector_, slot);
return nexus.StateFromFeedback() == UNINITIALIZED;
}
}
return true;
}
bool TypeFeedbackOracle::CallIsUninitialized(FeedbackVectorSlot slot) {
Handle<Object> value = GetInfo(slot);
return value->IsUndefined() ||
value.is_identical_to(
TypeFeedbackVector::UninitializedSentinel(isolate()));
}
bool TypeFeedbackOracle::CallIsMonomorphic(FeedbackVectorSlot slot) {
Handle<Object> value = GetInfo(slot);
return value->IsAllocationSite() || value->IsJSFunction();
}
bool TypeFeedbackOracle::CallNewIsMonomorphic(FeedbackVectorSlot slot) {
Handle<Object> info = GetInfo(slot);
return info->IsAllocationSite() || info->IsJSFunction();
}
byte TypeFeedbackOracle::ForInType(FeedbackVectorSlot feedback_vector_slot) {
Handle<Object> value = GetInfo(feedback_vector_slot);
return value.is_identical_to(
TypeFeedbackVector::UninitializedSentinel(isolate()))
? ForInStatement::FAST_FOR_IN
: ForInStatement::SLOW_FOR_IN;
}
void TypeFeedbackOracle::GetStoreModeAndKeyType(
TypeFeedbackId ast_id, KeyedAccessStoreMode* store_mode,
IcCheckType* key_type) {
Handle<Object> maybe_code = GetInfo(ast_id);
if (maybe_code->IsCode()) {
Handle<Code> code = Handle<Code>::cast(maybe_code);
if (code->kind() == Code::KEYED_STORE_IC) {
ExtraICState extra_ic_state = code->extra_ic_state();
*store_mode = KeyedStoreIC::GetKeyedAccessStoreMode(extra_ic_state);
*key_type = KeyedStoreIC::GetKeyType(extra_ic_state);
return;
}
}
*store_mode = STANDARD_STORE;
*key_type = ELEMENT;
}
void TypeFeedbackOracle::GetStoreModeAndKeyType(
FeedbackVectorSlot slot, KeyedAccessStoreMode* store_mode,
IcCheckType* key_type) {
if (!slot.IsInvalid() &&
feedback_vector_->GetKind(slot) ==
FeedbackVectorSlotKind::KEYED_STORE_IC) {
KeyedStoreICNexus nexus(feedback_vector_, slot);
*store_mode = nexus.GetKeyedAccessStoreMode();
*key_type = nexus.GetKeyType();
} else {
*store_mode = STANDARD_STORE;
*key_type = ELEMENT;
}
}
Handle<JSFunction> TypeFeedbackOracle::GetCallTarget(FeedbackVectorSlot slot) {
Handle<Object> info = GetInfo(slot);
if (info->IsAllocationSite()) {
return Handle<JSFunction>(isolate()->native_context()->array_function());
}
return Handle<JSFunction>::cast(info);
}
Handle<JSFunction> TypeFeedbackOracle::GetCallNewTarget(
FeedbackVectorSlot slot) {
Handle<Object> info = GetInfo(slot);
if (info->IsJSFunction()) {
return Handle<JSFunction>::cast(info);
}
DCHECK(info->IsAllocationSite());
return Handle<JSFunction>(isolate()->native_context()->array_function());
}
Handle<AllocationSite> TypeFeedbackOracle::GetCallAllocationSite(
FeedbackVectorSlot slot) {
Handle<Object> info = GetInfo(slot);
if (info->IsAllocationSite()) {
return Handle<AllocationSite>::cast(info);
}
return Handle<AllocationSite>::null();
}
Handle<AllocationSite> TypeFeedbackOracle::GetCallNewAllocationSite(
FeedbackVectorSlot slot) {
Handle<Object> info = GetInfo(slot);
if (info->IsAllocationSite()) {
return Handle<AllocationSite>::cast(info);
}
return Handle<AllocationSite>::null();
}
bool TypeFeedbackOracle::LoadIsBuiltin(
TypeFeedbackId id, Builtins::Name builtin) {
return *GetInfo(id) == isolate()->builtins()->builtin(builtin);
}
void TypeFeedbackOracle::CompareType(TypeFeedbackId id,
Type** left_type,
Type** right_type,
Type** combined_type) {
Handle<Object> info = GetInfo(id);
if (!info->IsCode()) {
// For some comparisons we don't have ICs, e.g. LiteralCompareTypeof.
*left_type = *right_type = *combined_type = Type::None(zone());
return;
}
Handle<Code> code = Handle<Code>::cast(info);
Handle<Map> map;
Map* raw_map = code->FindFirstMap();
if (raw_map != NULL) Map::TryUpdate(handle(raw_map)).ToHandle(&map);
if (code->is_compare_ic_stub()) {
CompareICStub stub(code->stub_key(), isolate());
*left_type = CompareICState::StateToType(zone(), stub.left());
*right_type = CompareICState::StateToType(zone(), stub.right());
*combined_type = CompareICState::StateToType(zone(), stub.state(), map);
} else if (code->is_compare_nil_ic_stub()) {
CompareNilICStub stub(isolate(), code->extra_ic_state());
*combined_type = stub.GetType(zone(), map);
*left_type = *right_type = stub.GetInputType(zone(), map);
}
}
void TypeFeedbackOracle::BinaryType(TypeFeedbackId id,
Type** left,
Type** right,
Type** result,
Maybe<int>* fixed_right_arg,
Handle<AllocationSite>* allocation_site,
Token::Value op) {
Handle<Object> object = GetInfo(id);
if (!object->IsCode()) {
// For some binary ops we don't have ICs, e.g. Token::COMMA, but for the
// operations covered by the BinaryOpIC we should always have them.
DCHECK(op < BinaryOpICState::FIRST_TOKEN ||
op > BinaryOpICState::LAST_TOKEN);
*left = *right = *result = Type::None(zone());
*fixed_right_arg = Nothing<int>();
*allocation_site = Handle<AllocationSite>::null();
return;
}
Handle<Code> code = Handle<Code>::cast(object);
DCHECK_EQ(Code::BINARY_OP_IC, code->kind());
BinaryOpICState state(isolate(), code->extra_ic_state());
DCHECK_EQ(op, state.op());
*left = state.GetLeftType(zone());
*right = state.GetRightType(zone());
*result = state.GetResultType(zone());
*fixed_right_arg = state.fixed_right_arg();
AllocationSite* first_allocation_site = code->FindFirstAllocationSite();
if (first_allocation_site != NULL) {
*allocation_site = handle(first_allocation_site);
} else {
*allocation_site = Handle<AllocationSite>::null();
}
}
Type* TypeFeedbackOracle::CountType(TypeFeedbackId id) {
Handle<Object> object = GetInfo(id);
if (!object->IsCode()) return Type::None(zone());
Handle<Code> code = Handle<Code>::cast(object);
DCHECK_EQ(Code::BINARY_OP_IC, code->kind());
BinaryOpICState state(isolate(), code->extra_ic_state());
return state.GetLeftType(zone());
}
bool TypeFeedbackOracle::HasOnlyStringMaps(SmallMapList* receiver_types) {
bool all_strings = receiver_types->length() > 0;
for (int i = 0; i < receiver_types->length(); i++) {
all_strings &= receiver_types->at(i)->IsStringMap();
}
return all_strings;
}
void TypeFeedbackOracle::PropertyReceiverTypes(FeedbackVectorSlot slot,
Handle<Name> name,
SmallMapList* receiver_types) {
receiver_types->Clear();
if (!slot.IsInvalid()) {
LoadICNexus nexus(feedback_vector_, slot);
Code::Flags flags = Code::ComputeHandlerFlags(Code::LOAD_IC);
CollectReceiverTypes(&nexus, name, flags, receiver_types);
}
}
void TypeFeedbackOracle::KeyedPropertyReceiverTypes(
FeedbackVectorSlot slot, SmallMapList* receiver_types, bool* is_string,
IcCheckType* key_type) {
receiver_types->Clear();
if (slot.IsInvalid()) {
*is_string = false;
*key_type = ELEMENT;
} else {
KeyedLoadICNexus nexus(feedback_vector_, slot);
CollectReceiverTypes<FeedbackNexus>(&nexus, receiver_types);
*is_string = HasOnlyStringMaps(receiver_types);
*key_type = nexus.FindFirstName() != NULL ? PROPERTY : ELEMENT;
}
}
void TypeFeedbackOracle::AssignmentReceiverTypes(TypeFeedbackId id,
Handle<Name> name,
SmallMapList* receiver_types) {
receiver_types->Clear();
Code::Flags flags = Code::ComputeHandlerFlags(Code::STORE_IC);
CollectReceiverTypes(id, name, flags, receiver_types);
}
void TypeFeedbackOracle::AssignmentReceiverTypes(FeedbackVectorSlot slot,
Handle<Name> name,
SmallMapList* receiver_types) {
receiver_types->Clear();
Code::Flags flags = Code::ComputeHandlerFlags(Code::STORE_IC);
CollectReceiverTypes(slot, name, flags, receiver_types);
}
void TypeFeedbackOracle::KeyedAssignmentReceiverTypes(
TypeFeedbackId id, SmallMapList* receiver_types,
KeyedAccessStoreMode* store_mode, IcCheckType* key_type) {
receiver_types->Clear();
CollectReceiverTypes(id, receiver_types);
GetStoreModeAndKeyType(id, store_mode, key_type);
}
void TypeFeedbackOracle::KeyedAssignmentReceiverTypes(
FeedbackVectorSlot slot, SmallMapList* receiver_types,
KeyedAccessStoreMode* store_mode, IcCheckType* key_type) {
receiver_types->Clear();
CollectReceiverTypes(slot, receiver_types);
GetStoreModeAndKeyType(slot, store_mode, key_type);
}
void TypeFeedbackOracle::CountReceiverTypes(TypeFeedbackId id,
SmallMapList* receiver_types) {
receiver_types->Clear();
CollectReceiverTypes(id, receiver_types);
}
void TypeFeedbackOracle::CountReceiverTypes(FeedbackVectorSlot slot,
SmallMapList* receiver_types) {
receiver_types->Clear();
if (!slot.IsInvalid()) CollectReceiverTypes(slot, receiver_types);
}
void TypeFeedbackOracle::CollectReceiverTypes(FeedbackVectorSlot slot,
Handle<Name> name,
Code::Flags flags,
SmallMapList* types) {
StoreICNexus nexus(feedback_vector_, slot);
CollectReceiverTypes<FeedbackNexus>(&nexus, name, flags, types);
}
void TypeFeedbackOracle::CollectReceiverTypes(TypeFeedbackId ast_id,
Handle<Name> name,
Code::Flags flags,
SmallMapList* types) {
Handle<Object> object = GetInfo(ast_id);
if (object->IsUndefined() || object->IsSmi()) return;
DCHECK(object->IsCode());
Handle<Code> code(Handle<Code>::cast(object));
CollectReceiverTypes<Code>(*code, name, flags, types);
}
template <class T>
void TypeFeedbackOracle::CollectReceiverTypes(T* obj, Handle<Name> name,
Code::Flags flags,
SmallMapList* types) {
if (FLAG_collect_megamorphic_maps_from_stub_cache &&
obj->ic_state() == MEGAMORPHIC) {
types->Reserve(4, zone());
isolate()->stub_cache()->CollectMatchingMaps(
types, name, flags, native_context_, zone());
} else {
CollectReceiverTypes<T>(obj, types);
}
}
void TypeFeedbackOracle::CollectReceiverTypes(TypeFeedbackId ast_id,
SmallMapList* types) {
Handle<Object> object = GetInfo(ast_id);
if (!object->IsCode()) return;
Handle<Code> code = Handle<Code>::cast(object);
CollectReceiverTypes<Code>(*code, types);
}
void TypeFeedbackOracle::CollectReceiverTypes(FeedbackVectorSlot slot,
SmallMapList* types) {
FeedbackVectorSlotKind kind = feedback_vector_->GetKind(slot);
if (kind == FeedbackVectorSlotKind::STORE_IC) {
StoreICNexus nexus(feedback_vector_, slot);
CollectReceiverTypes<FeedbackNexus>(&nexus, types);
} else {
DCHECK_EQ(FeedbackVectorSlotKind::KEYED_STORE_IC, kind);
KeyedStoreICNexus nexus(feedback_vector_, slot);
CollectReceiverTypes<FeedbackNexus>(&nexus, types);
}
}
template <class T>
void TypeFeedbackOracle::CollectReceiverTypes(T* obj, SmallMapList* types) {
MapHandleList maps;
if (obj->ic_state() == MONOMORPHIC) {
Map* map = obj->FindFirstMap();
if (map != NULL) maps.Add(handle(map));
} else if (obj->ic_state() == POLYMORPHIC) {
obj->FindAllMaps(&maps);
} else {
return;
}
types->Reserve(maps.length(), zone());
for (int i = 0; i < maps.length(); i++) {
Handle<Map> map(maps.at(i));
if (IsRelevantFeedback(*map, *native_context_)) {
types->AddMapIfMissing(maps.at(i), zone());
}
}
}
uint16_t TypeFeedbackOracle::ToBooleanTypes(TypeFeedbackId id) {
Handle<Object> object = GetInfo(id);
return object->IsCode() ? Handle<Code>::cast(object)->to_boolean_state() : 0;
}
// Things are a bit tricky here: The iterator for the RelocInfos and the infos
// themselves are not GC-safe, so we first get all infos, then we create the
// dictionary (possibly triggering GC), and finally we relocate the collected
// infos before we process them.
void TypeFeedbackOracle::BuildDictionary(Handle<Code> code) {
DisallowHeapAllocation no_allocation;
ZoneList<RelocInfo> infos(16, zone());
HandleScope scope(isolate());
GetRelocInfos(code, &infos);
CreateDictionary(code, &infos);
ProcessRelocInfos(&infos);
// Allocate handle in the parent scope.
dictionary_ = scope.CloseAndEscape(dictionary_);
}
void TypeFeedbackOracle::GetRelocInfos(Handle<Code> code,
ZoneList<RelocInfo>* infos) {
int mask = RelocInfo::ModeMask(RelocInfo::CODE_TARGET_WITH_ID);
for (RelocIterator it(*code, mask); !it.done(); it.next()) {
infos->Add(*it.rinfo(), zone());
}
}
void TypeFeedbackOracle::CreateDictionary(Handle<Code> code,
ZoneList<RelocInfo>* infos) {
AllowHeapAllocation allocation_allowed;
Code* old_code = *code;
dictionary_ = UnseededNumberDictionary::New(isolate(), infos->length());
RelocateRelocInfos(infos, old_code, *code);
}
void TypeFeedbackOracle::RelocateRelocInfos(ZoneList<RelocInfo>* infos,
Code* old_code,
Code* new_code) {
for (int i = 0; i < infos->length(); i++) {
RelocInfo* info = &(*infos)[i];
info->set_host(new_code);
info->set_pc(new_code->instruction_start() +
(info->pc() - old_code->instruction_start()));
}
}
void TypeFeedbackOracle::ProcessRelocInfos(ZoneList<RelocInfo>* infos) {
for (int i = 0; i < infos->length(); i++) {
RelocInfo reloc_entry = (*infos)[i];
Address target_address = reloc_entry.target_address();
TypeFeedbackId ast_id =
TypeFeedbackId(static_cast<unsigned>((*infos)[i].data()));
Code* target = Code::GetCodeFromTargetAddress(target_address);
switch (target->kind()) {
case Code::LOAD_IC:
case Code::STORE_IC:
case Code::KEYED_LOAD_IC:
case Code::KEYED_STORE_IC:
case Code::BINARY_OP_IC:
case Code::COMPARE_IC:
case Code::TO_BOOLEAN_IC:
case Code::COMPARE_NIL_IC:
SetInfo(ast_id, target);
break;
default:
break;
}
}
}
void TypeFeedbackOracle::SetInfo(TypeFeedbackId ast_id, Object* target) {
DCHECK(dictionary_->FindEntry(IdToKey(ast_id)) ==
UnseededNumberDictionary::kNotFound);
// Dictionary has been allocated with sufficient size for all elements.
DisallowHeapAllocation no_need_to_resize_dictionary;
HandleScope scope(isolate());
USE(UnseededNumberDictionary::AtNumberPut(
dictionary_, IdToKey(ast_id), handle(target, isolate())));
}
} // namespace internal
} // namespace v8
```
|
Lovejoy are an indie rock band formed in Brighton, England in 2021. The band consists of William Gold (also known by the stage name Wilbur Soot) as lead vocalist and rhythm guitarist, Joe Goldsmith as lead guitarist, Mark Boardman as drummer, and Ash Kabosu as bassist, with all four also sharing in songwriting.
Their debut EP, Are You Alright?, was released on 9 May 2021. Their second EP, Pebble Brain, came out on 14 October 2021. The band's third EP, Wake Up & It's Over, was released on 12 May 2023. The band independently releases their music under their own label, Anvil Cat Records, distributing with AWAL.
History
2021–2022: Formation, Are You Alright?, and Pebble Brain
Lovejoy was founded by William Gold and Joe Goldsmith in 2021. The two had met while playing in the same folk group. Gold had previously gained a following online for his Twitch streams and YouTube videos, as well as his solo music. The band was originally called "Hang the DJ". The current name honours Benedict Lovejoy, a friend of the band who would sit with them during their early days of songwriting.
Before joining Lovejoy, drummer Mark Boardman had been studying editing at university, bass guitarist Ash Kabosu was working in TV broadcasting, and lead guitarist Joe Goldsmith was a tree surgeon. Gold met Kabosu, a friend of a friend, in a Smashburger shop in Brighton, and asked him if he'd like to join the band. During studio recording, the band hired Boardman on Fiverr, and after watching him perform, Gold invited him to join as a permanent member.
The band recorded their debut EP, Are You Alright?, in two days at Brighton Electric in March 2021. It was released on 9 May 2021, and saw Lovejoy debut at number 10 on Billboard's Emerging Artists chart on 20 May 2021.
Their second EP, Pebble Brain, was recorded in August 2021 at Small Pond Recording Studios in Brighton and released on 14 October 2021. The EP peaked at number 12 on the UK Albums Chart. Its songs centre on failed romantic relationships but also extend to political unrest. In October 2022, the band released their first two EPs, as well as their cover of "Knee Deep at ATP", on vinyl for the first time.
Prior to their large concert and festival performances, Lovejoy performed unadvertised gigs under pseudonyms. From November 2022, the band began to receive renewed attention on mainstream radio, with airtime on BBC Radio 1 and on Dutch radio. Their songs have also been featured on BBC Music Introducing.
2023–present: Wake Up & It's Over, From Studio 4, and touring
On 10 February 2023, Lovejoy released the single "Call Me What You Like", which peaked at number 32 on the Official UK Top 40 chart. The song would later be featured on their third EP, Wake Up & It's Over, which was released on 12 May 2023 and debuted at number 5 on the UK album charts. In addition, "Portrait of a Blank Slate" from the EP is featured in the soundtrack for EA Sports FC 24.
On 7 April 2023, under the Anvil Cat alias, the band released a surprise EP titled From Studio 4. It contains acoustic versions of the songs from their debut EP, Are You Alright?, as well as a previously unreleased track introduction, "Tomorrow".
In May 2023, the band performed in London for a Spotify event entitled Our Generation. They released two songs from the session as part of Spotify Singles, including an acoustic version of "Call Me What You Like" and a cover of "The Perfect Pair" by Beabadoobee. Gold said the band has "been huge admirers of Beabadoobee for a while now and love the melody of this song, so it was great to give it our own Lovejoy twist".
The band appears on the front cover of issue 77 of the music magazine Dork, released in June 2023. On 25 June 2023, Lovejoy performed at Glastonbury Festival on the BBC Music Introducing stage. They have also performed at Rock Werchter, TRNSMT, the Montreux Jazz Festival, Osheaga, and Lollapalooza. They performed at Leeds Festival on 25 August 2023 and Reading Festival on 27 August 2023, and then embarked on their Wake Up & It's Over Tour across the United Kingdom.
On 2 October 2023, the band announced that their upcoming single "Normal People Things" would be released on 6 October 2023, after having previously teased the track in the week prior. The same day, they announced their Road to 100 tour.
The band performed "Call Me What You Like" on the Late Show with Stephen Colbert on 13 October 2023.
Members
Band members
William Gold – lead vocals, rhythm guitar, songwriting (2021–present)
Joe Goldsmith – lead guitar, backing vocals, songwriting (2021–present)
Ash Kabosu – bass, songwriting (2021–present)
Mark Boardman – drums, backing vocals, songwriting (2021–present)
Touring members
Touring personnel differs between shows.
Leandra Badruza – trumpet, keyboards (2022–present)
Isaac Beer – trumpet (2022)
Zoe – trumpet, keyboards (2022–2023)
Alan Osmundson – trumpet, keyboards (2023–present)
Jackie Coleman – trumpet, keyboards (2023–present)
Discography
Extended plays
Compilation albums
Streaming-exclusive releases
Singles
Other charted songs
Other appearances
Music videos
Tours
Northern Autumn Tour (2022)
December Tour (2022)
Inselaffe Tour (2023)
Across the Pond Tour (2023)
Wake Up & It's Over Tour (2023)
Road to 100 Tour (2023)
Notes
References
External links
2021 establishments in England
Musical groups established in 2021
Musical groups from Brighton and Hove
AWAL artists
English indie rock groups
English pop punk groups
|
```xml
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netstandard2.0</TargetFramework>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="StyleCop.Analyzers" Version="1.1.118" PrivateAssets="all">
<IncludeAssets>runtime; build; native; contentfiles; analyzers</IncludeAssets>
</PackageReference>
<PackageReference Include="AspNetCoreAnalyzers" Version="0.2.0.1-dev" PrivateAssets="all" />
<PackageReference Include="IDisposableAnalyzers" Version="3.4.8" PrivateAssets="all" />
<PackageReference Include="Microsoft.Windows.CsWin32" Version="0.1.647-beta" PrivateAssets="all">
<IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
</PackageReference>
</ItemGroup>
</Project>
```
|
Perinatal mortality (PNM) is the death of a fetus or neonate and is the basis to calculate the perinatal mortality rate. Perinatal means "relating to the period starting a few weeks before birth and including the birth and a few weeks after birth."
Variations in the precise definition of the perinatal mortality exist, specifically concerning the issue of inclusion or exclusion of early fetal and late neonatal fatalities. The World Health Organization defines perinatal mortality as the "number of stillbirths and deaths in the first week of life per 1,000 total births, the perinatal period commences at 22 completed weeks (154 days) of gestation, and ends seven completed days after birth", but other definitions have been used.
The UK figure is about 8 per 1,000 and varies markedly by social class, with the highest rates seen in Asian women. Globally, an estimated 2.6 million neonates died within their first month of life in 2013, down from 4.5 million in 1990.
Causes
Preterm birth is the most common cause of perinatal mortality, accounting for almost 30 percent of neonatal deaths. Infant respiratory distress syndrome, in turn, is the leading cause of death in preterm infants, affecting about 1% of newborn infants. Birth defects cause about 21 percent of neonatal deaths.
Fetal mortality
Fetal mortality refers to stillbirths or fetal death. It encompasses any death of a fetus after 20 weeks of gestation, or of a fetus weighing at least 500 g. In some definitions of the PNM, early fetal mortality (weeks 20–27 of gestation) is not included, and the PNM may only include late fetal death and neonatal death. Fetal death can also be divided into death before labor (antenatal or antepartum death) and death during labor (intranatal or intrapartum death).
Neonatal mortality
Neonatal mortality refers to the death of a live-born baby within the first 28 days of life. Early neonatal mortality refers to death within the first seven days of life, while late neonatal mortality refers to death after the seventh day but before 28 days. Some definitions of the PNM include only early neonatal mortality. Neonatal mortality is affected by the quality of in-hospital care for the neonate. Neonatal mortality and postneonatal mortality (covering the remaining 11 months of the first year of life) are reflected in the infant mortality rate.
Perinatal mortality rate
The PNMR refers to the number of perinatal deaths per 1,000 total births. It is usually reported on an annual basis and is a major marker of the quality of health care delivery. Comparisons between different rates may be hampered by varying definitions, registration bias, and differences in the underlying risks of the populations.
PNMRs vary widely and may be below 10 for certain developed countries and more than 10 times higher in developing countries. The WHO has not published contemporary data.
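To make the definition above concrete, the calculation can be sketched as follows. This is an illustration of the WHO-style formula only (stillbirths plus first-week deaths per 1,000 total births, where total births are live births plus stillbirths); the function name and the example figures are invented for illustration, not real statistics.

```python
def perinatal_mortality_rate(stillbirths, early_neonatal_deaths, live_births):
    """WHO-style PNMR: perinatal deaths per 1,000 total births.

    Total births = live births + stillbirths; perinatal deaths =
    stillbirths + deaths within the first seven days of life.
    """
    total_births = live_births + stillbirths
    if total_births <= 0:
        raise ValueError("total births must be positive")
    return 1000 * (stillbirths + early_neonatal_deaths) / total_births

# Illustrative (made-up) figures: 40 stillbirths and 35 first-week deaths
# among 10,000 total births give a PNMR of 7.5 per 1,000.
rate = perinatal_mortality_rate(stillbirths=40, early_neonatal_deaths=35,
                                live_births=9960)
print(round(rate, 1))  # 7.5
```

Note that definitions that exclude early fetal deaths or count only early neonatal deaths would change both numerator and denominator, which is why comparisons between published rates require checking the definition used.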
Effects of neonatal nutrition on neonatal mortality
Probiotic supplementation of preterm and low-birthweight babies during their first month of life can reduce the risk of blood infections, bowel disease, and death in low- and middle-income settings. However, supplementing with vitamin A does not reduce the risk of death and increases the risk of a bulging fontanelle, which may cause brain damage.
See also
Maternal death
Miscarriage
Neonatal intensive care unit
Neonaticide
Pregnancy and Infant Loss Remembrance Day
Stillbirth
References
External links
WHO 2005 report
European Perinatal Health Report 2010
Medical aspects of death
Obstetrics
Infant mortality
Medical terminology
Midwifery
Pregnancy with abortive outcome
|
```javascript
#!/usr/bin/env node
'use strict';
require('./create-erxes-app');
```
|
Raymond Parkin (28 January 1911 – 18 July 1971) was an English professional footballer who played at inside right and later in his career at right half. He spent a large part of his career at Arsenal, where he played mainly in the reserves, and also appeared for Middlesbrough, before becoming a regular member of Southampton's Second Division side.
Football career
Parkin was born in Crook, County Durham and played his youth football at Esh Winning before joining Newcastle United as an amateur in October 1926.
He made no first-team appearances for Newcastle and moved south to join First Division Arsenal in February 1928. His Arsenal debut came in a 5–1 defeat at Sunderland on 1 January 1929. He was in and out of the side for the rest of the season and scored his first goals for Arsenal when he netted twice in a 7–1 victory over Bury on 30 March, with David Jack scoring four goals. Despite scoring three goals in five matches in his debut season, Parkin made no first-team appearances in the next two seasons, and it was not until September 1931 that he made another Football League appearance. On 30 January 1932, he scored a hat-trick in a 4–0 victory over Manchester City.
Although Parkin remained with Arsenal until January 1936, he only made eleven appearances in his final four seasons, before being transferred to Middlesbrough for a fee of £2,500. He appeared regularly for Arsenal's reserve team, making 232 appearances and winning the Combination League five times. In his eight years at Highbury, he only made 26 first-team appearances, scoring 11 goals.
After nearly two years at Middlesbrough with only six first-team appearances, he moved in September 1937 for a fee of £1,500 to Southampton, where his former Arsenal teammate, Tom Parker, was manager. Parkin scored on his Saints debut, a 3–3 draw with West Ham United on 18 September. He made 13 appearances at inside right, before losing his place to another new signing, Ted Bates in December. Parkin was recalled to the side in February, and remained in the side for the rest of the season, either at inside right or centre forward.
The following season, Parkin was moved to right half to replace Cyril King, retaining his place for the rest of the season. In September 1939, he played twice before the Football League was abandoned for the Second World War.
Honours
Arsenal
Football Combination (formerly the London Combination) champions: 1928–29, 1929–30, 1930–31, 1933–34, 1934–35
World War II and after
During the Second World War, "the Board gave permission for Parkin to guest for Holiday Sports". After 1945, he worked as an electrician in a coalmine near Leicester.
References
External links
Career details on www.11v11.com
1911 births
1971 deaths
People from Crook, County Durham
Footballers from County Durham
English men's footballers
Newcastle United F.C. players
Arsenal F.C. players
Southampton F.C. players
Middlesbrough F.C. players
English Football League players
Men's association football midfielders
Men's association football forwards
Esh Winning F.C. players
|
```typescript
/*
 * @license Apache-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 */
// TypeScript Version: 4.1
/// <reference types="@stdlib/types"/>
import * as random from '@stdlib/types/random';
import { TypedIterator } from '@stdlib/types/iter';
/**
* Interface defining function options.
*/
interface Options {
/**
* Pseudorandom number generator which generates uniformly distributed pseudorandom numbers.
*/
prng?: random.PRNG;
/**
* Pseudorandom number generator seed.
*/
seed?: random.PRNGSeedMT19937;
/**
* Pseudorandom number generator state.
*/
state?: random.PRNGStateMT19937;
/**
* Boolean indicating whether to copy a provided pseudorandom number generator state (default: true).
*/
copy?: boolean;
/**
* Number of iterations.
*/
iter?: number;
}
/**
* Interface for iterators of pseudorandom numbers having integer values.
*/
interface Iterator<T> extends TypedIterator<T> {
/**
* Underlying pseudorandom number generator.
*/
readonly PRNG: random.PRNG;
/**
* Pseudorandom number generator seed.
*/
readonly seed: random.PRNGSeedMT19937;
/**
* Length of generator seed.
*/
readonly seedLength: number;
/**
* Generator state.
*/
state: random.PRNGStateMT19937;
/**
* Length of generator state.
*/
readonly stateLength: number;
/**
* Size (in bytes) of generator state.
*/
readonly byteLength: number;
}
/**
* Returns an iterator for generating pseudorandom numbers drawn from a standard normal distribution using the Improved Ziggurat algorithm.
*
* @param options - function options
* @param options.prng - pseudorandom number generator which generates uniformly distributed pseudorandom numbers
* @param options.seed - pseudorandom number generator seed
* @param options.state - pseudorandom number generator state
* @param options.copy - boolean indicating whether to copy a provided pseudorandom number generator state (default: true)
* @param options.iter - number of iterations
* @throws must provide valid options
* @returns iterator
*
* @example
* var iter = iterator();
*
* var r = iter.next().value;
* // returns <number>
*
* r = iter.next().value;
* // returns <number>
*
* r = iter.next().value;
* // returns <number>
*
* // ...
*/
declare function iterator( options?: Options ): Iterator<number>;
// EXPORTS //
export = iterator;
```
|
Mihai Eminescu (; born Mihail Eminovici; 15 January 1850 – 15 June 1889) was a Romanian Romantic poet from Moldavia, novelist, and journalist, generally regarded as the most famous and influential Romanian poet. Eminescu was an active member of the Junimea literary society and worked as an editor for the newspaper Timpul ("The Time"), the official newspaper of the Conservative Party (1880–1918). His poetry was first published when he was 16 and he went to Vienna, Austria to study when he was 19. The poet's manuscripts, containing 46 volumes and approximately 14,000 pages, were offered by Titu Maiorescu as a gift to the Romanian Academy during the meeting that was held on 25 January 1902. Notable works include Luceafărul (The Vesper/The Evening Star/The Lucifer/The Daystar), Odă în metru antic (Ode in Ancient Meter), and the five Letters (Epistles/Satires). In his poems, he frequently used metaphysical, mythological and historical subjects.
His father was Gheorghe Eminovici, an aristocrat from Bukovina, which was then part of the Austrian Empire (while his grandfather came from Banat). He crossed the border into Moldavia, settling in Ipotești, near the town of Botoșani. He married Raluca Iurașcu, an heiress of an old noble family. In a Junimea register, Eminescu wrote down his date of birth as 22 December 1849, while in the documents of Cernăuți Gymnasium, where Eminescu studied, his birth date is 15 January 1850. Nevertheless, Titu Maiorescu, in his work Eminescu and His Poems (1889), quoted N. D. Giurescu's research and adopted his conclusion regarding the date and place of Mihai Eminescu's birth as being 15 January 1850, in Botoșani. This date resulted from several sources, among them a file of notes on christenings from the archives of the Uspenia (Princely) Church of Botoșani; inside this file, the date of birth was "15 January 1850" and the date of christening was the 21st of the same month. The date of his birth was confirmed by the poet's elder sister, Aglae Drogli, who affirmed that the place of birth was the village of Ipotești, Botoșani County.
Life
Early years
Mihail (as he appears in baptismal records) or Mihai (the more common form of the name that he used) was born in Botoșani, Moldavia. He spent his early childhood in Botoșani and Ipotești, in his parents' family home. From 1858 to 1866 he attended school in Cernăuți. He finished 4th grade as the 5th of 82 students, after which he attended two years of gymnasium.
The first evidence of Eminescu as a writer dates to 1866. In January of that year the Romanian teacher Aron Pumnul died, and his students in Cernăuți published a pamphlet, Lăcrămioarele învățăceilor gimnaziaști (The Tears of the Gymnasium Students), in which appears a poem entitled La mormântul lui Aron Pumnul (At the Grave of Aron Pumnul), signed "M. Eminovici". On 25 February his poem De-aș avea (If I Had) was published in Iosif Vulcan's literary magazine Familia in Pest. This began a steady series of published poems (and the occasional translation from German). It was also Iosif Vulcan who, disliking the Slavic-sounding suffix "-ici" of the young poet's surname, chose for him the more Romanian-sounding pen name Mihai Eminescu.
In 1867, he joined Iorgu Caragiale's troupe as a clerk and prompter; the next year he transferred to Mihai Pascaly's troupe. Both of these were among the leading Romanian theatrical troupes of their day, the latter including Matei Millo and . He soon settled in Bucharest, where at the end of November he became a clerk and copyist for the National Theater. Throughout this period, he continued to write and publish poems. He also paid his rent by translating hundreds of pages of a book by Heinrich Theodor Rötscher, although this never resulted in a completed work. Also at this time he began his novel Geniu pustiu (Wasted Genius), published posthumously in 1904 in an unfinished form.
On 1 April 1869, he was one of the co-founders of the "Orient" literary circle, whose interests included the gathering of Romanian folklore and documents relating to Romanian literary history. On 29 June, various members of the "Orient" group were commissioned to go to different provinces. Eminescu was assigned Moldavia. That summer, he quite by chance ran into his brother Iorgu, a military officer, in Cișmigiu Gardens, but firmly rebuffed Iorgu's attempt to get him to renew his ties to his family.
Still in the summer of 1869, he left Pascaly's troupe and traveled to Cernăuţi and Iaşi. He renewed ties to his family; his father promised him a regular allowance to pursue studies in Vienna in the fall. As always, he continued to write and publish poetry; notably, on the occasion of the death of the former ruler of Wallachia, Barbu Dimitrie Știrbei, he published a leaflet, La moartea principelui Știrbei ("On the Death of Prince Știrbei").
1870s
From October 1869 to 1872 Eminescu studied at the University of Vienna. Not fulfilling the requirements to become a university student (as he did not have a baccalaureate degree), he attended lectures as a so-called "extraordinary auditor" at the Faculty of Philosophy and Law. He was active in student life, befriended Ioan Slavici, and came to know Vienna through Veronica Micle; he became a contributor to Convorbiri Literare (Literary Conversations), edited by Junimea (The Youth). The leaders of this cultural organisation, Petre P. Carp, Vasile Pogor, Theodor Rosetti, Iacob Negruzzi and Titu Maiorescu, exercised their political and cultural influence over Eminescu for the rest of his life. Impressed by one of Eminescu's poems, Venere şi Madonă (Venus and Madonna), Iacob Negruzzi, the editor of Convorbiri Literare, traveled to Vienna to meet him. Negruzzi would later write how he could pick Eminescu out of a crowd of young people in a Viennese café by his "romantic" appearance: long hair and gaze lost in thoughts.
In 1870 Eminescu wrote three articles under the pseudonym "Varro" in Federaţiunea in Pest, on the situation of Romanians and other minorities in the Austro-Hungarian Empire. He then became a journalist for the newspaper Albina (The Bee) in Pest. From 1872 to 1874 he continued as a student in Berlin, thanks to a stipend offered by Junimea.
From 1874 to 1877, he worked as director of the Central Library in Iași, substitute teacher, school inspector for the counties of Iași and Vaslui, and editor of the newspaper Curierul de Iași (The Courier of Iași), all thanks to his friendship with Titu Maiorescu, the leader of Junimea and rector of the University of Iași. He continued to publish in Convorbiri Literare. He was also a good friend of Ion Creangă, whom he encouraged to become a writer and introduced to the Junimea literary club.
In 1877 he moved to Bucharest, where until 1883 he was first journalist, then (1880) editor-in-chief of the newspaper Timpul (The Time). During this time he wrote Scrisorile, Luceafărul, Odă în metru antic etc. Most of his notable editorial pieces belong to this period, when Romania was fighting the Ottoman Empire in the Russo-Turkish War of 1877–1878 and throughout the diplomatic race that eventually brought about the international recognition of Romanian independence, but under the condition of bestowing Romanian citizenship to all subjects of Jewish faith. Eminescu opposed this and another clause of the Treaty of Berlin: Romania's having to give southern Bessarabia to Russia in exchange for Northern Dobruja, a former Ottoman province on the Black Sea.
Later life and death
The 1880s were a time of crisis and deterioration in the poet's life, culminating with his death in 1889. The details of this are still debated.
From 1883 – when Eminescu's personal crisis and his more serious health problems became evident – until 1886, the poet was treated in Austria and Italy by specialists who managed to get him back on his feet, as testified by his good friend, the writer Ioan Slavici. In 1886, Eminescu suffered a nervous breakdown and was treated by Romanian doctors, in particular Julian Bogdan and Panait Zosin. Immediately diagnosed with syphilis after being hospitalized in a hospice for nervous diseases within the Neamț Monastery, the poet was treated with mercury: first with rubs in Botoșani, applied by Dr. Itszak, and then in Bucharest at Dr. Alexandru A. Suțu's sanatorium, where between February and June 1889 he was injected with mercuric chloride. Professor Doctor Irinel Popescu, corresponding member of the Romanian Academy and president of the Academy of Medical Sciences of Romania, states that Eminescu died of mercury poisoning. He also says that the poet was "treated" by a group of incompetent doctors and held in misery, which also shortened his life. Mercury was prohibited as a treatment for syphilis in Western Europe in the 19th century because of its adverse effects.
Mihai Eminescu died at 4 am on 15 June 1889 at the Caritas Institute, a sanatorium run by Dr. Suțu and located on Plantelor Street, Sector 2, Bucharest. Eminescu's last wish was a glass of milk, which the attending doctor slipped through the metallic peephole of the "cell" where he spent the last hours of his life. In response to this favor he was said to have whispered, "I'm crumbled". The next day, on 16 June 1889, he was officially declared deceased and legal papers to that effect were prepared by doctors Suțu and Petrescu, who submitted the official report. This paperwork is seen as ambiguous, because the poet's cause of death is not clearly stated and there was no indication of any other underlying condition that may have so suddenly resulted in his death. In fact, both the poet's medical file and autopsy report indicate symptoms of a mental and not a physical disorder. Moreover, at the autopsy performed by Dr. Tomescu and then by Dr. Marinescu from the laboratory at Babeș-Bolyai University, the brain could not be studied, because a nurse inadvertently left it by an open window, where it quickly decomposed.
One of the first hypotheses that disagreed with the post mortem findings on Eminescu's cause of death was printed on 28 June 1926 in an article in the newspaper Universul. This article advances the hypothesis that Eminescu died after another patient, Petre Poenaru, a former headmaster in Craiova, hit him in the head with a board.
Dr. Vineș, the physician assigned to Eminescu at Caritas, argued at the time that the poet's death was the result of an infection secondary to his head injury. Specifically, he stated that the head wound became infected, turning into an erysipelas that then spread to the face, neck, upper limbs, thorax, and abdomen. In the same report, cited by Nicolae Georgescu in his work Eminescu târziu, Vineș states that "Eminescu's death was not due to head trauma occurred 25 days earlier and which had healed completely, but was the consequence of an older endocarditis (diagnosed by late professor N. Tomescu)".
Contemporary specialists, primarily physicians who have studied the Eminescu case, reject both hypotheses on the poet's cause of death. According to them, the poet died of cardio-respiratory arrest caused by mercury poisoning. Some eminescologists claim that Eminescu was deliberately misdiagnosed and mistreated in order to remove him from public life. As early as 1886, Dr. Julian Bogdan of Iași had diagnosed Eminescu as syphilitic, paralytic and on the verge of dementia, attributed to alcohol abuse and syphilitic gummas on the brain. The same diagnosis was given by Dr. Panait Zosin, who examined Eminescu on 6 November 1886 and wrote that the patient suffered from "mental alienation", caused by syphilis and worsened by alcoholism. Later research showed that the poet did not suffer from syphilis.
Works
Nicolae Iorga, the Romanian historian, considers Eminescu the godfather of the modern Romanian language, in the same way that Shakespeare is seen to have directly influenced the English language. He is unanimously celebrated as the greatest and most representative Romanian poet.
Poems and Prose of Mihai Eminescu (editor: , publisher: The Center for Romanian Studies, Iași, Oxford, and Portland, 2000, ) contains a selection of English-language renditions of Eminescu's poems and prose.
Poetry
His poems span a large range of themes, from nature and love to hate and social commentary. His childhood years were evoked in his later poetry with deep nostalgia.
Eminescu's poems have been translated into over 60 languages. His life, work and poetry strongly influenced Romanian culture, and his poems are widely studied in Romanian public schools.
His most notable poems are:
, first poem of Mihai Eminescu
Ce-ți doresc eu ție, dulce Românie
Somnoroase păsărele
Pe lângă plopii fără soț
Doina (the name is a traditional type of Romanian song), 1884
Lacul (The Lake), 1876
Luceafărul (The Vesper), 1883
Floare albastră (Blue Flower), 1884
Dorința (Desire), 1884
Sara pe deal (Evening on the Hill), 1885
O, rămai (Oh, Linger On), 1884
Epigonii (Epigones), 1884
Scrisori (Letters or "Epistles-Satires")
Și dacă (And if...), 1883
Odă în metru antic (Ode in Ancient Meter), 1883
Mai am un singur dor (I Have Yet One Desire), 1883
Glossă (Gloss), 1883
La Steaua (To The Star), 1886
Memento mori, 1872
Povestea magului călător în stele
Prose
Sarmanul Dionis (Poor Dionis), 1872
Cezara, 1876
Avatarii Faraonului Tla, posthumous
Geniu pustiu (Deserted genius), novel, posthumous
Presence in English language anthologies
Testament – Anthology of Modern Romanian Verse / Testament – Antologie de Poezie Română Modernă – Bilingual Edition English & Romanian – Daniel Ioniță (editor and translator) with Eva Foster and Daniel Reynaud – Minerva Publishing 2012 and 2015 (second edition) –
Testament – Anthology of Romanian Verse – American Edition - monolingual English language edition – Daniel Ioniță (editor and principal translator) with Eva Foster, Daniel Reynaud and Rochelle Bews – Australian-Romanian Academy for Culture – 2017 –
The Bessarabia of My Soul / Basarabia Sufletului Meu - a collection of poetry from the Republic of Moldova – bilingual English/Romanian – Daniel Ioniță and Maria Tonu (editors), with Eva Foster, Daniel Reynaud and Rochelle Bews – MediaTon, Toronto, Canada – 2018 –
Testament – 400 Years of Romanian Poetry – 400 de ani de poezie românească – bilingual edition – Daniel Ioniță (editor and principal translator) with Daniel Reynaud, Adriana Paul & Eva Foster – Editura Minerva, 2019 –
Romanian Poetry from its Origins to the Present – bilingual edition English/Romanian – Daniel Ioniță (editor and principal translator) with Daniel Reynaud, Adriana Paul and Eva Foster – Australian-Romanian Academy Publishing – 2020 – ;
Romanian culture
Eminescu was only 20 when Titu Maiorescu, the top literary critic in Romania, dubbed him "a real poet", in an essay where only a handful of the Romanian poets of the time were spared Maiorescu's harsh criticism. In the following decade, Eminescu's notability as a poet grew continually thanks to (1) the way he managed to enrich the literary language with words and phrases from all Romanian regions, from old texts, and with new words that he coined from his wide philosophical readings; (2) the use of bold metaphors, much too rare in earlier Romanian poetry; (3) last but not least, he was arguably the first Romanian writer who published in all Romanian provinces and was constantly interested in the problems of Romanians everywhere. He defined himself as a Romantic in a poem addressed To My Critics (Criticilor mei), and this designation, his untimely death, as well as his bohemian lifestyle (he never pursued a degree, a position, a wife or fortune) associated him with the Romantic figure of the genius. As early as the late 1880s, Eminescu had a group of faithful followers. His 1883 poem Luceafărul was so notable that a new literary review took its name after it.
The most realistic psychological analysis of Eminescu was written by I. L. Caragiale, who, after the poet's death published three short articles on this subject: In Nirvana, Irony and Two notes. Caragiale stated that Eminescu's characteristic feature was the fact that "he had an excessively unique nature". Eminescu's life was a continuous oscillation between introvert and extrovert attitudes.
The portrait that Titu Maiorescu drew in his study Eminescu and poems emphasizes Eminescu's dominant introvert traits. Maiorescu promoted the image of a dreamer far removed from reality, who did not suffer because of the material conditions he lived in; regardless of the ironies and the eulogies of those around him, his main characteristic was "abstract serenity".
In reality, just as one can discover from his poems and letters and just as Caragiale remembered, Eminescu was seldom influenced by boisterous subconscious motivations. Eminescu's life was but an overlap of different-sized cycles, made of sudden bursts that were nurtured by dreams and crises due to the impact with reality. The cycles could last from a few hours or days to weeks or months, depending on the importance of events, or could even last longer, when they were linked to the events that significantly marked his life, such as his relation with Veronica, his political activity during his years as a student, or the fact that he attended the gatherings at the Junimea society or the articles he published in the newspaper Timpul. He used to have a unique manner of describing his own crisis of jealousy.
National poet
Eminescu was soon proclaimed Romania's national poet, not because he wrote in an age of national revival, but rather because he was received as an author of paramount significance by Romanians in all provinces. Even today, he is considered the national poet of Romania, Moldova, and of the Romanians who live in Bukovina ().
Iconography
Eminescu is omnipresent in today's Romania. His statues are everywhere; his face was on the 1000-lei banknotes issued in 1991, 1992, and 1998, and is on the 500-lei banknote issued in 2005 as the highest-denominated Romanian banknote (see Romanian leu); Eminescu's Linden Tree is one of the country's most famous natural landmarks, while many schools and other institutions are named after him. The anniversaries of his birth and death are celebrated each year in many Romanian cities, and they became national celebrations in 1989 (the centennial of his death) and 2000 (150 years after his birth, proclaimed Eminescu's Year in Romania).
Several young Romanian writers provoked a huge scandal when they wrote about their demystified idea of Eminescu and went so far as to reject the "official" interpretation of his work.
International legacy
Romanian composer Didia Saint Georges (1888-1979) used Eminescu’s text for her songs.
A monument jointly dedicated to Eminescu and Allama Iqbal was erected in Islamabad, Pakistan on 15 January 2004, commemorating Pakistani-Romanian ties, as well as the dialogue between civilizations which is possible through the cross-cultural appreciation of their poetic legacies.
Composer Rodica Sutzu used Eminescu's text for her song “Gazel, opus 15.”
In 2004, the Mihai Eminescu Statue was erected in Montreal, Quebec, Canada.
On 8 April 2008, a crater on the planet Mercury was named for him.
A boulevard passing by the Romanian embassy in Sofia, Bulgaria is named after Eminescu.
In 2021, the Dutch artist Kasper Peters performed a theater show entitled "Eminescu", dedicated to the poet.
On 15 January 2023, the first monument in Spain in honor of Mihai Eminescu was erected in the city of Rivas-Vaciamadrid: a memorial bench located in front of the Federico García Lorca library at the city's Constitution Square.
In Romania, there are at least 133 monuments (statues and busts) dedicated to Mihai Eminescu. Most of these are located in the region of Moldova (42), followed by Transylvania (32). In Muntenia, there are 21 such monuments, while in Oltenia Eminescu is commemorated through 11 busts. The remaining monuments are placed in Crișana (8), Maramureș (7), and Dobrogea (3).
Political views
Due to his conservative nationalistic views, Eminescu was easily adopted as an icon by the Romanian right.
After a decade in which Eminescu's works were criticized as "mystic" and "bourgeois", the Romanian Communists ended up adopting Eminescu as the major Romanian poet. What opened the door for this thaw was the poem Împărat și proletar (Emperor and proletarian), which Eminescu wrote under the influence of the 1870–1871 events in France and which ended in a Schopenhauerian critique of human life. An expurgated version only showed the stanzas that could present Eminescu as a poet interested in the fate of proletarians.
It has also been revealed that Eminescu demanded strong anti-Jewish legislation on the German model, saying, among other things, that "the Jew does not deserve any rights anywhere in Europe because he is not working." This was not, however, an unusual stance to take in the cultural and literary milieu of his age.
See also
Mihai Eminescu National Theater
References
Footnotes
Bibliography
George Călinescu, La vie d'Eminescu, Bucarest: Univers, 1989, 439 p.
Marin Bucur (ed.), Caietele Mihai Eminescu, București, Editura Eminescu, 1972
Murărașu, Dumitru (1983), Mihai Eminescu. Viața și Opera, Bucharest: Eminescu.
Petrescu, Ioana Em. (1972), Eminescu. Modele cosmologice și viziune poetică, Bucharest: Minerva.
Dumitrescu-Bușulenga, Zoe (1986), Eminescu și romantismul german, Bucharest: Eminescu.
Bhose, Amita (1978), Eminescu şi India, Iași: Junimea.
Ițu, Mircia (1995), Indianismul lui Eminescu, Brașov: Orientul Latin.
Vianu, Tudor (1930), Poezia lui Eminescu, Bucharest: Cartea Românească.
Negoițescu, Ion (1970), Poezia lui Eminescu, Iași: Junimea.
Simion, Eugen (1964), Proza lui Eminescu, Bucharest: Editura pentru literatură.
External links
Gabriel's website – Works both in English and original
Translated poems by Peter Mamara
Mihai Eminescu. 10 poems in English translations by Octavian Cocoş (audio)
Romanian Poetry – Mihai Eminescu (English)
Romanian Poetry – Mihai Eminescu (Romanian)
Institute for Cultural Memory: Mihai Eminescu – Poetry
Mihai Eminescu Poesii (bilingual pages English Romanian)
Mihai Eminescu poetry (with English translations of some of his poems)
MoldData Literature
Year 2000: "Mihai Eminescu Year" (includes bio, poems, critiques, etc.)
The Mihai Eminescu Trust
The Nation's Poet: A recent collection sparks debate over Romania's "national poet" by Emilia Stere
Eminescu – a political victim : An interview with Nicolae Georgescu in Jurnalul National (in Romanian)
Mihai Eminescu: Complete works (in Romanian)
Mihai Eminescu : poezii biografie
1850 births
1889 deaths
People from Botoșani
People from the Principality of Moldavia
Romanian male poets
Romanian essayists
Romanian nationalists
Romanian folklorists
Romanian male short story writers
Romanian short story writers
Junimists
Romantic poets
Romanian-language poets
19th-century Romanian poets
Male essayists
19th-century short story writers
19th-century male writers
19th-century essayists
Members of the Romanian Academy elected posthumously
Burials at Bellu Cemetery
|
The robust cardinalfish (Epigonus robustus) is a species of deepwater cardinalfish found around the world in southern temperate waters at depths of from . It can reach a length of TL.
It is a robust fish which, apart from a longer spine on the second dorsal fin, resembles the big-eyed cardinalfish.
References
Tony Ayling & Geoffrey Cox, Collins Guide to the Sea Fishes of New Zealand, (William Collins Publishers Ltd, Auckland, New Zealand 1982)
Epigonidae
Fish described in 1927
Taxa named by Keppel Harcourt Barnard
|
- `currentColor` improves code reusability
- Make text unselectable
- Removing the bullets from the `ul`
- Difference between `initial` and `inherit`
- Styling elements using `::before` and `::after`
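Each of these tips can be illustrated in a few lines of CSS (the selectors and class names below are illustrative, not taken from any particular project):

```css
/* currentColor: the border tracks the element's text color automatically */
.alert {
  color: #c0392b;
  border: 1px solid currentColor;
}

/* make text unselectable */
.no-select {
  user-select: none;
}

/* remove the bullets from a ul */
ul.plain {
  list-style: none;
  padding-left: 0;
}

/* initial resets to the property's spec default;
   inherit copies the computed value from the parent */
.reset-color { color: initial; }
.parent-color { color: inherit; }

/* decorative content via ::before and ::after */
.quote::before { content: "\201C"; }
.quote::after  { content: "\201D"; }
```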
|
Yogyakarta International Airport Rail Link () is an airport rail link service in the Special Region of Yogyakarta and Central Java, Indonesia, operated by Kereta Api Indonesia. Launched on 6 May 2019, it had two routes, – and Yogyakarta–Wojo–, before being merged into a single Yogyakarta– route in 2021, after a spur line to the airport was fully built.
The service is one of the options for reaching Yogyakarta International Airport, which replaced Adisutjipto International Airport as the main airport serving Yogyakarta and the surrounding areas. The train used to terminate at Wojo Station because the direct rail connection to the airport had not yet been completed. Wojo Station was considered the closest active station to the airport, as the closer Kedundang Station was inactive and under reconstruction. Perum DAMRI shuttle buses connected Wojo Station to the airport, a distance of .
The rail connection to Yogyakarta International Airport was planned to be completed by 2021; it was completed on 17 September of that year.
Stations
All stations served by the service have an exclusive, comfortable waiting room for airport passengers, provided with international-standard toilets.
See also
Rail transport in Indonesia
References
Airport rail links in Indonesia
Transport in the Special Region of Yogyakarta
Passenger rail transport in Indonesia
Railway lines opened in 2019
Rapid transit in Indonesia
|
This list comprises all players who have participated in at least one league match for FC London in London, Ontario, Canada, since the team's first season in the USL Premier Development League in 2009. Players who were on the roster but never played a first team game are not listed; players who appeared for the team in other competitions (Open Canada Cup, etc.) but never actually made a USL appearance are noted at the bottom of the page where appropriate.
A
Omar Apodaca
Michael Arnold
B
Thomas Beattie
Scott Bibby
Kyle Buxton
C
Vince Caminiti
Dominic Casciato
Haris Cekic
Scott Cliff
D
Ali Dadikhuda
Kevin De Serpa
Anthony Di Biase
E
Patrick Eavenson
F
Sam Fairhurst
Estevao Franco
G
Camilo Gonzalez
H
Chris Harrington
Carl Haworth
Steve Hepton
Alan Hirmiz
Luke Holmes
Jarrett Humphreys
I
Jovan Ivanovich
J
Jeff Jelinek
Mike Jonca
L
Alex Lewis
M
Ryan Maduro
Kyle Manscuk
Matthew Marcin
Michael Marcoccia
Alan McGreal
Aaron McMurray
Johnny Morris
P
Michael Pereira
Anthony Perez
R
John Raley
Todd Rutledge
S
Sebastian Stihler
Andrew Sousa
T
Matt Tymoczko
W
Ryan Walter
James Welsh
Ryan Woods
Z
Kevin Zimmermann Cuevas
Sources
2010 Forest City London stats
2009 Forest City London stats
References
London
Association football player non-biographical articles
|
```cpp
/* Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
==============================================================================*/
#ifndef TENSORFLOW_LIB_IO_RECORD_READER_H_
#define TENSORFLOW_LIB_IO_RECORD_READER_H_
#include "tensorflow/core/lib/core/status.h"
#include "tensorflow/core/lib/core/stringpiece.h"
#if !defined(IS_SLIM_BUILD)
#include "tensorflow/core/lib/io/inputstream_interface.h"
#include "tensorflow/core/lib/io/zlib_compression_options.h"
#include "tensorflow/core/lib/io/zlib_inputstream.h"
#endif // IS_SLIM_BUILD
#include "tensorflow/core/platform/macros.h"
#include "tensorflow/core/platform/types.h"
namespace tensorflow {
class RandomAccessFile;
namespace io {
class RecordReaderOptions {
public:
enum CompressionType { NONE = 0, ZLIB_COMPRESSION = 1 };
CompressionType compression_type = NONE;
// If buffer_size is non-zero, then all reads must be sequential, and no
// skipping around is permitted. (Note: this is the same behavior as reading
// compressed files.) Consider using SequentialRecordReader.
int64 buffer_size = 0;
static RecordReaderOptions CreateRecordReaderOptions(
const string& compression_type);
#if !defined(IS_SLIM_BUILD)
// Options specific to zlib compression.
ZlibCompressionOptions zlib_options;
#endif // IS_SLIM_BUILD
};
// Low-level interface to read TFRecord files.
//
// If using compression or buffering, consider using SequentialRecordReader.
//
// Note: this class is not thread safe; external synchronization required.
class RecordReader {
public:
// Create a reader that will return log records from "*file".
// "*file" must remain live while this Reader is in use.
explicit RecordReader(
RandomAccessFile* file,
const RecordReaderOptions& options = RecordReaderOptions());
virtual ~RecordReader() = default;
// Read the record at "*offset" into *record and update *offset to
// point to the offset of the next record. Returns OK on success,
// OUT_OF_RANGE for end of file, or something else for an error.
//
// Note: if buffering is used (with or without compression), access must be
// sequential.
Status ReadRecord(uint64* offset, string* record);
private:
Status ReadChecksummed(uint64 offset, size_t n, StringPiece* result,
string* storage);
RandomAccessFile* src_;
RecordReaderOptions options_;
std::unique_ptr<InputStreamInterface> input_stream_;
#if !defined(IS_SLIM_BUILD)
std::unique_ptr<ZlibInputStream> zlib_input_stream_;
#endif // IS_SLIM_BUILD
TF_DISALLOW_COPY_AND_ASSIGN(RecordReader);
};
// High-level interface to read TFRecord files.
//
// Note: this class is not thread safe; external synchronization required.
class SequentialRecordReader {
public:
// Create a reader that will return log records from "*file".
// "*file" must remain live while this Reader is in use.
explicit SequentialRecordReader(
RandomAccessFile* file,
const RecordReaderOptions& options = RecordReaderOptions());
virtual ~SequentialRecordReader() = default;
// Reads the next record in the file into *record. Returns OK on success,
// OUT_OF_RANGE for end of file, or something else for an error.
Status ReadRecord(string* record) {
return underlying_.ReadRecord(&offset_, record);
}
private:
RecordReader underlying_;
uint64 offset_ = 0;
};
} // namespace io
} // namespace tensorflow
#endif // TENSORFLOW_LIB_IO_RECORD_READER_H_
```
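The header above only declares the reader; the on-disk TFRecord framing it parses is simple enough to sketch. The following Python sketch makes one simplifying assumption: real TFRecord files carry masked CRC32C checksums over both the length field and the payload, which this sketch writes as zeros and skips on read instead of validating (CRC32C is not in the Python standard library).

```python
import struct

def write_records(path, records):
    # TFRecord framing, per record:
    #   uint64 length | uint32 masked crc32c(length) | data | uint32 masked crc32c(data)
    # Real writers compute the masked CRC32C checksums; this sketch writes zeros.
    with open(path, "wb") as f:
        for data in records:
            f.write(struct.pack("<Q", len(data)))   # length, little-endian
            f.write(struct.pack("<I", 0))           # length checksum (omitted here)
            f.write(data)
            f.write(struct.pack("<I", 0))           # data checksum (omitted here)

def read_records(path):
    # Mirrors RecordReader::ReadRecord: read the length, skip the checksums,
    # then read the payload. End of file plays the role of OUT_OF_RANGE.
    out = []
    with open(path, "rb") as f:
        while True:
            header = f.read(12)                     # length + length checksum
            if len(header) < 12:
                break
            (length,) = struct.unpack("<Q", header[:8])
            out.append(f.read(length))
            f.read(4)                               # skip the data checksum
    return out
```

Because every record is prefixed by its length, `ReadRecord` can advance `*offset` without scanning the payload, which is why unbuffered, uncompressed files support random access while buffered or compressed ones must be read sequentially.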
|
```javascript
module["exports"] = [
"Baglung",
"Banke",
"Bara",
"Bardiya",
"Bhaktapur",
"Bhojupu",
"Chitwan",
"Dailekh",
"Dang",
"Dhading",
"Dhankuta",
"Dhanusa",
"Dolakha",
"Dolpha",
"Gorkha",
"Gulmi",
"Humla",
"Ilam",
"Jajarkot",
"Jhapa",
"Jumla",
"Kabhrepalanchok",
"Kalikot",
"Kapilvastu",
"Kaski",
"Kathmandu",
"Lalitpur",
"Lamjung",
"Manang",
"Mohottari",
"Morang",
"Mugu",
"Mustang",
"Myagdi",
"Nawalparasi",
"Nuwakot",
"Palpa",
"Parbat",
"Parsa",
"Ramechhap",
"Rauswa",
"Rautahat",
"Rolpa",
"Rupandehi",
"Sankhuwasabha",
"Sarlahi",
"Sindhuli",
"Sindhupalchok",
"Sunsari",
"Surket",
"Syangja",
"Tanahu",
"Terhathum"
];
```
|
The Zastava M84 is a general-purpose machine gun manufactured by Zastava Arms. It is a gas-operated, air-cooled, belt-fed and fully automatic shoulder-fired weapon.
The M84 is a reverse-engineered copy of the Soviet Union's PKM, with a few differences such as a differently shaped stock and a slightly longer, heavier barrel that differs slightly in diameter at the gas port and forward of the trunnion.
Variants
M84
The M84 is intended for infantry use, against enemy infantry and light vehicles. It is also configured for tripod mounting (like the PKS).
M86
The M86 is a tank machine gun, and is designed to mount as a coaxial weapon on M-84 tanks and other combat vehicles. The stock, bipod, and iron sights are omitted from this version, and it includes a heavier barrel and electric trigger, much like the Russian PKMT. Another version, the M86A, is designed for external mounts and can be used dismounted.
Users
: used by the Burkinabese contingent of the United Nations Multidimensional Integrated Stabilization Mission in Mali
Former user, replaced by FN MAG and Ultimax 100
Syrian National Army
: designated Mitraljez 7.62 mm M84
Gallery
References
External links
Official website of Zastava Arms
Zastava M84
General-purpose machine guns
7.62×54mmR machine guns
M84
Machine guns of Yugoslavia
Zastava Arms
Weapons and ammunition introduced in 1984
|
Shekari may refer to:
Rans S-16 Shekari, light aircraft
Ishaya Shekari, a Nigerian military governor
Reza Shekari, Iranian footballer
Shekari, Afghanistan
|
Adnan Alisic (born 10 February 1984) is a Dutch footballer who plays as a midfielder for RVV COAL in the Dutch Tweede Klasse.
Club career
Born in Rotterdam, Alisic made his professional debut for Utrecht on 30 November 2003, replacing Donny de Groot in the 80th minute of the Eredivisie home match against Roda JC, which Utrecht won 3-1. After failing to break into the Utrecht first-team, Alisic dropped down a division to sign for Dordrecht in 2006. After a successful season in the second tier of Dutch football, Alisic signed for Eredivisie club Excelsior in 2007, before moving to Hungarian club Debrecen in July 2011. Alisic returned to Dordrecht in summer 2012 after an unsuccessful spell in Hungary.
In summer 2015 he turned semi-pro when joining GVVV.
References
External links
Voetbal International profile
1984 births
Living people
Dutch people of Bosnia and Herzegovina descent
Dutch men's footballers
Footballers from Rotterdam
FC Dordrecht players
FC Utrecht players
Excelsior Rotterdam players
Debreceni VSC players
Eredivisie players
Eerste Divisie players
Derde Divisie players
Nemzeti Bajnokság I players
Dutch expatriate men's footballers
Expatriate men's footballers in Hungary
Dutch expatriate sportspeople in Hungary
Men's association football midfielders
GVVV players
RKSV Leonidas players
Vierde Divisie players
|
WS, Ws, or ws may refer to:
Businesses and organizations
Ware Shoals Railroad (reporting mark WS)
WestJet (IATA airline code WS)
Society of Writers to His Majesty's Signet, in post-nomial abbreviation
Williams Street, the production arm for Cartoon Network’s nighttime programming block, Adult Swim.
Warm Showers, a non-profit hospitality exchange network for world cyclists.
Williams-Sonoma, Inc., American kitchenware and home furnishings retailer.
Places
WS postcode area, West Midlands, UK
Samoa (ISO 3166-1 country code WS)
Winschoten railway station, the Netherlands, by station code
Winston-Salem, North Carolina
Science and technology
.ws, the Internet country code top-level domain for Samoa
ws:// WebSocket protocol prefix in a URI
Watt second (Ws) or Joule, a unit of energy
Web service, software system designed to support machine-to-machine interaction over the Web
Werner syndrome, premature aging
Williams syndrome, a developmental disorder
WonderSwan, handheld game console
Sports
World Sailing, world governing body for the sport of sailing
The World Series, the annual championship series of Major League Baseball postseason
World Series (disambiguation)
See also
The W's, a 1990s Christian swing band
WS FTP, a File Transfer Protocol client
|
```javascript
import path from 'path';
import fs from 'fs';
import assert from 'assert';
import { transformFileSync } from '@babel/core';
function trim(str) {
  // g flag so both leading and trailing whitespace are stripped
  return str.replace(/^\s+|\s+$/g, '');
}
describe('React Server transpilation', () => {
const fixturesDir = path.join(__dirname, 'fixtures');
fs.readdirSync(fixturesDir).map((caseName) => {
it(`should ${caseName.split('-').join(' ')}`, () => {
const fixtureDir = path.join(fixturesDir, caseName);
const actualPath = path.join(fixtureDir, 'actual.js');
const actual = transformFileSync(actualPath).code;
const expected = fs.readFileSync(
path.join(fixtureDir, 'expected.js')
).toString();
assert.equal(trim(actual), trim(expected));
});
});
});
```
|
Larry Richard Christenson (born November 10, 1953), nicknamed "L.C.", is an American former professional baseball pitcher, who played his entire Major League Baseball (MLB) career for the Philadelphia Phillies (1973–1983).
Early life
Christenson attended Marysville (WA) High School where he was noted more for his basketball than baseball skills. He struck out 143 batters in 72 innings and had an earned run average (ERA) of 0.28 in his senior year.
Career
Christenson was selected third overall in the first round by the Phillies in the 1972 MLB draft, just one day after his graduation.
A short time later, he began his professional career with the Phillies’ Minor League Baseball (MiLB) Pulaski Phillies of the Appalachian League. Both his first MiLB and MLB hits were home runs and he is tied with Rick Wise for most home runs (11) by a pitcher in Phillies history.
Christenson made his MLB debut on April 13, 1973, beating the National League (NL)-rival New York Mets, 7–1, while pitching a complete game. At the time, he was the youngest player in MLB at age 19; he would remain so until 18-year-old David Clyde debuted for the Texas Rangers, that June 27.
Christenson would bounce back and forth from the majors to the minors until 1975, when the Phillies called him up to stay. He went 11–6 that season and would become a key cog on Phillies teams that would win three straight NL Eastern Division titles (1976–1978). Christenson would have his best seasons those three years: 1976, going 13–8 with a 3.68 earned run average (ERA); 1977 (his best season), when he went 19–6 with a 4.06 ERA, winning 15 of his last 16 decisions; and 1978, where he slipped to 13–14, despite posting a career-best ERA of 3.24. In the 1978 National League Championship Series, Christenson was the Phillies’ Game 1 starter.
Thereafter, injuries would begin to plague Christenson's career. He began the 1979 season on the disabled list (DL), with elbow problems, missing the first month. Later, that June, Christenson broke his collarbone during a charity bicycle ride and missed several weeks. He ended up with a 5–10 record that season. He was nearly dealt along with Tug McGraw and Bake McBride to the Texas Rangers for Sparky Lyle and Johnny Grubb at the 1979 Winter Meetings in Toronto, but the proposed transaction was never executed because a deferred money issue in Lyle's contract went unresolved.
In 1980, Christenson started off 3–0, but went on the DL, again, and had elbow surgery. He recovered to finish the season 5–1 and start Game 4 of the 1980 World Series, but was knocked out of the game in the first inning. In 1981, Christenson posted a less-than-stellar 4–7 record, but notched a win in the 1981 National League Division Series, against the Montreal Expos. His last injury-free season was 1982, when he made 32 starts and went 9–10. In 1983, Christenson went under the knife for elbow surgery for the final time, after a 2–4 start. He failed to make the postseason roster and the Phillies gave him his unconditional release on November 10 of that year, his 30th birthday.
Although only a .150 hitter (64-for-427) in his 11-year major league career, Christenson hit 11 home runs with 46 RBI and 24 bases on balls.
Christenson spent several years in his home state of Washington trying to rehabilitate from his numerous surgeries, but was unable to return to baseball.
Personal life
Christenson began a career in institutional investing in 1985, and currently is president of Christenson Investment Partners, which works with institutional asset managers and investors. Christenson resides in the Philadelphia area. He has two adult daughters; Claire and Libby. Christenson maintains his ties with the Phillies and is well known locally for his work on behalf of numerous charities.
References
Kashatus, William C. Lefty & Tim: How Steve Carlton and Tim McCarver Became Baseball's Best Battery. Lincoln: University of Nebraska Press, 2022.
External links
Larry Christenson at Baseball Almanac
Larry Christenson at Baseball Biography
Larry Christenson at Baseball Gauge
1953 births
Major League Baseball pitchers
Baseball players from Washington (state)
Philadelphia Phillies players
Eugene Emeralds players
Pulaski Phillies players
Toledo Mud Hens players
Living people
People from Marysville, Washington
Sportspeople from Snohomish County, Washington
|
```javascript
/** PURE_IMPORTS_START _Observable,_util_isPromise,_util_isArrayLike,_util_isInteropObservable,_util_isIterable,_fromArray,_fromPromise,_fromIterable,_fromObservable,_util_subscribeTo PURE_IMPORTS_END */
import { Observable } from '../Observable';
import { isPromise } from '../util/isPromise';
import { isArrayLike } from '../util/isArrayLike';
import { isInteropObservable } from '../util/isInteropObservable';
import { isIterable } from '../util/isIterable';
import { fromArray } from './fromArray';
import { fromPromise } from './fromPromise';
import { fromIterable } from './fromIterable';
import { fromObservable } from './fromObservable';
import { subscribeTo } from '../util/subscribeTo';
export function from(input, scheduler) {
if (!scheduler) {
if (input instanceof Observable) {
return input;
}
return new Observable(subscribeTo(input));
}
if (input != null) {
if (isInteropObservable(input)) {
return fromObservable(input, scheduler);
}
else if (isPromise(input)) {
return fromPromise(input, scheduler);
}
else if (isArrayLike(input)) {
return fromArray(input, scheduler);
}
else if (isIterable(input) || typeof input === 'string') {
return fromIterable(input, scheduler);
}
}
throw new TypeError((input !== null && typeof input || input) + ' is not observable');
}
//# sourceMappingURL=from.js.map
```
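The chain of type checks in `from` above determines which creation function handles a given input, and the order matters: a string, for example, is both array-like and iterable, so it is caught by the earlier array-like branch. A dependency-free sketch of that dispatch order (the predicates below mirror the rxjs helpers in spirit but are re-implemented here for illustration; they are not the library's actual code):

```javascript
// Re-implemented predicates, in the same order `from` consults them.
const isPromiseLike = (x) => !!x && typeof x.then === 'function';
const isArrayLikeInput = (x) =>
  !!x && typeof x.length === 'number' && typeof x !== 'function';
const isIterableInput = (x) =>
  x != null && typeof x[Symbol.iterator] === 'function';

function classifyInput(input) {
  // First match wins, so more specific checks must come first:
  // a Promise, an Array, and a string all satisfy more than one predicate.
  if (isPromiseLike(input)) return 'promise';
  if (isArrayLikeInput(input)) return 'array-like';
  if (isIterableInput(input) || typeof input === 'string') return 'iterable';
  throw new TypeError(String(input) + ' is not observable');
}
```

Note that the scheduler-less fast path in the real code (`new Observable(subscribeTo(input))`) bypasses this dispatch entirely; the branches above only run when a scheduler is supplied.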
|
```php
<?php
use Illuminate\Support\Collection;
it('provides chunk by macro', function () {
expect(Collection::hasMacro('chunkBy'))->toBeTrue();
});
it('can chunk the collection with a given callback', function () {
$collection = new Collection(['A', 'A', 'A', 'B', 'B', 'A', 'A', 'C', 'B', 'B', 'A']);
$chunkedBy = $collection->chunkBy(function ($item) {
return $item == 'A';
});
$expected = [
['A', 'A', 'A'],
['B', 'B'],
['A', 'A'],
['C', 'B', 'B'],
['A'],
];
expect($chunkedBy->toArray())->toEqual($expected);
});
it('can chunk the collection with a given callback with associative keys', function () {
$collection = new Collection(['a' => 'A', 'b' => 'A', 'c' => 'A', 'd' => 'B', 'e' => 'B', 'f' => 'A', 'g' => 'A', 'h' => 'C', 'i' => 'B', 'j' => 'B', 'k' => 'A']);
$chunkedBy = $collection->chunkBy(function ($item) {
return $item == 'A';
});
$expected = [
['A', 'A', 'A'],
['B', 'B'],
['A', 'A'],
['C', 'B', 'B'],
['A'],
];
expect($chunkedBy->toArray())->toEqual($expected);
});
it('can chunk the collection with a given callback and preserve the original keys', function () {
$collection = new Collection(['A', 'A', 'A', 'B', 'B', 'A', 'A', 'C', 'B', 'B', 'A']);
$chunkedBy = $collection->chunkBy(function ($item) {
return $item == 'A';
}, true);
$expected = [
[0 => 'A', 1 => 'A', 2 => 'A'],
[3 => 'B', 4 => 'B'],
[5 => 'A', 6 => 'A'],
[7 => 'C', 8 => 'B', 9 => 'B'],
[10 => 'A'],
];
expect($chunkedBy->toArray())->toEqual($expected);
});
it('can chunk the collection with a given callback with associative keys and preserve the original keys', function () {
$collection = new Collection(['a' => 'A', 'b' => 'A', 'c' => 'A', 'd' => 'B', 'e' => 'B', 'f' => 'A', 'g' => 'A', 'h' => 'C', 'i' => 'B', 'j' => 'B', 'k' => 'A']);
$chunkedBy = $collection->chunkBy(function ($item) {
return $item == 'A';
}, true);
$expected = [
['a' => 'A', 'b' => 'A', 'c' => 'A'],
['d' => 'B', 'e' => 'B'],
['f' => 'A', 'g' => 'A'],
['h' => 'C', 'i' => 'B', 'j' => 'B'],
['k' => 'A'],
];
expect($chunkedBy->toArray())->toEqual($expected);
});
```
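The grouping behavior the tests above exercise can be sketched outside of Laravel: a new chunk starts whenever the callback's result changes between consecutive items. This is a minimal JavaScript analogue of the macro, not the macro itself; key preservation is omitted.

```javascript
// Chunk-by-predicate-change: consecutive items whose callback results
// are equal end up in the same chunk.
function chunkBy(items, callback) {
  var chunks = [];
  var lastKey;
  items.forEach(function (item, index) {
    var key = callback(item);
    if (index === 0 || key !== lastKey) {
      chunks.push([]); // result changed: open a new chunk
    }
    chunks[chunks.length - 1].push(item);
    lastKey = key;
  });
  return chunks;
}
```

Run against the input from the tests, `chunkBy(['A','A','A','B','B','A','A','C','B','B','A'], x => x === 'A')` yields the same five chunks the PHP assertions expect.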
|
Ukraine participated in the Junior Eurovision Song Contest 2016. The Ukrainian entrant for the 2016 contest in Valletta, Malta was selected through a national selection, organised by the Ukrainian broadcaster National Television Company of Ukraine (NTU). The semi-final took place on 13 August 2016, while the final took place on 10 September 2016. The winner was Sofia Rol with the song "Planet Craves For Love".
Background
Prior to the 2016 Contest, Ukraine had participated in the Junior Eurovision Song Contest ten times since its debut in . Ukraine has never missed a contest since its debut appearance, having won the contest once in with the song "Nebo", performed by Anastasiya Petryk. The Ukrainian capital Kyiv has hosted the contest twice, at the Palace of Sports in , and the Palace "Ukraine" in .
Before Junior Eurovision
National final
The Ukrainian broadcaster announced on 8 July 2016 that they would be participating at the contest taking place in Valletta, Malta on 20 November 2016. The semi-final of the national selection for selecting their entrant and song took place on 13 August 2016, while the final took place on 10 September 2016.
Semi-final
The semi-final was held in the studios of the national broadcaster NTU on Saturday, 13 August at 10:30 CET. A professional jury selected 12 acts who proceeded to the national final which was due to take place on 10 September.
Final
The final took place on 10 September 2016, which saw twelve competing acts participating in a televised production where the winner was determined by a 50/50 combination of both public telephone vote and the votes of jury members made up of music professionals. Sofia Rol was selected to represent Ukraine with the song "Planet Craves For Love".
Jury members
The jury members were as follows:
Maria Burmaka - singer and composer
Svitlana Tarabarova - singer
Valentyn Koval - general director of the Ukrainian music TV channels M1, M2
Vadym Lysytsia - composer and producer
Viktor Knysh - stage director of the Junior Eurovision Song Contest 2013
Artist and song information
Sofia Rol
Sofia was born on 13 August 2002 in Kyiv, Ukraine. When she was four, Sofia started attending the Ukrainian Song and Dance Ensemble, Zerniatko. The following year she attended the Sunflower Show Group, Soniakh, at the Kids and Youth Creativity House in Ukraine.
In 2008 she started working on her first album, Sofia Rol – for the Children of Ukraine, which featured songs written by Iryna Kyrylina. The album also included karaoke versions of the songs, was released internationally in November 2009 and became popular with the Ukrainian diaspora.
Despite her young age, Sofia already has a wealth of experience performing at concerts and festivals and has won many prizes. In 2008 she won an award at the Sim-Sim Festival in Kyiv, and in December of that year and in May 2009 she won the Sim-Sim Grand Prize. In June 2009 she won two awards, Popular Vocals and Most Popular Ukrainian Song, at the Zorianyi Symeiiz (Starry Symeiiz) Festival in Yalta. She also won the audience choice award. In 2013 Sofia participated in the same festival, Zorianyi Symeiiz, winning first prize.
In October 2009 Sofia won the Grand Prize at the XIII all-Ukrainian Art Festival of Children and Youth, called Funny Autumn Holidays – 2009, held in Kyiv. In 2010 Sofia took part in the television contests Krok Do Zirok (Step to the Stars), Nashchadky (Future Generations), Kumyry Ta Kumyrchyky (Big and Little Idols) and Pisennyi Vernisazh (Song Vernissage).
In 2014 Sofia finished first in the Kyiv Art Time Contest and also participated in the Ukrainian national selection for Junior Eurovision. In 2015 she represented Ukraine at the Junior Sanremo Music Festival.
At Junior Eurovision
During the opening ceremony and the running order draw which took place on 14 November 2016, Ukraine was drawn to perform tenth on 20 November 2016, following Belarus and preceding Italy.
Final
Voting
During the press conference for the Junior Eurovision Song Contest 2016, held in Stockholm, the Reference Group announced several changes to the voting format for the 2016 contest. Previously, points had been awarded based on a combination of 50% national juries and 50% televoting, with one more set of points also given out by a 'Kids' Jury'. This year, however, points were awarded based on a 50/50 combination of each country’s Adult and , to be announced by a spokesperson. For the first time since the inauguration of the contest, the voting procedure did not include a public televote. Following these results, three expert jurors also announced their points from 1-8, 10, and 12. These professional jurors were: Christer Björkman, Mads Grimstad, and Jedward.
References
Junior Eurovision Song Contest
Ukraine
2016
|
Jan ten Compe (1713, Amsterdam – 1761, Amsterdam), was an 18th-century landscape painter from the Northern Netherlands.
Biography
According to his biographer Jan van Gool, he was a follower of Jan van der Heyden and Gerrit Berckheyde. His works were in demand by wealthy patrons such as Mayor Rendorp of Amsterdam and Mr. De Groot of the Hague, where Van Gool saw his paintings of prominent buildings and landmarks of Rotterdam, Delft, the Hague, Leiden, Haarlem, and Amsterdam.
According to the RKD he was the pupil of Dirck Dalens III and the teacher of Gerrit Toorenburgh. He was also known as Jan ten Kompe or I.T. Conyn.
References
Jan ten Compe on Artnet
1713 births
1761 deaths
18th-century Dutch painters
18th-century Dutch male artists
Dutch male painters
Painters from Amsterdam
|
```groff
.\" $OpenBSD: tvtwo.4,v 1.11 2011/12/03 23:01:21 schwarze Exp $
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
.\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
.\" WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
.\" DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT,
.\" INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
.\" (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
.\" SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
.\" STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
.\" ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
.\" POSSIBILITY OF SUCH DAMAGE.
.\"
.Dd $Mdocdate: December 3 2011 $
.Dt TVTWO 4 sparc64
.Os
.Sh NAME
.Nm tvtwo
.Nd accelerated 24-bit color frame buffer
.Sh SYNOPSIS
.Cd "tvtwo* at sbus?"
.Cd "wsdisplay* at tvtwo?"
.Sh DESCRIPTION
The Parallax XVideo and PowerVideo frame buffers, also known as
.Sq tvtwo ,
are memory based color frame buffers, with graphics acceleration
and overlay capabilities, and hardware MPEG decoding.
.Pp
The
.Nm
driver interfaces the frame buffer to the
.Xr wscons 4
console framework.
It does not provide direct device driver entry points
but makes its functions available via the internal
.Xr wsdisplay 4
interface.
.Sh DISPLAY RESOLUTION
The XVideo and PowerVideo frame buffers will adapt their resolution and
refresh rate to the monitor they are connected to.
However, when not connected to a Sun monitor, the device will default to the
.Xr cgthree 4 Ns -compatible
1152x900 resolution, with a refresh rate of 67Hz.
A different resolution can be forced using the rotary switch on the edge
of the board.
.Pp
The available modes are as follows:
.Bl -column "Rotary" "Resolution" "Refresh Rate" -offset indent
.It Sy Rotary Ta Sy Resolution Ta Sy "Refresh Rate"
.It Li 0 Ta autodetect Ta autodetect
.It Li 1 Ta 1152x900 Ta 67Hz
.It Li 2 Ta 1152x900 Ta 76Hz
.It Li 3 Ta 1152x900 Ta 60Hz
.It Li 4 Ta 1024x768 Ta 77Hz
.It Li 5 Ta 640x480 Ta 60Hz
.El
.Pp
All other rotary positions will behave as position 0, except for positions
E and F.
Position E enables the board built-in debugger on the serial port, and
should not be used by end-users.
Position F selects the video mode settings stored in the card's NVRAM.
These settings cannot be modified under
.Ox .
.Sh SEE ALSO
.Xr intro 4 ,
.Xr sbus 4 ,
.Xr wscons 4 ,
.Xr wsdisplay 4
.Sh CAVEATS
This driver does not support any acceleration features at the moment.
```
|
Pesiöjärvi is a medium-sized lake in the Oulujoki main catchment area. It is located in Suomussalmi municipality, in the region Kainuu, Finland.
See also
List of lakes in Finland
References
Lakes of Suomussalmi
|
```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *   path_to_url
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied.  See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package org.apache.weex.ui.animation;
import android.animation.PropertyValuesHolder;
import android.support.annotation.NonNull;
import android.support.annotation.Nullable;
import android.support.v4.util.ArrayMap;
import android.text.TextUtils;
import android.util.Pair;
import android.util.Property;
import android.view.View;
import org.apache.weex.WXEnvironment;
import org.apache.weex.common.Constants;
import org.apache.weex.common.WXErrorCode;
import org.apache.weex.utils.FunctionParser;
import org.apache.weex.utils.WXDataStructureUtil;
import org.apache.weex.utils.WXExceptionUtils;
import org.apache.weex.utils.WXLogUtils;
import org.apache.weex.utils.WXUtils;
import org.apache.weex.utils.WXViewUtils;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
/**
* Created by furture on 2017/10/24.
*/
public class TransformParser {
public final static String WX_TRANSLATE = "translate";
public final static String WX_TRANSLATE_X = "translateX";
public final static String WX_TRANSLATE_Y = "translateY";
public final static String WX_ROTATE = "rotate";
public final static String WX_ROTATE_X ="rotateX";
public final static String WX_ROTATE_Y ="rotateY";
public final static String WX_ROTATE_Z ="rotateZ";
public final static String WX_SCALE = "scale";
public final static String WX_SCALE_X = "scaleX";
public final static String WX_SCALE_Y = "scaleY";
public final static String BACKGROUND_COLOR = Constants.Name.BACKGROUND_COLOR;
public final static String WIDTH = Constants.Name.WIDTH;
public final static String HEIGHT = Constants.Name.HEIGHT;
public final static String TOP = "top";
public final static String BOTTOM = "bottom";
public final static String RIGHT = "right";
public final static String LEFT = "left";
public final static String CENTER = "center";
private static final String HALF = "50%";
private static final String FULL = "100%";
private static final String ZERO = "0%";
private static final String PX = "px";
private static final String DEG = "deg";
public static Map<String, List<Property<View,Float>>> wxToAndroidMap = new ArrayMap<>();
static {
wxToAndroidMap.put(WX_TRANSLATE, Arrays.asList
(View.TRANSLATION_X, View.TRANSLATION_Y));
wxToAndroidMap.put(WX_TRANSLATE_X, Collections.singletonList(View.TRANSLATION_X));
wxToAndroidMap.put(WX_TRANSLATE_Y, Collections.singletonList(View.TRANSLATION_Y));
wxToAndroidMap.put(WX_ROTATE, Collections.singletonList(View.ROTATION));
wxToAndroidMap.put(WX_ROTATE_Z, Collections.singletonList(View.ROTATION));
wxToAndroidMap.put(WX_ROTATE_X, Collections.singletonList(View.ROTATION_X));
wxToAndroidMap.put(WX_ROTATE_Y, Collections.singletonList(View.ROTATION_Y));
wxToAndroidMap.put(WX_SCALE, Arrays.asList(View.SCALE_X, View.SCALE_Y));
wxToAndroidMap.put(WX_SCALE_X, Collections.singletonList(View.SCALE_X));
wxToAndroidMap.put(WX_SCALE_Y, Collections.singletonList(View.SCALE_Y));
wxToAndroidMap.put(Constants.Name.PERSPECTIVE, Collections.singletonList(CameraDistanceProperty.getInstance()));
wxToAndroidMap = Collections.unmodifiableMap(wxToAndroidMap);
}
public static PropertyValuesHolder[] toHolders(Map<Property<View,Float>, Float> transformMap){
PropertyValuesHolder[] holders = new PropertyValuesHolder[transformMap.size()];
int i=0;
for (Map.Entry<Property<View, Float>, Float> entry : transformMap.entrySet()) {
holders[i] = PropertyValuesHolder.ofFloat(entry.getKey(), entry.getValue());
i++;
}
return holders;
}
public static Map<Property<View,Float>, Float> parseTransForm(String instanceId, @Nullable String rawTransform, final int width,
final int height, final int viewportW) {
try{
if (!TextUtils.isEmpty(rawTransform)) {
FunctionParser<Property<View,Float>, Float> parser = new FunctionParser<>
(rawTransform, new FunctionParser.Mapper<Property<View,Float>, Float>() {
@Override
public Map<Property<View,Float>, Float> map(String functionName, List<String> raw) {
if (raw != null && !raw.isEmpty()) {
if (wxToAndroidMap.containsKey(functionName)) {
return convertParam(width, height,viewportW, wxToAndroidMap.get(functionName), raw);
}
}
return new HashMap<>();
}
private Map<Property<View,Float>, Float> convertParam(int width, int height, int viewportW,
@NonNull List<Property<View,Float>> propertyList,
@NonNull List<String> rawValue) {
Map<Property<View,Float>, Float> result = WXDataStructureUtil.newHashMapWithExpectedSize(propertyList.size());
List<Float> convertedList = new ArrayList<>(propertyList.size());
if (propertyList.contains(View.ROTATION) ||
propertyList.contains(View.ROTATION_X) ||
propertyList.contains(View.ROTATION_Y)) {
convertedList.addAll(parseRotationZ(rawValue));
}else if (propertyList.contains(View.TRANSLATION_X) ||
propertyList.contains(View.TRANSLATION_Y)) {
convertedList.addAll(parseTranslation(propertyList, width, height, rawValue,viewportW));
} else if (propertyList.contains(View.SCALE_X) ||
propertyList.contains(View.SCALE_Y)) {
convertedList.addAll(parseScale(propertyList.size(), rawValue));
}
else if(propertyList.contains(CameraDistanceProperty.getInstance())){
convertedList.add(parseCameraDistance(rawValue));
}
if (propertyList.size() == convertedList.size()) {
for (int i = 0; i < propertyList.size(); i++) {
result.put(propertyList.get(i), convertedList.get(i));
}
}
return result;
}
private List<Float> parseScale(int size, @NonNull List<String> rawValue) {
List<Float> convertedList = new ArrayList<>(rawValue.size() * 2);
List<Float> rawFloat = new ArrayList<>(rawValue.size());
for (String item : rawValue) {
rawFloat.add(WXUtils.fastGetFloat(item));
}
convertedList.addAll(rawFloat);
if (size != 1 && rawValue.size() == 1) {
convertedList.addAll(rawFloat);
}
return convertedList;
}
private @NonNull
List<Float> parseRotationZ(@NonNull List<String> rawValue) {
List<Float> convertedList = new ArrayList<>(1);
int suffix;
for (String raw : rawValue) {
if ((suffix = raw.lastIndexOf(DEG)) != -1) {
convertedList.add(WXUtils.fastGetFloat(raw.substring(0, suffix)));
} else {
convertedList.add((float) Math.toDegrees(WXUtils.fastGetFloat(raw)));
}
}
return convertedList;
}
/**
* As "translate(50%, 25%)" and "translate(25px, 30px)" are both valid,
* parsing translate is more complicated than the other methods.
* Add the time you wasted here if you try to optimize this method like {@link #parseScale(int, List)}
* Time: 0.5h
*/
private List<Float> parseTranslation(List<Property<View,Float>> propertyList,
int width, int height,
@NonNull List<String> rawValue, int viewportW) {
List<Float> convertedList = new ArrayList<>(2);
String first = rawValue.get(0);
if (propertyList.size() == 1) {
parseSingleTranslation(propertyList, width, height, convertedList, first,viewportW);
} else {
parseDoubleTranslation(width, height, rawValue, convertedList, first,viewportW);
}
return convertedList;
}
private void parseSingleTranslation(List<Property<View,Float>> propertyList, int width, int height,
List<Float> convertedList, String first, int viewportW) {
if (propertyList.contains(View.TRANSLATION_X)) {
convertedList.add(parsePercentOrPx(first, width,viewportW));
} else if (propertyList.contains(View.TRANSLATION_Y)) {
convertedList.add(parsePercentOrPx(first, height,viewportW));
}
}
private void parseDoubleTranslation(int width, int height,
@NonNull List<String> rawValue,
List<Float> convertedList, String first, int viewportW) {
String second;
if (rawValue.size() == 1) {
second = first;
} else {
second = rawValue.get(1);
}
convertedList.add(parsePercentOrPx(first, width,viewportW));
convertedList.add(parsePercentOrPx(second, height,viewportW));
}
private Float parseCameraDistance(List<String> rawValue){
float ret= Float.MAX_VALUE;
if(rawValue.size() == 1){
float value = WXViewUtils.getRealPxByWidth(WXUtils.getFloat(rawValue.get(0)), viewportW);
float scale = WXEnvironment.getApplication().getResources().getDisplayMetrics().density;
if (!Float.isNaN(value) && value > 0) {
ret = value * scale;
}
}
return ret;
}
});
return parser.parse();
}
}catch (Exception e){
WXLogUtils.e("TransformParser", e);
WXExceptionUtils.commitCriticalExceptionRT(instanceId,
WXErrorCode.WX_RENDER_ERR_TRANSITION,
"parse animation transition",
WXErrorCode.WX_RENDER_ERR_TRANSITION.getErrorMsg() + "parse transition error: " + e.getMessage(),
null);
}
return new LinkedHashMap<>();
}
private static Pair<Float, Float> parsePivot(@Nullable String transformOrigin,
int width, int height, int viewportW) {
if (!TextUtils.isEmpty(transformOrigin)) {
int firstSpace = transformOrigin.indexOf(FunctionParser.SPACE);
if (firstSpace != -1) {
int i = firstSpace;
for (; i < transformOrigin.length(); i++) {
if (transformOrigin.charAt(i) != FunctionParser.SPACE) {
break;
}
}
if (i < transformOrigin.length() && transformOrigin.charAt(i) != FunctionParser.SPACE) {
List<String> list = new ArrayList<>(2);
list.add(transformOrigin.substring(0, firstSpace).trim());
list.add(transformOrigin.substring(i, transformOrigin.length()).trim());
return parsePivot(list, width, height,viewportW);
}
}
}
return null;
}
private static Pair<Float, Float> parsePivot(@NonNull List<String> list, int width, int height, int viewportW) {
return new Pair<>(
parsePivotX(list.get(0), width,viewportW), parsePivotY(list.get(1), height,viewportW));
}
private static float parsePivotX(String x, int width, int viewportW) {
String value = x;
if (WXAnimationBean.Style.LEFT.equals(x)) {
value = ZERO;
} else if (WXAnimationBean.Style.RIGHT.equals(x)) {
value = FULL;
} else if (WXAnimationBean.Style.CENTER.equals(x)) {
value = HALF;
}
return parsePercentOrPx(value, width,viewportW);
}
private static float parsePivotY(String y, int height, int viewportW) {
String value = y;
if (WXAnimationBean.Style.TOP.equals(y)) {
value = ZERO;
} else if (WXAnimationBean.Style.BOTTOM.equals(y)) {
value = FULL;
} else if (WXAnimationBean.Style.CENTER.equals(y)) {
value = HALF;
}
return parsePercentOrPx(value, height,viewportW);
}
private static float parsePercentOrPx(String raw, int unit, int viewportW) {
final int precision = 1;
int suffix;
if ((suffix = raw.lastIndexOf(WXUtils.PERCENT)) != -1) {
return parsePercent(raw.substring(0, suffix), unit, precision);
} else if ((suffix = raw.lastIndexOf(PX)) != -1) {
return WXViewUtils.getRealPxByWidth(WXUtils.fastGetFloat(raw.substring(0, suffix), precision),viewportW);
}
return WXViewUtils.getRealPxByWidth(WXUtils.fastGetFloat(raw, precision),viewportW);
}
private static float parsePercent(String percent, int unit, int precision) {
return WXUtils.fastGetFloat(percent, precision) / 100 * unit;
}
}
```
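The unit handling at the heart of `parsePercentOrPx` above (a percentage of a reference length, an explicit `px` value, or a bare number treated as pixels) can be sketched without the Weex view utilities. This is a simplified illustration, not the Weex implementation: the device-pixel scaling done by `WXViewUtils.getRealPxByWidth` and the precision handling of `WXUtils.fastGetFloat` are intentionally omitted.

```javascript
// Sketch of percent-or-px resolution: 'unit' is the reference length
// (element width or height) that percentages are taken against.
function parsePercentOrPx(raw, unit) {
  if (raw.lastIndexOf('%') !== -1) {
    return (parseFloat(raw) / 100) * unit;
  }
  // a trailing "px" (or no suffix at all) is treated as a pixel value;
  // parseFloat ignores the trailing unit characters
  return parseFloat(raw);
}
```

For example, with an element width of 200, `'50%'` resolves to 100 while `'30px'` and `'30'` both resolve to 30, mirroring how the Java code computes pivot points and translations.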
|
```javascript
'use strict';
var zookeeper = require('node-zookeeper-client');
var client = zookeeper.createClient('localhost:2181');
// The client starts out disconnected; listen for the 'connected' event
// instead of inspecting internal state directly.
client.once('connected', function () {
    console.log('ok');
    client.close();
});
client.connect();
```
|
This is a list of seasons played by Sporting de Gijón in Spanish and European football, from 1916 to the most recent completed season. It details the club's achievements in major competitions, and the top scorers in league games for each season.
The club has been runner-up in La Liga once and in the Spanish Cup twice, and has played in the UEFA Cup six times.
Key
Key to league record:
Pos = Final position
Pld = Matches played
W = Matches won
D = Matches drawn
L = Matches lost
GF = Goals for
GA = Goals against
Pts = Points
Key to playoffs record:
PP = Promotion playoffs
RP = Relegation playoffs
→ = Remained in the same category
↑ = Promoted
↓ = Relegated
Key to rounds:
W = Winner
RU = Runner-up
SF = Semi-finals
QF = Quarter-finals
R16 = Round of 16
R32 = Round of 32
R64 = Round of 64
6R = Sixth round
5R = Fifth round
4R = Fourth round
3R = Third round
2R = Second round
1R = First round
GS = Group stage
Seasons
References
External links
Profile at BDFútbol
Profile at Futbolme
Seasons
Sporting Gijon
Seasons
|
```html
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "path_to_url">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=US-ASCII">
<title>Struct template as_feature<tag::weighted_mean_of_variates< VariateType, VariateTag >(lazy)></title>
<link rel="stylesheet" href="../../../../doc/src/boostbook.css" type="text/css">
<meta name="generator" content="DocBook XSL Stylesheets V1.79.1">
<link rel="home" href="../../index.html" title="The Boost C++ Libraries BoostBook Documentation Subset">
<link rel="up" href="../../accumulators/reference.html#header.boost.accumulators.statistics.weighted_mean_hpp" title="Header <boost/accumulators/statistics/weighted_mean.hpp>">
<link rel="prev" href="as_feat_1_3_2_6_3_44_1_1_2.html" title="Struct as_feature<tag::weighted_mean(immediate)>">
<link rel="next" href="as_feat_1_3_2_6_3_44_1_1_4.html" title="Struct template as_feature<tag::weighted_mean_of_variates< VariateType, VariateTag >(immediate)>">
</head>
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
<table cellpadding="2" width="100%"><tr>
<td valign="top"><img alt="Boost C++ Libraries" width="277" height="86" src="../../../../boost.png"></td>
<td align="center"><a href="../../../../index.html">Home</a></td>
<td align="center"><a href="../../../../libs/libraries.htm">Libraries</a></td>
<td align="center"><a href="path_to_url">People</a></td>
<td align="center"><a href="path_to_url">FAQ</a></td>
<td align="center"><a href="../../../../more/index.htm">More</a></td>
</tr></table>
<hr>
<div class="spirit-nav">
<a accesskey="p" href="as_feat_1_3_2_6_3_44_1_1_2.html"><img src="../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../../accumulators/reference.html#header.boost.accumulators.statistics.weighted_mean_hpp"><img src="../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../index.html"><img src="../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="as_feat_1_3_2_6_3_44_1_1_4.html"><img src="../../../../doc/src/images/next.png" alt="Next"></a>
</div>
<div class="refentry">
<a name="boost.accumulators.as_feat_1_3_2_6_3_44_1_1_3"></a><div class="titlepage"></div>
<div class="refnamediv">
<h2><span class="refentrytitle">Struct template as_feature<tag::weighted_mean_of_variates< VariateType, VariateTag >(lazy)></span></h2>
<p>boost::accumulators::as_feature<tag::weighted_mean_of_variates< VariateType, VariateTag >(lazy)></p>
</div>
<h2 xmlns:rev="path_to_url~gregod/boost/tools/doc/revision" class="refsynopsisdiv-title">Synopsis</h2>
<div xmlns:rev="path_to_url~gregod/boost/tools/doc/revision" class="refsynopsisdiv"><pre class="synopsis"><span class="comment">// In header: <<a class="link" href="../../accumulators/reference.html#header.boost.accumulators.statistics.weighted_mean_hpp" title="Header <boost/accumulators/statistics/weighted_mean.hpp>">boost/accumulators/statistics/weighted_mean.hpp</a>>
</span><span class="keyword">template</span><span class="special"><</span><span class="keyword">typename</span> VariateType<span class="special">,</span> <span class="keyword">typename</span> VariateTag<span class="special">></span>
<span class="keyword">struct</span> <a class="link" href="as_feat_1_3_2_6_3_44_1_1_3.html" title="Struct template as_feature<tag::weighted_mean_of_variates< VariateType, VariateTag >(lazy)>">as_feature</a><span class="special"><</span><span class="identifier">tag</span><span class="special">::</span><span class="identifier">weighted_mean_of_variates</span><span class="special"><</span> <span class="identifier">VariateType</span><span class="special">,</span> <span class="identifier">VariateTag</span> <span class="special">></span><span class="special">(</span><span class="identifier">lazy</span><span class="special">)</span><span class="special">></span> <span class="special">{</span>
<span class="comment">// types</span>
<span class="keyword">typedef</span> <a class="link" href="tag/weighted_mean_of_variates.html" title="Struct template weighted_mean_of_variates">tag::weighted_mean_of_variates</a><span class="special"><</span> <span class="identifier">VariateType</span><span class="special">,</span> <span class="identifier">VariateTag</span> <span class="special">></span> <a name="boost.accumulators.as_feat_1_3_2_6_3_44_1_1_3.type"></a><span class="identifier">type</span><span class="special">;</span>
<span class="special">}</span><span class="special">;</span></pre></div>
</div>
<table xmlns:rev="path_to_url~gregod/boost/tools/doc/revision" width="100%"><tr>
<td align="left"></td>
<td align="right"><div class="copyright-footer"><p>Distributed under the Boost Software License, Version 1.0. (See accompanying
file LICENSE_1_0.txt or copy at <a href="path_to_url" target="_top">path_to_url</a>)
</p>
</div></td>
</tr></table>
<hr>
<div class="spirit-nav">
<a accesskey="p" href="as_feat_1_3_2_6_3_44_1_1_2.html"><img src="../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../../accumulators/reference.html#header.boost.accumulators.statistics.weighted_mean_hpp"><img src="../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../index.html"><img src="../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="as_feat_1_3_2_6_3_44_1_1_4.html"><img src="../../../../doc/src/images/next.png" alt="Next"></a>
</div>
</body>
</html>
```
|
"Ordinary People" is a song by American recording artist John Legend. It was written and produced by Legend and will.i.am for his debut album Get Lifted (2004). It was released as the album's second single and later certified gold by the RIAA. Critics were positive towards the song, praising it for its raw emotion and simplicity. At the 48th Annual Grammy Awards "Ordinary People" received three nominations for Song of the Year, Best R&B Song and Best Male R&B Vocal Performance, ultimately winning the latter. The song appears on Now 19.
Music video
The music video for "Ordinary People", directed by Chris Milk and Legend's then-label boss Kanye West, features Legend playing a grand piano in an all-white space, while couples and families fight and reconcile around and in front of the piano. For the final minute of the video, Legend is joined by a string section and a harmonica (played offscreen). Legend walks to and from the piano with a glass of water, as a short bookending to the video proper.
Composition
The main chord progression is derived from the introduction to Stevie Wonder's "My Cherie Amour", transposed to the key of F Major. This is punctuated in the music video version, when the string section and harmonica are brought in at the last chorus of the song.
The song's lyrical themes include contrast, contradiction, guilt, doubt and fear.
Legend sings about how people make errors of judgment in relationships ("I know I misbehaved/And you've made your mistakes/And we both still got room left to grow."), and that fighting and making up in the end is a regular obstacle: "And though love sometimes hurts/I still put you first/And we'll make this thing work/But I think we should take it slow." The lyrics include parallel structure to address the common ups-and-downs of maintaining a relationship: "Maybe we'll live and learn/Maybe we'll crash and burn/Maybe you'll stay/Maybe you'll leave/Maybe you'll return/Maybe another fight/Maybe we won't survive/Maybe we'll grow, we never know." The song's title itself is taken from its chorus, "We're just ordinary people/We don't know which way to go/'Cause we're ordinary people/Maybe we should take it slow."
Legend explained the song's lyrical content in the book Chicken Soup For the Soul: The Story Behind The Song: "The idea for the song is that relationships are difficult and the outcome uncertain. If a relationship is going to work, it will require compromise and, even then, it is not always going to end the way you want it to. No specific experience in my life led me to the lyrics for this song, although my parents were married twice to each other and divorced twice from each other. Their relationship is, of course, one of my reference points, but I didn't write this to be autobiographical or biographical. It is just a statement about relationships and my view on them."
Reception
Critics were overwhelmingly positive towards "Ordinary People", many of whom complimented the song's juxtaposition of simple stark piano and John Legend's vocal range. Entertainment Weekly noted "Ordinary People" as being both "the simplest" and "perhaps the most perfectly realized song" of the Get Lifted album, describing it as "an exquisite ballad" that is "both immediately familiar and intensely exotic." A review from The Guardian called the song "a real gem", and lauded further: "[I]t's not only sonically arresting but lyrically reflective. Refusing to tie up loose ends, Legend is ambivalent about the relationship described in the song, admitting that there's 'no fairy-tale conclusion'. Good for him." PopMatters was favorable towards the single, stating it "is representative of true talent." Jonathan Forgang, reviewing for Stylus magazine, stated: "'Ordinary People,' the first of the piano and voice ballads, is a bit more derivative than the earlier tracks but expertly performed. Legend's voice has a naked quality to it, warm and full without any of the drawbacks of virtuosity." The Times thought the song was full of "remorseful reflection" and said that "the album as a whole is a stunning advertisement for the less-is-more, from-the-soul approach, and Legend’s extraordinary voice (alternately angelic keen and cracked rasp) and piano-playing are equalled in quality by the depth of his songs."
On 14 April 2012, the song was performed on BBC's The Voice UK by semi-finalist Jaz Ellington as a second song (requested by Jessie J), resulting in some members of the UK public buying the track on iTunes. The song re-entered the Official UK Top 40 at number 27 on 15 April, and the following week climbed to number 4.
Cover versions
George Benson and Al Jarreau covered the song for their 2006 album Givin' It Up.
Asher Book performs the song in the 2009 film Fame. Book's version is available on the soundtrack to the film.
Aloe Blacc covered the song as "Gente Ordinaria", singing it in Spanish.
Becky Hill and Jaz Ellington covered the song on the first series of The Voice UK, which led to the song re-entering the chart at number 4.
Mathai covered the song in the first live show of season two of The Voice US.
Candice Glover covered the song during the semi-final (top 20) live show of the 12th season of American Idol.
Nathan Sykes covered this song during his Sykes Secret Shows showcase on July 5, 2015.
Personnel
Produced by John Legend
Engineered by Anthony Kilhoffer, Andy Manganello and Michael Peters
Assistant engineers: Mike Eleopoulos, Pablo Arraya and Val Brathwaite
Mixed by Manny Marroquin at Larrabee Studios, LA
Assistant mix engineer: Jared Robbins
Vocals and piano by John Legend
Recorded at Record Plant, LA and Sony Music Studios
Track listing
CD single
"Ordinary People" (Album Version) - 4:40
"Ordinary People" (Johnny Douglas Radio Edit) - 3:34
"Ordinary People" (Live at The Scala) - 7:12
"Ordinary People" (Heavy Metal Remix) - 4:50
"Ordinary People" (Video) - 4:55
12" vinyl
A
"Ordinary People" (Johnny Douglas Remix) - 5:01
"Ordinary People" (Johnny Douglas Remix Instrumental) - 4:13
B
"Ordinary People" (Heavy Metal Remix) - 4:50
"Ordinary People" (Heavy Remix Instrumental) - 4:50
"Ordinary People" (Album Version) - 4:40
Charts and certifications
Weekly charts
Year-end charts
Certifications
Release history
References
External links
2005 singles
Black-and-white music videos
Columbia Records singles
GOOD Music singles
John Legend songs
Pop ballads
Music videos directed by Kanye West
Contemporary R&B ballads
Song recordings produced by will.i.am
Songs written by John Legend
Songs written by will.i.am
Sony Music singles
|
```javascript
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// Largely ported from
// path_to_url
// using path_to_url with further edits
function testTypedArrays(callback) {
[
Uint8Array,
Int8Array,
Uint16Array,
Int16Array,
Uint32Array,
Int32Array,
Uint8ClampedArray,
Float32Array,
Float64Array
]
.forEach(callback);
}
testTypedArrays.floatOnly = function (callback) {
[Float32Array, Float64Array].forEach(callback);
};
// %TypedArray%.prototype.includes throws a TypeError when used on non-typed
// arrays
(function() {
var taIncludes = Uint8Array.prototype.includes;
assertThrows(function() {
taIncludes.call({
length: 2,
0: 1,
1: 2
}, 2);
}, TypeError);
assertThrows(function() {
taIncludes.call([1, 2, 3], 2);
}, TypeError);
assertThrows(function() {
taIncludes.call(null, 2);
}, TypeError);
assertThrows(function() {
taIncludes.call(undefined, 2);
}, TypeError);
})();
// %TypedArray%.prototype.includes should terminate if ToNumber ends up being
// called on a symbol fromIndex
(function() {
testTypedArrays(function(TypedArrayConstructor) {
var ta = new TypedArrayConstructor([1, 2, 3]);
assertThrows(function() {
ta.includes(2, Symbol());
}, TypeError);
});
})();
// %TypedArray%.prototype.includes should terminate if an exception occurs
// converting the fromIndex to a number
(function() {
function Test262Error() {}
var fromIndex = {
valueOf: function() {
throw new Test262Error();
}
};
testTypedArrays(function(TypedArrayConstructor) {
var ta = new TypedArrayConstructor([1, 2, 3]);
assertThrows(function() {
ta.includes(2, fromIndex);
}, Test262Error);
});
})();
// %TypedArray%.prototype.includes should search the whole array, as the
// optional second argument fromIndex defaults to 0
(function() {
testTypedArrays(function(TypedArrayConstructor) {
var ta = new TypedArrayConstructor([1, 2, 3]);
assertTrue(ta.includes(1));
assertTrue(ta.includes(2));
assertTrue(ta.includes(3));
});
})();
// %TypedArray%.prototype.includes returns false if fromIndex is greater or
// equal to the length of the array
(function() {
testTypedArrays(function(TypedArrayConstructor) {
var ta = new TypedArrayConstructor([1, 2]);
assertFalse(ta.includes(2, 3));
assertFalse(ta.includes(2, 2));
});
})();
// %TypedArray%.prototype.includes searches the whole array if the computed
// index from the given negative fromIndex argument is less than 0
(function() {
testTypedArrays(function(TypedArrayConstructor) {
var ta = new TypedArrayConstructor([1, 3]);
assertTrue(ta.includes(1, -4));
    assertTrue(ta.includes(3, -4));
});
})();
// %TypedArray%.prototype.includes should use a negative value as the offset
// from the end of the array to compute fromIndex
(function() {
testTypedArrays(function(TypedArrayConstructor) {
var ta = new TypedArrayConstructor([12, 13]);
assertTrue(ta.includes(13, -1));
assertFalse(ta.includes(12, -1));
assertTrue(ta.includes(12, -2));
});
})();
// %TypedArray%.prototype.includes converts its fromIndex parameter to an
// integer
(function() {
testTypedArrays(function(TypedArrayConstructor) {
var ta = new TypedArrayConstructor([1, 2, 3]);
assertFalse(ta.includes(1, 3.3));
assertTrue(ta.includes(1, -Infinity));
assertTrue(ta.includes(3, 2.9));
assertTrue(ta.includes(3, NaN));
var numberLike = {
valueOf: function() {
return 2;
}
};
assertFalse(ta.includes(1, numberLike));
assertFalse(ta.includes(1, "2"));
assertTrue(ta.includes(3, numberLike));
assertTrue(ta.includes(3, "2"));
});
})();
// %TypedArray%.prototype.includes should have length 1
(function() {
assertEquals(1, Uint8Array.prototype.includes.length);
})();
// %TypedArray%.prototype.includes should have name property with value
// 'includes'
(function() {
assertEquals("includes", Uint8Array.prototype.includes.name);
})();
// %TypedArray%.prototype.includes should always return false on zero-length
// typed arrays
(function() {
testTypedArrays(function(TypedArrayConstructor) {
var ta = new TypedArrayConstructor([]);
assertFalse(ta.includes(2));
assertFalse(ta.includes());
assertFalse(ta.includes(undefined));
assertFalse(ta.includes(NaN));
});
})();
// %TypedArray%.prototype.includes should use the SameValueZero algorithm to
// compare
(function() {
testTypedArrays.floatOnly(function(FloatArrayConstructor) {
assertTrue(new FloatArrayConstructor([1, 2, NaN]).includes(NaN));
assertTrue(new FloatArrayConstructor([1, 2, -0]).includes(+0));
assertTrue(new FloatArrayConstructor([1, 2, -0]).includes(-0));
assertTrue(new FloatArrayConstructor([1, 2, +0]).includes(-0));
assertTrue(new FloatArrayConstructor([1, 2, +0]).includes(+0));
assertFalse(new FloatArrayConstructor([1, 2, -Infinity]).includes(+Infinity));
assertTrue(new FloatArrayConstructor([1, 2, -Infinity]).includes(-Infinity));
assertFalse(new FloatArrayConstructor([1, 2, +Infinity]).includes(-Infinity));
assertTrue(new FloatArrayConstructor([1, 2, +Infinity]).includes(+Infinity));
});
})();
```
|
```typescript
export const defaultQuery = `\
# Welcome to Ruru, our distribution of GraphiQL and related tooling to
# inspect your GraphQL API.
#
# GraphiQL is an in-browser tool for writing, validating, and
# testing GraphQL queries.
#
# Type queries into this side of the screen, and you will see intelligent
# typeaheads aware of the current GraphQL type schema and live syntax and
# validation errors highlighted within the text.
#
# GraphQL queries typically start with a "{" character. Lines that start
# with a # are ignored.
#
# An example GraphQL query might look like:
#
# {
# field(arg: "value") {
# subField
# }
# }
#
# Keyboard shortcuts:
#
# Prettify Query: Shift-Ctrl-P (or press the prettify button above)
#
# Merge Query: Shift-Ctrl-M (or press the merge button above)
#
# Run Query: Ctrl-Enter (or press the play button above)
#
# Auto Complete: Ctrl-Space (or just start typing)
#
`;
```
|
Novica Čanović (Serbian Cyrillic: Новица Чановић; 29 November 1961 – 3 July 1993) was a Serbian high jumper who represented SFR Yugoslavia during his active career.
Biography
Čanović was born in Kumanovo in today's North Macedonia. He finished fifteenth at the 1983 European Indoor Championships and won the gold medal at the 1987 Mediterranean Games. He also competed at the 1984 Olympic Games without reaching the final.
Čanović became Yugoslavian high jump champion in 1982, 1983, 1984, 1986 and 1987, rivalling with Danial Temim, Hrvoje Fižuleto and Sašo Apostolovski. He also became indoor champion in 1987.
He still holds the high jump record in Croatia (senior 228 cm, junior 218 cm and indoor senior 228 cm), as he competed for the Croatian club AC Slavonija Osijek. His personal best jump was 2.28 metres, achieved in July 1985 in Split.
He died in July 1993 in Knin as a soldier of the Army of the Republic of Serbian Krajina.
References
1961 births
1993 deaths
Sportspeople from Kumanovo
Serbs of North Macedonia
Serbs of Croatia
Yugoslav male high jumpers
Serbian male high jumpers
Athletes (track and field) at the 1984 Summer Olympics
Olympic athletes for Yugoslavia
Mediterranean Games gold medalists for Yugoslavia
Mediterranean Games medalists in athletics
Athletes (track and field) at the 1987 Mediterranean Games
Serbian military personnel killed in action
Military personnel killed in the Croatian War of Independence
Olympians killed in warfare
|
```typescript
import * as React from 'react';
import { useThemeProviderClasses } from './useThemeProviderClasses';
import { useThemeProvider } from './useThemeProvider';
import { useMergedRefs } from '@fluentui/react-hooks';
import type { ThemeProviderProps } from './ThemeProvider.types';
/**
* ThemeProvider, used for providing css variables and registering stylesheets.
*/
export const ThemeProvider: React.FunctionComponent<ThemeProviderProps> = React.forwardRef<
HTMLDivElement,
ThemeProviderProps
>((props: ThemeProviderProps, ref: React.Ref<HTMLDivElement>) => {
const rootRef = useMergedRefs(ref, React.useRef<HTMLElement>(null));
const { render, state } = useThemeProvider(props, {
ref: rootRef,
as: 'div',
applyTo: 'element',
});
// Render styles.
useThemeProviderClasses(state);
// Return the rendered content.
return render(state);
});
ThemeProvider.displayName = 'ThemeProvider';
```
|
Eagle When She Flies is the thirty-first solo studio album by American singer-songwriter Dolly Parton. It was released on March 7, 1991, by Columbia Records. The album was produced by Steve Buckingham and Gary Smith, with Parton serving as executive producer. It continues Parton's return to mainstream country sounds following 1989's White Limozeen. The album features collaborations with Lorrie Morgan and Ricky Van Shelton, with additional supporting vocals provided by Vince Gill and Emmylou Harris. The album was a commercial success, becoming Parton's first solo album to peak at number one on the Billboard Top Country Albums chart since 1980's 9 to 5 and Odd Jobs. It was certified Platinum by the RIAA in 1992. The album spawned four singles, the most successful being "Rockin' Years" with Ricky Van Shelton, which topped the Billboard Hot Country Singles & Tracks chart. In support of the album, Parton embarked on the Eagle When She Flies Tour, her only concert tour of the 1990s.
Release and promotion
The album was released March 7, 1991, on CD, cassette, and LP.
Dolly Parton's duet with Shelton, "Rockin' Years", topped the country charts, and the follow-up single co-written by Carl Perkins, "Silver and Gold", was a #15 country single. Rounding out the hit singles was the title song "Eagle When She Flies", which only reached a #33 peak, despite spending 20 weeks on the Billboard Country Singles chart. Her duet with Lorrie Morgan, "Best Woman Wins", appeared simultaneously on Lorrie Morgan's 1991 album Something in Red. She co-wrote the song "Family" with Carl Perkins and "Wildest Dreams" with Mac Davis.
Critical reception
Frank King from Calgary Herald wrote, "Hot damn, she's back. Just when the world is ready to write off Dolly as a cupie doll incapable of anything but candy fluff pop albums and silly duets with Kenny Rogers, she wipes us out with an inspiring, heart-on-your-sleeve country classic. The guest list - Vince Gill, Patty Loveless, Emmylou Harris, Alison Krauss - is impressive, but it's Dolly that shines from start to finish. Most of the 11 tracks are self-penned and drip with honest-to-goodness emotion. Makes a fella proud to slide on his cowboy boots and declare he finally likes Dolly for more than her pin-up appearance."
Commercial performance
The album also topped the U.S. country albums chart, Parton's first solo album to reach the top in a decade (and her last to do so until 2016), and reached #24 on the pop albums chart. The album spent 73 weeks on the Billboard Top Country Albums chart. It was her first solo studio album to reach number one in the United States since 1980's 9 to 5 and Odd Jobs. The album's single week at number one interrupted what would otherwise have been an unbroken run of over 14 months in the top spot for Garth Brooks.
The album sold 74,000 copies in its first week. It ended up being certified Platinum by the Recording Industry Association of America. The album has sold 1.14 million copies as of July 2016.
Reissues
In 2009, Sony Music reissued Eagle When She Flies in a triple-feature CD set with White Limozeen and Slow Dancing with the Moon.
Track listing
Personnel
Adapted from the album liner notes.
Dolly Parton – lead vocals
The Mighty Fine Band:
Mike Davis – organ
Richard Dennison – background vocals
Jimmy Mattingly – fiddle, mandolin
Jennifer O'Brien – background vocals
Gary Smith – piano, keyboards
Howard Smith – background vocals
Steve Turner – drums
Paul Uhrig – bass
Bruce Watkins – acoustic guitar
Kent Wells – electric guitar
Additional musicians
Sam Bacco – percussion
Roy Huskey Jr. – upright bass
Mark Casstevens – acoustic guitar, mandolin
Paddy Corcoran – acoustic guitar on "If You Need Me"
Glen Duncan – fiddle
Paul Franklin – steel, dobro
Steve Gibson – guitar, mandolin
Carl Jackson – acoustic guitar on "If You Need Me"
Joey Miskulin – accordion
Mark O'Connor – fiddle on "What a Heartache"
Allisa Jones Wall – hammer dulcimer
Additional vocalists
Lea Jane Berinati – background vocals
Paddy Corcoran – harmony vocals on "If You Need Me"
Joy Gardner – background vocals
Vince Gill – harmony vocals on "Silver and Gold"
Vicki Hampton – background vocals
Emmylou Harris – harmony vocals on "Country Road"
Carl Jackson – harmony vocals on "If You Need Me"
The Kid Connection – additional background vocals on "Family"
Alison Krauss – harmony vocals on "If You Need Me"
Patty Loveless – harmony vocals on "Country Road"
Lewis Nunley – background vocals
John Wesley Ryles – background vocals
Lisa Silver – background vocals
Harry Stinson - harmony vocals on "Silver and Gold"
Dennis Wilson – background vocals
Curtis Young – background vocals
Production
Joe Bogan – additional engineering
Steve Buckingham – producer
Ray Bunch – string arrangements
Robert Charles – assistant engineer
Richard Dennison – vocal supervision
Javelina East – string recording
Chrissy Follmar – assistant engineer
Carlos Grier – digital editing
Larry Jeffries – assistant engineer
Brad Jones – assistant engineer
John Kunz – engineering assistant, mixing assistant
Sean Londin – assistant engineer
Gary Paczosa – engineering, mixing
John David Parker – assistant engineer
Dolly Parton – executive producer
Denny Purcell – mastering
Gary Smith – producer
Other personnel
David Blair – hair
Tony Chase – fashion, styling
Rachel Dennison – makeup
Sandy Gallin for Gallin-Morey Associates – management
Bill Johnson – art direction
Randee St. Nicholas – photographer
Jodi Lynn Miller – design assistant
Chart performance
Album
Album (Year-End)
Certifications
References
Dolly Parton albums
1991 albums
Albums produced by Steve Buckingham (record producer)
Columbia Records albums
Albums produced by Gary Smith (record producer)
|
```asciidoc
// Module included in the following assemblies:
//
// assembly-config.adoc
[id='con-config-mirrormaker2-connectors-{context}']
= Configuring MirrorMaker 2 connectors
[role="_abstract"]
Use MirrorMaker 2 connector configuration for the internal connectors that orchestrate the synchronization of data between Kafka clusters.
MirrorMaker 2 consists of the following connectors:
`MirrorSourceConnector`:: The source connector replicates topics from a source cluster to a target cluster. It also replicates ACLs and is necessary for the `MirrorCheckpointConnector` to run.
`MirrorCheckpointConnector`:: The checkpoint connector periodically tracks offsets. If enabled, it also synchronizes consumer group offsets between the source and target cluster.
`MirrorHeartbeatConnector`:: The heartbeat connector periodically checks connectivity between the source and target cluster.
The following table describes connector properties and the connectors you configure to use them.
.MirrorMaker 2 connector configuration properties
[cols="4a,2,2,2",options="header"]
|===
|Property
|sourceConnector
|checkpointConnector
|heartbeatConnector
|admin.timeout.ms:: Timeout for admin tasks, such as detecting new topics. Default is `60000` (1 minute).
|
|
|
|replication.policy.class:: Policy to define the remote topic naming convention. Default is `org.apache.kafka.connect.mirror.DefaultReplicationPolicy`.
|
|
|
|replication.policy.separator:: The separator used for topic naming in the target cluster. By default, the separator is set to a dot (.).
Separator configuration is only applicable to the `DefaultReplicationPolicy` replication policy class, which defines remote topic names.
The `IdentityReplicationPolicy` class does not use the property as topics retain their original names.
|
|
|
|consumer.poll.timeout.ms:: Timeout when polling the source cluster. Default is `1000` (1 second).
|
|
|
|offset-syncs.topic.location:: The location of the `offset-syncs` topic, which can be the `source` (default) or `target` cluster.
|
|
|
|topic.filter.class:: Topic filter to select the topics to replicate. Default is `org.apache.kafka.connect.mirror.DefaultTopicFilter`.
|
|
|
|config.property.filter.class:: Config property filter to select the topic configuration properties to replicate. Default is `org.apache.kafka.connect.mirror.DefaultConfigPropertyFilter`.
|
|
|
|config.properties.exclude:: Topic configuration properties that should not be replicated. Supports comma-separated property names and regular expressions.
|
|
|
|offset.lag.max:: Maximum allowable (out-of-sync) offset lag before a remote partition is synchronized. Default is `100`.
|
|
|
|offset-syncs.topic.replication.factor:: Replication factor for the internal `offset-syncs` topic. Default is `3`.
|
|
|
|refresh.topics.enabled:: Enables check for new topics and partitions. Default is `true`.
|
|
|
|refresh.topics.interval.seconds:: Frequency of topic refresh. Default is `600` (10 minutes). By default, a check for new topics in the source cluster is made every 10 minutes.
You can change the frequency by adding `refresh.topics.interval.seconds` to the source connector configuration.
|
|
|
|replication.factor:: The replication factor for new topics. Default is `2`.
|
|
|
|sync.topic.acls.enabled:: Enables synchronization of ACLs from the source cluster. Default is `true`. For more information, see xref:con-mirrormaker-acls-{context}[].
|
|
|
|sync.topic.acls.interval.seconds:: Frequency of ACL synchronization. Default is `600` (10 minutes).
|
|
|
|sync.topic.configs.enabled:: Enables synchronization of topic configuration from the source cluster. Default is `true`.
|
|
|
|sync.topic.configs.interval.seconds:: Frequency of topic configuration synchronization. Default `600` (10 minutes).
|
|
|
|checkpoints.topic.replication.factor:: Replication factor for the internal `checkpoints` topic. Default is `3`.
|
|
|
|emit.checkpoints.enabled:: Enables synchronization of consumer offsets to the target cluster. Default is `true`.
|
|
|
|emit.checkpoints.interval.seconds:: Frequency of consumer offset synchronization. Default is `60` (1 minute).
|
|
|
|group.filter.class:: Group filter to select the consumer groups to replicate. Default is `org.apache.kafka.connect.mirror.DefaultGroupFilter`.
|
|
|
|refresh.groups.enabled:: Enables check for new consumer groups. Default is `true`.
|
|
|
|refresh.groups.interval.seconds:: Frequency of consumer group refresh. Default is `600` (10 minutes).
|
|
|
|sync.group.offsets.enabled:: Enables synchronization of consumer group offsets to the target cluster `__consumer_offsets` topic. Default is `false`.
|
|
|
|sync.group.offsets.interval.seconds:: Frequency of consumer group offset synchronization. Default is `60` (1 minute).
|
|
|
|emit.heartbeats.enabled:: Enables connectivity checks on the target cluster. Default is `true`.
|
|
|
|emit.heartbeats.interval.seconds:: Frequency of connectivity checks. Default is `1` (1 second).
|
|
|
|heartbeats.topic.replication.factor:: Replication factor for the internal `heartbeats` topic. Default is `3`.
|
|
|
|===
== Changing the location of the consumer group offsets topic
MirrorMaker 2 tracks offsets for consumer groups using internal topics.
`offset-syncs` topic:: The `offset-syncs` topic maps the source and target offsets for replicated topic partitions from record metadata.
`checkpoints` topic:: The `checkpoints` topic maps the last committed offset in the source and target cluster for replicated topic partitions in each consumer group.
As they are used internally by MirrorMaker 2, you do not interact directly with these topics.
`MirrorCheckpointConnector` emits _checkpoints_ for offset tracking.
Offsets for the `checkpoints` topic are tracked at predetermined intervals through configuration.
Both topics enable replication to be fully restored from the correct offset position on failover.
The location of the `offset-syncs` topic is the `source` cluster by default.
You can use the `offset-syncs.topic.location` connector configuration to change this to the `target` cluster.
You need read/write access to the cluster that contains the topic.
Using the target cluster as the location of the `offset-syncs` topic allows you to use MirrorMaker 2 even if you have only read access to the source cluster.
== Synchronizing consumer group offsets
The `__consumer_offsets` topic stores information on committed offsets for each consumer group.
Offset synchronization periodically transfers the consumer offsets for the consumer groups of a source cluster into the consumer offsets topic of a target cluster.
Offset synchronization is particularly useful in an _active/passive_ configuration.
If the active cluster goes down, consumer applications can switch to the passive (standby) cluster and pick up from the last transferred offset position.
To use topic offset synchronization, enable the synchronization by adding `sync.group.offsets.enabled` to the checkpoint connector configuration, and setting the property to `true`.
Synchronization is disabled by default.
When using the `IdentityReplicationPolicy` in the source connector, it also has to be configured in the checkpoint connector configuration.
This ensures that the mirrored consumer offsets will be applied for the correct topics.
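For example, a checkpoint connector configuration that enables offset synchronization might look like the following illustrative snippet (the interval values shown are the defaults described above, and the `replication.policy.class` line is only needed when the source connector also uses `IdentityReplicationPolicy`):

[source,properties]
----
sync.group.offsets.enabled=true
sync.group.offsets.interval.seconds=60
emit.checkpoints.interval.seconds=60
replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy
----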
Consumer offsets are only synchronized for consumer groups that are not active in the target cluster.
If the consumer groups are in the target cluster, the synchronization cannot be performed and an `UNKNOWN_MEMBER_ID` error is returned.
If enabled, the synchronization of offsets from the source cluster is made periodically.
You can change the frequency by adding `sync.group.offsets.interval.seconds` and `emit.checkpoints.interval.seconds` to the checkpoint connector configuration.
The properties specify the frequency in seconds that the consumer group offsets are synchronized, and the frequency of checkpoints emitted for offset tracking.
The default for both properties is 60 seconds.
You can also change the frequency of checks for new consumer groups using the `refresh.groups.interval.seconds` property, which is performed every 10 minutes by default.
Because the synchronization is time-based, any switchover by consumers to a passive cluster will likely result in some duplication of messages.
NOTE: If you have an application written in Java, you can use the `RemoteClusterUtils.java` utility to synchronize offsets through the application. The utility fetches remote offsets for a consumer group from the `checkpoints` topic.
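As a sketch, fetching the translated offsets from a Java application might look like this (assumes Kafka's `connect-mirror-client` dependency; `"source"` is the source cluster alias and `"my-group"` the consumer group, both hypothetical names):

[source,java]
----
// Connection settings for the cluster that hosts the checkpoints topic
Map<String, Object> properties = new HashMap<>();
properties.put("bootstrap.servers", "target-cluster:9092");

// Translate the group's committed offsets mirrored from the "source" cluster
Map<TopicPartition, OffsetAndMetadata> offsets =
    RemoteClusterUtils.translateOffsets(
        properties, "source", "my-group", Duration.ofSeconds(30));
----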
== Deciding when to use the heartbeat connector
The heartbeat connector emits heartbeats to check connectivity between source and target Kafka clusters.
An internal `heartbeat` topic is replicated from the source cluster, which means that the heartbeat connector must be connected to the source cluster.
The `heartbeat` topic is located on the target cluster, which allows it to do the following:
* Identify all source clusters it is mirroring data from
* Verify the liveness and latency of the mirroring process
This helps to make sure that the process is not stuck or has stopped for any reason.
While the heartbeat connector can be a valuable tool for monitoring the mirroring processes between Kafka clusters, it's not always necessary to use it.
For example, if your deployment has low network latency or a small number of topics, you might prefer to monitor the mirroring process using log messages or other monitoring tools.
If you decide not to use the heartbeat connector, simply omit it from your MirrorMaker 2 configuration.
== Aligning the configuration of MirrorMaker 2 connectors
To ensure that MirrorMaker 2 connectors work properly, make sure to align certain configuration settings across connectors.
Specifically, ensure that the following properties have the same value across all applicable connectors:
* `replication.policy.class`
* `replication.policy.separator`
* `offset-syncs.topic.location`
* `topic.filter.class`
For example, the value for `replication.policy.class` must be the same for the source, checkpoint, and heartbeat connectors.
Mismatched or missing settings cause issues with data replication or offset syncing, so it's essential to keep all relevant connectors configured with the same settings.
```
|
USS Carolita (PYC-38) was a patrol boat in the United States Navy.
Carolita was built in 1923 as Ripple by Germaniawerft, Kiel, Germany; purchased by the Navy on 1 April 1942 from Herman G. Buckley, of Chicago, Illinois; and commissioned on 6 November 1942.
World War II operations
Arriving at Boston, Massachusetts, on 16 December 1942, Carolita operated there until 3 August 1943, when she departed for Key West, Florida, via Norfolk, Virginia, and repairs at Miami, Florida.
Sound School assignment
She served with the Sound School from 8 September 1943, helping to train men in the techniques of anti-submarine warfare.
Decommissioning
Carolita was decommissioned on 28 February 1944 and used as a target.
References
External links
Patrol vessels of the United States Navy
1923 ships
|
```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="path_to_url" xmlns="path_to_url" xsi:schemaLocation="path_to_url path_to_url" version="3.0">
<display-name>FirstJSP</display-name>
<welcome-file-list>
<welcome-file>home.jsp</welcome-file>
</welcome-file-list>
<servlet>
<servlet-name>Test</servlet-name>
<jsp-file>/WEB-INF/test.jsp</jsp-file>
<init-param>
<param-name>test</param-name>
<param-value>Test Value</param-value>
</init-param>
</servlet>
<servlet-mapping>
<servlet-name>Test</servlet-name>
<url-pattern>/Test.do</url-pattern>
</servlet-mapping>
<servlet>
<servlet-name>Test1</servlet-name>
<jsp-file>/WEB-INF/test.jsp</jsp-file>
</servlet>
<servlet-mapping>
<servlet-name>Test1</servlet-name>
<url-pattern>/Test1.do</url-pattern>
</servlet-mapping>
</web-app>
```
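The `Test` servlet above attaches an init parameter to a JSP. Assuming `/WEB-INF/test.jsp` wants to read it, a minimal sketch could use the JSP implicit `config` object (of type `ServletConfig`), through which `<init-param>` values declared for a `<jsp-file>` servlet are exposed:

```jsp
<%-- /WEB-INF/test.jsp — illustrative sketch --%>
<html>
  <body>
    <%-- "test" is the param-name declared in web.xml; this renders
         "Test Value" when the page is requested via /Test.do --%>
    <p>Init parameter: <%= config.getInitParameter("test") %></p>
  </body>
</html>
```

Note that when the same JSP is reached through the `Test1` mapping, which declares no init-param, `config.getInitParameter("test")` returns `null`.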
|
Hockey Club Halytski Levy, known before 2017 as Hockey Club Levy, is a Ukrainian Professional Hockey League club based in Novoiavorivsk, Lviv Oblast. They are a founding member of the Professional Hockey League of Ukraine.
Seasons and records
Season by season results
Note: GP = Games played, W = Wins, OTW = Overtime wins, OTL = Overtime Losses, L = Losses, Pts = Points, GF = Goals for, GA = Goals against
Players
Team captains
Dmytro Hnitko, 2011–present
Head coaches
Denis Bulgakov, 2011–2012
Vladislav Ershov, 2012–present
References
Defunct ice hockey teams in Ukraine
Sport in Lviv
Professional Hockey League teams
Ice hockey clubs established in 2011
2011 establishments in Ukraine
|
Eldorado is a Spanish classic hard rock band formed in Madrid in 2007.
History
In January 2007, bassist César Sánchez and guitarist Nano Paramio formed the band Eldorado. The band drew inspiration from 1970s classic rock groups Bad Company, Deep Purple and Led Zeppelin. After distributing some demo songs, the group was picked up by producer Richard Chycki, who has worked with Aerosmith, Dream Theater, Gotthard, and Rush. Chycki had also been the guitarist in Winter Rose, a band that featured future Dream Theater vocalist James LaBrie back in 1989.
First album - En Busca de Eldorado
With drummer Alex Rada, César Sánchez and Nano Paramio credited as the only members in the CD liner notes, the first album En Busca de Eldorado was recorded at Estudio Uno in Madrid, Spain. The only guest mentioned was Ignacio Vicente Torrecillas on vocals, who also played keyboards on the song En Busca De Eldorado. Another guest on the album was Jaume Pla, who played guitar on Un Mal Presentimiento, Mistreated and Identidad, while producer Richard Chycki played tambourine on Déjame Decirte and El Final.
Seven tracks on the album were sung in Spanish, but the album also contained one track sung in English, a cover of Mistreated, originally recorded by Deep Purple on their album Burn. In October 2007 the group finished recording in Madrid, and the album was produced and mixed by Richard Chycki at "Mixland" in Toronto, Canada, and mastered by Mika Jussila at Finnvox Studios, Helsinki, Finland. For reasons unknown, Ignacio Vicente Torrecillas was only a guest on the album, and Eldorado announced their new lead vocalist to be Jesús Trujillo, who toured with the band supporting the album after its release and provided lead vocals on all subsequent albums of the band to date.
The band released En Busca de Eldorado in May 2008 and toured Spain during the fall. Because of a scheduling conflict, drummer Alex Rada was unable to tour and resigned from the band; he was replaced by Javier Planelles who previously played in Madrid band 69 Revoluciones and released a self-titled album with them in 2005.
Second album - Dorado / Golden
Because En Busca de Eldorado received rave reviews, Chycki agreed to produce the band's second album, which was composed during the band's fall 2008 tour and recorded in Toronto, Canada, in June 2009. Prior to the album's release, Nano Paramio left the band at the end of 2009 to pursue another musical direction, leaving César Sánchez as the only original member of Eldorado. Paramio's departure prompted an extensive search for a new guitarist. The album was released in November 2009 as Dorado, with the lyrics sung in Spanish. As on the first album, there was one song in English, another cover, this time I Don't Need No Doctor, a 1966 song by Ray Charles, although classic rock fans may be more familiar with the version Humble Pie recorded in 1971. In April 2010, the English language version, Golden, was released in Europe by the record label Bad Reputation, and was followed by a North American release in September. The English version also contained the song I Don't Need No Doctor as well as four bonus tracks from the Spanish album. The album was nominated for Best Hard Rock / Metal Album for 2009 at the Independent Music Awards, where it received a Vox Populi (popular vote). Its single "The House Of The 7 Smokestacks" was a finalist in the Australian MusicOz Awards, and the song "Atlantico" was a finalist in the International Songwriting Competition (ISC). The band participated in a Canadian Music Week Festival.
Third album - Paranormal Radio / Antigravity Sound Machine
Eldorado worked on songs for a third album but had to wait six months while Chycki was working on production for Rush's Clockwork Angels album. The band toured with Alter Bridge and Thin Lizzy, and also held a crowd-supported fundraiser. In March 2012, the band recorded its third album, which would also be released in two languages. The Spanish edition, Paranormal Radio, was released on 20 May, and the English edition, Antigravity Sound Machine, was released on 5 November. Unlike the previous releases, the albums did not contain any cover songs. The album won the Best Hard Rock / Metal Album 2012 award at the Spanish Independent Music Awards. The group toured Spain as part of the "Recommended Tour" by Radio3, and also toured Europe in March–April and September 2013. In October 2013, the band announced on their website that drummer Javier Planelles had decided to leave Eldorado for personal reasons and that he would be replaced by Christian Giardino, a Spanish drummer who had also worked for several years in Argentina as a session player in different bands.
Fourth album - Karma Generator / Babylonia Haze
According to the official website of Eldorado, in June 2014 the band again launched a crowdfunding campaign with the objective of recording a new album. The new album was recorded in August and September 2014 at Musigrama Studios in Madrid, with Richard Chycki again as the producer. Like the previous two, it was released in two versions: the Spanish "Karma Generator" and the English "Babylonia Haze". Both came out at the beginning of 2015.
Personnel
The current members of the band include:
Jesús Trujillo - vocals, keyboard and acoustic guitar (2007–present)
Andres Duende - guitar (2010–present)
César Sánchez - bass (2007–present)
Javier Planelles - drums (2008-2013) (2016–present)
Former members include:
Nano Paramio - guitar (2007-2009)
Alex Rada - drums (2007-2008)
Christian Giardino - drums (2013–2016)
Discography
Studio albums
Listed with Spanish version (/ English version):
En Busca de Eldorado (2008)
Dorado (2009) / Golden (2010)
Paranormal Radio (2012) / Antigravity Sound Machine (2012)
Karma Generator (2015) / Babylonia Haze (2015)
Mundo Aéreo (2016) / Riding The Sun (2016)
Awards
Notes
References
External links
Spanish rock music groups
|
Makkoshotyka is a village in Borsod-Abaúj-Zemplén County, Hungary.
References
Populated places in Borsod-Abaúj-Zemplén County
|
```c++
#pragma once
#include <steem/protocol/base.hpp>
#include <steem/chain/evaluator.hpp>
namespace steem { namespace plugins { namespace follow {
using namespace std;
using steem::protocol::account_name_type;
using steem::protocol::base_operation;
class follow_plugin;
struct follow_operation : base_operation
{
account_name_type follower;
account_name_type following;
set< string > what; /// blog, mute
void validate()const;
void get_required_posting_authorities( flat_set<account_name_type>& a )const { a.insert( follower ); }
};
struct reblog_operation : base_operation
{
account_name_type account;
account_name_type author;
string permlink;
void validate()const;
void get_required_posting_authorities( flat_set<account_name_type>& a )const { a.insert( account ); }
};
typedef fc::static_variant<
follow_operation,
reblog_operation
> follow_plugin_operation;
STEEM_DEFINE_PLUGIN_EVALUATOR( follow_plugin, follow_plugin_operation, follow );
STEEM_DEFINE_PLUGIN_EVALUATOR( follow_plugin, follow_plugin_operation, reblog );
} } } // steem::plugins::follow
FC_REFLECT( steem::plugins::follow::follow_operation, (follower)(following)(what) )
FC_REFLECT( steem::plugins::follow::reblog_operation, (account)(author)(permlink) )
STEEM_DECLARE_OPERATION_TYPE( steem::plugins::follow::follow_plugin_operation )
FC_REFLECT_TYPENAME( steem::plugins::follow::follow_plugin_operation )
```
|
```typescript
export type Browser = "Arc" | "Brave Browser" | "Firefox" | "Google Chrome" | "Microsoft Edge" | "Yandex Browser";
```
|
```yaml
filetype: lisp
detect:
filename: "(emacs|zile)$|\\.(el|li?sp|scm|ss|rkt)$"
rules:
- default: "\\([a-z-]+"
- symbol: "\\(([\\-+*/<>]|<=|>=)|'"
    - constant.number: "\\b[0-9]+\\b"
- special: "\\bnil\\b"
    - preproc: "\\b[tT]\\b"
- constant.string: "\\\"(\\\\.|[^\"])*\\\""
- constant.specialChar: "'[A-Za-z][A-Za-z0-9_-]+"
- constant.specialChar: "\\\\.?"
- comment: "(^|[[:space:]]);.*"
- indent-char.whitespace: "[[:space:]]+$"
- indent-char: " + +| + +"
```
|
```c
/* Bluetooth: Mesh Generic OnOff, Generic Level, Lighting & Vendor Models
*
* SPDX-FileContributor: 2018-2021 Espressif Systems (Shanghai) CO LTD
*
*/
#ifndef _GENERIC_SERVER_H_
#define _GENERIC_SERVER_H_
#include "mesh/server_common.h"
#ifdef __cplusplus
extern "C" {
#endif
struct bt_mesh_gen_onoff_state {
uint8_t onoff;
uint8_t target_onoff;
};
struct bt_mesh_gen_onoff_srv {
struct bt_mesh_model *model;
struct bt_mesh_server_rsp_ctrl rsp_ctrl;
struct bt_mesh_gen_onoff_state state;
struct bt_mesh_last_msg_info last;
struct bt_mesh_state_transition transition;
};
struct bt_mesh_gen_level_state {
int16_t level;
int16_t target_level;
int16_t last_level;
int32_t last_delta;
bool move_start;
bool positive;
};
struct bt_mesh_gen_level_srv {
struct bt_mesh_model *model;
struct bt_mesh_server_rsp_ctrl rsp_ctrl;
struct bt_mesh_gen_level_state state;
struct bt_mesh_last_msg_info last;
struct bt_mesh_state_transition transition;
int32_t tt_delta_level;
};
struct bt_mesh_gen_def_trans_time_state {
uint8_t trans_time;
};
struct bt_mesh_gen_def_trans_time_srv {
struct bt_mesh_model *model;
struct bt_mesh_server_rsp_ctrl rsp_ctrl;
struct bt_mesh_gen_def_trans_time_state state;
};
struct bt_mesh_gen_onpowerup_state {
uint8_t onpowerup;
};
struct bt_mesh_gen_power_onoff_srv {
struct bt_mesh_model *model;
struct bt_mesh_server_rsp_ctrl rsp_ctrl;
struct bt_mesh_gen_onpowerup_state *state;
};
struct bt_mesh_gen_power_onoff_setup_srv {
struct bt_mesh_model *model;
struct bt_mesh_server_rsp_ctrl rsp_ctrl;
struct bt_mesh_gen_onpowerup_state *state;
};
struct bt_mesh_gen_power_level_state {
uint16_t power_actual;
uint16_t target_power_actual;
uint16_t power_last;
uint16_t power_default;
uint8_t status_code;
uint16_t power_range_min;
uint16_t power_range_max;
};
struct bt_mesh_gen_power_level_srv {
struct bt_mesh_model *model;
struct bt_mesh_server_rsp_ctrl rsp_ctrl;
struct bt_mesh_gen_power_level_state *state;
struct bt_mesh_last_msg_info last;
struct bt_mesh_state_transition transition;
int32_t tt_delta_level;
};
struct bt_mesh_gen_power_level_setup_srv {
struct bt_mesh_model *model;
struct bt_mesh_server_rsp_ctrl rsp_ctrl;
struct bt_mesh_gen_power_level_state *state;
};
struct bt_mesh_gen_battery_state {
uint32_t battery_level : 8,
time_to_discharge : 24;
uint32_t time_to_charge : 24,
battery_flags : 8;
};
struct bt_mesh_gen_battery_srv {
struct bt_mesh_model *model;
struct bt_mesh_server_rsp_ctrl rsp_ctrl;
struct bt_mesh_gen_battery_state state;
};
struct bt_mesh_gen_location_state {
int32_t global_latitude;
int32_t global_longitude;
int16_t global_altitude;
int16_t local_north;
int16_t local_east;
int16_t local_altitude;
uint8_t floor_number;
uint16_t uncertainty;
};
struct bt_mesh_gen_location_srv {
struct bt_mesh_model *model;
struct bt_mesh_server_rsp_ctrl rsp_ctrl;
struct bt_mesh_gen_location_state *state;
};
struct bt_mesh_gen_location_setup_srv {
struct bt_mesh_model *model;
struct bt_mesh_server_rsp_ctrl rsp_ctrl;
struct bt_mesh_gen_location_state *state;
};
/**
 * According to the hierarchy of Generic Property states (Model Spec section 3.1.8),
 * the Manufacturer Properties and Admin Properties may contain multiple Property
 * states, while User Properties is just a collection of properties that can be
 * accessed.
 *
 * property_count: Number of the properties contained in the table
 * properties: Table of the properties
 *
 * These variables need to be initialized in the application layer: the precise
 * number of properties should be set, and the memory used to store the property
 * values should be allocated.
 */
enum bt_mesh_gen_user_prop_access {
USER_ACCESS_PROHIBIT,
USER_ACCESS_READ,
USER_ACCESS_WRITE,
USER_ACCESS_READ_WRITE,
};
enum bt_mesh_gen_admin_prop_access {
ADMIN_NOT_USER_PROP,
ADMIN_ACCESS_READ,
ADMIN_ACCESS_WRITE,
ADMIN_ACCESS_READ_WRITE,
};
enum bt_mesh_gen_manu_prop_access {
MANU_NOT_USER_PROP,
MANU_ACCESS_READ,
};
struct bt_mesh_generic_property {
uint16_t id;
uint8_t user_access;
uint8_t admin_access;
uint8_t manu_access;
struct net_buf_simple *val;
};
struct bt_mesh_gen_user_prop_srv {
struct bt_mesh_model *model;
struct bt_mesh_server_rsp_ctrl rsp_ctrl;
uint8_t property_count;
struct bt_mesh_generic_property *properties;
};
struct bt_mesh_gen_admin_prop_srv {
struct bt_mesh_model *model;
struct bt_mesh_server_rsp_ctrl rsp_ctrl;
uint8_t property_count;
struct bt_mesh_generic_property *properties;
};
struct bt_mesh_gen_manu_prop_srv {
struct bt_mesh_model *model;
struct bt_mesh_server_rsp_ctrl rsp_ctrl;
uint8_t property_count;
struct bt_mesh_generic_property *properties;
};
struct bt_mesh_gen_client_prop_srv {
struct bt_mesh_model *model;
struct bt_mesh_server_rsp_ctrl rsp_ctrl;
uint8_t id_count;
uint16_t *property_ids;
};
typedef union {
struct {
uint8_t onoff;
} gen_onoff_set;
struct {
int16_t level;
} gen_level_set;
struct {
int16_t level;
} gen_delta_set;
struct {
int16_t level;
} gen_move_set;
struct {
uint8_t trans_time;
} gen_def_trans_time_set;
struct {
uint8_t onpowerup;
} gen_onpowerup_set;
struct {
uint16_t power;
} gen_power_level_set;
struct {
uint16_t power;
} gen_power_default_set;
struct {
uint16_t range_min;
uint16_t range_max;
} gen_power_range_set;
struct {
int32_t latitude;
int32_t longitude;
int16_t altitude;
} gen_loc_global_set;
struct {
int16_t north;
int16_t east;
int16_t altitude;
uint8_t floor_number;
uint16_t uncertainty;
} gen_loc_local_set;
struct {
uint16_t id;
struct net_buf_simple *value;
} gen_user_prop_set;
struct {
uint16_t id;
uint8_t access;
struct net_buf_simple *value;
} gen_admin_prop_set;
struct {
uint16_t id;
uint8_t access;
} gen_manu_prop_set;
} bt_mesh_gen_server_state_change_t;
typedef union {
struct {
uint16_t id;
} user_property_get;
struct {
uint16_t id;
} admin_property_get;
struct {
uint16_t id;
} manu_property_get;
struct {
uint16_t id;
} client_properties_get;
} bt_mesh_gen_server_recv_get_msg_t;
typedef union {
struct {
bool op_en;
uint8_t onoff;
uint8_t tid;
uint8_t trans_time;
uint8_t delay;
} onoff_set;
struct {
bool op_en;
int16_t level;
uint8_t tid;
uint8_t trans_time;
uint8_t delay;
} level_set;
struct {
bool op_en;
int32_t delta_level;
uint8_t tid;
uint8_t trans_time;
uint8_t delay;
} delta_set;
struct {
bool op_en;
int16_t delta_level;
uint8_t tid;
uint8_t trans_time;
uint8_t delay;
} move_set;
struct {
uint8_t trans_time;
} def_trans_time_set;
struct {
uint8_t onpowerup;
} onpowerup_set;
struct {
bool op_en;
uint16_t power;
uint8_t tid;
uint8_t trans_time;
uint8_t delay;
} power_level_set;
struct {
uint16_t power;
} power_default_set;
struct {
uint16_t range_min;
uint16_t range_max;
} power_range_set;
struct {
int32_t latitude;
int32_t longitude;
int16_t altitude;
} loc_global_set;
struct {
int16_t north;
int16_t east;
int16_t altitude;
uint8_t floor_number;
uint16_t uncertainty;
} loc_local_set;
struct {
uint16_t id;
struct net_buf_simple *value;
} user_property_set;
struct {
uint16_t id;
uint8_t access;
struct net_buf_simple *value;
} admin_property_set;
struct {
uint16_t id;
uint8_t access;
} manu_property_set;
} bt_mesh_gen_server_recv_set_msg_t;
void bt_mesh_generic_server_lock(void);
void bt_mesh_generic_server_unlock(void);
void gen_onoff_publish(struct bt_mesh_model *model);
void gen_level_publish(struct bt_mesh_model *model);
void gen_onpowerup_publish(struct bt_mesh_model *model);
void gen_power_level_publish(struct bt_mesh_model *model, uint16_t opcode);
#ifdef __cplusplus
}
#endif
#endif /* _GENERIC_SERVER_H_ */
```
|
Gorazi () may refer to:
Gorazi, Fars
Gorazi, Kerman
Gorazi, Bakharz, Razavi Khorasan Province
Gorazi, Khvaf, Razavi Khorasan Province
|
```perl
#!perl -w
$0 =~ s|\.bat||i;
unless (-f $0) {
$0 =~ s|.*[/\\]||;
for (".", split ';', $ENV{PATH}) {
$_ = "." if $_ eq "";
$0 = "$_/$0" , goto doit if -f "$_/$0";
}
die "'$0' not found.\n";
}
doit: exec "perl", "-x", $0, @ARGV;
die "Failed to exec '$0': $!";
__END__
=head1 NAME
runperl.bat - "universal" batch file to run perl scripts
=head1 SYNOPSIS
C:\> copy runperl.bat foo.bat
C:\> foo
[..runs the perl script 'foo'..]
C:\> foo.bat
[..runs the perl script 'foo'..]
=head1 DESCRIPTION
This file can be copied to any file name ending in the ".bat" suffix.
When executed on a DOS-like operating system, it will invoke the perl
script of the same name, but without the ".bat" suffix. It will
look for the script in the same directory as itself, and then in
the current directory, and then search the directories in your PATH.
It relies on the C<exec()> operator, so you will need to make sure
that works in your perl.
This method of invoking perl scripts has some advantages over
batch-file wrappers like C<pl2bat.bat>: it avoids duplication
of all the code; it ensures C<$0> contains the same name as the
executing file, without any egregious ".bat" suffix; it allows
you to separate your perl scripts from the wrapper used to
run them; since the wrapper is generic, you can use symbolic
links to simply link to C<runperl.bat>, if you are serving your
files on a filesystem that supports that.
On the other hand, if the batch file is invoked with the ".bat"
suffix, it does an extra C<exec()>. This may be a performance
issue. You can avoid this by running it without specifying
the ".bat" suffix.
Perl is invoked with the -x flag, so the script must contain
a C<#!perl> line. Any flags found on that line will be honored.
=head1 BUGS
Perl is invoked with the -S flag, so it will search the PATH to find
the script. This may have undesirable effects.
=head1 SEE ALSO
perl, perlwin32, pl2bat.bat
=cut
```
|
```scss
@import "../../pretixbase/scss/_variables.scss";
@import "../../bootstrap/scss/_bootstrap.scss";
@import "../../fontawesome/scss/font-awesome.scss";
@import "../../pretixbase/scss/_theme.scss";
body {
background: #fbf7fc;
}
footer {
text-align: center;
padding: 10px 0;
font-size: 11px;
}
.logo {
width: 200px;
margin: auto;
display: block;
margin-top: 10%;
margin-bottom: 30px;
height: auto;
max-width: 100%;
}
.form-signin {
@extend .well;
background: white;
box-shadow: 0 7px 14px 0 rgba(78, 50, 92, 0.1),0 3px 6px 0 rgba(0,0,0,.07);
border: 1px solid white;
max-width: 330px;
margin: auto;
padding-bottom: 0;
.control-label {
display: none;
}
.buttons {
text-align: right;
}
h3 {
margin-top: 0;
}
p:last-child {
margin-bottom: 20px;
}
}
.container > .alert {
max-width: 330px;
margin: auto;
margin-bottom: 20px;
}
.impersonate-warning {
background-color: #ffe761;
background-image: -webkit-linear-gradient(-45deg, rgba(0, 0, 0, .04) 25%, transparent 25%, transparent 50%, rgba(0, 0, 0, .04) 50%, rgba(0, 0, 0, .04) 75%, transparent 75%, transparent);
background-image: -moz-linear-gradient(-45deg, rgba(0, 0, 0, .04) 25%, transparent 25%, transparent 50%, rgba(0, 0, 0, .04) 50%, rgba(0, 0, 0, .04) 75%, transparent 75%, transparent);
background-image: linear-gradient(135deg, rgba(0, 0, 0, .04) 25%, transparent 25%, transparent 50%, rgba(0, 0, 0, .04) 50%, rgba(0, 0, 0, .04) 75%, transparent 75%, transparent);
padding: 15px;
max-width: 330px;
box-shadow: 0 7px 14px 0 rgba(78, 50, 92, 0.1),0 3px 6px 0 rgba(0,0,0,.07);
margin: 15px auto;
border-radius: $border-radius-base;
}
@import "../../pretixbase/scss/_rtl.scss";
@import "../../bootstrap/scss/_rtl.scss";
```
|
An alphabetical listing of the rivers from the list of British Columbia rivers, which is ordered by watershed location
A
Adam River
Adams River
Akie River
Alces River
Alouette River
Alsek River
Anderson River
Artlish River
Ash River
Asitka River
Atleo River
Atlin Lake
Atnarko River
Ayton Creek
Azure River
B
Bancroft Creek
Barrière River
Bear River
Bear River
Beatton River
Beaver River
Bedwell River
Bella Coola River
Benson River
Besa River
Birkenhead River
Bishop River
Black River
Blanchard River
Bloedel Creek
Blue River
Blueberry River
Bonaparte River
Bowron River
Bowser Lake, Bowser River
Bridge River
Browns River
Bulkley River
Bull River
Burman River
C
Cadwallader Creek
Cameron River
Cameron River
Campbell River (Semiahmoo Bay)
Campbell River (Vancouver Island)
Canoe River
Capilano River
Cariboo River
Carmanah Creek
Castle Creek
Caycuse River
Cayoosh Creek
Cervus Creek
Cheakamus River
Chehalis River
Chemainus River
Cheslatta River
Chilanko River
Chilcat River (sp. Chilkat in Alaska)
Chilcotin River
Chilko River
China Creek
Chipman Creek
Chischa River
Chowade River
Chukachida River
Churn Creek
Chute Creek
Clayoquot River
Clearwater River
Clore River
Coal River
Coldwater River (from Coquihalla Pass)
Colquitz Creek
Columbia River
Comox Creek
Conuma River
Coquihalla River
Coquitlam River
Corrigan Creek
Cottonwood
Cous Creek
Cowichan River
Craig River
Crooked River
Cruickshank River
Cypress River
D
Dall River
Darling River
Davie River
Deadman River
Dean Channel
Dean River
Dease River
Decker Lake
Doig River
Doré River
Drinkwater Creek
Dudidontu River
Duncan River
Dunedin River
Dunsmuir Creek
Duti River
E
Eagle River (Dease River tributary)
Eagle River (Shuswap Lake)
East Klanawa River
East Tutshi River
Ecstall River
Edmond River
Effingham River
Elaho River
Elk River
Elk River
Endako River, Burns Lake
Englishman River
Entiako River
Eve River
Exchamsiks River
F
Fantail River
Finlay River
Firesteel River
Fisherman River
Fitzsimmons Creek
Flathead River
Fleet River
Flemer River
Fontas River
Fort Nelson River
Fox River
Franklin River
Fraser River
Frog River
G
Gataga River
Gates River
Gitnadoix River
Gladys River
Goat River
Gold River
Goldstream River
Goodspeed River
Gordon River
Gossen Creek
Graham River
Grayling River
Green River
Greenstone Creek
Gun Creek
Gundahoo River
H
Hackett River
Halfway River
Harrison River
Hayes River
Heber River
Hendon River
Herrick Creek
Hesquiat River
Homan River
Homathko River
Horsefly River
Houston River
Hurley River
I
Illecillewaet River
Ingenika River
Inhini River
Inklin River
Iron River
Iskut River
J
Jack Elliott Creek
Jacklah River
Jennings River
Jordan River
K
Kaouk River
Kasiks River
Katete River
Kechika River
Kedahda River
Kehlechoa River
Kelsall River
Kemano River
Kennedy River
Keogh River
Kettle River
Kicking Horse River
Kiskatinaw River
Kispiox River
Kitimat River
Kitlope River
Kitnayakwa River
Kitsumkalum River, Kitsumkalum Lake
Klanawa River
Klappan River
Klastline River
Kleanza Creek
Klehini River
Klinaklini River
Kluatantan River
Kluayetz Creek
Knight Inlet
Koeye River
Kokish River
Koksilah River
Kootenay River
Kusawa River
Kwadacha River
Kwinitsa Creek
Kwois Creek
L
Lakelse River
Leckie Creek
Leech River
Leiner River
Lillooet River
Lindeman Creek (from Chilkoot Pass)
Little Iskut River
Little Klappan River
Little Nitinat River
Little Oyster River
Little Qualicum River
Little Rancheria River
Little River (Cariboo River tributary)
Little River (Little Shuswap Lake)
Little River (Vancouver Island)
Little Tahltan River
Little Tuya River
Lord River
Loss Creek
M
Machmell River
MacJack River
Mahood River
Major Hart River
Mamquam River
Manson River
Marble River
Matthew River
Maxan Lake
McBride River
McCook River
McGregor River
McLennan River
McLeod River
McNeil River
Meager Creek
Megin River
Memkay River
Mesilinka River
Middle Memkay River
Milk River
Minaker River
Mitchell River
Moberly River
Morice Lake
Morice River
Mosquito Creek
Mosque River
Moyena River
Muchalat River
Murray River
Murtle River
Muskwa River
Myra Creek
N
Nabesche River
Nahatlatch River
Nahlin River
Nahmint River
Nahwitti River
Nakina River
Nakonake River
Nanaimo River
Narraway River
Nass River
Natalkuz Lake
Nation River
Nautley River
Nechako River
Nesook River
Niagara Creek
Nicola River
Nicomekl River
Nimpkish Lake
Nimpkish River
Ningunsaw River
Nitinat River
Noaxe Creek
Nomash River
North Kwadacha River
North Memkay River
North Nanaimo River
North Thompson River
Noyse Creek
O
Obo River
O'Donnel River
Okanagan River
Oktwanch River
Omineca River
Osborn River
Osilinka River
Ospika River
Oyster River
P
Pachena River
Pack River
Parsnip River
Parton River
Partridge River
Pend d'Oreille River
Perry River
Petitot River
Piggott Creek
Pike River
Pine River
Pitman River
Pitt River
Pouce Coupe River
Powell River
Primrose River
Prophet River
Puntledge River
Q
Qualicum River
Quartz Creek
Quatsie River
Quesnel River
Quinsam River
R
Rabbit River
Racine Creek
Racing River
Raging River
Ralph River
Range Creek
Rapid River
Raush River
Red River
Redwillow River (Smoky River, Alberta drainage)
Relay Creek
River of Golden Dreams
Roaring River
Robertson River
Robson River
Ross River
Rutherford Creek
Ryan River
S
Sahtaneh River
Salmon River
Salmon River
Samotua River
San Jose River
San Josef River
San Juan River
Sand River
Sarita River
Saunders Creek
Schipa River
Serpentine River
Seton Creek
Seymour River
Shawnigan River
Sheemahant River
Shegunia River
Shepherd Creek
Sheslay River
Shushartie River
Shuswap River
Sicintine River
Sikanni Chief River
Silver Salmon River
Similkameen River
Sittakanay River (confluence with the Taku is in Alaska)
Skagit River
Skeena River
Slim Creek
Sloko River
Smith River
Snake River
Somass River
Sombrio River
Soo River
Sooke Lake
Sooke River
South Englishman River
South Nanaimo River
South Sarita River
South Thompson River
South Whiting River
Southgate River
Spatsizi River
Spillimacheen River
Sproat River
Squamish River
Squinguila River
St. Mary's River
Stamp River
Stave River
Stein River
Stellako River
Stikine River
Stranby River
Stuart River
Stuhini Creek
Sturdee River
Sucowa River
Sukunwa River
Surprise Creek
Sustut River
Sutlahine River
Swannell River
Swanson River
Swift River
Swift River
Sydney River
T
Tahini River (NB different from Takhini River, which is in the same area and drains the other way)
Tahltan River
Tahsis River
Tahsish River
Tahtsa Reach, Tahtsa Lake
Takhini River
Taku River
Talbot Creek
Tanzilla River
Taseko River
Tats Creek
Tatsatua Creek
Tatshenshini River
Taylor River
Tchaikazan River
Teihsum River
Telkwa River
Teslin River
Tetachuck Lake
Tetsa River
Thelwood Creek
Thompson River
Tkope Creek
Tlupana River
Toad River
Toba Inlet
Toba River
Toboggan Creek
Tofino Creek
Toodoggone River
Toquart River
Torpy River
Torres Channel
Trent River
Trout Creek?
Trout River
Tsable River
Tsitika River
Tsolum River
Tsuiquate River
Tuchodi River
Tulsequah River
Turnagain River
Tutshi River (from White Pass)
Tuya River
Tyaughton Creek
U
Ucana River
Unuk River
Upana River
Ursus Creek
V
Vedder River (aka Chilliwack River)
Vents River
W
Walbran Creek
Wannock River
Wap Creek
Wapiti River (Smoky River, Alberta drainage)
Warneford River
Watt Creek
Waukwaas River
West Kiskatinaw River
West Road or Blackwater River
West Toad River
White River
Whiting River
Wicked River
Willow River
Wolf River
Wolverine River
Wolverine River
Woss Creek
X-Y
Yahk River
Yalakom River
Yeth Creek
Z
Zeballos River
Zohini Creek
Zymoetz River
See also
List of British Columbia rivers
British Columbia
Rivers
|
```go
package chunk
import (
"github.com/prometheus/common/model"
"github.com/prometheus/prometheus/model/histogram"
"github.com/prometheus/prometheus/tsdb/chunkenc"
)
// Iterator enables efficient access to the content of a chunk. It is
// generally not safe to use an Iterator concurrently with or after chunk
// mutation.
type Iterator interface {
// Scans the next value in the chunk. Directly after the iterator has
// been created, the next value is the first value in the
// chunk. Otherwise, it is the value following the last value scanned or
	// found (by one of the Find... methods). Returns chunkenc.ValNone if either
// the end of the chunk is reached or an error has occurred.
Scan() chunkenc.ValueType
// Finds the oldest value at or after the provided time and returns the value type.
// Returns chunkenc.ValNone if either the chunk contains no value at or after
// the provided time, or an error has occurred.
FindAtOrAfter(model.Time) chunkenc.ValueType
	// Returns a batch of the provided size; NB not idempotent! Should only be called
// once per Scan.
Batch(size int, valType chunkenc.ValueType) Batch
// Returns the last error encountered. In general, an error signals data
// corruption in the chunk and requires quarantining.
Err() error
}
// BatchSize is the number of samples per batch; this was chosen by benchmarking all sizes from
// 1 to 128.
const BatchSize = 12
// Batch is a sorted set of (timestamp, value) pairs. They are intended to be small,
// and passed by value. Value can vary depending on the chunk value type.
type Batch struct {
Timestamps [BatchSize]int64
Values [BatchSize]float64
Histograms [BatchSize]*histogram.Histogram
FloatHistograms [BatchSize]*histogram.FloatHistogram
Index int
Length int
ValType chunkenc.ValueType
}
```
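The batching contract above (Batch is not idempotent and advances the iterator; a Batch is a small fixed-size window with a Length cursor) can be sketched without the real chunk package. The following is a minimal, self-contained sketch: the `sliceIterator` type, its boolean `Scan`, and the sample data are assumptions for illustration, standing in for a real chunk iterator and the `chunkenc.ValueType` return values.

```go
package main

import "fmt"

const batchSize = 12 // mirrors chunk.BatchSize

// batch mirrors the float part of chunk.Batch: a small window passed by value.
type batch struct {
	Timestamps [batchSize]int64
	Values     [batchSize]float64
	Length     int
}

// sliceIterator is a stand-in for a chunk iterator over in-memory samples.
type sliceIterator struct {
	ts   []int64
	vals []float64
	pos  int // index of the next sample to hand out
}

// Scan reports whether another sample is available (standing in for
// returning a non-ValNone chunkenc.ValueType).
func (it *sliceIterator) Scan() bool { return it.pos < len(it.ts) }

// Batch returns up to batchSize samples starting at the current position.
// Like the real method, it is not idempotent: it advances the iterator.
func (it *sliceIterator) Batch() batch {
	var b batch
	for b.Length < batchSize && it.pos < len(it.ts) {
		b.Timestamps[b.Length] = it.ts[it.pos]
		b.Values[b.Length] = it.vals[it.pos]
		b.Length++
		it.pos++
	}
	return b
}

func main() {
	it := &sliceIterator{
		ts:   []int64{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14},
		vals: []float64{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13},
	}
	var lengths []int
	for it.Scan() {
		b := it.Batch()
		lengths = append(lengths, b.Length)
	}
	fmt.Println(lengths) // two batches: 12 samples, then the 2 left over
}
```

The consumer pattern is the point: call Scan once, drain one Batch, repeat; calling Batch twice per Scan would silently skip samples.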
|
```xml
<?xml version="1.0" encoding="utf-8"?>
<objectAnimator
    xmlns:android="http://schemas.android.com/apk/res/android"
android:interpolator="@android:interpolator/fast_out_linear_in"
android:propertyName="fillAlpha"
android:valueFrom="0f"
android:valueTo="1f"
android:duration="900"
android:startOffset="300"
android:repeatCount="infinite"
android:repeatMode="reverse" />
```
|
```java
package docs.http.javadsl.server.directives;
import akka.http.javadsl.model.HttpEntities;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.HttpResponse;
import akka.http.javadsl.model.MediaTypes;
import akka.http.javadsl.model.headers.AcceptEncoding;
import akka.http.javadsl.model.headers.ContentEncoding;
import akka.http.javadsl.model.headers.HttpEncodings;
import akka.http.javadsl.coding.Coder;
import akka.http.javadsl.server.Rejections;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.util.ByteString;
import org.junit.Test;
import java.util.Collections;
import static akka.http.javadsl.unmarshalling.Unmarshaller.entityToString;
//#responseEncodingAccepted
import static akka.http.javadsl.server.Directives.complete;
import static akka.http.javadsl.server.Directives.responseEncodingAccepted;
//#responseEncodingAccepted
//#encodeResponse
import static akka.http.javadsl.server.Directives.complete;
import static akka.http.javadsl.server.Directives.encodeResponse;
//#encodeResponse
//#encodeResponseWith
import static akka.http.javadsl.server.Directives.complete;
import static akka.http.javadsl.server.Directives.encodeResponseWith;
//#encodeResponseWith
//#decodeRequest
import static akka.http.javadsl.server.Directives.complete;
import static akka.http.javadsl.server.Directives.decodeRequest;
import static akka.http.javadsl.server.Directives.entity;
//#decodeRequest
//#decodeRequestWith
import static akka.http.javadsl.server.Directives.complete;
import static akka.http.javadsl.server.Directives.decodeRequestWith;
import static akka.http.javadsl.server.Directives.entity;
//#decodeRequestWith
//#withPrecompressedMediaTypeSupport
import static akka.http.javadsl.server.Directives.complete;
import static akka.http.javadsl.server.Directives.withPrecompressedMediaTypeSupport;
//#withPrecompressedMediaTypeSupport
@SuppressWarnings("deprecation")
public class CodingDirectivesExamplesTest extends JUnitRouteTest {
@Test
public void testResponseEncodingAccepted() {
//#responseEncodingAccepted
final Route route = responseEncodingAccepted(HttpEncodings.GZIP, () ->
complete("content")
);
// tests:
testRoute(route).run(HttpRequest.GET("/"))
.assertEntity("content");
runRouteUnSealed(route,
HttpRequest.GET("/")
.addHeader(AcceptEncoding.create(HttpEncodings.DEFLATE)))
.assertRejections(Rejections.unacceptedResponseEncoding(HttpEncodings.GZIP));
//#responseEncodingAccepted
}
@Test
public void testEncodeResponse() {
//#encodeResponse
final Route route = encodeResponse(() -> complete("content"));
// tests:
testRoute(route).run(
HttpRequest.GET("/")
.addHeader(AcceptEncoding.create(HttpEncodings.GZIP))
.addHeader(AcceptEncoding.create(HttpEncodings.DEFLATE))
).assertHeaderExists(ContentEncoding.create(HttpEncodings.GZIP));
testRoute(route).run(
HttpRequest.GET("/")
.addHeader(AcceptEncoding.create(HttpEncodings.DEFLATE))
).assertHeaderExists(ContentEncoding.create(HttpEncodings.DEFLATE));
// This case failed!
// testRoute(route).run(
// HttpRequest.GET("/")
// .addHeader(AcceptEncoding.create(HttpEncodings.IDENTITY))
// ).assertHeaderExists(ContentEncoding.create(HttpEncodings.IDENTITY));
//#encodeResponse
}
@Test
public void testEncodeResponseWith() {
//#encodeResponseWith
final Route route = encodeResponseWith(
Collections.singletonList(Coder.Gzip),
() -> complete("content")
);
// tests:
testRoute(route).run(HttpRequest.GET("/"))
.assertHeaderExists(ContentEncoding.create(HttpEncodings.GZIP));
testRoute(route).run(
HttpRequest.GET("/")
.addHeader(AcceptEncoding.create(HttpEncodings.GZIP))
.addHeader(AcceptEncoding.create(HttpEncodings.DEFLATE))
).assertHeaderExists(ContentEncoding.create(HttpEncodings.GZIP));
runRouteUnSealed(route,
HttpRequest.GET("/")
.addHeader(AcceptEncoding.create(HttpEncodings.DEFLATE))
).assertRejections(Rejections.unacceptedResponseEncoding(HttpEncodings.GZIP));
runRouteUnSealed(route,
HttpRequest.GET("/")
.addHeader(AcceptEncoding.create(HttpEncodings.IDENTITY))
).assertRejections(Rejections.unacceptedResponseEncoding(HttpEncodings.GZIP));
final Route routeWithLevel9 = encodeResponseWith(
Collections.singletonList(Coder.GzipLevel9),
() -> complete("content")
);
testRoute(routeWithLevel9).run(HttpRequest.GET("/"))
.assertHeaderExists(ContentEncoding.create(HttpEncodings.GZIP));
//#encodeResponseWith
}
@Test
public void testDecodeRequest() {
//#decodeRequest
final ByteString helloGzipped = Coder.Gzip.encode(ByteString.fromString("Hello"));
final ByteString helloDeflated = Coder.Deflate.encode(ByteString.fromString("Hello"));
final Route route = decodeRequest(() ->
entity(entityToString(), content ->
complete("Request content: '" + content + "'")
)
);
// tests:
testRoute(route).run(
HttpRequest.POST("/").withEntity(helloGzipped)
.addHeader(ContentEncoding.create(HttpEncodings.GZIP)))
.assertEntity("Request content: 'Hello'");
testRoute(route).run(
HttpRequest.POST("/").withEntity(helloDeflated)
.addHeader(ContentEncoding.create(HttpEncodings.DEFLATE)))
.assertEntity("Request content: 'Hello'");
testRoute(route).run(
HttpRequest.POST("/").withEntity("hello uncompressed")
.addHeader(ContentEncoding.create(HttpEncodings.IDENTITY)))
.assertEntity( "Request content: 'hello uncompressed'");
//#decodeRequest
}
@Test
public void testDecodeRequestWith() {
//#decodeRequestWith
final ByteString helloGzipped = Coder.Gzip.encode(ByteString.fromString("Hello"));
final ByteString helloDeflated = Coder.Deflate.encode(ByteString.fromString("Hello"));
final Route route = decodeRequestWith(Coder.Gzip, () ->
entity(entityToString(), content ->
complete("Request content: '" + content + "'")
)
);
// tests:
testRoute(route).run(
HttpRequest.POST("/").withEntity(helloGzipped)
.addHeader(ContentEncoding.create(HttpEncodings.GZIP)))
.assertEntity("Request content: 'Hello'");
runRouteUnSealed(route,
HttpRequest.POST("/").withEntity(helloDeflated)
.addHeader(ContentEncoding.create(HttpEncodings.DEFLATE)))
.assertRejections(Rejections.unsupportedRequestEncoding(HttpEncodings.GZIP));
runRouteUnSealed(route,
HttpRequest.POST("/").withEntity("hello")
.addHeader(ContentEncoding.create(HttpEncodings.IDENTITY)))
.assertRejections(Rejections.unsupportedRequestEncoding(HttpEncodings.GZIP));
//#decodeRequestWith
}
@Test
public void testWithPrecompressedMediaTypeSupport() {
//#withPrecompressedMediaTypeSupport
final ByteString svgz = Coder.Gzip.encode(ByteString.fromString("<svg/>"));
final Route route = withPrecompressedMediaTypeSupport(() ->
complete(
HttpResponse.create().withEntity(
HttpEntities.create(MediaTypes.IMAGE_SVGZ.toContentType(), svgz))
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/"))
.assertMediaType(MediaTypes.IMAGE_SVG_XML)
.assertHeaderExists(ContentEncoding.create(HttpEncodings.GZIP));
//#withPrecompressedMediaTypeSupport
}
}
```
|
Howard Township is a civil township of Cass County in the U.S. state of Michigan. The population was 6,207 at the 2010 census.
Geography
Howard Township is located in southwestern Cass County, bordered by Berrien County and the city of Niles to the west. According to the United States Census Bureau, the township has a total area of , of which is land and , or 2.17%, is water. Barron Lake, near the center of the township, is the largest body of water.
Demographics
As of the census of 2000, there were 6,309 people, 2,472 households, and 1,846 families residing in the township. The population density was . There were 2,663 housing units at an average density of . The racial makeup of the township was 93.87% White, 3.71% African American, 0.41% Native American, 0.17% Asian, 0.03% Pacific Islander, 0.59% from other races, and 1.22% from two or more races. Hispanic or Latino of any race were 0.97% of the population.
There were 2,472 households, out of which 28.9% had children under the age of 18 living with them, 62.1% were married couples living together, 8.3% had a female householder with no husband present, and 25.3% were non-families. 21.3% of all households were made up of individuals, and 8.5% had someone living alone who was 65 years of age or older. The average household size was 2.55 and the average family size was 2.92.
In the township the population was spread out, with 23.0% under the age of 18, 6.3% from 18 to 24, 27.4% from 25 to 44, 29.5% from 45 to 64, and 13.9% who were 65 years of age or older. The median age was 41 years. For every 100 females, there were 102.0 males. For every 100 females age 18 and over, there were 100.2 males.
The median income for a household in the township was $41,477, and the median income for a family was $47,382. Males had a median income of $36,098 versus $23,780 for females. The per capita income for the township was $19,429. About 4.6% of families and 7.0% of the population were below the poverty line, including 9.2% of those under age 18 and 7.3% of those age 65 or over.
References
External links
Howard Township official website
Townships in Cass County, Michigan
South Bend – Mishawaka metropolitan area
Townships in Michigan
|
Desert Pirate is Thomas Leeb's fourth available release and features 10 instrumentals.
Track listing
"Grooveyard"
"Jebuda"
"Nai Nai"
"Oachkatzlschwoaf"
"Isobel"
"No Woman, No Cry"
"Desert Pirate"
"Ladzekpo"
"Oft Geht Bled"
"Quicksilver"
All songs by Thomas Leeb, except
"Isobel" (Björk, arr. Leeb)
"No Woman, No Cry" (Vincent Ford, arr. Leeb)
Personnel
Thomas Leeb - acoustic guitar
Eric Spitzer - mixing & mastering
References
2007 albums
Thomas Leeb albums
|
```c
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to
* deal in the Software without restriction, including without limitation the
* rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
* sell copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*/
#include "uv.h"
#include "uv/tree.h"
#include "internal.h"
#include "heap-inl.h"
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
int uv_loop_init(uv_loop_t* loop) {
void* saved_data;
int err;
saved_data = loop->data;
memset(loop, 0, sizeof(*loop));
loop->data = saved_data;
heap_init((struct heap*) &loop->timer_heap);
QUEUE_INIT(&loop->wq);
QUEUE_INIT(&loop->idle_handles);
QUEUE_INIT(&loop->async_handles);
QUEUE_INIT(&loop->check_handles);
QUEUE_INIT(&loop->prepare_handles);
QUEUE_INIT(&loop->handle_queue);
loop->active_handles = 0;
loop->active_reqs.count = 0;
loop->nfds = 0;
loop->watchers = NULL;
loop->nwatchers = 0;
QUEUE_INIT(&loop->pending_queue);
QUEUE_INIT(&loop->watcher_queue);
loop->closing_handles = NULL;
uv__update_time(loop);
loop->async_io_watcher.fd = -1;
loop->async_wfd = -1;
loop->signal_pipefd[0] = -1;
loop->signal_pipefd[1] = -1;
loop->backend_fd = -1;
loop->emfile_fd = -1;
loop->timer_counter = 0;
loop->stop_flag = 0;
err = uv__platform_loop_init(loop);
if (err)
return err;
uv__signal_global_once_init();
err = uv_signal_init(loop, &loop->child_watcher);
if (err)
goto fail_signal_init;
uv__handle_unref(&loop->child_watcher);
loop->child_watcher.flags |= UV_HANDLE_INTERNAL;
QUEUE_INIT(&loop->process_handles);
err = uv_rwlock_init(&loop->cloexec_lock);
if (err)
goto fail_rwlock_init;
err = uv_mutex_init(&loop->wq_mutex);
if (err)
goto fail_mutex_init;
err = uv_async_init(loop, &loop->wq_async, uv__work_done);
if (err)
goto fail_async_init;
uv__handle_unref(&loop->wq_async);
loop->wq_async.flags |= UV_HANDLE_INTERNAL;
return 0;
fail_async_init:
uv_mutex_destroy(&loop->wq_mutex);
fail_mutex_init:
uv_rwlock_destroy(&loop->cloexec_lock);
fail_rwlock_init:
uv__signal_loop_cleanup(loop);
fail_signal_init:
uv__platform_loop_delete(loop);
return err;
}
int uv_loop_fork(uv_loop_t* loop) {
int err;
unsigned int i;
uv__io_t* w;
err = uv__io_fork(loop);
if (err)
return err;
err = uv__async_fork(loop);
if (err)
return err;
err = uv__signal_loop_fork(loop);
if (err)
return err;
/* Rearm all the watchers that aren't re-queued by the above. */
for (i = 0; i < loop->nwatchers; i++) {
w = loop->watchers[i];
if (w == NULL)
continue;
if (w->pevents != 0 && QUEUE_EMPTY(&w->watcher_queue)) {
w->events = 0; /* Force re-registration in uv__io_poll. */
QUEUE_INSERT_TAIL(&loop->watcher_queue, &w->watcher_queue);
}
}
return 0;
}
void uv__loop_close(uv_loop_t* loop) {
uv__signal_loop_cleanup(loop);
uv__platform_loop_delete(loop);
uv__async_stop(loop);
if (loop->emfile_fd != -1) {
uv__close(loop->emfile_fd);
loop->emfile_fd = -1;
}
if (loop->backend_fd != -1) {
uv__close(loop->backend_fd);
loop->backend_fd = -1;
}
uv_mutex_lock(&loop->wq_mutex);
assert(QUEUE_EMPTY(&loop->wq) && "thread pool work queue not empty!");
assert(!uv__has_active_reqs(loop));
uv_mutex_unlock(&loop->wq_mutex);
uv_mutex_destroy(&loop->wq_mutex);
/*
* Note that all thread pool stuff is finished at this point and
* it is safe to just destroy rw lock
*/
uv_rwlock_destroy(&loop->cloexec_lock);
#if 0
assert(QUEUE_EMPTY(&loop->pending_queue));
assert(QUEUE_EMPTY(&loop->watcher_queue));
assert(loop->nfds == 0);
#endif
uv__free(loop->watchers);
loop->watchers = NULL;
loop->nwatchers = 0;
}
int uv__loop_configure(uv_loop_t* loop, uv_loop_option option, va_list ap) {
if (option != UV_LOOP_BLOCK_SIGNAL)
return UV_ENOSYS;
if (va_arg(ap, int) != SIGPROF)
return UV_EINVAL;
loop->flags |= UV_LOOP_BLOCK_SIGPROF;
return 0;
}
```
|
```php
<?php
namespace Mpociot\ApiDoc\Tests\Unit;
use Illuminate\Routing\Route;
use Illuminate\Support\Facades\Route as RouteFacade;
use Mpociot\ApiDoc\ApiDocGeneratorServiceProvider;
use Mpociot\ApiDoc\Tests\Fixtures\TestController;
class LaravelGeneratorTest extends GeneratorTestCase
{
protected function getPackageProviders($app)
{
return [
ApiDocGeneratorServiceProvider::class,
];
}
public function createRoute(string $httpMethod, string $path, string $controllerMethod, $register = false, $class = TestController::class)
{
if ($register) {
return RouteFacade::{$httpMethod}($path, $class . "@$controllerMethod");
} else {
return new Route([$httpMethod], $path, ['uses' => $class . "@$controllerMethod"]);
}
}
public function createRouteUsesArray(string $httpMethod, string $path, string $controllerMethod, $register = false, $class = TestController::class)
{
if ($register) {
            return RouteFacade::{$httpMethod}($path, [$class, $controllerMethod]);
} else {
return new Route([$httpMethod], $path, ['uses' => [$class, $controllerMethod]]);
}
}
}
```
|
```python
import zlib
import os
import sys
import getopt
import xml.dom.minidom as minidom
#import hashlib
import traceback
from Crypto import Random
from Crypto.Hash import SHA256
from Crypto.Signature import pss as pss
from Crypto.PublicKey import RSA
import binascii
import base64
'''
'''
OK = 0
ERR = 1
STOP = 2
RET_INVALID_ARG = 3
RET_FILE_NOT_EXIST = 4
RET_FILE_PATH_ERR = 5
RET_FILE_OPERATION_ERR = 6
RET_XML_CONFIG_ERR=7
FILE_OPERATE_SIZE = 4 * 1024
TLV_SHA256_CHECKSUM = 1
TLV_CRC_CHECKSUM = TLV_SHA256_CHECKSUM + 1
TLV_SHA256_RSA2048_CHECKSUM = TLV_SHA256_CHECKSUM + 2
INVALID_OFFSET = -1
TLV_T_LEN = 2
TLV_L_LEN = 2
TLV_HEAD_LEN_POS = 4
TLV_TOTAL_LEN_POS = 8
DEFAULT_CONFIG_FILE_NAME = 'config.xml'
DEFAULT_KEY_FILE_NAME = 'private_key.pem'
def min(a, b):
if a <= b:
return a
else:
return b
def append_buffer(buffer, pos, value):
buffer.append(value)
def serialize_byte(buffer, value, byte_number = 4, pos = 0, callback = append_buffer):
max_num = 4
if byte_number > max_num:
print 'write byte_number %d err'%(byte_number)
raise Exception()
return ERR
mask = 0xff000000
shift_num = 24
for i in range(0, max_num - byte_number):
mask = (mask >> 8)
shift_num -= 8
for i in range(0, byte_number):
callback(buffer, i + pos, ((mask & value) >> shift_num))
mask = (mask >> 8)
shift_num -= 8
return OK
class file_writer(object):
def __init__(self, file_name):
self.fd = open(file_name, 'w+b')
def __del__(self):
self.fd.close()
def write(self, value, byte_num = 4, offset = INVALID_OFFSET):
if offset != INVALID_OFFSET:
self.fd.seek(offset)
if isinstance(value, int) or isinstance(value, long):
buffer = bytearray()
serialize_byte(buffer, value, byte_num)
elif isinstance(value, str):
buffer=bytearray(value)
else:
buffer = value
self.fd.seek(self.fd.tell())
self.fd.write(buffer)
def read(self, size, offset = INVALID_OFFSET):
if offset != INVALID_OFFSET:
self.fd.seek(offset)
return self.fd.read(size)
def tell(self):
return self.fd.tell()
def seek(self, offset):
self.fd.seek(offset)
class config_info(object):
def __init__(self):
self.input_file=""
self.output_file=""
self.config_file=""
self.key_file = ""
def parse_args(self):
try:
            opts, args = getopt.getopt(sys.argv[1:], 'c:i:o:k:')
except getopt.GetoptError as err:
print str(err)
return RET_INVALID_ARG
self.config_file = "config.xml"
for opt, arg in opts:
if opt == "-c":
self.config_file = arg
elif opt == "-i":
self.input_file = arg
elif opt == "-o":
self.output_file = arg
elif opt == "-k":
self.key_file = arg
else:
pass
if not os.path.isfile(self.input_file):
print "{0} is not a file!".format(self.input_file)
return RET_INVALID_ARG
if self.output_file == '':
self.output_file = os.path.join(os.getcwd(), os.path.basename(self.input_file) + ".out_bin")
if self.config_file == '':
self.config_file = DEFAULT_CONFIG_FILE_NAME
if self.key_file == '':
self.key_file = DEFAULT_KEY_FILE_NAME
if not os.path.isfile(self.config_file):
print "config xml file \"{}\" not exist".format(self.config_file)
return RET_INVALID_ARG
return OK
def get_input_file(self):
return self.input_file
def get_output_file(self):
return self.output_file
def get_config_file(self):
return self.config_file
class checksum(object):
def name(self):
return ''
def size(self):
return 0
def attribute(self):
return -1
def update(self, buffer):
pass
def get_checksum(self):
return ""
def reset(self):
pass
class sha256_checksum(checksum):
def __init__(self):
self.reset()
def name(self):
return 'sha256'
def attribute(self):
return TLV_SHA256_CHECKSUM
def size(self):
return 32
def update(self, buffer):
self.sha256.update(buffer)
def get_checksum(self):
print 'sha256 checksum:%s'%(self.sha256.hexdigest())
return self.sha256.digest()
def reset(self):
#self.sha256 = hashlib.sha256('')
self.sha256 = SHA256.new()
class sha256_rsa2048_checksum(sha256_checksum):
    def __init__(self, private_key_file_name):
        sha256_checksum.__init__(self)
        self.private_key_file_name = private_key_file_name
    def name(self):
        return 'sha256_rsa2048'
    def attribute(self):
        return TLV_SHA256_RSA2048_CHECKSUM
    def size(self):
        return 256
    def get_checksum(self):
        print 'sha256:%s'%(self.sha256.hexdigest())
        with open(self.private_key_file_name) as f:
            key = f.read()
        rsakey = RSA.importKey(key)
        signer = pss.new(rsakey)
        sign = signer.sign(self.sha256)
        print 'sha256-rsa2048 checksum:%s'%(binascii.hexlify(sign))
        #print 'base 64:%s' %(base64.b64encode(sign))
        return sign
class crc256_checksum(checksum):
def name(self):
return 'crc256'
class none_checksum(checksum):
def name(self):
return 'none'
class tlv_type(object):
def set_writer(self, writer):
self.writer = writer
def name(self):
return ''
def get_value(self, value, tlv):
return [len(value), value]
def write(self, tlv):
[l, v] = self.get_value(tlv.firstChild.data, tlv)
if 0 == l:
return [OK, 0]
if l < 0:
return [RET_XML_CONFIG_ERR, 0]
self.writer.write(v, l)
return [OK, l]
class string_type(tlv_type):
def name(self):
return 'string'
class integer_type(tlv_type):
def name(self):
return 'integer'
def get_value(self, value, tlv):
value_len = tlv.getAttribute('value_len')
if value_len == '' or long(value_len, 0) > 4:
print 'value_len error'
return [RET_XML_CONFIG_ERR, '']
return [long(value_len, 0), long(value, 0)]
class software_header(object):
def __init__(self, config, writer):
self.writer = writer
self.config = config
self.software_checksum_offset = INVALID_OFFSET
self.head_len = 0
self.software_checksum = ''
ret = self.__write_config()
if ret != OK:
return
self.__update_head_checksum()
def __write_version(self, dom):
ver = dom.getElementsByTagName("version")
version_no = 0
if 0 == len(ver):
print "version tag not exist,using version No.0"
else:
version_no = int(ver[0].firstChild.data, 0)
self.writer.write(version_no)
def __write_tlvs(self, dom):
tlvs = dom.getElementsByTagName("tlv")
types = [string_type(), integer_type()]
for type in types:
type.set_writer(self.writer)
for tlv in tlvs:
values = tlv.getElementsByTagName("value")
if len(values) == 0:
print '%s has no value'%(tlv.nodeName)
continue
attribute = tlv.getAttribute("attribute")
if attribute == '':
print 'attribute empty'
return RET_XML_CONFIG_ERR
self.writer.write(long(attribute, 0), TLV_T_LEN)
len_pos = self.writer.tell()
self.writer.write(0, TLV_L_LEN)
values_len = 0
for value in values:
ret = ERR
name = ''
value_len = 0
for type in types:
name = value.getAttribute("type")
if name == type.name():
[ret, value_len] = type.write(value)
break
if ret != OK:
print 'value type %s err'%(name)
return ret
values_len += value_len
pos = self.writer.tell()
self.writer.write(values_len, TLV_L_LEN, len_pos)
self.writer.seek(pos)
return OK
def __init_checksum(self, dom):
checksum = dom.getElementsByTagName("checksum")
if len(checksum) == 0:
print "no checksum tag in config file"
return RET_XML_CONFIG_ERR
algs = [sha256_checksum(), crc256_checksum(), sha256_rsa2048_checksum(self.config.key_file), none_checksum()]
for alg in algs:
if alg.name() == checksum[0].firstChild.data:
self.checksum_alg = alg
return OK
print 'no checksum alg selected'
return RET_XML_CONFIG_ERR
def __write_config(self):
dom = minidom.parse(self.config.config_file)
#version No.
self.__write_version(dom);
#reserve header length
self.writer.write(0)
#reserve total length
self.writer.write(0)
#tlvs
self.__write_tlvs(dom)
ret = self.__init_checksum(dom)
if ret != OK:
return ret
size = self.checksum_alg.size()
if size > 0:
self.writer.write(self.checksum_alg.attribute(), TLV_T_LEN)
self.writer.write(size, TLV_L_LEN)
self.software_checksum_offset = self.writer.tell()
self.writer.write(bytearray(size))
self.head_len = self.writer.tell()
self.writer.write(self.head_len, 4, TLV_HEAD_LEN_POS)
self.writer.write(self.head_len + os.path.getsize(self.config.input_file), 4, TLV_TOTAL_LEN_POS)
self.writer.seek(self.head_len)
return OK
def write_software(self, buffer):
if self.software_checksum_offset == INVALID_OFFSET:
return
self.checksum_alg.update(buffer)
def write_software_end(self):
#write head and software sha256
if self.software_checksum_offset == INVALID_OFFSET:
return
software_checksum = self.checksum_alg.get_checksum()
self.writer.write(software_checksum, len(software_checksum), self.software_checksum_offset)
def __update_head_checksum(self):
read_len = self.head_len
while read_len > 0:
tmp_len = min(read_len, FILE_OPERATE_SIZE)
if read_len == self.head_len:
buffer = self.writer.read(tmp_len, 0)
else:
buffer = self.writer.read(tmp_len)
self.checksum_alg.update(buffer)
read_len -= tmp_len
class software_maker(object):
def __init__(self):
pass
def make_software(self, config):
ret = OK
input_fd = None
header = None
try:
writer = file_writer(config.get_output_file())
header = software_header(config, writer)
input_fd = open(config.input_file, 'rb')
while True:
buffer = input_fd.read(FILE_OPERATE_SIZE)
if len(buffer) == 0: # EOF or file empty. return hashes
header.write_software_end()
break
header.write_software(buffer)
writer.write(buffer)
except IOError as e:
print "IOError {0}".format(str(e))
ret = RET_FILE_OPERATION_ERR
traceback.print_exc()
except Exception as err:
            print 'exception happened! ' + str(err)
ret = RET_FILE_OPERATION_ERR
traceback.print_exc()
finally:
if not input_fd is None:
input_fd.close()
print 'make software %s to %s ,length %d, ret %d'%(config.input_file, config.output_file, os.path.getsize(config.input_file), ret)
return ret
def print_usage():
print "Usage: {} [-c config_xml_file] [-o output_file] -i input_file [-k key_file]".format(sys.argv[0])
def main():
config = config_info()
ret = config.parse_args()
if ret != OK:
print "parse args err ret %d"%(ret)
print_usage()
return ret
maker = software_maker()
ret = maker.make_software(config)
if ret != OK:
print_usage()
return ret
if __name__ == "__main__":
main()
```
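The script above serializes a big-endian header: a 4-byte version, a 4-byte header length at offset 4, a 4-byte total length at offset 8, then TLV records with a 2-byte type and a 2-byte length. As a rough illustration — not part of the original tool, and written in Python 3 rather than the script's Python 2 — a reader for that layout might look like:

```python
import struct

def parse_header(blob):
    # Header layout written by the script above (big-endian):
    #   [0:4] version, [4:8] header length, [8:12] total length,
    #   then TLV records: 2-byte type, 2-byte length, raw value bytes.
    version, head_len, total_len = struct.unpack_from(">III", blob, 0)
    tlvs = []
    pos = 12
    while pos < head_len:
        t, l = struct.unpack_from(">HH", blob, pos)
        pos += 4
        tlvs.append((t, blob[pos:pos + l]))
        pos += l
    return version, head_len, total_len, tlvs
```

Note that the final checksum TLV's value is computed over the whole header plus payload, so a real verifier would hash everything except that value field; that detail is omitted from this sketch.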
|
Myanmar–South Korea relations (; ) are the bilateral relations between the Republic of the Union of Myanmar and the Republic of Korea. The two countries established their diplomatic relations on 16 May 1975.
The history of contact between the two countries goes back to 1948, the year of the declaration of Burmese independence. Although the Burmese military junta and North Korea had cooperated over nuclear issues for the past few decades, Prime Minister U Nu initially favoured Syngman Rhee's South Korean government. During the Korean War, Burma donated 50,000 dollars worth of rice to Korea and balanced the interest of both Koreas, taking into consideration the position of China.
Commercial and trade relationships between the two countries grew rapidly after the 2010s, when Myanmar implemented political reforms, South Korea being Myanmar's sixth largest foreign investor by August 2020.
Historical background
Burma and South Korea already had contact in 1948 when Burma became independent. U Nu's government voted in favor of the motion in the UN that recognized Syngman Rhee's government as the legitimate government over all of Korea; however, Burma refused to recognize either state and wished to see a peaceful solution to the nascent Korean crisis. When the Korean War broke out, since Burma was seen as a country with a non-aligned orientation, it did not send troops to fight in Korea.
After the war, Burma began to develop contact with both Koreas in an unofficial setting. By 1961, there were non-ambassadorial consulates of both Koreas in Burma. Burma established formal diplomatic relations with both Koreas in May 1975, after Ne Win took power.
Rangoon bombing
On 9 October 1983, Chun Doo-hwan, fifth president of South Korea, made an official visit to the capital of Burma, Rangoon, and was the target of a failed assassination attempt, orchestrated by North Korea, at the Martyrs' Mausoleum. Although the president himself narrowly escaped, the bomb killed 21 people, including four cabinet ministers of South Korea.
Recent history
Beginning of Myanmar's political reforms: 2010–2015
South Korea's 10th president, Lee Myung-bak, visited Myanmar in May 2012, the first visit since the Rangoon bombing. It is reported that the then-Burmese President Thein Sein promised to follow the UN Security Council's resolution on North Korea's controversial nuclear and missile programs and said there was no nuclear cooperation with Pyongyang. Lee also met with then-opposition leader Aung San Suu Kyi at Sedona Hotel in Yangon on 15 May. She said that Burmese and South Koreans had things in common, such as both having to "take hard road to democratic leadership". In October of the same year, Thein Sein made a three-day visit to South Korea and also visited "unspecified military-related companies" in his tour.
In January 2013, Aung San Suu Kyi paid her first visit to South Korea and met with both President Lee and Park Geun-hye, president-elect at that time. She attended the opening ceremony of the 2013 Special Winter Olympics World Games in Pyeongchang and gave a keynote speech at the Global Development Summit, where she compared her life under house arrest to those with intellectual disabilities, saying that the "real revolution is the revolution of spirit". On 30 January, she received the 2004 Gwangju Prize for Human Rights in Gwangju, which was later cancelled by the foundation on 18 December 2018, due to her inaction on Rohingya issues.
Closer relations: 2016–2020
From 3 to 5 September 2019, President Moon Jae-in and First Lady Kim Jung-sook travelled Myanmar, as part of their three-country tour, during which both Myanmar and Korean governments planned to discuss "sustainable win-win growth". During his trip, the two governments signed a total of 10 memorandums of understanding (MOUs) and a framework agreement on infrastructure and investments. 600 school buses were donated to Myanmar by South Korea. Moon addressed a business forum and joined the groundbreaking ceremony of the Myanmar–South Korea Industrial Complex Zone, to be completed in 2024, in Yangon. After his trip, Seoul launched the Joint Commission for Trade and Industrial Cooperation to further bilateral trade, and the Korea Trade-Investment Promotion Agency (KOTRA) opened a “Korea Desk” to increase business opportunities.
At a state dinner hosted by President Win Myint in honour of the Korean president and his wife in Naypyidaw, President Moon remembered Burma's help to Korea in the Korean War, stating "Korea still has not forgotten its gratitude." On 3 September, the two first ladies – Kim Jung-sook and Cho Cho – had an informal conversation at the Presidential Palace, which marked the first of its kind, forty-four years after the two countries established relations.
From 25 to 27 November 2019, State Counsellor Aung San Suu Kyi visited Busan to attend the ASEAN-ROK Commemorative Summit and 1st Mekong-ROK Summit and delivered keynote speeches at the ASEAN-ROK Culture Innovation Summit and the ASEAN-ROK CEO Summit. She also held a bilateral meeting with President Moon where they signed 4 MOUs on cooperation on fisheries; technical and vocational training; the environment; and development of digital economy, higher education, smart cities and connectivity.
In November 2020, the year which marked the 45th anniversary of the two countries' bilateral relations, South Korean deputy foreign minister Kim Gunn visited Myanmar and showed South Korea's expectations for closer ties with Myanmar in the second term of Aung San Suu Kyi's NLD government.
After 2021 Myanmar coup d'état
In response to the February coup in Myanmar, on 26 February 2021, the South Korean National Assembly passed a resolution condemning the coup. Since February 2021, the Myanmar community in South Korea has staged coup protests which were also joined by Korean citizens ranging from students to monks. In March, South Korea stopped military support of Myanmar, and considered limiting other forms of assistance. Citizens of Myanmar in South Korea were granted exemptions which allowed them to extend their visits.
On 26 August, Cheongwadae issued a statement saying, "The [South Korean] government will continue to make contributions, going forward, so that the Myanmar situation can be resolved in a direction to meet the aspirations of its people." On the 66th Memorial Day on 6 June, President Moon delivered a speech predicting that "Spring in Myanmar" would surely come, like the Gwangju Uprising made way for democracy in South Korea.
On 2 September, when the National Unity Government of Myanmar established a representative office – appointing Yan Naing Htun as representative – in Seoul, South Korea became the first country in Asia to host such an office.
Cultural exchange
During her state visit to Busan in November 2019, State Counsellor Aung San Suu Kyi, who considered herself a fan of the group, committed to having Project K, Myanmar's only K-pop style boy band, trained in South Korea. In September 2020, the band traveled to South Korea to be trained for a month in the lead-up to performing at the Asia Song Festival, held in October 2020 in Gyeongju. In December 2020, South Korea's Ministry of Culture, Sports and Tourism requested 1.5 billion won in funding to support the K-pop training of Asian artists, citing Project K. The National Assembly ultimately approved 400 million won to fund the program.
Starring and Wutt Hmone Shwe Yi as main leads, A Flower Above the Clouds (2019) was the first Korean-Myanmar joint film; the film was mentioned in President Moon's remarks at the state dinner in Naypyidaw.
On 9 October 2020, the Korean Embassy in Yangon and the Ministry of Religious Affairs and Culture of Myanmar put on a joint cultural performance, which was aired on MRTV, in commemoration of 45th anniversary of the two countries' bilateral relations.
See also
Foreign relations of Myanmar
Foreign relations of South Korea
References
External links
Myanmar embassy in Seoul
South Korean embassy in Yangon
Myanmar–South Korea relations
Korea, South
Myanmar
1975 establishments in Burma
1975 establishments in South Korea
|
```objective-c
//
// WKWebView+Post.h
// TLChat
//
// Created by on 17/9/10.
//
#import <WebKit/WebKit.h>
@interface WKWebView (Post)
@end
```
|
```c++
// Use, modification and distribution are subject to the
// LICENSE_1_0.txt or copy at path_to_url
//
# include <pch.hpp>
#ifndef BOOST_MATH_TR1_SOURCE
# define BOOST_MATH_TR1_SOURCE
#endif
#include <boost/math/tr1.hpp>
#include <boost/math/special_functions/zeta.hpp>
#include "c_policy.hpp"
extern "C" double BOOST_MATH_TR1_DECL boost_riemann_zeta BOOST_PREVENT_MACRO_SUBSTITUTION(double x) BOOST_MATH_C99_THROW_SPEC
{
return c_policies::zeta BOOST_PREVENT_MACRO_SUBSTITUTION(x);
}
```
|
```c
/* $OpenBSD: wscons_machdep.c,v 1.7 2004/10/05 19:27:55 mickey Exp $ */
/*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR OR HIS RELATIVES BE LIABLE FOR ANY DIRECT,
* INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
* SERVICES; LOSS OF MIND, USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/conf.h>
#include <dev/cons.h>
#include "wsdisplay.h"
#if NWSDISPLAY > 0
#include <dev/wscons/wsdisplayvar.h>
#endif
#include "wskbd.h"
#if NWSKBD > 0
#include <dev/wscons/wskbdvar.h>
#endif
cons_decl(ws);
void
wscnprobe(struct consdev *cp)
{
/*
* Due to various device probe restrictions, the wscons console
* can never be enabled early during boot.
* It will be enabled as soon as enough wscons components get
* attached.
* So do nothing there, the switch will occur in
* wsdisplay_emul_attach() later.
*/
}
void
wscninit(struct consdev *cp)
{
}
void
wscnputc(dev_t dev, int i)
{
#if NWSDISPLAY > 0
wsdisplay_cnputc(dev, i);
#endif
}
int
wscngetc(dev_t dev)
{
#if NWSKBD > 0
return (wskbd_cngetc(dev));
#else
return (0);
#endif
}
void
wscnpollc(dev_t dev, int on)
{
#if NWSKBD > 0
wskbd_cnpollc(dev, on);
#endif
}
```
|
Direct materials cost is the cost of direct materials that can be easily identified with the unit of production. For example, the cost of glass is a direct materials cost in light bulb manufacturing.
The manufacture of products or goods requires materials as the prime element. In general, these materials are divided into two categories: direct materials and indirect materials.
Direct materials are also called productive materials, raw materials, raw stock, stores, or simply materials without any descriptive title.
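The distinction matters mainly for the variance calculations listed under "See also". As a quick illustration (all figures hypothetical), the standard direct material price and usage variance formulas can be computed directly:

```python
# Standard cost-accounting formulas for direct material variances;
# the glass/light-bulb figures below are hypothetical.

def price_variance(actual_price, standard_price, actual_qty):
    # (AP - SP) x AQ: positive means materials cost more than planned.
    return (actual_price - standard_price) * actual_qty

def usage_variance(actual_qty, standard_qty, standard_price):
    # (AQ - SQ) x SP: positive means more material was used than planned.
    return (actual_qty - standard_qty) * standard_price

# Example: glass budgeted at $2.00/kg for 100 kg,
# actually 110 kg bought at $2.10/kg.
pv = price_variance(2.10, 2.00, 110)   # about 11.0, unfavourable
uv = usage_variance(110, 100, 2.00)    # 20.0, unfavourable
```

The sum of the two variances equals the total direct material variance against the flexed budget.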
See also
Variance analysis (accounting)
Direct material total variance
Direct material price variance
Direct material usage variance
References
Costs
|