Bevare Gud vår Kung (Swedish, 'God Save Our King') was the first royal anthem of Sweden. Written in 1805 by Abraham Niclas Edelcrantz (1754–1821) to honor King Gustav IV Adolf, it was set to the melody of the British anthem "God Save the King". The song would serve as the de facto royal anthem from 1805 to 1893, when Kungssången was adopted as the official royal anthem.
Lyrics
See also
Kungssången
References
National symbols of Sweden
Royal anthems
Swedish monarchy
European anthems
1805 songs
God Save the King
Swedish patriotic songs
|
Lewis Lake is a lake in Kanabec County, in the U.S. state of Minnesota.
Lewis Lake bears the name of a pioneer who settled there.
See also
List of lakes in Minnesota
References
Lakes of Minnesota
Lakes of Kanabec County, Minnesota
|
Mary Ethel Seaton (25 July 1887, Rangoon – 17 June 1974, Oxford) was a British scholar of English literature, specialising in the late Middle Ages. She was a Fellow of the Royal Society of Literature, and twice winner of the Rose Mary Crawshay Prize (1921, 1952).
Life
Mary Ethel Seaton was born in Rangoon, Burma, to Francis Lambert Seaton, a member of the East India Company, and Fanny Warner. She attended the Ladies' College, Guernsey, and Portsmouth High School before taking a scholarship at Girton College, Cambridge. She received firsts in the Medieval and Modern Languages Tripos in 1909 and 1910.
Seaton was a lecturer at Girton College from 1910 to 1916, after which she worked in the censorship office in London until 1918. She began her master's thesis at the University of London in 1920, on the topic of literary relations between England and Scandinavia in the 17th century. This work, titled A Study of the Relations between England and the Scandinavian Countries in the Seventeenth Century Based upon the Evidence of Acquaintance in English writers with Scandinavian Literatures and Myths, was awarded the 1921 Rose Mary Crawshay Prize.
She became a Fellow of St Hugh's College, Oxford in 1925, and a University Lecturer in 1939.
In 1951, Seaton was awarded a Doctor of Letters. Seaton's edition of Abraham Fraunce's Arcadian Rhetorike won the Rose Mary Crawshay Prize for 1952.
After retirement, Seaton continued her research into fifteenth-century English literature. She examined the connections between Chaucer, Wyatt and Richard Roos, and was particularly appreciated for her analysis of Roos's anagrams and her superlative reconstruction of life at the fifteenth-century English court. Her contention that Roos, a "Lancastrian poet", was the writer of verses attributed to Chaucer (The Romaunt of the Rose) and Wyatt proved controversial in scholarly circles.
Seaton died in Oxford in 1974. She bequeathed her estate to her college to endow the Fanny Seaton Schoolmistress Studentship.
Selected works
References
Bibliography
People educated at Portsmouth High School (Southsea)
1974 deaths
Alumni of Girton College, Cambridge
Fellows of St Hugh's College, Oxford
Rose Mary Crawshay Prize winners
British academics of English literature
|
Caladenia corynephora, commonly known as the club-lipped spider orchid, is a plant in the orchid family Orchidaceae and is endemic to the south-west of Western Australia. It has a single erect, hairy leaf and one or two greenish-yellow and red flowers which have a labellum with a club-like tip. It is the only Western Australian caladenia with a clubbed labellum.
Description
Caladenia corynephora is a terrestrial, perennial, deciduous herb with an underground tuber and a single erect, hairy leaf long and wide. One or two flowers long and wide are borne on a spike high. The dorsal sepal is erect and the lateral sepals and petals are downswept, greenish-yellow with red stripes along their centres, and their tips are covered with glandular hairs. The labellum is greenish-yellow with a club-shaped, red tip and a fringe of very long, narrow segments. The centre line of the labellum has four or more rows of red calli. Flowering occurs between late November and early February.
Taxonomy and naming
Caladenia corynephora was first formally described in 1971 by Alex George from a specimen collected on the banks of the Donnelly River near Pemberton. The description was published in Nuytsia. The specific epithet (corynephora) is derived from the Ancient Greek words koryne meaning "club" or "mace" and phero meaning "to bear" or "to carry", referring to the clubbed labellum of this species.
Distribution and habitat
The club-lipped spider orchid grows in habitats including winter-wet swamps, on granite outcrops and in karri forest between Albany and Margaret River in the Esperance Plains, Jarrah Forest, Swan Coastal Plain and Warren biogeographic regions.
Conservation
Caladenia corynephora is classified as "not threatened" by the Western Australian Government Department of Parks and Wildlife.
References
corynephora
Endemic orchids of Australia
Orchids of Western Australia
Plants described in 2001
|
```typescript
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     path_to_url
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
// A basic subset.
export * from "./create";
export * from "./flatten";
export * from "./group_and_combine";
export * from "./pardo";
export * from "./transform";
export * from "./window";
export * from "./windowings";
import { requireForSerialization } from "../serialization";
import { packageName } from "../utils/packageJson";
export { impulse, withRowCoder } from "./internal";
requireForSerialization(`${packageName}/transforms`, exports);
```
|
Phyllostachys vivax, the Chinese timber bamboo, is a species of flowering plant in the bamboo subfamily of the grass family Poaceae, native to China.
It is a tall, robust evergreen plant growing quickly to or more, with strong green canes to in diameter, and topped by drooping leaves. Sources vary as to the maximum size, with one source quoting . Mature canes turn yellow.
Initially forming clumps, the plants will eventually establish large thickets via underground running rhizomes, unless artificially restricted. The form P. vivax f. aureocaulis from eastern China is frequently found in cultivation, and has more vivid yellow canes striped with green. It is suitable for parks or large gardens, and is hardy down to at least . It has been given the Royal Horticultural Society’s Award of Garden Merit.
The Latin specific epithet vivax means “long-lived”.
References
Flora of China
vivax
|
An Indian burn, also known as a snake bite or, in the United Kingdom, a Chinese burn, is a pain-inducing prank in which the prankster grips the victim's forearm or wrist with both hands and twists the skin in opposite directions, causing an unpleasant burning sensation. The prank is popular in school settings.
Terminology
The prank is known by various names in the United States, such as Indian sunburn, Indian rug burn, Chinese wrist burn, and the snake bite. In the United Kingdom, it is known as a Chinese burn.
Variations
A variation of the prank can be performed with yarn, which is rubbed back and forth against the skin in a motion similar to that used to start a friction fire in a small pile of dry hay.
Criticism
Some Native Americans disapprove of the use of the term Indian burn, along with other vocabulary formed with the prefix "Indian-", such as Indian corn, Indian summer and Indian giver, among others.
Statistics
According to a poll carried out in the United Kingdom, with a sample size of 1,844 adults, 27% recalled receiving Indian burns in secondary school.
See also
List of practical joke topics
Wedgie
References
Abuse
Harassment and bullying
Pain
Practical jokes
Suffering
Native American topics
|
```jsx
import VarItem from "components/VarItem";
import { CSSVAR_KEYS } from "root/constants";
import cn from "root/App.module.css";
function Progress() {
const ids = CSSVAR_KEYS.progress.map((i) => i.id);
return (
<div className={cn.items}>
{ids.map((id) => {
return <VarItem key={id} id={id} />;
})}
</div>
);
}
export default Progress;
```
|
Lemes is a surname. Notable people with the surname include:
Bernardo Lemes (born 2002), Brazilian footballer
Caíque Venâncio Lemes (born 1993), Brazilian footballer
Gonzalo Lemes (born 1980), Uruguayan footballer
Manoel Cristiano Ribeiro Lemes (born 1989), Brazilian footballer
|
Peder Lauridsen Kylling (c. 1640 – 1696) was a 17th-century Danish botanist.
Biography
He was born in Assens and began studies at the University of Copenhagen in 1660. He graduated in theology in 1666 and was called as parish minister. However, for reasons now unknown, the call was withdrawn shortly afterward. Kylling then engaged in studies of botany.
His best known work is the Viridarium Danicum ("Danish Garden"), published in 1688. This work contains an alphabetic list of plant species and their places of occurrence in the crown lands of the Danish king, mainly from Zealand, but also from Jutland and Slesvig. More than 1,100 plant species were mentioned in the book.
Some of the entries in the Viridarium Danicum are known to have been contributed by Henrik Gerner who was then the priest in Birkerød.
The species list was later critically reviewed by M. T. Lange. Kylling is known to have worked on an enlarged edition, which however was never published. According to some sources, that manuscript was found in the library of Albrecht von Haller.
The plant genus Kyllinga (Cyperaceae) was named in his honour by botanist Christen Friis Rottbøll.
References
Citation
Bricka, C. F., Dansk Biografisk Lexikon, vol. 5, p. 609 (Henrik Gerner), via Projekt Runeberg
External links
Viridarium Danicum
1640s births
1696 deaths
17th-century Danish botanists
People from Assens Municipality
University of Copenhagen alumni
|
```csharp
namespace Ductus.FluentDocker.Model.Stacks
{
public sealed class StackLsResponse
{
public string Name { get; set; }
public int Services { get; set; }
public Orchestrator Orchestrator { get; set; }
public string Namespace { get; set; }
public static Orchestrator ToOrchestrator(string value)
{
if (string.IsNullOrEmpty(value))
return Orchestrator.All;
value = value.ToLower();
if (value.Equals("kubernetes"))
return Orchestrator.Kubernetes;
return value.Equals("swarm") ? Orchestrator.Swarm : Orchestrator.All;
}
}
}
```
|
```scala
//#test-dependencies
libraryDependencies ++= Seq(
lagomScaladslTestKit,
"org.scalatest" %% "scalatest" % "3.0.1" % Test
)
//#test-dependencies
//#scala-test-val
val scalaTest = "org.scalatest" %% "scalatest" % "3.0.1" % "test"
//#scala-test-val
//#test-dependencies-val
libraryDependencies ++= Seq(
lagomScaladslTestKit,
scalaTest
)
//#test-dependencies-val
//#fork
lazy val `hello-impl` = (project in file("hello-impl"))
.enablePlugins(LagomScala)
.settings(lagomForkedTestSettings: _*)
.settings(
// ...
)
//#fork
```
|
```go
/*
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

     path_to_url

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package lru implements an LRU cache.
package lru
import "container/list"
// Cache is an LRU cache. It is not safe for concurrent access.
type Cache struct {
// MaxEntries is the maximum number of cache entries before
// an item is evicted. Zero means no limit.
MaxEntries int
// OnEvicted optionally specifies a callback function to be
// executed when an entry is purged from the cache.
OnEvicted func(key Key, value interface{})
ll *list.List
cache map[interface{}]*list.Element
}
// A Key may be any value that is comparable. See path_to_url#Comparison_operators
type Key interface{}
type entry struct {
key Key
value interface{}
}
// New creates a new Cache.
// If maxEntries is zero, the cache has no limit and it's assumed
// that eviction is done by the caller.
func New(maxEntries int) *Cache {
return &Cache{
MaxEntries: maxEntries,
ll: list.New(),
cache: make(map[interface{}]*list.Element),
}
}
// Add adds a value to the cache.
func (c *Cache) Add(key Key, value interface{}) {
if c.cache == nil {
c.cache = make(map[interface{}]*list.Element)
c.ll = list.New()
}
if ee, ok := c.cache[key]; ok {
c.ll.MoveToFront(ee)
ee.Value.(*entry).value = value
return
}
ele := c.ll.PushFront(&entry{key, value})
c.cache[key] = ele
if c.MaxEntries != 0 && c.ll.Len() > c.MaxEntries {
c.RemoveOldest()
}
}
// Get looks up a key's value from the cache.
func (c *Cache) Get(key Key) (value interface{}, ok bool) {
if c.cache == nil {
return
}
if ele, hit := c.cache[key]; hit {
c.ll.MoveToFront(ele)
return ele.Value.(*entry).value, true
}
return
}
// Remove removes the provided key from the cache.
func (c *Cache) Remove(key Key) {
if c.cache == nil {
return
}
if ele, hit := c.cache[key]; hit {
c.removeElement(ele)
}
}
// RemoveOldest removes the oldest item from the cache.
func (c *Cache) RemoveOldest() {
if c.cache == nil {
return
}
ele := c.ll.Back()
if ele != nil {
c.removeElement(ele)
}
}
func (c *Cache) removeElement(e *list.Element) {
c.ll.Remove(e)
kv := e.Value.(*entry)
delete(c.cache, kv.key)
if c.OnEvicted != nil {
c.OnEvicted(kv.key, kv.value)
}
}
// Len returns the number of items in the cache.
func (c *Cache) Len() int {
if c.cache == nil {
return 0
}
return c.ll.Len()
}
// Clear purges all stored items from the cache.
func (c *Cache) Clear() {
if c.OnEvicted != nil {
for _, e := range c.cache {
kv := e.Value.(*entry)
c.OnEvicted(kv.key, kv.value)
}
}
c.ll = nil
c.cache = nil
}
```
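As a quick illustration of how the Add/Get API above interacts with eviction, here is a minimal sketch. A trimmed copy of the cache (Add, Get, and eviction only) is inlined so the demo compiles on its own; the `main` wrapper and the sample keys are illustrative and not part of the original package.

```go
// Demo of LRU behavior: a trimmed, inlined copy of the cache above
// (Add, Get, and eviction only), so this snippet compiles standalone.
package main

import (
	"container/list"
	"fmt"
)

// Key mirrors the package's comparable-key convention.
type Key interface{}

type entry struct {
	key   Key
	value interface{}
}

// Cache is a cut-down version of the Cache type in the listing.
type Cache struct {
	MaxEntries int
	OnEvicted  func(key Key, value interface{})
	ll         *list.List
	cache      map[interface{}]*list.Element
}

func New(maxEntries int) *Cache {
	return &Cache{
		MaxEntries: maxEntries,
		ll:         list.New(),
		cache:      make(map[interface{}]*list.Element),
	}
}

// Add inserts or refreshes a key, evicting the back of the list on overflow.
func (c *Cache) Add(key Key, value interface{}) {
	if ee, ok := c.cache[key]; ok {
		c.ll.MoveToFront(ee)
		ee.Value.(*entry).value = value
		return
	}
	c.cache[key] = c.ll.PushFront(&entry{key, value})
	if c.MaxEntries != 0 && c.ll.Len() > c.MaxEntries {
		back := c.ll.Back() // least recently used entry
		c.ll.Remove(back)
		kv := back.Value.(*entry)
		delete(c.cache, kv.key)
		if c.OnEvicted != nil {
			c.OnEvicted(kv.key, kv.value)
		}
	}
}

// Get returns a value and marks the entry as most recently used.
func (c *Cache) Get(key Key) (interface{}, bool) {
	if ele, hit := c.cache[key]; hit {
		c.ll.MoveToFront(ele)
		return ele.Value.(*entry).value, true
	}
	return nil, false
}

func main() {
	c := New(2)
	c.OnEvicted = func(k Key, v interface{}) {
		fmt.Printf("evicted %v=%v\n", k, v)
	}
	c.Add("a", 1)
	c.Add("b", 2)
	c.Get("a")    // touch "a": "b" is now least recently used
	c.Add("c", 3) // exceeds MaxEntries(2): evicts "b"

	_, ok := c.Get("b")
	fmt.Println("b present:", ok) // b present: false
	v, _ := c.Get("a")
	fmt.Println("a =", v) // a = 1
}
```

The key point is that Get promotes an entry to the front of the internal list, so the entry evicted on overflow is the least recently touched one, not simply the oldest insertion.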
|
Duncan Stewart (born 24 May 1984) is a Scottish professional golfer from Grantown-on-Spey. He turned professional in 2007 and played on mini-tours, winning three events on the PGA EuroPro Tour from 2011 to 2012.
In 2012, he finished second in the EuroPro Tour Order of Merit, to earn a place on the Challenge Tour for 2013. In his first full season, he finished in 20th place in the Challenge Tour rankings, and was able to play 10 events on the European Tour in 2014.
He won the 2016 Challenge de Madrid on the Challenge Tour.
Professional wins (4)
Challenge Tour wins (1)
PGA EuroPro Tour wins (3)
Team appearances
Professional
World Cup (representing Scotland): 2016
See also
2016 Challenge Tour graduates
References
External links
Scottish male golfers
European Tour golfers
Jacksonville University alumni
Golfers from Highland (council area)
1984 births
Living people
|
Yale Cancer Center (YCC) was founded in 1974 as a result of an act of Congress in 1971, which declared the nation's "war on cancer". It is one of a network of 54 Comprehensive Cancer Centers designated by the National Cancer Institute (NCI). Currently directed by Dr. Eric Winer, the Cancer Center brings together the resources of the Yale School of Medicine (YSM), Yale New Haven Hospital (YNHH), and the Yale School of Public Health (YSPH).
Overview and history
In 1942, Louis S. Goodman, M.D., and Alfred Gilman, Ph.D., in the Yale Department of Pharmacology were the first scientists to use nitrogen mustard, the first alkylating anticancer agent, as chemotherapy to treat cancer in a patient.
During a talk for the Beaumont Medical Club in March 2005, David S. Fischer, M.D., clinical professor of medicine, said, "This was the first patient in the world treated by chemotherapy ... This was proof that cancer could be treated by chemicals."
This initial success led to the development of the world's first multi-center clinical trials in cancer chemotherapy.
Clinical care
Clinical care is led by Roy S. Herbst, Chief of Medical Oncology and Associate Director for Translational Research. Yale medical oncologists care for patients at Smilow Cancer Hospital. To organize patient care, Yale Cancer Center and Smilow Cancer Hospital have developed 13 multidisciplinary programs that allow physicians and specialists to focus their expertise on specific types of cancer.
Research
Yale School of Medicine was home to the country’s first university-based Medical Oncology Section, and its faculty has since pioneered many breakthrough cancer treatments.
Basic research in cancer is a hallmark of Yale Cancer Center, which draws approximately $96 million in cancer research funding to Yale every year.
The research portfolio of Yale Cancer Center comprises six research programs:
Cancer Immunology
Cancer Prevention and Control
Cancer Signaling Networks
Developmental Therapeutics
Genetics, Genomics, and Epigenetics
Radiobiology and Genome Integrity
References
External links
Cancer organizations based in the United States
1974 establishments in Connecticut
Medical research institutes in the United States
Yale University buildings
Yale School of Medicine
NCI-designated cancer centers
Research institutes in Connecticut
|
```c
#ifndef _NUVOTON_NPCX_SOC_PINS_H_
#define _NUVOTON_NPCX_SOC_PINS_H_
#include <stdint.h>
#include "reg/reg_def.h"
#ifdef __cplusplus
extern "C" {
#endif
/**
* @brief NPCX pin-mux configuration structure
*
* Used to indicate the device's corresponding DEVALT register/bit for
* pin-muxing and its polarity to enable alternative functionality.
*/
struct npcx_alt {
uint8_t group;
uint8_t bit:3;
uint8_t inverted:1;
uint8_t reserved:4;
};
/**
* @brief NPCX low-voltage configuration structure
*
* Used to indicate the device's corresponding LV_GPIO_CTL register/bit for
* low-voltage detection.
*/
struct npcx_lvol {
uint8_t ctrl:5; /**< Related register index for low-voltage conf. */
uint8_t bit:3; /**< Related register bit for low-voltage conf. */
};
/**
* @brief Select i2c port pads of i2c controller
*
* @param controller i2c controller device
* @param port index for i2c port pads
*/
void npcx_pinctrl_i2c_port_sel(int controller, int port);
/**
* @brief Force the internal SPI flash write-protect pin (WP) to low level to
* protect the flash Status registers.
*/
int npcx_pinctrl_flash_write_protect_set(void);
/**
* @brief Get write protection status
*
* @return 1 if write protection is set, 0 otherwise.
*/
bool npcx_pinctrl_flash_write_protect_is_set(void);
/**
* @brief Enable low-voltage input detection
*
* @param lvol_ctrl Related register index for low-voltage detection
* @param lvol_bit Related register bit for low-voltage detection
* @param enable True to enable low-voltage input detection, false to disable.
*/
void npcx_lvol_set_detect_level(int lvol_ctrl, int lvol_bit, bool enable);
/**
* @brief Get status of low-voltage input detection
*
* @param lvol_ctrl Related register index for low-voltage detection
* @param lvol_bit Related register bit for low-voltage detection
* @return True if the low-voltage power supply is enabled, false otherwise.
*/
bool npcx_lvol_get_detect_level(int lvol_ctrl, int lvol_bit);
/**
* @brief Select the host interface type
*
* @param hif_type host interface type
*/
void npcx_host_interface_sel(enum npcx_hif_type hif_type);
#ifdef __cplusplus
}
#endif
#endif /* _NUVOTON_NPCX_SOC_PINS_H_ */
```
|
```bash
PKG_NAME="libretro-2048"
PKG_VERSION="331c1de588ed8f8c370dcbc488e5434a3c09f0f2"
PKG_SHA256=your_sha256_hash
PKG_LICENSE="Public domain"
PKG_SITE="path_to_url"
PKG_URL="path_to_url{PKG_VERSION}.tar.gz"
PKG_DEPENDS_TARGET="toolchain"
PKG_LONGDESC="Port of 2048 puzzle game to the libretro API."
PKG_TOOLCHAIN="make"
PKG_MAKE_OPTS_TARGET="-f Makefile.libretro"
PKG_LIBNAME="2048_libretro.so"
PKG_LIBPATH="${PKG_LIBNAME}"
PKG_LIBVAR="2048_LIB"
makeinstall_target() {
mkdir -p ${SYSROOT_PREFIX}/usr/lib/cmake/${PKG_NAME}
cp ${PKG_LIBPATH} ${SYSROOT_PREFIX}/usr/lib/${PKG_LIBNAME}
echo "set(${PKG_LIBVAR} ${SYSROOT_PREFIX}/usr/lib/${PKG_LIBNAME})" >${SYSROOT_PREFIX}/usr/lib/cmake/${PKG_NAME}/${PKG_NAME}-config.cmake
}
```
|
```python
#!/usr/bin/env python
# -*- coding: latin-1 -*-
''' Nose test generators
Need function load / save / roundtrip tests
'''
from __future__ import division, print_function, absolute_import
import os
from os.path import join as pjoin, dirname
from glob import glob
from io import BytesIO
from tempfile import mkdtemp
from scipy._lib.six import u, text_type, string_types
import warnings
import shutil
import gzip
from numpy.testing import (assert_array_equal, assert_array_almost_equal,
assert_equal, assert_raises, run_module_suite,
assert_)
import numpy as np
from numpy import array
import scipy.sparse as SP
import scipy.io.matlab.byteordercodes as boc
from scipy.io.matlab.miobase import matdims, MatWriteError, MatReadError
from scipy.io.matlab.mio import (mat_reader_factory, loadmat, savemat, whosmat)
from scipy.io.matlab.mio5 import (MatlabObject, MatFile5Writer, MatFile5Reader,
MatlabFunction, varmats_from_mat,
to_writeable, EmptyStructMarker)
from scipy.io.matlab import mio5_params as mio5p
test_data_path = pjoin(dirname(__file__), 'data')
def mlarr(*args, **kwargs):
"""Convenience function to return matlab-compatible 2D array."""
arr = np.array(*args, **kwargs)
arr.shape = matdims(arr)
return arr
# Define cases to test
theta = np.pi/4*np.arange(9,dtype=float).reshape(1,9)
case_table4 = [
{'name': 'double',
'classes': {'testdouble': 'double'},
'expected': {'testdouble': theta}
}]
case_table4.append(
{'name': 'string',
'classes': {'teststring': 'char'},
'expected': {'teststring':
array([u('"Do nine men interpret?" "Nine men," I nod.')])}
})
case_table4.append(
{'name': 'complex',
'classes': {'testcomplex': 'double'},
'expected': {'testcomplex': np.cos(theta) + 1j*np.sin(theta)}
})
A = np.zeros((3,5))
A[0] = list(range(1,6))
A[:,0] = list(range(1,4))
case_table4.append(
{'name': 'matrix',
'classes': {'testmatrix': 'double'},
'expected': {'testmatrix': A},
})
case_table4.append(
{'name': 'sparse',
'classes': {'testsparse': 'sparse'},
'expected': {'testsparse': SP.coo_matrix(A)},
})
B = A.astype(complex)
B[0,0] += 1j
case_table4.append(
{'name': 'sparsecomplex',
'classes': {'testsparsecomplex': 'sparse'},
'expected': {'testsparsecomplex': SP.coo_matrix(B)},
})
case_table4.append(
{'name': 'multi',
'classes': {'theta': 'double', 'a': 'double'},
'expected': {'theta': theta, 'a': A},
})
case_table4.append(
{'name': 'minus',
'classes': {'testminus': 'double'},
'expected': {'testminus': mlarr(-1)},
})
case_table4.append(
{'name': 'onechar',
'classes': {'testonechar': 'char'},
'expected': {'testonechar': array([u('r')])},
})
# Cell arrays stored as object arrays
CA = mlarr(( # tuple for object array creation
[],
mlarr([1]),
mlarr([[1,2]]),
mlarr([[1,2,3]])), dtype=object).reshape(1,-1)
CA[0,0] = array(
[u('This cell contains this string and 3 arrays of increasing length')])
case_table5 = [
{'name': 'cell',
'classes': {'testcell': 'cell'},
'expected': {'testcell': CA}}]
CAE = mlarr(( # tuple for object array creation
mlarr(1),
mlarr(2),
mlarr([]),
mlarr([]),
mlarr(3)), dtype=object).reshape(1,-1)
objarr = np.empty((1,1),dtype=object)
objarr[0,0] = mlarr(1)
case_table5.append(
{'name': 'scalarcell',
'classes': {'testscalarcell': 'cell'},
'expected': {'testscalarcell': objarr}
})
case_table5.append(
{'name': 'emptycell',
'classes': {'testemptycell': 'cell'},
'expected': {'testemptycell': CAE}})
case_table5.append(
{'name': 'stringarray',
'classes': {'teststringarray': 'char'},
'expected': {'teststringarray': array(
[u('one '), u('two '), u('three')])},
})
case_table5.append(
{'name': '3dmatrix',
'classes': {'test3dmatrix': 'double'},
'expected': {
'test3dmatrix': np.transpose(np.reshape(list(range(1,25)), (4,3,2)))}
})
st_sub_arr = array([np.sqrt(2),np.exp(1),np.pi]).reshape(1,3)
dtype = [(n, object) for n in ['stringfield', 'doublefield', 'complexfield']]
st1 = np.zeros((1,1), dtype)
st1['stringfield'][0,0] = array([u('Rats live on no evil star.')])
st1['doublefield'][0,0] = st_sub_arr
st1['complexfield'][0,0] = st_sub_arr * (1 + 1j)
case_table5.append(
{'name': 'struct',
'classes': {'teststruct': 'struct'},
'expected': {'teststruct': st1}
})
CN = np.zeros((1,2), dtype=object)
CN[0,0] = mlarr(1)
CN[0,1] = np.zeros((1,3), dtype=object)
CN[0,1][0,0] = mlarr(2, dtype=np.uint8)
CN[0,1][0,1] = mlarr([[3]], dtype=np.uint8)
CN[0,1][0,2] = np.zeros((1,2), dtype=object)
CN[0,1][0,2][0,0] = mlarr(4, dtype=np.uint8)
CN[0,1][0,2][0,1] = mlarr(5, dtype=np.uint8)
case_table5.append(
{'name': 'cellnest',
'classes': {'testcellnest': 'cell'},
'expected': {'testcellnest': CN},
})
st2 = np.empty((1,1), dtype=[(n, object) for n in ['one', 'two']])
st2[0,0]['one'] = mlarr(1)
st2[0,0]['two'] = np.empty((1,1), dtype=[('three', object)])
st2[0,0]['two'][0,0]['three'] = array([u('number 3')])
case_table5.append(
{'name': 'structnest',
'classes': {'teststructnest': 'struct'},
'expected': {'teststructnest': st2}
})
a = np.empty((1,2), dtype=[(n, object) for n in ['one', 'two']])
a[0,0]['one'] = mlarr(1)
a[0,0]['two'] = mlarr(2)
a[0,1]['one'] = array([u('number 1')])
a[0,1]['two'] = array([u('number 2')])
case_table5.append(
{'name': 'structarr',
'classes': {'teststructarr': 'struct'},
'expected': {'teststructarr': a}
})
ODT = np.dtype([(n, object) for n in
['expr', 'inputExpr', 'args',
'isEmpty', 'numArgs', 'version']])
MO = MatlabObject(np.zeros((1,1), dtype=ODT), 'inline')
m0 = MO[0,0]
m0['expr'] = array([u('x')])
m0['inputExpr'] = array([u(' x = INLINE_INPUTS_{1};')])
m0['args'] = array([u('x')])
m0['isEmpty'] = mlarr(0)
m0['numArgs'] = mlarr(1)
m0['version'] = mlarr(1)
case_table5.append(
{'name': 'object',
'classes': {'testobject': 'object'},
'expected': {'testobject': MO}
})
fp_u_str = open(pjoin(test_data_path, 'japanese_utf8.txt'), 'rb')
u_str = fp_u_str.read().decode('utf-8')
fp_u_str.close()
case_table5.append(
{'name': 'unicode',
'classes': {'testunicode': 'char'},
'expected': {'testunicode': array([u_str])}
})
case_table5.append(
{'name': 'sparse',
'classes': {'testsparse': 'sparse'},
'expected': {'testsparse': SP.coo_matrix(A)},
})
case_table5.append(
{'name': 'sparsecomplex',
'classes': {'testsparsecomplex': 'sparse'},
'expected': {'testsparsecomplex': SP.coo_matrix(B)},
})
case_table5.append(
{'name': 'bool',
'classes': {'testbools': 'logical'},
'expected': {'testbools':
array([[True], [False]])},
})
case_table5_rt = case_table5[:]
# Inline functions can't be concatenated in matlab, so RT only
case_table5_rt.append(
{'name': 'objectarray',
'classes': {'testobjectarray': 'object'},
'expected': {'testobjectarray': np.repeat(MO, 2).reshape(1,2)}})
def types_compatible(var1, var2):
"""Check if types are same or compatible.
0-D numpy scalars are compatible with bare python scalars.
"""
type1 = type(var1)
type2 = type(var2)
if type1 is type2:
return True
if type1 is np.ndarray and var1.shape == ():
return type(var1.item()) is type2
if type2 is np.ndarray and var2.shape == ():
return type(var2.item()) is type1
return False
def _check_level(label, expected, actual):
""" Check one level of a potentially nested array """
if SP.issparse(expected): # allow different types of sparse matrices
assert_(SP.issparse(actual))
assert_array_almost_equal(actual.todense(),
expected.todense(),
err_msg=label,
decimal=5)
return
# Check types are as expected
assert_(types_compatible(expected, actual),
"Expected type %s, got %s at %s" %
(type(expected), type(actual), label))
# A field in a record array may not be an ndarray
# A scalar from a record array will be type np.void
if not isinstance(expected,
(np.void, np.ndarray, MatlabObject)):
assert_equal(expected, actual)
return
# This is an ndarray-like thing
assert_(expected.shape == actual.shape,
msg='Expected shape %s, got %s at %s' % (expected.shape,
actual.shape,
label))
ex_dtype = expected.dtype
if ex_dtype.hasobject: # array of objects
if isinstance(expected, MatlabObject):
assert_equal(expected.classname, actual.classname)
for i, ev in enumerate(expected):
level_label = "%s, [%d], " % (label, i)
_check_level(level_label, ev, actual[i])
return
if ex_dtype.fields: # probably recarray
for fn in ex_dtype.fields:
level_label = "%s, field %s, " % (label, fn)
_check_level(level_label,
expected[fn], actual[fn])
return
if ex_dtype.type in (text_type, # string or bool
np.unicode_,
np.bool_):
assert_equal(actual, expected, err_msg=label)
return
# Something numeric
assert_array_almost_equal(actual, expected, err_msg=label, decimal=5)
def _load_check_case(name, files, case):
for file_name in files:
matdict = loadmat(file_name, struct_as_record=True)
label = "test %s; file %s" % (name, file_name)
for k, expected in case.items():
k_label = "%s, variable %s" % (label, k)
assert_(k in matdict, "Missing key at %s" % k_label)
_check_level(k_label, expected, matdict[k])
def _whos_check_case(name, files, case, classes):
for file_name in files:
label = "test %s; file %s" % (name, file_name)
whos = whosmat(file_name)
expected_whos = []
for k, expected in case.items():
expected_whos.append((k, expected.shape, classes[k]))
whos.sort()
expected_whos.sort()
assert_equal(whos, expected_whos,
"%s: %r != %r" % (label, whos, expected_whos)
)
# Round trip tests
def _rt_check_case(name, expected, format):
mat_stream = BytesIO()
savemat(mat_stream, expected, format=format)
mat_stream.seek(0)
_load_check_case(name, [mat_stream], expected)
# generator for load tests
def test_load():
for case in case_table4 + case_table5:
name = case['name']
expected = case['expected']
filt = pjoin(test_data_path, 'test%s_*.mat' % name)
files = glob(filt)
assert_(len(files) > 0,
"No files for test %s using filter %s" % (name, filt))
yield _load_check_case, name, files, expected
# generator for whos tests
def test_whos():
for case in case_table4 + case_table5:
name = case['name']
expected = case['expected']
classes = case['classes']
filt = pjoin(test_data_path, 'test%s_*.mat' % name)
files = glob(filt)
assert_(len(files) > 0,
"No files for test %s using filter %s" % (name, filt))
yield _whos_check_case, name, files, expected, classes
# generator for round trip tests
def test_round_trip():
for case in case_table4 + case_table5_rt:
case_table4_names = [c['name'] for c in case_table4]
name = case['name'] + '_round_trip'
expected = case['expected']
for format in (['4', '5'] if case['name'] in case_table4_names else ['5']):
yield _rt_check_case, name, expected, format
def test_gzip_simple():
xdense = np.zeros((20,20))
xdense[2,3] = 2.3
xdense[4,5] = 4.5
x = SP.csc_matrix(xdense)
name = 'gzip_test'
expected = {'x':x}
format = '4'
tmpdir = mkdtemp()
try:
fname = pjoin(tmpdir,name)
mat_stream = gzip.open(fname,mode='wb')
savemat(mat_stream, expected, format=format)
mat_stream.close()
mat_stream = gzip.open(fname,mode='rb')
actual = loadmat(mat_stream, struct_as_record=True)
mat_stream.close()
finally:
shutil.rmtree(tmpdir)
assert_array_almost_equal(actual['x'].todense(),
expected['x'].todense(),
err_msg=repr(actual))
def test_multiple_open():
# Ticket #1039, on Windows: check that files are not left open
tmpdir = mkdtemp()
try:
x = dict(x=np.zeros((2, 2)))
fname = pjoin(tmpdir, "a.mat")
# Check that file is not left open
savemat(fname, x)
os.unlink(fname)
savemat(fname, x)
loadmat(fname)
os.unlink(fname)
# Check that stream is left open
f = open(fname, 'wb')
savemat(f, x)
f.seek(0)
f.close()
f = open(fname, 'rb')
loadmat(f)
f.seek(0)
f.close()
finally:
shutil.rmtree(tmpdir)
def test_mat73():
# Check any hdf5 files raise an error
filenames = glob(
pjoin(test_data_path, 'testhdf5*.mat'))
assert_(len(filenames) > 0)
for filename in filenames:
fp = open(filename, 'rb')
assert_raises(NotImplementedError,
loadmat,
fp,
struct_as_record=True)
fp.close()
def test_warnings():
# This test is an echo of the previous behavior, which was to raise a
# warning if the user triggered a search for mat files on the Python system
# path. We can remove the test in the next version after upcoming (0.13)
fname = pjoin(test_data_path, 'testdouble_7.1_GLNX86.mat')
with warnings.catch_warnings():
warnings.simplefilter('error')
# This should not generate a warning
mres = loadmat(fname, struct_as_record=True)
# This neither
mres = loadmat(fname, struct_as_record=False)
def test_regression_653():
# Saving a dictionary with only invalid keys used to raise an error. Now we
# save this as an empty struct in matlab space.
sio = BytesIO()
savemat(sio, {'d':{1:2}}, format='5')
back = loadmat(sio)['d']
# Check we got an empty struct equivalent
assert_equal(back.shape, (1,1))
assert_equal(back.dtype, np.dtype(object))
assert_(back[0,0] is None)
def test_structname_len():
# Test limit for length of field names in structs
lim = 31
fldname = 'a' * lim
st1 = np.zeros((1,1), dtype=[(fldname, object)])
savemat(BytesIO(), {'longstruct': st1}, format='5')
fldname = 'a' * (lim+1)
st1 = np.zeros((1,1), dtype=[(fldname, object)])
assert_raises(ValueError, savemat, BytesIO(),
{'longstruct': st1}, format='5')
def test_4_and_long_field_names_incompatible():
# Long field names option not supported in 4
my_struct = np.zeros((1,1),dtype=[('my_fieldname',object)])
assert_raises(ValueError, savemat, BytesIO(),
{'my_struct':my_struct}, format='4', long_field_names=True)
def test_long_field_names():
# Test limit for length of field names in structs
lim = 63
fldname = 'a' * lim
st1 = np.zeros((1,1), dtype=[(fldname, object)])
savemat(BytesIO(), {'longstruct': st1}, format='5',long_field_names=True)
fldname = 'a' * (lim+1)
st1 = np.zeros((1,1), dtype=[(fldname, object)])
assert_raises(ValueError, savemat, BytesIO(),
{'longstruct': st1}, format='5',long_field_names=True)
def test_long_field_names_in_struct():
# Regression test - long_field_names was erased if you passed a struct
# within a struct
lim = 63
fldname = 'a' * lim
cell = np.ndarray((1,2),dtype=object)
st1 = np.zeros((1,1), dtype=[(fldname, object)])
cell[0,0] = st1
cell[0,1] = st1
savemat(BytesIO(), {'longstruct': cell}, format='5',long_field_names=True)
#
# Check to make sure it fails with long field names off
#
assert_raises(ValueError, savemat, BytesIO(),
{'longstruct': cell}, format='5', long_field_names=False)
def test_cell_with_one_thing_in_it():
# Regression test - make a cell array that's 1 x 2 and put two
# strings in it. It works. Make a cell array that's 1 x 1 and put
# a string in it. It should work but, in the old days, it didn't.
cells = np.ndarray((1,2),dtype=object)
cells[0,0] = 'Hello'
cells[0,1] = 'World'
savemat(BytesIO(), {'x': cells}, format='5')
cells = np.ndarray((1,1),dtype=object)
cells[0,0] = 'Hello, world'
savemat(BytesIO(), {'x': cells}, format='5')
def test_writer_properties():
# Tests getting, setting of properties of matrix writer
mfw = MatFile5Writer(BytesIO())
yield assert_equal, mfw.global_vars, []
mfw.global_vars = ['avar']
yield assert_equal, mfw.global_vars, ['avar']
yield assert_equal, mfw.unicode_strings, False
mfw.unicode_strings = True
yield assert_equal, mfw.unicode_strings, True
yield assert_equal, mfw.long_field_names, False
mfw.long_field_names = True
yield assert_equal, mfw.long_field_names, True
def test_use_small_element():
# Test whether we're using small data element or not
sio = BytesIO()
wtr = MatFile5Writer(sio)
# First check size for no sde for name
arr = np.zeros(10)
wtr.put_variables({'aaaaa': arr})
w_sz = len(sio.getvalue())
# Check small name results in largish difference in size
sio.truncate(0)
sio.seek(0)
wtr.put_variables({'aaaa': arr})
yield assert_, w_sz - len(sio.getvalue()) > 4
# Whereas increasing name size makes less difference
sio.truncate(0)
sio.seek(0)
wtr.put_variables({'aaaaaa': arr})
yield assert_, len(sio.getvalue()) - w_sz < 4
def test_save_dict():
# Test that dict can be saved (as recarray), loaded as matstruct
dict_types = ((dict, False),)
try:
from collections import OrderedDict
except ImportError:
pass
else:
dict_types += ((OrderedDict, True),)
ab_exp = np.array([[(1, 2)]], dtype=[('a', object), ('b', object)])
ba_exp = np.array([[(2, 1)]], dtype=[('b', object), ('a', object)])
for dict_type, is_ordered in dict_types:
# Initialize with tuples to keep order for OrderedDict
d = dict_type([('a', 1), ('b', 2)])
stream = BytesIO()
savemat(stream, {'dict': d})
stream.seek(0)
vals = loadmat(stream)['dict']
assert_equal(set(vals.dtype.names), set(['a', 'b']))
if is_ordered: # Input was ordered, output in ab order
assert_array_equal(vals, ab_exp)
else: # Not ordered input, either order output
if vals.dtype.names[0] == 'a':
assert_array_equal(vals, ab_exp)
else:
assert_array_equal(vals, ba_exp)
def test_1d_shape():
# New 5 behavior is 1D -> row vector
arr = np.arange(5)
for format in ('4', '5'):
# Column is the default
stream = BytesIO()
savemat(stream, {'oned': arr}, format=format)
vals = loadmat(stream)
assert_equal(vals['oned'].shape, (1, 5))
# can be explicitly 'column' for oned_as
stream = BytesIO()
savemat(stream, {'oned':arr},
format=format,
oned_as='column')
vals = loadmat(stream)
assert_equal(vals['oned'].shape, (5,1))
# but different from 'row'
stream = BytesIO()
savemat(stream, {'oned':arr},
format=format,
oned_as='row')
vals = loadmat(stream)
assert_equal(vals['oned'].shape, (1,5))
def test_compression():
arr = np.zeros(100).reshape((5,20))
arr[2,10] = 1
stream = BytesIO()
savemat(stream, {'arr':arr})
raw_len = len(stream.getvalue())
vals = loadmat(stream)
yield assert_array_equal, vals['arr'], arr
stream = BytesIO()
savemat(stream, {'arr':arr}, do_compression=True)
compressed_len = len(stream.getvalue())
vals = loadmat(stream)
yield assert_array_equal, vals['arr'], arr
yield assert_, raw_len > compressed_len
# Concatenate, test later
arr2 = arr.copy()
arr2[0,0] = 1
stream = BytesIO()
savemat(stream, {'arr':arr, 'arr2':arr2}, do_compression=False)
vals = loadmat(stream)
yield assert_array_equal, vals['arr2'], arr2
stream = BytesIO()
savemat(stream, {'arr':arr, 'arr2':arr2}, do_compression=True)
vals = loadmat(stream)
yield assert_array_equal, vals['arr2'], arr2
def test_single_object():
stream = BytesIO()
savemat(stream, {'A':np.array(1, dtype=object)})
def test_skip_variable():
# Test skipping over the first of two variables in a MAT file
# using mat_reader_factory and put_variables to read them in.
#
# This is a regression test of a problem that's caused by
# using the compressed file reader seek instead of the raw file
# I/O seek when skipping over a compressed chunk.
#
# The problem arises when the chunk is large: this file has
# a 256x256 array of random (uncompressible) doubles.
#
filename = pjoin(test_data_path,'test_skip_variable.mat')
#
# Prove that it loads with loadmat
#
d = loadmat(filename, struct_as_record=True)
yield assert_, 'first' in d
yield assert_, 'second' in d
#
# Make the factory
#
factory = mat_reader_factory(filename, struct_as_record=True)
#
# This is where the factory breaks with an error in MatMatrixGetter.to_next
#
d = factory.get_variables('second')
yield assert_, 'second' in d
factory.mat_stream.close()
def test_empty_struct():
# ticket 885
filename = pjoin(test_data_path,'test_empty_struct.mat')
# before ticket fix, this would crash with ValueError, empty data
# type
d = loadmat(filename, struct_as_record=True)
a = d['a']
assert_equal(a.shape, (1,1))
assert_equal(a.dtype, np.dtype(object))
assert_(a[0,0] is None)
stream = BytesIO()
arr = np.array((), dtype='U')
# before ticket fix, this used to give data type not understood
savemat(stream, {'arr':arr})
d = loadmat(stream)
a2 = d['arr']
assert_array_equal(a2, arr)
def test_save_empty_dict():
# saving empty dict also gives empty struct
stream = BytesIO()
savemat(stream, {'arr': {}})
d = loadmat(stream)
a = d['arr']
assert_equal(a.shape, (1,1))
assert_equal(a.dtype, np.dtype(object))
assert_(a[0,0] is None)
def assert_any_equal(output, alternatives):
""" Assert `output` is equal to at least one element in `alternatives`
"""
one_equal = False
for expected in alternatives:
if np.all(output == expected):
one_equal = True
break
assert_(one_equal)
def test_to_writeable():
# Test to_writeable function
res = to_writeable(np.array([1])) # pass through ndarrays
assert_equal(res.shape, (1,))
assert_array_equal(res, 1)
# Dict fields can be written in any order
expected1 = np.array([(1, 2)], dtype=[('a', '|O8'), ('b', '|O8')])
expected2 = np.array([(2, 1)], dtype=[('b', '|O8'), ('a', '|O8')])
alternatives = (expected1, expected2)
assert_any_equal(to_writeable({'a':1,'b':2}), alternatives)
# Fields with underscores discarded
assert_any_equal(to_writeable({'a':1,'b':2, '_c':3}), alternatives)
# Not-string fields discarded
assert_any_equal(to_writeable({'a':1,'b':2, 100:3}), alternatives)
# String fields that are not valid Python identifiers discarded
assert_any_equal(to_writeable({'a':1,'b':2, '99':3}), alternatives)
# Object with field names is equivalent
class klass(object):
pass
c = klass
c.a = 1
c.b = 2
assert_any_equal(to_writeable(c), alternatives)
# empty list and tuple go to empty array
res = to_writeable([])
assert_equal(res.shape, (0,))
assert_equal(res.dtype.type, np.float64)
res = to_writeable(())
assert_equal(res.shape, (0,))
assert_equal(res.dtype.type, np.float64)
# None -> None
assert_(to_writeable(None) is None)
# String to strings
assert_equal(to_writeable('a string').dtype.type, np.str_)
# Scalars to numpy to numpy scalars
res = to_writeable(1)
assert_equal(res.shape, ())
assert_equal(res.dtype.type, np.array(1).dtype.type)
assert_array_equal(res, 1)
# Empty dict returns EmptyStructMarker
assert_(to_writeable({}) is EmptyStructMarker)
# Object does not have (even empty) __dict__
assert_(to_writeable(object()) is None)
# Custom object does have empty __dict__, returns EmptyStructMarker
class C(object):
pass
assert_(to_writeable(C()) is EmptyStructMarker)
# dict keys with legal characters are convertible
res = to_writeable({'a': 1})['a']
assert_equal(res.shape, (1,))
assert_equal(res.dtype.type, np.object_)
# If all fields have illegal names, falls back to EmptyStructMarker
assert_(to_writeable({'1':1}) is EmptyStructMarker)
assert_(to_writeable({'_a':1}) is EmptyStructMarker)
# Unless there are valid fields, in which case structured array
assert_equal(to_writeable({'1':1, 'f': 2}),
np.array([(2,)], dtype=[('f', '|O8')]))
def test_recarray():
# check roundtrip of structured array
dt = [('f1', 'f8'),
('f2', 'S10')]
arr = np.zeros((2,), dtype=dt)
arr[0]['f1'] = 0.5
arr[0]['f2'] = 'python'
arr[1]['f1'] = 99
arr[1]['f2'] = 'not perl'
stream = BytesIO()
savemat(stream, {'arr': arr})
d = loadmat(stream, struct_as_record=False)
a20 = d['arr'][0,0]
yield assert_equal, a20.f1, 0.5
yield assert_equal, a20.f2, 'python'
d = loadmat(stream, struct_as_record=True)
a20 = d['arr'][0,0]
yield assert_equal, a20['f1'], 0.5
yield assert_equal, a20['f2'], 'python'
# structs always come back as object types
yield assert_equal, a20.dtype, np.dtype([('f1', 'O'),
('f2', 'O')])
a21 = d['arr'].flat[1]
yield assert_equal, a21['f1'], 99
yield assert_equal, a21['f2'], 'not perl'
def test_save_object():
class C(object):
pass
c = C()
c.field1 = 1
c.field2 = 'a string'
stream = BytesIO()
savemat(stream, {'c': c})
d = loadmat(stream, struct_as_record=False)
c2 = d['c'][0,0]
assert_equal(c2.field1, 1)
assert_equal(c2.field2, 'a string')
d = loadmat(stream, struct_as_record=True)
c2 = d['c'][0,0]
assert_equal(c2['field1'], 1)
assert_equal(c2['field2'], 'a string')
def test_read_opts():
# tests if read is seeing option sets, at initialization and after
# initialization
arr = np.arange(6).reshape(1,6)
stream = BytesIO()
savemat(stream, {'a': arr})
rdr = MatFile5Reader(stream)
back_dict = rdr.get_variables()
rarr = back_dict['a']
assert_array_equal(rarr, arr)
rdr = MatFile5Reader(stream, squeeze_me=True)
assert_array_equal(rdr.get_variables()['a'], arr.reshape((6,)))
rdr.squeeze_me = False
assert_array_equal(rarr, arr)
rdr = MatFile5Reader(stream, byte_order=boc.native_code)
assert_array_equal(rdr.get_variables()['a'], arr)
# inverted byte code leads to error on read because of swapped
# header etc
rdr = MatFile5Reader(stream, byte_order=boc.swapped_code)
assert_raises(Exception, rdr.get_variables)
rdr.byte_order = boc.native_code
assert_array_equal(rdr.get_variables()['a'], arr)
arr = np.array(['a string'])
stream.truncate(0)
stream.seek(0)
savemat(stream, {'a': arr})
rdr = MatFile5Reader(stream)
assert_array_equal(rdr.get_variables()['a'], arr)
rdr = MatFile5Reader(stream, chars_as_strings=False)
carr = np.atleast_2d(np.array(list(arr.item()), dtype='U1'))
assert_array_equal(rdr.get_variables()['a'], carr)
rdr.chars_as_strings = True
assert_array_equal(rdr.get_variables()['a'], arr)
def test_empty_string():
# make sure reading empty string does not raise error
estring_fname = pjoin(test_data_path, 'single_empty_string.mat')
fp = open(estring_fname, 'rb')
rdr = MatFile5Reader(fp)
d = rdr.get_variables()
fp.close()
assert_array_equal(d['a'], np.array([], dtype='U1'))
# empty string round trip. Matlab cannot distinguish
# between a string array that is empty, and a string array
# containing a single empty string, because it stores strings as
# arrays of char. There is no way of having an array of char that
# is not empty, but contains an empty string.
stream = BytesIO()
savemat(stream, {'a': np.array([''])})
rdr = MatFile5Reader(stream)
d = rdr.get_variables()
assert_array_equal(d['a'], np.array([], dtype='U1'))
stream.truncate(0)
stream.seek(0)
savemat(stream, {'a': np.array([], dtype='U1')})
rdr = MatFile5Reader(stream)
d = rdr.get_variables()
assert_array_equal(d['a'], np.array([], dtype='U1'))
stream.close()
def test_corrupted_data():
import zlib
for exc, fname in [(ValueError, 'corrupted_zlib_data.mat'),
(zlib.error, 'corrupted_zlib_checksum.mat')]:
with open(pjoin(test_data_path, fname), 'rb') as fp:
rdr = MatFile5Reader(fp)
assert_raises(exc, rdr.get_variables)
def test_corrupted_data_check_can_be_disabled():
with open(pjoin(test_data_path, 'corrupted_zlib_data.mat'), 'rb') as fp:
rdr = MatFile5Reader(fp, verify_compressed_data_integrity=False)
rdr.get_variables()
def test_read_both_endian():
# make sure big- and little- endian data is read correctly
for fname in ('big_endian.mat', 'little_endian.mat'):
fp = open(pjoin(test_data_path, fname), 'rb')
rdr = MatFile5Reader(fp)
d = rdr.get_variables()
fp.close()
assert_array_equal(d['strings'],
np.array([['hello'],
['world']], dtype=object))
assert_array_equal(d['floats'],
np.array([[2., 3.],
[3., 4.]], dtype=np.float32))
def test_write_opposite_endian():
# We don't support writing opposite endian .mat files, but we need to behave
# correctly if the user supplies an other-endian numpy array to write out
float_arr = np.array([[2., 3.],
[3., 4.]])
int_arr = np.arange(6).reshape((2, 3))
uni_arr = np.array(['hello', 'world'], dtype='U')
stream = BytesIO()
savemat(stream, {'floats': float_arr.byteswap().newbyteorder(),
'ints': int_arr.byteswap().newbyteorder(),
'uni_arr': uni_arr.byteswap().newbyteorder()})
rdr = MatFile5Reader(stream)
d = rdr.get_variables()
assert_array_equal(d['floats'], float_arr)
assert_array_equal(d['ints'], int_arr)
assert_array_equal(d['uni_arr'], uni_arr)
stream.close()
def test_logical_array():
# The roundtrip test doesn't verify that we load the data up with the
# correct (bool) dtype
with open(pjoin(test_data_path, 'testbool_8_WIN64.mat'), 'rb') as fobj:
rdr = MatFile5Reader(fobj, mat_dtype=True)
d = rdr.get_variables()
x = np.array([[True], [False]], dtype=np.bool_)
assert_array_equal(d['testbools'], x)
assert_equal(d['testbools'].dtype, x.dtype)
def test_logical_out_type():
# Confirm that bool type written as uint8, uint8 class
# See gh-4022
stream = BytesIO()
barr = np.array([False, True, False])
savemat(stream, {'barray': barr})
stream.seek(0)
reader = MatFile5Reader(stream)
reader.initialize_read()
reader.read_file_header()
hdr, _ = reader.read_var_header()
assert_equal(hdr.mclass, mio5p.mxUINT8_CLASS)
assert_equal(hdr.is_logical, True)
var = reader.read_var_array(hdr, False)
assert_equal(var.dtype.type, np.uint8)
def test_mat4_3d():
# test behavior when writing 3D arrays to matlab 4 files
stream = BytesIO()
arr = np.arange(24).reshape((2,3,4))
assert_raises(ValueError, savemat, stream, {'a': arr}, True, '4')
def test_func_read():
func_eg = pjoin(test_data_path, 'testfunc_7.4_GLNX86.mat')
fp = open(func_eg, 'rb')
rdr = MatFile5Reader(fp)
d = rdr.get_variables()
fp.close()
assert_(isinstance(d['testfunc'], MatlabFunction))
stream = BytesIO()
wtr = MatFile5Writer(stream)
assert_raises(MatWriteError, wtr.put_variables, d)
def test_mat_dtype():
double_eg = pjoin(test_data_path, 'testmatrix_6.1_SOL2.mat')
fp = open(double_eg, 'rb')
rdr = MatFile5Reader(fp, mat_dtype=False)
d = rdr.get_variables()
fp.close()
yield assert_equal, d['testmatrix'].dtype.kind, 'u'
fp = open(double_eg, 'rb')
rdr = MatFile5Reader(fp, mat_dtype=True)
d = rdr.get_variables()
fp.close()
yield assert_equal, d['testmatrix'].dtype.kind, 'f'
def test_sparse_in_struct():
# reproduces bug found by DC where Cython code was insisting on
# ndarray return type, but getting sparse matrix
st = {'sparsefield': SP.coo_matrix(np.eye(4))}
stream = BytesIO()
savemat(stream, {'a':st})
d = loadmat(stream, struct_as_record=True)
yield assert_array_equal, d['a'][0,0]['sparsefield'].todense(), np.eye(4)
def test_mat_struct_squeeze():
stream = BytesIO()
in_d = {'st':{'one':1, 'two':2}}
savemat(stream, in_d)
# no error without squeeze
out_d = loadmat(stream, struct_as_record=False)
# previous error was with squeeze, with mat_struct
out_d = loadmat(stream,
struct_as_record=False,
squeeze_me=True,
)
def test_scalar_squeeze():
stream = BytesIO()
in_d = {'scalar': [[0.1]], 'string': 'my name', 'st':{'one':1, 'two':2}}
savemat(stream, in_d)
out_d = loadmat(stream, squeeze_me=True)
assert_(isinstance(out_d['scalar'], float))
assert_(isinstance(out_d['string'], string_types))
assert_(isinstance(out_d['st'], np.ndarray))
def test_str_round():
# from report by Angus McMorland on mailing list 3 May 2010
stream = BytesIO()
in_arr = np.array(['Hello', 'Foob'])
out_arr = np.array(['Hello', 'Foob '])
savemat(stream, dict(a=in_arr))
res = loadmat(stream)
# resulted in ['HloolFoa', 'elWrdobr']
assert_array_equal(res['a'], out_arr)
stream.truncate(0)
stream.seek(0)
# Make Fortran ordered version of string
in_str = in_arr.tostring(order='F')
in_from_str = np.ndarray(shape=in_arr.shape,
dtype=in_arr.dtype,
order='F',
buffer=in_str)
savemat(stream, dict(a=in_from_str))
res = loadmat(stream)
assert_array_equal(res['a'], out_arr)
# unicode save did lead to buffer too small error
stream.truncate(0)
stream.seek(0)
in_arr_u = in_arr.astype('U')
out_arr_u = out_arr.astype('U')
savemat(stream, {'a': in_arr_u})
res = loadmat(stream)
assert_array_equal(res['a'], out_arr_u)
def test_fieldnames():
# Check that field names are as expected
stream = BytesIO()
savemat(stream, {'a': {'a':1, 'b':2}})
res = loadmat(stream)
field_names = res['a'].dtype.names
assert_equal(set(field_names), set(('a', 'b')))
def test_loadmat_varnames():
# Test that we can get just one variable from a mat file using loadmat
mat5_sys_names = ['__globals__',
'__header__',
'__version__']
for eg_file, sys_v_names in (
(pjoin(test_data_path, 'testmulti_4.2c_SOL2.mat'), []),
(pjoin(test_data_path, 'testmulti_7.4_GLNX86.mat'), mat5_sys_names)):
vars = loadmat(eg_file)
assert_equal(set(vars.keys()), set(['a', 'theta'] + sys_v_names))
vars = loadmat(eg_file, variable_names='a')
assert_equal(set(vars.keys()), set(['a'] + sys_v_names))
vars = loadmat(eg_file, variable_names=['a'])
assert_equal(set(vars.keys()), set(['a'] + sys_v_names))
vars = loadmat(eg_file, variable_names=['theta'])
assert_equal(set(vars.keys()), set(['theta'] + sys_v_names))
vars = loadmat(eg_file, variable_names=('theta',))
assert_equal(set(vars.keys()), set(['theta'] + sys_v_names))
vars = loadmat(eg_file, variable_names=[])
assert_equal(set(vars.keys()), set(sys_v_names))
vnames = ['theta']
vars = loadmat(eg_file, variable_names=vnames)
assert_equal(vnames, ['theta'])
def test_round_types():
# Check that saving, loading preserves dtype in most cases
arr = np.arange(10)
stream = BytesIO()
for dts in ('f8','f4','i8','i4','i2','i1',
'u8','u4','u2','u1','c16','c8'):
stream.truncate(0)
stream.seek(0) # needed for BytesIO in python 3
savemat(stream, {'arr': arr.astype(dts)})
vars = loadmat(stream)
assert_equal(np.dtype(dts), vars['arr'].dtype)
def test_varmats_from_mat():
# Make a mat file with several variables, write it, read it back
names_vars = (('arr', mlarr(np.arange(10))),
('mystr', mlarr('a string')),
('mynum', mlarr(10)))
# Dict like thing to give variables in defined order
class C(object):
def items(self):
return names_vars
stream = BytesIO()
savemat(stream, C())
varmats = varmats_from_mat(stream)
assert_equal(len(varmats), 3)
for i in range(3):
name, var_stream = varmats[i]
exp_name, exp_res = names_vars[i]
assert_equal(name, exp_name)
res = loadmat(var_stream)
assert_array_equal(res[name], exp_res)
def test_one_by_zero():
# Test 1x0 chars get read correctly
func_eg = pjoin(test_data_path, 'one_by_zero_char.mat')
fp = open(func_eg, 'rb')
rdr = MatFile5Reader(fp)
d = rdr.get_variables()
fp.close()
assert_equal(d['var'].shape, (0,))
def test_load_mat4_le():
# We were getting byte order wrong when reading little-endian float64 dense
# matrices on big-endian platforms
mat4_fname = pjoin(test_data_path, 'test_mat4_le_floats.mat')
vars = loadmat(mat4_fname)
assert_array_equal(vars['a'], [[0.1, 1.2]])
def test_unicode_mat4():
# Mat4 should save unicode as latin1
bio = BytesIO()
var = {'second_cat': u('Schrödinger')}
savemat(bio, var, format='4')
var_back = loadmat(bio)
assert_equal(var_back['second_cat'], var['second_cat'])
def test_logical_sparse():
# Test we can read logical sparse stored in mat file as bytes.
# See path_to_url
# In some files saved by MATLAB, the sparse data elements (Real Part
# Subelement in MATLAB speak) are stored with apparent type double
# (miDOUBLE) but are in fact single bytes.
filename = pjoin(test_data_path,'logical_sparse.mat')
# Before fix, this would crash with:
# ValueError: indices and data should have the same size
d = loadmat(filename, struct_as_record=True)
log_sp = d['sp_log_5_4']
assert_(isinstance(log_sp, SP.csc_matrix))
assert_equal(log_sp.dtype.type, np.bool_)
assert_array_equal(log_sp.toarray(),
[[True, True, True, False],
[False, False, True, False],
[False, False, True, False],
[False, False, False, False],
[False, False, False, False]])
def test_empty_sparse():
# Can we read empty sparse matrices?
sio = BytesIO()
import scipy.sparse
empty_sparse = scipy.sparse.csr_matrix([[0,0],[0,0]])
savemat(sio, dict(x=empty_sparse))
sio.seek(0)
res = loadmat(sio)
assert_array_equal(res['x'].shape, empty_sparse.shape)
assert_array_equal(res['x'].todense(), 0)
# Do empty sparse matrices get written with max nnz 1?
# See path_to_url
sio.seek(0)
reader = MatFile5Reader(sio)
reader.initialize_read()
reader.read_file_header()
hdr, _ = reader.read_var_header()
assert_equal(hdr.nzmax, 1)
def test_empty_mat_error():
# Test we get a specific warning for an empty mat file
sio = BytesIO()
assert_raises(MatReadError, loadmat, sio)
def test_miuint32_compromise():
# Reader should accept miUINT32 for miINT32, but check signs
# mat file with miUINT32 for miINT32, but OK values
filename = pjoin(test_data_path, 'miuint32_for_miint32.mat')
res = loadmat(filename)
assert_equal(res['an_array'], np.arange(10)[None, :])
# mat file with miUINT32 for miINT32, with negative value
filename = pjoin(test_data_path, 'bad_miuint32.mat')
with warnings.catch_warnings(record=True): # Py3k ResourceWarning
assert_raises(ValueError, loadmat, filename)
def test_miutf8_for_miint8_compromise():
# Check reader accepts ascii as miUTF8 for array names
filename = pjoin(test_data_path, 'miutf8_array_name.mat')
res = loadmat(filename)
assert_equal(res['array_name'], [[1]])
# mat file with non-ascii utf8 name raises error
filename = pjoin(test_data_path, 'bad_miutf8_array_name.mat')
with warnings.catch_warnings(record=True): # Py3k ResourceWarning
assert_raises(ValueError, loadmat, filename)
def test_bad_utf8():
# Check that reader reads bad UTF with 'replace' option
filename = pjoin(test_data_path,'broken_utf8.mat')
res = loadmat(filename)
assert_equal(res['bad_string'],
b'\x80 am broken'.decode('utf8', 'replace'))
if __name__ == "__main__":
run_module_suite()
```
|
Teri Kasam (तेरी कसम) is a 1982 Indian Bollywood film directed by A. C. Tirulokchandar and starring Kumar Gaurav, Poonam Dhillon, Girish Karnad, Ranjeeta and Nirupa Roy. It is a remake of the Tamil film Puguntha Veedu (1972).
Plot
Dolly (Poonam Dhillon) has been brought up by her rich brother in the most lavish fashion. Tony (Kumar Gaurav), who comes from a poor family, studies at the same college as Dolly. He is in love with her but too shy to admit it. Dolly, meanwhile, is in love with an unknown voice; when she learns that the voice is Tony's, she decides to marry him. Tony, however, refuses to marry until his sister Shanti is married, so Dolly's brother marries Shanti. Dolly's arrogance creates tension in everyone's lives. She disrespects Tony's mother and admits her to a general ward of a hospital. When Tony learns of this, he is infuriated and leaves both his wife and his job. He takes up singing as a profession and becomes a famous singer. What follows is a tale of realization for both Dolly and Tony, a familiar story of rich-poor relations in which humiliation spurs the hero's rise as a successful singer.
Cast
Kumar Gaurav as Tony
Poonam Dhillon as Dolly
Girish Karnad as Rakesh
Ranjeeta as Shanti
Nirupa Roy as Parvati
Paintal as Tony's Friend
Rakesh Bedi as Tony's Friend
Mushtaq Merchant as Tony's Friend
Manmauji as Tony's Friend
Birbal as Hotel Manager
Soundtrack
All songs in the film were sung by Amit Kumar (including a duet with Lata Mangeshkar) and became very popular.
Lyrics by Anand Bakshi
Awards
30th Filmfare Awards:
Nominated
Best Supporting Actor – Girish Karnad
Best Supporting Actress – Ranjeeta Kaur
Best Male Playback Singer – Amit Kumar for "Yeh Zameen Gaa Rahi Hai"
References
External links
Cult of Kumar
Hindi remakes of Tamil films
1982 films
1980s Hindi-language films
Films scored by R. D. Burman
Films directed by A. C. Tirulokchandar
|
```emacs-lisp
;;; semantic/analyze/complete.el --- Smart Completions
;; Author: Eric M. Ludlam <zappo@gnu.org>
;; This file is part of GNU Emacs.
;; GNU Emacs is free software: you can redistribute it and/or modify
;; it under the terms of the GNU General Public License as published by
;; the Free Software Foundation, either version 3 of the License, or
;; (at your option) any later version.
;; GNU Emacs is distributed in the hope that it will be useful,
;; but WITHOUT ANY WARRANTY; without even the implied warranty of
;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
;; GNU General Public License for more details.
;; You should have received a copy of the GNU General Public License
;; along with GNU Emacs.  If not, see <path_to_url
;;; Commentary:
;;
;; Calculate smart completions.
;;
;; Uses the analyzer context routine to determine the best possible
;; list of completions.
;;
;;; History:
;;
;; Code was moved here from semantic/analyze.el
(require 'semantic/analyze)
;; For semantic-find-* macros:
(eval-when-compile (require 'semantic/find))
;;; Code:
;;; Helper Fcns
;;
;;
;;;###autoload
(define-overloadable-function semantic-analyze-type-constants (type)
"For the tag TYPE, return any constant symbols of TYPE.
Used as options when completing.")
(defun semantic-analyze-type-constants-default (type)
"Do nothing with TYPE."
nil)
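;; Modes supply completion constants via mode-local overrides.  As an
;; illustrative sketch (the mode name and enum predicate here are
;; hypothetical), a language mode could offer the members of an enum
;; type as completion constants:
;;
;;   (define-mode-local-override semantic-analyze-type-constants
;;     my-mode (type)
;;     "Return member constants when TYPE is an enum."
;;     (when (string= (semantic-tag-type type) "enum")
;;       (semantic-tag-type-members type)))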
(defun semantic-analyze-tags-of-class-list (tags classlist)
"Return the tags in TAGS that are of classes in CLASSLIST."
(let ((origc tags))
;; Accept only tags that are of the datatype specified by
;; the desired classes.
(setq tags (apply 'nconc ;; All input lists are permutable.
(mapcar (lambda (class)
(semantic-find-tags-by-class class origc))
classlist)))
tags))
;;; MAIN completion calculator
;;
;;;###autoload
(define-overloadable-function semantic-analyze-possible-completions (context &rest flags)
"Return a list of semantic tags which are possible completions.
CONTEXT is either a position (such as point), or a precalculated
context. Passing in a context is useful if the caller also needs
to access parts of the analysis.
The remaining FLAGS arguments are passed to the mode specific completion engine.
Bad flags should be ignored by modes that don't use them.
See `semantic-analyze-possible-completions-default' for details on the default FLAGS.
Completions run through the following filters:
* Elements currently in scope
* Constants currently in scope
* Elements match the :prefix in the CONTEXT.
* Type of the completion matches the type of the context.
Context type matching can identify the following:
* No specific type
* Assignment into a variable of some type.
* Argument to a function with type constraints.
When called interactively, displays the list of possible completions
in a buffer."
(interactive "d")
;; In theory, we don't need the below since the context will
;; do it for us.
;;(semantic-refresh-tags-safe)
(if (semantic-active-p)
(with-syntax-table semantic-lex-syntax-table
(let* ((context (if (semantic-analyze-context-child-p context)
context
(semantic-analyze-current-context context)))
(ans (if (not context)
(error "Nothing to complete")
(:override))))
;; If interactive, display them.
(when (called-interactively-p 'any)
(with-output-to-temp-buffer "*Possible Completions*"
(semantic-analyze-princ-sequence ans "" (current-buffer)))
(shrink-window-if-larger-than-buffer
(get-buffer-window "*Possible Completions*")))
ans))
;; Buffer was not parsed by Semantic.
;; Raise error if called interactively.
(when (called-interactively-p 'any)
(error "Buffer was not parsed by Semantic"))))
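;; Example calls (illustrative).  CONTEXT may be a buffer position, or
;; a precalculated context when the caller also needs the analysis:
;;
;;   (semantic-analyze-possible-completions (point))
;;
;;   (let ((ctxt (semantic-analyze-current-context (point))))
;;     (semantic-analyze-possible-completions ctxt))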
(defun semantic-analyze-possible-completions-default (context &optional flags)
"Default method for producing smart completions.
Argument CONTEXT is an object specifying the locally derived context.
The optional argument FLAGS changes which return options are returned.
FLAGS can be any number of:
`no-tc' - do not apply data-type constraint.
`no-longprefix' - ignore long multi-symbol prefixes.
`no-unique' - do not apply unique by name filtering."
(let* ((a context)
(desired-type (semantic-analyze-type-constraint a))
(desired-class (oref a prefixclass))
(prefix (oref a prefix))
(prefixtypes (oref a prefixtypes))
(completetext nil)
(completetexttype nil)
(scope (oref a scope))
(localvar (when scope (oref scope localvar)))
(origc nil)
(c nil)
(any nil)
(do-typeconstraint (not (memq 'no-tc flags)))
(do-longprefix (not (memq 'no-longprefix flags)))
(do-unique (not (memq 'no-unique flags)))
)
(when (not do-longprefix)
;; If we are not doing the long prefix, shorten all the key
;; elements.
(setq prefix (list (car (reverse prefix)))
prefixtypes nil))
;; Calculate what our prefix string is so that we can
;; find all our matching text.
(setq completetext (car (reverse prefix)))
(if (semantic-tag-p completetext)
(setq completetext (semantic-tag-name completetext)))
(if (and (not completetext) (not desired-type))
(error "Nothing to complete"))
(if (not completetext) (setq completetext ""))
;; This better be a reasonable type, or we should fry it.
;; The prefixtypes should always be at least 1 less than
;; the prefix since the type is never looked up for the last
;; item when calculating a sequence.
(setq completetexttype (car (reverse prefixtypes)))
(when (or (not completetexttype)
(not (and (semantic-tag-p completetexttype)
(eq (semantic-tag-class completetexttype) 'type))))
;; What should I do here? I think this is an error condition.
(setq completetexttype nil)
;; If we had something that was a completetexttype but it wasn't
;; valid, then express our dismay!
(when (> (length prefix) 1)
(let* ((errprefix (car (cdr (reverse prefix)))))
(error "Cannot find types for `%s'"
(cond ((semantic-tag-p errprefix)
(semantic-format-tag-prototype errprefix))
(t
(format "%S" errprefix)))))
))
;; There are many places to get our completion stream for.
;; Here we go.
(if completetexttype
(setq c (semantic-find-tags-for-completion
completetext
(semantic-analyze-scoped-type-parts completetexttype scope)
))
;; No type based on the completetext. This is a free-range
;; var or function. We need to expand our search beyond this
;; scope into semanticdb, etc.
(setq c (nconc
;; Argument list and local variables
(semantic-find-tags-for-completion completetext localvar)
;; The current scope
(semantic-find-tags-for-completion completetext (when scope (oref scope fullscope)))
;; The world
(semantic-analyze-find-tags-by-prefix completetext))
)
)
(let ((loopc c)
(dtname (semantic-tag-name desired-type)))
;; Save off our first batch of completions
(setq origc c)
;; Reset c.
(setq c nil)
;; Loop over all the found matches, and categorize them
;; as being possible features.
(while (and loopc do-typeconstraint)
(cond
;; Strip operators
((semantic-tag-get-attribute (car loopc) :operator-flag)
nil
)
;; If we are completing from within some prefix,
;; then we want to exclude constructors and destructors
((and completetexttype
(or (semantic-tag-get-attribute (car loopc) :constructor-flag)
(semantic-tag-get-attribute (car loopc) :destructor-flag)))
nil
)
;; If there is a desired type, we need a pair of restrictions
(desired-type
(cond
;; Ok, we now have a completion list based on the text we found
;; we want to complete on. Now filter that stream against the
;; type we want to search for.
((string= dtname (semantic-analyze-type-to-name (semantic-tag-type (car loopc))))
(setq c (cons (car loopc) c))
)
;; Now anything that is a compound type which could contain
;; additional things which are of the desired type
((semantic-tag-type (car loopc))
(let ((att (semantic-analyze-tag-type (car loopc) scope))
)
(if (and att (semantic-tag-type-members att))
(setq c (cons (car loopc) c))))
)
) ; cond
); desired type
;; No desired type, no other restrictions. Just add.
(t
(setq c (cons (car loopc) c)))
); cond
(setq loopc (cdr loopc)))
(when desired-type
;; Some types, like the enum in C, have special constant values that
;; we could complete with. Thus, if the target is an enum, we can
;; find possible symbol values to fill in that value.
(let ((constants
(semantic-analyze-type-constants desired-type)))
(if constants
(progn
;; Filter
(setq constants
(semantic-find-tags-for-completion
completetext constants))
;; Add to the list
(setq c (nconc c constants)))
)))
)
(when desired-class
(setq c (semantic-analyze-tags-of-class-list c desired-class)))
(if do-unique
(if c
;; Pull out trash.
;; NOTE TO SELF: Is this too slow?
(setq c (semantic-unique-tag-table-by-name c))
(setq c (semantic-unique-tag-table-by-name origc)))
(when (not c)
(setq c origc)))
;; All done!
c))
(provide 'semantic/analyze/complete)
;; Local variables:
;; generated-autoload-file: "../loaddefs.el"
;; generated-autoload-load-name: "semantic/analyze/complete"
;; End:
;;; semantic/analyze/complete.el ends here
```
|
Anthony J. Gallela is a game designer who has worked primarily on board games and role-playing games.
Career
Anthony J. Gallela was a co-producer for the ManaFest and KublaCon game conventions; a freelance writer for several industry publications; a game store manager; and a consultant and broker for several award-winning games. Gallela was a co-developer of the Theatrix roleplaying game (published by Backstage Press), and the co-designer (with Japji Khalsa) of the adventure game, Dwarven Dig! (from Kenzer & Company), which earned him an Origins Award nomination. Gallela has been the executive director of the Game Manufacturers Association.
References
External links
Board game designers
Living people
Place of birth missing (living people)
Role-playing game designers
Year of birth missing (living people)
|
Acoustic quieting is the process of making machinery quieter by damping vibrations to prevent them from reaching the observer. Machinery vibrates, causing sound waves in air, hydroacoustic waves in water, and mechanical stresses in solid matter. Quieting is achieved by absorbing the vibrational energy or minimizing the source of the vibration. It may also be redirected away from the observer.
One of the major reasons for the development of acoustic quieting techniques was for making submarines difficult to detect by sonar. This military goal of the mid- and late-twentieth century allowed the technology to be adapted to many industries and products, such as computers (e.g. hard drive technology), automobiles (e.g. motor mounts), and even sporting goods (e.g. golf clubs).
Aspects of acoustic quieting
When the goal is acoustic quieting, a number of different aspects might be considered. Each aspect of acoustics can be taken alone or in concert so that the end result is that the reception of noise by the observer is minimized.
Acoustic quieting might consider...
Noise generation: by limiting the noise at its source,
Sympathetic vibrations: by acoustic decoupling,
Resonations: by acoustic damping or changing the size of the resonator,
Sound transmissions: by reducing transmission using many methods (depending whether the transmission is through air, liquid, or solid), or
Sound reflections: by limiting the reflection using many methods, e.g. by using acoustic absorption (deadening) materials, trapping the sound, opening a "window" to let sound out, etc.
By analyzing the entire sequence of events, from the source to the observer, an acoustic engineer can provide many ways to quieten the machine. The challenge is to do this in a practical and inexpensive way. The engineer might focus on changing materials, using a damping material, isolating the machine, running the machine in a vacuum, or running the machine slower.
Methods of quieting
Mechanical acoustic quieting
Sound isolation: Isolating noise to prevent it from transferring out of one area, using barriers such as deadening materials to trap sound and vibrational energy. Example: In home and office construction, many builders place sound-control barriers (such as fiberglass batting) in walls to deaden the transmission of noise through them.
Noise absorption: In architectural acoustics, unwanted sounds can be absorbed rather than reflected inside the room of an observer. This is useful for noises with no point source and when a listener needs to hear sounds only from a point source and not echo reflections. Example: In a recording studio, sound proofing is accomplished with bass traps and anechoic chambers. Wallace Sabine, an American physicist, is credited with studying sound reverberations in 1900, and Carl Eyring revised his equations in 1930 for Bell Labs. Another example is the ubiquitous use of dropped ceilings and acoustical tiles in modern office buildings with high ceilings. Submarine hulls have special coatings that absorb sound.
Acoustic damping: Vibration isolation prevents vibration from transferring beyond the device into another material. Damping mounts have progressed in the industry to offer vibrational resistance in many degrees of freedom. Recent advances include shock isolators that damp in at least six degrees of freedom. Acoustic damping also has uses in seismic shock protection of buildings. Motors and rotating shafts are commonly fitted with these mounts at the points where they contact the building or the chassis of a large machine.
Acoustic decoupling: Certain parts of a machine can be built to keep the frame, chassis, or external shafts from receiving unwanted vibrations from a moving part. Example: Volkswagen has registered a patent for an "acoustically decoupled underbody for a motor vehicle". Another example: Western Digital has registered a patent for an "acoustic vibration decoupler for a disk drive pivot bearing assembly".
Preventing stalls: Whenever a machine undergoes an aerodynamic stall, it will abruptly vibrate.
Preventing cavitation: When a machine is in contact with a fluid, it may be susceptible to cavitation. The sound of gas bubbles imploding is the source of the noise. Ships and submarines whose screws cavitate are more vulnerable to detection by sonar.
Preventing water hammer: In hydraulics and plumbing, water hammer is a known cause for the failure of piping systems. It also generates considerable noise. A valve that abruptly opens or shuts is the most common cause for water hammer.
Shock absorption: Just as automotive shock absorbers are used to prevent mechanical shocks from reaching the passengers in a car, they are also important for quieting shocks.
Reduction of resonance: Essentially any piece of metal or glass has certain frequencies at which it is prone to resonate. A machine that resonates can make a tremendous noise. Resonance also occurs in enclosures, such as when echoes reverberate in an ocarina or the pipe of a pipe organ.
Material selection: By choosing nonmetallic components, the transmission of sound and vibrations can be minimized. For example: instead of using rigid brass fittings, a machine using flexible plastic pipe fittings may be much quieter. In some cases air can be evacuated from a machine and sealed hermetically, the vacuum inside becoming a barrier to sound transmission. In cases where porous plastic materials are used in acoustic applications, the porosity of the plastic is adjusted to either dampen specific wavelengths or for minimal sound loss in a speaker grill cover.
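The Sabine and Eyring reverberation equations mentioned under noise absorption above can be sketched numerically. The room figures in the example below are purely illustrative, not taken from any cited study:

```python
import math

def rt60_sabine(volume, surface, absorption):
    """Sabine (1900): RT60 = 0.161 * V / (S * a), with V in cubic metres,
    S in square metres, and a the average absorption coefficient (0 < a < 1)."""
    return 0.161 * volume / (surface * absorption)

def rt60_eyring(volume, surface, absorption):
    """Eyring's 1930 revision, more accurate for highly absorbent rooms:
    RT60 = 0.161 * V / (-S * ln(1 - a))."""
    return 0.161 * volume / (-surface * math.log(1.0 - absorption))

# Example: a 100 m^3 room with 120 m^2 of surface at a = 0.2.
sabine = rt60_sabine(100, 120, 0.2)   # ~0.67 s
eyring = rt60_eyring(100, 120, 0.2)   # ~0.60 s
```

Eyring's form always predicts a shorter reverberation time than Sabine's for the same room, with the gap widening as absorption increases.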
Quieting for specific observers
Underwater acoustics: All of the above types of acoustic quieting apply to submarines. Additionally, a submarine may employ a tactic that prevents sounds from reaching a listener at a particular ocean depth. Operating below the depth of the sound channel axis, where the speed of sound in water is the lowest, a submarine can prevent detection by surface ships, unless these ships use equipment like a towed array and/or an underwater drone to place hydrophones below the sound channel axis.
Sound refraction: Just as a submarine can use refraction to hide its acoustic signature from surface vessels, the same principle of sound refraction can be used to prevent certain observers from hearing the noise. For example, an outdoor observer close to the ground will have sound waves refracted toward him when the ground is cooler than the ambient air and away from him when the ground is hotter than the air.
Sound redirection: One of the obvious ways to reduce the received sound level of an observer is to place the observer out of the path of the highest amplitude sounds. For example, if we mark off a circle around a jet engine and make sound power level observations along that circle, we would expect that the sound is loudest directly in line with the jet's exhaust. Observations perpendicular to the exhaust would be significantly quieter.
Hearing protection: An observer may be forced to wear ear plugs in areas of high ambient noise levels. This may be the only quieting method available in areas of noise pollution, such as an open-air firing range or an airport.
Electronic quieting
Electronic vibration control: Electronics, sensors, and computers are now employed to reduce vibration. Using high speed logic, vibrations can be damped quickly and effectively by counteracting the motion before it exceeds a certain threshold.
Electronic noise control: Electronics, sensors, and computers are also employed to cancel noise by using phase cancellation, which matches the sound amplitude with a wave of the opposite polarity. This method employs an active sound-generating device, such as a loudspeaker, to counteract ambient noise in an area. See noise-canceling headphone. Workers in noisy environments may favor this method over ear plugs.
Noise reduction: In sound and video equipment, noise reduction is the process of removing noise from a signal. This is strictly for electronic noise or noise which has been detected and put into electronic form.
Noise canceling: If both the noise and the signal are received by an electronic or digital medium, noise can be filtered from the signal electronically and retransmitted without the noise. See noise-canceling microphone. Helicopter pilots rely on this technology to speak on the radio.
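The phase-cancellation idea behind electronic noise control can be illustrated in a few lines of Python. This is an idealized sketch: a real system must sense the noise and synthesize the anti-noise in real time, so cancellation is never perfect in practice.

```python
import math

fs = 8000                                   # sample rate in Hz (illustrative)
# A steady 120 Hz hum, one second long, amplitude 0.8.
hum = [0.8 * math.sin(2 * math.pi * 120 * n / fs) for n in range(fs)]

# The anti-noise wave: identical amplitude, opposite polarity.
anti = [-s for s in hum]

# What the listener hears when both waves are superposed.
residual = [a + b for a, b in zip(hum, anti)]
peak = max(abs(s) for s in residual)        # 0.0 in this idealized case
```

Any phase or amplitude error between the two waves leaves a nonzero residual, which is why active cancellation works best on low-frequency, periodic noise.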
See also
Acoustic signature
Noise reduction, for electronic noise
Sound masking, for noise masking by saturation
Pink noise
Stealth technology, for signature reduction in general
Longitudinal wave
Soundproofing
Mechanical resonance
Sound masking
Seismic retrofit
Helicopter noise reduction
Muffler
Deperming
Degaussing
References
Acoustics
Noise control
Stealth technology
|
```c
/*############################################################################
#
#
# path_to_url
#
# Unless required by applicable law or agreed to in writing, software
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
############################################################################*/
/// Common public definitions for math headers
/*! \file */
#ifndef EPID_MEMBER_TINY_MATH_MATHDEFS_H_
#define EPID_MEMBER_TINY_MATH_MATHDEFS_H_
/// Return code representing failure of math function
#define MATH_FAIL 0
/// Return code representing success of math function
#define MATH_SUCCESS 1
#endif // EPID_MEMBER_TINY_MATH_MATHDEFS_H_
```
|
Magic Goes Wrong is a comedy play by Henry Lewis, Jonathan Sayer, Henry Shields (of Mischief Theatre Company) and Penn & Teller. It is part of Mischief's Goes Wrong series of plays, following The Play That Goes Wrong and Peter Pan Goes Wrong.
Production history
The play opened in the Quays Theatre at The Lowry, Salford from 6 to 11 August 2019, prior to its opening in London's West End at the Vaudeville Theatre from 14 December 2019. It is notable for employing a far greater level of black comedy than previous instalments in the Goes Wrong series, including many on-stage gory demises for the guest characters, although, slightly paradoxically, the ending is considerably more upbeat and sentimental than the group's other productions.
The play was the third Mischief production running simultaneously in the West End alongside the long-running productions of The Play That Goes Wrong and The Comedy About a Bank Robbery (until its closure in March 2020), and the fourth in London while Peter Pan Goes Wrong played the Christmas 2019 season at the Alexandra Palace.
In March 2020, the play stopped performances due to the COVID-19 pandemic. The play resumed its London run on 21 October 2021 at a new venue, the Apollo Theatre. A UK tour commenced at the Curve, Leicester from 20 July 2021.
Cast and characters
Awards and nominations
References
2019 plays
Mischief Theatre
West End plays
British plays
|
Espérance Sportive de Mostaganem (), known as ES Mostaganem or simply ESM for short, is an Algerian football club based in Mostaganem. The club was founded in 1940 and its colours are green and white. Their home stadium, Mohamed Bensaïd Stadium, has a capacity of 37,000 spectators. The club is currently playing in the Algerian Ligue 2.
History
On March 25, 2018, ES Mostaganem were promoted to the Algerian Ligue Professionnelle 2 after winning the 2017–18 Ligue Nationale du Football Amateur "Group West".
On May 28, 2022, ES Mostaganem were promoted to the Algerian Ligue 2.
Honours
Algerian Cup:
Runner-up (2 times): 1962–63, 1964–65
Algerian Third Division:
Champion (1 time): 2007–08
References
External links
Football clubs in Algeria
Association football clubs established in 1940
Es Mostaganem
Algerian Ligue 2 clubs
1940 establishments in Algeria
Sports clubs and teams in Algeria
|
```c++
#define BOOST_TEST_MODULE find_flow_cost_bundled_properties_and_named_params_test
#include <boost/test/unit_test.hpp>
#include <boost/graph/successive_shortest_path_nonnegative_weights.hpp>
#include <boost/graph/find_flow_cost.hpp>
#include "min_cost_max_flow_utils.hpp"
typedef boost::adjacency_list_traits<boost::vecS,boost::vecS,boost::directedS> traits;
struct edge_t {
double capacity;
float cost;
float residual_capacity;
traits::edge_descriptor reversed_edge;
};
struct node_t {
traits::edge_descriptor predecessor;
int dist;
int dist_prev;
boost::vertex_index_t id;
};
typedef boost::adjacency_list<boost::listS, boost::vecS, boost::directedS, node_t, edge_t > Graph;
// Unit test written in order to fail (at compile time) if find_flow_cost()
// is not properly handling bundled properties
BOOST_AUTO_TEST_CASE(using_bundled_properties_with_find_max_flow_test)
{
Graph g;
traits::vertex_descriptor s,t;
boost::property_map<Graph,double edge_t::* >::type capacity = get(&edge_t::capacity, g);
boost::property_map<Graph,float edge_t::* >::type cost = get(&edge_t::cost, g);
boost::property_map<Graph,float edge_t::* >::type residual_capacity = get(&edge_t::residual_capacity, g);
boost::property_map<Graph,traits::edge_descriptor edge_t::* >::type rev = get(&edge_t::reversed_edge, g);
boost::property_map<Graph,traits::edge_descriptor node_t::* >::type pred = get(&node_t::predecessor, g);
boost::property_map<Graph,boost::vertex_index_t>::type vertex_indices = get(boost::vertex_index, g);
boost::property_map<Graph,int node_t::* >::type dist = get(&node_t::dist, g);
boost::property_map<Graph,int node_t::* >::type dist_prev = get(&node_t::dist_prev, g);
boost::SampleGraph::getSampleGraph(g,s,t,capacity,residual_capacity,cost,rev);
boost::successive_shortest_path_nonnegative_weights(g,s,t,
capacity,residual_capacity,cost,rev,vertex_indices,
pred,dist,dist_prev);
// The "bundled properties" version (producing errors)
int flow_cost = boost::find_flow_cost(g,capacity,residual_capacity,cost);
BOOST_CHECK_EQUAL(flow_cost, 29);
}
// Unit test written in order to fail (at compile time) if find_flow_cost()
// is not properly handling bundled properties
BOOST_AUTO_TEST_CASE(using_named_params_and_bundled_properties_with_find_max_flow_test)
{
Graph g;
traits::vertex_descriptor s,t;
boost::property_map<Graph,double edge_t::* >::type capacity = get(&edge_t::capacity, g);
boost::property_map<Graph,float edge_t::* >::type cost = get(&edge_t::cost, g);
boost::property_map<Graph,float edge_t::* >::type residual_capacity = get(&edge_t::residual_capacity, g);
boost::property_map<Graph,traits::edge_descriptor edge_t::* >::type rev = get(&edge_t::reversed_edge, g);
boost::property_map<Graph,traits::edge_descriptor node_t::* >::type pred = get(&node_t::predecessor, g);
boost::property_map<Graph,boost::vertex_index_t>::type vertex_indices = get(boost::vertex_index, g);
boost::property_map<Graph,int node_t::* >::type dist = get(&node_t::dist, g);
boost::property_map<Graph,int node_t::* >::type dist_prev = get(&node_t::dist_prev, g);
boost::SampleGraph::getSampleGraph(g,s,t,capacity,residual_capacity,cost,rev);
boost::successive_shortest_path_nonnegative_weights(g,s,t,
capacity,residual_capacity,cost,rev,vertex_indices,
pred,dist,dist_prev);
// The "named parameters" version (with "bundled properties"; producing errors)
int flow_cost = boost::find_flow_cost(g,
boost::capacity_map(capacity)
.residual_capacity_map(residual_capacity)
.weight_map(cost));
BOOST_CHECK_EQUAL(flow_cost, 29);
}
```
|
Krüller is the seventh studio album by one-man industrial metal band Author & Punisher, released on February 11, 2022 via Relapse Records. Produced by Tristan Shone of Author & Punisher and electronic musician Vytear, the album features contributions from Perturbator, as well as from Tool members Danny Carey and Justin Chancellor.
Background and recording
The early writing process for Krüller started in 2020, following the band's tour with Tool, which was cut short by the onset of the COVID-19 pandemic. On that tour, Shone recorded his live performances, which he analyzed and reflected upon after the tour ended, and he opted to focus on "the balance between heaviness and melody" on the next record. Building upon Shone's custom-built instruments, the "dub" and "drone" machines, the album was co-produced by IDM musician Jason Begin, who also aided in mixing duties. Begin also contributed a substantial portion of the beats and electronic programming on the track "Blacksmith". In addition, the album saw the re-introduction of guitars to Author & Punisher's music, with Phil Sgrosso's contributions. On the track "Centurion", Shone worked with Tool bassist Justin Chancellor, while the track "Misery" featured drumming contributions from Tool drummer Danny Carey, whom Shone had met prior to the 2020 tour. The latter track featured a programmed beat in addition to Carey's drumming. The cover of "Glory Box" by Portishead has been performed live by Shone since 2007.
Music and lyrics
With the re-introduction of guitars, Krüller showcases alternative rock, shoegaze and gothic metal influences. Pitchfork critic Brian Howe has noted that the record "sounds as much like Alice in Chains as it does Godflesh, Throbbing Gristle, or even Nine Inch Nails." AllMusic's Paul Simpson has stated that "tracks like "Incinerator" are reminiscent of the industrial side of '90s alternative metal, but drawn out and extra bleak and apocalyptic." Sam Law of Kerrang! noted that "the pounding, dissonant industrial of Godflesh and Scorn is still prevalent in the mix, but now it’s augmented by the eerie Blade Runner synths of Vangelis and vintage Nine Inch Nails’ ability to make inhuman sonic surfaces sweat sex and sleaze."
The topics covered in Krüller include climate change, war, survivalism and the COVID-19 pandemic, as well as the contemporary socio-political climate of the United States. According to Shone, the works of Octavia E. Butler, Ursula Le Guin, and Margaret Atwood influenced the lyrical themes of the album. The track "Drone Carrying Dread" was particularly inspired by Butler's book Parable of the Sower. The lyrics of "Centurion" cover the 2021 United States Capitol attack, while the track "Misery" was written in response to the controversy regarding the building of the Trump wall over an ancient burial site of the Kumeyaay nation. Shone has also described the track "Blacksmith" as "an ode to Black women–led movements who have been fearlessly leading the way against oppression in this country for a long time."
Critical reception
Krüller has received generally positive reviews. At Metacritic, which assigns a normalized rating out of 100 to reviews from mainstream critics, the album has an average score of 79 based on 6 reviews, indicating "generally favorable reviews". AllMusic critic Paul Simpson has noted the record to be "one of the most accessible-sounding Author & Punisher releases," while labeling it as "still vast and uncompromising." Dom Lawson of Blabbermouth.net described Krüller as "a triumph for the visceral potential of sound, and the emotional catharsis that comes when everything is cranked up to the absolute max," while Kerrang!'s Sam Law stated: "Ultimately, though, Krüller is best experienced not in its individual segments but as an overwhelming whole. The meld of muscle and mechanisation still demands that listeners hand themselves over entirely."
Alex Deller of Metal Hammer noted that the expansion of sounds on Krüller "allow Shone to paint his bleak, mono-chromatic visions with ever more subtle shades of grey." Metal Injection's Riley Rowe noted that the record "shows a human side of Shone, more right side of the brain, and an openness to more perspective." Writing for The Line of Best Fit, Kate Crudgington described the record as "a sonic purge that rages and recoils in equal measure, enhanced by collaboration, but with Shone remaining the master of ceremonies of his distinctive noise." Brian Howe of Pitchfork wrote: "If Krüller is warmed by a nostalgic human past, it also bears the chill of a posthuman future where the machines grind on without us, an intimation that seeps from his music like a corrosive fluid and lends these songs a bitter, heroic weight." Punknews.org critic John Gentile considered the record as "cyborg music driven by metal fingers, but the human heart is still intact."
Track listing
All tracks written by Tristan Shone except where noted.
Personnel
Tristan Shone — performance, production, mixing
Phil Sgrosso — guitar
Vytear — production
Jason Begin — mixing
Brad Boatright — mastering
Danny Carey — guest performance (5)
Justin Chancellor — guest performance (3)
Zlatko Mitev — artwork
References
External links
Krüller on Bandcamp
2022 albums
Author & Punisher albums
Relapse Records albums
Avant-garde metal albums
Albums about the COVID-19 pandemic
Science fiction albums
Works about survival skills
Works about American politics
|
Forced displacement and the experiences of refugees, asylum seekers and otherwise forcibly displaced people became of increasing interest in the popular culture since 2015 with the European migrant crisis.
Books
Fiction
Refugee Tales: Volume II: 2 by Jackie Kay et al., 2017
Exit West by Mohsin Hamid, 2017
Refugee Tales, by Ali Smith et al., 2016
What Is the What by Dave Eggers, 2006
Refugee Boy by Benjamin Zephaniah, 2001
Children's books
Where Will I Live? by Rosemary McCarney, 2017
Stormy Seas. Stories of Young Boat Refugees by Mary Beth Leatherdale, 2017
Stepping Stones. A Refugee Family's Journey by Margriet Ruurs, 2016
Refugee by Alan Gratz, 2016
Poems
Sisters' Entrance by Emtithal Mahmoud, 2018
Non-Fiction
The New Odyssey: The Story of Europe's Refugee Crisis by Patrick Kingsley, 2017
Cast Away: Stories of Survival from Europe's Refugee Crisis by Charlotte McDonald-Gibson, 2017
Refuge: Transforming a Broken Refugee System by Alexander Betts and Paul Collier, 2017
Violent Borders: Refugees and the Right to Move by Reece Jones, 2017
A Hope More Powerful Than the Sea by Melissa Fleming, 2017
Refugee Stories: Seven personal journeys behind the headlines by Dave Smith, 2016
City of Thorns: Nine Lives in the World's Largest Refugee Camp by Ben Rawlence, 2016
The Morning They Came for Us: Dispatches from Syria by Janine di Giovanni, 2016
Refugee Economies: Forced Displacement and Development by Alexander Betts et al., 2016
The Making of the Modern Refugee by Peter Gatrell, 2015
The Lightless Sky: My Journey to Safety as a Child Refugee, by Gulwali Passarlay, 2015
Human Cargo: A Journey Among Refugees by Caroline Moorehead, 2006
They Poured Fire on Us From the Sky by Judy A. Bernstein, 2005, is the story of three of the Lost Boys of Sudan
Escape From Manus, 2021 autobiography by Jaivet Ealom
Film
Drama
The Other Side of Hope by Aki Kaurismäki, 2017
Pawo by Marvin Litwak, 2016
Mediterranea by Jonas Carpignano, 2015
Ohthes by Panos Karkanevatos, 2015
Hope by Boris Lojkine, 2014
The Golden Dream by Diego Quemada-Díez, 2013
Le Havre by Aki Kaurismäki, 2011
In This World by Michael Winterbottom, 2002
Baran by Majid Majidi, 2001
Last Resort by Paweł Pawlikowski, 2000
Fantasy
Encanto by Jared Bush and Byron Howard, 2021
Documentary
Island of the Hungry Ghosts by Gabrielle Brady, 2018
Stop the boats by Simon V. Kurian, 2018
Watan by James L. Brown and Bill Irving, 2018
Human Flow by Ai Weiwei, 2017
Warehoused: The forgotten refugees of Dadaab by Asher Emmanuel and Vincent Vittorio, 2017
Refugee by Alexander J. Farrell, 2017
Re-Calais by Yann Moix, 2017
69 Minutes of 86 Days by Egil Håskjold Larsen, 2017
Hope Road by Tom Zubrycki, 2017
Sea Sorrow by Vanessa Redgrave, 2017
8 Borders, 8 Days by Amanda Bailly, 2017
Fire at Sea by Gianfranco Rosi, 2016
Ta'ang by Wang Bing, 2016
4.1 Miles by Daphne Matziaraki, 2016
Influx by Luca Vullo, 2016
The Art of Moving by Liliana Dulce Marinho de Sousa, 2016
Between Fences by Avi Mograbi, 2016
Stranger in Paradise by Guido Hendrikx, 2016
Rifles or Graffiti by Jordi Oriola Folch, 2016
Born in Syria by Hernán Zin, 2016
The Invisible City: Kakuma by Lieven Corthouts, 2016
Salam Neighbor by Chris Temple and Zach Ingrasci, 2015
Beats of the Antonov by Hajooj Kuka, 2014
On The Bride's Side by Antonio Augugliaro et al., 2014
Moving to Mars by Mat Whitecross, 2009
Lost Boys of Sudan by Megan Mylan and Jon Shenk, 2003
Short film
I am Rebecca by Eve Doherty and Kate McCaslin, 2017
I felt it too by Iamia Aboukheir, 2017
Only My Voice by Myriam Rey, 2017
Refugee by Joyce Chen and Emily Moore, 2016
Refugee by Adam Tyler, 2016
Lifestories: The Lost Boys of Sudan by J.D. Martin, 2008
TV series
Exodus: Our Journey was shown on BBC Two in 2017. It started with 3 episodes called Exodus: Our Journey to Europe and was followed by 3 episodes called Exodus: Our Journey Continues
Theatre
The Claim by Tim Cowbury and Mark Maughan, 2017
The Jungle by Joe Murphy and Joe Robertson, 2017
Fireworks by Dalia Taha and Richard Twyman, 2015
Refugee Boy by Lemn Sissay, 2013
Refugees by Zlatko Topčić, 1999
Painting
Refugees by Jēkabs Kazaks, 1917
See also
Asylum seekers
Lost Boys of Sudan#Books, films and plays
Refugees
Refugee employment
References
Refugees
|
Nietupa is a village in the administrative district of Gmina Krynki, within Sokółka County, Podlaskie Voivodeship, in north-eastern Poland, close to the border with Belarus.
References
Nietupa
|
```php
<?php
class RequestsTests_Encoding extends PHPUnit_Framework_TestCase {
protected static function mapData($type, $data) {
$real_data = array();
foreach ($data as $value) {
$key = $type . ': ' . $value[0];
$real_data[$key] = $value;
}
return $real_data;
}
public static function gzipData() {
return array(
array(
'foobar',
"\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03\x4b\xcb\xcf\x4f\x4a"
. "\x2c\x02\x00\x95\x1f\xf6\x9e\x06\x00\x00\x00",
),
array(
'Requests for PHP',
"\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03\x0b\x4a\x2d\x2c\x4d"
. "\x2d\x2e\x29\x56\x48\xcb\x2f\x52\x08\xf0\x08\x00\x00\x58\x35"
. "\x18\x17\x10\x00\x00\x00",
),
);
}
public static function deflateData() {
return array(
array(
'foobar',
"\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03\x78\x9c\x4b\xcb\xcf"
. "\x4f\x4a\x2c\x02\x00\x08\xab\x02\x7a"
),
array(
'Requests for PHP',
"\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03\x78\x9c\x0b\x4a\x2d"
. "\x2c\x4d\x2d\x2e\x29\x56\x48\xcb\x2f\x52\x08\xf0\x08\x00\x00"
. "\x34\x68\x05\xcc"
)
);
}
public static function deflateWithoutHeadersData() {
return array(
array(
'foobar',
"\x78\x9c\x4b\xcb\xcf\x4f\x4a\x2c\x02\x00\x08\xab\x02\x7a"
),
array(
'Requests for PHP',
"\x78\x9c\x0b\x4a\x2d\x2c\x4d\x2d\x2e\x29\x56\x48\xcb\x2f\x52"
. "\x08\xf0\x08\x00\x00\x34\x68\x05\xcc"
)
);
}
public static function encodedData() {
$datasets = array();
$datasets['gzip'] = self::gzipData();
$datasets['deflate'] = self::deflateData();
$datasets['deflate without zlib headers'] = self::deflateWithoutHeadersData();
$data = array();
foreach ($datasets as $key => $set) {
$real_set = self::mapData($key, $set);
$data = array_merge($data, $real_set);
}
return $data;
}
/**
* @dataProvider encodedData
*/
public function testDecompress($original, $encoded) {
$decoded = Requests::decompress($encoded);
$this->assertEquals($original, $decoded);
}
/**
* @dataProvider encodedData
*/
public function testCompatibleInflate($original, $encoded) {
$decoded = Requests::compatible_gzinflate($encoded);
$this->assertEquals($original, $decoded);
}
protected function bin2hex($field) {
$field = bin2hex($field);
$field = chunk_split($field,2,"\\x");
$field = "\\x" . substr($field,0,-2);
return $field;
}
}
```
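For cross-checking the fixtures outside PHP, the same three encodings exercised above (gzip, zlib-wrapped deflate, and header-less raw deflate) can be decoded with Python's zlib module by varying the wbits parameter. This is a rough analogue of the decompression under test, assuming nothing about Requests' internals:

```python
import zlib

def decompress_any(data: bytes) -> bytes:
    """Try gzip, zlib-wrapped deflate, then raw deflate, mirroring the
    three data providers in the PHP test above."""
    for wbits in (zlib.MAX_WBITS | 16,   # gzip (RFC 1952) framing
                  zlib.MAX_WBITS,        # zlib (RFC 1950) framing
                  -zlib.MAX_WBITS):      # raw deflate, no headers
        try:
            return zlib.decompress(data, wbits)
        except zlib.error:
            continue
    raise ValueError("unrecognized compression format")
```

Trying the framed formats first is safe because their header checks reject mismatched input before any data is emitted, so the raw-deflate fallback only runs when both headers are absent.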
|
Yevgeny Valeryevich Pomazan (; born 31 January 1989) is a Russian footballer.
Career
Pomazan previously played for FC Kuban Krasnodar. He was part of the Russian Under 17 squad that won the 2006 UEFA European Under-17 Football Championship. In 2007, he represented Europe at the Meridian Cup, which the European team won 10–1 on aggregate over two games.
On the final day of the transfer window in the summer of 2007, Pomazan signed on loan with CSKA Moscow. He made his Russian Premier League debut for CSKA Moscow on 28 October 2007 in a 4–2 game against FC Krylia Sovetov Samara.
In June 2013, Pomazan joined newly promoted Ural Sverdlovsk Oblast on a season-long loan.
On 2 July 2014, Pomazan joined Kuban Krasnodar on a season-long loan deal. He played only one game for Kuban, in the Russian Cup.
Career honours
Russian Cup: 2008, 2009
References
External links
CSKA Moscow profile
1989 births
People from Tashkent Region
Living people
Russian men's footballers
Russia men's youth international footballers
Russia men's under-21 international footballers
Russia men's B international footballers
Men's association football goalkeepers
FC Kuban Krasnodar players
PFC CSKA Moscow players
FC Ural Yekaterinburg players
PFC Spartak Nalchik players
FC Anzhi Makhachkala players
FC Baltika Kaliningrad players
FC Dinamo Minsk players
FC Chayka Peschanokopskoye players
FC SKA-Khabarovsk players
Russian Premier League players
Russian First League players
Russian Second League players
Belarusian Premier League players
Russian expatriate men's footballers
Expatriate men's footballers in Belarus
Russian expatriate sportspeople in Belarus
|
```swift
//
// RandomColorizationTests.swift
// RandomColorizationTests
//
// Created by Allen on 16/1/14.
//
import XCTest
@testable import RandomColorization
class RandomColorizationTests: XCTestCase {
override func setUp() {
super.setUp()
// Put setup code here. This method is called before the invocation of each test method in the class.
}
override func tearDown() {
// Put teardown code here. This method is called after the invocation of each test method in the class.
super.tearDown()
}
func testExample() {
// This is an example of a functional test case.
// Use XCTAssert and related functions to verify your tests produce the correct results.
}
func testPerformanceExample() {
// This is an example of a performance test case.
self.measure {
// Put the code you want to measure the time of here.
}
}
}
```
|
Franz Xaver Johann Nepomuk Graf Saint-Julien und Walsee (French: François-Xavier de Guyard, comte de Saint-Julien) (baptised 12 October 1756; died 16 January 1836 in Skalička) was an Austrian infantry commander during the French Revolutionary Wars and the War of the Fifth Coalition.
Footnotes
Austrian Empire military leaders of the French Revolutionary Wars
Austrian Empire commanders of the Napoleonic Wars
1756 births
1836 deaths
|
1999 Kyoto Purple Sanga season
Competitions
Domestic results
J.League 1
Emperor's Cup
J.League Cup
Player statistics
Other pages
J.League official site
Kyoto Purple Sanga
Kyoto Sanga FC seasons
|
```c++
/*
 * See path_to_url for terms and conditions.
 */
#include "V82JSC.h"
#include "ObjectTemplate.h"
#include "Object.h"
#include "JSCPrivate.h"
#include <string.h>
using namespace V82JSC;
using namespace v8;
static GenericNamedPropertyGetterCallback NullNamedGetter =
[](Local<Name> property, const PropertyCallbackInfo<v8::Value>& info) {};
static GenericNamedPropertySetterCallback NullNamedSetter =
[](Local<Name> property, Local<v8::Value> value, const PropertyCallbackInfo<v8::Value>& info) {};
static GenericNamedPropertyDescriptorCallback NullNamedDescriptor =
[](Local<Name> property, const PropertyCallbackInfo<v8::Value>& info) {};
static GenericNamedPropertyDeleterCallback NullNamedDeleter =
[](Local<Name> property, const PropertyCallbackInfo<v8::Boolean>& info) {};
static GenericNamedPropertyEnumeratorCallback NullNamedEnumerator =
[](const PropertyCallbackInfo<Array>& info) {};
static GenericNamedPropertyDefinerCallback NullNamedDefiner =
[](Local<Name> property, const PropertyDescriptor& desc, const PropertyCallbackInfo<v8::Value>& info) {};
static GenericNamedPropertyQueryCallback NullNamedQuery =
[](Local<Name> property, const PropertyCallbackInfo<Integer>& info) {};
static IndexedPropertyGetterCallback NullIndexedGetter =
[](uint32_t index, const PropertyCallbackInfo<v8::Value>& info) {};
static IndexedPropertySetterCallback NullIndexedSetter =
[](uint32_t index, Local<v8::Value> value, const PropertyCallbackInfo<v8::Value>& info) {};
static IndexedPropertyDescriptorCallback NullIndexedDescriptor =
[](uint32_t index, const PropertyCallbackInfo<v8::Value>& info) {};
static IndexedPropertyDeleterCallback NullIndexedDeleter =
[](uint32_t index, const PropertyCallbackInfo<v8::Boolean>& info) {};
static IndexedPropertyEnumeratorCallback NullIndexedEnumerator =
[](const PropertyCallbackInfo<Array>& info) {};
static IndexedPropertyDefinerCallback NullIndexedDefiner =
[](uint32_t index, const PropertyDescriptor& desc, const PropertyCallbackInfo<v8::Value>& info) {};
static IndexedPropertyQueryCallback NullIndexedQuery =
[](uint32_t index, const PropertyCallbackInfo<Integer>& info) {};
struct DefaultNamedHandlers : public NamedPropertyHandlerConfiguration
{
DefaultNamedHandlers() : NamedPropertyHandlerConfiguration
(NullNamedGetter, NullNamedSetter, NullNamedDescriptor, NullNamedDeleter,
NullNamedEnumerator, NullNamedDefiner) { query = NullNamedQuery; }
};
struct DefaultIndexedHandlers : public IndexedPropertyHandlerConfiguration
{
DefaultIndexedHandlers() : IndexedPropertyHandlerConfiguration
(NullIndexedGetter, NullIndexedSetter, NullIndexedDescriptor, NullIndexedDeleter,
NullIndexedEnumerator, NullIndexedDefiner) { query = NullIndexedQuery; }
};
#define THROW_ACCESS_ERROR() \
info.GetIsolate()->ThrowException(Exception::TypeError(v8::String::NewFromUtf8(info.GetIsolate(), "access denied", \
v8::NewStringType::kNormal).ToLocalChecked()));
struct AccessDeniedNamedHandlers : public NamedPropertyHandlerConfiguration
{
AccessDeniedNamedHandlers() : NamedPropertyHandlerConfiguration
(
[](Local<Name> property, const PropertyCallbackInfo<v8::Value>& info) { THROW_ACCESS_ERROR() },
[](Local<Name> property, Local<v8::Value> value, const PropertyCallbackInfo<v8::Value>& info) { THROW_ACCESS_ERROR() },
[](Local<Name> property, const PropertyCallbackInfo<v8::Value>& info) { THROW_ACCESS_ERROR() },
[](Local<Name> property, const PropertyCallbackInfo<v8::Boolean>& info) { THROW_ACCESS_ERROR() },
[](const PropertyCallbackInfo<Array>& info) { THROW_ACCESS_ERROR() },
[](Local<Name> property, const PropertyDescriptor& desc, const PropertyCallbackInfo<v8::Value>& info) { THROW_ACCESS_ERROR() }
)
{
query = [](Local<Name> property, const PropertyCallbackInfo<Integer>& info) { THROW_ACCESS_ERROR() };
}
};
struct AccessDeniedIndexedHandlers : public IndexedPropertyHandlerConfiguration
{
AccessDeniedIndexedHandlers() : IndexedPropertyHandlerConfiguration
(
[](uint32_t index, const PropertyCallbackInfo<v8::Value>& info) { THROW_ACCESS_ERROR() },
[](uint32_t index, Local<v8::Value> value, const PropertyCallbackInfo<v8::Value>& info) { THROW_ACCESS_ERROR() },
[](uint32_t index, const PropertyCallbackInfo<v8::Value>& info) { THROW_ACCESS_ERROR() },
[](uint32_t index, const PropertyCallbackInfo<v8::Boolean>& info) { THROW_ACCESS_ERROR() },
[](const PropertyCallbackInfo<Array>& info) { THROW_ACCESS_ERROR() },
[](uint32_t index, const PropertyDescriptor& desc, const PropertyCallbackInfo<v8::Value>& info) { THROW_ACCESS_ERROR() }
)
{
query = [](uint32_t index, const PropertyCallbackInfo<Integer>& info) { THROW_ACCESS_ERROR() };
}
};
/** Creates an ObjectTemplate. */
Local<v8::ObjectTemplate> v8::ObjectTemplate::New(
Isolate* isolate,
Local<FunctionTemplate> constructor)
{
EscapableHandleScope scope(isolate);
if (!constructor.IsEmpty()) {
return scope.Escape(constructor->InstanceTemplate());
} else {
auto otempl = static_cast<V82JSC::ObjectTemplate*>
(HeapAllocator::Alloc(ToIsolateImpl(isolate),
ToIsolateImpl(isolate)->m_object_template_map));
otempl->m_constructor_template.Reset();
otempl->m_named_data = 0;
otempl->m_indexed_data = 0;
otempl->m_named_handler = DefaultNamedHandlers();
otempl->m_indexed_handler = DefaultIndexedHandlers();
otempl->m_named_failed_access_handler = AccessDeniedNamedHandlers();
otempl->m_indexed_failed_access_handler = AccessDeniedIndexedHandlers();
return scope.Escape(CreateLocal<ObjectTemplate>(isolate, otempl));
}
}
/** Get a template included in the snapshot by index. */
MaybeLocal<v8::ObjectTemplate> v8::ObjectTemplate::FromSnapshot(Isolate* isolate,
size_t index)
{
NOT_IMPLEMENTED;
}
/** Creates a new instance of this template.*/
MaybeLocal<Object> v8::ObjectTemplate::NewInstance(Local<Context> context)
{
auto impl = ToImpl<V82JSC::ObjectTemplate>(this);
auto ctx = ToContextImpl(context);
IsolateImpl* iso = ToIsolateImpl(ctx);
Isolate* isolate = ToIsolate(iso);
EscapableHandleScope scope(isolate);
Context::Scope context_scope(context);
// Temporarily disable access checks until we are done setting up the object
DisableAccessChecksScope disable_scope(iso, impl);
Local<ObjectTemplate> thiz = CreateLocal<ObjectTemplate>(&iso->ii, impl);
LocalException exception(iso);
JSObjectRef instance = 0;
if (!impl->m_constructor_template.IsEmpty()) {
MaybeLocal<Function> ctor = impl->m_constructor_template.Get(isolate)->GetFunction(context);
if (!ctor.IsEmpty()) {
JSValueRef ctor_func = ToJSValueRef(ctor.ToLocalChecked(), context);
instance = JSObjectCallAsConstructor(ctx->m_ctxRef, (JSObjectRef)ctor_func, 0, 0, &exception);
return scope.Escape(V82JSC::Value::New(ctx, instance).As<Object>());
} else {
return MaybeLocal<Object>();
}
} else if (impl->m_callback) {
JSClassDefinition def = kJSClassDefinitionEmpty;
if (impl->m_callback) {
def.callAsFunction = V82JSC::Template::callAsFunctionCallback;
def.callAsConstructor = V82JSC::Template::callAsConstructorCallback;
}
JSClassRef claz = JSClassCreate(&def);
void * data = PersistentData<ObjectTemplate>(isolate, thiz);
def.finalize = [](JSObjectRef obj) {
void *data = JSObjectGetPrivate(obj);
ReleasePersistentData<ObjectTemplate>(data);
};
instance = JSObjectMake(ctx->m_ctxRef, claz, data);
} else {
instance = JSObjectMake(ctx->m_ctxRef, 0, 0);
}
MaybeLocal<Object> o = impl->NewInstance(context, instance, false);
if (o.IsEmpty()) {
return MaybeLocal<Object>();
}
return scope.Escape(o.ToLocalChecked());
}
#undef O
#define O(v) reinterpret_cast<v8::internal::Object*>(v)
#define CALLBACK_PARAMS JSContextRef ctx, JSObjectRef function, JSObjectRef thisObject, \
size_t argumentCount, const JSValueRef arguments[], JSValueRef* exception
#define PASS ctx, function, thisObject, argumentCount, arguments, exception
class InterceptorGetter {};
class InterceptorSetter {};
class InterceptorOther {};
template <typename V, typename I>
JSValueRef PropertyHandler(CALLBACK_PARAMS,
void (*named_handler)(const V82JSC::ObjectTemplate*, Local<Name>, Local<v8::Value>, PropertyCallbackInfo<V>&, const NamedPropertyHandlerConfiguration&),
void (*indexed_handler)(const V82JSC::ObjectTemplate*, uint32_t, Local<v8::Value>, PropertyCallbackInfo<V>&, const IndexedPropertyHandlerConfiguration&))
{
// Arguments:
// get - target, property, receiver -> Value
// set - target, property, value, receiver -> True (assigned), False (not assigned)
// deleteProperty - target, property -> True (deleted), False (not deleted)
// has - target, property -> True (has), False (not has)
// ownKeys - target -> Array of keys
IsolateImpl *isolateimpl = IsolateFromCtx(ctx);
Isolate *isolate = ToIsolate(isolateimpl);
v8::Locker lock(isolate);
HandleScope scope(isolate);
*exception = 0;
assert(argumentCount > 0);
JSValueRef excp = 0;
JSObjectRef target = (JSObjectRef) arguments[0];
JSStringRef propertyName = 0;
bool isSymbol = false;
bool isIndex = false;
int index = 0;
auto thread = IsolateImpl::PerThreadData::Get(isolateimpl);
if (argumentCount > 1) {
isSymbol = JSValueToBoolean(ctx, exec(ctx, "return typeof _1 === 'symbol'", 1, &arguments[1]));
if (!isSymbol) {
propertyName = JSValueToStringCopy(ctx, arguments[1], &excp);
} else {
JSValueRef args[] = {
arguments[1],
isolateimpl->m_private_symbol
};
if (JSValueToBoolean(ctx, exec(ctx, "return _1 === _2", 2, args))) {
return NULL;
}
}
assert(excp==0);
} else {
propertyName = JSStringCreateWithUTF8CString("NONE");
}
if (!isSymbol) {
size_t size = JSStringGetMaximumUTF8CStringSize(propertyName);
char property[size];
JSStringGetUTF8CString(propertyName, property, size);
char *p = nullptr;
index = static_cast<int>(strtol(property, &p, 10));
if (p && (!strcmp(p, "constructor") || !strcmp(p, "__proto__") )) {
return NULL;
}
if (!p || *p==0) isIndex = true;
}
int receiver_loc = std::is_same<I,InterceptorGetter>::value ? 2 : std::is_same<I,InterceptorSetter>::value ? 3 : 0;
JSValueRef value;
if (argumentCount > 2) {
value = arguments[2];
} else {
value = JSValueMakeUndefined(ctx);
}
auto wrap = V82JSC::TrackedObject::getPrivateInstance(ctx, target);
auto templ = ToImpl<V82JSC::ObjectTemplate>(wrap->m_object_template.Get(isolate));
Local<v8::Context> context = LocalContext::New(ToIsolate(isolateimpl), ctx);
v8::Context::Scope context_scope(context);
auto ctximpl = ToContextImpl(context);
Local<v8::Value> holder = V82JSC::Value::New(ctximpl, wrap->m_proxy_security);
#ifdef USE_JAVASCRIPTCORE_PRIVATE_API
JSGlobalContextRef creation_context = JSCPrivate::JSObjectGetGlobalContext(target);
#else
JSGlobalContextRef creation_context = JSContextGetGlobalContext(ctx);
#endif
bool ok = wrap->m_isGlobalObject && creation_context == JSContextGetGlobalContext(ctx);
if (!ok && templ->m_access_check) {
ok = templ->m_access_check(context,
V82JSC::Value::New(ctximpl, target).As<Object>(),
V82JSC::Value::New(ctximpl, templ->m_access_check_data));
} else {
ok = true;
}
Local<v8::Value> data;
if (!ok) {
if (isSymbol || !isIndex) { /* Is named */
data = V82JSC::Value::New(ctximpl, templ->m_failed_named_data);
} else { /* Is Indexed */
data = V82JSC::Value::New(ctximpl, templ->m_failed_indexed_data);
}
} else {
if (isSymbol || !isIndex) { /* Is named */
data = V82JSC::Value::New(ctximpl, templ->m_named_data);
} else { /* Is Indexed */
data = V82JSC::Value::New(ctximpl, templ->m_indexed_data);
}
}
++ thread->m_callback_depth;
Local<v8::Value> thiz = V82JSC::Value::New(ctximpl, arguments[receiver_loc]);
typedef v8::internal::Heap::RootListIndex R;
internal::Object *the_hole = isolateimpl->ii.heap()->root(R::kTheHoleValueRootIndex);
// FIXME: I can think of no way to determine whether we were called from strict mode or not
bool isStrict = false;
internal::Object *shouldThrow = internal::Smi::FromInt(isStrict?1:0);
v8::internal::Object * implicit[] = {
shouldThrow, // kShouldThrowOnErrorIndex = 0;
* reinterpret_cast<v8::internal::Object**>(*holder), // kHolderIndex = 1;
O(isolateimpl), // kIsolateIndex = 2;
the_hole, // kReturnValueDefaultValueIndex = 3;
the_hole, // kReturnValueIndex = 4;
* reinterpret_cast<v8::internal::Object**>(*data), // kDataIndex = 5;
* reinterpret_cast<v8::internal::Object**>(*thiz), // kThisIndex = 6;
};
PropertyCallback<V> info(implicit);
Local<v8::Value> set = V82JSC::Value::New(ctximpl, value);
thread->m_scheduled_exception = the_hole;
TryCatch try_catch(ToIsolate(isolateimpl));
if (isSymbol || !isIndex) {
Local<Name> prop = Local<Name>();
if (argumentCount>1) {
prop = V82JSC::Value::New(ctximpl, arguments[1]).As<Name>();
}
named_handler(templ, prop, set, info, ok ? templ->m_named_handler : templ->m_named_failed_access_handler);
} else {
indexed_handler(templ, index, set, info, ok ? templ->m_indexed_handler : templ->m_indexed_failed_access_handler);
}
if (try_catch.HasCaught()) {
*exception = ToJSValueRef(try_catch.Exception(), context);
} else if (thread->m_scheduled_exception != the_hole) {
internal::Object * excep = thread->m_scheduled_exception;
*exception = ToJSValueRef_<v8::Value>(excep, context);
thread->m_scheduled_exception = the_hole;
}
-- thread->m_callback_depth;
if (implicit[4] == the_hole) {
return NULL;
}
Local<v8::Value> retVal = info.GetReturnValue().Get();
return ToJSValueRef<v8::Value>(retVal, context);
}
#define NAMED_PARAMS(R) const V82JSC::ObjectTemplate* impl, Local<Name> property, Local<v8::Value> value, \
PropertyCallbackInfo<R>& info, const NamedPropertyHandlerConfiguration& config
#define INDEXED_PARAMS(R) const V82JSC::ObjectTemplate* impl, uint32_t index, Local<v8::Value> value, \
PropertyCallbackInfo<R>& info, const IndexedPropertyHandlerConfiguration& config
static inline bool inGlobalPrototypeChain(JSContextRef ctx, JSObjectRef obj)
{
JSObjectRef global = JSContextGetGlobalObject(ctx);
while (JSValueIsObject(ctx, global)) {
if (JSValueIsStrictEqual(ctx, global, obj)) return true;
global = (JSObjectRef) JSObjectGetPrototype(ctx, global);
}
return false;
}
static JSValueRef proxy_get(CALLBACK_PARAMS)
{
JSValueRef ret = PropertyHandler<v8::Value,InterceptorGetter>
(
PASS, [](NAMED_PARAMS(v8::Value)) { config.getter(property, info); },
[](INDEXED_PARAMS(v8::Value)) { config.getter(index, info); }
);
if (ret == NULL && !*exception) {
// Not handled. Pass thru.
assert(argumentCount>1);
// If the receiver is not the proxy, do the 'get' via the prototype so that any
// signature checks can be maintained properly
auto wrap = V82JSC::TrackedObject::getPrivateInstance(ctx, (JSObjectRef)arguments[0]);
if (!JSValueIsStrictEqual(ctx, wrap->m_proxy_security, arguments[2])) {
JSObjectRef temp1 = JSObjectMake(ctx, 0, 0);
JSObjectSetPrototype(ctx, temp1, arguments[0]);
JSValueRef args[] = {
temp1,
arguments[1]
};
return exec(ctx, "return Reflect.get(_1,_2)", 2, args, exception);
}
return exec(ctx, "return Reflect.get(_1,_2)", 2, arguments, exception);
}
return ret;
}
static JSValueRef legacy_proxy_get(JSContextRef ctx, JSObjectRef object,
JSStringRef propertyName, JSValueRef* exception)
{
if (JSStringGetLength(propertyName) == 0 || !inGlobalPrototypeChain(ctx, object)) return NULL;
JSValueRef args[] = {
object, JSValueMakeString(ctx, propertyName), object
};
JSValueRef ret = PropertyHandler<v8::Value,InterceptorGetter>
(
ctx, object, object, 3, args, exception,
[](NAMED_PARAMS(v8::Value)) { config.getter(property, info); },
[](INDEXED_PARAMS(v8::Value)) { config.getter(index, info); }
);
return ret;
}
static JSValueRef proxy_set(CALLBACK_PARAMS)
{
JSValueRef ret = PropertyHandler<v8::Value,InterceptorSetter>
(PASS, [](NAMED_PARAMS(v8::Value)) { config.setter(property, value, info); },
[](INDEXED_PARAMS(v8::Value)) { config.setter(index, value, info); }
);
if (*exception) {
return JSValueMakeBoolean(ctx, false);
}
if (ret == NULL) {
assert(argumentCount>2);
auto wrap = V82JSC::TrackedObject::getPrivateInstance(ctx, (JSObjectRef)arguments[0]);
assert(wrap);
// If the receiver is not the proxy, do the 'set' via the prototype so that any
// signature checks can be maintained properly
if (!JSValueIsStrictEqual(ctx, wrap->m_proxy_security, arguments[3])) {
JSValueRef args[] = {
arguments[0],
arguments[1],
arguments[2],
arguments[3],
0
};
args[4] = JSObjectMake(ctx, 0, 0);
JSObjectSetPrototype(ctx, (JSObjectRef)args[4], arguments[0]);
return exec(ctx, "Reflect.set(_5,_2,_3); return Reflect.set(_1,_2,_3)", 5, args, exception);
}
return exec(ctx, "return Reflect.set(_1,_2,_3)", 3, arguments, exception);
}
return JSValueMakeBoolean(ctx, true);
}
static bool legacy_proxy_set(JSContextRef ctx, JSObjectRef object, JSStringRef propertyName,
JSValueRef value, JSValueRef* exception)
{
if (JSStringGetLength(propertyName) == 0 || !inGlobalPrototypeChain(ctx, object)) return false;
JSValueRef args[] = {
object, JSValueMakeString(ctx, propertyName), value, object
};
JSValueRef ret = PropertyHandler<v8::Value,InterceptorSetter>
(ctx, object, object, 4, args, exception,
[](NAMED_PARAMS(v8::Value)) { config.setter(property, value, info); },
[](INDEXED_PARAMS(v8::Value)) { config.setter(index, value, info); }
);
if (*exception || ret == NULL) {
return false;
}
return true;
}
static JSValueRef proxy_has(CALLBACK_PARAMS)
{
JSValueRef ret = PropertyHandler<Integer,InterceptorOther>
(PASS,
[](NAMED_PARAMS(Integer)) {
if (config.query != NullNamedQuery) config.query(property, info);
else if(config.getter != NullNamedGetter) info.GetReturnValue().Set(v8::PropertyAttribute::None);
},
[](INDEXED_PARAMS(Integer)) {
if (config.query != NullIndexedQuery) config.query(index, info);
else if(config.getter != NullIndexedGetter) info.GetReturnValue().Set(v8::PropertyAttribute::None);
}
);
if (*exception) {
return JSValueMakeBoolean(ctx, false);
}
if (ret == NULL) {
assert(argumentCount>1);
return exec(ctx, "return _1.hasOwnProperty(_2)", 2, arguments, exception);
}
return JSValueMakeBoolean(ctx, !JSValueIsUndefined(ctx, ret));
}
static bool legacy_proxy_has(JSContextRef ctx, JSObjectRef object, JSStringRef propertyName)
{
if (JSStringGetLength(propertyName) == 0 || !inGlobalPrototypeChain(ctx, object)) return false;
JSValueRef args[] = {
object, JSValueMakeString(ctx, propertyName)
};
JSValueRef exception = 0;
JSValueRef ret = PropertyHandler<Integer,InterceptorOther>
(ctx, object, object, 2, args, &exception,
[](NAMED_PARAMS(Integer)) {
if (config.query != NullNamedQuery) config.query(property, info);
else if(config.getter != NullNamedGetter) config.getter(property, reinterpret_cast<PropertyCallbackInfo<v8::Value>&>(info));
},
[](INDEXED_PARAMS(Integer)) {
if (config.query != NullIndexedQuery) config.query(index, info);
else if(config.getter != NullIndexedGetter) config.getter(index, reinterpret_cast<PropertyCallbackInfo<v8::Value>&>(info));
}
);
if (exception || ret == NULL) {
return false;
}
return true;
}
static JSValueRef proxy_deleteProperty(CALLBACK_PARAMS)
{
JSValueRef ret = PropertyHandler<v8::Boolean,InterceptorOther>
(PASS, [](NAMED_PARAMS(v8::Boolean)) { config.deleter(property, info); },
[](INDEXED_PARAMS(v8::Boolean)) { config.deleter(index, info); });
if (!*exception && ret == NULL) {
assert(argumentCount>1);
return exec(ctx, "return Reflect.deleteProperty(_1,_2)", 2, arguments, exception);
}
return ret;
}
static bool legacy_proxy_deleteProperty(JSContextRef ctx, JSObjectRef object, JSStringRef propertyName,
JSValueRef* exception)
{
if (JSStringGetLength(propertyName) == 0 || !inGlobalPrototypeChain(ctx, object)) return false;
JSValueRef args[] = {
object, JSValueMakeString(ctx, propertyName)
};
JSValueRef ret = PropertyHandler<v8::Boolean,InterceptorOther>
(ctx, object, object, 2, args, exception,
[](NAMED_PARAMS(v8::Boolean)) { config.deleter(property, info); },
[](INDEXED_PARAMS(v8::Boolean)) { config.deleter(index, info); });
if (ret==NULL || *exception) {
return false;
}
return true;
}
static JSValueRef proxy_ownKeys(CALLBACK_PARAMS)
{
JSValueRef ret = PropertyHandler<v8::Array,InterceptorOther>
(PASS, [](NAMED_PARAMS(v8::Array)) { config.enumerator(info); },
[](INDEXED_PARAMS(v8::Array)) { config.enumerator(info); });
if (!*exception && ret == NULL) {
IsolateImpl *iso = IsolateFromCtx(ctx);
assert(argumentCount>0);
JSValueRef args[] = {
arguments[0],
iso->m_private_symbol
};
return exec(ctx,
"return Array.from(new Set("
" Object.getOwnPropertyNames(_1)"
" .concat(Object.getOwnPropertySymbols(_1))"
")).filter(p => p!==_2)",
2, args, exception);
}
return ret;
}
static void legacy_proxy_ownKeys(JSContextRef ctx, JSObjectRef object,
JSPropertyNameAccumulatorRef acc)
{
if (!inGlobalPrototypeChain(ctx, object)) return;
JSValueRef exception = 0;
JSValueRef ret = PropertyHandler<v8::Array,InterceptorOther>
(ctx, object, object, 1, &object, &exception,
[](NAMED_PARAMS(v8::Array)) { config.enumerator(info); },
[](INDEXED_PARAMS(v8::Array)) { config.enumerator(info); });
if (!exception && ret) {
int length = static_cast<int>(JSValueToNumber(ctx, exec(ctx, "return _1.length", 1, &ret), 0));
for (int i=0; !exception && i<length; i++) {
JSValueRef name = JSObjectGetPropertyAtIndex(ctx, (JSObjectRef)ret, i, &exception);
JSStringRef s = JSValueToStringCopy(ctx, name, 0);
if (JSStringGetLength(s)) {
JSPropertyNameAccumulatorAddName(acc, s);
}
}
}
}
static JSValueRef proxy_defineProperty(CALLBACK_PARAMS)
{
assert(argumentCount>2);
JSValueRef ret = PropertyHandler<v8::Value,InterceptorOther>
(PASS, [](NAMED_PARAMS(v8::Value)) { config.definer(property, value, info); },
[](INDEXED_PARAMS(v8::Value)) { config.definer(index, value, info); });
if (!*exception && ret == NULL) {
return exec(ctx, "return Object.defineProperty(_1, _2, _3)", 3, arguments, exception);
}
return ret;
}
static JSValueRef proxy_getPrototypeOf(CALLBACK_PARAMS)
{
assert(argumentCount>0);
auto wrap = V82JSC::TrackedObject::getPrivateInstance(ctx, (JSObjectRef)arguments[0]);
Isolate *isolate = ToIsolate(IsolateFromCtx(ctx));
HandleScope scope(isolate);
Local<v8::Context> context = LocalContext::New(isolate, ctx);
auto templ = ToImpl<V82JSC::ObjectTemplate>(wrap->m_object_template.Get(isolate));
if (templ->m_access_check)
{
return JSValueMakeNull(ctx);
}
Local<v8::Value> proto = V82JSC::Value::New(ToContextImpl(context),
arguments[0]).As<Object>()->GetPrototype();
return ToJSValueRef(proto, context);
}
static JSValueRef proxy_setPrototypeOf(CALLBACK_PARAMS)
{
assert(argumentCount>1);
auto wrap = V82JSC::TrackedObject::getPrivateInstance(ctx, (JSObjectRef)arguments[0]);
Isolate *isolate = ToIsolate(IsolateFromCtx(ctx));
HandleScope scope(isolate);
Local<v8::Context> context = LocalContext::New(isolate, ctx);
auto templ = ToImpl<V82JSC::ObjectTemplate>(wrap->m_object_template.Get(isolate));
if (templ->m_access_check && !templ->m_access_check(context,
V82JSC::Value::New(ToContextImpl(context), arguments[0]).As<Object>(),
V82JSC::Value::New(ToContextImpl(context), templ->m_access_check_data)))
{
isolate->ThrowException(Exception::TypeError(v8::String::NewFromUtf8(isolate, "access denied",
NewStringType::kNormal).ToLocalChecked()));
}
Maybe<bool> r = V82JSC::Value::New(ToContextImpl(context),arguments[0])
.As<Object>()->SetPrototype(context, V82JSC::Value::New(ToContextImpl(context), arguments[1]));
return JSValueMakeBoolean(ctx, r.FromJust());
}
static JSValueRef proxy_getOwnPropertyDescriptor(CALLBACK_PARAMS)
{
assert(argumentCount>1);
// First, try a descriptor interceptor
JSValueRef descriptor = PropertyHandler<v8::Value,InterceptorOther>
(PASS, [](NAMED_PARAMS(v8::Value)) { config.descriptor(property, info); },
[](INDEXED_PARAMS(v8::Value)) { config.descriptor(index, info); });
if (descriptor) return descriptor;
if (exception && *exception) return NULL;
// Second, see if we have a real property descriptor
descriptor = exec(ctx, "return Object.getOwnPropertyDescriptor(_1, _2)", 2, arguments, exception);
if (descriptor && !JSValueIsStrictEqual(ctx, descriptor, JSValueMakeUndefined(ctx))) return descriptor;
if (exception && *exception) return NULL;
// Third, try calling the querier to see if the property exists
JSValueRef attributes = PropertyHandler<Integer,InterceptorOther>
(PASS,
[](NAMED_PARAMS(Integer)) {
if (config.query != NullNamedQuery) config.query(property, info);
else if(config.getter != NullNamedGetter) info.GetReturnValue().Set(-1);
},
[](INDEXED_PARAMS(Integer)) {
if (config.query != NullIndexedQuery) config.query(index, info);
else if(config.getter != NullIndexedGetter) info.GetReturnValue().Set(-1);
}
);
if (exception && *exception) return NULL;
// attributes can be NULL (has querier, property does not exist), -1 (no querier, defer to value), PropertyAttribute (has property)
if (attributes == NULL) {
return JSValueMakeUndefined(ctx);
}
int pattr = static_cast<int>(JSValueToNumber(ctx, attributes, 0));
// Finally, check the getter to see if we should claim a value
JSValueRef value = PropertyHandler<v8::Value,InterceptorOther>
(
PASS, [](NAMED_PARAMS(v8::Value)) { config.getter(property, info); },
[](INDEXED_PARAMS(v8::Value)) { config.getter(index, info); }
);
if (exception && *exception) return NULL;
if (pattr != -1 || value != NULL) {
v8::PropertyAttribute attr = PropertyAttribute::None;
if (pattr != -1) {
attr = static_cast<v8::PropertyAttribute>(pattr);
}
JSValueRef args[] = {
JSValueMakeBoolean(ctx, !(attr & v8::PropertyAttribute::ReadOnly)),
JSValueMakeBoolean(ctx, !(attr & PropertyAttribute::DontEnum)),
value
};
if (value != NULL) {
return exec(ctx, "return { writable: _1, enumerable: _2, configurable: true, value: _3 }", 3, args);
} else {
return exec(ctx, "return { writable: _1, enumerable: _2, configurable: true }", 2, args);
}
}
// No property
return JSValueMakeUndefined(ctx);
}
v8::MaybeLocal<v8::Object> V82JSC::ObjectTemplate::NewInstance(v8::Local<v8::Context> context,
JSObjectRef root, bool isHiddenPrototype,
JSClassDefinition* definition,
void *data)
{
auto ctx = ToContextImpl(context);
IsolateImpl* iso = ToIsolateImpl(ctx);
Isolate* isolate = ToIsolate(iso);
EscapableHandleScope scope(isolate);
LocalException exception(iso);
Local<v8::ObjectTemplate> thiz = CreateLocal<v8::ObjectTemplate>(isolate, this);
TrackedObject *wrap;
if (definition) {
if (m_need_proxy) {
definition->getProperty = legacy_proxy_get;
definition->setProperty = legacy_proxy_set;
definition->hasProperty = legacy_proxy_has;
definition->deleteProperty = legacy_proxy_deleteProperty;
definition->getPropertyNames = legacy_proxy_ownKeys;
}
JSClassRef klass = JSClassCreate(definition);
root = JSObjectMake(ctx->m_ctxRef, klass, data);
JSClassRelease(klass);
}
assert(root);
wrap = V82JSC::TrackedObject::makePrivateInstance(iso, ctx->m_ctxRef, root);
// Structure:
//
// proxy -----> root . [[PrivateSymbol]] --> lifecycle_object(wrap) --> TrackedObjectImpl*
// Create lifecycle object
wrap->m_object_template.Reset(isolate, thiz);
wrap->m_num_internal_fields = m_internal_fields;
JSValueRef initarray[m_internal_fields];
for (int i=0; i<m_internal_fields; i++) {
initarray[i] = JSValueMakeUndefined(ctx->m_ctxRef);
}
wrap->m_internal_fields_array = JSObjectMakeArray(ctx->m_ctxRef, m_internal_fields, initarray, 0);
JSValueProtect(ctx->m_ctxRef, wrap->m_internal_fields_array);
wrap->m_isHiddenPrototype = isHiddenPrototype;
// Create proxy
JSObjectRef handler = 0;
if (m_need_proxy && !wrap->m_isGlobalObject) {
handler = JSObjectMake(ctx->m_ctxRef, nullptr, nullptr);
auto handler_func = [ctx, handler](const char *name, JSObjectCallAsFunctionCallback callback) -> void {
JSValueRef excp = 0;
JSStringRef sname = JSStringCreateWithUTF8CString(name);
JSObjectRef f = JSObjectMakeFunctionWithCallback(ctx->m_ctxRef, sname, callback);
JSObjectSetProperty(ctx->m_ctxRef, handler, sname, f, 0, &excp);
JSStringRelease(sname);
assert(excp==0);
};
handler_func("get", proxy_get);
handler_func("set", proxy_set);
handler_func("has", proxy_has);
handler_func("deleteProperty", proxy_deleteProperty);
handler_func("ownKeys", proxy_ownKeys);
handler_func("defineProperty", proxy_defineProperty);
handler_func("getPrototypeOf", proxy_getPrototypeOf);
handler_func("setPrototypeOf", proxy_setPrototypeOf);
handler_func("getOwnPropertyDescriptor", proxy_getOwnPropertyDescriptor);
}
MaybeLocal<Object> instance;
if (!m_constructor_template.IsEmpty()) {
instance = reinterpret_cast<Template*>(this)->
InitInstance(context, root, exception, m_constructor_template.Get(isolate));
} else {
instance = reinterpret_cast<Template*>(this)->InitInstance(context, root, exception);
}
if (instance.IsEmpty()) {
return instance;
}
if (m_need_proxy) {
JSValueRef args[] = {root, handler};
JSValueRef proxy_object = exec(ctx->m_ctxRef, "return new Proxy(_1, _2)", 2, args);
// Important! Set the security proxy before calling ValueImpl::New(). We don't want the proxy object
// to have its own wrap
wrap->m_proxy_security = proxy_object;
Local<Object> proxy = V82JSC::Value::New(ctx, proxy_object).As<Object>();
instance = proxy;
}
if (isHiddenPrototype) {
const char* proxy_code =
"const handler = {"
" set(target,prop,val,receiver) {"
" var d = Object.getOwnPropertyDescriptor(target, prop);"
" var exists = d !== undefined;"
" var r = (target[prop] = val);"
" if (!exists) {"
" _2(target, prop, receiver);"
" }"
" return r;"
" },"
" deleteProperty(target,prop) {"
" var d = Object.getOwnPropertyDescriptor(target, prop);"
" var exists = d !== undefined;"
" var r = delete target[prop];"
" if (exists && r) {"
" _3(target, prop);"
" }"
" return r;"
" },"
" defineProperty(target,prop,desc) {"
" var d = Object.getOwnPropertyDescriptor(target, prop);"
" var exists = d !== undefined;"
" try {"
" Object.defineProperty(target, prop, desc);"
" } catch (e) {"
" return false;"
" }"
" if (!exists) {"
" _2(target, prop);"
" }"
" return true;"
" }"
"};"
"return new Proxy(_1, handler);";
JSStringRef sname = JSStringCreateWithUTF8CString("propagate_set");
JSObjectRef propagate_set = JSObjectMakeFunctionWithCallback(ctx->m_ctxRef, sname, [](CALLBACK_PARAMS) -> JSValueRef {
auto wrap = V82JSC::TrackedObject::getPrivateInstance(ctx, (JSObjectRef)arguments[0]);
assert(wrap && wrap->m_hidden_proxy_security);
Isolate *isolate = ToIsolate(IsolateFromCtx(ctx));
HandleScope scope(isolate);
Local<v8::Context> context = LocalContext::New(isolate, ctx);
Local<Name> property = V82JSC::Value::New(ToContextImpl(context), arguments[1]).As<Name>();
if (JSValueIsStrictEqual(ctx, arguments[2], JSValueMakeUndefined(ctx)) ||
JSValueIsStrictEqual(ctx, arguments[2], wrap->m_hidden_proxy_security)) {
ToImpl<V82JSC::HiddenObject>(V82JSC::Value::New(ToContextImpl(context), arguments[0]))
->PropagateOwnPropertyToChildren(context, property);
}
return JSValueMakeUndefined(ctx);
});
JSStringRelease(sname);
sname = JSStringCreateWithUTF8CString("propagate_delete");
JSObjectRef propagate_delete = JSObjectMakeFunctionWithCallback(ctx->m_ctxRef, sname, [](CALLBACK_PARAMS) -> JSValueRef {
assert(0); // FIXME! We need to propagate deletes
return NULL;
});
JSStringRelease(sname);
JSValueRef args[] = {
ToJSValueRef(instance.ToLocalChecked(), context),
propagate_set,
propagate_delete
};
JSValueRef hidden_proxy_object = exec(ctx->m_ctxRef, proxy_code, 3, args);
// Same here. Set the hidden proxy reference before calling ValueImpl::New()
wrap->m_hidden_proxy_security = hidden_proxy_object;
Local<Object> hidden_proxy = V82JSC::Value::New(ctx, hidden_proxy_object).As<Object>();
instance = hidden_proxy;
}
return scope.Escape(V82JSC::TrackedObject::SecureValue(instance.ToLocalChecked()).As<Object>());
}
/**
* Sets an accessor on the object template.
*
* Whenever the property with the given name is accessed on objects
* created from this ObjectTemplate the getter and setter callbacks
* are called instead of getting and setting the property directly
* on the JavaScript object.
*
* \param name The name of the property for which an accessor is added.
* \param getter The callback to invoke when getting the property.
* \param setter The callback to invoke when setting the property.
* \param data A piece of data that will be passed to the getter and setter
* callbacks whenever they are invoked.
* \param settings Access control settings for the accessor. This is a bit
 * field consisting of one or more of
* DEFAULT = 0, ALL_CAN_READ = 1, or ALL_CAN_WRITE = 2.
* The default is to not allow cross-context access.
* ALL_CAN_READ means that all cross-context reads are allowed.
* ALL_CAN_WRITE means that all cross-context writes are allowed.
* The combination ALL_CAN_READ | ALL_CAN_WRITE can be used to allow all
* cross-context access.
* \param attribute The attributes of the property for which an accessor
* is added.
* \param signature The signature describes valid receivers for the accessor
* and is used to perform implicit instance checks against them. If the
* receiver is incompatible (i.e. is not an instance of the constructor as
* defined by FunctionTemplate::HasInstance()), an implicit TypeError is
* thrown and no callback is invoked.
*/
void v8::ObjectTemplate::SetAccessor(
Local<v8::String> name, AccessorGetterCallback getter,
AccessorSetterCallback setter, Local<v8::Value> data,
AccessControl settings, PropertyAttribute attribute,
Local<AccessorSignature> signature)
{
SetAccessor(name.As<Name>(),
reinterpret_cast<AccessorNameGetterCallback>(getter),
reinterpret_cast<AccessorNameSetterCallback>(setter),
data, settings, attribute, signature);
}
void v8::ObjectTemplate::SetAccessor(
Local<Name> name, AccessorNameGetterCallback getter,
AccessorNameSetterCallback setter, Local<Value> data,
AccessControl settings, PropertyAttribute attribute,
Local<AccessorSignature> signature)
{
auto this_ = ToImpl<V82JSC::ObjectTemplate,v8::ObjectTemplate>(this);
Isolate* isolate = ToIsolate(ToIsolateImpl(this_));
HandleScope scope(isolate);
auto accessor = static_cast<V82JSC::ObjAccessor *>
(HeapAllocator::Alloc(ToIsolateImpl(this_), ToIsolateImpl(this_)->m_object_accessor_map));
accessor->name.Reset(isolate, name);
accessor->getter = getter;
accessor->setter = setter ? setter :
[](Local<Name> property, Local<Value> value, const PropertyCallbackInfo<void>& info) {
info.GetReturnValue().Set(Undefined(info.GetIsolate()));
};
accessor->data.Reset(isolate, data);
accessor->settings = settings;
accessor->attribute = attribute;
// For now, Signature and AccessorSignature are the same
Local<Signature> sig = * reinterpret_cast<Local<Signature>*>(&signature);
accessor->signature.Reset(isolate, sig);
Local<v8::ObjAccessor> local = CreateLocal<v8::ObjAccessor>(isolate, accessor);
accessor->next_.Reset(isolate, this_->m_accessors.Get(isolate));
this_->m_accessors.Reset(isolate, local);
}
/**
* Sets a named property handler on the object template.
*
* Whenever a property whose name is a string is accessed on objects created
* from this object template, the provided callback is invoked instead of
* accessing the property directly on the JavaScript object.
*
* SetNamedPropertyHandler() is different from SetHandler(), in
* that the latter can intercept symbol-named properties as well as
* string-named properties when called with a
* NamedPropertyHandlerConfiguration. New code should use SetHandler().
*
* \param getter The callback to invoke when getting a property.
* \param setter The callback to invoke when setting a property.
* \param query The callback to invoke to check if a property is present,
* and if present, get its attributes.
* \param deleter The callback to invoke when deleting a property.
* \param enumerator The callback to invoke to enumerate all the named
* properties of an object.
* \param data A piece of data that will be passed to the callbacks
* whenever they are invoked.
*/
// TODO(dcarney): deprecate
void v8::ObjectTemplate::SetNamedPropertyHandler(NamedPropertyGetterCallback getter,
NamedPropertySetterCallback setter,
NamedPropertyQueryCallback query,
NamedPropertyDeleterCallback deleter,
NamedPropertyEnumeratorCallback enumerator,
Local<Value> data)
{
// FIXME: This is a nasty hack.
NamedPropertyHandlerConfiguration config;
config.getter = reinterpret_cast<GenericNamedPropertyGetterCallback>(getter);
config.setter = reinterpret_cast<GenericNamedPropertySetterCallback>(setter);
config.query = reinterpret_cast<GenericNamedPropertyQueryCallback>(query);
config.deleter = reinterpret_cast<GenericNamedPropertyDeleterCallback>(deleter);
config.enumerator = reinterpret_cast<GenericNamedPropertyEnumeratorCallback>(enumerator);
config.data = data;
SetHandler(config);
}
/**
* Sets a named property handler on the object template.
*
* Whenever a property whose name is a string or a symbol is accessed on
* objects created from this object template, the provided callback is
* invoked instead of accessing the property directly on the JavaScript
* object.
*
* @param configuration The NamedPropertyHandlerConfiguration that defines the
* callbacks to invoke when accessing a property.
*/
void v8::ObjectTemplate::SetHandler(const NamedPropertyHandlerConfiguration& configuration)
{
auto templ = ToImpl<V82JSC::ObjectTemplate,ObjectTemplate>(this);
HandleScope scope(ToIsolate(templ->GetIsolate()));
Local<Value> data = configuration.data;
if (configuration.getter) templ->m_named_handler.getter = configuration.getter;
if (configuration.setter) templ->m_named_handler.setter = configuration.setter;
if (configuration.descriptor) templ->m_named_handler.descriptor = configuration.descriptor;
if (configuration.deleter) templ->m_named_handler.deleter = configuration.deleter;
if (configuration.enumerator) templ->m_named_handler.enumerator = configuration.enumerator;
if (configuration.definer) templ->m_named_handler.definer = configuration.definer;
if (configuration.query) templ->m_named_handler.query = configuration.query;
templ->m_named_handler.data.Clear();
if (data.IsEmpty()) {
data = Undefined(Isolate::GetCurrent());
}
 templ->m_named_data = ToJSValueRef(data, Isolate::GetCurrent());
JSValueProtect(ToContextRef(Isolate::GetCurrent()), templ->m_named_data);
templ->m_need_proxy = true;
}
/**
* Sets an indexed property handler on the object template.
*
* Whenever an indexed property is accessed on objects created from
* this object template, the provided callback is invoked instead of
* accessing the property directly on the JavaScript object.
*
* @param configuration The IndexedPropertyHandlerConfiguration that defines
* the callbacks to invoke when accessing a property.
*/
void v8::ObjectTemplate::SetHandler(const IndexedPropertyHandlerConfiguration& configuration)
{
auto templ = ToImpl<V82JSC::ObjectTemplate,ObjectTemplate>(this);
HandleScope scope(ToIsolate(templ->GetIsolate()));
Local<Value> data = configuration.data;
if (configuration.getter) templ->m_indexed_handler.getter = configuration.getter;
if (configuration.setter) templ->m_indexed_handler.setter = configuration.setter;
if (configuration.descriptor) templ->m_indexed_handler.descriptor = configuration.descriptor;
if (configuration.deleter) templ->m_indexed_handler.deleter = configuration.deleter;
if (configuration.enumerator) templ->m_indexed_handler.enumerator = configuration.enumerator;
if (configuration.definer) templ->m_indexed_handler.definer = configuration.definer;
if (configuration.query) templ->m_indexed_handler.query = configuration.query;
templ->m_indexed_handler.data.Clear();
if (data.IsEmpty()) {
data = Undefined(Isolate::GetCurrent());
}
 templ->m_indexed_data = ToJSValueRef(data, Isolate::GetCurrent());
JSValueProtect(ToContextRef(Isolate::GetCurrent()), templ->m_indexed_data);
templ->m_need_proxy = true;
}
/**
* Sets the callback to be used when calling instances created from
* this template as a function. If no callback is set, instances
* behave like normal JavaScript objects that cannot be called as a
* function.
*/
void v8::ObjectTemplate::SetCallAsFunctionHandler(FunctionCallback callback,
Local<Value> data)
{
Isolate* isolate = ToIsolate(this);
HandleScope scope(isolate);
Local<Context> context = ToCurrentContext(this);
auto templ = ToImpl<V82JSC::ObjectTemplate,ObjectTemplate>(this);
templ->m_callback = callback;
if (data.IsEmpty()) {
data = Undefined(isolate);
}
templ->m_data = ToJSValueRef<Value>(data, isolate);
JSValueProtect(ToContextRef(context), templ->m_data);
}
/**
* Mark object instances of the template as undetectable.
*
* In many ways, undetectable objects behave as though they are not
* there. They behave like 'undefined' in conditionals and when
* printed. However, properties can be accessed and called as on
* normal objects.
*/
void v8::ObjectTemplate::MarkAsUndetectable()
{
printf("V82JSC: Undetectable objects not supported in JSC\n");
}
/**
* Sets access check callback on the object template and enables access
* checks.
*
* When accessing properties on instances of this object template,
* the access check callback will be called to determine whether or
* not to allow cross-context access to the properties.
*/
void v8::ObjectTemplate::SetAccessCheckCallback(AccessCheckCallback callback,
Local<Value> data)
{
auto templ = ToImpl<V82JSC::ObjectTemplate,ObjectTemplate>(this);
Isolate *isolate = ToIsolate(ToIsolateImpl(templ));
HandleScope scope(isolate);
Local<Context> context = OperatingContext(isolate);
JSContextRef ctx = ToContextRef(context);
templ->m_access_check = callback;
if (data.IsEmpty()) {
data = Undefined(isolate);
}
templ->m_access_check_data = ToJSValueRef(data, context);
JSValueProtect(ctx, templ->m_access_check_data);
}
/**
* Like SetAccessCheckCallback but invokes an interceptor on failed access
* checks instead of looking up all-can-read properties. You can only use
* either this method or SetAccessCheckCallback, but not both at the same
* time.
*/
void v8::ObjectTemplate::SetAccessCheckCallbackAndHandler(
AccessCheckCallback callback,
const NamedPropertyHandlerConfiguration& named_handler,
const IndexedPropertyHandlerConfiguration& indexed_handler,
Local<Value> data)
{
SetAccessCheckCallback(callback, data);
auto templ = ToImpl<V82JSC::ObjectTemplate,ObjectTemplate>(this);
HandleScope scope(ToIsolate(templ->GetIsolate()));
Local<Value> named_data = named_handler.data;
if (named_handler.getter) templ->m_named_failed_access_handler.getter = named_handler.getter;
if (named_handler.setter) templ->m_named_failed_access_handler.setter = named_handler.setter;
if (named_handler.descriptor) templ->m_named_failed_access_handler.descriptor = named_handler.descriptor;
if (named_handler.deleter) templ->m_named_failed_access_handler.deleter = named_handler.deleter;
if (named_handler.enumerator) templ->m_named_failed_access_handler.enumerator = named_handler.enumerator;
if (named_handler.definer) templ->m_named_failed_access_handler.definer = named_handler.definer;
if (named_handler.query) templ->m_named_failed_access_handler.query = named_handler.query;
templ->m_named_failed_access_handler.data.Clear();
if (named_data.IsEmpty()) {
named_data = Undefined(Isolate::GetCurrent());
}
templ->m_failed_named_data = ToJSValueRef(named_data, Isolate::GetCurrent());
JSValueProtect(ToContextRef(Isolate::GetCurrent()), templ->m_failed_named_data);
Local<Value> indexed_data = indexed_handler.data;
if (indexed_handler.getter) templ->m_indexed_failed_access_handler.getter = indexed_handler.getter;
if (indexed_handler.setter) templ->m_indexed_failed_access_handler.setter = indexed_handler.setter;
if (indexed_handler.descriptor) templ->m_indexed_failed_access_handler.descriptor = indexed_handler.descriptor;
if (indexed_handler.deleter) templ->m_indexed_failed_access_handler.deleter = indexed_handler.deleter;
if (indexed_handler.enumerator) templ->m_indexed_failed_access_handler.enumerator = indexed_handler.enumerator;
if (indexed_handler.definer) templ->m_indexed_failed_access_handler.definer = indexed_handler.definer;
if (indexed_handler.query) templ->m_indexed_failed_access_handler.query = indexed_handler.query;
templ->m_indexed_failed_access_handler.data.Clear();
if (indexed_data.IsEmpty()) {
indexed_data = Undefined(Isolate::GetCurrent());
}
 templ->m_failed_indexed_data = ToJSValueRef(indexed_data, Isolate::GetCurrent());
JSValueProtect(ToContextRef(Isolate::GetCurrent()), templ->m_failed_indexed_data);
}
/**
* Gets the number of internal fields for objects generated from
* this template.
*/
int v8::ObjectTemplate::InternalFieldCount()
{
auto templ = ToImpl<V82JSC::ObjectTemplate,ObjectTemplate>(this);
return templ->m_internal_fields;
}
/**
* Sets the number of internal fields for objects generated from
* this template.
*/
void v8::ObjectTemplate::SetInternalFieldCount(int value)
{
auto templ = ToImpl<V82JSC::ObjectTemplate,ObjectTemplate>(this);
templ->m_internal_fields = value > 0 ? value : 0;
}
/**
* Returns true if the object will be an immutable prototype exotic object.
*/
bool v8::ObjectTemplate::IsImmutableProto()
{
auto templ = ToImpl<V82JSC::ObjectTemplate,ObjectTemplate>(this);
return templ->m_is_immutable_proto;
}
/**
 * Makes the ObjectTemplate for an immutable prototype exotic object, with an
* immutable __proto__.
*/
void v8::ObjectTemplate::SetImmutableProto()
{
auto templ = ToImpl<V82JSC::ObjectTemplate,ObjectTemplate>(this);
templ->m_is_immutable_proto = true;
}
```
|
```php
<?php
/*
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not
 * use this file except in compliance with the License. You may obtain a copy of
 * the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations under
 * the License.
 */
namespace Google\Service\CloudHealthcare;
class ListFhirStoresResponse extends \Google\Collection
{
protected $collection_key = 'fhirStores';
protected $fhirStoresType = FhirStore::class;
protected $fhirStoresDataType = 'array';
/**
* @var string
*/
public $nextPageToken;
/**
* @param FhirStore[]
*/
public function setFhirStores($fhirStores)
{
$this->fhirStores = $fhirStores;
}
/**
* @return FhirStore[]
*/
public function getFhirStores()
{
return $this->fhirStores;
}
/**
* @param string
*/
public function setNextPageToken($nextPageToken)
{
$this->nextPageToken = $nextPageToken;
}
/**
* @return string
*/
public function getNextPageToken()
{
return $this->nextPageToken;
}
}
// Adding a class alias for backwards compatibility with the previous class name.
class_alias(ListFhirStoresResponse::class, 'Google_Service_CloudHealthcare_ListFhirStoresResponse');
```
|
Konung Gustaf V:s Pokal (literally: King Gustaf V's Trophy) or, in shorter form, Kungapokalen (literally: The King's Trophy), is an annual international Group One harness event for trotters. It is held at Åby Racetrack in Mölndal, 10 km south of Gothenburg, Sweden. It is a stakes race for 4-year-old stallions and geldings. The purse in the 2009 final was ≈US$273,000 (€200,000), of which the winner Knockout Rose won half.
Origin
In 1941, Åby inaugurated Jubileumslöpningen to celebrate the track's fifth anniversary. Jubileumslöpningen was raced until 1948. Swedish king Gustaf V then donated a trophy to Åby, one produced according to the king's own wishes. The event was renamed, and since 1949 the King's Trophy has been an annually recurring event at Åby Racetrack.
Racing conditions
From the start until 1976, the distance of Kungapokalen was 2,200-2,300 meters (1.37-1.43 miles). Between 1977 and 1982, the distance was either 2,140 or 2,160 meters (1.33-1.34 miles). Since 1983, a year of several changes, the distance has been exclusively 2,140 meters. The same year, the starting method was changed from volt start to auto start.
In 1983, Kungapokalen became for the first time a stakes race solely for Swedish-bred, four-year-old trotters. The same year, Drottning Silvias Pokal, a major event open only to Swedish four-year-old mares, was inaugurated at Åby as well. In 2002, the conditions were changed and Kungapokalen was opened to foreign four-year-olds as well.
The final of the event is preceded by a number of elimination races, taking place approximately ten days before the final. Since 2000, the number of elimination heats has been either three or four per year.
The 2009 Konung Gustaf V:s Pokal
The three elimination heats of the 2009 event took place on April 30, 14 days before the final. The winners of these heats, Yewish Boko, Lavec Kronos and Reven d'Amour, together with Marshland, were considered the favourites in the final on May 14, especially since star trotter Maharajah was withdrawn due to illness.
The starting list
Lavec Kronos - Johnny Takter (Lutfi Kolgjini)
Reven d'Amour - Fredrik B. Larsson (Henrik Larsson)
Yewish Boko - Åke Svanstedt (Timo Nurmos)
Marshland - Örjan Kihlström (Stefan Hultman)
Knockout Rose - Erik Adielsson (Stig H. Johansson)
Maharajah - Did not start
Lou Kronos - Lutfi Kolgjini
Insect Face - Robert Bergh (Marcus Lindgren)
Rakas - Per Lennartsson
Wiranas Dream - Thomas Uhrberg (Anna Forssell)
Revenue J:r - Jörgen Sjunnesson (Lutfi Kolgjini)
Noras Bean - Stefan Söderkvist (Ulf Stenströmer)
(Trainer, if other than driver, in parentheses)
The race
Lavec Kronos took the lead. Lennartsson placed Rakas behind the leader, while Knockout Rose raced third on the rail. Favourite Reven d'Amour raced outside leader Lavec Kronos, while the second and third favourites, Marshland and Yewish Boko, spent their time further down the field. Yewish Boko attempted to attack the two up front but broke stride before the final stretch. Knockout Rose was released by Adielsson down the stretch and won by a length ahead of outsider Noras Bean. Rakas came third.
The Stig H. Johansson-trained stallion Knockout Rose, sired by Express Ride, won in 1:57.3f (mile rate)/1:13.1 (km rate).
Past winners
Drivers with most wins
6 - Sören Nordin
5 - Stig H. Johansson
4 - Gösta Nordin
3 - Tommy Hanné
3 - Lars Lindberg
3 - Berndt Lindstedt
3 - Gunnar Nordin
2 - Olle Goop
2 - Olof Persson
2 - Ragnar Thorngren
Trainers with most wins
7 - Sören Nordin
6 - Stig H. Johansson
4 - Gösta Nordin
3 - Tommy Hanné
3 - Gunnar Nordin
3 - Håkan Wallner
2 - Lars Lindberg
2 - Stefan Melander
2 - Olof Persson
2 - Ragnar Thorngren
Sires with at least two winning offspring
4 - Bulwark (Hetty, Justus, Bulwarkson, Delta)
4 - Sir Walter Scott (Holly Scott, Fänrik Scott, Magnifik, Roland)
3 - Dartmouth (Dartster F., Rex Håleryd, Dior Broline)
3 - Tibur (Mustard, Rebur, Ata Star L.)
2 - Clean Sweep (Junker, Moneymaker)
2 - Earl's Mr Will (Indian Will, Duke Abbey)
2 - Fibber (Clementz, Julius Fibber)
2 - Lindy's Crown (Atlantic F.C., St Göran)
2 - Super Arnie (Gigant Neo, Dust All Over)
Mares with at least two winning offspring
2 - Grand Duchess (Justus, Delta)
2 - Gullan Fafner (Magnifik, Roland)
Winning stallions that have also sired winners
Adept (1957), sire of Najo (1971)
Baron Karsk (1993), sire of Equalizer (2001)
Big Noon (1941), sire of Casanova (1954)
Justus (1946), sire of Jussi (1960)
Winner with lowest odds
Winning odds: 1.28 - Quiggin (1984)
Winner with highest odds
Winning odds: 98.62 - Najo (1971)
Fastest winners
Auto start
1:12.7 (km rate) - Gigant Neo (2002)
Volt start
1:17.0 (km rate) - Dartster F. (1980)
All winners of Konung Gustaf V:s Pokal
See also
List of Scandinavian harness horse races
References
Harness races in Sweden
|
```go
package aws_helper
import (
"fmt"
"os"
"strconv"
"time"
"github.com/aws/aws-sdk-go/aws/request"
"github.com/gruntwork-io/go-commons/version"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/arn"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
"github.com/aws/aws-sdk-go/aws/endpoints"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/sts"
"github.com/gruntwork-io/go-commons/errors"
"github.com/gruntwork-io/terragrunt/options"
)
// A representation of the configuration options for an AWS Session
type AwsSessionConfig struct {
Region string
CustomS3Endpoint string
CustomDynamoDBEndpoint string
Profile string
RoleArn string
CredsFilename string
S3ForcePathStyle bool
DisableComputeChecksums bool
ExternalID string
SessionName string
}
// addUserAgent - Add terragrunt version to the user agent for AWS API calls.
var addUserAgent = request.NamedHandler{
Name: "terragrunt.UserAgentHandler",
Fn: request.MakeAddToUserAgentHandler(
"terragrunt", version.GetVersion()),
}
// Returns an AWS session object for the given config region (required), profile name (optional), and IAM role to assume
// (optional), ensuring that the credentials are available.
func CreateAwsSessionFromConfig(config *AwsSessionConfig, terragruntOptions *options.TerragruntOptions) (*session.Session, error) {
defaultResolver := endpoints.DefaultResolver()
s3CustResolverFn := func(service, region string, optFns ...func(*endpoints.Options)) (endpoints.ResolvedEndpoint, error) {
if service == "s3" && config.CustomS3Endpoint != "" {
return endpoints.ResolvedEndpoint{
URL: config.CustomS3Endpoint,
SigningRegion: config.Region,
}, nil
} else if service == "dynamodb" && config.CustomDynamoDBEndpoint != "" {
return endpoints.ResolvedEndpoint{
URL: config.CustomDynamoDBEndpoint,
SigningRegion: config.Region,
}, nil
}
return defaultResolver.EndpointFor(service, region, optFns...)
}
var awsConfig = aws.Config{
Region: aws.String(config.Region),
EndpointResolver: endpoints.ResolverFunc(s3CustResolverFn),
S3ForcePathStyle: aws.Bool(config.S3ForcePathStyle),
DisableComputeChecksums: aws.Bool(config.DisableComputeChecksums),
}
var sessionOptions = session.Options{
Config: awsConfig,
Profile: config.Profile,
SharedConfigState: session.SharedConfigEnable,
}
if len(config.CredsFilename) > 0 {
sessionOptions.SharedConfigFiles = []string{config.CredsFilename}
}
sess, err := session.NewSessionWithOptions(sessionOptions)
if err != nil {
return nil, errors.WithStackTraceAndPrefix(err, "Error initializing session")
}
sess.Handlers.Build.PushFrontNamed(addUserAgent)
// Merge the config based IAMRole options into the original one, as the config has higher precedence than CLI.
iamRoleOptions := terragruntOptions.IAMRoleOptions
if config.RoleArn != "" {
iamRoleOptions = options.MergeIAMRoleOptions(
iamRoleOptions,
options.IAMRoleOptions{
RoleARN: config.RoleArn,
AssumeRoleSessionName: config.SessionName,
},
)
}
if iamRoleOptions.WebIdentityToken != "" && iamRoleOptions.RoleARN != "" {
sess.Config.Credentials = getWebIdentityCredentialsFromIAMRoleOptions(sess, iamRoleOptions)
return sess, nil
}
credentialOptFn := func(p *stscreds.AssumeRoleProvider) {
if config.ExternalID != "" {
p.ExternalID = aws.String(config.ExternalID)
}
}
if iamRoleOptions.RoleARN != "" {
sess.Config.Credentials = getSTSCredentialsFromIAMRoleOptions(sess, iamRoleOptions, credentialOptFn)
} else if creds := getCredentialsFromEnvs(terragruntOptions); creds != nil {
sess.Config.Credentials = creds
}
return sess, nil
}
type tokenFetcher string
// FetchToken implements the stscreds.TokenFetcher interface.
// Supports providing a token value or the path to a token on disk
func (f tokenFetcher) FetchToken(ctx credentials.Context) ([]byte, error) {
// Check if token is a raw value
if _, err := os.Stat(string(f)); err != nil {
// TODO: See if this lint error should be ignored
return []byte(f), nil //nolint: nilerr
}
token, err := os.ReadFile(string(f))
if err != nil {
return nil, errors.WithStackTrace(err)
}
return token, nil
}
func getWebIdentityCredentialsFromIAMRoleOptions(sess *session.Session, iamRoleOptions options.IAMRoleOptions) *credentials.Credentials {
roleSessionName := iamRoleOptions.AssumeRoleSessionName
if roleSessionName == "" {
// Set a unique session name in the same way it is done in the SDK
roleSessionName = strconv.FormatInt(time.Now().UTC().UnixNano(), 10)
}
svc := sts.New(sess)
p := stscreds.NewWebIdentityRoleProviderWithOptions(svc, iamRoleOptions.RoleARN, roleSessionName, tokenFetcher(iamRoleOptions.WebIdentityToken))
if iamRoleOptions.AssumeRoleDuration > 0 {
p.Duration = time.Second * time.Duration(iamRoleOptions.AssumeRoleDuration)
} else {
p.Duration = time.Second * time.Duration(options.DefaultIAMAssumeRoleDuration)
}
return credentials.NewCredentials(p)
}
func getSTSCredentialsFromIAMRoleOptions(sess *session.Session, iamRoleOptions options.IAMRoleOptions, optFns ...func(*stscreds.AssumeRoleProvider)) *credentials.Credentials {
optFns = append(optFns, func(p *stscreds.AssumeRoleProvider) {
if iamRoleOptions.AssumeRoleDuration > 0 {
p.Duration = time.Second * time.Duration(iamRoleOptions.AssumeRoleDuration)
} else {
p.Duration = time.Second * time.Duration(options.DefaultIAMAssumeRoleDuration)
}
if iamRoleOptions.AssumeRoleSessionName != "" {
p.RoleSessionName = iamRoleOptions.AssumeRoleSessionName
}
})
return stscreds.NewCredentials(sess, iamRoleOptions.RoleARN, optFns...)
}
func getCredentialsFromEnvs(opts *options.TerragruntOptions) *credentials.Credentials {
var (
accessKeyID = opts.Env["AWS_ACCESS_KEY_ID"]
secretAccessKey = opts.Env["AWS_SECRET_ACCESS_KEY"]
sessionToken = opts.Env["AWS_SESSION_TOKEN"]
)
if accessKeyID == "" || secretAccessKey == "" {
return nil
}
return credentials.NewStaticCredentials(accessKeyID, secretAccessKey, sessionToken)
}
// Returns an AWS session object. The session is configured by either:
// - The provided AwsSessionConfig struct, which specifies region (required), profile name (optional), and IAM role to
// assume (optional).
// - The provided TerragruntOptions struct, which specifies any IAM role to assume (optional).
//
// Note that if the AwsSessionConfig object is null, this will return default session credentials using the default
// credentials chain of the AWS SDK.
func CreateAwsSession(config *AwsSessionConfig, terragruntOptions *options.TerragruntOptions) (*session.Session, error) {
var (
sess *session.Session
err error
)
if config == nil {
sessionOptions := session.Options{SharedConfigState: session.SharedConfigEnable}
sess, err = session.NewSessionWithOptions(sessionOptions)
if err != nil {
return nil, errors.WithStackTrace(err)
}
sess.Handlers.Build.PushFrontNamed(addUserAgent)
if terragruntOptions.IAMRoleOptions.RoleARN != "" {
if terragruntOptions.IAMRoleOptions.WebIdentityToken != "" {
terragruntOptions.Logger.Debugf("Assuming role %s using WebIdentity token", terragruntOptions.IAMRoleOptions.RoleARN)
sess.Config.Credentials = getWebIdentityCredentialsFromIAMRoleOptions(sess, terragruntOptions.IAMRoleOptions)
} else {
terragruntOptions.Logger.Debugf("Assuming role %s", terragruntOptions.IAMRoleOptions.RoleARN)
sess.Config.Credentials = getSTSCredentialsFromIAMRoleOptions(sess, terragruntOptions.IAMRoleOptions)
}
} else if creds := getCredentialsFromEnvs(terragruntOptions); creds != nil {
sess.Config.Credentials = creds
}
} else {
sess, err = CreateAwsSessionFromConfig(config, terragruntOptions)
if err != nil {
return nil, errors.WithStackTrace(err)
}
}
if _, err = sess.Config.Credentials.Get(); err != nil {
msg := "Error finding AWS credentials (did you set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables?)"
if config != nil && len(config.CredsFilename) > 0 {
msg = fmt.Sprintf("Error finding AWS credentials in file '%s' (did you set the correct file name and/or profile?)", config.CredsFilename)
}
return nil, errors.WithStackTraceAndPrefix(err, msg)
}
return sess, nil
}
// Make API calls to AWS to assume the IAM role specified and return the temporary AWS credentials to use that role
func AssumeIamRole(iamRoleOpts options.IAMRoleOptions) (*sts.Credentials, error) {
sessionOptions := session.Options{SharedConfigState: session.SharedConfigEnable}
sess, err := session.NewSessionWithOptions(sessionOptions)
if err != nil {
return nil, errors.WithStackTrace(err)
}
sess.Handlers.Build.PushFrontNamed(addUserAgent)
if iamRoleOpts.RoleARN != "" && iamRoleOpts.WebIdentityToken != "" {
sess.Config.Credentials = getWebIdentityCredentialsFromIAMRoleOptions(sess, iamRoleOpts)
}
_, err = sess.Config.Credentials.Get()
if err != nil {
return nil, errors.WithStackTraceAndPrefix(err, "Error finding AWS credentials (did you set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables?)")
}
stsClient := sts.New(sess)
sessionName := options.GetDefaultIAMAssumeRoleSessionName()
if iamRoleOpts.AssumeRoleSessionName != "" {
sessionName = iamRoleOpts.AssumeRoleSessionName
}
sessionDurationSeconds := int64(options.DefaultIAMAssumeRoleDuration)
if iamRoleOpts.AssumeRoleDuration != 0 {
sessionDurationSeconds = iamRoleOpts.AssumeRoleDuration
}
if iamRoleOpts.WebIdentityToken == "" {
// Use regular sts AssumeRole
input := sts.AssumeRoleInput{
RoleArn: aws.String(iamRoleOpts.RoleARN),
RoleSessionName: aws.String(sessionName),
DurationSeconds: aws.Int64(sessionDurationSeconds),
}
output, err := stsClient.AssumeRole(&input)
if err != nil {
return nil, errors.WithStackTrace(err)
}
return output.Credentials, nil
}
// Use sts AssumeRoleWithWebIdentity
var token string
// Check if value is a raw token or a path to a file with a token
if _, err := os.Stat(iamRoleOpts.WebIdentityToken); err != nil {
token = iamRoleOpts.WebIdentityToken
} else {
tb, err := os.ReadFile(iamRoleOpts.WebIdentityToken)
if err != nil {
return nil, errors.WithStackTrace(err)
}
token = string(tb)
}
input := sts.AssumeRoleWithWebIdentityInput{
RoleArn: aws.String(iamRoleOpts.RoleARN),
RoleSessionName: aws.String(sessionName),
WebIdentityToken: aws.String(token),
DurationSeconds: aws.Int64(sessionDurationSeconds),
}
req, resp := stsClient.AssumeRoleWithWebIdentityRequest(&input)
// InvalidIdentityToken error is a temporary error that can occur
// when assuming a Role with a JWT web identity token.
// N.B: copied from SDK implementation
req.RetryErrorCodes = append(req.RetryErrorCodes, sts.ErrCodeInvalidIdentityTokenException)
if err := req.Send(); err != nil {
return nil, errors.WithStackTrace(err)
}
return resp.Credentials, nil
}
// Return the AWS caller identity associated with the current set of credentials
func GetAWSCallerIdentity(config *AwsSessionConfig, terragruntOptions *options.TerragruntOptions) (sts.GetCallerIdentityOutput, error) {
sess, err := CreateAwsSession(config, terragruntOptions)
if err != nil {
return sts.GetCallerIdentityOutput{}, errors.WithStackTrace(err)
}
identity, err := sts.New(sess).GetCallerIdentity(nil)
if err != nil {
return sts.GetCallerIdentityOutput{}, errors.WithStackTrace(err)
}
return *identity, nil
}
// ValidateAwsSession - Validate if current AWS session is valid
func ValidateAwsSession(config *AwsSessionConfig, terragruntOptions *options.TerragruntOptions) error {
// read the caller identity to check if the credentials are valid
_, err := GetAWSCallerIdentity(config, terragruntOptions)
return err
}
// Get the AWS Partition of the current session configuration
func GetAWSPartition(config *AwsSessionConfig, terragruntOptions *options.TerragruntOptions) (string, error) {
identity, err := GetAWSCallerIdentity(config, terragruntOptions)
if err != nil {
return "", errors.WithStackTrace(err)
}
arn, err := arn.Parse(*identity.Arn)
if err != nil {
return "", errors.WithStackTrace(err)
}
return arn.Partition, nil
}
// Get the AWS account ID of the current session configuration
func GetAWSAccountID(config *AwsSessionConfig, terragruntOptions *options.TerragruntOptions) (string, error) {
identity, err := GetAWSCallerIdentity(config, terragruntOptions)
if err != nil {
return "", errors.WithStackTrace(err)
}
return *identity.Account, nil
}
// Get the ARN of the AWS identity associated with the current set of credentials
func GetAWSIdentityArn(config *AwsSessionConfig, terragruntOptions *options.TerragruntOptions) (string, error) {
identity, err := GetAWSCallerIdentity(config, terragruntOptions)
if err != nil {
return "", errors.WithStackTrace(err)
}
return *identity.Arn, nil
}
// Get the AWS user ID of the current session configuration
func GetAWSUserID(config *AwsSessionConfig, terragruntOptions *options.TerragruntOptions) (string, error) {
identity, err := GetAWSCallerIdentity(config, terragruntOptions)
if err != nil {
return "", errors.WithStackTrace(err)
}
return *identity.UserId, nil
}
```
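The `req.RetryErrorCodes` line above whitelists `InvalidIdentityTokenException` as retryable before the request is sent. The general pattern — retry only on a known-temporary error code, fail fast on everything else — can be sketched standalone (a Python illustration of the idea, not the AWS SDK's actual retry machinery):

```python
import time

class ApiError(Exception):
    """Error carrying a service error code, standing in for an SDK exception."""
    def __init__(self, code):
        super().__init__(code)
        self.code = code

def send_with_retries(request, retryable_codes, max_attempts=3, delay=0.0):
    """Retry `request` only when it fails with a whitelisted temporary error code."""
    for attempt in range(1, max_attempts + 1):
        try:
            return request()
        except ApiError as err:
            # Non-retryable code, or retries exhausted: propagate the error.
            if err.code not in retryable_codes or attempt == max_attempts:
                raise
            time.sleep(delay)  # back off before the next attempt

# Simulate a token endpoint that fails once with a temporary error, then succeeds.
calls = {"n": 0}
def flaky_assume_role():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ApiError("InvalidIdentityToken")
    return {"Credentials": "ok"}

resp = send_with_retries(flaky_assume_role, {"InvalidIdentityToken"})
```

Errors outside the whitelist are raised immediately, mirroring how the SDK only re-sends for codes registered in `RetryErrorCodes`.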
|
Vera Anatolyevna Pavlova (born 1963) is a Russian poet.
Biography
Vera Pavlova was born in Moscow in 1963. She studied at the Oktyabrskaya Revolyutsiya Music College and only started publishing after graduation. She graduated from the Gnessin Academy, specializing in the history of music.
She is the author of twenty collections of poetry, four opera libretti, and lyrics to two cantatas. Her works have been translated into twenty-five languages. Her work has been published in The New Yorker.
References
External links
Bibliography of poetry in English translation
Interview in Modern Poetry in Translation
Documentary by Red Palette Pictures
1963 births
Living people
Russian women poets
Writers from Moscow
20th-century Russian women writers
20th-century Russian poets
21st-century Russian women writers
21st-century Russian poets
Russian opera librettists
Women opera librettists
|
Benjamin Joseph Scotti (born June 9, 1937) is a former American football defensive back in the National Football League. A graduate of the University of Maryland (1959), Scotti played for the Washington Redskins (1959–1961), the Philadelphia Eagles (1962–1963), and the San Francisco 49ers (1964). In late November 1963, Scotti received brief national attention when he precipitated a fight with teammate John Mellekas that sent Mellekas to the hospital. He is the brother of media mogul Tony Scotti, with whom he co-produced a few television programs, most notably the lifeguard drama Baywatch, and also co-founded the Scotti Bros. record label which released music by artists such as Leif Garrett, Survivor and "Weird Al" Yankovic until the label was dissolved in the mid-1990s. He went on to form Banders.
External links
1937 births
Living people
Players of American football from Newark, New Jersey
American football defensive backs
Maryland Terrapins football players
Washington Redskins players
Philadelphia Eagles players
San Francisco 49ers players
|
The Greater London League ran for seven seasons between 1964 and 1971.
1964–65
A Section
The A Section was composed of:
Six clubs from the London League Premier Division (Barkingside, Epping Town, Hatfield Town, Hermes, London Transport and West Thurrock Athletic)
Three clubs from the Aetolian League (East Ham United, Eton Manor and Ford United)
Two clubs from the London League Division One (CAV Athletic and Canvey Island)
One club from the Essex and Suffolk Border League (Crittall Athletic)
Chingford
B Section
The B Section was composed of:
Seven clubs from the Aetolian League (Beckenham Town, Cray Wanderers, Crockenhill, Faversham Town, Sheppey United, Snowdown Colliery Welfare and Whitstable Town)
Four clubs from the London League Premier Division (ROFSA, Slade Green Athletic, Ulysses and Woolwich Polytechnic)
Tunbridge Wells Rangers
1965–66
Premier Division
Division One
Four new clubs joined Division One for the 1965–66 season:
Bexley
Highfield
Penhill Standard
Swanley
1966–67
Premier Division
Four new clubs joined the Premier Division for the 1966–67 season:
Barkingside (promoted from Division One)
Highfield (promoted from Division One)
Willesden (from the Spartan League)
Deal Town (from the Southern League)
Division One
Division One featured three new clubs for the 1966–67 season:
Beckenham Town (relegated from the Premier Division)
Battersea United
RAS & RA
1967–68
The Premier Division was renamed Division One, and Division One renamed Division Two prior to the start of the 1967–68 season.
Division One
Division One featured three new clubs for the 1967–68 season:
Battersea United (promoted from old Division One)
Swanley (promoted from old Division One)
Woodford Town (from the Metropolitan League)
Division Two
Division Two featured three new clubs for the 1967–68 season:
Barkingside (relegated from the Premier Division)
Willesden (relegated from the Premier Division)
Brentsonians
1968–69
Division One
Division One featured two new clubs for the 1968–69 season:
Willesden (promoted from Division Two)
Chingford (promoted from Division Two)
Division Two
Division Two featured three new clubs for the 1968–69 season:
Woolwich Polytechnic (relegated from Division One)
Northern Polytechnic
Heathside Sports
1969–70
The league featured one new club for the 1969–70 season:
Merton United
1970–71
Two new clubs joined the league for the 1970–71 season:
BROB Barnet
Vokins
A Section
B Section
References
|
```typescript
import * as React from 'react';
import { SuggestionsStore } from './Suggestions/SuggestionsStore';
import type { ISuggestionModel, ISuggestionItemProps } from '../../Pickers';
import type { ISuggestionsControlProps } from './Suggestions/Suggestions.types';
import type { IRefObject } from '../../Utilities';
import type { ICalloutProps } from '../Callout/Callout.types';
export interface IBaseFloatingPicker {
/** Whether the suggestions are shown */
isSuggestionsShown: boolean;
/** On queryString changed */
onQueryStringChanged: (input: string) => void;
/** Hides the picker */
hidePicker: () => void;
/** Shows the picker
* @param updateValue - Optional param to indicate whether to update the query string
*/
showPicker: (updateValue?: boolean) => void;
/** Gets the suggestions */
suggestions: any[];
/** Gets the input text */
inputText: string;
}
// Type T is the type of the item that is displayed
// and searched for by the people picker. For example, if the picker is
// displaying personas, then type T could be either Persona or IPersona props.
export interface IBaseFloatingPickerProps<T> extends React.ClassAttributes<any> {
componentRef?: IRefObject<IBaseFloatingPicker>;
/**
* The suggestions store
*/
suggestionsStore: SuggestionsStore<T>;
/**
* The suggestions to show on zero query, return null if using as a controlled component
*/
onZeroQuerySuggestion?: (selectedItems?: T[]) => T[] | PromiseLike<T[]> | null;
/**
* The input element to listen on events
*/
inputElement?: HTMLInputElement | null;
/**
* Function that specifies how an individual suggestion item will appear.
*/
onRenderSuggestionsItem?: (props: T, itemProps: ISuggestionItemProps<T>) => JSX.Element;
/**
* A callback for what should happen when a person types text into the input.
* Returns the already selected items so the resolver can filter them out.
* If used in conjunction with resolveDelay this will only kick off after the delay throttle.
* Return null if using as a controlled component
*/
onResolveSuggestions: (filter: string, selectedItems?: T[]) => T[] | PromiseLike<T[]> | null;
/**
* A callback for when the input has been changed
*/
onInputChanged?: (filter: string) => void;
/**
* The delay time in ms before resolving suggestions, which is kicked off when input has been changed.
* e.g. If a second input change happens within the resolveDelay time, the timer will start over.
* Only until after the timer completes will onResolveSuggestions be called.
*/
resolveDelay?: number;
/**
* A callback for when a suggestion is clicked
*/
onChange?: (item: T) => void;
/**
* ClassName for the picker.
*/
className?: string;
/**
* The properties that will get passed to the Suggestions component.
*/
pickerSuggestionsProps?: IBaseFloatingPickerSuggestionProps;
/**
* The properties that will get passed to the Callout component.
*/
pickerCalloutProps?: ICalloutProps;
/**
* A callback for when an item is removed from the suggestion list
*/
onRemoveSuggestion?: (item: T) => void;
/**
* A function used to validate if raw text entered into the well can be added
*/
onValidateInput?: (input: string) => boolean;
/**
* The text to display while searching for more results in a limited suggestions list
*/
searchingText?: ((props: { input: string }) => string) | string;
/**
* Function that specifies how arbitrary text entered into the well is handled.
*/
createGenericItem?: (input: string, isValid: boolean) => ISuggestionModel<T>;
/**
* The callback that should be called to see if the force resolve command should be shown
*/
showForceResolve?: () => boolean;
/**
* The items that the base picker should currently display as selected.
* If this is provided then the picker will act as a controlled component.
*/
selectedItems?: T[];
/**
* A callback to get text from an item. Used to autofill text in the pickers.
*/
getTextFromItem?: (item: T, currentValue?: string) => string;
/**
* Width for the suggestions callout
*/
calloutWidth?: number;
/**
* The callback that should be called when the suggestions are shown
*/
onSuggestionsShown?: () => void;
/**
* The callback that should be called when the suggestions are hidden
*/
onSuggestionsHidden?: () => void;
/**
* If using as a controlled component, the items to show in the suggestion list
*/
suggestionItems?: T[];
}
/**
* Props which are passed on to the inner Suggestions component
*/
export type IBaseFloatingPickerSuggestionProps = Pick<
ISuggestionsControlProps<any>,
'shouldSelectFirstItem' | 'headerItemsProps' | 'footerItemsProps' | 'showRemoveButtons'
>;
```
|
Past Continuous is a 1977 novel originally written in Hebrew by Israeli novelist Yaakov Shabtai. The original title, Zikhron Devarim () is a form of contract or letter of agreement or memorandum, but could also be translated literally as Remembrance of Things.
Past Continuous is Shabtai's first, and only completed, novel. It was written as one continuous 280-page paragraph (broken up in the English translation), with some sentences spanning several pages.
Plot summary
The novel focuses on three friends, Goldman, Caesar, and Israel, in 1970s Tel Aviv, as well as their acquaintances, love interests, and relatives. The story begins with the death of Goldman's father on April 1 and ends a little after Goldman's suicide on January 1. The past is woven into this short "present" period through a complex stream of associations.
The three men, lurching between guilt and depression, lose themselves in sexual adventures and amateur philosophy, or compare their lives unfavorably to those of their sometimes heroic, sometimes pitiful elders. The older characters can always hold firm to something or other, whether socialism and hatred of religious Jews, insights gained in Siberia, or refusal to admit that Israel is not Poland. The younger characters seethe instead in doubt and sweat.
Major themes
Past vs. present
In Past Continuous Shabtai expresses the personal loss felt by the main characters, which is echoed by the changing city of Tel Aviv, and infiltrates every narrative perspective:
From one day to the next, over the space of a few years, the city was rapidly and relentlessly changing its face…and Goldman, who was attached to these streets and houses because they, together with the sand dunes and virgin fields, were the landscape in which he had been born and grown up, knew that this process of destruction was inevitable, and perhaps even necessary, as inevitable as the change in the population of the town, which in the course of a few years had been filled with tens of thousands of new people, who in Goldman’s eyes were invading outsiders who had turned him into a stranger in his own city, but this awareness was powerless to soften the hatred he felt for the new people or the helpless rage which engulfed him at the sight of the destructive plague changing his childhood world and breaking it up…
This uncontrollable remembrance of events through the objects and landmarks that surround the characters point to their obsession with the past, neither nostalgic nor inspiring, but menacing, a reminder to the new generation that they could never achieve what past generations have. This theme is also presented through the occupations of the three main characters: Israel's piano playing, Goldman's translations and Caesar's photography all require a prior model or text - they can only reflect reality, and never create anything original.
Stream of consciousness
The flow of the narrative mixes past and present, thoughts and events, to create a stream of consciousness that moves from one character's mind to another, often through objects and experiences:
The prolonged paragraph replicates the exceptional intimacy of a society whose members are bound together by stronger-than-family ties and can hardly visit their parents or walk along the beach or drive to a funeral or an assignation without recalling who lived where and when or who had done what, where, and how.
The stream of collective consciousness Shabtai uses creates significant juxtapositions between events and produces irony. For example, when Shabtai presents the death of Aryeh, one of Caesar's relatives, the minor details brought up throughout the account puncture the tragic event:
[Aryeh] shot himself in the mouth with a pistol and was found two days later in his car on a dirt road between orange groves not far from the sea dressed in a leather suit and a floral shirt and a yellow tie, and Erwin and Caesar, who took the wooden mask of the African god from his mother and placed it on one of the shelves in the bookcase, went to identify the body in the morgue, because Yaffa and Tikva and also Zina, who looked at the mask absentmindedly and said, “Very nice,” couldn’t face it, and the two of them, together with Besh, told Yaffa, who fainted in the living room before they even told her, just as she had fainted when she heard that Tikva’s Hungarian engineer wasn’t an engineer, knocking over her cup and spilling the coffee, and Caesar made haste to pour cold water over her and the drops splashed onto Besh and Zina, who was trying to comfort her sister with a pale and frightened face but at the same time was filled with anger against her because of the whole business and because of the coffee stains spreading over the carpet and the wall, which Zina tried to clean with a wet cloth as soon as Yaffa had recovered a little, but without any success, and the stains continued to annoy her – until they repainted the whole room, which was already after Aryeh’s funeral…
The central fact of Aryeh's suicide is not as important as the values of Israeli society revealed through the smaller incidents around it, e.g. Yaffa's identical reactions to all bad news and Zina's greater concern for the coffee stains.
Existentialism
Shabtai's three protagonists all feel a fundamental sickness because of their meaningless existence and the absurd world they inhabit, and have no choice but to denounce the world that betrayed them. According to Gershon Shaked, Shabtai is probably the only Israeli novelist who has “reached a deep understanding of the double meaning of the [Zionist] meta-narrative and the double meaning of the positive heroes.” Past Continuous could be seen as an elegy for the working class which, due to its economic successes, has now become decadent. This decadence touches the younger generation as well, and both young and old are doomed from the first sentence of the novel:
Goldman’s father died on the first of April, whereas Goldman himself committed suicide on the first of January – just when it seemed to him that finally, thanks to the cultivation of detachment and withdrawal, he was about to enter a new era and rehabilitate himself by means of the “Bullworker” and a disciplined way of life, and especially by means of astronomy and the translation of the Somnium.
The younger generation attempts to replace the ideals of the past with sex, self-obsession, and meaningless routines, but these all fail. The only positive force that exists in the novel is the skillful use of language. The death of Zionist ideals engenders the birth of linguistic art: “Though words betray Shabtai’s hero, the storyteller believes these treacherous words. He believes in their symbolic power to describe his crumbling existence.”
Somewhat like James Joyce’s Ulysses, Past Continuous presents a funeral at the beginning and a birth at the end (the 'present' of the story spans a gestation period of nine months, from April 1 to January 1). In this case, however, there is no triumph of life over death, and the book ends with an image of the world as a grotesque caricature, populated by people who are dead while still alive:
…but Ella ignored her baby, whose head was covered with a fine black down, and the nurse held him helplessly in her hands and pleaded with Ella gently to take him, and then she asked her again, this time impatiently, to take him and feed him like all the other mothers, but Ella went on ignoring her baby, just as she went on ignoring Israel, who remained standing stubbornly by the bed and did not take his eyes off her as they slowly filled with tears and her face grew more and more blurred until it dissolved into the whiteness of the pillows, and behind him he heard the head nurse clapping her hands again, and finally she turned to him and asked him to leave – all the other visitors had already gone – and Israel took two or three steps backward and then he turned around and walked out of the room.
Literary significance
Past Continuous is considered the first novel ever to be written in truly vernacular Hebrew and in 2005 it was named the best novel written about Tel Aviv by Time Out Tel Aviv. Full of incidental information on the ups and downs of Zionism, the novel serves as an introduction to Israel as well as to Israeli literature. It received international acclaim as a unique work of modernism, prompting critic Gabriel Josipovici of The Independent to name it the greatest novel of the decade in 1989, comparing it to Proust's In Search of Lost Time.
In a 2007 survey among 25 top Israeli publishers, editors, and critics, Past Continuous was chosen the best Hebrew book written in Israel since the foundation of the state in 1948.
Film adaptation
In 1995 Director Amos Gitai adapted the book into a film Zihron Devarim (released as Devarim in France, L'Inventario in Italy and Things in English-speaking markets), starring Assi Dayan, Amos Shub, Lea Koenig, and Gitai himself.
In 2012 Director David (Dave) Abramov adapted the book into a short film, titled Hitabdot Aliza ('Yisrael's Cheerful Requiem') starring Eldad Carin (as Israel), Ari Libsker (as Goldman), Rana Werbin (Eliezra), and Adi Kum (as Ela).
Editions
Hebrew
Zikhron Devarim. Tel Aviv: Siman Kriah, 1977. Reprinted 1994.
English
Past Continuous. Overlook Press, 1983, reprinted 2004,
Past Continuous. Jewish Publication Society of America, 1985,
Past Continuous. Schocken Books, 1989,
References
External links
Past Continuous - New York Times review
Past Continuous at Overlook Press.
Zihron Devarim - film adaptation of the novel at Amos Gitai's official website
1977 novels
20th-century Israeli novels
Novels set in Israel
|
Jourdana Elizabeth Phillips (born 1990) is an American model.
Early life
Jourdana Phillips was born in Co-op City in the Bronx, New York, and lived there until age 6 before moving to Maplewood, New Jersey. She moved to Houston, Texas, at age 11, where she attended school and later graduated from Elkins High School.
She then moved to New York City where she attended New York University. She completed a bachelor's degree in Childhood Education in 2015.
Modelling
In 2016 she signed with Supreme Management in New York, as well as Women Management in Milan and Paris and Models 1 in London.
She has featured in editorials for Vogue, Elle and Harper’s Bazaar among others and walked in shows for Yves Saint Laurent, Balmain, Ralph Lauren, Emporio Armani, Elie Saab, Marc Jacobs, Cushnie, Topshop, Acne Studios, Tod's, Yeezy and others.
She appeared in the annual Victoria’s Secret Fashion Show in 2016, 2017 and 2018.
References
External links
Jourdana Phillips on Models.com
Living people
American female models
New York University alumni
People from Co-op City, Bronx
1990 births
Elite Model Management models
Women Management models
21st-century American women
|
```typescript
/*
* @license Apache-2.0
*
*
*
* path_to_url
*
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 */
import IS_BROWSER = require( './index' );
// TESTS //
// The variable is a boolean...
{
// eslint-disable-next-line @typescript-eslint/no-unused-expressions
IS_BROWSER; // $ExpectType boolean
}
```
|
```php
<?php
declare(strict_types=1);
/**
* Passbolt ~ Open source password manager for teams
*
* For full copyright and license information, please see the LICENSE.txt
* Redistributions of files must retain the above copyright notice.
*
* @link path_to_url Passbolt(tm)
* @since 3.3.0
*/
namespace Passbolt\MultiFactorAuthentication\Service;
use App\Model\Entity\AuthenticationToken;
use Cake\Datasource\Exception\RecordNotFoundException;
use Cake\ORM\Locator\LocatorAwareTrait;
/**
* Class UpdateMfaTokenSessionIdService
*/
class UpdateMfaTokenSessionIdService
{
use LocatorAwareTrait;
/**
* @var \App\Model\Table\AuthenticationTokensTable
*/
protected $AuthenticationTokens;
/**
* UpdateMfaTokenSessionIdService constructor.
*/
public function __construct()
{
/** @phpstan-ignore-next-line */
$this->AuthenticationTokens = $this->fetchTable('AuthenticationTokens');
}
/**
* @param string $mfaToken MFA Token to update
* @param string $sessionId Session ID
* @return \App\Model\Entity\AuthenticationToken
* @throws \Cake\Datasource\Exception\RecordNotFoundException When there is no matching MFA token.
* @throws \Cake\ORM\Exception\PersistenceFailedException When the entity couldn't be saved
*/
public function updateSessionId(string $mfaToken, string $sessionId): AuthenticationToken
{
/** @var \App\Model\Entity\AuthenticationToken|null $mfaToken */
$mfaToken = $this->AuthenticationTokens->find()
->where([
'active' => true,
'token' => $mfaToken,
'type' => AuthenticationToken::TYPE_MFA,
])
->first();
if ($mfaToken === null) {
throw new RecordNotFoundException(__('The MFA token provided does not exist or is inactive.'));
}
$mfaToken->hashAndSetSessionId($sessionId);
return $this->AuthenticationTokens->saveOrFail($mfaToken);
}
}
```
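The service above never persists the raw session ID: `hashAndSetSessionId` stores a digest on the token entity, so a leaked token record cannot be replayed as a live session. A rough sketch of that idea (Python, with hypothetical names; SHA-256 is an assumption for illustration — Passbolt's actual hashing scheme may differ):

```python
import hashlib

def hash_session_id(session_id: str) -> str:
    # Hypothetical stand-in for hashAndSetSessionId's digest step:
    # persist only a one-way hash, never the raw session ID.
    return hashlib.sha256(session_id.encode()).hexdigest()

class MfaToken:
    """Toy model of an MFA authentication token record."""

    def __init__(self, token: str, active: bool = True):
        self.token = token
        self.active = active
        self.session_id_hash = None

    def set_session_id(self, session_id: str) -> None:
        # Analogue of updateSessionId(): bind the token to the current session.
        self.session_id_hash = hash_session_id(session_id)

    def matches_session(self, session_id: str) -> bool:
        # Later requests re-hash the presented session ID and compare digests.
        return self.session_id_hash == hash_session_id(session_id)

t = MfaToken("abc123")
t.set_session_id("sess-42")
```

Binding the MFA token to a hashed session ID means the token is only honored from the session that completed the MFA challenge.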
|
Albert Bartlett Bickford (24 August 1887 – 23 December 1971) was an Australian rules footballer who played with Carlton and Melbourne in the Victorian Football League (VFL).
Playing career
Originally from Essendon Association, Bickford made one appearance for Carlton in each of the 1906 and 1907 seasons, both premiership years. He then transferred to Melbourne and played eight games in the 1908 VFL season, followed by a single appearance in 1909.
Bickford is the brother of Carlton and Essendon footballer Edric Bickford, brother-in-law of former Carlton captain Rod McGregor and uncle of Melbourne premiership player George Bickford.
Umpiring career
Bickford was appointed to the VFL list of field umpires in 1921. In round one that season, as a boundary umpire, he made his only appearance in a VFL match - Richmond versus Carlton - earning Heritage Number 129. Between 1921 and 1929 he umpired 120 country matches as a field umpire including the 1926 Heathcote District Football Association Grand Final.
References
1887 births
Carlton Football Club players
Melbourne Football Club players
Essendon Association Football Club players
Australian Football League umpires
Australian rules footballers from Melbourne
1971 deaths
People from Flemington, Victoria
|
Agnes was a wooden brigantine built in 1849 at Point Brenley, Nova Scotia. She was first registered in Pictou, Nova Scotia. Later acquired by owners in Sydney, she was wrecked on the north side of the Wollongong breakwater in New South Wales on the evening of 10 March 1877, when the wind changed while she was trying to enter the harbour of Wollongong.
References
Shipwrecks of the Illawarra Region
Ships built in Nova Scotia
1849 ships
Maritime incidents in March 1877
1851–1870 ships of Australia
1871–1900 ships of Australia
Merchant ships of Australia
Brigantines of Australia
|
Landres-et-Saint-Georges is a commune in the Ardennes department in northern France.
Population
See also
Communes of the Ardennes department
References
Communes of Ardennes (department)
|
Wioska is a village in the administrative district of Gmina Rakoniewice, within Grodzisk Wielkopolski County, Greater Poland Voivodeship, in west-central Poland. It lies approximately north-west of Rakoniewice, south-west of Grodzisk Wielkopolski, and south-west of the regional capital Poznań.
References
Wioska
|
```xml
<?xml version="1.0"?>
<package xmlns="path_to_url">
<metadata>
<id>itext.commons</id>
<version>9.0.0-SNAPSHOT</version>
<title>iText commons module</title>
<authors>Apryse Software</authors>
<owners>Apryse Software</owners>
    <licenseUrl>path_to_url</licenseUrl>
    <projectUrl>path_to_url</projectUrl>
<icon>ITSC-avatar.png</icon>
<description>Commons module</description>
<summary />
    <releaseNotes>path_to_url</releaseNotes>
<language>en-US</language>
<tags>itext itext7 itextsharp c# .net csharp</tags>
<dependencies>
<group targetFramework="net461">
<dependency id="Newtonsoft.Json" version="13.0.1" />
<dependency id="Microsoft.Extensions.Logging" version="5.0.0" />
</group>
<group targetFramework="netstandard2.0">
<dependency id="Newtonsoft.Json" version="13.0.1" />
<dependency id="Microsoft.Extensions.Logging" version="5.0.0" />
</group>
</dependencies>
</metadata>
<files>
<file src="bin\Release\net461\itext.commons.dll" target="lib\net461" />
<file src="bin\Release\net461\itext.commons.xml" target="lib\net461" />
<file src="bin\Release\netstandard2.0\itext.commons.dll" target="lib\netstandard2.0" />
<file src="bin\Release\netstandard2.0\itext.commons.xml" target="lib\netstandard2.0" />
<file src="..\..\ITSC-avatar.png" target="" />
</files>
</package>
```
|
```cpp
#pragma once
#include <cstdint>
#include <unistd.h>
#include <vector>
#include "cereal/gen/cpp/log.capnp.h"
#include "common/i2c.h"
#include "common/gpio.h"
#include "common/swaglog.h"
#include "system/sensord/sensors/constants.h"
#include "system/sensord/sensors/sensor.h"
int16_t read_12_bit(uint8_t lsb, uint8_t msb);
int16_t read_16_bit(uint8_t lsb, uint8_t msb);
int32_t read_20_bit(uint8_t b2, uint8_t b1, uint8_t b0);
class I2CSensor : public Sensor {
private:
I2CBus *bus;
int gpio_nr;
bool shared_gpio;
virtual uint8_t get_device_address() = 0;
public:
I2CSensor(I2CBus *bus, int gpio_nr = 0, bool shared_gpio = false);
~I2CSensor();
int read_register(uint register_address, uint8_t *buffer, uint8_t len);
int set_register(uint register_address, uint8_t data);
int init_gpio();
bool has_interrupt_enabled();
virtual int init() = 0;
virtual bool get_event(MessageBuilder &msg, uint64_t ts = 0) = 0;
virtual int shutdown() = 0;
int verify_chip_id(uint8_t address, const std::vector<uint8_t> &expected_ids) {
uint8_t chip_id = 0;
int ret = read_register(address, &chip_id, 1);
if (ret < 0) {
LOGW("Reading chip ID failed: %d", ret);
return -1;
}
    for (uint8_t expected_id : expected_ids) {
      if (chip_id == expected_id) return chip_id;
    }
LOGE("Chip ID wrong. Got: %d, Expected %d", chip_id, expected_ids[0]);
return -1;
}
};
```
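The `read_12_bit`/`read_16_bit`/`read_20_bit` helpers declared above combine raw register bytes into signed samples, which requires two's-complement sign extension. The core trick can be sketched like this (Python; the left-aligned 12-bit layout is an assumption about a typical sensor register map, not taken from this header):

```python
def sign_extend(value: int, bits: int) -> int:
    """Interpret `value` as a `bits`-wide two's-complement integer."""
    sign_bit = 1 << (bits - 1)
    return (value & (sign_bit - 1)) - (value & sign_bit)

def read_16_bit(lsb: int, msb: int) -> int:
    # Combine two register bytes into one signed 16-bit sample.
    return sign_extend((msb << 8) | lsb, 16)

def read_12_bit(lsb: int, msb: int) -> int:
    # Assumed layout: the 12-bit sample is left-aligned, so the low
    # nibble of `lsb` is padding and is shifted out before extending.
    return sign_extend(((msb << 8) | lsb) >> 4, 12)
```

Subtracting the sign bit once turns the raw unsigned register value into the signed reading without any branching, which is why the same `sign_extend` works for every width.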
|
Louis Abel Beffroy de Reigny (6 November 1757 – 17 December 1811) was a French dramatist and man of letters.
Life
He was born at Laon, Aisne.
Under the name of "Cousin Jacques" he founded a periodical called Les Lunes (1785–1787). The (1788–1792) followed. Nicodème dans la Lune, ou la Révolution pacifique (1790), a three-act farce, is said to have had more than four hundred performances.
In spite of his protests against the evils of the Revolution he escaped interference through the influence of his brother, Louis Etienne Beffroy, who was a member of the Convention.
Of La Petite Nanette (1795) and several other operas he wrote both the words and the music. His Dictionnaire néologique (3 vols, 1795–1800) of the chief actors and events in the Revolution was interdicted by the police and remained incomplete. Beffroy spent his last years in retirement and died in Paris on 17 December 1811.
Works
Theatre
Compliment 1781. Paris, Théâtre de l'Hôtel de Bourgogne, 16 August 1781.
Les Ailes de l'amour, comedy in 1 act in verse and in vaudevilles mingled with new songs. Paris, Théâtre Italien (salle Favart), 23 May 1786.
Les Clefs du jardin, ou les Pots de fleurs, divertissement en vers et en vaudevilles. Paris, Théâtre Italien (salle Favart), 24 March 1787.
La Fin du bail, ou le Repas des fermiers, divertissement en prose et en vaudevilles. Paris, Théâtre Italien (salle Favart), 8 March 1788.
Sans adieu, compliment de clôture 1789. Paris, Théâtre Italien (salle Favart), 24 March 1789.
La Couronne de fleurs, comédie en un acte et en vaudevilles. Paris, Théâtre Italien (salle Favart), 20 April 1789. Text online
La Confédération du Parnasse. Paris, Théâtre des Beaujolais, 11 July 1790.
Le Retour du Champ de Mars. Paris, Théâtre des Beaujolais, 25 July 1790.
Nicodème dans la lune, ou la Révolution pacifique, folie en prose et en 3 actes, mêlée d'ariettes et de vaudevilles. Paris, Théâtre-Français, 7 November 1790. Reprinting: Nizet, Paris, 1983. Text online
L'Histoire universelle, comédie en vers et en 2 actes, mêlée de vaudevilles et d'airs nouveaux. Paris, Théâtre de Monsieur, 16 December 1790.
Le Club des bonnes-gens, ou la Réconciliation, comédie en vers et en 2 actes, mêlée de vaudevilles et d'airs nouveaux. Paris, Théâtre de Monsieur, 24 September 1791. Text online
Les Deux Nicodèmes, ou Les Français sur la planète de Jupiter. Paris, Théâtre Feydeau, 21 November 1791.
Allons, ça va, ou le Quaker en France, tableau patriotique en vers et en 1 acte. Paris, Théâtre Feydeau, 28 October 1793. Text online
Toute la Grèce, ou Ce que peut la liberté, tableau patriotique en un acte. Paris, Théâtre de la Porte-Saint-Martin, 5 January 1794. Text online
Le Compère Luc ou Les Dangers de l'ivrognerie. Paris, Théâtre Feydeau, 19 February 1794.
La Petite Nannette, opéra-comique en 2 actes. Paris, Théâtre Feydeau, 7 December 1796.
Turlututu, empereur de l'Isle verte, folie, bêtise, farce ou parade, comme on voudra, en prose et en 3 actes. Paris, Théâtre de la Cité, 3 July 1797.
Jean-Baptiste, opéra comique en prose et en 1 acte. Paris, Théâtre Feydeau, 1 June 1798.
Un Rien, ou l'Habit de noces, folie épisodique en 1 acte et en prose, mêlée de vaudevilles et d'airs nouveaux. Paris, Théâtre de l'Ambigu-Comique, 7 June 1798.
Le Grand Genre. Paris, Théâtre de l'Ambigu-Comique, 13 January 1799.
Magdelon, comédie épisodique en prose et en 1 acte, mêlée d'ariettes. Paris, Théâtre Montansier, 4 June 1799.
Émilie ou Les Caprices, comédie en vers et en 3 actes. Paris, Théâtre des Jeunes-Artistes, 9 July 1799.
Les Deux Charbonniers, ou Les Contrastes, comédie en prose et en 2 actes mêlée d'ariettes. Paris, Théâtre Montansier, 24 August 1799.
Le Bonhomme, ou Poulot et Fanchon. Paris, Théâtre Montansier, 11 December 1799.
Poetry
Les Petites Poésies d'Antoine Jacques, citoyen de la place Maubert (1782)
Turlututu, ou la Science du bonheur, poème héroï-comique en vers et en huit chants, par le Cousin Jacques (1783)
Hurluberlu, ou le Célibataire, poème demi-burlesque avec des airs nouveaux, en vers et en trois chants, par le Cousin Jacques, avec des notes de M. de Kerkorkurkayladeck (1783)
Marlborough, poëme comique en prose rimée, par le Cousin-Jacques, avec des notes de M. de Kerkorkurkayladeck, gentilhomme bas-breton (1783)
Les Petites-Maisons du Parnasse, ouvrage comico-littéraire d'un genre nouveau, en vers et en prose, par le Cousin Jacques, traduit de l'arabe, etc., et donné au public par un drôle de corps, avec des notes de Messire Ives de Kerkorkurkaïladek-Kakabek, seigneur de Konkalek, Kikokikar, et autres lieux (1783–84)
Nouveau Te Deum en vers saphiques, avec des notes sur le Pape, sur le légal, sur le nouvel archevêque de Paris, sur les philosophes (1802)
Les Soirées chantantes, ou le Chansonnier bourgeois, formé du choix de tous les vaudevilles, couplets, romances, rondes, scènes chantantes du Cousin-Jacques, recueil revu, épuré par l'auteur, avec les airs nouveaux notés (1803)
Journalism and other
Le Cousin Jacques hors du Sallon, folie sans conséquence, à l'occasion des tableaux exposés au Louvre en 1787 (1787)
Histoire de France pendant trois mois, ou Relation exacte, impartiale et suivie des événemens qui ont eu lieu à Paris, à Versailles et dans les provinces, depuis le 15 mai jusqu'au 15 août 1789, avec des anecdotes qui n'ont point encore été publiées et des réflexions sur l'état actuel de la France, et suivie d'une épître en vers à Louis XVI (1789)
Précis exact de la prise de la Bastille rédigé sous les yeux des principaux acteurs qui ont joué un rôle dans cette expédition et lu le même jour à l'Hôtel-de-Ville (1789)
Supplément nécessaire au Précis exact de la prise de la Bastille, avec des anecdotes curieuses sur le même sujet (1789). Text online
Les Repentirs de l'année 1788, suivis de douze petites lettres, écrites a qui voudra les lire (1789)
Le Lendemain, ou l'Esprit des feuilles de la veille (10 October 1790 - 19 June 1791).
Les Lunes du Cousin Jacques (1785-1787). Text online
Courrier des planètes, ou Correspondance du Cousin Jacques avec le firmament, folie périodique dédiée à la Lune (1788–1790)
Les Nouvelles Lunes du Cousin Jacques (1791)
Almanach général de tous les spectacles de Paris et des provinces pour l'année 1791 [et 1792] par une société de gens de lettres et d'artistes (2 volumes, in collaboration, 1792–93)
Ah ! sauvons la France, puisqu'on le peut encore, ou Plan de finances, simple, facile, prompt et moral dans son exécution, soumis à l'opinion publique par un citoyen de Paris, qui veut garder l'anonyme (1793). Text online
La Constitution de la Lune, rêve politique et moral, par le Cousin-Jacques (1793). Text online
Testament d'un électeur de Paris (1795)
Dictionnaire néologique des hommes et des choses, ou Notice alphabétique des personnes des deux sexes, des événements, des découvertes et des mots qui ont paru le plus remarquables à l'auteur, dans tout le cours de la Révolution française, par le Cousin-Jacques (3 volumes, 1799)
Bibliography
Charles Westercamp, Beffroy de Reigny dit le Cousin Jacques, 1757-1811. Sa vie et ses Œuvres, Tablettes de l’Aisne, Laon, 1930.
External links
Ses pièces de théâtre et leurs représentations sur le site CÉSAR
References
Attribution
1757 births
1811 deaths
People from Laon
18th-century French dramatists and playwrights
18th-century French male writers
18th-century French poets
|
```javascript
var map = require('./map'),
property = require('../utility/property');
/**
* Gets the property value of `path` from all elements in `collection`.
*
* @static
* @memberOf _
* @category Collection
* @param {Array|Object|string} collection The collection to iterate over.
* @param {Array|string} path The path of the property to pluck.
* @returns {Array} Returns the property values.
* @example
*
* var users = [
* { 'user': 'barney', 'age': 36 },
* { 'user': 'fred', 'age': 40 }
* ];
*
* _.pluck(users, 'user');
* // => ['barney', 'fred']
*
* var userIndex = _.indexBy(users, 'user');
* _.pluck(userIndex, 'age');
* // => [36, 40] (iteration order is not guaranteed)
*/
function pluck(collection, path) {
return map(collection, property(path));
}
module.exports = pluck;
```
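The composition above (`map` over a `property` getter) can be reproduced without lodash. A minimal standalone sketch, with hypothetical helper names, illustrating the same behavior for both arrays and plain objects:

```javascript
// Standalone sketch of pluck: map each element through a property getter.
function property(path) {
  return function (obj) { return obj[path]; };
}
function map(collection, iteratee) {
  return Array.isArray(collection)
    ? collection.map(iteratee)
    : Object.keys(collection).map(function (key) { return iteratee(collection[key]); });
}
function pluck(collection, path) {
  return map(collection, property(path));
}

var users = [
  { user: 'barney', age: 36 },
  { user: 'fred', age: 40 }
];
console.log(pluck(users, 'user')); // [ 'barney', 'fred' ]
console.log(pluck({ a: { age: 36 }, b: { age: 40 } }, 'age')); // [ 36, 40 ]
```

As in lodash, object iteration order follows the engine's key order, so it is not guaranteed in general.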
|
```html
<html>
<head>
<title>NVIDIA(R) PhysX(R) SDK 3.4 API Reference: Member List</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<LINK HREF="NVIDIA.css" REL="stylesheet" TYPE="text/css">
</head>
<body bgcolor="#FFFFFF">
<div id="header">
<hr class="first">
<img alt="" src="images/PhysXlogo.png" align="middle"> <br>
<center>
<a class="qindex" href="main.html">Main Page</a>
<a class="qindex" href="hierarchy.html">Class Hierarchy</a>
<a class="qindex" href="annotated.html">Compound List</a>
<a class="qindex" href="functions.html">Compound Members</a>
</center>
<hr class="second">
</div>
<!-- Generated by Doxygen 1.5.8 -->
<div class="contents">
<h1>PxSpatialLocationCallback Member List</h1>This is the complete list of members for <a class="el" href="structPxSpatialLocationCallback.html">PxSpatialLocationCallback</a>, including all inherited members.<p><table>
<tr class="memlist"><td><a class="el" href="structPxSpatialLocationCallback.html#869a311bedd23a07890d0d1d279eaab4">onHit</a>(PxSpatialIndexItem &amp;item, PxReal distance, PxReal &amp;shrunkDistance)=0</td><td><a class="el" href="structPxSpatialLocationCallback.html">PxSpatialLocationCallback</a></td><td><code> [pure virtual]</code></td></tr>
<tr class="memlist"><td><a class="el" href="structPxSpatialLocationCallback.html#b3b5df6caf441e46163273d340cb478a">~PxSpatialLocationCallback</a>()</td><td><a class="el" href="structPxSpatialLocationCallback.html">PxSpatialLocationCallback</a></td><td><code> [inline, virtual]</code></td></tr>
</table></div>
<hr style="width: 100%; height: 2px;"><br>
</body>
</html>
```
|
```c
/* Support for dynamic loading of extension modules */
#include "dl.h"
#include "Python.h"
#include "importdl.h"
extern char *Py_GetProgramName(void);
const struct filedescr _PyImport_DynLoadFiletab[] = {
{".o", "rb", C_EXTENSION},
{"module.o", "rb", C_EXTENSION},
{0, 0}
};
dl_funcptr _PyImport_GetDynLoadFunc(const char *fqname, const char *shortname,
const char *pathname, FILE *fp)
{
char funcname[258];
PyOS_snprintf(funcname, sizeof(funcname), "init%.200s", shortname);
return dl_loadmod(Py_GetProgramName(), pathname, funcname);
}
```
|
Roy Odhier (born 23 September 1964) is a Kenyan field hockey player. He competed in the men's tournament at the 1988 Summer Olympics.
References
External links
1964 births
Living people
Kenyan male field hockey players
Olympic field hockey players for Kenya
Field hockey players at the 1988 Summer Olympics
Place of birth missing (living people)
|
In geometry, the rectified truncated cube is a polyhedron, constructed as a rectified, truncated cube. It has 38 faces: 8 equilateral triangles, 24 isosceles triangles, and 6 octagons.
Topologically, the triangles corresponding to the cube's vertices (the triangular faces of the truncated cube) are always equilateral. The octagons have equal edge lengths but alternating angles, so their edges are not the same length as those of the equilateral triangles, which causes the remaining 24 triangles to be isosceles instead.
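The face, edge, and vertex counts follow from rectification, which places a new vertex at the midpoint of each edge of the truncated cube. A short sketch checking the counts against Euler's formula (the truncated cube's counts V=24, E=36, F=14 are standard):

```python
# Truncated cube: 24 vertices, 36 edges, 14 faces (8 triangles, 6 octagons).
V0, E0, F0 = 24, 36, 14

# Rectification: one new vertex per old edge; each old edge midpoint joins
# its neighbours around every face, giving two new edges per old edge; the
# faces are the old faces plus one vertex figure per old vertex.
V = E0          # 36
E = 2 * E0      # 72
F = V0 + F0     # 38 faces: 8 + 24 triangles and 6 octagons

assert V - E + F == 2   # Euler's formula for a convex polyhedron
print(V, E, F)          # 36 72 38
```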
Related polyhedra
The rectified truncated cube can be seen in sequence of rectification and truncation operations from the cube. Further truncation, and alternation operations creates two more polyhedra:
See also
Rectified truncated tetrahedron
Rectified truncated octahedron
Rectified truncated dodecahedron
Rectified truncated icosahedron
References
Coxeter Regular Polytopes, Third edition, (1973), Dover edition, (pp. 145–154 Chapter 8: Truncation)
John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008,
External links
George Hart's Conway interpreter: generates polyhedra in VRML, taking Conway notation as input
Polyhedra
|
Beinn Achaladair is a Scottish mountain situated six kilometres north east of the hamlet of Bridge of Orchy. The mountain stands on the border of the Perth and Kinross and Argyll and Bute council areas.
Overview
Beinn Achaladair is a distinct landmark for both road and rail travellers, as both the A82 road and the West Highland Line pass close to the foot of the mountain, with the railway traversing the lower northern slopes before crossing Rannoch Moor on its way to Fort William. The mountain looks impressive from the north west, throwing down steep, wall-like slopes, and along with the three adjoining Munros of Beinn Dorain, Beinn an Dothaidh, and Beinn a' Chreachain it forms the historical Great Wall of Rannoch, the boundary between the old Pictish kingdom to the east and the Dál Riata kingdom of the Scots in the west.
Beinn Achaladair reaches a height of and is classed as a Munro and a Marilyn. Geographically it is part of the southern highlands, but it stands at their northern extremity and displays many of the characteristics of the rockier peaks to the north. The mountain is believed to take its name from the settlement of Achallader at the foot of the northern slopes, translating from the Gaelic as “Field of hard water”, which referred to the area around Loch Tulla, which often flooded and froze in the past. Achallader is a farm today, but it was formerly the site of Achallader Castle, one of Campbell of Glenorchy’s seven strongholds, the remains of which can still be seen next to the farmhouse. However, Hamish Brown and others give the hill's translated name as “Hill of the Mower”.
Geography
Beinn Achaladair has a curved summit ridge almost two kilometres in length which runs north to south. The highest point stands at its northern end and overlooks Rannoch Moor; there are two cairns close together at the summit, with the more northerly one being the highest point by a couple of feet. Just over a kilometre south of the highest point stands the South Top, which, with a height of 1002 metres, is listed as a “Top” in Munro’s Tables. Beinn Achaladair has two corries on its slopes. To the east of the summit ridge is Coire nan Clach, which contains eight very small lochans in its upper recesses. This corrie drains down Gleann Cailliche (Glen of the Old Woman) into Loch Lyon; this now-deserted glen was well populated before the Highland Clearances, and the remains of the settlement of Tigh na Cailleach and the surrounding shielings can still be identified. All drainage from this side of the mountain finds its way to the Firth of Tay on the east coast via Loch Lyon, Loch Tay and the River Tay.
Beinn Achaladair’s other significant corrie is Corrie Achaladair, which stands to the south of the mountain and forms a col with the adjoining Munro of Beinn an Dothaidh. The mountain's steep northern and western slopes are rocky higher up before becoming grassy as they fall to the valley. These grassy slopes are riven with many small streams which drain to the Water of Tulla, which drains into Loch Tulla.
Ascents
The most common ascent of Beinn Achaladair starts from Achallader farm at grid reference from where it is usually climbed with the adjacent Munro of Beinn a' Chreachain, which stands three kilometres to the north east. The farmer at Achallader kindly allows walkers to park in a field next to the farm, and they can show their appreciation by leaving money in an honesty box. From the farm, Coire Achaladair is ascended to the col with Beinn an Dothaidh, passing several impressive waterfalls on the way. From the col it is a three kilometre walk north to the summit, with a vertical ascent of over 300 metres, passing over the South Top on the way. The summit gives fine views of Rannoch Moor, with the near-at-hand flatness of the moor emphasising the feeling of height.
References and footnotes
The Munros, Scottish Mountaineering Trust, 1986, Donald Bennett (Editor)
The High Mountains of Britain and Ireland, Diadem, 1993, Irvine Butterfield,
100 Best Routes on Scottish Mountains, Warner Books, 1992, Ralph Storer,
Hamish’s Mountain Walk, Baton Wicks, 1996, Hamish Brown,
The Munros, Scotland Highest Mountains, 2006, Cameron McNeish,
Highland Perthshire, Standard Press, 1978, Duncan Fraser,
Footnotes
Munros
Marilyns of Scotland
Mountains and hills of the Southern Highlands
One-thousanders of Scotland
|
```python
# your_sha256_hash___________
#
# Pyomo: Python Optimization Modeling Objects
# National Technology and Engineering Solutions of Sandia, LLC
# Under the terms of Contract DE-NA0003525 with National Technology and
# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain
# rights in this software.
# your_sha256_hash___________
# wl_abstract.py: AbstractModel version of warehouse location determination problem
import pyomo.environ as pyo
model = pyo.AbstractModel(name="(WL)")
# @setdecl:
model.N = pyo.Set()
model.M = pyo.Set()
# @:setdecl
# @paramdecl:
model.d = pyo.Param(model.N, model.M)
model.P = pyo.Param()
# @:paramdecl
# @vardecl:
model.x = pyo.Var(model.N, model.M, bounds=(0, 1))
model.y = pyo.Var(model.N, within=pyo.Binary)
# @:vardecl
def obj_rule(model):
return sum(model.d[n, m] * model.x[n, m] for n in model.N for m in model.M)
model.obj = pyo.Objective(rule=obj_rule)
# @deliver:
def one_per_cust_rule(model, m):
return sum(model.x[n, m] for n in model.N) == 1
model.one_per_cust = pyo.Constraint(model.M, rule=one_per_cust_rule)
# @:deliver
def warehouse_active_rule(model, n, m):
return model.x[n, m] <= model.y[n]
model.warehouse_active = pyo.Constraint(model.N, model.M, rule=warehouse_active_rule)
def num_warehouses_rule(model):
return sum(model.y[n] for n in model.N) <= model.P
model.num_warehouses = pyo.Constraint(rule=num_warehouses_rule)
```
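The model minimizes total delivery cost `sum(d[n,m] * x[n,m])` subject to each customer being served exactly once and at most `P` warehouses being open. A pure-Python sketch, using hypothetical toy data, of what a feasible assignment and its objective value look like (no solver involved):

```python
# Hypothetical toy data: two candidate warehouses serving three customers.
d = {('W1', 'C1'): 1, ('W1', 'C2'): 4, ('W1', 'C3'): 2,
     ('W2', 'C1'): 3, ('W2', 'C2'): 1, ('W2', 'C3'): 5}
P = 1  # at most one warehouse may be opened

# Candidate solution: open only W1 and assign every customer to it.
x = {(n, m): 1 if n == 'W1' else 0 for (n, m) in d}
y = {'W1': 1, 'W2': 0}

# Feasibility checks mirroring one_per_cust and num_warehouses.
assert all(sum(x[(n, m)] for n in ('W1', 'W2')) == 1 for m in ('C1', 'C2', 'C3'))
assert sum(y.values()) <= P

# Objective value mirroring obj_rule.
cost = sum(d[k] * x[k] for k in d)
print(cost)  # 7
```

With real data, the same quantities would come from solving the instantiated `AbstractModel` with a MIP solver.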
|
```javascript
/**
* @license Apache-2.0
*
*
*
* path_to_url
*
* Unless required by applicable law or agreed to in writing, software
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*/
/* eslint-disable stdlib/no-redeclare */
'use strict';
// MODULES //
var isNumber = require( '@stdlib/assert/is-number' ).isPrimitive;
var isfinite = require( '@stdlib/math/base/assert/is-finite' );
// MAIN //
/**
* Tests if a value is a number primitive having a finite value.
*
* @param {*} value - value to test
* @returns {boolean} boolean indicating if a value is a number primitive having a finite value
*
* @example
* var bool = isFinite( -3.0 );
* // returns true
*
* @example
* var bool = isFinite( new Number( -3.0 ) );
* // returns false
*/
function isFinite( value ) {
return (
isNumber( value ) &&
isfinite( value )
);
}
// EXPORTS //
module.exports = isFinite;
```
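The two conditions above can be expressed in plain JavaScript without `@stdlib`. A minimal sketch (the `isFiniteNumber` name is introduced here for illustration): a value passes only if it is a number primitive whose value is finite.

```javascript
// Plain-JS sketch of the same check: number primitive, not NaN, not ±Infinity.
function isFiniteNumber(value) {
  return typeof value === 'number' &&
         value === value &&        // rejects NaN (NaN !== NaN)
         value !== Infinity &&
         value !== -Infinity;
}

console.log(isFiniteNumber(-3.0));             // true
console.log(isFiniteNumber(new Number(-3.0))); // false (object wrapper, not a primitive)
console.log(isFiniteNumber(NaN));              // false
console.log(isFiniteNumber(1 / 0));            // false
```

Note this differs from the global `isFinite`, which coerces its argument and would accept `new Number(-3.0)`.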
|
```java
/*
* path_to_url
*
* Unless required by applicable law or agreed to in writing, software
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*/
package org.flowable.engine.impl.bpmn.parser.handler;
import org.flowable.bpmn.model.BaseElement;
import org.flowable.bpmn.model.HttpServiceTask;
import org.flowable.bpmn.model.ServiceTask;
import org.flowable.engine.impl.bpmn.parser.BpmnParse;
/**
* @author Tijs Rademakers
*/
public class HttpServiceTaskParseHandler extends AbstractActivityBpmnParseHandler<ServiceTask> {
@Override
public Class<? extends BaseElement> getHandledType() {
return HttpServiceTask.class;
}
@Override
protected void executeParse(BpmnParse bpmnParse, ServiceTask serviceTask) {
serviceTask.setBehavior(bpmnParse.getActivityBehaviorFactory().createHttpActivityBehavior(serviceTask));
}
}
```
|
```yaml
is:
activemodel:
attributes:
close_meeting:
attendees_count: Fjldi tttakenda
attending_organizations: Listi yfir samtk sem sttu
closing_report: Skrsla
contributions_count: Fjldi framlaga
proposal_ids: Tillgur bin til fundinum
meeting:
address: Heimilisfang
available_slots: Lausar rifa fyrir ennan fund
decidim_category_id: Flokkur
decidim_scope_id: Umfang
description: Lsing
end_time: Lokatmi
location: Stasetning
location_hints: Stasetningarmguleikar
private_meeting: Einkafundur
registration_terms: Skrningarskilmlar
registrations_enabled: Skrningar virkt
start_time: Byrjunartmi
title: Titill
transparent: Gegnstt
minutes:
audio_url: Audio url
description: Lsing
video_url: Video url
visible: Er snilegt
decidim:
admin:
meeting_copies:
new:
copy: Afrita
select: Veldu hvaa ggn vilt afrita
title: Afrita fundi
components:
meetings:
actions:
join: Skru ig
name: Fundir
settings:
global:
announcement: Tilkynning
comments_enabled: Athugasemdir virkt
default_registration_terms: Sjlfgefin skrningarskilmlar
step:
announcement: Tilkynning
comments_blocked: Athugasemdir lst
events:
meetings:
meeting_closed:
affected_user:
email_subject: Fundurinn "%{resource_title}" var lokaur
notification_title: <a href="%{resource_path}">%{resource_title}</a> fundurinn var lokaur.
follower:
email_intro: 'Fundurinn "%{resource_title}" var lokaur. getur lesi niursturnar af sunni:'
email_subject: Fundurinn "%{resource_title}" var lokaur
notification_title: <a href="%{resource_path}">%{resource_title}</a> fundurinn var lokaur.
meeting_created:
email_intro: Fundurinn "%{resource_title}" hefur veri btt vi "%{participatory_space_title}" sem fylgist me.
email_outro: hefur fengi essa tilkynningu vegna ess a fylgist me "%{participatory_space_title}". getur sleppt v fr fyrri tengilinn.
email_subject: N fundur btt vi %{participatory_space_title}
notification_title: Fundurinn <a href="%{resource_path}">%{resource_title}</a> hefur veri btt vi %{participatory_space_title}
meeting_registrations_over_percentage:
email_outro: hefur fengi essa tilkynningu vegna ess a ert stjrnandi tttkustigi fundarins.
meeting_updated:
email_intro: '"%{resource_title}" fundurinn var uppfrur. getur lesi nja tgfu af sunni:'
email_outro: hefur fengi essa tilkynningu vegna ess a fylgir "%{resource_title}" fundinum. getur sleppt v fr fyrri tengilinn.
email_subject: '"%{resource_title}" fundurinn var uppfrur'
notification_title: <a href="%{resource_path}">%{resource_title}</a> fundurinn var uppfrur.
registrations_enabled:
email_intro: '"%{resource_title}" fundurinn hefur gert skrningu kleift. getur skr ig sunni:'
email_outro: hefur fengi essa tilkynningu vegna ess a fylgir "%{resource_title}" fundinum. getur sleppt v fr fyrri tengilinn.
email_subject: '"%{resource_title}" fundurinn hefur gert skrningu kleift.'
notification_title: <a href="%{resource_path}">%{resource_title}</a> fundurinn hefur gert skrningu kleift.
upcoming_meeting:
email_intro: '"%{resource_title}" fundurinn mun byrja innan vi 48 klst.'
email_outro: hefur fengi essa tilkynningu vegna ess a fylgir "%{resource_title}" fundinum. getur sleppt v fr fyrri tengilinn.
email_subject: '"%{resource_title}" fundurinn mun byrja innan vi 48 klst.'
notification_title: <a href="%{resource_path}">%{resource_title}</a> fundurinn hefst innan vi 48 klst.
meetings:
actions:
attachments: Vihengi
close: Loka
confirm_destroy: Ertu viss um a viljir eya essum fundi?
destroy: Eya
edit: Breyta
minutes: Fundargerir
preview: Preview
registrations: Skrningar
title: Agerir
admin:
exports:
registrations: Skrningar
invite_join_meeting_mailer:
invite:
join: Taka tt fundi '%{meeting_title}'
meeting_closes:
edit:
close: Loka
title: Loka fundi
meetings:
close:
success: Fundur me gum rangri loka
create:
success: Fundur bin til me gum rangri
destroy:
success: Fundur me gum rangri eytt
edit:
update: Uppfra
index:
title: Fundir
new:
create: Ba til
title: Ba til fundi
service:
description: Lsing
down: Niur
remove: Fjarlgja
service: jnusta
title: Titill
up: Upp
services:
add_service: Bta vi jnustu
services: jnusta
update:
success: Fundur tkst a uppfra
minutes:
create:
success: Fundargerir bin til me gum rangri
edit:
update: Uppfra
new:
create: Ba til
title: Bu til mntur
update:
success: Fundargerir me gum rangri uppfr
models:
meeting:
name: Fundur
registrations:
edit:
save: Vista
form:
available_slots_help: Leyfi a til 0 ef hefur takmarkaa rifa boi.
reserved_slots_help: Leggu a 0 ef hefur ekki skilinn rifa
reserved_slots_less_than: Verur a vera minna en ea jafnt vi %{count}
update:
success: Fundir skrningarstillingar voru vistaar me gum rangri.
admin_log:
meeting:
close: "%{user_name} lokai %{resource_name} fundinum %{space_name} plssinu"
create: "%{user_name} bi til %{resource_name} fundinn %{space_name} plssinu"
delete: "%{user_name} eyddi %{resource_name} fundinum %{space_name} plssinu"
export_registrations: "%{user_name} flutti skrningar %{resource_name} fundarins %{space_name} plssinu"
update: "%{user_name} uppfrt %{resource_name} fundi %{space_name} plssi"
value_types:
organizer_presenter:
not_found: 'Skipuleggjandi fannst ekki gagnagrunninum (ID: %{id})'
minutes:
create: "%{user_name} bi til fundarger fundarins %{resource_name} %{space_name} plssinu"
update: "%{user_name} uppfrt fundargerir fundarins %{resource_name} %{space_name} plssinu"
mailer:
invite_join_meeting_mailer:
invite:
subject: Bo um tttku fundi
registration_mailer:
confirmation:
subject: Skrning fundarins hefur veri stafest
meeting:
not_allowed: mtt ekki skoa ennan fund
meetings:
filters:
category: Flokkur
date: Dagsetning
search: Leita
filters_small_view:
close_modal: Loka mt
filter: Sa
filter_by: Sa eftir
unfold: Fella t
meetings:
no_meetings_warning: Engar fundir samrmast leitarskilyrum num ea a er ekki fundur tla.
upcoming_meetings_warning: Eins og er, eru engar tlanir fundar, en hr er hgt a finna allar fyrri fundi skr.
registration_confirm:
cancel: Htta vi
confirm: Stafesta
show:
attendees: tttakendur telja
contributions: Framlg telja
going: Fara
join: Skru ig fundi
meeting_report: Fundarskrsla
no_slots_available: Engar rifa boi
organizations: Mta stofnanir
models:
meeting:
fields:
closed: Loka
end_time: Loka dagsetning
map: Kort
start_time: Upphafsdagur
title: Titill
read_more: "(Lestu meira)"
registration_mailer:
confirmation:
confirmed_html: Skrningin n til fundarins <a href="%{url}">%{title}</a> hefur veri stafest.
details: finnur upplsingar fundarins vihenginu.
registrations:
destroy:
success: hefur skili eftir fundinn me gum rangri.
types:
private_meeting: Einkafundur
transparent: Gegnstt
participatory_processes:
participatory_process_groups:
highlighted_meetings:
past_meetings: Fyrri fundi
upcoming_meetings: Nstu fundir
participatory_spaces:
highlighted_meetings:
past_meetings: Fyrri fundi
upcoming_meetings: Nstu fundir
devise:
mailer:
join_meeting:
subject: Bo um tttku fundi
```
|
Prosoplus tuberosicollis is a species of beetle in the family Cerambycidae. It was described by Stephan von Breuning in 1939. It is known from Papua New Guinea.
References
Prosoplus
Beetles described in 1939
|
Leptodontidium is a genus of fungi belonging to the family Leptodontidiaceae.
The species of this genus are found in Europe and Australia.
Species:
Leptodontidium aciculare
Leptodontidium aureum
Leptodontidium beauverioides
References
Helotiales
|
```javascript
const $ = require('jquery');
const fs = require('fs');
const path = require('path');
const chai = require("chai");
const should = chai.should();
const JWebDriver = require('jwebdriver');
chai.use(JWebDriver.chaiSupportChainPromise);
const resemble = require('resemblejs-node');
resemble.outputSettings({
errorType: 'flatDifferenceIntensity'
});
const rootPath = getRootPath();
module.exports = function(){
let driver, testVars;
before(function(){
let self = this;
driver = self.driver;
testVars = self.testVars;
});
{$testCodes}
function _(str){
if(typeof str === 'string'){
return str.replace(/\{\{(.+?)\}\}/g, function(all, key){
return testVars[key] || '';
});
}
else{
return str;
}
}
};
if(module.parent && /mocha\.js/.test(module.parent.id)){
runThisSpec();
}
function runThisSpec(){
// read config
let webdriver = process.env['webdriver'] || '';
let proxy = process.env['wdproxy'] || '';
let config = require(rootPath + '/config.json');
let webdriverConfig = Object.assign({},config.webdriver);
let host = webdriverConfig.host;
let port = webdriverConfig.port || 4444;
let group = webdriverConfig.group || 'default';
let match = webdriver.match(/([^\:]+)(?:\:(\d+))?/);
if(match){
host = match[1] || host;
port = match[2] || port;
}
let testVars = config.vars;
let browsers = webdriverConfig.browsers;
browsers = browsers.replace(/^\s+|\s+$/g, '');
delete webdriverConfig.host;
delete webdriverConfig.port;
delete webdriverConfig.group;
delete webdriverConfig.browsers;
// read hosts
let hostsPath = rootPath + '/hosts';
let hosts = '';
if(fs.existsSync(hostsPath)){
hosts = fs.readFileSync(hostsPath).toString();
}
let specName = path.relative(rootPath, __filename).replace(/\\/g,'/').replace(/\.js$/,'');
browsers.split(/\s*,\s*/).forEach(function(browserName){
let caseName = specName + ' : ' + browserName;
let browserInfo = browserName.split(' ');
browserName = browserInfo[0];
let browserVersion = browserInfo[1];
describe(caseName, function(){
this.timeout(600000);
this.slow(1000);
let driver;
before(function(){
let self = this;
let driver = new JWebDriver({
'host': host,
'port': port
});
let sessionConfig = Object.assign({}, webdriverConfig, {
'group': group,
'browserName': browserName,
'version': browserVersion,
'ie.ensureCleanSession': true,
});
if(proxy){
sessionConfig.proxy = {
'proxyType': 'manual',
'httpProxy': proxy,
'sslProxy': proxy
}
}
else if(hosts){
sessionConfig.hosts = hosts;
}
try {
self.driver = driver.session(sessionConfig){$sizeCode}.config({
pageloadTimeout: 30000, // page onload timeout
scriptTimeout: 5000, // sync script timeout
asyncScriptTimeout: 10000 // async script timeout
});
} catch (e) {
console.log(e);
}
self.testVars = testVars;
let casePath = path.dirname(caseName);
if (config.reporter && config.reporter.distDir) {
self.screenshotPath = config.reporter.distDir + '/reports/screenshots/' + casePath;
self.diffbasePath = config.reporter.distDir + '/reports/diffbase/' + casePath;
} else {
self.screenshotPath = rootPath + '/reports/screenshots/' + casePath;
self.diffbasePath = rootPath + '/reports/diffbase/' + casePath;
}
self.caseName = caseName.replace(/.*\//g, '').replace(/\s*[:\.\:\-\s]\s*/g, '_');
mkdirs(self.screenshotPath);
mkdirs(self.diffbasePath);
self.stepId = 0;
return self.driver;
});
module.exports();
beforeEach(function(){
let self = this;
self.stepId ++;
if(self.skipAll){
self.skip();
}
});
afterEach(async function(){
let self = this;
let currentTest = self.currentTest;
let title = currentTest.title;
if(currentTest.state === 'failed' && /^(url|waitBody|switchWindow|switchFrame):/.test(title)){
self.skipAll = true;
}
if ((config.screenshots && config.screenshots.captureAll && !/^(closeWindow):/.test(title)) || currentTest.state === 'failed') {
const casePath = path.dirname(caseName);
const filepath = `${self.screenshotPath}/${self.caseName}_${self.stepId}`;
const relativeFilePath = `./screenshots/${casePath}/${self.caseName}_${self.stepId}`;
let driver = self.driver;
try{
// catch error when get alert msg
await driver.getScreenshot(filepath + '.png');
let url = await driver.url();
let html = await driver.source();
html = '<!--url: '+url+' -->\n' + html;
fs.writeFileSync(filepath + '.html', html);
let cookies = await driver.cookies();
fs.writeFileSync(filepath + '.cookie', JSON.stringify(cookies));
appendToContext(self, relativeFilePath + '.png');
}
catch(e){}
}
});
after(function(){
return this.driver.close();
});
});
});
}
function getRootPath(){
let rootPath = path.resolve(__dirname);
while(rootPath){
if(fs.existsSync(rootPath + '/config.json')){
break;
}
rootPath = rootPath.substring(0, rootPath.lastIndexOf(path.sep));
}
return rootPath;
}
function mkdirs(dirname){
if(fs.existsSync(dirname)){
return true;
}else{
if(mkdirs(path.dirname(dirname))){
fs.mkdirSync(dirname);
return true;
}
}
}
function callSpec(name){
try{
require(rootPath + '/' + name)();
}
catch(e){
console.log(e)
process.exit(1);
}
}
function isPageError(code){
return code == '' || / jscontent="errorCode" jstcache="\d+"|diagnoseConnectionAndRefresh|dnserror_unavailable_header|id="reportCertificateErrorRetry"|400 Bad Request|403 Forbidden|404 Not Found|500 Internal Server Error|502 Bad Gateway|503 Service Temporarily Unavailable|504 Gateway Time-out/i.test(code);
}
function appendToContext(mocha, content) {
try {
const test = mocha.currentTest || mocha.test;
if (!test.context) {
test.context = content;
} else if (Array.isArray(test.context)) {
test.context.push(content);
} else {
test.context = [test.context];
test.context.push(content);
}
} catch (e) {
console.log('error', e);
}
};
function catchError(error){
}
```
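The `_` helper above substitutes `{{name}}` placeholders in step arguments with values from `testVars`, defaulting to an empty string. A standalone sketch of that substitution (the `substitute` name is hypothetical):

```javascript
// Replaces {{key}} placeholders with values from a variables map;
// unknown keys become the empty string, non-strings pass through unchanged.
function substitute(str, vars) {
  if (typeof str !== 'string') return str;
  return str.replace(/\{\{(.+?)\}\}/g, function (all, key) {
    return vars[key] || '';
  });
}

console.log(substitute('open {{host}}:{{port}}', { host: 'localhost', port: 4444 }));
// open localhost:4444
console.log(substitute('missing {{nope}}!', {})); // missing !
```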
|
```go
/*
path_to_url
Unless required by applicable law or agreed to in writing, software
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*/
// Code generated by applyconfiguration-gen. DO NOT EDIT.
package v1
// LimitRangeSpecApplyConfiguration represents a declarative configuration of the LimitRangeSpec type for use
// with apply.
type LimitRangeSpecApplyConfiguration struct {
Limits []LimitRangeItemApplyConfiguration `json:"limits,omitempty"`
}
// LimitRangeSpecApplyConfiguration constructs a declarative configuration of the LimitRangeSpec type for use with
// apply.
func LimitRangeSpec() *LimitRangeSpecApplyConfiguration {
return &LimitRangeSpecApplyConfiguration{}
}
// WithLimits adds the given value to the Limits field in the declarative configuration
// and returns the receiver, so that objects can be build by chaining "With" function invocations.
// If called multiple times, values provided by each call will be appended to the Limits field.
func (b *LimitRangeSpecApplyConfiguration) WithLimits(values ...*LimitRangeItemApplyConfiguration) *LimitRangeSpecApplyConfiguration {
for i := range values {
if values[i] == nil {
panic("nil value passed to WithLimits")
}
b.Limits = append(b.Limits, *values[i])
}
return b
}
```
|
The Texas Thunder were a professional indoor football team that played in the American Professional Football League in 2004. The Thunder tied the Missouri Minutemen 42-42 in their inaugural APFL game.
The Thunder coaches were James Sanders, Chris Chandler and Art Tarango. Players of note were:
Matt Holem (AF2)
Hallart Keaton (NIFL)
Rolf Shaefer (NIFL)
Mark Ricker (NIFL)
Darrell Wilkins (IFL)
Joshua Sooter (IFL)
Fred Robinson (IPFL)
External links
Texas Thunder pre-game introductions via YouTube
Texas Thunder at Indoor Football Hall of Fame and Museum
References
National Indoor Football League teams
American football teams in Texas
|
Gregory W. Fowler is an American academic administrator serving as president of the University of Maryland Global Campus. He was previously president of Southern New Hampshire University's Global Campus.
Early life and education
Fowler was raised in Albany, Georgia, with seven siblings. His mother was a secondary schoolteacher, and several other family members were ministers. After graduating from Morehouse College, Fowler was an outreach specialist and media affairs specialist at the National Endowment for the Humanities for four years. During this time, he completed a master's degree in English at George Mason University. He worked as a lecturer and assistant professor of literature and American studies at Penn State Erie, The Behrend College, while earning a Ph.D. in English and American studies, completing his dissertation on Mark Twain and Generation X at the University at Buffalo. Fowler also completed a Master of Business Administration at Western Governors University in Utah and higher education and executive leadership programs at Harvard University. He was a Charles A. Dana Scholar at Duke University and received two Fulbright awards, in 2002 to Berlin, Germany and in 2006 to Belgium and Germany.
Career
Fowler was associate provost and dean of liberal arts at Western Governors University. He served as chief academic officer and vice president for academic affairs at Hesser College in New Hampshire. For nine years, Fowler worked at Southern New Hampshire University in different roles, including chief academic officer and vice president of academic affairs. He was promoted to president of its global campus in September 2018. On January 4, 2021, Fowler succeeded Javier Miyares as president of the University of Maryland Global Campus. He is the institution's first non-interim African American president.
References
Living people
Year of birth missing (living people)
Place of birth missing (living people)
People from Albany, Georgia
Morehouse College alumni
George Mason University alumni
Western Governors University alumni
Southern New Hampshire University faculty
Presidents of the University of Maryland Global Campus
African-American academic administrators
University at Buffalo alumni
21st-century African-American academics
21st-century American academics
|
The Nigeria Rugby Football Federation (NRFF) is the governing body for rugby union in Nigeria. It is affiliated to the Nigeria Olympic Committee, Rugby Africa and World Rugby.
The NRFF is governed by a democratically elected board, headed by President Ademola Are and Vice President AIG Aliyu Abubakar (Rtd), who run the affairs of rugby in Nigeria.
References
Rugby union governing bodies in Africa
Rugby union in Nigeria
Rugby
Sports organizations established in 1998
Sports organizations based in Lagos
|
"My Secret Friend" is a song performed by IAMX and Imogen Heap, released as the third single from the album Kingdom of Welcome Addiction. The CD single is available only through the IAMX webstore.
Music video
The music video, directed by Chris Corner, features Corner dressed as a woman, playing the video's lead female character, and Heap as a man, playing the lead male character. Corner stated in an interview that when he wrote the song he pictured the characters as siblings in a romantic, possibly incestuous, relationship. The video is featured on the CD single.
Track listing
Song versions
Album version – 4:06
Radio edit – 3:45
Omega Man remix – 4:45, remixed and additionally produced by Joe Wilson.
The Unfall Broken Waltz rework – 4:41, remixed by Corner under the alias Unfall; this version does not include Heap's vocals.
"Mein geheimer Freund" – 4:00, solo acoustic piano version in German, released on IAMX's official YouTube channel to announce his Germany tour.
Interpretation by Larry Driscoll – 3:27, cover version included on Dogmatic Infidel Comedown OK as a hidden song within a four-song track.
Chart positions
References
Imogen Heap songs
2009 singles
Songs written by Imogen Heap
IAMX songs
Songs written by Chris Corner
2009 songs
|
```kotlin
package expo.modules.localization
import android.content.Context
import android.content.res.Configuration
import expo.modules.core.interfaces.ApplicationLifecycleListener
import expo.modules.core.interfaces.Package
// Simple observer registry: callbacks registered here are invoked
// whenever the Android configuration (e.g. the locale) changes.
object Notifier {
    private val observers = mutableListOf<() -> Unit>()

    fun registerObserver(observer: () -> Unit) {
        observers.add(observer)
    }

    fun deregisterObserver(observer: () -> Unit) {
        observers.remove(observer)
    }

    fun onConfigurationChanged() {
        // Notify all observers
        observers.forEach { it() }
    }
}

// TODO: Move to new listener API once it's available
class LocalizationPackage : Package {
    override fun createApplicationLifecycleListeners(context: Context?): List<ApplicationLifecycleListener> {
        return listOf(object : ApplicationLifecycleListener {
            override fun onConfigurationChanged(newConfig: Configuration?) {
                super.onConfigurationChanged(newConfig)
                Notifier.onConfigurationChanged()
            }
        })
    }
}
```
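The `Notifier` above is a minimal observer registry: callbacks are registered once and all of them are invoked on every configuration change. A sketch of the same pattern in Python (the class and names here are illustrative, not part of the Expo API):

```python
from typing import Callable, List

class Notifier:
    """Minimal observer registry, mirroring the Kotlin object above."""

    def __init__(self) -> None:
        self._observers: List[Callable[[], None]] = []

    def register(self, observer: Callable[[], None]) -> None:
        self._observers.append(observer)

    def deregister(self, observer: Callable[[], None]) -> None:
        self._observers.remove(observer)

    def on_configuration_changed(self) -> None:
        # Fan the event out to every registered observer, in order.
        for observer in list(self._observers):
            observer()

events: List[str] = []

def on_locale_change() -> None:
    events.append("locale-changed")

notifier = Notifier()
notifier.register(on_locale_change)
notifier.on_configuration_changed()   # events == ["locale-changed"]
notifier.deregister(on_locale_change)
notifier.on_configuration_changed()   # no further notifications
```

Iterating over a copy of the list (`list(self._observers)`) keeps the fan-out safe even if a callback deregisters itself, the same hazard the Kotlin `forEach` over a `mutableListOf` would face.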
|
The Kievsky Yogan (Selkup: Ки́евскэл кыге́) is a river in Tomsk Oblast, Russia. The river is long and has a catchment area of .
The Kievsky Yogan flows across the Central Siberian Plateau. Its basin is located in the Alexandrovsky District. There are no permanent settlements along the course of the river.
Course
The Kievsky Yogan is a right tributary of the Ob river. It flows in a roughly SSW direction through a very swampy area. In its lower reaches there are numerous lakes. Finally it meets the Kiev Channel (проток Киевская), an arm of the Ob, from its mouth in the right bank of the Ob.
Tributaries
The main tributary of the Kievsky Yogan is the long Bolshaya (Большая) on the left. The river is fed by snow and rain.
See also
List of rivers of Russia
References
External links
Большие реки России (Great Rivers of Russia)
Rivers of Tomsk Oblast
Central Siberian Plateau
|
Stefan Dörflinger (born 23 December 1948 in Nagold, Germany) is a Swiss former Grand Prix motorcycle road racer.
Dörflinger won four consecutive FIM road racing world championships. In 1982 and 1983, he was the 50 cc world champion. In 1984, the FIM increased the displacement capacity to 80 cc and Dörflinger would become the first ever 80 cc world champion. He successfully defended his title in 1985. His lengthy Grand Prix career spanned 18 seasons.
References
External links
Wildeman-Zündapp
Swiss motorcycle racers
50cc World Championship riders
125cc World Championship riders
1948 births
Living people
80cc World Championship riders
|
Reutlingen railway station is a railway station in the Swiss canton of Zurich and city of Winterthur. It takes its name from that city's Reutlingen quarter, in which it is situated. The station is located on the Winterthur to Etzwilen line. It is an intermediate stop on Zurich S-Bahn service S11, which links Aarau and Seuzach, and S29, which links Winterthur and Stein am Rhein.
References
External links
Reutlingen
Reutlingen
|
```rust
use rocket::outcome::IntoOutcome;
use rocket::request::{self, FlashMessage, FromRequest, Request};
use rocket::response::{Redirect, Flash};
use rocket::http::{CookieJar, Status};
use rocket::form::Form;
use rocket_dyn_templates::{Template, context};

#[derive(FromForm)]
struct Login<'r> {
    username: &'r str,
    password: &'r str
}

#[derive(Debug)]
struct User(usize);

#[rocket::async_trait]
impl<'r> FromRequest<'r> for User {
    type Error = std::convert::Infallible;

    async fn from_request(request: &'r Request<'_>) -> request::Outcome<User, Self::Error> {
        request.cookies()
            .get_private("user_id")
            .and_then(|cookie| cookie.value().parse().ok())
            .map(User)
            .or_forward(Status::Unauthorized)
    }
}

#[macro_export]
macro_rules! session_uri {
    ($($t:tt)*) => (rocket::uri!("/session", $crate::session:: $($t)*))
}

pub use session_uri as uri;

#[get("/")]
fn index(user: User) -> Template {
    Template::render("session", context! {
        user_id: user.0,
    })
}

#[get("/", rank = 2)]
fn no_auth_index() -> Redirect {
    Redirect::to(uri!(login_page))
}

#[get("/login")]
fn login(_user: User) -> Redirect {
    Redirect::to(uri!(index))
}

#[get("/login", rank = 2)]
fn login_page(flash: Option<FlashMessage<'_>>) -> Template {
    Template::render("login", flash)
}

#[post("/login", data = "<login>")]
fn post_login(jar: &CookieJar<'_>, login: Form<Login<'_>>) -> Result<Redirect, Flash<Redirect>> {
    if login.username == "Sergio" && login.password == "password" {
        jar.add_private(("user_id", "1"));
        Ok(Redirect::to(uri!(index)))
    } else {
        Err(Flash::error(Redirect::to(uri!(login_page)), "Invalid username/password."))
    }
}

#[post("/logout")]
fn logout(jar: &CookieJar<'_>) -> Flash<Redirect> {
    jar.remove_private("user_id");
    Flash::success(Redirect::to(uri!(login_page)), "Successfully logged out.")
}

pub fn routes() -> Vec<rocket::Route> {
    routes![index, no_auth_index, login, login_page, post_login, logout]
}
```
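The `FromRequest` guard above succeeds only when a private `user_id` cookie is present and parses as an integer; otherwise the request is forwarded so the `rank = 2` routes can handle it. A framework-free sketch of that decision in Python (the function names and the plain-`dict` cookie jar are illustrative assumptions, not Rocket API):

```python
from typing import Optional

def user_from_cookies(cookies: dict) -> Optional[int]:
    """Return a user id when the user_id cookie parses, else None (forward)."""
    raw = cookies.get("user_id")
    if raw is None:
        return None  # no cookie: fall through to the unauthenticated route
    try:
        return int(raw)
    except ValueError:
        return None  # unparsable cookie: also fall through

def route(cookies: dict) -> str:
    """Dispatch like the two index routes: session page or login redirect."""
    user = user_from_cookies(cookies)
    if user is not None:
        return f"session page for user {user}"
    return "redirect to /session/login"
```

The two-route arrangement (guarded route first, `rank = 2` fallback second) is what lets a single URL serve both authenticated and anonymous visitors without branching inside one handler.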
|
Nazım Beratlı (born 1952) is a Turkish Cypriot medical doctor, journalist, author, researcher and historian.
Early life and education
Beratlı was born in Lefka, Cyprus, in 1952. He studied medicine at Istanbul University, specialising in gynecology and obstetrics.
Career
Beratlı is a published author and columnist, having written for a number of Turkish Cypriot newspapers and journals. His first book, Kıbrıs'ta ulusal sorun (Cyprus: The National Problem), published in 1991, was on the topic of the internal disputes inside the Republican Turkish Party (the primary opposition party in Northern Cyprus at the time, of which Beratlı was a prominent member) that resulted in his eventual estrangement from the party. He later published the four-volume Kibrisli Türklerin Tarihi (History of the Turkish Cypriots). Currently, Beratlı is a columnist for the daily Kibris Postasi and a lecturer at Girne American University.
Beratlı is a member of the Association of Mediterranean Historians.
References
1952 births
Living people
|
Noailles (Languedocien: Noalhas) is a commune in the Tarn department in southern France.
Geography
The village lies in the middle of the commune, on the right bank of the Vère, which flows southwestward through the commune.
See also
Communes of the Tarn department
References
Communes of Tarn (department)
|
Nauradehi Wildlife Sanctuary, covering about , is the largest wildlife sanctuary of Madhya Pradesh state in India. This wildlife sanctuary is a part of 5500 km2 of forested landscape. It is located in the centre of the state covering parts of Sagar, Damoh, Narsinghpur, and Raisen Districts. It is about 90 km from Jabalpur and about 56 km from Sagar.
It is a potential site for cheetah reintroduction in India. Cheetah prey density was found to be reasonable, and based on the current prey density the area could support about 25 cheetahs. A 750 km2 area was recommended, to be created by relocating 23 villages. After the relocation of the villages, the site could support over 50 cheetahs, and Nauradehi could harbour over 70 individuals.
The wildlife refuge is divided into six ranges:
Mohli Range
Singpur Range
Jhapan Range
Sarra Range
D'Gaon Range
Nauradehi Range
History
This forest area was made a sanctuary in 1975.
Geography
The protected area sits astride two major river basins of India, namely the Narmada, flowing west to the Arabian Sea, and the Ganges, flowing east to the Bay of Bengal. Three-fourths of the wildlife sanctuary falls in the basin of the Ganges tributary the Yamuna River, of which the Ken River is itself a tributary, and one fourth of the sanctuary falls in the Narmada basin. The north-flowing Kopra, Bamner, Vyarma and Bearma rivers, tributaries of the Ken River, are the major rivers of this protected area. Some smaller streams flow south to the Narmada river in the south of the sanctuary.
The forest is spread over the southern area of the Vindhya Range of hills in which the Bandhavgarh National Park and Panna National Park are also located.
Nauradehi Sanctuary is located at an elevation of to above MSL. Average annual rainfall is .
The seasons here are:
Winter - November to February, to
Summer - March to June, to and
Monsoon - July to October, to .
Flora
The flora consists of central Indian monsoon forests, which include tropical dry deciduous forest. Major trees found are teak, saja, dhawda, sal, tendu (Coromandel ebony), bhirra (East Indian satinwood) and mahua. In March the deciduous trees begin to shed their leaves ahead of the hot summer season.
The sanctuary exists as fragmented patches of forest of variable density. Its habitats, flora, fauna and avifauna need further research and study.
Fauna
The Indian wolf is the keystone species of Nauradehi Wildlife Sanctuary. Other carnivores here include the Bengal tiger, Indian leopard, striped hyena, dhole (wild dog), Bengal fox, mugger crocodile, golden jackal, and bears. Tigers and leopards are conspicuous by their absence, though infrequent evidence of them is found; recently a tigress was found dead of old age. Other fauna often seen are the smooth Indian otter, sloth bear and Indian grey mongoose.
Herbivores living here include the four-horned antelope (chousingha), nilgai (blue bull), chinkara (Indian gazelle), sambar deer, blackbuck antelope, barking deer, grey langur, rhesus macaque, chital (spotted deer) and wild boar.
Reptile species found in Nauradehi include the monitor lizard, mugger crocodile, turtles, tortoises and snakes.
Birds of Nauradehi
Due to the presence of perennial water sources, including several rivers and Cheola lake, there are a great number of birds in the protected area. Bird groups found there include eagles, vultures, storks, cranes, egrets, lapwings, kites, owls, kingfishers, quails and doves.
At least 150 bird species can be seen in Nauradehi. Some of the birds are the king vulture, Egyptian vulture (E), white-rumped vulture, long-billed vulture (CR), lesser adjutant stork (V), painted stork, open-billed stork, spotted owl, barred jungle owlet, black-winged kite, Indian pond heron, green sandpiper, Indian pied myna, common myna, wood sandpiper, red-wattled lapwing, yellow wagtail, purple sunbird, white-breasted kingfisher, stork-billed kingfisher, black drongo, Indian robin, long-tailed shrike, black ibis, rock pigeon, Indian peafowl, grey francolin, jungle babbler, golden oriole, spotted dove, Indian roller, magpie, paddyfield pipit, crested serpent eagle, jungle crow, Asian green bee-eater, honey buzzard, changeable hawk eagle, shikra, paradise flycatcher, verditer flycatcher, black-naped monarch, common woodshrike, plum-headed parakeet, rose-ringed parakeet and greater coucal. The spotted grey creeper, a rare bird, is also found here.
During winter season the sanctuary serves as the seasonal home for migratory birds, including the sarus crane.
Visitor information
The park is open from November to June. The best time to visit is winter, i.e. November to February, when it is not too hot and the trees are still green. The sanctuary closes during the monsoon, from July till October, to give the trees and animals time to recover.
Jabalpur and Bhopal, both of which have airports, are convenient bases from which to explore the sanctuary. The Jabalpur-Jaipur highway (NH 12) passes through the sanctuary about west of Jabalpur. Nearby railheads include Sagar, Damoh and Narsinghpur.
Forest Rest Houses and Forest Department guides are available for visitors to Nauradehi.
References
Project Cheetah (Brochure), September 2010, Ministry of Environment and Forests, Government of India. Accessed 01 Feb 2011.
Reintroducing the Cheetah in India
Assessing the potential for reintroducing the cheetah in India, 2010. A report on the feasibility of cheetah reintroduction in India, jointly prepared by the Wildlife Trust of India (WTI) and the Wildlife Institute of India (WII), and submitted to the Ministry of Environment and Forests, Government of India (Ranjitsinh, M. K. & Jhala, Y. V. (2010) Assessing the potential for reintroducing the cheetah in India. Wildlife Trust of India, Noida, & the Wildlife Institute of India, Dehradun, TR2010/001). Also available at the WII website. Accessed 01 Feb 2011. Also available at the Ministry of Environment and Forests (India) website. Accessed 20 Sept 2011.
External sources
Nauradehi Sanctuary MP Forest department
Nauradehi Wildlife Sanctuary (Penthouse.in)
Noradehi Birds Checklist, 120 species
Wildlife sanctuaries in Madhya Pradesh
Cheetah reintroduction in India
Tourist attractions in Sagar district
1975 establishments in Madhya Pradesh
Protected areas established in 1975
|
```go
//go:build !ee
/*
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    path_to_url

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package docs
import _ "embed"
//go:embed zz_generated.kubermaticConfiguration.ce.yaml
var ExampleKubermaticConfiguration string
//go:embed zz_generated.seed.ce.yaml
var ExampleSeedConfiguration string
```
|
Bruce Clarke OAM (1 December 1925 – 24 July 2008) was an Australian jazz guitarist, composer, and educator.
Biography
One of Clarke's early music teachers was the New Zealander Tui Hamilton, at the Melbourne Hawaiian Club, from the early 1940s. Clarke played guitar in professional jazz ensembles, and from the late 1940s to the mid 1950s he worked as a session musician for radio orchestras. Clarke accompanied musicians on their tours of Australia and played in dance halls and ballrooms.
After the advent of television in Australia in 1956, Clarke started a recording studio and production company named The Jingle Workshop. He performed in thousands of recordings for films, television programs, and commercials, playing guitar and/or synthesizer. He was president of the International Society of Contemporary Music. He accepted a commission to realize the first major Australian electronic work for the 1968 Adelaide Festival of Arts and conducted performances in Melbourne of works by 20th-century composers Karlheinz Stockhausen, Luciano Berio, and Anton Webern.
He went on tour in Europe as a member of Felix Werder's ensemble Australia Felix. He accompanied classical guitarist John Williams with the Melbourne Symphony Orchestra in the Concerto for Guitar and Orchestra written by André Previn. In 1977 he founded the Jazz Studies program at the Victorian College of the Arts. He ran his own music tuition school, Guitar Workshop, and wrote for the magazine Jamm.
During the late seventies he taught guitar using the Berklee method books and his pre-recorded cassette tapes. His students include Mick Harvey, Robert Goodge (of I'm Talking), Peter Farnan, Pierre Jaquinot, Laszlo Sirsom, Mark Cally, Anne McCue, Doug de Vries, Dominic Kiernan, Barry Morton, and Andrew Pendlebury (of The Sports).
He founded Cumquat Records to issue recordings of Australian jazz. He worked with Frank Sinatra, Mel Torme, Dizzy Gillespie, Stephane Grappelli, Stan Getz, and John Collins.
Further reading
1990 Interview with Bruce Clarke by Ron Payne
References
1925 births
2008 deaths
20th-century Australian musicians
20th-century guitarists
Australian jazz guitarists
Musicians from Melbourne
|
```python
# -*- coding: utf-8 -*-
#
# This file is part of OpenMediaVault.
#
# @license path_to_url GPL Version 3
# @author Volker Theile <volker.theile@openmediavault.org>
#
# OpenMediaVault is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# OpenMediaVault is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OpenMediaVault. If not, see <path_to_url>.
import openmediavault.mkrrdgraph


class Plugin(openmediavault.mkrrdgraph.IPlugin):
    def create_graph(self, config):
        # path_to_url#uid=33r0-0kwi++bu++hX++++rd++kX
        config.update(
            {
                'title_uptime': 'System uptime',
                'color_uptime_current': '#f17742',  # orange
                'color_uptime_max': '#ff1300',  # red
                'color_uptime_min': '#ffdb70',  # yellow
                'color_uptime_avg': '#76d6ff',  # blue
            }
        )
        args = []
        # yapf: disable
        # pylint: disable=line-too-long
        # autopep8: off
        args.append('{image_dir}/uptime-{period}.png'.format(**config))
        args.extend(config['defaults'])
        args.extend(['--start', config['start']])
        args.extend(['--title', '"{title_uptime}{title_by_period}"'.format(**config)])
        args.append('--slope-mode')
        args.extend(['--lower-limit', '0'])
        args.append('--rigid')
        args.extend(['--vertical-label', 'Days'])
        # Based on path_to_url
        args.append('DEF:uptime_sec_avg={data_dir}/uptime/uptime.rrd:value:AVERAGE'.format(**config))
        args.append('DEF:uptime_sec_max={data_dir}/uptime/uptime.rrd:value:MAX'.format(**config))
        args.append('CDEF:uptime_no_unkn=uptime_sec_max,UN,0,uptime_sec_max,IF')
        args.append(r'CDEF:uptime_peaks=uptime_no_unkn,PREV\(uptime_no_unkn\),LT,PREV\(uptime_no_unkn\),UNKN,IF')
        args.append('VDEF:minimum_uptime_secs=uptime_peaks,MINIMUM')
        args.append('CDEF:minimum_uptime_graph=uptime_sec_max,minimum_uptime_secs,EQ,uptime_sec_max,86400,/,0,IF')
        args.append('CDEF:minimum_uptime_days=uptime_sec_max,minimum_uptime_secs,EQ,uptime_sec_max,86400,/,FLOOR,0,IF')
        args.append('CDEF:minimum_uptime_hours=uptime_sec_max,minimum_uptime_secs,EQ,uptime_sec_max,86400,%,3600,/,FLOOR,0,IF')
        args.append('CDEF:minimum_uptime_mins=uptime_sec_max,minimum_uptime_secs,EQ,uptime_sec_max,86400,%,3600,%,60,/,FLOOR,0,IF')
        args.append('VDEF:min_uptime_graph=minimum_uptime_graph,MAXIMUM')
        args.append('VDEF:min_uptime_days=minimum_uptime_days,MAXIMUM')
        args.append('VDEF:min_uptime_hours=minimum_uptime_hours,MAXIMUM')
        args.append('VDEF:min_uptime_mins=minimum_uptime_mins,MAXIMUM')
        args.append('VDEF:maximum_uptime_secs=uptime_sec_max,MAXIMUM')
        args.append('CDEF:maximum_uptime_graph=uptime_sec_max,maximum_uptime_secs,EQ,uptime_sec_max,86400,/,0,IF')
        args.append('CDEF:maximum_uptime_days=uptime_sec_max,maximum_uptime_secs,EQ,uptime_sec_max,86400,/,FLOOR,0,IF')
        args.append('CDEF:maximum_uptime_hours=uptime_sec_max,maximum_uptime_secs,EQ,uptime_sec_max,86400,%,3600,/,FLOOR,0,IF')
        args.append('CDEF:maximum_uptime_mins=uptime_sec_max,maximum_uptime_secs,EQ,uptime_sec_max,86400,%,3600,%,60,/,FLOOR,0,IF')
        args.append('VDEF:max_uptime_graph=maximum_uptime_graph,MAXIMUM')
        args.append('VDEF:max_uptime_days=maximum_uptime_days,MAXIMUM')
        args.append('VDEF:max_uptime_hours=maximum_uptime_hours,MAXIMUM')
        args.append('VDEF:max_uptime_mins=maximum_uptime_mins,MAXIMUM')
        args.append('VDEF:average_uptime_secs=uptime_sec_max,AVERAGE')
        args.append('CDEF:average_uptime_graph=uptime_sec_max,POP,average_uptime_secs,86400,/')
        args.append('CDEF:average_uptime_days=uptime_sec_max,POP,average_uptime_secs,86400,/,FLOOR')
        args.append('CDEF:average_uptime_hours=uptime_sec_max,POP,average_uptime_secs,86400,%,3600,/,FLOOR')
        args.append('CDEF:average_uptime_mins=uptime_sec_max,POP,average_uptime_secs,86400,%,3600,%,60,/,FLOOR')
        args.append('VDEF:avg_uptime_days=average_uptime_days,LAST')
        args.append('VDEF:avg_uptime_hours=average_uptime_hours,LAST')
        args.append('VDEF:avg_uptime_mins=average_uptime_mins,LAST')
        args.append('CDEF:current_uptime_graph=uptime_sec_max,86400,/')
        args.append('CDEF:current_uptime_days=uptime_sec_max,86400,/,FLOOR')
        args.append('CDEF:current_uptime_hours=uptime_sec_max,86400,%,3600,/,FLOOR')
        args.append('CDEF:current_uptime_mins=uptime_sec_max,86400,%,3600,%,60,/,FLOOR')
        args.append('VDEF:curr_uptime_days=current_uptime_days,LAST')
        args.append('VDEF:curr_uptime_hours=current_uptime_hours,LAST')
        args.append('VDEF:curr_uptime_mins=current_uptime_mins,LAST')
        args.append('CDEF:time=uptime_sec_max,POP,TIME')
        args.append('VDEF:start=time,FIRST')
        args.append('VDEF:last=time,LAST')
        args.append('CDEF:time_window=uptime_sec_max,UN,0,uptime_sec_max,IF,POP,TIME')
        args.append(r'CDEF:time_window2=PREV\(time_window\)')
        args.append('VDEF:window_start=time_window,FIRST')
        args.append('VDEF:window_last=time_window,LAST')
        args.append('CDEF:delta=uptime_sec_max,POP,window_last,window_start,-')
        args.append('CDEF:system_on_un=uptime_sec_avg,UN,UNKN,1,IF')
        args.append('VDEF:total_uptime_secs=system_on_un,TOTAL')
        args.append('CDEF:total_uptime_days=uptime_sec_max,POP,total_uptime_secs,86400,/,FLOOR')
        args.append('CDEF:total_uptime_hours=uptime_sec_max,POP,total_uptime_secs,86400,%,3600,/,FLOOR')
        args.append('CDEF:total_uptime_mins=uptime_sec_max,POP,total_uptime_secs,86400,%,3600,%,60,/,FLOOR')
        args.append('VDEF:tot_uptime_days=total_uptime_days,LAST')
        args.append('VDEF:tot_uptime_hours=total_uptime_hours,LAST')
        args.append('VDEF:tot_uptime_mins=total_uptime_mins,LAST')
        args.append('CDEF:temp_perc_on=uptime_sec_max,POP,total_uptime_secs,delta,/,100,*')
        args.append('VDEF:new_perc_on=temp_perc_on,LAST')
        args.append('AREA:current_uptime_graph#66666640')
        args.append('LINE1:current_uptime_graph{color_uptime_current}:Current'.format(**config))
        args.append('GPRINT:curr_uptime_days:"%5.0lf days"')
        args.append('GPRINT:curr_uptime_hours:"%3.0lf hours"')
        args.append('GPRINT:curr_uptime_mins:"%3.0lf mins"')
        args.append(r'GPRINT:curr_uptime_mins:" %T %x\l":strftime')
        args.append('LINE1:max_uptime_graph{color_uptime_max}:Maximum:dashes'.format(**config))
        args.append('GPRINT:max_uptime_days:"%5.0lf days"')
        args.append('GPRINT:max_uptime_hours:"%3.0lf hours"')
        args.append('GPRINT:max_uptime_mins:"%3.0lf mins"')
        args.append(r'GPRINT:max_uptime_mins:" %T %x\l":strftime')
        args.append('HRULE:min_uptime_graph{color_uptime_min}:Minimum:dashes'.format(**config))
        args.append('GPRINT:min_uptime_days:"%5.0lf days"')
        args.append('GPRINT:min_uptime_hours:"%3.0lf hours"')
        args.append('GPRINT:min_uptime_mins:"%3.0lf mins"')
        args.append(r'GPRINT:min_uptime_mins:" %T %x\l":strftime')
        args.append('LINE1:average_uptime_graph{color_uptime_avg}:Average:dashes'.format(**config))
        args.append('GPRINT:avg_uptime_days:"%5.0lf days"')
        args.append('GPRINT:avg_uptime_hours:"%3.0lf hours"')
        args.append('GPRINT:avg_uptime_mins:"%3.0lf mins"')
        args.append(r'GPRINT:avg_uptime_mins:" %T %x\l":strftime')
        args.append('COMMENT:" Total "')
        args.append('GPRINT:tot_uptime_days:"%5.0lf days"')
        args.append('GPRINT:tot_uptime_hours:"%3.0lf hours"')
        args.append('GPRINT:tot_uptime_mins:"%3.0lf mins"')
        args.append(r'GPRINT:new_perc_on:" %3.2lf%% up\l"')
        args.append('COMMENT:"{last_update}"'.format(**config))
        # autopep8: on
        # yapf: enable
        openmediavault.mkrrdgraph.call_rrdtool_graph(args)
        return 0
```
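The chained `CDEF` expressions above all rely on the same RPN idiom for splitting an uptime in seconds into whole days, hours and minutes (`86400,/,FLOOR`, then `86400,%,3600,/,FLOOR`, then `86400,%,3600,%,60,/,FLOOR`). The equivalent arithmetic in plain Python, useful as a sanity check when editing the expressions (`split_uptime` is an illustrative helper, not part of the plugin):

```python
def split_uptime(uptime_secs: float) -> tuple:
    """Mirror the rrdtool RPN: seconds -> (whole days, hours, minutes)."""
    days = int(uptime_secs // 86400)                  # 86400,/,FLOOR
    hours = int(uptime_secs % 86400 // 3600)          # 86400,%,3600,/,FLOOR
    minutes = int(uptime_secs % 86400 % 3600 // 60)   # 86400,%,3600,%,60,/,FLOOR
    return days, hours, minutes

print(split_uptime(200000))  # → (2, 7, 33): 2 days, 7 hours, 33 minutes
```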
|
```perl
package Tie::File;
require 5.005;
use strict;
use warnings;
use Carp ':DEFAULT', 'confess';
use POSIX 'SEEK_SET';
use Fcntl 'O_CREAT', 'O_RDWR', 'LOCK_EX', 'LOCK_SH', 'O_WRONLY', 'O_RDONLY';
sub O_ACCMODE () { O_RDONLY | O_RDWR | O_WRONLY }
our $VERSION = "1.07";
my $DEFAULT_MEMORY_SIZE = 1<<21; # 2 megabytes
my $DEFAULT_AUTODEFER_THRESHHOLD = 3; # 3 records
my $DEFAULT_AUTODEFER_FILELEN_THRESHHOLD = 65536; # 16 disk blocksful
my %good_opt = map {$_ => 1, "-$_" => 1}
qw(memory dw_size mode recsep discipline
autodefer autochomp autodefer_threshhold concurrent);
our $DIAGNOSTIC = 0;
our @OFF; # used as a temporary alias in some subroutines.
our @H; # used as a temporary alias in _annotate_ad_history
sub TIEARRAY {
if (@_ % 2 != 0) {
croak "usage: tie \@array, $_[0], filename, [option => value]...";
}
my ($pack, $file, %opts) = @_;
# transform '-foo' keys into 'foo' keys
for my $key (keys %opts) {
unless ($good_opt{$key}) {
croak("$pack: Unrecognized option '$key'\n");
}
my $okey = $key;
if ($key =~ s/^-+//) {
$opts{$key} = delete $opts{$okey};
}
}
if ($opts{concurrent}) {
croak("$pack: concurrent access not supported yet\n");
}
unless (defined $opts{memory}) {
# default is the larger of the default cache size and the
# deferred-write buffer size (if specified)
$opts{memory} = $DEFAULT_MEMORY_SIZE;
$opts{memory} = $opts{dw_size}
if defined $opts{dw_size} && $opts{dw_size} > $DEFAULT_MEMORY_SIZE;
# Dora Winifred Read
}
$opts{dw_size} = $opts{memory} unless defined $opts{dw_size};
if ($opts{dw_size} > $opts{memory}) {
croak("$pack: dw_size may not be larger than total memory allocation\n");
}
# are we in deferred-write mode?
$opts{defer} = 0 unless defined $opts{defer};
$opts{deferred} = {}; # no records are presently deferred
$opts{deferred_s} = 0; # count of total bytes in ->{deferred}
$opts{deferred_max} = -1; # empty
# What's a good way to arrange that this class can be overridden?
$opts{cache} = Tie::File::Cache->new($opts{memory});
# autodeferment is enabled by default
$opts{autodefer} = 1 unless defined $opts{autodefer};
$opts{autodeferring} = 0; # but is not initially active
$opts{ad_history} = [];
$opts{autodefer_threshhold} = $DEFAULT_AUTODEFER_THRESHHOLD
unless defined $opts{autodefer_threshhold};
$opts{autodefer_filelen_threshhold} = $DEFAULT_AUTODEFER_FILELEN_THRESHHOLD
unless defined $opts{autodefer_filelen_threshhold};
$opts{offsets} = [0];
$opts{filename} = $file;
unless (defined $opts{recsep}) {
$opts{recsep} = _default_recsep();
}
$opts{recseplen} = length($opts{recsep});
if ($opts{recseplen} == 0) {
croak "Empty record separator not supported by $pack";
}
$opts{autochomp} = 1 unless defined $opts{autochomp};
$opts{mode} = O_CREAT|O_RDWR unless defined $opts{mode};
$opts{rdonly} = (($opts{mode} & O_ACCMODE) == O_RDONLY);
$opts{sawlastrec} = undef;
my $fh;
if (UNIVERSAL::isa($file, 'GLOB')) {
# We use 1 here on the theory that some systems
# may not indicate failure if we use 0.
# MSWin32 does not indicate failure with 0, but I don't know if
# it will indicate failure with 1 or not.
unless (seek $file, 1, SEEK_SET) {
croak "$pack: your filehandle does not appear to be seekable";
}
seek $file, 0, SEEK_SET; # put it back
$fh = $file; # setting binmode is the user's problem
} elsif (ref $file) {
croak "usage: tie \@array, $pack, filename, [option => value]...";
} else {
# $fh = \do { local *FH }; # XXX this is buggy
if ($] < 5.006) {
# perl 5.005 and earlier don't autovivify filehandles
require Symbol;
$fh = Symbol::gensym();
}
sysopen $fh, $file, $opts{mode}, 0666 or return;
binmode $fh;
++$opts{ourfh};
}
{ my $ofh = select $fh; $| = 1; select $ofh } # autoflush on write
if (defined $opts{discipline} && $] >= 5.006) {
# This avoids a compile-time warning under 5.005
eval 'binmode($fh, $opts{discipline})';
croak $@ if $@ =~ /unknown discipline/i;
die if $@;
}
$opts{fh} = $fh;
bless \%opts => $pack;
}
sub FETCH {
my ($self, $n) = @_;
my $rec;
# check the defer buffer
$rec = $self->{deferred}{$n} if exists $self->{deferred}{$n};
$rec = $self->_fetch($n) unless defined $rec;
# inlined _chomp1
substr($rec, - $self->{recseplen}) = ""
if defined $rec && $self->{autochomp};
$rec;
}
# Chomp many records in-place; return nothing useful
sub _chomp {
my $self = shift;
return unless $self->{autochomp};
if ($self->{autochomp}) {
for (@_) {
next unless defined;
substr($_, - $self->{recseplen}) = "";
}
}
}
# Chomp one record in-place; return modified record
sub _chomp1 {
my ($self, $rec) = @_;
return $rec unless $self->{autochomp};
return unless defined $rec;
substr($rec, - $self->{recseplen}) = "";
$rec;
}
sub _fetch {
my ($self, $n) = @_;
# check the record cache
{ my $cached = $self->{cache}->lookup($n);
return $cached if defined $cached;
}
if ($#{$self->{offsets}} < $n) {
return if $self->{eof}; # request for record beyond end of file
my $o = $self->_fill_offsets_to($n);
# If it's still undefined, there is no such record, so return 'undef'
return unless defined $o;
}
my $fh = $self->{fh};
$self->_seek($n); # we can do this now that offsets is populated
my $rec = $self->_read_record;
# If we happen to have just read the first record, check to see if
# the length of the record matches what 'tell' says. If not, Tie::File
# won't work, and should drop dead.
#
# if ($n == 0 && defined($rec) && tell($self->{fh}) != length($rec)) {
# if (defined $self->{discipline}) {
# croak "I/O discipline $self->{discipline} not supported";
# } else {
# croak "File encoding not supported";
# }
# }
$self->{cache}->insert($n, $rec) if defined $rec && not $self->{flushing};
$rec;
}
sub STORE {
my ($self, $n, $rec) = @_;
die "STORE called from _check_integrity!" if $DIAGNOSTIC;
$self->_fixrecs($rec);
if ($self->{autodefer}) {
$self->_annotate_ad_history($n);
}
return $self->_store_deferred($n, $rec) if $self->_is_deferring;
# We need this to decide whether the new record will fit
# It incidentally populates the offsets table
# Note we have to do this before we alter the cache
# 20020324 Wait, but this DOES alter the cache. TODO BUG?
my $oldrec = $self->_fetch($n);
if (not defined $oldrec) {
# We're storing a record beyond the end of the file
$self->_extend_file_to($n+1);
$oldrec = $self->{recsep};
}
# return if $oldrec eq $rec; # don't bother
my $len_diff = length($rec) - length($oldrec);
# length($oldrec) here is not consistent with text mode TODO XXX BUG
$self->_mtwrite($rec, $self->{offsets}[$n], length($oldrec));
$self->_oadjust([$n, 1, $rec]);
$self->{cache}->update($n, $rec);
}
sub _store_deferred {
my ($self, $n, $rec) = @_;
$self->{cache}->remove($n);
my $old_deferred = $self->{deferred}{$n};
if (defined $self->{deferred_max} && $n > $self->{deferred_max}) {
$self->{deferred_max} = $n;
}
$self->{deferred}{$n} = $rec;
my $len_diff = length($rec);
$len_diff -= length($old_deferred) if defined $old_deferred;
$self->{deferred_s} += $len_diff;
$self->{cache}->adj_limit(-$len_diff);
if ($self->{deferred_s} > $self->{dw_size}) {
$self->_flush;
} elsif ($self->_cache_too_full) {
$self->_cache_flush;
}
}
# Remove a single record from the deferred-write buffer without writing it
# The record need not be present
sub _delete_deferred {
my ($self, $n) = @_;
my $rec = delete $self->{deferred}{$n};
return unless defined $rec;
if (defined $self->{deferred_max}
&& $n == $self->{deferred_max}) {
undef $self->{deferred_max};
}
$self->{deferred_s} -= length $rec;
$self->{cache}->adj_limit(length $rec);
}
sub FETCHSIZE {
my $self = shift;
my $n = $self->{eof} ? $#{$self->{offsets}} : $self->_fill_offsets;
my $top_deferred = $self->_defer_max;
$n = $top_deferred+1 if defined $top_deferred && $n < $top_deferred+1;
$n;
}
sub STORESIZE {
my ($self, $len) = @_;
if ($self->{autodefer}) {
$self->_annotate_ad_history('STORESIZE');
}
my $olen = $self->FETCHSIZE;
return if $len == $olen; # Woo-hoo!
# file gets longer
if ($len > $olen) {
if ($self->_is_deferring) {
for ($olen .. $len-1) {
$self->_store_deferred($_, $self->{recsep});
}
} else {
$self->_extend_file_to($len);
}
return;
}
# file gets shorter
if ($self->_is_deferring) {
# TODO maybe replace this with map-plus-assignment?
for (grep $_ >= $len, keys %{$self->{deferred}}) {
$self->_delete_deferred($_);
}
$self->{deferred_max} = $len-1;
}
$self->_seek($len);
$self->_chop_file;
$#{$self->{offsets}} = $len;
# $self->{offsets}[0] = 0; # in case we just chopped this
$self->{cache}->remove(grep $_ >= $len, $self->{cache}->ckeys);
}
### OPTIMIZE ME
### It should not be necessary to do FETCHSIZE
### Just seek to the end of the file.
sub PUSH {
my $self = shift;
$self->SPLICE($self->FETCHSIZE, scalar(@_), @_);
# No need to return:
# $self->FETCHSIZE; # because av.c takes care of this for me
}
sub POP {
my $self = shift;
my $size = $self->FETCHSIZE;
return if $size == 0;
# print STDERR "# POPPITY POP POP POP\n";
scalar $self->SPLICE($size-1, 1);
}
sub SHIFT {
my $self = shift;
scalar $self->SPLICE(0, 1);
}
sub UNSHIFT {
my $self = shift;
$self->SPLICE(0, 0, @_);
# $self->FETCHSIZE; # av.c takes care of this for me
}
sub CLEAR {
my $self = shift;
if ($self->{autodefer}) {
$self->_annotate_ad_history('CLEAR');
}
$self->_seekb(0);
$self->_chop_file;
$self->{cache}->set_limit($self->{memory});
$self->{cache}->empty;
@{$self->{offsets}} = (0);
%{$self->{deferred}}= ();
$self->{deferred_s} = 0;
$self->{deferred_max} = -1;
}
sub EXTEND {
my ($self, $n) = @_;
# No need to pre-extend anything in this case
return if $self->_is_deferring;
$self->_fill_offsets_to($n);
$self->_extend_file_to($n);
}
sub DELETE {
my ($self, $n) = @_;
if ($self->{autodefer}) {
$self->_annotate_ad_history('DELETE');
}
my $lastrec = $self->FETCHSIZE-1;
my $rec = $self->FETCH($n);
$self->_delete_deferred($n) if $self->_is_deferring;
if ($n == $lastrec) {
$self->_seek($n);
$self->_chop_file;
$#{$self->{offsets}}--;
$self->{cache}->remove($n);
# perhaps in this case I should also remove trailing null records?
# 20020316
# Note that delete @a[-3..-1] deletes the records in the wrong order,
# so we only chop the very last one out of the file. We could repair this
# by tracking deleted records inside the object.
} elsif ($n < $lastrec) {
$self->STORE($n, "");
}
$rec;
}
sub EXISTS {
my ($self, $n) = @_;
return 1 if exists $self->{deferred}{$n};
$n < $self->FETCHSIZE;
}
sub SPLICE {
my $self = shift;
if ($self->{autodefer}) {
$self->_annotate_ad_history('SPLICE');
}
$self->_flush if $self->_is_deferring; # move this up?
if (wantarray) {
$self->_chomp(my @a = $self->_splice(@_));
@a;
} else {
$self->_chomp1(scalar $self->_splice(@_));
}
}
sub DESTROY {
my $self = shift;
$self->flush if $self->_is_deferring;
$self->{cache}->delink if defined $self->{cache}; # break circular link
if ($self->{fh} and $self->{ourfh}) {
delete $self->{ourfh};
close delete $self->{fh};
}
}
sub _splice {
my ($self, $pos, $nrecs, @data) = @_;
my @result;
$pos = 0 unless defined $pos;
# Deal with negative and other out-of-range positions
# Also set default for $nrecs
{
my $oldsize = $self->FETCHSIZE;
$nrecs = $oldsize unless defined $nrecs;
my $oldpos = $pos;
if ($pos < 0) {
$pos += $oldsize;
if ($pos < 0) {
croak "Modification of non-creatable array value attempted, " .
"subscript $oldpos";
}
}
if ($pos > $oldsize) {
return unless @data;
$pos = $oldsize; # This is what perl does for normal arrays
}
# The manual is very unclear here
if ($nrecs < 0) {
$nrecs = $oldsize - $pos + $nrecs;
$nrecs = 0 if $nrecs < 0;
}
# nrecs is too big---it really means "until the end"
# 20030507
if ($nrecs + $pos > $oldsize) {
$nrecs = $oldsize - $pos;
}
}
$self->_fixrecs(@data);
my $data = join '', @data;
my $datalen = length $data;
my $oldlen = 0;
# compute length of data being removed
for ($pos .. $pos+$nrecs-1) {
last unless defined $self->_fill_offsets_to($_);
my $rec = $self->_fetch($_);
last unless defined $rec;
push @result, $rec;
# Why don't we just use length($rec) here?
# Because that record might have come from the cache. _splice
# might have been called to flush out the deferred-write records,
# and in this case length($rec) is the length of the record to be
# *written*, not the length of the actual record in the file. But
# the offsets are still true. 20020322
$oldlen += $self->{offsets}[$_+1] - $self->{offsets}[$_]
if defined $self->{offsets}[$_+1];
}
$self->_fill_offsets_to($pos+$nrecs);
# Modify the file
$self->_mtwrite($data, $self->{offsets}[$pos], $oldlen);
# Adjust the offsets table
$self->_oadjust([$pos, $nrecs, @data]);
{ # Take this read cache stuff out into a separate function
# You made a half-attempt to put it into _oadjust.
# Finish something like that up eventually.
# STORE also needs to do something similarish
# update the read cache, part 1
# modified records
for ($pos .. $pos+$nrecs-1) {
my $new = $data[$_-$pos];
if (defined $new) {
$self->{cache}->update($_, $new);
} else {
$self->{cache}->remove($_);
}
}
# update the read cache, part 2
# moved records - records past the site of the change
# need to be renumbered
# Maybe merge this with the previous block?
{
my @oldkeys = grep $_ >= $pos + $nrecs, $self->{cache}->ckeys;
my @newkeys = map $_-$nrecs+@data, @oldkeys;
$self->{cache}->rekey(\@oldkeys, \@newkeys);
}
# Now there might be too much data in the cache, if we spliced out
# some short records and spliced in some long ones. If so, flush
# the cache.
$self->_cache_flush;
}
# Yes, the return value of 'splice' *is* actually this complicated
wantarray ? @result : @result ? $result[-1] : undef;
}
# write data into the file
# $data is the data to be written.
# it should be written at position $pos, and should overwrite
# exactly $len of the following bytes.
# Note that if length($data) > $len, the subsequent bytes will have to
# be moved up, and if length($data) < $len, they will have to
# be moved down
sub _twrite {
my ($self, $data, $pos, $len) = @_;
unless (defined $pos) {
die "\$pos was undefined in _twrite";
}
my $len_diff = length($data) - $len;
if ($len_diff == 0) { # Woo-hoo!
my $fh = $self->{fh};
$self->_seekb($pos);
$self->_write_record($data);
return; # well, that was easy.
}
# the two records are of different lengths
# our strategy here: rewrite the tail of the file,
# reading ahead one buffer at a time
# $bufsize is required to be at least as large as the data we're overwriting
my $bufsize = _bufsize($len_diff);
my ($writepos, $readpos) = ($pos, $pos+$len);
my $next_block;
my $more_data;
# Seems like there ought to be a way to avoid the repeated code
# and the special case here. The read(1) is also a little weird.
# Think about this.
do {
$self->_seekb($readpos);
my $br = read $self->{fh}, $next_block, $bufsize;
$more_data = read $self->{fh}, my($dummy), 1;
$self->_seekb($writepos);
$self->_write_record($data);
$readpos += $br;
$writepos += length $data;
$data = $next_block;
} while $more_data;
$self->_seekb($writepos);
$self->_write_record($next_block);
# There might be leftover data at the end of the file
$self->_chop_file if $len_diff < 0;
}
# _iwrite(D, S, E)
# Insert text D at position S.
# Let C = E-S-|D|. If C < 0, die.
# Data in [S,S+C) is copied to [S+|D|,S+|D|+C) = [S+|D|,E).
# Data in [S+C = E-|D|, E) is returned. Data in [E, oo) is untouched.
#
# In a later version, don't read the entire intervening area into
# memory at once; do the copying block by block.
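# Worked example (a sketch of the arithmetic above, with illustrative
# numbers): suppose D = "ab" (so |D| = 2), S = 10, and E = 15. Then
# C = 15 - 10 - 2 = 3; the bytes in [10,13) are copied to [12,15),
# "ab" is written at [10,12), and the 2 bytes formerly in [13,15)
# are returned to the caller.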
sub _iwrite {
my $self = shift;
my ($D, $s, $e) = @_;
my $d = length $D;
my $c = $e-$s-$d;
local *FH = $self->{fh};
confess "Not enough space to insert $d bytes between $s and $e"
if $c < 0;
confess "[$s,$e) is an invalid insertion range" if $e < $s;
$self->_seekb($s);
read FH, my $buf, $e-$s;
$D .= substr($buf, 0, $c, "");
$self->_seekb($s);
$self->_write_record($D);
return $buf;
}
# Like _twrite, but the data-pos-len triple may be repeated; you may
# write several chunks. All the writing will be done in
# one pass. Chunks SHALL be in ascending order and SHALL NOT overlap.
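# For example (an illustrative call, not taken from the callers below):
# to replace 3 bytes at offset 10 with "abcde", and 5 bytes at offset 50
# with "xy", in a single pass:
#
#   $self->_mtwrite("abcde", 10, 3,    # first chunk
#                   "xy",    50, 5);   # later, non-overlapping chunk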
sub _mtwrite {
my $self = shift;
my $unwritten = "";
my $delta = 0;
@_ % 3 == 0
or die "Arguments to _mtwrite did not come in groups of three";
while (@_) {
my ($data, $pos, $len) = splice @_, 0, 3;
my $end = $pos + $len; # The OLD end of the segment to be replaced
$data = $unwritten . $data;
$delta -= length($unwritten);
$unwritten = "";
$pos += $delta; # This is where the data goes now
my $dlen = length $data;
$self->_seekb($pos);
if ($len >= $dlen) { # the data will fit
$self->_write_record($data);
$delta += ($dlen - $len); # everything following moves down by this much
$data = ""; # All the data in the buffer has been written
} else { # won't fit
my $writable = substr($data, 0, $len - $delta, "");
$self->_write_record($writable);
$delta += ($dlen - $len); # everything following moves down by this much
}
# At this point we've written some but maybe not all of the data.
# There might be a gap to close up, or $data might still contain a
# bunch of unwritten data that didn't fit.
my $ndlen = length $data;
if ($delta == 0) {
$self->_write_record($data);
} elsif ($delta < 0) {
# upcopy (close up gap)
if (@_) {
$self->_upcopy($end, $end + $delta, $_[1] - $end);
} else {
$self->_upcopy($end, $end + $delta);
}
} else {
# downcopy (insert data that didn't fit; replace this data in memory
# with _later_ data that doesn't fit)
if (@_) {
$unwritten = $self->_downcopy($data, $end, $_[1] - $end);
} else {
# Make the file longer to accommodate the last segment that doesn't fit
$unwritten = $self->_downcopy($data, $end);
}
}
}
}
# Copy block of data of length $len from position $spos to position $dpos
# $dpos must be <= $spos
#
# If $len is undefined, go all the way to the end of the file
# and then truncate it ($spos - $dpos bytes will be removed)
sub _upcopy {
my $blocksize = 8192;
my ($self, $spos, $dpos, $len) = @_;
if ($dpos > $spos) {
die "source ($spos) was upstream of destination ($dpos) in _upcopy";
} elsif ($dpos == $spos) {
return;
}
while (! defined ($len) || $len > 0) {
my $readsize = ! defined($len) ? $blocksize
: $len > $blocksize ? $blocksize
: $len;
my $fh = $self->{fh};
$self->_seekb($spos);
my $bytes_read = read $fh, my($data), $readsize;
$self->_seekb($dpos);
if ($data eq "") {
$self->_chop_file;
last;
}
$self->_write_record($data);
$spos += $bytes_read;
$dpos += $bytes_read;
$len -= $bytes_read if defined $len;
}
}
# Write $data into a block of length $len at position $pos,
# moving everything in the block forwards to make room.
# Instead of writing the last length($data) bytes from the block
# (because there isn't room for them any longer) return them.
#
# Undefined $len means 'until the end of the file'
sub _downcopy {
my $blocksize = 8192;
my ($self, $data, $pos, $len) = @_;
my $fh = $self->{fh};
while (! defined $len || $len > 0) {
my $readsize = ! defined($len) ? $blocksize
: $len > $blocksize? $blocksize : $len;
$self->_seekb($pos);
read $fh, my($old), $readsize;
my $last_read_was_short = length($old) < $readsize;
$data .= $old;
my $writable;
if ($last_read_was_short) {
# If last read was short, then $data now contains the entire rest
# of the file, so there's no need to write only one block of it
$writable = $data;
$data = "";
} else {
$writable = substr($data, 0, $readsize, "");
}
last if $writable eq "";
$self->_seekb($pos);
$self->_write_record($writable);
last if $last_read_was_short && $data eq "";
$len -= $readsize if defined $len;
$pos += $readsize;
}
return $data;
}
# Adjust the object data structures following an '_mtwrite'
# Arguments are
# [$pos, $nrecs, @records] items
# indicating that $nrecs records were removed at record position $pos
# and replaced with the records in @records.
# The arguments guarantee that $pos is strictly increasing.
# No return value
sub _oadjust {
my $self = shift;
my $delta = 0;
my $delta_recs = 0;
my $prev_end = -1;
for (@_) {
my ($pos, $nrecs, @data) = @$_;
$pos += $delta_recs;
# Adjust the offsets of the records after the previous batch up
# to the first new one of this batch
for my $i ($prev_end+2 .. $pos - 1) {
$self->{offsets}[$i] += $delta;
}
$prev_end = $pos + @data - 1; # last record moved on this pass
# Remove the offsets for the removed records;
# replace with the offsets for the inserted records
my @newoff = ($self->{offsets}[$pos] + $delta);
for my $i (0 .. $#data) {
my $newlen = length $data[$i];
push @newoff, $newoff[$i] + $newlen;
$delta += $newlen;
}
for my $i ($pos .. $pos+$nrecs-1) {
last if $i+1 > $#{$self->{offsets}};
my $oldlen = $self->{offsets}[$i+1] - $self->{offsets}[$i];
$delta -= $oldlen;
}
# replace old offsets with new
splice @{$self->{offsets}}, $pos, $nrecs+1, @newoff;
# What if we just spliced out the end of the offsets table?
# shouldn't we clear $self->{eof}? Test for this XXX BUG TODO
$delta_recs += @data - $nrecs; # net change in total number of records
}
# The trailing records at the very end of the file
if ($delta) {
for my $i ($prev_end+2 .. $#{$self->{offsets}}) {
$self->{offsets}[$i] += $delta;
}
}
# If we scrubbed out all known offsets, regenerate the trivial table
# that knows that the file does indeed start at 0.
$self->{offsets}[0] = 0 unless @{$self->{offsets}};
# If the file got longer, the offsets table is no longer complete
# $self->{eof} = 0 if $delta_recs > 0;
# Now there might be too much data in the cache, if we spliced out
# some short records and spliced in some long ones. If so, flush
# the cache.
$self->_cache_flush;
}
# If a record does not already end with the appropriate terminator
# string, append one.
sub _fixrecs {
my $self = shift;
for (@_) {
$_ = "" unless defined $_;
$_ .= $self->{recsep}
unless substr($_, - $self->{recseplen}) eq $self->{recsep};
}
}
################################################################
#
# Basic read, write, and seek
#
# seek to the beginning of record #$n
# Assumes that the offsets table is already correctly populated
#
# Note that $n=-1 has a special meaning here: It means the start of
# the last known record; this may or may not be the very last record
# in the file, depending on whether the offsets table is fully populated.
#
sub _seek {
my ($self, $n) = @_;
my $o = $self->{offsets}[$n];
defined($o)
or confess("logic error: undefined offset for record $n");
seek $self->{fh}, $o, SEEK_SET
or confess "Couldn't seek filehandle: $!"; # "Should never happen."
}
# seek to byte $b in the file
sub _seekb {
my ($self, $b) = @_;
seek $self->{fh}, $b, SEEK_SET
or die "Couldn't seek filehandle: $!"; # "Should never happen."
}
# populate the offsets table up to the beginning of record $n
# return the offset of record $n
sub _fill_offsets_to {
my ($self, $n) = @_;
return $self->{offsets}[$n] if $self->{eof};
my $fh = $self->{fh};
local *OFF = $self->{offsets};
my $rec;
until ($#OFF >= $n) {
$self->_seek(-1); # tricky -- see comment at _seek
$rec = $self->_read_record;
if (defined $rec) {
push @OFF, int(tell $fh); # Tels says that int() saves memory here
} else {
$self->{eof} = 1;
return; # It turns out there is no such record
}
}
# we have now read all the records up to record n-1,
# so we can return the offset of record n
$OFF[$n];
}
sub _fill_offsets {
my ($self) = @_;
my $fh = $self->{fh};
local *OFF = $self->{offsets};
$self->_seek(-1); # tricky -- see comment at _seek
# Tels says that inlining read_record() would make this loop
# five times faster. 20030508
while ( defined $self->_read_record()) {
# int() saves us memory here
push @OFF, int(tell $fh);
}
$self->{eof} = 1;
$#OFF;
}
# assumes that $rec is already suitably terminated
sub _write_record {
my ($self, $rec) = @_;
my $fh = $self->{fh};
local $\ = "";
print $fh $rec
or die "Couldn't write record: $!"; # "Should never happen."
# $self->{_written} += length($rec);
}
sub _read_record {
my $self = shift;
my $rec;
{ local $/ = $self->{recsep};
my $fh = $self->{fh};
$rec = <$fh>;
}
return unless defined $rec;
if (substr($rec, -$self->{recseplen}) ne $self->{recsep}) {
# improperly terminated final record --- quietly fix it.
# my $ac = substr($rec, -$self->{recseplen});
# $ac =~ s/\n/\\n/g;
$self->{sawlastrec} = 1;
unless ($self->{rdonly}) {
local $\ = "";
my $fh = $self->{fh};
print $fh $self->{recsep};
}
$rec .= $self->{recsep};
}
# $self->{_read} += length($rec) if defined $rec;
$rec;
}
sub _rw_stats {
my $self = shift;
@{$self}{'_read', '_written'};
}
################################################################
#
# Read cache management
sub _cache_flush {
my ($self) = @_;
$self->{cache}->reduce_size_to($self->{memory} - $self->{deferred_s});
}
sub _cache_too_full {
my $self = shift;
$self->{cache}->bytes + $self->{deferred_s} >= $self->{memory};
}
################################################################
#
# File custodial services
#
# We have read to the end of the file and have the offsets table
# entirely populated. Now we need to write a new record beyond
# the end of the file. We prepare for this by writing
# empty records into the file up to the position we want
#
# assumes that the offsets table already contains the offset of record $n,
# if it exists, and extends to the end of the file if not.
sub _extend_file_to {
my ($self, $n) = @_;
$self->_seek(-1); # position after the end of the last record
my $pos = $self->{offsets}[-1];
# the offsets table has one entry more than the total number of records
my $extras = $n - $#{$self->{offsets}};
# TODO: just use $self->{recsep} x $extras here?
while ($extras-- > 0) {
$self->_write_record($self->{recsep});
push @{$self->{offsets}}, int(tell $self->{fh});
}
}
# Truncate the file at the current position
sub _chop_file {
my $self = shift;
truncate $self->{fh}, tell($self->{fh});
}
# compute the size of a buffer suitable for moving
# all the data in a file forward $n bytes
# ($n may be negative)
# The result should be at least $n.
sub _bufsize {
my $n = shift;
return 8192 if $n <= 0;
my $b = $n & ~8191;
$b += 8192 if $n & 8191;
$b;
}
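# Worked examples: _bufsize(1) and _bufsize(8192) both return 8192;
# _bufsize(8193) returns 16384, the smallest positive multiple of 8192
# that is at least as large as the argument.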
################################################################
#
# Miscellaneous public methods
#
# Lock the file
sub flock {
my ($self, $op) = @_;
unless (@_ <= 3) {
my $pack = ref $self;
croak "Usage: $pack\->flock([OPERATION])";
}
my $fh = $self->{fh};
$op = LOCK_EX unless defined $op;
my $locked = flock $fh, $op;
if ($locked && ($op & (LOCK_EX | LOCK_SH))) {
# If you're locking the file, then presumably it's because
# there might have been a write access by another process.
# In that case, the read cache contents and the offsets table
# might be invalid, so discard them. 20030508
$self->{offsets} = [0];
$self->{cache}->empty;
}
$locked;
}
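# Usage sketch (the array and filename are illustrative; LOCK_UN is
# assumed to be imported from Fcntl by the caller):
#
#   my $o = tie my @array, 'Tie::File', $file or die "tie failed: $!";
#   $o->flock;                     # LOCK_EX by default
#   $array[0] = "first record";
#   $o->flock(LOCK_UN);            # release the lock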
# Get/set autochomp option
sub autochomp {
my $self = shift;
if (@_) {
my $old = $self->{autochomp};
$self->{autochomp} = shift;
$old;
} else {
$self->{autochomp};
}
}
# Get offset table entries; returns offset of nth record
sub offset {
my ($self, $n) = @_;
if ($#{$self->{offsets}} < $n) {
return if $self->{eof}; # request for record beyond the end of file
my $o = $self->_fill_offsets_to($n);
# If it's still undefined, there is no such record, so return 'undef'
return unless defined $o;
}
$self->{offsets}[$n];
}
sub discard_offsets {
my $self = shift;
$self->{offsets} = [0];
}
################################################################
#
# Matters related to deferred writing
#
# Defer writes
sub defer {
my $self = shift;
$self->_stop_autodeferring;
@{$self->{ad_history}} = ();
$self->{defer} = 1;
}
# Flush deferred writes
#
# This could be better optimized to write the file in one pass, instead
# of one pass per block of records. But that will require modifications
# to _twrite, so I should have a good _twrite test suite first.
sub flush {
my $self = shift;
$self->_flush;
$self->{defer} = 0;
}
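# Usage sketch (names are illustrative): defer() batches up many STOREs,
# and flush() writes them out together:
#
#   my $o = tied @array;
#   $o->defer;
#   $array[$_] = "record $_" for 0 .. 9;
#   $o->flush;                     # all ten records written in one pass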
sub _old_flush {
my $self = shift;
my @writable = sort {$a<=>$b} (keys %{$self->{deferred}});
while (@writable) {
# gather all consecutive records from the front of @writable
my $first_rec = shift @writable;
my $last_rec = $first_rec+1;
++$last_rec, shift @writable while @writable && $last_rec == $writable[0];
--$last_rec;
$self->_fill_offsets_to($last_rec);
$self->_extend_file_to($last_rec);
$self->_splice($first_rec, $last_rec-$first_rec+1,
@{$self->{deferred}}{$first_rec .. $last_rec});
}
$self->_discard; # clear out deferred-write cache
}
sub _flush {
my $self = shift;
my @writable = sort {$a<=>$b} (keys %{$self->{deferred}});
my @args;
my @adjust;
while (@writable) {
# gather all consecutive records from the front of @writable
my $first_rec = shift @writable;
my $last_rec = $first_rec+1;
++$last_rec, shift @writable while @writable && $last_rec == $writable[0];
--$last_rec;
my $end = $self->_fill_offsets_to($last_rec+1);
if (not defined $end) {
$self->_extend_file_to($last_rec);
$end = $self->{offsets}[$last_rec];
}
my ($start) = $self->{offsets}[$first_rec];
push @args,
join("", @{$self->{deferred}}{$first_rec .. $last_rec}), # data
$start, # position
$end-$start; # length
push @adjust, [$first_rec, # starting at this position...
$last_rec-$first_rec+1, # this many records...
# are replaced with these...
@{$self->{deferred}}{$first_rec .. $last_rec},
];
}
$self->_mtwrite(@args); # write multiple record groups
$self->_discard; # clear out deferred-write cache
$self->_oadjust(@adjust);
}
# Discard deferred writes and disable future deferred writes
sub discard {
my $self = shift;
$self->_discard;
$self->{defer} = 0;
}
# Discard deferred writes, but retain old deferred writing mode
sub _discard {
my $self = shift;
%{$self->{deferred}} = ();
$self->{deferred_s} = 0;
$self->{deferred_max} = -1;
$self->{cache}->set_limit($self->{memory});
}
# Deferred writing is enabled, either explicitly ($self->{defer})
# or automatically ($self->{autodeferring})
sub _is_deferring {
my $self = shift;
$self->{defer} || $self->{autodeferring};
}
# The largest record number of any deferred record
sub _defer_max {
my $self = shift;
return $self->{deferred_max} if defined $self->{deferred_max};
my $max = -1;
for my $key (keys %{$self->{deferred}}) {
$max = $key if $key > $max;
}
$self->{deferred_max} = $max;
$max;
}
################################################################
#
# Matters related to autodeferment
#
# Get/set autodefer option
sub autodefer {
my $self = shift;
if (@_) {
my $old = $self->{autodefer};
$self->{autodefer} = shift;
if ($old) {
$self->_stop_autodeferring;
@{$self->{ad_history}} = ();
}
$old;
} else {
$self->{autodefer};
}
}
# The user is trying to store record #$n. Record that in the history,
# and then enable (or disable) autodeferment if that seems useful.
# Note that it's OK for $n to be a non-number, as long as the function
# is prepared to deal with that. Nobody else looks at the ad_history.
#
# Now, what does the ad_history mean, and what is this function doing?
# Essentially, the idea is to enable autodeferring when we see that the
# user has made three consecutive STORE calls to three consecutive records.
# ("Three" is actually ->{autodefer_threshhold}.)
# A STORE call for record #$n inserts $n into the autodefer history,
# and if the history contains three consecutive records, we enable
# autodeferment. An ad_history of [X, Y] means that the most recent
# STOREs were for records X, X+1, ..., Y, in that order.
#
# Inserting a nonconsecutive number erases the history and starts over.
#
# Performing a special operation like SPLICE erases the history.
#
# There's one special case: CLEAR means that CLEAR was just called.
# In this case, we prime the history with [-2, -1] so that if the next
# write is for record 0, autodeferring goes on immediately. This is for
# the common special case of "@a = (...)".
#
sub _annotate_ad_history {
my ($self, $n) = @_;
return unless $self->{autodefer}; # feature is disabled
return if $self->{defer}; # already in explicit defer mode
return unless $self->{offsets}[-1] >= $self->{autodefer_filelen_threshhold};
local *H = $self->{ad_history};
if ($n eq 'CLEAR') {
@H = (-2, -1); # prime the history with fake records
$self->_stop_autodeferring;
} elsif ($n =~ /^\d+$/) {
if (@H == 0) {
@H = ($n, $n);
} else { # @H == 2
if ($H[1] == $n-1) { # another consecutive record
$H[1]++;
if ($H[1] - $H[0] + 1 >= $self->{autodefer_threshhold}) {
$self->{autodeferring} = 1;
}
} else { # nonconsecutive- erase and start over
@H = ($n, $n);
$self->_stop_autodeferring;
}
}
} else { # SPLICE or STORESIZE or some such
@H = ();
$self->_stop_autodeferring;
}
}
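# Worked example (assuming autodefer_threshhold is 3, as the comment
# above suggests): STOREs to records 7, 8, and 9 leave the history as
# [7, 9] and enable autodeferring; a later STORE to record 12 is
# nonconsecutive, so the history resets to [12, 12] and autodeferring
# is turned off again.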
# If autodeferring was enabled, cut it out and discard the history
sub _stop_autodeferring {
my $self = shift;
if ($self->{autodeferring}) {
$self->_flush;
}
$self->{autodeferring} = 0;
}
################################################################
# This is NOT a method. It is here for two reasons:
# 1. To factor a fairly complicated block out of the constructor
# 2. To provide access for the test suite, which needs to be sure
# files are being written properly.
sub _default_recsep {
my $recsep = $/;
if ($^O eq 'MSWin32') { # Dos too?
# Windows users expect files to be terminated with \r\n
# But $/ is set to \n instead
# Note that this also transforms \n\n into \r\n\r\n.
# That is a feature.
$recsep =~ s/\n/\r\n/g;
}
$recsep;
}
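# For example: on MSWin32 with the usual $/ of "\n", the default record
# separator becomes "\r\n"; likewise a $/ of "\n\n" becomes "\r\n\r\n",
# as the comment above notes.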
# Utility function for _check_integrity
sub _ci_warn {
my $msg = shift;
$msg =~ s/\n/\\n/g;
$msg =~ s/\r/\\r/g;
print "# $msg\n";
}
# Given a file, make sure the cache is consistent with the
# file contents and the internal data structures are consistent with
# each other. Returns true if everything checks out, false if not
#
# The $file argument is no longer used. It is retained for compatibility
# with the existing test suite.
sub _check_integrity {
my ($self, $file, $warn) = @_;
my $rsl = $self->{recseplen};
my $rs = $self->{recsep};
my $good = 1;
local *_; # local $_ does not work here
local $DIAGNOSTIC = 1;
if (not defined $rs) {
_ci_warn("recsep is undef!");
$good = 0;
} elsif ($rs eq "") {
_ci_warn("recsep is empty!");
$good = 0;
} elsif ($rsl != length $rs) {
my $ln = length $rs;
_ci_warn("recsep <$rs> has length $ln, should be $rsl");
$good = 0;
}
if (not defined $self->{offsets}[0]) {
_ci_warn("offset 0 is missing!");
$good = 0;
} elsif ($self->{offsets}[0] != 0) {
_ci_warn("rec 0: offset <$self->{offsets}[0]> s/b 0!");
$good = 0;
}
my $cached = 0;
{
local *F = $self->{fh};
seek F, 0, SEEK_SET;
local $. = 0;
local $/ = $rs;
while (<F>) {
my $n = $. - 1;
my $cached = $self->{cache}->_produce($n);
my $offset = $self->{offsets}[$.];
my $ao = tell F;
if (defined $offset && $offset != $ao) {
_ci_warn("rec $n: offset <$offset> actual <$ao>");
$good = 0;
}
if (defined $cached && $_ ne $cached && ! $self->{deferred}{$n}) {
$good = 0;
_ci_warn("rec $n: cached <$cached> actual <$_>");
}
if (defined $cached && substr($cached, -$rsl) ne $rs) {
$good = 0;
_ci_warn("rec $n in the cache is missing the record separator");
}
if (! defined $offset && $self->{eof}) {
$good = 0;
_ci_warn("The offset table was marked complete, but it is missing " .
"element $.");
}
}
if (@{$self->{offsets}} > $.+1) {
$good = 0;
my $n = @{$self->{offsets}};
_ci_warn("The offset table has $n items, but the file has only $.");
}
my $deferring = $self->_is_deferring;
for my $n ($self->{cache}->ckeys) {
my $r = $self->{cache}->_produce($n);
$cached += length($r);
next if $n+1 <= $.; # checked this already
_ci_warn("spurious caching of record $n");
$good = 0;
}
my $b = $self->{cache}->bytes;
if ($cached != $b) {
_ci_warn("cache size is $b, should be $cached");
$good = 0;
}
}
# That cache has its own set of tests
$good = 0 unless $self->{cache}->_check_integrity;
# Now let's check the deferbuffer
# Unless deferred writing is enabled, it should be empty
if (! $self->_is_deferring && %{$self->{deferred}}) {
_ci_warn("deferred writing disabled, but deferbuffer nonempty");
$good = 0;
}
# Any record in the deferbuffer should *not* be present in the readcache
my $deferred_s = 0;
while (my ($n, $r) = each %{$self->{deferred}}) {
$deferred_s += length($r);
if (defined $self->{cache}->_produce($n)) {
_ci_warn("record $n is in the deferbuffer *and* the readcache");
$good = 0;
}
if (substr($r, -$rsl) ne $rs) {
_ci_warn("rec $n in the deferbuffer is missing the record separator");
$good = 0;
}
}
# Total size of deferbuffer should match internal total
if ($deferred_s != $self->{deferred_s}) {
_ci_warn("buffer size is $self->{deferred_s}, should be $deferred_s");
$good = 0;
}
# Total size of deferbuffer should not exceed the specified limit
if ($deferred_s > $self->{dw_size}) {
_ci_warn("buffer size is $self->{deferred_s} which exceeds the limit " .
"of $self->{dw_size}");
$good = 0;
}
# Total size of cached data should not exceed the specified limit
if ($deferred_s + $cached > $self->{memory}) {
my $total = $deferred_s + $cached;
_ci_warn("total stored data size is $total which exceeds the limit " .
"of $self->{memory}");
$good = 0;
}
# Stuff related to autodeferment
if (!$self->{autodefer} && @{$self->{ad_history}}) {
_ci_warn("autodefer is disabled, but ad_history is nonempty");
$good = 0;
}
if ($self->{autodeferring} && $self->{defer}) {
_ci_warn("both autodeferring and explicit deferring are active");
$good = 0;
}
if (@{$self->{ad_history}} == 0) {
# That's OK, no additional tests required
} elsif (@{$self->{ad_history}} == 2) {
my @non_number = grep !/^-?\d+$/, @{$self->{ad_history}};
if (@non_number) {
my $msg;
{ local $" = ')(';
$msg = "ad_history contains non-numbers (@{$self->{ad_history}})";
}
_ci_warn($msg);
$good = 0;
} elsif ($self->{ad_history}[1] < $self->{ad_history}[0]) {
_ci_warn("ad_history has nonsensical values @{$self->{ad_history}}");
$good = 0;
}
} else {
_ci_warn("ad_history has bad length <@{$self->{ad_history}}>");
$good = 0;
}
$good;
}
################################################################
#
# Tie::File::Cache
#
# Read cache
package Tie::File::Cache;
$Tie::File::Cache::VERSION = $Tie::File::VERSION;
use Carp ':DEFAULT', 'confess';
sub HEAP () { 0 }
sub HASH () { 1 }
sub MAX () { 2 }
sub BYTES() { 3 }
#sub STAT () { 4 } # Array with request statistics for each record
#sub MISS () { 5 } # Total number of cache misses
#sub REQ () { 6 } # Total number of cache requests
use strict 'vars';
sub new {
my ($pack, $max) = @_;
local *_;
croak "missing argument to ->new" unless defined $max;
my $self = [];
bless $self => $pack;
@$self = (Tie::File::Heap->new($self), {}, $max, 0);
$self;
}
sub adj_limit {
my ($self, $n) = @_;
$self->[MAX] += $n;
}
sub set_limit {
my ($self, $n) = @_;
$self->[MAX] = $n;
}
# For internal use only
# Will be called by the heap structure to notify us that a certain
# piece of data has moved from one heap element to another.
# $k is the hash key of the item
# $n is the new index into the heap at which it is stored
# If $n is undefined, the item has been removed from the heap.
sub _heap_move {
my ($self, $k, $n) = @_;
if (defined $n) {
$self->[HASH]{$k} = $n;
} else {
delete $self->[HASH]{$k};
}
}
sub insert {
my ($self, $key, $val) = @_;
local *_;
croak "missing argument to ->insert" unless defined $key;
unless (defined $self->[MAX]) {
confess "undefined max" ;
}
confess "undefined val" unless defined $val;
return if length($val) > $self->[MAX];
# if ($self->[STAT]) {
# $self->[STAT][$key] = 1;
# return;
# }
my $oldnode = $self->[HASH]{$key};
if (defined $oldnode) {
my $oldval = $self->[HEAP]->set_val($oldnode, $val);
$self->[BYTES] -= length($oldval);
} else {
$self->[HEAP]->insert($key, $val);
}
$self->[BYTES] += length($val);
$self->flush if $self->[BYTES] > $self->[MAX];
}
sub expire {
my $self = shift;
my $old_data = $self->[HEAP]->popheap;
return unless defined $old_data;
$self->[BYTES] -= length $old_data;
$old_data;
}
sub remove {
my ($self, @keys) = @_;
my @result;
# if ($self->[STAT]) {
# for my $key (@keys) {
# $self->[STAT][$key] = 0;
# }
# return;
# }
for my $key (@keys) {
next unless exists $self->[HASH]{$key};
my $old_data = $self->[HEAP]->remove($self->[HASH]{$key});
$self->[BYTES] -= length $old_data;
push @result, $old_data;
}
@result;
}
sub lookup {
my ($self, $key) = @_;
local *_;
croak "missing argument to ->lookup" unless defined $key;
# if ($self->[STAT]) {
# $self->[MISS]++ if $self->[STAT][$key]++ == 0;
# $self->[REQ]++;
# my $hit_rate = 1 - $self->[MISS] / $self->[REQ];
# # Do some testing to determine this threshold
# $#$self = STAT - 1 if $hit_rate > 0.20;
# }
if (exists $self->[HASH]{$key}) {
$self->[HEAP]->lookup($self->[HASH]{$key});
} else {
return;
}
}
# For internal use only
sub _produce {
my ($self, $key) = @_;
my $loc = $self->[HASH]{$key};
return unless defined $loc;
$self->[HEAP][$loc][2];
}
# For internal use only
sub _promote {
my ($self, $key) = @_;
$self->[HEAP]->promote($self->[HASH]{$key});
}
sub empty {
my ($self) = @_;
%{$self->[HASH]} = ();
$self->[BYTES] = 0;
$self->[HEAP]->empty;
# @{$self->[STAT]} = ();
# $self->[MISS] = 0;
# $self->[REQ] = 0;
}
sub is_empty {
my ($self) = @_;
keys %{$self->[HASH]} == 0;
}
sub update {
my ($self, $key, $val) = @_;
local *_;
croak "missing argument to ->update" unless defined $key;
if (length($val) > $self->[MAX]) {
my ($oldval) = $self->remove($key);
$self->[BYTES] -= length($oldval) if defined $oldval;
} elsif (exists $self->[HASH]{$key}) {
my $oldval = $self->[HEAP]->set_val($self->[HASH]{$key}, $val);
$self->[BYTES] += length($val);
$self->[BYTES] -= length($oldval) if defined $oldval;
} else {
$self->[HEAP]->insert($key, $val);
$self->[BYTES] += length($val);
}
$self->flush;
}
sub rekey {
my ($self, $okeys, $nkeys) = @_;
local *_;
croak "missing argument to ->rekey" unless defined $nkeys;
croak "length mismatch in ->rekey arguments" unless @$nkeys == @$okeys;
my %adjusted; # map new keys to heap indices
# You should be able to cut this to one loop TODO XXX
for (0 .. $#$okeys) {
$adjusted{$nkeys->[$_]} = delete $self->[HASH]{$okeys->[$_]};
}
while (my ($nk, $ix) = each %adjusted) {
# @{$self->[HASH]}{keys %adjusted} = values %adjusted;
$self->[HEAP]->rekey($ix, $nk);
$self->[HASH]{$nk} = $ix;
}
}
sub ckeys {
my $self = shift;
my @a = keys %{$self->[HASH]};
@a;
}
# Return total amount of cached data
sub bytes {
my $self = shift;
$self->[BYTES];
}
# Expire oldest item from cache until cache size is smaller than $max
sub reduce_size_to {
my ($self, $max) = @_;
until ($self->[BYTES] <= $max) {
# Note that Tie::File::Cache::expire has been inlined here
my $old_data = $self->[HEAP]->popheap;
return unless defined $old_data;
$self->[BYTES] -= length $old_data;
}
}
# Why not just $self->reduce_size_to($self->[MAX])?
# Try this when things stabilize TODO XXX
# If the cache is too full, expire the oldest records
sub flush {
my $self = shift;
$self->reduce_size_to($self->[MAX]) if $self->[BYTES] > $self->[MAX];
}
# For internal use only
sub _produce_lru {
my $self = shift;
$self->[HEAP]->expire_order;
}
BEGIN { *_ci_warn = \&Tie::File::_ci_warn }
sub _check_integrity { # For CACHE
my $self = shift;
my $good = 1;
# Test HEAP
$self->[HEAP]->_check_integrity or $good = 0;
# Test HASH
my $bytes = 0;
for my $k (keys %{$self->[HASH]}) {
if ($k ne '0' && $k !~ /^[1-9][0-9]*$/) {
$good = 0;
_ci_warn "Cache hash key <$k> is non-numeric";
}
my $h = $self->[HASH]{$k};
if (! defined $h) {
$good = 0;
_ci_warn "Heap index number for key $k is undefined";
} elsif ($h == 0) {
$good = 0;
_ci_warn "Heap index number for key $k is zero";
} else {
my $j = $self->[HEAP][$h];
if (! defined $j) {
$good = 0;
_ci_warn "Heap contents key $k (=> $h) are undefined";
} else {
$bytes += length($j->[2]);
if ($k ne $j->[1]) {
$good = 0;
_ci_warn "Heap contents key $k (=> $h) is $j->[1], should be $k";
}
}
}
}
# Test BYTES
if ($bytes != $self->[BYTES]) {
$good = 0;
_ci_warn "Total data in cache is $bytes, expected $self->[BYTES]";
}
# Test MAX
if ($bytes > $self->[MAX]) {
$good = 0;
_ci_warn "Total data in cache is $bytes, exceeds maximum $self->[MAX]";
}
return $good;
}
sub delink {
my $self = shift;
$self->[HEAP] = undef; # Bye bye heap
}
################################################################
#
# Tie::File::Heap
#
# Heap data structure for use by cache LRU routines
package Tie::File::Heap;
use Carp ':DEFAULT', 'confess';
$Tie::File::Heap::VERSION = $Tie::File::Cache::VERSION;
sub SEQ () { 0 };
sub KEY () { 1 };
sub DAT () { 2 };
sub new {
my ($pack, $cache) = @_;
die "$pack: Parent cache object $cache does not support _heap_move method"
unless eval { $cache->can('_heap_move') };
my $self = [[0,$cache,0]];
bless $self => $pack;
}
# Allocate a new sequence number, larger than all previously allocated numbers
sub _nseq {
my $self = shift;
$self->[0][0]++;
}
sub _cache {
my $self = shift;
$self->[0][1];
}
sub _nelts {
my $self = shift;
$self->[0][2];
}
sub _nelts_inc {
my $self = shift;
++$self->[0][2];
}
sub _nelts_dec {
my $self = shift;
--$self->[0][2];
}
sub is_empty {
my $self = shift;
$self->_nelts == 0;
}
sub empty {
my $self = shift;
$#$self = 0;
$self->[0][2] = 0;
$self->[0][0] = 0; # might as well reset the sequence numbers
}
# notify the parent cache object that we moved something
sub _heap_move {
my $self = shift;
$self->_cache->_heap_move(@_);
}
# Insert a piece of data into the heap with the indicated sequence number.
# The item with the smallest sequence number is always at the top.
# If no sequence number is specified, allocate a new one and insert the
# item at the bottom.
sub insert {
my ($self, $key, $data, $seq) = @_;
$seq = $self->_nseq unless defined $seq;
$self->_insert_new([$seq, $key, $data]);
}
# Insert a new, fresh item at the bottom of the heap
sub _insert_new {
my ($self, $item) = @_;
my $i = @$self;
$i = int($i/2) until defined $self->[$i/2];
$self->[$i] = $item;
$self->[0][1]->_heap_move($self->[$i][KEY], $i);
$self->_nelts_inc;
}
# Insert [$data, $seq] pair at or below item $i in the heap.
# If $i is omitted, default to 1 (the top element.)
sub _insert {
my ($self, $item, $i) = @_;
# $self->_check_loc($i) if defined $i;
$i = 1 unless defined $i;
until (! defined $self->[$i]) {
if ($self->[$i][SEQ] > $item->[SEQ]) { # inserted item is older
($self->[$i], $item) = ($item, $self->[$i]);
$self->[0][1]->_heap_move($self->[$i][KEY], $i);
}
# If either is undefined, go that way. Otherwise, choose at random
my $dir;
$dir = 0 if !defined $self->[2*$i];
$dir = 1 if !defined $self->[2*$i+1];
$dir = int(rand(2)) unless defined $dir;
$i = 2*$i + $dir;
}
$self->[$i] = $item;
$self->[0][1]->_heap_move($self->[$i][KEY], $i);
$self->_nelts_inc;
}
# Remove the item at node $i from the heap, moving child items upwards.
# The item with the smallest sequence number is always at the top.
# Moving items upwards maintains this condition.
# Return the removed item. Return undef if there was no item at node $i.
sub remove {
my ($self, $i) = @_;
$i = 1 unless defined $i;
my $top = $self->[$i];
return unless defined $top;
while (1) {
my $ii;
my ($L, $R) = (2*$i, 2*$i+1);
# If either is undefined, go the other way.
# Otherwise, go towards the smallest.
last unless defined $self->[$L] || defined $self->[$R];
$ii = $R if not defined $self->[$L];
$ii = $L if not defined $self->[$R];
unless (defined $ii) {
$ii = $self->[$L][SEQ] < $self->[$R][SEQ] ? $L : $R;
}
$self->[$i] = $self->[$ii]; # Promote child to fill vacated spot
$self->[0][1]->_heap_move($self->[$i][KEY], $i);
$i = $ii; # Fill new vacated spot
}
$self->[0][1]->_heap_move($top->[KEY], undef);
undef $self->[$i];
$self->_nelts_dec;
return $top->[DAT];
}
sub popheap {
my $self = shift;
$self->remove(1);
}
# set the sequence number of the indicated item to a higher number
# than any other item in the heap, and bubble the item down to the
# bottom.
sub promote {
my ($self, $n) = @_;
# $self->_check_loc($n);
$self->[$n][SEQ] = $self->_nseq;
my $i = $n;
while (1) {
my ($L, $R) = (2*$i, 2*$i+1);
my $dir;
last unless defined $self->[$L] || defined $self->[$R];
$dir = $R unless defined $self->[$L];
$dir = $L unless defined $self->[$R];
unless (defined $dir) {
$dir = $self->[$L][SEQ] < $self->[$R][SEQ] ? $L : $R;
}
@{$self}[$i, $dir] = @{$self}[$dir, $i];
for ($i, $dir) {
$self->[0][1]->_heap_move($self->[$_][KEY], $_) if defined $self->[$_];
}
$i = $dir;
}
}
# Return item $n from the heap, promoting its LRU status
sub lookup {
my ($self, $n) = @_;
# $self->_check_loc($n);
my $val = $self->[$n];
$self->promote($n);
$val->[DAT];
}
# Assign a new value for node $n, promoting it to the bottom of the heap
sub set_val {
my ($self, $n, $val) = @_;
# $self->_check_loc($n);
my $oval = $self->[$n][DAT];
$self->[$n][DAT] = $val;
$self->promote($n);
return $oval;
}
# The hash key has changed for an item;
# alter the heap's record of the hash key
sub rekey {
my ($self, $n, $new_key) = @_;
# $self->_check_loc($n);
$self->[$n][KEY] = $new_key;
}
sub _check_loc {
my ($self, $n) = @_;
unless (1 || defined $self->[$n]) {
confess "_check_loc($n) failed";
}
}
BEGIN { *_ci_warn = \&Tie::File::_ci_warn }
sub _check_integrity {
my $self = shift;
my $good = 1;
my %seq;
unless (eval {$self->[0][1]->isa("Tie::File::Cache")}) {
_ci_warn "Element 0 of heap corrupt";
$good = 0;
}
$good = 0 unless $self->_satisfies_heap_condition(1);
for my $i (2 .. $#{$self}) {
my $p = int($i/2); # index of parent node
if (defined $self->[$i] && ! defined $self->[$p]) {
_ci_warn "Element $i of heap defined, but parent $p isn't";
$good = 0;
}
if (defined $self->[$i]) {
if ($seq{$self->[$i][SEQ]}) {
my $seq = $self->[$i][SEQ];
_ci_warn "Nodes $i and $seq{$seq} both have SEQ=$seq";
$good = 0;
} else {
$seq{$self->[$i][SEQ]} = $i;
}
}
}
return $good;
}
sub _satisfies_heap_condition {
my $self = shift;
my $n = shift || 1;
my $good = 1;
for (0, 1) {
my $c = $n*2 + $_;
next unless defined $self->[$c];
if ($self->[$n][SEQ] >= $self->[$c][SEQ]) {
_ci_warn "Node $n of heap does not predate node $c";
$good = 0 ;
}
$good = 0 unless $self->_satisfies_heap_condition($c);
}
return $good;
}
# Return a list of all the keys, sorted by expiration order
sub expire_order {
my $self = shift;
my @nodes = sort {$a->[SEQ] <=> $b->[SEQ]} $self->_nodes;
map { $_->[KEY] } @nodes;
}
sub _nodes {
my $self = shift;
my $i = shift || 1;
return unless defined $self->[$i];
($self->[$i], $self->_nodes($i*2), $self->_nodes($i*2+1));
}
1;
__END__
=head1 NAME
Tie::File - Access the lines of a disk file via a Perl array
=head1 SYNOPSIS
use Tie::File;
tie @array, 'Tie::File', filename or die ...;
$array[0] = 'blah'; # first line of the file is now 'blah'
# (line numbering starts at 0)
print $array[42]; # display line 43 of the file
$n_recs = @array; # how many records are in the file?
$#array -= 2; # chop two records off the end
for (@array) {
s/PERL/Perl/g; # Replace PERL with Perl everywhere in the file
}
# These are just like regular push, pop, unshift, shift, and splice
# Except that they modify the file in the way you would expect
push @array, new recs...;
my $r1 = pop @array;
unshift @array, new recs...;
my $r2 = shift @array;
@old_recs = splice @array, 3, 7, new recs...;
untie @array; # all finished
=head1 DESCRIPTION
C<Tie::File> represents a regular text file as a Perl array. Each
element in the array corresponds to a record in the file. The first
line of the file is element 0 of the array; the second line is element
1, and so on.
The file is I<not> loaded into memory, so this will work even for
gigantic files.
Changes to the array are reflected in the file immediately.
Lazy people and beginners may now stop reading the manual.
=head2 C<recsep>
What is a 'record'? By default, the meaning is the same as for the
C<E<lt>...E<gt>> operator: It's a string terminated by C<$/>, which is
probably C<"\n">. (Minor exception: on DOS and Win32 systems, a
'record' is a string terminated by C<"\r\n">.) You may change the
definition of "record" by supplying the C<recsep> option in the C<tie>
call:
tie @array, 'Tie::File', $file, recsep => 'es';
This says that records are delimited by the string C<es>. If the file
contained the following data:
Curse these pesky flies!\n
then the C<@array> would appear to have four elements:
"Curse th"
"e p"
"ky fli"
"!\n"
An undefined value is not permitted as a record separator. Perl's
special "paragraph mode" semantics (E<agrave> la C<$/ = "">) are not
emulated.
Records read from the tied array do not have the record separator
string on the end; this is to allow
$array[17] .= "extra";
to work as expected.
(See L<"autochomp">, below.) Records stored into the array will have
the record separator string appended before they are written to the
file, if they don't have one already. For example, if the record
separator string is C<"\n">, then the following two lines do exactly
the same thing:
$array[17] = "Cherry pie";
$array[17] = "Cherry pie\n";
The result is that the contents of line 17 of the file will be
replaced with "Cherry pie"; a newline character will separate line 17
from line 18. This means that this code will do nothing:
chomp $array[17];
Because the C<chomp>ed value will have the separator reattached when
it is written back to the file. There is no way to create a file
whose trailing record separator string is missing.
Inserting records that I<contain> the record separator string is not
supported by this module. It will probably produce a reasonable
result, but what this result will be may change in a future version.
Use 'splice' to insert records or to replace one record with several.
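For example, a single C<splice> call can replace one record with several (the record number here is illustrative):

```perl
# Replace record 5 with three new records; the removed record is returned.
my ($old) = splice @array, 5, 1, "first", "second", "third";
```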
=head2 C<autochomp>
Normally, array elements have the record separator removed, so that if
the file contains the text
Gold
Frankincense
Myrrh
the tied array will appear to contain C<("Gold", "Frankincense",
"Myrrh")>. If you set C<autochomp> to a false value, the record
separator will not be removed. If the file above was tied with
tie @gifts, "Tie::File", $gifts, autochomp => 0;
then the array C<@gifts> would appear to contain C<("Gold\n",
"Frankincense\n", "Myrrh\n")>, or (on Win32 systems) C<("Gold\r\n",
"Frankincense\r\n", "Myrrh\r\n")>.
=head2 C<mode>
Normally, the specified file will be opened for read and write access,
and will be created if it does not exist. (That is, the flags
C<O_RDWR | O_CREAT> are supplied in the C<open> call.) If you want to
change this, you may supply alternative flags in the C<mode> option.
See L<Fcntl> for a listing of available flags.
For example:
# open the file if it exists, but fail if it does not exist
use Fcntl 'O_RDWR';
tie @array, 'Tie::File', $file, mode => O_RDWR;
# create the file if it does not exist
use Fcntl 'O_RDWR', 'O_CREAT';
tie @array, 'Tie::File', $file, mode => O_RDWR | O_CREAT;
# open an existing file in read-only mode
use Fcntl 'O_RDONLY';
tie @array, 'Tie::File', $file, mode => O_RDONLY;
Opening the data file in write-only or append mode is not supported.
=head2 C<memory>
This is an upper limit on the amount of memory that C<Tie::File> will
consume at any time while managing the file. This is used for two
things: managing the I<read cache> and managing the I<deferred write
buffer>.
Records read in from the file are cached, to avoid having to re-read
them repeatedly. If you read the same record twice, the first time it
will be stored in memory, and the second time it will be fetched from
the I<read cache>. The amount of data in the read cache will not
exceed the value you specified for C<memory>. If C<Tie::File> wants
to cache a new record, but the read cache is full, it will make room
by expiring the least-recently visited records from the read cache.
The default memory limit is 2MiB. You can adjust the maximum read
cache size by supplying the C<memory> option. The argument is the
desired cache size, in bytes.
# I have a lot of memory, so use a large cache to speed up access
tie @array, 'Tie::File', $file, memory => 20_000_000;
Setting the memory limit to 0 will inhibit caching; records will be
fetched from disk every time you examine them.
The C<memory> value is not an absolute or exact limit on the memory
used. C<Tie::File> objects contain some structures besides the read
cache and the deferred write buffer, whose sizes are not charged
against C<memory>.
The cache itself consumes about 310 bytes per cached record, so if
your file has many short records, you may want to decrease the cache
memory limit, or else the cache overhead may exceed the size of the
cached data.
=head2 C<dw_size>
(This is an advanced feature. Skip this section on first reading.)
If you use deferred writing (See L<"Deferred Writing">, below) then
data you write into the array will not be written directly to the
file; instead, it will be saved in the I<deferred write buffer> to be
written out later. Data in the deferred write buffer is also charged
against the memory limit you set with the C<memory> option.
You may set the C<dw_size> option to limit the amount of data that can
be saved in the deferred write buffer. This limit may not exceed the
total memory limit. For example, if you set C<dw_size> to 1000 and
C<memory> to 2500, that means that no more than 1000 bytes of deferred
writes will be saved up. The space available for the read cache will
vary, but it will always be at least 1500 bytes (if the deferred write
buffer is full) and it could grow as large as 2500 bytes (if the
deferred write buffer is empty.)
If you don't specify a C<dw_size>, it defaults to the entire memory
limit.
=head2 Option Format
C<-mode> is a synonym for C<mode>. C<-recsep> is a synonym for
C<recsep>. C<-memory> is a synonym for C<memory>. You get the
idea.
=head1 Public Methods
The C<tie> call returns an object, say C<$o>. You may call
$rec = $o->FETCH($n);
$o->STORE($n, $rec);
to fetch or store the record at line C<$n>, respectively; similarly
the other tied array methods. (See L<perltie> for details.) You may
also call the following methods on this object:
=head2 C<flock>
$o->flock(MODE)
will lock the tied file. C<MODE> has the same meaning as the second
argument to the Perl built-in C<flock> function; for example
C<LOCK_SH> or C<LOCK_EX | LOCK_NB>. (These constants are provided by
the C<use Fcntl ':flock'> declaration.)
C<MODE> is optional; the default is C<LOCK_EX>.
C<Tie::File> maintains an internal table of the byte offset of each
record it has seen in the file.
When you use C<flock> to lock the file, C<Tie::File> assumes that the
read cache is no longer trustworthy, because another process might
have modified the file since the last time it was read. Therefore, a
successful call to C<flock> discards the contents of the read cache
and the internal record offset table.
C<Tie::File> promises that the following sequence of operations will
be safe:
my $o = tie @array, "Tie::File", $filename;
$o->flock;
In particular, C<Tie::File> will I<not> read or write the file during
the C<tie> call. (Exception: Using C<mode =E<gt> O_TRUNC> will, of
course, erase the file during the C<tie> call. If you want to do this
safely, then open the file without C<O_TRUNC>, lock the file, and use
C<@array = ()>.)
The best way to unlock a file is to discard the object and untie the
array. It is probably unsafe to unlock the file without also untying
it, because if you do, changes may remain unwritten inside the object.
That is why there is no shortcut for unlocking. If you really want to
unlock the file prematurely, you know what to do; if you don't know
what to do, then don't do it.
All the usual warnings about file locking apply here. In particular,
note that file locking in Perl is B<advisory>, which means that
holding a lock will not prevent anyone else from reading, writing, or
erasing the file; it only prevents them from getting another lock at
the same time. Locks are analogous to green traffic lights: If you
have a green light, that does not prevent the idiot coming the other
way from plowing into you sideways; it merely guarantees to you that
the idiot does not also have a green light at the same time.
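A minimal locking sketch (the filename is illustrative):

```perl
use Fcntl ':flock';
my $o = tie my @lines, 'Tie::File', 'data.txt' or die "tie failed";
$o->flock(LOCK_EX);         # take an exclusive lock; the read cache is discarded
push @lines, "new record";  # modify the file while holding the lock
undef $o;                   # drop the extra reference to the tie object
untie @lines;               # closes the file, which releases the lock
```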
=head2 C<autochomp>
my $old_value = $o->autochomp(0); # disable autochomp option
my $old_value = $o->autochomp(1); # enable autochomp option
my $ac = $o->autochomp(); # recover current value
See L<"autochomp">, above.
=head2 C<defer>, C<flush>, C<discard>, and C<autodefer>
See L<"Deferred Writing">, below.
=head2 C<offset>
$off = $o->offset($n);
This method returns the byte offset of the start of the C<$n>th record
in the file. If there is no such record, it returns an undefined
value.
=head1 Tying to an already-opened filehandle
If C<$fh> is a filehandle, such as is returned by C<IO::File> or one
of the other C<IO> modules, you may use:
tie @array, 'Tie::File', $fh, ...;
Similarly if you opened that handle C<FH> with regular C<open> or
C<sysopen>, you may use:
tie @array, 'Tie::File', \*FH, ...;
Handles that were opened write-only won't work. Handles that were
opened read-only will work as long as you don't try to modify the
array. Handles must be attached to seekable sources of data---that
means no pipes or sockets. If C<Tie::File> can detect that you
supplied a non-seekable handle, the C<tie> call will throw an
exception. (On Unix systems, it can detect this.)
Note that Tie::File will only close any filehandles that it opened
internally. If you passed it a filehandle as above, you "own" the
filehandle, and are responsible for closing it after you have untied
the @array.
Tie::File calls C<binmode> on filehandles that it opens internally,
but not on filehandles passed in by the user. For consistency,
especially if using the tied files cross-platform, you may wish to
call C<binmode> on the filehandle prior to tying the file.
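A sketch of tying an already-opened handle, with C<binmode> applied first (the filename is illustrative):

```perl
open my $fh, '+<', 'data.txt' or die "open: $!";
binmode $fh;                # consistent record separators across platforms
tie my @lines, 'Tie::File', $fh or die "tie failed";
# ... read and modify @lines ...
untie @lines;
close $fh;                  # we opened the handle, so we must close it
```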
=head1 Deferred Writing
(This is an advanced feature. Skip this section on first reading.)
Normally, modifying a C<Tie::File> array writes to the underlying file
immediately. Every assignment like C<$a[3] = ...> rewrites as much of
the file as is necessary; typically, everything from line 3 through
the end will need to be rewritten. This is the simplest and most
transparent behavior. Performance even for large files is reasonably
good.
However, under some circumstances, this behavior may be excessively
slow. For example, suppose you have a million-record file, and you
want to do:
for (@FILE) {
$_ = "> $_";
}
The first time through the loop, you will rewrite the entire file,
from line 0 through the end. The second time through the loop, you
will rewrite the entire file from line 1 through the end. The third
time through the loop, you will rewrite the entire file from line 2 to
the end. And so on.
If the performance in such cases is unacceptable, you may defer the
actual writing, and then have it done all at once. The following loop
will perform much better for large files:
(tied @a)->defer;
for (@a) {
$_ = "> $_";
}
(tied @a)->flush;
If C<Tie::File>'s memory limit is large enough, all the writing will
be done in memory. Then, when you call C<-E<gt>flush>, the entire file
will be rewritten in a single pass.
(Actually, the preceding discussion is something of a fib. You don't
need to enable deferred writing to get good performance for this
common case, because C<Tie::File> will do it for you automatically
unless you specifically tell it not to. See L</Autodeferring>,
below.)
Calling C<-E<gt>flush> returns the array to immediate-write mode. If
you wish to discard the deferred writes, you may call C<-E<gt>discard>
instead of C<-E<gt>flush>. Note that in some cases, some of the data
will have been written already, and it will be too late for
C<-E<gt>discard> to discard all the changes. Support for
C<-E<gt>discard> may be withdrawn in a future version of C<Tie::File>.
Deferred writes are cached in memory up to the limit specified by the
C<dw_size> option (see above). If the deferred-write buffer is full
and you try to write still more deferred data, the buffer will be
flushed. All buffered data will be written immediately, the buffer
will be emptied, and the now-empty space will be used for future
deferred writes.
If the deferred-write buffer isn't yet full, but the total size of the
buffer and the read cache would exceed the C<memory> limit, the oldest
records will be expired from the read cache until the total size is
under the limit.
C<push>, C<pop>, C<shift>, C<unshift>, and C<splice> cannot be
deferred. When you perform one of these operations, any deferred data
is written to the file and the operation is performed immediately.
This may change in a future version.
If you resize the array with deferred writing enabled, the file will
be resized immediately, but deferred records will not be written.
This has a surprising consequence: C<@a = (...)> erases the file
immediately, but the writing of the actual data is deferred. This
might be a bug. If it is a bug, it will be fixed in a future version.
=head2 Autodeferring
C<Tie::File> tries to guess when deferred writing might be helpful,
and to turn it on and off automatically.
for (@a) {
$_ = "> $_";
}
In this example, only the first two assignments will be done
immediately; after this, all the changes to the file will be deferred
up to the user-specified memory limit.
You should usually be able to ignore this and just use the module
without thinking about deferring. However, special applications may
require fine control over which writes are deferred, or may require
that all writes be immediate. To disable the autodeferment feature,
use
(tied @o)->autodefer(0);
or
tie @array, 'Tie::File', $file, autodefer => 0;
Similarly, C<-E<gt>autodefer(1)> re-enables autodeferment, and
C<-E<gt>autodefer()> recovers the current value of the autodefer setting.
=head1 CONCURRENT ACCESS TO FILES
Caching and deferred writing are inappropriate if you want the same
file to be accessed simultaneously from more than one process. Other
optimizations performed internally by this module are also
incompatible with concurrent access. A future version of this module will
support a C<concurrent =E<gt> 1> option that enables safe concurrent access.
Previous versions of this documentation suggested using C<memory
=E<gt> 0> for safe concurrent access. This was mistaken. Tie::File
will not support safe concurrent access before version 0.96.
=head1 CAVEATS
(That's Latin for 'warnings'.)
=over 4
=item *
Reasonable effort was made to make this module efficient. Nevertheless,
changing the size of a record in the middle of a large file will
always be fairly slow, because everything after the new record must be
moved.
=item *
The behavior of tied arrays is not precisely the same as for regular
arrays. For example:
# This DOES print "How unusual!"
undef $a[10]; print "How unusual!\n" if defined $a[10];
C<undef>-ing a C<Tie::File> array element just blanks out the
corresponding record in the file. When you read it back again, you'll
get the empty string, so the supposedly-C<undef>'ed value will be
defined. Similarly, if you have C<autochomp> disabled, then
# This DOES print "How unusual!" if 'autochomp' is disabled
undef $a[10];
print "How unusual!\n" if $a[10];
Because when C<autochomp> is disabled, C<$a[10]> will read back as
C<"\n"> (or whatever the record separator string is.)
There are other minor differences, particularly regarding C<exists>
and C<delete>, but in general, the correspondence is extremely close.
=item *
I have supposed that since this module is concerned with file I/O,
almost all normal use of it will be heavily I/O bound. This means
that the time to maintain complicated data structures inside the
module will be dominated by the time to actually perform the I/O.
When there was an opportunity to spend CPU time to avoid doing I/O, I
usually tried to take it.
=item *
You might be tempted to think that deferred writing is like
transactions, with C<flush> as C<commit> and C<discard> as
C<rollback>, but it isn't, so don't.
=item *
There is a large memory overhead for each record offset and for each
cache entry: about 310 bytes per cached data record, and about 21 bytes
per offset table entry.
The per-record overhead will limit the maximum number of records you
can access per file. Note that I<accessing> the length of the array
via C<$x = scalar @tied_file> accesses B<all> records and stores their
offsets. The same for C<foreach (@tied_file)>, even if you exit the
loop early.
=back
=head1 SUBCLASSING
This version promises absolutely nothing about the internals, which
may change without notice. A future version of the module will have a
well-defined and stable subclassing API.
=head1 WHAT ABOUT C<DB_File>?
People sometimes point out that L<DB_File> will do something similar,
and ask why C<Tie::File> module is necessary.
There are a number of reasons that you might prefer C<Tie::File>.
A list is available at L<path_to_url>.
=head1 AUTHOR
Mark Jason Dominus
To contact the author, send email to: C<mjd-perl-tiefile+@plover.com>
To receive an announcement whenever a new version of this module is
released, send a blank email message to
C<mjd-perl-tiefile-subscribe@plover.com>.
The most recent version of this module, including documentation and
any news of importance, will be available at
path_to_url
=head1 LICENSE
C<Tie::File> version 0.96 is copyright (C) 2003 Mark Jason Dominus.
This library is free software; you may redistribute it and/or modify
it under the same terms as Perl itself.
These terms are your choice of any of (1) the Perl Artistic Licence, or
(2) version 2 of the GNU General Public License as published by the
Free Software Foundation, or (3) any later version of the GNU General
Public License.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this library program; it should be in the file C<COPYING>.
If not, write to the Free Software Foundation, Inc., 51 Franklin Street,
Fifth Floor, Boston, MA 02110-1301, USA
For licensing inquiries, contact the author at:
Mark Jason Dominus
255 S. Warnock St.
Philadelphia, PA 19107
=head1 WARRANTY
C<Tie::File> version 0.98 comes with ABSOLUTELY NO WARRANTY.
For details, see the license.
=head1 THANKS
Gigantic thanks to Jarkko Hietaniemi, for agreeing to put this in the
core when I hadn't written it yet, and for generally being helpful,
supportive, and competent. (Usually the rule is "choose any one.")
Also big thanks to Abhijit Menon-Sen for all of the same things.
Special thanks to Craig Berry and Peter Prymmer (for VMS portability
help), Randy Kobes (for Win32 portability help), Clinton Pierce and
Autrijus Tang (for heroic eleventh-hour Win32 testing above and beyond
the call of duty), Michael G Schwern (for testing advice), and the
rest of the CPAN testers (for testing generally).
Special thanks to Tels for suggesting several speed and memory
optimizations.
Additional thanks to:
Edward Avis /
Mattia Barbon /
Tom Christiansen /
Gerrit Haase /
Gurusamy Sarathy /
Jarkko Hietaniemi (again) /
Nikola Knezevic /
John Kominetz /
Nick Ing-Simmons /
Tassilo von Parseval /
H. Dieter Pearcey /
Slaven Rezic /
Eric Roode /
Peter Scott /
Peter Somu /
Autrijus Tang (again) /
Tels (again) /
Juerd Waalboer /
Todd Rinaldo
=head1 TODO
More tests. (Stuff I didn't think of yet.)
Paragraph mode?
Fixed-length mode. Leave-blanks mode.
Maybe an autolocking mode?
For many common uses of the module, the read cache is a liability.
For example, a program that inserts a single record, or that scans the
file once, will have a cache hit rate of zero. This suggests a major
optimization: The cache should be initially disabled. Here's a hybrid
approach: Initially, the cache is disabled, but the cache code
maintains statistics about how high the hit rate would be *if* it were
enabled. When it sees the hit rate get high enough, it enables
itself. The STAT comments in this code are the beginning of an
implementation of this.
Record locking with fcntl()? Then the module might support an undo
log and get real transactions. What a tour de force that would be.
Keeping track of the highest cached record. This would allow reads-in-a-row
to skip the cache lookup faster (if reading from 1..N with empty cache at
start, the last cached value will be always N-1).
More tests.
=cut
```
|
```c
/*
* You can use this software according to the terms and conditions of the Mulan PSL v2.
* You may obtain a copy of Mulan PSL v2 at:
* path_to_url
* THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
* EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
* MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
* See the Mulan PSL v2 for more details.
* Description: fwscanf_s function
* Author: lishunda
* Create: 2014-02-25
*/
#include "securec.h"
/*
* <FUNCTION DESCRIPTION>
* The fwscanf_s function is the wide-character equivalent of the fscanf_s function
* The fwscanf_s function reads data from the current position of stream into
* the locations given by argument (if any). Each argument must be a pointer
* to a variable of a type that corresponds to a type specifier in format.
* format controls the interpretation of the input fields and has the same
* form and function as the format argument for scanf.
*
* <INPUT PARAMETERS>
* stream Pointer to FILE structure.
* format Format control string, see Format Specifications.
* ... Optional arguments.
*
* <OUTPUT PARAMETERS>
* ... The converted value stored in user assigned address
*
* <RETURN VALUE>
* Each of these functions returns the number of fields successfully converted
* and assigned; the return value does not include fields that were read but
* not assigned. A return value of 0 indicates that no fields were assigned.
* return -1 if an error occurs.
*/
int fwscanf_s(FILE *stream, const wchar_t *format, ...)
{
int ret; /* If initialization causes e838 */
va_list argList;
va_start(argList, format);
ret = vfwscanf_s(stream, format, argList);
va_end(argList);
    (void)argList; /* To clear e438 last value assigned not used, the compiler will optimize this code */
return ret;
}
```
|
Kech may refer to:
Places
Kech, Khyber Pakhtunkhwa, Pakistan
Kech District, Balochistan, Pakistan
Kech River, in Iran and Pakistan
Kech, Iran (disambiguation), the alternative spelling of several places in Iran
Other uses
KECH-FM, a radio station in Idaho, U.S.
See also
Kek (disambiguation)
Ketch, a sailboat
Makran, a coastal strip in Balochistan, in Pakistan and Iran, called Kech Makran on the Pakistani side
|
```c++
/*
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
* ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#pragma once
#include "OrdinalNumber.h"
namespace WTF {
// TextPosition structure specifies coordinates within a text resource. It is used mostly
// for saving script source position.
class TextPosition {
public:
TextPosition(OrdinalNumber line, OrdinalNumber column)
: m_line(line)
, m_column(column)
{
}
TextPosition() { }
bool operator==(const TextPosition& other) const { return m_line == other.m_line && m_column == other.m_column; }
bool operator!=(const TextPosition& other) const { return !((*this) == other); }
// A value with line value less than a minimum; used as an impossible position.
static TextPosition belowRangePosition() { return TextPosition(OrdinalNumber::beforeFirst(), OrdinalNumber::beforeFirst()); }
OrdinalNumber m_line;
OrdinalNumber m_column;
};
}
using WTF::TextPosition;
```
|
```c++
/// Source : path_to_url
/// Author : liuyubobobo
/// Time : 2020-05-10
#include <iostream>
#include <vector>
#include <queue>
using namespace std;
/// Greedy using PQ
/// Time Complexity: O(nlogn)
/// Space Complexity: O(n)
class Solution {
public:
int minRefuelStops(int target, int startFuel, vector<vector<int>>& stations) {
if(startFuel >= target)
return 0;
int n = stations.size();
priority_queue<int> pq;
int i = 0, cur = startFuel, res = 0;
while(cur < target){
for(; i < n && cur >= stations[i][0]; i ++)
pq.push(stations[i][1]);
if(!pq.empty()) cur += pq.top(), pq.pop(), res ++;
else break;
}
return cur >= target ? res : -1;
}
};
int main() {
int target1 = 1, startFuel1 = 1;
vector<vector<int>> stations1;
cout << Solution().minRefuelStops(target1, startFuel1, stations1) << endl;
// 0
int target2 = 100, startFuel2 = 1;
vector<vector<int>> stations2 = {{10, 100}};
cout << Solution().minRefuelStops(target2, startFuel2, stations2) << endl;
// -1
int target3 = 100, startFuel3 = 10;
vector<vector<int>> stations3 = {{10, 60}, {20, 30}, {30, 30}, {60, 40}};
cout << Solution().minRefuelStops(target3, startFuel3, stations3) << endl;
// 2
int target4 = 1000, startFuel4 = 83;
vector<vector<int>> stations4 = {{25, 27}, {36, 187}, {140, 186}, {378, 6}, {492, 202},
{517, 89}, {579, 234}, {673, 86}, {808, 53}, {954, 49}};
cout << Solution().minRefuelStops(target4, startFuel4, stations4) << endl;
// -1
return 0;
}
```
|
Andreas Schonberg Lommer (born 7 November 1991) is a Danish long-distance runner. He competed in the men's race at the 2020 World Athletics Half Marathon Championships held in Gdynia, Poland.
In 2015, he competed in the men's 1500 metres event at the Summer Universiade held in Gwangju, South Korea.
Personal bests
Outdoor
1500 metres – 3:49.64 (Watford 2015)
5000 metres – 13:58.31 (Aalborg 2020)
10,000 metres – 29:18.94 (Odense 2020)
Road
Half marathon – 1:03:52 (Barcelona 2020)
Marathon – 2:17:42 (Amsterdam 2021)
References
External links
Living people
1991 births
Place of birth missing (living people)
Danish male middle-distance runners
Danish male long-distance runners
Competitors at the 2015 Summer Universiade
21st-century Danish people
|
The Royal Society of Portrait Painters is a charity based at Carlton House Terrace, SW1, London that promotes the practice and appreciation of portraiture art.
Its Annual Exhibition of portraiture is held at Mall Galleries, and it runs a year-round commissions service for those wanting a portrait. Activities include artist prizes and awards, demonstrations, workshops, debates and talks.
The Society is a member of the Federation of British Artists.
History
The Royal Society of Portrait Painters was founded in 1891 by the leading portrait painters of the day. Being dissatisfied with the selection policies of the Royal Academy for its annual exhibition in London, they formed a new body to be concerned solely with portrait painting.
The first exhibition of the society was held in 1891. The catalogue of that exhibition shows that its committee then consisted of Archibald Stuart-Wortley (Chairman), Hon. John Collier, Arthur Hacker, G. P. Jacomb-Hood, S.J. Solomon, James Jebusa Shannon and Hubert Vos. The other members listed were Percy Bigland, C. A. Furse, Glazebrook, John McLure Hamilton, Heywood Hardy, Hubert von Herkomer, Henry J. Hudson, Louise Jopling, T. B. Kennington, W. Llewellyn, W. M. Loudan, Arthur Melville, Anna Lea Merritt, F. M. Skipworth, Mrs Annie Swynnerton, W. R. Symonds, Mary Waller, Edwin A. Ward, Leslie Ward (better known as "Spy"), and T. Blake Wirgman.
Other early members included Sir John Everett Millais, George Frederick Watts, John Singer Sargent, Augustus John and James McNeill Whistler. Women were eligible for membership from the start.
At the Coronation Exhibition of 1911, which marked its 20th anniversary, it was announced that King George V had conferred on the society the status of a Royal Society, and it has been known as the Royal Society of Portrait Painters since then.
The Times said on 22 April 2006:
Activities
Portrait Commissions Service
To encourage the genre of portrait painting, they offer a portrait commissions service.
Annual Exhibition
The Society holds an Annual Exhibition, which takes place every year at the Mall Galleries, The Mall, by Trafalgar Square, London.
A major showcase for some 200 recent portraits, it is the largest contemporary portrait exhibition in the UK. It comprises work by distinguished members alongside works by non-member artists who have successfully competed for inclusion in the show. Unlike other shows, the works are selected entirely by portrait painters. The exhibition aims to include the best of a wide variety of styles in painted and drawn media.
Prizes and awards
Many prizes are awarded via the Annual Exhibition. These include the William Lock Portrait Prize (worth £20,000), the Ondaatje Prize for Portraiture, the Prince of Wales Award for Portrait Drawing, the Burke's Peerage Foundation Prize, the de Laszlo Foundation Prize, and the Smallwood Architects Prize.
Learning
The Society also holds portrait demonstrations, workshops, talks and tours.
Collections
The People's Portrait Collection
The People's Portraits Collection owned by the society was founded in 2000 as a millennial exhibition. The idea was to represent ordinary people from all walks of life, and thereby offer a picture of the United Kingdom as it moved from the 20th century into the 21st. Each portrait is donated by a Member and Members continue to add to this collection.
The Collection has been housed at Girton College (one of the 31 constituent colleges of the University of Cambridge) since 2002 as a long-term loan, and is open to the public every day.
Membership
Members
Adams PPRP (Hon Archivist, Treasurer)
Frances Bell RP
Jane Bond RP NEAC
Jason Bowyer RP PPNEAC
Paul Brason RP
Keith Breeden RP
Peter Brown RP NEAC PS ROI Hon. RBA
George Bruce (Past President)
David Caldwell RP
Tom Coates RP PPNEAC PPPS RWS
David Cobley RP NEAC RWA
Anthony Connolly RP (President)
Frank Cadogan Cowper RA RP RWS
Saied Dai RP NEAC
Sam Dalby RP
Simon Davis VPRP RBSA (Hon Secretary)
Frederick Deane RP
Andrew Festing PPRP MBE
Richard Foster PRP (Past President)
David Graham RP
Valeriy Gridnev RP PS ROI
Herbert James Gunn RA RP (Past President)
Robin-Lee Hall (Past President)
James Hague RP
Geoffrey Hayzer RP
Emma Hopkins RP
Sheldon Hutchinson RP
Andrew James RP
Brendan Kelly RP
Peter Kuhfeld RP NEAC
June Mendoza AO OBE RP ROI Hon. SWA
Anthony Morris RP NEAC
Tom Phillips RA Hon. RP
Anastasia Pollard RP
David Poole PPRP ARCA
Mark Roscoe RP
Susan Ryder RP NEAC
Tai-Shan Schierenberg Hon. RP
Melissa Scott-Miller RP NEAC
Stephen Shankland RP
Jeff Stultiens RP
Benjamin Sullivan RP NEAC
Jason Sullivan RP
Michael Taylor RP
Daphne Todd OBE PPRP NEAC
Jason Walker RP
John Walton RP
Emma Wesley RP
Toby Wiggins RP
Antony Williams RP PS NEAC
John Wonnacott Hon. RP CBE
Neale Worley RP NEAC
Robbie Wraith RP
Martin Yeoman RP NEAC
References
External links
Website of the Royal Society of Portrait Painters
1891 establishments in England
Art societies
Charities based in London
Organisations based in the City of Westminster
|
The 2013 NCAA Division I men's basketball tournament was a single-elimination tournament that involved 68 teams playing to determine the national champion of men's NCAA Division I college basketball. It began on March 19, 2013, and concluded with the championship game on April 8, 2013, at the Georgia Dome in Atlanta. This was the 75th edition of the NCAA Men's Basketball Championship, dating to 1939.
The Final Four consisted of Louisville, Wichita State (second appearance), Syracuse (first appearance since their 2003 national championship), and Michigan, returning for the first time since the Fab Five's second appearance in 1993 (later vacated). By winning the West Region, Wichita State became the first #9 seed and first Missouri Valley Conference (MVC) team to reach the Final Four since the tournament expanded to 64 teams in 1985. The last #9 seed to reach the Final Four was Penn, and the last MVC team to do so was Indiana State, both in 1979.
Louisville defeated Michigan in the championship game by a final score of 82–76, winning their first national title since 1986. On February 20, 2018, the NCAA vacated Louisville's entire tournament run, including its national title, due to a 2015 sex scandal.
The tournament featured several notable upsets. For the first time since 1991, every seed from #9 through #15 won at least one game in the tournament. The most notable was Florida Gulf Coast University of the Atlantic Sun Conference, who made their tournament debut in only their second year of Division I eligibility. They upset Georgetown and San Diego State in their first two games, becoming the first #15 seed to advance to the regional semifinals (where they were defeated by Florida). For the first time since 2010, a #14 seed won as Harvard defeated New Mexico in the West Region. The same region saw #13 La Salle, who won in the opening round, defeat #4 Kansas State and #12 Mississippi defeat #5 Wisconsin. In addition to that, the region's top seed, Gonzaga, was defeated in the round of 32 by eventual region winner Wichita State, who defeated La Salle in the Sweet Sixteen.
Two other teams also earned their first ever NCAA Tournament victory: Ivy League champion Harvard and Mid-Eastern Athletic Conference (MEAC) champion North Carolina A&T.
Schedule and venues
The following are the sites selected to host each round of the 2013 tournament:
First Four
March 19 and 20
University of Dayton Arena, Dayton, Ohio (Host: University of Dayton)
Second and third rounds
March 21 and 23
The Palace of Auburn Hills, Auburn Hills, Michigan (Host: Oakland University)
Rupp Arena, Lexington, Kentucky (Host: University of Kentucky)
EnergySolutions Arena, Salt Lake City, Utah (Host: University of Utah)
HP Pavilion, San Jose, California (Host: West Coast Conference)
March 22 and 24
University of Dayton Arena, Dayton, Ohio (Host: University of Dayton)
Frank Erwin Center, Austin, Texas (Host: University of Texas at Austin)
Sprint Center, Kansas City, Missouri (Host: Missouri Valley Conference)
Wells Fargo Center, Philadelphia, Pennsylvania (Host: Temple University)
Regional semifinals and Finals
March 28 and 30
East Regional, Verizon Center, Washington, D.C. (Host: Georgetown University)
West Regional, Staples Center, Los Angeles, California (Host: Pepperdine University)
March 29 and 31
Midwest Regional, Lucas Oil Stadium, Indianapolis, Indiana (Hosts: IUPUI, Horizon League)
South Regional, Cowboys Stadium, Arlington, Texas (Host: Big 12 Conference)
National semifinals and championship (Final Four and championship)
April 6 and 8
Georgia Dome, Atlanta, Georgia (Host: Georgia Institute of Technology)
Qualified teams
Automatic qualifiers
The following teams were automatic qualifiers for the 2013 NCAA field by virtue of winning their conference's tournament (except for the Ivy League, whose regular-season champion received the automatic bid).
Tournament seeds
*See First Four.
Bracket
* – Denotes overtime period
Unless otherwise noted, all times listed are Eastern Daylight Time (UTC−04)
First Four – Dayton, Ohio
The First Four games involved eight teams: the four overall lowest-ranked teams, and the four lowest-ranked at-large teams.
Midwest Regional – Indianapolis, Indiana
Midwest Regional all-tournament team
Regional all-tournament team: Seth Curry, Duke; Gorgui Dieng, Louisville; Mason Plumlee, Duke; Peyton Siva, Louisville
Regional most outstanding player: Russ Smith, Louisville
West Regional – Los Angeles, California
West Regional all-tournament team
Regional all-tournament team: Carl Hall, Wichita State; Mark Lyons, Arizona; LaQuinton Ross, Ohio State; Deshaun Thomas, Ohio State
Regional most outstanding player: Malcolm Armstead, Wichita State
South Regional – Arlington, Texas
South Regional all-tournament team
Regional all-tournament team: Mitch McGary, Michigan; Ben McLemore, Kansas; Mike Rosario, Florida; Nik Stauskas, Michigan
Regional most outstanding player: Trey Burke, Michigan
East Regional – Washington, D.C.
East Regional all-tournament team
Regional all-tournament team: Vander Blue, Marquette; C. J. Fair, Syracuse; Davante Gardner, Marquette; James Southerland, Syracuse
Regional most outstanding player: Michael Carter-Williams, Syracuse
Final Four – Georgia Dome, Atlanta, Georgia
During the Final Four round, the champion of the region containing the No. 1 overall seed was to play the champion of the region containing the fourth-ranked No. 1 seed, while the champion of the region with the second-ranked No. 1 seed was to play the champion of the region with the third-ranked No. 1 seed. Louisville (placed in the Midwest Regional) was selected as the top overall seed, and Gonzaga (in the West Regional) was named the fourth-ranked No. 1 seed. Thus, the Midwest champion played the West champion in one semifinal game, and the South champion faced the East champion in the other semifinal game.
Wichita State surprised the college basketball world by reaching the Final Four from the West region. They lost to Louisville in the first semifinal game, 72–68. Michigan defeated Syracuse 61–56 in the second semifinal.
On February 20, 2018, the NCAA vacated Louisville's 2013 title and required the school to return its share of tournament revenue.
Final Four all-tournament team
Final Four all-tournament team: Spike Albrecht, Michigan; Trey Burke, Michigan; Mitch McGary, Michigan; Cleanthony Early, Wichita State; Peyton Siva, Louisville; Luke Hancock, Louisville; Chane Behanan, Louisville;
Final Four most outstanding player: Luke Hancock, Louisville (the first non-starter to earn this title)
Game summaries
Elite Eight
Final Four
National Championship
Louisville defeated Michigan 82–76 in the championship game. The win gave Louisville its first championship since 1986, and its third overall, making it the eighth school to win at least three championships; the title was later vacated by the NCAA on February 20, 2018, due to a 2015 sex scandal.
Head coach Rick Pitino became the first coach to win an NCAA championship with two different schools. Michigan fell to 1–5 all time in championship games (including two losses vacated because of sanctions against the university).
Michigan's Trey Burke scored seven quick points to get Michigan out to a 7–3 lead, but also picked up two quick fouls and sat during much of the first half. With Burke on the bench, Michigan got a spark from freshman Spike Albrecht, a minor role player during the regular season. Albrecht hit four straight 3-pointers en route to a 17-point first half performance, easily surpassing his previous single game best of 7. Louisville trailed Michigan 35–23 late in the first half, before going on a run fueled by four straight three-pointers by Luke Hancock. At halftime, Michigan led 38–37.
The second half featured several lead changes before Louisville pushed the margin to 10 on a three-pointer by Hancock with 3:20 remaining in the game. Michigan fought back, closing the gap to four points in the last minute, but ran out of time in its comeback effort.
Hancock hit all five three-point shots he attempted in the game and led Louisville with 22 points, while teammate Peyton Siva scored 18 and had a game high 4 steals. Chane Behanan pulled down 12 rebounds to go with 15 points. Burke led Michigan with 24 points. Russ Smith, Louisville's leading scorer on the season, struggled in the game, shooting 3-for-16. Hancock was named as the game's most outstanding player.
Record by conference
The R64, R32, S16, E8, F4, CG, and NC columns indicate how many teams from each conference were in the round of 64 (second round), round of 32 (third round), Sweet 16, Elite Eight, Final Four, championship game, and national champion, respectively.
The Big South and NEC each had one representative, eliminated in the first round with a record of 0–1.
The America East Conference, Big Sky, Big West, Horizon League, MAAC, MAC, OVC, Patriot League, Southern Conference, Southland Conference, Summit League, SWAC, and WAC each had one representative, eliminated in the second round with a record of 0–1.
The Sun Belt Conference had two representatives, one eliminated in the first round and the other in the second round, with a record of 0–2.
Other events surrounding the tournament
On May 10, 2012, the NCAA announced that as part of the celebration of the 75th Division I tournament, it would hold all three of its men's basketball championship games in Atlanta. The finals of the Division II and Division III tournaments were held at Philips Arena on April 7, the day between the Division I semifinals and final. In addition, Atlanta-based tournament broadcaster TBS announced that Conan O'Brien would tape his Conan talk show at The Tabernacle, located a few blocks from the Georgia Dome and Philips Arena, in the week leading up to the Final Four. March Madness studio analyst Charles Barkley and Dick Vitale were among the guests who appeared.
Media
U.S. television
The year 2013 marked the third year of a 14-year partnership between CBS and Turner cable networks TBS, TNT and truTV to cover the entire tournament under the NCAA March Madness banner. CBS aired the Final Four and championship rounds for the 32nd consecutive year. The tournament was considered a ratings success. Tournament games averaged 10.7 million viewers, and the championship game garnered an average of 23.4 million viewers and a peak viewership of 27.1 million.
Studio hosts
Greg Gumbel (New York City and Atlanta) – second round, third round, regionals, Final Four and national championship game
Ernie Johnson Jr. (New York City and Atlanta) – First Four, second round, third round and Regional Semi-Finals
Matt Winer (Atlanta) – First Four, second round and third round
Studio analysts
Greg Anthony (New York City and Atlanta) – First Four, second round, third round, regionals, Final Four and national championship game
Charles Barkley (New York City and Atlanta) – First Four, second round, third round, regionals, Final Four and national championship game
Rex Chapman (Atlanta) – First Four and Second Round
Seth Davis (Atlanta) – First Four, second round, third round and Regional Semi-Finals
Jamie Dixon (Atlanta) – third round
Doug Gottlieb (New York City and Atlanta) – Regionals, Final Four and national championship game
Kenny Smith (New York City and Atlanta) – second round, third round, regionals, Final Four and national championship game
Steve Smith (Atlanta) – First Four, second round, third round and regional semi-finals
Jay Wright (Atlanta) – Regional semi-finals
Commentary teams
Jim Nantz/Clark Kellogg/Steve Kerr/Tracy Wolfson – First Four at Dayton, Ohio; Second and third round at Dayton, Ohio; Midwest Regional at Indianapolis, Indiana; Final Four at Atlanta, Georgia (Kerr joined Nantz and Kellogg for the Final Four and national championship games)
Marv Albert/Steve Kerr/Craig Sager – First Four at Dayton, Ohio; Second and third round at Kansas City, Missouri; South Regional at Arlington, Texas
Verne Lundquist/Bill Raftery/Rachel Nichols – Second and third round at Auburn Hills, Michigan; East Regional at Washington, D.C.
Kevin Harlan/Len Elmore/Reggie Miller/Lewis Johnson – Second and third round at Philadelphia, Pennsylvania; West Regional at Los Angeles, California
Ian Eagle/Jim Spanarkel/Allie LaForce – Second and third round at Lexington, Kentucky
Brian Anderson/Dan Bonner/Marty Snider – Second and third round at San Jose, California
Tim Brando/Mike Gminski/Otis Livingston – Second and third round at Austin, Texas
Spero Dedes/Doug Gottlieb/Jaime Maggio – Second and third round at Salt Lake City, Utah
Radio
Dial Global Sports (formerly Westwood One) and SiriusXM carried live broadcasts of all 67 games.
First Four
Brad Sham and Kyle Macy – at Dayton, Ohio
Second and third rounds
Tom McCarthy and Kelly Tripucka – Second and third round at Auburn Hills, Michigan
Kevin Kugler and Jamal Mashburn – Second and third round at Lexington, Kentucky
Dave Sims and Kevin Grevey – Second and third round at Salt Lake City, Utah
Ted Robinson and Bill Frieder – Second and third round at San Jose, California
Gary Cohen and Pete Gillen – Second and third round at Dayton, Ohio
Wayne Larrivee and Reid Gettys – Second and third round at Austin, Texas
Kevin Calabro and Will Perdue – Second and third round at Kansas City, Missouri
Scott Graham and John Thompson – Second and third round at Philadelphia, Pennsylvania
Regionals
Ian Eagle and John Thompson – East Regional at Washington, D.C.
Kevin Kugler and Pete Gillen – Midwest Regional at Indianapolis, Indiana
Brad Sham and Fran Fraschilla – South Regional at Arlington, Texas
Wayne Larrivee and Bill Frieder – West Regional at Los Angeles, California
Final Four
Kevin Kugler, John Thompson and Bill Raftery – Atlanta, Georgia
Local radio
Matt Shephard and David Merritt – Michigan: WWJ (Detroit) and WWWW (Ann Arbor)
Paul Rogers and Bob Valvano – Louisville: WHAS (Louisville) and WWRW (Lexington)
International
ESPN International held broadcast rights to the tournament outside of the United States: it produced its own broadcasts of the semifinals and championship game, called by ESPN College Basketball personalities Brad Nessler (play-by-play), Dick Vitale (analyst for the final and one semifinal), and Jay Bilas (analyst for the other semifinal). For the initial rounds, it used CBS/Turner coverage with an additional host to transition between games, with whiparound coverage similar to the CBS-only era. ESPN also held exclusive digital rights to the NCAA tournament outside of North America.
Canada
In Canada, the TSN family of media outlets (including TSN2, RDS, and TSN Radio), which are part-owned by ESPN, held broadcast rights to the tournament. TSN produced separate studio coverage with Kate Beirness, Jack Armstrong, Dan Shulman and Sam Mitchell, but simulcast CBS/Turner game coverage for the first five rounds (and ESPN International coverage for the Final Four).
As in past years, TSN and TSN2 carried whiparound coverage (often in parallel) during the second, third and fourth rounds, in 2013 focusing when possible on games not being broadcast on CBS (as that network, but not the Turner channels, is also widely available in Canada).
See also
2013 NCAA Division II men's basketball tournament
2013 NCAA Division III men's basketball tournament
2013 NCAA Division I women's basketball tournament
2013 NCAA Division II women's basketball tournament
2013 NCAA Division III women's basketball tournament
2013 National Invitation Tournament
2013 Women's National Invitation Tournament
2013 NAIA Division I men's basketball tournament
2013 NAIA Division II men's basketball tournament
2013 NAIA Division I women's basketball tournament
2013 NAIA Division II women's basketball tournament
2013 College Basketball Invitational
2013 CollegeInsider.com Postseason Tournament
Notes
References
NCAA tournament
NCAA Division I men's basketball tournament
Basketball competitions in Atlanta
College basketball tournaments in Georgia (U.S. state)
Basketball competitions in Austin, Texas
Basketball in the Dallas–Fort Worth metroplex
|
The following is a list of the winners of the World Women's Curling Championships since the inception of the championships in 1979.
Medallists
All-time medal table
As of 2023 World Championships
Performance timeline
See also
List of World Men's Curling Champions
List of World Mixed Doubles Curling Champions
List of Olympic medalists in curling
List of Paralympic medalists in wheelchair curling
Notes
Bronze medals were only awarded from 1985. Table shows third-place finishers before then.
1989–1994: Two bronze medals were awarded.
References
World Curling champions
Curling-related lists
World champions
Curling
|
Relax Your Mind is the debut and only album by the folk duo Jon & Alun, who five years later founded the short-lived late-1960s English rock band Sweet Thursday. Jon Mark is best known for his records with Marianne Faithfull, John Mayall and Mark-Almond. Alun Davies became Cat Stevens's guitarist.
Track listing
Side One
"Relax Your Mind" (Burchell, Davies)
"Walk to the Gallows" (Burchell, Davies)
"I'm My Own Grandpa" (Dwight Latham, Moe Jaffe)
"The Poor Fool’s Blues" (Burchell, Davies)
"Black is the Colour" (Traditional; arranged by Shel Talmy, Stone)
"Easy Rambler" (Burchell, Davies)
"I Never Will Marry" (Traditional; arranged by Shel Talmy, Stone)
Side Two
"Alberta" (Traditional; arranged by Shel Talmy, Stone)
"John B." (Traditional; arranged by Shel Talmy, Stone)
"The Song of the Salvation Army" (Traditional; arranged by Shel Talmy, Stone)
"Lone Green Valley" (Traditional; arranged by Shel Talmy, Stone)
"The Way of Life" (Burchell, Davies)
"Sinking of the Reuben James" (Woody Guthrie)
Personnel
Alun Davies - guitar, vocals
Jon Mark - guitar, vocals
with:
Judd Proctor - banjo, rhythm guitar
Big Jim Sullivan - electric guitar, twelve-string guitar
Arthur Watts - bass
Shel Talmy: Producer
External links
Listen to: Jon Mark & Alun Davies: Alberta.
Discogs: Jon & Alun: Relax Your Mind
Murphy Anderson: Relax Your Mind With Jon & Alun
1963 debut albums
Albums produced by Shel Talmy
Decca Records albums
|
```c++
//
// Aspia Project
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <path_to_url
//
#include "common/clipboard_x11.h"
#include "base/logging.h"
#include "base/files/file_descriptor_watcher_posix.h"
#include "base/message_loop/message_loop.h"
#include "base/message_loop/message_pump_asio.h"
#include "base/x11/x_server_clipboard.h"
namespace common {
//--------------------------------------------------------------------------
ClipboardX11::ClipboardX11()
{
// Nothing
}
//--------------------------------------------------------------------------
ClipboardX11::~ClipboardX11()
{
if (display_)
{
XCloseDisplay(display_);
display_ = nullptr;
}
}
//--------------------------------------------------------------------------
void ClipboardX11::init()
{
display_ = XOpenDisplay(nullptr);
if (!display_)
{
LOG(LS_ERROR) << "Couldn't open X display";
return;
}
x_server_clipboard_ = std::make_unique<base::XServerClipboard>();
x_server_clipboard_->init(
display_, std::bind(&ClipboardX11::onData, this, std::placeholders::_1));
x_connection_watcher_ = std::make_unique<base::FileDescriptorWatcher>();
x_connection_watcher_->startWatching(
ConnectionNumber(display_),
base::FileDescriptorWatcher::Mode::WATCH_READ,
std::bind(&ClipboardX11::pumpXEvents, this));
pumpXEvents();
}
//--------------------------------------------------------------------------
void ClipboardX11::setData(const std::string& data)
{
if (x_server_clipboard_)
x_server_clipboard_->setClipboard(data);
}
//--------------------------------------------------------------------------
void ClipboardX11::pumpXEvents()
{
DCHECK(display_ && x_server_clipboard_);
while (XPending(display_))
{
XEvent event;
XNextEvent(display_, &event);
x_server_clipboard_->processXEvent(&event);
}
}
} // namespace common
```
|
```c
/*
*
*/
#define DT_DRV_COMPAT nxp_imx_lpi2c
#include <errno.h>
#include <zephyr/drivers/i2c.h>
#include <zephyr/drivers/clock_control.h>
#include <zephyr/kernel.h>
#include <zephyr/irq.h>
#include <fsl_lpi2c.h>
#if CONFIG_NXP_LP_FLEXCOMM
#include <zephyr/drivers/mfd/nxp_lp_flexcomm.h>
#endif
#include <zephyr/drivers/pinctrl.h>
#ifdef CONFIG_I2C_MCUX_LPI2C_BUS_RECOVERY
#include "i2c_bitbang.h"
#include <zephyr/drivers/gpio.h>
#endif /* CONFIG_I2C_MCUX_LPI2C_BUS_RECOVERY */
#include <zephyr/logging/log.h>
LOG_MODULE_REGISTER(mcux_lpi2c);
#include "i2c-priv.h"
/* Wait for the duration of 12 bits to detect a NAK after a bus
* address scan. (10 appears sufficient, 20% safety factor.)
*/
#define SCAN_DELAY_US(baudrate) (12 * USEC_PER_SEC / baudrate)
/* Required by DEVICE_MMIO_NAMED_* macros */
#define DEV_CFG(_dev) \
((const struct mcux_lpi2c_config *)(_dev)->config)
#define DEV_DATA(_dev) ((struct mcux_lpi2c_data *)(_dev)->data)
struct mcux_lpi2c_config {
DEVICE_MMIO_NAMED_ROM(reg_base);
#ifdef CONFIG_NXP_LP_FLEXCOMM
const struct device *parent_dev;
#endif
const struct device *clock_dev;
clock_control_subsys_t clock_subsys;
void (*irq_config_func)(const struct device *dev);
uint32_t bitrate;
uint32_t bus_idle_timeout_ns;
const struct pinctrl_dev_config *pincfg;
#ifdef CONFIG_I2C_MCUX_LPI2C_BUS_RECOVERY
struct gpio_dt_spec scl;
struct gpio_dt_spec sda;
#endif /* CONFIG_I2C_MCUX_LPI2C_BUS_RECOVERY */
};
struct mcux_lpi2c_data {
DEVICE_MMIO_NAMED_RAM(reg_base);
lpi2c_master_handle_t handle;
struct k_sem lock;
struct k_sem device_sync_sem;
status_t callback_status;
#ifdef CONFIG_I2C_TARGET
lpi2c_slave_handle_t target_handle;
struct i2c_target_config *target_cfg;
bool target_attached;
bool first_tx;
bool read_active;
bool send_ack;
#endif
};
static int mcux_lpi2c_configure(const struct device *dev,
uint32_t dev_config_raw)
{
const struct mcux_lpi2c_config *config = dev->config;
struct mcux_lpi2c_data *data = dev->data;
LPI2C_Type *base = (LPI2C_Type *)DEVICE_MMIO_NAMED_GET(dev, reg_base);
uint32_t clock_freq;
uint32_t baudrate;
int ret;
if (!(I2C_MODE_CONTROLLER & dev_config_raw)) {
return -EINVAL;
}
if (I2C_ADDR_10_BITS & dev_config_raw) {
return -EINVAL;
}
switch (I2C_SPEED_GET(dev_config_raw)) {
case I2C_SPEED_STANDARD:
baudrate = KHZ(100);
break;
case I2C_SPEED_FAST:
baudrate = KHZ(400);
break;
case I2C_SPEED_FAST_PLUS:
baudrate = MHZ(1);
break;
default:
return -EINVAL;
}
if (clock_control_get_rate(config->clock_dev, config->clock_subsys,
&clock_freq)) {
return -EINVAL;
}
ret = k_sem_take(&data->lock, K_FOREVER);
if (ret) {
return ret;
}
LPI2C_MasterSetBaudRate(base, clock_freq, baudrate);
k_sem_give(&data->lock);
return 0;
}
static void mcux_lpi2c_master_transfer_callback(LPI2C_Type *base,
lpi2c_master_handle_t *handle,
status_t status, void *userData)
{
struct mcux_lpi2c_data *data = userData;
ARG_UNUSED(handle);
ARG_UNUSED(base);
data->callback_status = status;
k_sem_give(&data->device_sync_sem);
}
static uint32_t mcux_lpi2c_convert_flags(int msg_flags)
{
uint32_t flags = 0U;
if (!(msg_flags & I2C_MSG_STOP)) {
flags |= kLPI2C_TransferNoStopFlag;
}
if (msg_flags & I2C_MSG_RESTART) {
flags |= kLPI2C_TransferRepeatedStartFlag;
}
return flags;
}
static int mcux_lpi2c_transfer(const struct device *dev, struct i2c_msg *msgs,
uint8_t num_msgs, uint16_t addr)
{
const struct mcux_lpi2c_config *config = dev->config;
struct mcux_lpi2c_data *data = dev->data;
LPI2C_Type *base = (LPI2C_Type *)DEVICE_MMIO_NAMED_GET(dev, reg_base);
lpi2c_master_transfer_t transfer;
status_t status;
int ret = 0;
ret = k_sem_take(&data->lock, K_FOREVER);
if (ret) {
return ret;
}
/* Iterate over all the messages */
for (int i = 0; i < num_msgs; i++) {
if (I2C_MSG_ADDR_10_BITS & msgs->flags) {
ret = -ENOTSUP;
break;
}
/* Initialize the transfer descriptor */
transfer.flags = mcux_lpi2c_convert_flags(msgs->flags);
/* Prevent the controller from sending a start condition
 * between messages, unless explicitly requested.
 */
if (i != 0 && !(msgs->flags & I2C_MSG_RESTART)) {
transfer.flags |= kLPI2C_TransferNoStartFlag;
}
transfer.slaveAddress = addr;
transfer.direction = (msgs->flags & I2C_MSG_READ)
? kLPI2C_Read : kLPI2C_Write;
transfer.subaddress = 0;
transfer.subaddressSize = 0;
transfer.data = msgs->buf;
transfer.dataSize = msgs->len;
/* Start the transfer */
status = LPI2C_MasterTransferNonBlocking(base,
&data->handle, &transfer);
/* Return an error if the transfer didn't start successfully
* e.g., if the bus was busy
*/
if (status != kStatus_Success) {
LPI2C_MasterTransferAbort(base, &data->handle);
ret = -EIO;
break;
}
/* Wait for the transfer to complete */
k_sem_take(&data->device_sync_sem, K_FOREVER);
/* Return an error if the transfer didn't complete
* successfully. e.g., nak, timeout, lost arbitration
*/
if (data->callback_status != kStatus_Success) {
LPI2C_MasterTransferAbort(base, &data->handle);
ret = -EIO;
break;
}
if (msgs->len == 0) {
k_busy_wait(SCAN_DELAY_US(config->bitrate));
if (0 != (base->MSR & LPI2C_MSR_NDF_MASK)) {
LPI2C_MasterTransferAbort(base, &data->handle);
ret = -EIO;
break;
}
}
/* Move to the next message */
msgs++;
}
k_sem_give(&data->lock);
return ret;
}
#if CONFIG_I2C_MCUX_LPI2C_BUS_RECOVERY
static void mcux_lpi2c_bitbang_set_scl(void *io_context, int state)
{
const struct mcux_lpi2c_config *config = io_context;
gpio_pin_set_dt(&config->scl, state);
}
static void mcux_lpi2c_bitbang_set_sda(void *io_context, int state)
{
const struct mcux_lpi2c_config *config = io_context;
gpio_pin_set_dt(&config->sda, state);
}
static int mcux_lpi2c_bitbang_get_sda(void *io_context)
{
const struct mcux_lpi2c_config *config = io_context;
return gpio_pin_get_dt(&config->sda) == 0 ? 0 : 1;
}
static int mcux_lpi2c_recover_bus(const struct device *dev)
{
const struct mcux_lpi2c_config *config = dev->config;
struct mcux_lpi2c_data *data = dev->data;
struct i2c_bitbang bitbang_ctx;
struct i2c_bitbang_io bitbang_io = {
.set_scl = mcux_lpi2c_bitbang_set_scl,
.set_sda = mcux_lpi2c_bitbang_set_sda,
.get_sda = mcux_lpi2c_bitbang_get_sda,
};
uint32_t bitrate_cfg;
int error = 0;
if (!gpio_is_ready_dt(&config->scl)) {
LOG_ERR("SCL GPIO device not ready");
return -EIO;
}
if (!gpio_is_ready_dt(&config->sda)) {
LOG_ERR("SDA GPIO device not ready");
return -EIO;
}
k_sem_take(&data->lock, K_FOREVER);
error = gpio_pin_configure_dt(&config->scl, GPIO_OUTPUT_HIGH);
if (error != 0) {
LOG_ERR("failed to configure SCL GPIO (err %d)", error);
goto restore;
}
error = gpio_pin_configure_dt(&config->sda, GPIO_OUTPUT_HIGH);
if (error != 0) {
LOG_ERR("failed to configure SDA GPIO (err %d)", error);
goto restore;
}
i2c_bitbang_init(&bitbang_ctx, &bitbang_io, (void *)config);
bitrate_cfg = i2c_map_dt_bitrate(config->bitrate) | I2C_MODE_CONTROLLER;
error = i2c_bitbang_configure(&bitbang_ctx, bitrate_cfg);
if (error != 0) {
LOG_ERR("failed to configure I2C bitbang (err %d)", error);
goto restore;
}
error = i2c_bitbang_recover_bus(&bitbang_ctx);
if (error != 0) {
LOG_ERR("failed to recover bus (err %d)", error);
goto restore;
}
restore:
(void)pinctrl_apply_state(config->pincfg, PINCTRL_STATE_DEFAULT);
k_sem_give(&data->lock);
return error;
}
#endif /* CONFIG_I2C_MCUX_LPI2C_BUS_RECOVERY */
#ifdef CONFIG_I2C_TARGET
static void mcux_lpi2c_slave_irq_handler(const struct device *dev)
{
struct mcux_lpi2c_data *data = dev->data;
LPI2C_Type *base = (LPI2C_Type *)DEVICE_MMIO_NAMED_GET(dev, reg_base);
const struct i2c_target_callbacks *target_cb = data->target_cfg->callbacks;
int ret;
uint32_t flags;
uint8_t i2c_data;
/* Note: the HAL provides a callback-based I2C slave API, but
 * the API expects the user to provide a transmit buffer of
 * a fixed length at the first byte received, and will not signal
 * the user callback until this buffer is exhausted. This does not
 * work well with the Zephyr API, which requires callbacks for
 * every byte. For this reason, we handle the LPI2C IRQ
 * directly.
 */
flags = LPI2C_SlaveGetStatusFlags(base);
if (flags & kLPI2C_SlaveAddressValidFlag) {
/* Read Slave address to clear flag */
LPI2C_SlaveGetReceivedAddress(base);
data->first_tx = true;
/* Reset to sending ACK, in case we NAK'ed before */
data->send_ack = true;
}
if (flags & kLPI2C_SlaveRxReadyFlag) {
/* RX data is available, read it and issue callback */
i2c_data = (uint8_t)base->SRDR;
if (data->first_tx) {
data->first_tx = false;
if (target_cb->write_requested) {
ret = target_cb->write_requested(data->target_cfg);
if (ret < 0) {
/* NAK further bytes */
data->send_ack = false;
}
}
}
if (target_cb->write_received) {
ret = target_cb->write_received(data->target_cfg,
i2c_data);
if (ret < 0) {
/* NAK further bytes */
data->send_ack = false;
}
}
}
if (flags & kLPI2C_SlaveTxReadyFlag) {
/* Space is available in TX fifo, issue callback and write out */
if (data->first_tx) {
data->read_active = true;
data->first_tx = false;
if (target_cb->read_requested) {
ret = target_cb->read_requested(data->target_cfg,
&i2c_data);
if (ret < 0) {
/* Disable TX */
data->read_active = false;
} else {
/* Send I2C data */
base->STDR = i2c_data;
}
}
} else if (data->read_active) {
if (target_cb->read_processed) {
ret = target_cb->read_processed(data->target_cfg,
&i2c_data);
if (ret < 0) {
/* Disable TX */
data->read_active = false;
} else {
/* Send I2C data */
base->STDR = i2c_data;
}
}
}
}
if (flags & kLPI2C_SlaveStopDetectFlag) {
LPI2C_SlaveClearStatusFlags(base, flags);
if (target_cb->stop) {
target_cb->stop(data->target_cfg);
}
}
if (flags & kLPI2C_SlaveTransmitAckFlag) {
LPI2C_SlaveTransmitAck(base, data->send_ack);
}
}
static int mcux_lpi2c_target_register(const struct device *dev,
struct i2c_target_config *target_config)
{
const struct mcux_lpi2c_config *config = dev->config;
struct mcux_lpi2c_data *data = dev->data;
LPI2C_Type *base = (LPI2C_Type *)DEVICE_MMIO_NAMED_GET(dev, reg_base);
lpi2c_slave_config_t slave_config;
uint32_t clock_freq;
LPI2C_MasterDeinit(base);
/* Get the clock frequency */
if (clock_control_get_rate(config->clock_dev, config->clock_subsys,
&clock_freq)) {
return -EINVAL;
}
if (!target_config) {
return -EINVAL;
}
if (data->target_attached) {
return -EBUSY;
}
data->target_attached = true;
data->target_cfg = target_config;
data->first_tx = false;
LPI2C_SlaveGetDefaultConfig(&slave_config);
slave_config.address0 = target_config->address;
/* Note: this setting enables clock stretching to allow the
 * slave to respond to each byte with an ACK/NAK.
 * This behavior may cause issues with some I2C controllers.
 */
slave_config.sclStall.enableAck = true;
LPI2C_SlaveInit(base, &slave_config, clock_freq);
/* Clear all flags. */
LPI2C_SlaveClearStatusFlags(base, (uint32_t)kLPI2C_SlaveClearFlags);
/* Enable interrupt */
LPI2C_SlaveEnableInterrupts(base,
(kLPI2C_SlaveTxReadyFlag |
kLPI2C_SlaveRxReadyFlag |
kLPI2C_SlaveStopDetectFlag |
kLPI2C_SlaveAddressValidFlag |
kLPI2C_SlaveTransmitAckFlag));
return 0;
}
static int mcux_lpi2c_target_unregister(const struct device *dev,
struct i2c_target_config *target_config)
{
struct mcux_lpi2c_data *data = dev->data;
LPI2C_Type *base = (LPI2C_Type *)DEVICE_MMIO_NAMED_GET(dev, reg_base);
if (!data->target_attached) {
return -EINVAL;
}
data->target_cfg = NULL;
data->target_attached = false;
LPI2C_SlaveDeinit(base);
return 0;
}
#endif /* CONFIG_I2C_TARGET */
static void mcux_lpi2c_isr(const struct device *dev)
{
struct mcux_lpi2c_data *data = dev->data;
LPI2C_Type *base = (LPI2C_Type *)DEVICE_MMIO_NAMED_GET(dev, reg_base);
#ifdef CONFIG_I2C_TARGET
if (data->target_attached) {
mcux_lpi2c_slave_irq_handler(dev);
}
#endif /* CONFIG_I2C_TARGET */
#if CONFIG_HAS_MCUX_FLEXCOMM
LPI2C_MasterTransferHandleIRQ(LPI2C_GetInstance(base), &data->handle);
#else
LPI2C_MasterTransferHandleIRQ(base, &data->handle);
#endif
}
static int mcux_lpi2c_init(const struct device *dev)
{
const struct mcux_lpi2c_config *config = dev->config;
struct mcux_lpi2c_data *data = dev->data;
LPI2C_Type *base;
uint32_t clock_freq, bitrate_cfg;
lpi2c_master_config_t master_config;
int error;
DEVICE_MMIO_NAMED_MAP(dev, reg_base, K_MEM_CACHE_NONE | K_MEM_DIRECT_MAP);
base = (LPI2C_Type *)DEVICE_MMIO_NAMED_GET(dev, reg_base);
k_sem_init(&data->lock, 1, 1);
k_sem_init(&data->device_sync_sem, 0, K_SEM_MAX_LIMIT);
if (!device_is_ready(config->clock_dev)) {
LOG_ERR("clock control device not ready");
return -ENODEV;
}
error = pinctrl_apply_state(config->pincfg, PINCTRL_STATE_DEFAULT);
if (error) {
return error;
}
if (clock_control_get_rate(config->clock_dev, config->clock_subsys,
&clock_freq)) {
return -EINVAL;
}
LPI2C_MasterGetDefaultConfig(&master_config);
master_config.busIdleTimeout_ns = config->bus_idle_timeout_ns;
LPI2C_MasterInit(base, &master_config, clock_freq);
LPI2C_MasterTransferCreateHandle(base, &data->handle,
mcux_lpi2c_master_transfer_callback,
data);
bitrate_cfg = i2c_map_dt_bitrate(config->bitrate);
error = mcux_lpi2c_configure(dev, I2C_MODE_CONTROLLER | bitrate_cfg);
if (error) {
return error;
}
#if CONFIG_NXP_LP_FLEXCOMM
/* When using LP Flexcomm driver, register the interrupt handler
* so we receive notification from the LP Flexcomm interrupt handler.
*/
nxp_lp_flexcomm_setirqhandler(config->parent_dev, dev,
LP_FLEXCOMM_PERIPH_LPI2C, mcux_lpi2c_isr);
#else
/* Interrupt is managed by this driver */
config->irq_config_func(dev);
#endif
return 0;
}
static const struct i2c_driver_api mcux_lpi2c_driver_api = {
.configure = mcux_lpi2c_configure,
.transfer = mcux_lpi2c_transfer,
#if CONFIG_I2C_MCUX_LPI2C_BUS_RECOVERY
.recover_bus = mcux_lpi2c_recover_bus,
#endif /* CONFIG_I2C_MCUX_LPI2C_BUS_RECOVERY */
#if CONFIG_I2C_TARGET
.target_register = mcux_lpi2c_target_register,
.target_unregister = mcux_lpi2c_target_unregister,
#endif /* CONFIG_I2C_TARGET */
};
#if CONFIG_I2C_MCUX_LPI2C_BUS_RECOVERY
#define I2C_MCUX_LPI2C_SCL_INIT(n) .scl = GPIO_DT_SPEC_INST_GET_OR(n, scl_gpios, {0}),
#define I2C_MCUX_LPI2C_SDA_INIT(n) .sda = GPIO_DT_SPEC_INST_GET_OR(n, sda_gpios, {0}),
#else
#define I2C_MCUX_LPI2C_SCL_INIT(n)
#define I2C_MCUX_LPI2C_SDA_INIT(n)
#endif /* CONFIG_I2C_MCUX_LPI2C_BUS_RECOVERY */
#define I2C_MCUX_LPI2C_MODULE_IRQ_CONNECT(n) \
do { \
IRQ_CONNECT(DT_INST_IRQN(n), \
DT_INST_IRQ(n, priority), \
mcux_lpi2c_isr, \
DEVICE_DT_INST_GET(n), 0); \
irq_enable(DT_INST_IRQN(n)); \
} while (false)
#define I2C_MCUX_LPI2C_MODULE_IRQ(n) \
IF_ENABLED(DT_INST_IRQ_HAS_IDX(n, 0), \
(I2C_MCUX_LPI2C_MODULE_IRQ_CONNECT(n)))
#ifdef CONFIG_NXP_LP_FLEXCOMM
#define PARENT_DEV(n) \
.parent_dev = DEVICE_DT_GET(DT_INST_PARENT(n)),
#else
#define PARENT_DEV(n)
#endif /* CONFIG_NXP_LP_FLEXCOMM */
#define I2C_MCUX_LPI2C_INIT(n) \
PINCTRL_DT_INST_DEFINE(n); \
\
static void mcux_lpi2c_config_func_##n(const struct device *dev); \
\
static const struct mcux_lpi2c_config mcux_lpi2c_config_##n = { \
DEVICE_MMIO_NAMED_ROM_INIT(reg_base, DT_DRV_INST(n)), \
PARENT_DEV(n) \
.clock_dev = DEVICE_DT_GET(DT_INST_CLOCKS_CTLR(n)), \
.clock_subsys = \
(clock_control_subsys_t)DT_INST_CLOCKS_CELL(n, name),\
.irq_config_func = mcux_lpi2c_config_func_##n, \
.bitrate = DT_INST_PROP(n, clock_frequency), \
.pincfg = PINCTRL_DT_INST_DEV_CONFIG_GET(n), \
I2C_MCUX_LPI2C_SCL_INIT(n) \
I2C_MCUX_LPI2C_SDA_INIT(n) \
.bus_idle_timeout_ns = \
UTIL_AND(DT_INST_NODE_HAS_PROP(n, bus_idle_timeout),\
DT_INST_PROP(n, bus_idle_timeout)), \
}; \
\
static struct mcux_lpi2c_data mcux_lpi2c_data_##n; \
\
I2C_DEVICE_DT_INST_DEFINE(n, mcux_lpi2c_init, NULL, \
&mcux_lpi2c_data_##n, \
&mcux_lpi2c_config_##n, POST_KERNEL, \
CONFIG_I2C_INIT_PRIORITY, \
&mcux_lpi2c_driver_api); \
\
static void mcux_lpi2c_config_func_##n(const struct device *dev) \
{ \
I2C_MCUX_LPI2C_MODULE_IRQ(n); \
}
DT_INST_FOREACH_STATUS_OKAY(I2C_MCUX_LPI2C_INIT)
```
|
```go
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
package influxdb
import (
"testing"
"github.com/stretchr/testify/assert"
)
type test struct {
in, outMetric, outMetricTail, outLabel string
}
func TestPromRewriter(t *testing.T) {
r := newPromRewriter()
tests := []test{{"foo", "foo", "foo", "foo"},
{".bar", "_bar", "_bar", "_bar"},
{"b.ar", "b_ar", "b_ar", "b_ar"},
{":bar", ":bar", ":bar", "_bar"},
{"ba:r", "ba:r", "ba:r", "ba_r"},
{"9bar", "_bar", "9bar", "_bar"},
}
for _, test := range tests {
in1 := []byte(test.in)
r.rewriteMetric(in1)
assert.Equal(t, test.outMetric, string(in1))
in2 := []byte(test.in)
r.rewriteMetricTail(in2)
assert.Equal(t, test.outMetricTail, string(in2))
in3 := []byte(test.in)
r.rewriteLabel(in3)
assert.Equal(t, test.outLabel, string(in3))
}
}
```
|
Bài Chòi is a combination of arts in Central Vietnam, including music, poetry, acting, painting and literature, that provides recreation, entertainment and socialising within village communities. It was inscribed on UNESCO's list of the Intangible Cultural Heritage of Humanity in 2017, and was recognised as part of Vietnam's national intangible cultural heritage by the Ministry of Culture, Sports and Tourism between 2014 and 2016.
Bài Chòi games and performances involve a card game similar to bingo, played with songs and music performed by Hieu artists during Tết Nguyên Đán, the Vietnamese Lunar New Year.
References
Vietnamese culture
Intangible Cultural Heritage of Humanity
|
Lieutenant-General Mohinder Singh Wadalia (30 November 1908 – 20 May 2001) was an Indian Army general.
Career
A King's Commissioned Indian Officer (KCIO), Wadalia was educated at Aitchison College and the Prince of Wales Royal Indian Military College, Dehra Dun. He subsequently attended the Royal Military Academy Sandhurst and was commissioned a second lieutenant in the British Indian Army on 31 January 1929, passing out fifth in the order of merit among 37 successful cadets. He was formally appointed to the Indian Army as an officer with the 4/19 Hyderabad Regiment (now 4 Kumaon Regiment) on 13 April 1930. On 1 March 1934, he transferred to the 16th Light Cavalry and was appointed a squadron officer. He was appointed adjutant on 1 January 1937.
During the Second World War, Wadalia was appointed a GSO 3 in the Directorate of Military Training on 27 June 1941, under Brigadier Francis Tuker. Advanced to a GSO 2 on 26 September and promoted temporary major in December, Wadalia was transferred to the Directorate of Armoured Fighting Vehicles on 26 May 1943. He served on the headquarters staff in the Persia-Iraq theatre and was mentioned in dispatches. On 13 August 1944, he was appointed a GSO 1 on the staff, with the acting rank of lieutenant-colonel. He was promoted substantive major (temporary lieutenant-colonel) on 31 January 1946. On 23 December 1949, he was promoted temporary brigadier and given command of a brigade.
On 21 June 1951, Wadalia was appointed an area commander with the local rank of major general. On 1 September, he was appointed Commandant of the National Defence Academy with the acting rank of major-general. He was the Deputy Chief of the Army Staff between 27 January 1959 and 15 November 1964.
Dates of rank
Notes
References
1908 births
2001 deaths
British Indian Army officers
Indian Army personnel of World War II
People of the Indo-Pakistani War of 1947
Indian generals
Indian Army personnel
Graduates of the Royal Military Academy Sandhurst
Commandants of the National Defence Academy
Commandants of Indian Military Academy
|
The Serranía de San Lucas is a forested massif in the Bolívar Department of northern Colombia that reaches heights of 2,600 m above sea level. It is part of the Magdalena–Urabá moist forests ecoregion, with a rainforest ecology that includes large monkey and bird populations.
It is a 'forest reserve' that has been recommended for protection, but has been opened to mining by the Colombian government, as the mountains have large deposits of gold, emeralds, nickel and mercury. AngloGold Ashanti has been exploring in the area since 2004, causing tensions with local small-scale miners.
The ELN guerrilla group enforced forest protection in the area in the early 2000s, apparently to protect local hydrology. The area is still subject to fighting between drug cartels, FARC, ELN, the Black Eagles and the Colombian army.
References
Mountain ranges of Colombia
Geography of Bolívar Department
|
```go
package util
import (
"image/color"
"math"
"testing"
)
func TestRGBToHSV(t *testing.T) {
cases := []struct {
input color.RGBA
expected [3]float64
}{
{
input: color.RGBA{45, 166, 115, 255},
expected: [3]float64{155, 0.73, 0.65},
},
{
input: color.RGBA{0, 255, 0, 255},
expected: [3]float64{120, 1, 1},
},
{
input: color.RGBA{242, 220, 97, 255},
expected: [3]float64{51, 0.6, 0.95},
},
{
input: color.RGBA{10, 10, 10, 255},
expected: [3]float64{0, 0.0, 0.04},
},
{
input: color.RGBA{255, 255, 255, 255},
expected: [3]float64{0, 0.0, 1.0},
},
{
input: color.RGBA{0, 0, 0, 255},
expected: [3]float64{0, 0.0, 0.0},
},
{
input: color.RGBA{255, 0, 0, 255},
expected: [3]float64{0, 1.0, 1.0},
},
{
input: color.RGBA{255, 0, 255, 255},
expected: [3]float64{300, 1.0, 1.0},
},
}
for _, c := range cases {
h, s, v := RGBToHSV(c.input)
h = math.Floor(h + 0.5)
s = math.Floor((s*100)+0.5) / 100
v = math.Floor((v*100)+0.5) / 100
if h != c.expected[0] || s != c.expected[1] || v != c.expected[2] {
t.Errorf("RGBToHSV failed: expected: %#v, actual: %#v, %#v, %#v", c.expected, h, s, v)
}
}
}
func TestHSVToRGB(t *testing.T) {
cases := []struct {
input [3]float64
expected color.RGBA
}{
{
input: [3]float64{155, 0.73, 0.65},
expected: color.RGBA{45, 166, 115, 255},
},
{
input: [3]float64{120, 1, 1},
expected: color.RGBA{0, 255, 0, 255},
},
{
input: [3]float64{51, 0.6, 0.95},
expected: color.RGBA{242, 220, 97, 255},
},
{
input: [3]float64{0, 0.0, 0.04},
expected: color.RGBA{10, 10, 10, 255},
},
{
input: [3]float64{0, 0.0, 1.0},
expected: color.RGBA{255, 255, 255, 255},
},
{
input: [3]float64{0, 0.0, 0.0},
expected: color.RGBA{0, 0, 0, 255},
},
{
input: [3]float64{0, 1.0, 1.0},
expected: color.RGBA{255, 0, 0, 255},
},
{
input: [3]float64{300, 1.0, 1.0},
expected: color.RGBA{255, 0, 255, 255},
},
}
for _, c := range cases {
actual := HSVToRGB(c.input[0], c.input[1], c.input[2])
if actual != c.expected {
t.Errorf("HSVToRGB failed: expected: %#v, actual: %#v", c.expected, actual)
}
}
}
func TestRGBToHSL(t *testing.T) {
cases := []struct {
input color.RGBA
expected [3]float64
}{
{
input: color.RGBA{45, 166, 115, 255},
expected: [3]float64{155, 0.57, 0.41},
},
{
input: color.RGBA{0, 255, 0, 255},
expected: [3]float64{120, 1, 0.5},
},
{
input: color.RGBA{242, 220, 97, 255},
expected: [3]float64{51, 0.85, 0.66},
},
{
input: color.RGBA{10, 10, 10, 255},
expected: [3]float64{0, 0.0, 0.04},
},
{
input: color.RGBA{255, 255, 255, 255},
expected: [3]float64{0, 0.0, 1.0},
},
{
input: color.RGBA{0, 0, 0, 255},
expected: [3]float64{0, 0.0, 0.0},
},
{
input: color.RGBA{255, 0, 0, 255},
expected: [3]float64{0, 1.0, 0.5},
},
{
input: color.RGBA{0, 0, 255, 255},
expected: [3]float64{240, 1.0, 0.5},
},
{
input: color.RGBA{255, 0, 255, 255},
expected: [3]float64{300, 1.0, 0.5},
},
}
for _, c := range cases {
h, s, l := RGBToHSL(c.input)
h = math.Floor(h + 0.5)
s = math.Floor((s*100)+0.5) / 100
l = math.Floor((l*100)+0.5) / 100
if h != c.expected[0] || s != c.expected[1] || l != c.expected[2] {
t.Errorf("RGBToHSL failed: expected: %#v, actual: %#v, %#v, %#v", c.expected, h, s, l)
}
}
}
func TestHSLToRGB(t *testing.T) {
cases := []struct {
input [3]float64
expected color.RGBA
}{
{
input: [3]float64{155, 0.57, 0.41},
expected: color.RGBA{0x2d, 0xa4, 0x72, 0xff},
},
{
input: [3]float64{120, 1, 0.5},
expected: color.RGBA{0, 255, 0, 255},
},
{
input: [3]float64{51, 0.85, 0.66},
expected: color.RGBA{0xf2, 0xdc, 0x5f, 0xff},
},
{
input: [3]float64{0, 0.0, 0.04},
expected: color.RGBA{10, 10, 10, 255},
},
{
input: [3]float64{0, 0.0, 1.0},
expected: color.RGBA{255, 255, 255, 255},
},
{
input: [3]float64{0, 0.0, 0.0},
expected: color.RGBA{0, 0, 0, 255},
},
{
input: [3]float64{0, 1.0, 0.5},
expected: color.RGBA{255, 0, 0, 255},
},
{
input: [3]float64{240, 1.0, 0.5},
expected: color.RGBA{0, 0, 255, 255},
},
{
input: [3]float64{300, 1.0, 0.5},
expected: color.RGBA{255, 0, 255, 255},
},
}
for _, c := range cases {
actual := HSLToRGB(c.input[0], c.input[1], c.input[2])
if actual != c.expected {
t.Errorf("HSLToRGB failed: expected: %#v, actual: %#v", c.expected, actual)
}
}
}
```
|
Coyanosa is an unincorporated desert village in Pecos County, located in the Permian Basin in West Texas, United States. Its population was 163 at the 2010 census. Part of the Coyanosa Draw runs adjacent to the town, 2.2 miles to the west. Businesses and services in Coyanosa include a food store, two Mexican restaurants, a public library, an RV park and a post office.
Coyanosa is mentioned as the hometown of the main character in the book "The Man from Coyanosa" (1998) by Lauran Paine.
People from Coyanosa are called Coyanosans.
Coyanosa is the main portion of the eponymous census-designated place (CDP).
History
Coyanosa was originally settled as a ranching community in the early 1900s. A post office was established in 1908, but was discontinued 10 years later. Further development of the community resumed in the 1950s, as numerous water wells were drilled in the area to irrigate nearby cotton farms. By 1958, around 200 people lived in Coyanosa. The post office reopened, and by the early 1960s, the population had risen to 600. Increasing fuel prices in the mid-1970s made irrigation unprofitable and forced many area cotton farms out of business. A decline in the number of inhabitants soon followed. By 1990, Coyanosa had around 270 people. That figure had fallen to 138 by 2000.
Geography
Coyanosa is located at (31.240532, -103.066121). It is situated south of the intersection of Farm Roads 1776 and 1450, approximately 26 miles northwest of Fort Stockton in northwestern Pecos County.
According to the United States Census Bureau in 2000, the CDP has a total area of , all of it land. By the 2010 census, the CDP had increased in size to , all land.
Climate
According to bestplaces.net, Coyanosa averages 261 sunny days, 2 inches of snow, and 14 inches of rain per year. July is the hottest month, with an average high of 97.7°, warmer than most places in Texas. January has the coldest nighttime temperatures, averaging 28.3°, making Coyanosa's winter nights among the coldest in Texas.
Demographics
Coyanosa has been part of its namesake CDP since 1980 and, as such, no census information since then is available for the village alone. As of the census of 2000, 138 people, 46 households, and 39 families were residing in the CDP. The population density was . The 59 housing units averaged 493.1/sq mi (189.8/km2). The racial makeup of the CDP was 100.00% White. Hispanics or Latinos of any race were 85.51% of the population.
Of the 46 households, 45.7% had children under the age of 18 living with them, 67.4% were married couples living together, 15.2% had a female householder with no husband present, and 15.2% were not families. About 10.9% of all households were made up of individuals, and none had someone living alone who was 65 or older. The average household size was 3.00, and the average family size was 3.18.
In the CDP, the age distribution was 31.9% under the age of 18, 10.9% from 18 to 24, 19.6% from 25 to 44, 28.3% from 45 to 64, and 9.4% who were 65 or older. The median age was 34 years. For every 100 females, there were 100.0 males. For every 100 females age 18 and over, there were 95.8 males.
The median income for a household in the CDP was $9,643, and for a family was $17,083. Males had a median income of $38,393 versus $48,750 for females. The per capita income for the CDP was $7,974. About 36.4% of families and 41.9% of the population were living below the poverty line, including 37.0% of those under age 18 and 36.4% of those aged 65 or over.
Government and infrastructure
The United States Postal Service operates the Coyanosa Post Office. Coyanosa is under the jurisdiction of the municipality of Fort Stockton, the county seat of Pecos County.
Education
Coyanosa is served by the Fort Stockton Independent School District.
Religion
There is only one church in Coyanosa, the St. Isidore Catholic Church. It is a parish under the Roman Catholic Diocese of San Angelo.
Petroleum
Coyanosa also sits atop an oil field, with wells reaching depths of as much as 2 miles.
References
External links
Coyanosa in the Handbook of Texas
Census-designated places in Pecos County, Texas
Census-designated places in Texas
|