The Bochner–Riesz mean is a summability method often used in harmonic analysis when considering convergence of Fourier series and Fourier integrals. It was introduced by Salomon Bochner as a modification of the Riesz mean.
Definition
Define (1 − λ)₊^δ = max(0, 1 − λ)^δ.
Let f be a periodic function, thought of as being on the n-torus, T^n, and having Fourier coefficients f̂(k) for k ∈ Z^n. Then the Bochner–Riesz means of complex order δ, B_R^δ f (where R > 0 and Re(δ) > 0), are defined as

B_R^δ f(θ) = Σ_{|k| ≤ R} (1 − |k|²/R²)^δ f̂(k) e^{2πi k·θ}.
Analogously, for a function f on R^n with Fourier transform f̂(ξ), the Bochner–Riesz means of complex order δ, S_R^δ f (where R > 0 and Re(δ) > 0), are defined as

S_R^δ f(x) = ∫_{|ξ| ≤ R} (1 − |ξ|²/R²)^δ f̂(ξ) e^{2πi x·ξ} dξ.
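As a concrete numerical illustration (not part of the original article), the one-dimensional periodic mean B_R^δ f(x) = Σ_{|k| ≤ R} (1 − k²/R²)^δ c_k e^{2πikx} can be evaluated by direct summation. All function and variable names below are invented for this sketch, and the Fourier coefficients are supplied by the caller:

```javascript
// Sketch: evaluate the 1-D periodic Bochner-Riesz mean
//   (B_R^d f)(x) = sum_{|k| <= R} (1 - k^2/R^2)^d * c_k * exp(2*pi*i*k*x)
// by direct summation. Names are invented for this illustration.
function bochnerRieszMean( coeffRe, coeffIm, R, delta, x ) {
	var sum = 0.0;
	var ang;
	var w;
	var k;
	for ( k = -R; k <= R; k++ ) {
		// Bochner-Riesz multiplier (1 - |k|^2/R^2)^delta; vanishes at |k| = R:
		w = Math.pow( 1.0 - ( (k*k) / (R*R) ), delta );
		ang = 2.0 * Math.PI * k * x;
		// Real part of c_k * e^{2*pi*i*k*x}, for a real-valued result:
		sum += w * ( ( coeffRe( k ) * Math.cos( ang ) ) - ( coeffIm( k ) * Math.sin( ang ) ) );
	}
	return sum;
}

// Example: f(x) = cos(2*pi*x) has coefficients c_{-1} = c_{1} = 1/2,
// so (B_R^d f)(0) = (1 - 1/R^2)^d, which tends to f(0) = 1 as R grows.
function cosRe( k ) {
	return ( k === 1 || k === -1 ) ? 0.5 : 0.0;
}
function cosIm() {
	return 0.0;
}
```

For δ > 0 the multiplier tapers smoothly to zero at |k| = R, which is what makes these means better behaved than the sharp cutoff corresponding to δ = 0.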
Application to convolution operators
For n = 1 and δ > 0, B_R^δ and S_R^δ may be written as convolution operators, where the convolution kernel is an approximate identity. As such, in these cases, considering the almost everywhere convergence of Bochner–Riesz means for functions in L^p spaces is much simpler than the problem of "regular" almost everywhere convergence of Fourier series/integrals (corresponding to δ = 0).
In higher dimensions, the convolution kernels become "worse behaved": specifically, for

δ ≤ (n − 1)/2

the kernel is no longer integrable. Here, establishing almost everywhere convergence becomes correspondingly more difficult.
Bochner–Riesz conjecture
Another question is that of for which δ and which p the Bochner–Riesz means of an L^p function converge in norm. This issue is of fundamental importance for n ≥ 2, since regular spherical norm convergence (again corresponding to δ = 0) fails in L^p(R^n) when n ≥ 2 and p ≠ 2. This was shown in a paper of 1971 by Charles Fefferman.
By a transference result, the R^n and T^n problems are equivalent to one another, and as such, by an argument using the uniform boundedness principle, for any particular p ∈ (1, ∞), L^p norm convergence follows in both cases for exactly those δ where (1 − |ξ|²)₊^δ is the symbol of an L^p bounded Fourier multiplier operator.
For n = 2, that question has been completely resolved, but for n ≥ 3, it has only been partially answered. The case n = 1 is not interesting here, as convergence follows for 1 < p < ∞ in the most difficult case δ = 0 as a consequence of the L^p boundedness of the Hilbert transform and an argument of Marcel Riesz.
Define δ(p), the "critical index", as

δ(p) = max(n · |1/p − 1/2| − 1/2, 0).

Then the Bochner–Riesz conjecture states that

δ > δ(p)

is the necessary and sufficient condition for (1 − |ξ|²)₊^δ to be the symbol of an L^p bounded Fourier multiplier operator. It is known that the condition is necessary.
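As commonly stated, the critical index is δ(p) = max(n · |1/p − 1/2| − 1/2, 0), which lends itself to a one-line computation. A sketch only; the function name is invented here:

```javascript
// Sketch: the critical index delta(p) = max( n*|1/p - 1/2| - 1/2, 0 ).
// The function name is invented for this illustration.
function criticalIndex( n, p ) {
	return Math.max( ( n * Math.abs( (1.0/p) - 0.5 ) ) - 0.5, 0.0 );
}
```

For instance, criticalIndex( 2, 1 ) gives 0.5, while criticalIndex( n, 2 ) is always 0, reflecting that no smoothing at all is needed in L².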
References
Further reading
Means
Summability methods
|
Re Rizzo & Rizzo Shoes Ltd is a 1998 judgment of the Supreme Court of Canada regarding the priority of employees' interests when a company declares bankruptcy. The judgment hinged on the interpretation of the Employment Standards Act and has been taken to mark the Supreme Court of Canada's adoption of the purposive approach to legislative interpretation. It has since been cited frequently in subsequent decisions of Canadian courts, nearly every time legislation is interpreted.
Background
Rizzo & Rizzo Shoes Ltd. filed for bankruptcy, and its employees subsequently lost their jobs. The company paid all wages, salaries, commissions, and vacation pay owed through the date of termination. The Ministry of Labour for the Province of Ontario audited the company to ensure that no further payments were owed to former employees under the Employment Standards Act (ESA). On behalf of the former employees, the Ministry submitted a proof of claim to the trustee in bankruptcy, who subsequently disallowed the claim. According to the trustee, a company's bankruptcy does not constitute dismissal from employment; thus, the former employees of Rizzo & Rizzo Shoes gained no positive right to severance, termination or vacation pay under the ESA.
The case went before the Ontario Court (General Division), where the judge agreed with the Ministry of Labour and allowed the former employees to be paid. However, the Ontario Court of Appeal overturned the ruling and restored the trustee's decision. The Ministry sought leave to appeal the Court of Appeal judgment but later discontinued its application. Following the discontinuance, the trustee paid a dividend to Rizzo's creditors, leaving significantly less money in the estate. Subsequently, the appellants, five former employees of Rizzo, moved to set aside the discontinuance and to add themselves as parties to the proceedings, and were granted leave to appeal.
Ruling
In a unanimous decision, the Supreme Court allowed the employees' appeal, holding that they were entitled to the payments. While the plain language of the Act seemed to suggest that termination pay and severance pay were payable only when the employer terminates the employment, the Court held that the words of an Act must be read in their entire context and in their grammatical and ordinary sense, harmoniously with the scheme of the Act, the object of the Act, and the intention of Parliament.
Iacobucci J. wrote:
Elmer Driedger in Construction of Statutes (2nd ed. 1983) best encapsulates the approach upon which I prefer to rely. He recognizes that statutory interpretation cannot be founded on the wording of the legislation alone. At p. 87 he states:
Today there is only one principle or approach, namely, the words of an Act are to be read in their entire context and in their grammatical and ordinary sense harmoniously with the scheme of the Act, the object of the Act, and the intention of Parliament.
(Re Rizzo & Rizzo Shoes Ltd [1998] 1 S.C.R. 27, at para 21, quoting E. A. Driedger, The Construction of Statutes (2nd ed 1983), at p. 87)
The Court of Appeal had failed to read the language of the Act in this broad manner, the Supreme Court held. It noted that the purpose of the termination and severance pay provisions was to protect employees, to recognize their service and investment in the employer's enterprise, and to cushion them against the adverse effects of economic dislocation. It would be absurd, the Court held, for (more junior) employees terminated prior to bankruptcy to be entitled to termination and severance pay while (more senior) employees terminated upon bankruptcy were not:
The trial judge properly noted that, if the ESA termination and severance pay provisions do not apply in circumstances of bankruptcy, those employees “fortunate” enough to have been dismissed the day before a bankruptcy would be entitled to such payments, but those terminated on the day the bankruptcy becomes final would not be so entitled. In my view, the absurdity of this consequence is particularly evident in a unionized workplace where seniority is a factor in determining the order of lay-off. The more senior the employee, the larger the investment he or she has made in the employer and the greater the entitlement to termination and severance pay. However, it is the more senior personnel who are likely to be employed up until the time of the bankruptcy and who would thereby lose their entitlements to these payments.
The Court also held that the legislative history of the termination and severance pay provisions, and the other provisions in the ESA, supported an interpretation that such benefits were payable to employees whose employment was terminated upon bankruptcy. The Court also ordered the Ministry of Labour to pay the employees' costs, since it had not provided the Court with any evidence of the effort it made to notify or secure the consent of the Rizzo employees before it discontinued its application for leave to appeal to the Supreme Court on their behalf.
See also
List of Supreme Court of Canada cases
Employment Standards Act
External links
Full text of Supreme Court of Canada decision available at LexUM and CanLII
Ontario Ministry of Labour Employment Standards
Case Briefs Rizzo & Rizzo Shoes Ltd. (Re)
Supreme Court of Canada cases
1998 in Canadian case law
Canadian labour case law
Bankruptcy
Legal interpretation
|
```c++
// Aseprite
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License version 2 as
// published by the Free Software Foundation.
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#include "app/app.h"
#include "app/commands/command.h"
#include "app/context_access.h"
#include "app/ui/timeline.h"
#include "ui/base.h"
namespace app {
class MoveCelCommand : public Command {
public:
MoveCelCommand();
Command* clone() const override { return new MoveCelCommand(*this); }
protected:
bool onEnabled(Context* context) override;
void onExecute(Context* context) override;
};
MoveCelCommand::MoveCelCommand()
: Command("MoveCel",
"Move Cel",
CmdUIOnlyFlag)
{
}
bool MoveCelCommand::onEnabled(Context* context)
{
return App::instance()->timeline()->isMovingCel();
}
void MoveCelCommand::onExecute(Context* context)
{
App::instance()->timeline()->dropRange(Timeline::kMove);
}
Command* CommandFactory::createMoveCelCommand()
{
return new MoveCelCommand;
}
} // namespace app
```
|
The Wu River () is a left tributary of the Yuan River in south China. Its upper course, in Guizhou Province, is called the Wuyang River (); it rises on the western slopes of Mount Foding in the southeast of Weng'an County. The river runs eastward into Hunan Province, where it is called the Wu River. It joins the Yuan River at Hongjiang City. The river has a length of and drains an area of .
Notes
Rivers of Guizhou
Rivers of Hunan
|
The 2011 CAF Champions League (also known as the 2011 Orange CAF Champions League for sponsorship reasons) was the 47th edition of Africa's premier club football tournament organized by the Confederation of African Football (CAF), and the 15th edition under the current CAF Champions League format. The winner Espérance ST participated in the 2011 FIFA Club World Cup, and also played in the 2012 CAF Super Cup.
Association team allocation
Theoretically, up to 55 CAF member associations may enter the 2011 CAF Champions League, with the 12 highest ranked associations according to CAF 5-year ranking eligible to enter 2 teams in the competition. For this year's competition, CAF used . As a result, a maximum of 67 teams could enter the tournament – although this level has never been reached.
Ranking system
CAF calculates points for each entrant association based on their clubs' performance over the last 5 years in the CAF Champions League and CAF Confederation Cup, not taking the current year into consideration. The criteria for points are the following:
The points are multiplied by a coefficient according to the year, as follows:
2009 – 5
2008 – 4
2007 – 3
2006 – 2
2005 – 1
This system is different from the one used for the 2010 CAF Champions League and previous years.
A similar procedure is used to rank clubs, with the exception that the results from 2006–2010 are used (with 2010 weighted by 5, 2009 by 4, and so on).
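The weighting above amounts to a dot product of per-year points with the coefficients 1 through 5. A minimal sketch; the function name and any sample data are illustrative, not from any CAF system:

```javascript
// Sketch: combine an association's per-year points (ordered oldest to
// newest, at most five years) using the 1..5 coefficients described above.
// The function name is invented for this illustration.
function rankingScore( pointsByYear ) {
	var score = 0;
	var i;
	for ( i = 0; i < pointsByYear.length; i++ ) {
		score += pointsByYear[ i ] * ( i + 1 ); // oldest year x1 ... newest x5
	}
	return score;
}
```

For example, equal points in all five years score 1+2+3+4+5 = 15 times the per-year total, so recent seasons dominate the ranking.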
Entrants list
Below is the entrants list for the competition. Nations are shown according to their 2005–2009 CAF 5-year ranking – those with a ranking score have their rank and score indicated. Teams were also seeded using their individual team 2006–2010 5-year ranking. The top nine sides (shown in bold) received byes to the first qualifying round.
Notes
Associations that did not enter a team: Cape Verde, Djibouti, Eritrea, Guinea-Bissau, Malawi, Mauritius, Namibia, Réunion, São Tomé and Príncipe, Somalia, Togo, Uganda
Unranked associations have no ranking points and hence are equal 20th.
Unranked teams have no rankings points and hence are equal 21st. Club ranking is determined only between teams qualified for the 2011 CAF Champions League.
Round and draw dates
Schedule of dates for the 2011 competition.
† The second leg of the preliminary round matches could be postponed to 25–27 February (or further to 4–6 March) if a club had at least three players in the 2011 African Nations Championship.
Qualifying rounds
The fixtures for the preliminary, first and second qualifying rounds were announced on 20 December 2010.
Qualification ties were decided over two legs, with aggregate goals used to determine the winner. If the sides were level on aggregate after the second leg, the away goals rule applied, and if still level, the tie proceeded directly to a penalty shootout (no extra time is played).
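Those rules — aggregate goals first, then away goals, then straight to a penalty shootout with no extra time — can be sketched as follows. The names and conventions are invented for this illustration; the first element of each leg is the home side's goals:

```javascript
// Sketch of the two-legged tie rules described above: aggregate goals,
// then the away-goals rule, then directly to a penalty shootout.
// Conventions (invented for this illustration):
//   leg1 = [ goals by A at home, goals by B away ]
//   leg2 = [ goals by B at home, goals by A away ]
function decideTie( leg1, leg2 ) {
	var aggA = leg1[ 0 ] + leg2[ 1 ];
	var aggB = leg1[ 1 ] + leg2[ 0 ];
	if ( aggA !== aggB ) {
		return ( aggA > aggB ) ? 'A' : 'B';
	}
	// Away-goals rule: A's away goals came in leg 2, B's in leg 1:
	if ( leg2[ 1 ] !== leg1[ 1 ] ) {
		return ( leg2[ 1 ] > leg1[ 1 ] ) ? 'A' : 'B';
	}
	return 'penalties'; // no extra time is played
}
```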
Preliminary round
|}
Notes
Note 1: Inter Luanda advanced to the first round after Township Rollers F.C. withdrew.
Note 2: Raja Casablanca advanced to the first round after Tourbillon withdrew following the first leg.
First round
|}
Notes
Note 3: Al-Ittihad advanced to the second round after JC Abidjan withdrew. The tie was scheduled to be played at a neutral venue over one leg due to the political situations in Côte d'Ivoire and Libya, but the match did not take place.
Note 4: Tie played over one leg due to the political situation in Côte d'Ivoire.
Note 5: The second leg was abandoned after 90+5 minutes with Zamalek SC leading 2–1 (Club Africain leading 5–4 on aggregate) when Zamalek SC fans invaded the pitch.
Note 6: TP Mazembe won 6–3 on aggregate, but were later disqualified for fielding an ineligible player. As a result, Simba played against Moroccan side Wydad AC, which lost to TP Mazembe in the second round, in a play-off for a place in the group stage.
Second round
|}
Notes
Note 7: Ties scheduled to be played over one leg due to the political situations in Côte d'Ivoire and Libya.
Note 8: The second leg was abandoned after 81 minutes with the score at 1–1 (Al-Hilal leading 2–1 on aggregate) when Club Africain fans invaded the pitch.
Note 9: TP Mazembe won 2–1 on aggregate, but were later disqualified for fielding an ineligible player in the first round. As a result, Wydad AC played against Tanzanian side Simba, which lost to TP Mazembe in the first round, in a play-off for a place in the group stage.
The losing teams from the second round advanced to the 2011 CAF Confederation Cup play-off round.
Special play-off
On 14 May 2011, CAF announced that TP Mazembe (Congo DR) were ejected from the Champions League following a complaint about the eligibility of TP Mazembe player Janvier Besala Bokungu from Tanzanian club Simba, who lost to them in the first round.
As a result, the Organising Committee decided that a replacement for the group stage would be determined by a play-off match at a neutral venue between Simba and Moroccan club Wydad AC (who lost to TP Mazembe in the second round).
|}
Group stage
Group A
Group B
Knockout stage
Bracket
Semi-finals
|}
Final
Espérance de Tunis won 1–0 on aggregate.
Top scorers
See also
2011 CAF Confederation Cup
2012 CAF Super Cup
2011 FIFA Club World Cup
References
External links
CAF Champions League
2011
|
Kiselev () is a rural locality (a khutor) in Volokonovsky District, Belgorod Oblast, Russia. The population was 31 as of 2010. There are 4 streets.
Geography
Kiselev is located 34 km southwest of Volokonovka (the district's administrative centre) by road. Borisovka is the nearest rural locality.
References
Rural localities in Volokonovsky District
|
Although Edwin Oswald LeGrand (1801–1861) was born in North Carolina, he was an original Texan. LeGrand was one of the fifty-seven men who signed the Texas Declaration of Independence. He was a San Augustine delegate to the Convention of 1836 at Washington-on-the-Brazos and fought in the Battle of San Jacinto. His sister, Mrs. William Colson Norwood, and her family also settled in San Augustine, Texas. LeGrand is buried near San Augustine.
References
Handbook of Texas bio
1801 births
1861 deaths
People from North Carolina
People from San Augustine, Texas
People of the Texas Revolution
Signers of the Texas Declaration of Independence
|
```javascript
/**
* @license Apache-2.0
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* path_to_url
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
// MODULES //
var tape = require( 'tape' );
var proxyquire = require( 'proxyquire' );
var iteratorSymbol = require( '@stdlib/symbol/iterator' );
var sinpi = require( '@stdlib/math/base/special/sinpi' );
var abs = require( '@stdlib/math/base/special/abs' );
var EPS = require( '@stdlib/constants/float64/eps' );
var iterSineWave = require( './../lib' );
// TESTS //
tape( 'main export is a function', function test( t ) {
t.ok( true, __filename );
t.strictEqual( typeof iterSineWave, 'function', 'main export is a function' );
t.end();
});
tape( 'the function throws an error if provided a first argument which is not an object', function test( t ) {
var values;
var i;
values = [
'5',
5,
NaN,
true,
false,
null,
void 0,
[],
function noop() {}
];
for ( i = 0; i < values.length; i++ ) {
t.throws( badValue( values[i] ), TypeError, 'throws an error when provided '+values[i] );
}
t.end();
function badValue( value ) {
return function badValue() {
iterSineWave( value );
};
}
});
tape( 'the function throws an error if provided an invalid option', function test( t ) {
var values;
var i;
values = [
'5',
-5,
3.14,
NaN,
true,
false,
null,
void 0,
[],
{},
function noop() {}
];
for ( i = 0; i < values.length; i++ ) {
t.throws( badValue( values[i] ), TypeError, 'throws an error when provided '+values[i] );
}
t.end();
function badValue( value ) {
return function badValue() {
iterSineWave({
'iter': value
});
};
}
});
tape( 'the function returns an iterator protocol-compliant object which generates a sine wave', function test( t ) {
var expected;
var actual;
var delta;
var tol;
var it;
var i;
expected = [
{
'value': 0.0,
'done': false
},
{
'value': sinpi( 1.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( 2.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( 3.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( 4.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( 5.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( 6.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( 7.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( 8.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( 9.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( 10.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( 1.0*2.0*(1.0/10.0) ),
'done': false
}
];
it = iterSineWave();
t.equal( it.next.length, 0, 'has zero arity' );
for ( i = 0; i < expected.length; i++ ) {
actual = it.next();
t.equal( actual.done, expected[ i ].done, 'returns expected value' );
if ( actual.value === expected[ i ].value ) {
t.equal( actual.value, expected[ i ].value, 'returns expected value' );
} else {
delta = abs( actual.value - expected[ i ].value );
tol = 1.0 * EPS * abs( expected[ i ].value );
t.equal( delta <= tol, true, 'within tolerance. i: '+i+'. actual: '+actual.value+'. expected: '+expected[ i ].value+'. delta: '+delta+'. tol: '+tol+'.' );
}
}
t.end();
});
tape( 'the function supports specifying the waveform period', function test( t ) {
var expected;
var actual;
var delta;
var opts;
var tol;
var it;
var i;
expected = [
{
'value': 0.0,
'done': false
},
{
'value': sinpi( 0.5 ),
'done': false
},
{
'value': sinpi( 1.0 ),
'done': false
},
{
'value': sinpi( 1.5 ),
'done': false
},
{
'value': sinpi( 2.0 ),
'done': false
},
{
'value': sinpi( 0.5 ),
'done': false
},
{
'value': sinpi( 1.0 ),
'done': false
},
{
'value': sinpi( 1.5 ),
'done': false
},
{
'value': sinpi( 2.0 ),
'done': false
},
{
'value': sinpi( 0.5 ),
'done': false
},
{
'value': sinpi( 1.0 ),
'done': false
},
{
'value': sinpi( 1.5 ),
'done': false
}
];
opts = {
'period': 4
};
it = iterSineWave( opts );
t.equal( it.next.length, 0, 'has zero arity' );
for ( i = 0; i < expected.length; i++ ) {
actual = it.next();
t.equal( actual.done, expected[ i ].done, 'returns expected value' );
if ( actual.value === expected[ i ].value ) {
t.equal( actual.value, expected[ i ].value, 'returns expected value' );
} else {
delta = abs( actual.value - expected[ i ].value );
tol = 1.0 * EPS * abs( expected[ i ].value );
t.equal( delta <= tol, true, 'within tolerance. i: '+i+'. actual: '+actual.value+'. expected: '+expected[ i ].value+'. delta: '+delta+'. tol: '+tol+'.' );
}
}
t.end();
});
tape( 'the function supports specifying the wave amplitude', function test( t ) {
var expected;
var actual;
var delta;
var opts;
var tol;
var it;
var i;
expected = [
{
'value': 0.0,
'done': false
},
{
'value': 10.0 * sinpi( 1.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': 10.0 * sinpi( 2.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': 10.0 * sinpi( 3.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': 10.0 * sinpi( 4.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': 10.0 * sinpi( 5.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': 10.0 * sinpi( 6.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': 10.0 * sinpi( 7.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': 10.0 * sinpi( 8.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': 10.0 * sinpi( 9.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': 10.0 * sinpi( 10.0*2.0*(1.0/10.0) ),
'done': false
},
{
'value': 10.0 * sinpi( 1.0*2.0*(1.0/10.0) ),
'done': false
}
];
opts = {
'period': 10,
'amplitude': 10.0
};
it = iterSineWave( opts );
t.equal( it.next.length, 0, 'has zero arity' );
for ( i = 0; i < expected.length; i++ ) {
actual = it.next();
t.equal( actual.done, expected[ i ].done, 'returns expected value' );
if ( actual.value === expected[ i ].value ) {
t.equal( actual.value, expected[ i ].value, 'returns expected value' );
} else {
delta = abs( actual.value - expected[ i ].value );
tol = 1.0 * EPS * abs( expected[ i ].value );
t.equal( delta <= tol, true, 'within tolerance. i: '+i+'. actual: '+actual.value+'. expected: '+expected[ i ].value+'. delta: '+delta+'. tol: '+tol+'.' );
}
}
t.end();
});
tape( 'the function supports specifying the phase offset (left shift)', function test( t ) {
var expected;
var actual;
var delta;
var opts;
var tol;
var it;
var i;
expected = [
{
'value': sinpi( (0.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (1.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (2.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (3.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (4.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (5.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (6.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (7.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (-2.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (-1.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (0.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (1.0+3.0)*2.0*(1.0/10.0) ),
'done': false
}
];
opts = {
'period': 10,
'offset': -3
};
it = iterSineWave( opts );
t.equal( it.next.length, 0, 'has zero arity' );
for ( i = 0; i < expected.length; i++ ) {
actual = it.next();
t.equal( actual.done, expected[ i ].done, 'returns expected value' );
if ( actual.value === expected[ i ].value ) {
t.equal( actual.value, expected[ i ].value, 'returns expected value' );
} else {
delta = abs( actual.value - expected[ i ].value );
tol = 1.0 * EPS * abs( expected[ i ].value );
t.equal( delta <= tol, true, 'within tolerance. i: '+i+'. actual: '+actual.value+'. expected: '+expected[ i ].value+'. delta: '+delta+'. tol: '+tol+'.' );
}
}
t.end();
});
tape( 'the function supports specifying the phase offset (left shift; mod)', function test( t ) {
var expected;
var actual;
var delta;
var opts;
var tol;
var it;
var i;
expected = [
{
'value': sinpi( (0.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (1.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (2.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (3.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (4.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (5.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (6.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (7.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (-2.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (-1.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (0.0+3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (1.0+3.0)*2.0*(1.0/10.0) ),
'done': false
}
];
opts = {
'period': 10,
'offset': -13
};
it = iterSineWave( opts );
t.equal( it.next.length, 0, 'has zero arity' );
for ( i = 0; i < expected.length; i++ ) {
actual = it.next();
t.equal( actual.done, expected[ i ].done, 'returns expected value' );
if ( actual.value === expected[ i ].value ) {
t.equal( actual.value, expected[ i ].value, 'returns expected value' );
} else {
delta = abs( actual.value - expected[ i ].value );
tol = 1.0 * EPS * abs( expected[ i ].value );
t.equal( delta <= tol, true, 'within tolerance. i: '+i+'. actual: '+actual.value+'. expected: '+expected[ i ].value+'. delta: '+delta+'. tol: '+tol+'.' );
}
}
t.end();
});
tape( 'the function supports specifying the phase offset (right shift)', function test( t ) {
var expected;
var actual;
var delta;
var opts;
var tol;
var it;
var i;
expected = [
{
'value': sinpi( (10.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (11.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (12.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (3.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (4.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (5.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (6.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (7.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (8.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (9.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (10.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (11.0-3.0)*2.0*(1.0/10.0) ),
'done': false
}
];
opts = {
'period': 10,
'offset': 3
};
it = iterSineWave( opts );
t.equal( it.next.length, 0, 'has zero arity' );
for ( i = 0; i < expected.length; i++ ) {
actual = it.next();
t.equal( actual.done, expected[ i ].done, 'returns expected value' );
if ( actual.value === expected[ i ].value ) {
t.equal( actual.value, expected[ i ].value, 'returns expected value' );
} else {
delta = abs( actual.value - expected[ i ].value );
tol = 1.0 * EPS * abs( expected[ i ].value );
t.equal( delta <= tol, true, 'within tolerance. i: '+i+'. actual: '+actual.value+'. expected: '+expected[ i ].value+'. delta: '+delta+'. tol: '+tol+'.' );
}
}
t.end();
});
tape( 'the function supports specifying the phase offset (right shift; mod)', function test( t ) {
var expected;
var actual;
var delta;
var opts;
var tol;
var it;
var i;
expected = [
{
'value': sinpi( (10.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (11.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (12.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (3.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (4.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (5.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (6.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (7.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (8.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (9.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (10.0-3.0)*2.0*(1.0/10.0) ),
'done': false
},
{
'value': sinpi( (11.0-3.0)*2.0*(1.0/10.0) ),
'done': false
}
];
opts = {
'period': 10,
'offset': 23
};
it = iterSineWave( opts );
t.equal( it.next.length, 0, 'has zero arity' );
for ( i = 0; i < expected.length; i++ ) {
actual = it.next();
t.equal( actual.done, expected[ i ].done, 'returns expected value' );
if ( actual.value === expected[ i ].value ) {
t.equal( actual.value, expected[ i ].value, 'returns expected value' );
} else {
delta = abs( actual.value - expected[ i ].value );
tol = 1.0 * EPS * abs( expected[ i ].value );
t.equal( delta <= tol, true, 'within tolerance. i: '+i+'. actual: '+actual.value+'. expected: '+expected[ i ].value+'. delta: '+delta+'. tol: '+tol+'.' );
}
}
t.end();
});
tape( 'the function supports limiting the number of iterations', function test( t ) {
var expected;
var actual;
var opts;
var it;
var i;
expected = [
{
'value': 0.0,
'done': false
},
{
'value': 1.0,
'done': false
},
{
'value': 0.0,
'done': false
},
{
'value': -1.0,
'done': false
},
{
'done': true
}
];
opts = {
'period': 4,
'iter': 4
};
it = iterSineWave( opts );
t.equal( it.next.length, 0, 'has zero arity' );
actual = [];
for ( i = 0; i < opts.iter; i++ ) {
actual.push( it.next() );
}
actual.push( it.next() );
t.deepEqual( actual, expected, 'returns expected values' );
t.end();
});
tape( 'the returned iterator has a `return` method for closing an iterator (no argument)', function test( t ) {
var it;
var r;
it = iterSineWave();
r = it.next();
t.equal( typeof r.value, 'number', 'returns a number' );
t.equal( r.done, false, 'returns expected value' );
r = it.next();
t.equal( typeof r.value, 'number', 'returns a number' );
t.equal( r.done, false, 'returns expected value' );
r = it.return();
t.equal( r.value, void 0, 'returns expected value' );
t.equal( r.done, true, 'returns expected value' );
r = it.next();
t.equal( r.value, void 0, 'returns expected value' );
t.equal( r.done, true, 'returns expected value' );
t.end();
});
tape( 'the returned iterator has a `return` method for closing an iterator (argument)', function test( t ) {
var it;
var r;
it = iterSineWave();
r = it.next();
t.equal( typeof r.value, 'number', 'returns a number' );
t.equal( r.done, false, 'returns expected value' );
r = it.next();
t.equal( typeof r.value, 'number', 'returns a number' );
t.equal( r.done, false, 'returns expected value' );
r = it.return( 'finished' );
t.equal( r.value, 'finished', 'returns expected value' );
t.equal( r.done, true, 'returns expected value' );
r = it.next();
t.equal( r.value, void 0, 'returns expected value' );
t.equal( r.done, true, 'returns expected value' );
t.end();
});
tape( 'if an environment supports `Symbol.iterator`, the returned iterator is iterable', function test( t ) {
var iterSineWave;
var it1;
var it2;
var i;
iterSineWave = proxyquire( './../lib/main.js', {
'@stdlib/symbol/iterator': '__ITERATOR_SYMBOL__'
});
it1 = iterSineWave();
t.equal( typeof it1[ '__ITERATOR_SYMBOL__' ], 'function', 'has method' );
t.equal( it1[ '__ITERATOR_SYMBOL__' ].length, 0, 'has zero arity' );
it2 = it1[ '__ITERATOR_SYMBOL__' ]();
t.equal( typeof it2, 'object', 'returns an object' );
t.equal( typeof it2.next, 'function', 'has method' );
t.equal( typeof it2.return, 'function', 'has method' );
for ( i = 0; i < 10; i++ ) {
t.equal( it2.next().value, it1.next().value, 'returns expected value' );
}
t.end();
});
tape( 'if an environment does not support `Symbol.iterator`, the returned iterator is not "iterable"', function test( t ) {
var iterSineWave;
var it;
iterSineWave = proxyquire( './../lib/main.js', {
'@stdlib/symbol/iterator': false
});
it = iterSineWave();
t.equal( it[ iteratorSymbol ], void 0, 'does not have property' );
t.end();
});
```
|
Jean-Claude Paye may refer to:
Jean-Claude Paye (sociologist) (born 1952), Belgian sociologist
Jean-Claude Paye (OECD) (born 1934), French civil servant, Secretary-General of the OECD
|
The Arnala class was the Indian designation for the Petya III-class vessels of the Indian Navy.
Although these vessels were classified as frigates in the Soviet Navy, they were classified by the Indian Navy as anti-submarine corvettes due to their role and smaller size. Vessels of the class were named for Indian islands.
Operational history
and were part of the task force for Operation Trident during the Indo-Pakistan War of 1971.
The hulls of this class were of relatively inferior quality, requiring the vessels to undergo a major refit every five years. The Indian Navy constructed the Naval Dockyard at Visakhapatnam primarily to service Russian-built vessels, but given the lack of engineering support from Russia there were inordinate delays in completing the servicing facility. This resulted in considerable delay of the second refit for , which was in poor repair and was subsequently lost at sea in storm conditions east of Visakhapatnam on 21 August 1990.
Vessels
The corvettes of this class constituted the 31st Patrol Vessel Squadron of the Eastern Naval Command and the 32nd Patrol Vessel Squadron of the Western Naval Command.
References
Corvette classes
Corvettes of the Indian Navy
India–Soviet Union relations
|
The Kairos Document (KD) is a theological statement issued in 1985 by a group of mainly black South African theologians based predominantly in the townships of Soweto, South Africa. The document challenged the churches' response to what the authors saw as the vicious policies of the apartheid regime under the state of emergency declared on 21 July 1985. The KD evoked strong reactions and furious debates not only in South Africa, but world-wide.
The KD is a prime example of contextual theology and liberation theology, or "theology from below", in South Africa, and has served as a model for similarly critical writing at decisive moments in several other countries and contexts (Latin America, Europe, Zimbabwe, India, Palestine, etc.).
Context
The KD was predominantly written by an ecumenical group of pastors in Soweto, whose names have never (officially) been released to the public. Many believe it was a conscious decision to make the document anonymous, perhaps for security reasons since the Apartheid regime frequently harassed, detained, or tortured clergy who opposed the government. It is widely thought though that Frank Chikane, a black Pentecostal pastor and theologian, and Albert Nolan, a white Roman Catholic priest and member of the Dominican Order, belonged to this group. John W. de Gruchy writes decidedly that Frank Chikane, then General Secretary of the Institute for Contextual Theology (ICT) in Braamfontein, Johannesburg, initiated the process.
When this fairly short, 11,000-word document was first published in September 1985, it included over 150 signatures; it was subsequently signed by many more church leaders and theologians in South Africa, though the amended list was never published. A substantially revised, second edition appeared in 1986.
Summary
The document is structured in five short chapters (the second edition comes to less than 40 pages): (1) The Moment of Truth; (2) Critique of 'State Theology'; (3) Critique of 'Church Theology'; (4) Towards a Prophetic Theology; (5) Challenge to Action; and a short conclusion. The following summary is based on the revised edition, and is designed to focus on the most important aspects of the KD, without comment.
Chapter One: The Moment of Truth
This chapter sets the context in which the KD is written. The time has come (September 1985) to act on the situation. The Greek term kairos / καιρος (meaning 'special moment' in this context) was chosen as a key term to describe the highly situational nature of this document. It was addressed to the churches in the context of South Africa at that very moment, and was meant to be understood as a process rather than a definitive statement, "... this was an open-ended document which will never be said to be final" (KD, Preface). The document is primarily addressed to the divided churches; divided, that is, due to the roles that Christians within the churches play in the conflict between the racist minority government and the black majority population. "Both oppressor and oppressed claim loyalty to the same church". The KD theologians see three broad theological positions within the church, which are discussed in turn in the next three chapters.
Chapter Two: Critique of 'State Theology'
'State theology' is defined as, "the theological justification of the status quo with its racism, capitalism and totalitarianism... It does [this] by misusing theological concepts and biblical texts for its own political purposes". The government, as well as parts of the church, are accused of using state theology. Four examples are discussed.
Romans 13:1-7
"'State Theology' assumes that in this text Paul is presenting us with the absolute and definitive Christian doctrine about the State ... and absolute and universal principle ... The falseness of this assumption has been pointed out by many biblical scholars". Reference is made to Käsemann's Commentary on Romans, as well as Cullmann's The State in the New Testament.
The KD authors insist that texts must be understood in their context: within a particular writing (here: Romans); within the Bible as a whole; and within the particular historical context (here: Paul and the community in Rome). Note that, "In the rest of the Bible, God does not demand obedience to oppressive rulers ... cannot contradict all of this".
The letter that became the biblical book of Romans was sent to an early Christian community in Rome that could be characterized as 'antinomian' or 'enthusiast.' These Roman Christians thought that "because Jesus ... was their Lord and King," no earthly authority needed to be obeyed. Paul was arguing against such an understanding; that is, he is "not addressing the issue of a just or unjust State." Attention is drawn to ("the State is there for your benefit"): "That is the kind of State that must be obeyed." The question of an unjust government is not addressed in but, for example, in .
Law and Order
State theology implies that law and order must be upheld, but in the Apartheid State, the KD authors contend, this is an unjust law and order. "Anyone who wishes to change this law ... is made ... to feel guilty of sin". The KD theologians argue that the State has no divine authority to maintain any sort of law and order. The appeal to law and order is misplaced. Ultimately, it is God who must be obeyed ().
State theology further justifies the State's use of violence to maintain the status quo. Thus "state security becomes a more important concern than justice ... The State often admonishes church leaders ... not to 'meddle in politics' while at the same time it indulges in its own political theology which claims God's approval for its use of violence in maintaining an unjust system of 'law and order'".
The Threat of Communism
"Anything that threatens the status quo is labeled 'communist'... No account is taken of what communism really means ... Even people who have not rejected capitalism are called 'communists' when they reject 'State Theology.' The State uses the label ... as its symbol of evil". The State uses "threats and warnings about the horrors of a tyrannical, totalitarian, atheistic and terrorist communist regime" simply to scare people.
The God of the State
The Apartheid State often makes explicit use of the name of God to justify its own existence, most explicitly in the preamble to the (1983) Constitution of South Africa: "In humble submission to almighty God ... who gathered our forebears together from many lands and gave them this their own; who has guided them from generation to generation ..."
The KD theologians reject this categorically: "This god [of the State] is an idol ... [it is] the god of teargas, rubber bullets, sjamboks, prison cells and death sentences." In other words, "the very opposite of the God of the Bible", which is, theologically speaking "the devil disguised as Almighty God." Therefore, "State Theology is not only heretical, it is blasphemous ... What is particularly tragic for a Christian is to see the number of people who are fooled and confused by these false prophets and their heretical theology."
Chapter Three: Critique of 'Church Theology'
'Church Theology' is defined as the kind of theology evident in the public pronouncements of many church leaders in the so-called English-speaking churches of South Africa, such as the Anglicans, Methodists, and Lutherans. While such a theology tends to reject apartheid in principle, the KD theologians regard it as counter-productive and superficial, since it does not analyze "the signs of our times [but rather relies] upon a few stock ideas derived from Christian tradition," which are uncritically 'applied' to the then South African context.
Reconciliation
While true reconciliation and peace are the core of the Christian tradition, true reconciliation, the KD authors argue, is not possible without justice. Calls for reconciliation without justice are calls for "counterfeit reconciliation".
Such false reconciliation relies on the notion that the church must stand between 'both sides' and 'get them to reconcile,' as if all conflicts were alike. Some struggles, however, are between justice and injustice, and there blindly calling for reconciliation is "unChristian." Therefore, "no reconciliation, no forgiveness and no negotiations are possible without repentance". Yet the imposition of the brutal State of Emergency in July 1985 shows that there is no repentance.
Justice
The KD theologians acknowledge that the concept of justice is not absent from much Church Theology. Yet the KD accuses Church Theology of advocating a "justice of reforms," a justice of concessions that is determined by the oppressor. Hence "almost all Church statements are made to the State or to the white community".
At the heart of this approach, the KD sees the reliance on individual conversion as a moralizing approach directed at the individual Christian. Yet "the problem ... in South Africa is not merely a problem of personal guilt, it is a problem of structural injustice." The question one has to ask is: "Why does this [Church] theology not demand that the oppressed stand up for their rights and wage a struggle against their oppressors? Why does it not tell them that it is 'their' duty to work for justice and to change the unjust structures"?
Non-Violence
The KD questions the blanket condemnation of all "that is called violence," which "has been made into an absolute principle." This aspect of Church Theology tends to exclude state-organized, "structural, institutional and unrepentant violence of the State." Indeed, "is it legitimate ... to use the same word violence in a blanket condemnation to cover" the violence of the state and the "desperate attempts of the people to defend themselves"?
The KD observes that the term violence is used in the Bible to denote the violence of the oppressor (e.g. , etc.). "When Jesus says that we should turn the other cheek he is telling us that we must not take revenge; he is not saying that we should never defend ourselves or others".
"This is not to say that any use of force at any time by people who are oppressed is permissible..."; the problem with such acts of "killing and maiming" is, however, "based upon a concern for genuine liberation". While Church Theology tends to decry violent resistance, it tends to accept the militarization of the Apartheid State, which implies a tacit acceptance of the racist regime as legitimate authority.
Neutrality, in this context, is not possible: "Neutrality enables the status quo of oppression (and therefore violence) to continue".
The fundamental problem
According to the KD, Church Theology lacks appropriate social analysis: "It is not possible to make valid moral judgments about a society without first understanding that society". Secondly, it lacks "an adequate understanding of politics and political strategy," not because there is a "specifically Christian solution" as such, but because Christians need to make use of politics. The reasons for this are seen in the "type of faith and spirituality that has dominated Church life for centuries," namely an approach that has regarded spirituality as an "other-worldly affair," wherein God was relied upon to intervene "in God's own good time." Yet such a faith has "no foundation" in the Bible, which shows how God redeems all of creation (Romans 8:18-24): "Biblical faith is prophetically relevant to everything that happens in the world".
Chapter Four: Towards a Prophetic Theology
What would the alternative to State and Church Theology be? "What would be the characteristics of a prophetic theology"?
Prophetic Theology
In the first place, prophetic theology will have to be biblical: "Our KAIROS impels us to return to the Bible, and to search the Word of God for a message that is relevant to what we are experiencing in South Africa today". It does not "pretend to be comprehensive and complete;" it is consciously devised for this situation, and therefore needs to take seriously the need to read the "signs of the times" (). It is always a call to action, a call for "repentance, conversion and change". This will involve confrontation, taking a stand, and persecution. It is, nevertheless, fundamentally a "message of hope." It is spiritual: "Infused with a spirit of fearless[ness] ... courage ... love ... understanding ... joy and hope".
Suffering and Oppression in the Bible
Reading the Bible in this context, "what stands out for us is (sic) the many, many vivid and concrete descriptions of suffering and oppression" from Exodus to Revelation. Israel was often oppressed by both external and internal forces. "Their oppressors were their enemies. The people of Israel were in no doubt about that". Indeed, "people of the townships can identify fully with these descriptions of suffering". Nor is the concern about oppression only found in the Old Testament, even though the New Testament tends to focus on internal repression rather than the Roman occupying forces. "Throughout his life Jesus associated himself with the poor and the oppressed and as the suffering (or oppressed) servant of Yahweh he suffered and died for us. 'Ours were the sufferings he bore, ours the sorrows he carried.' () He continues to do so, even today".
Social Analysis
The KD offers "the broad outlines of an analysis of the conflict in which we find ourselves". This conflict is seen not so much as a 'racial war' but rather a situation "of tyranny and oppression". This is expressed in social structures which "will sooner or later bring the people involved into conflict." Those who benefit from this system will only make reform possible in order to maintain the essential status quo. On the other hand, those who do not benefit from the system have no say in it. The situation now is one where the oppressed are no longer prepared to accept this. "What they want is justice for all...".
Of course this social structure is more complex, but the KD authors come to this distinction: "Either we have full and equal justice for all or we don't". Prophetic theology, like Jesus', addresses this situation (e.g. ). "It is therefore not primarily a matter of trying to reconcile individual people but a matter of trying to change unjust structures so that people will not be pitted against one another as oppressor and oppressed".
Tyranny
In terms of the Christian tradition, the KD maintains, a tyrannical government has no moral right to govern, "and the people acquire the right to resist". The South African Apartheid government is tyrannical because it consistently demonstrates its hostility to the common good as a matter of principle. As a tyrannical regime, it uses terror to maintain power. As a result, the oppressed refer to it as 'the enemy'.
The Apartheid State is not capable of true reform; any reforms can only be superficial, since they are designed to ensure the survival of the white minority government. "A regime that has made itself the enemy of the people has thereby also made itself the enemy of God," even though at the level of the individual, people in government are not aware of this. This is, however, "no excuse for hatred. As Christians we are called upon to love our enemies" (Mt 5:44). However, "the most loving thing we can do for both the oppressed and for our enemies who are oppressors is to eliminate the oppression, remove the tyrants from power, and establish a just government for the common good for all the people".
Liberation and hope in the Bible
The Bible is commonly understood as a message of hope in the face of oppression; Yahweh will liberate the people (e.g. , , ). "Throughout the Bible, God appears as the liberator ... God is not neutral. He does not attempt to reconcile Moses and Pharaoh ...". Neither is Jesus "unconcerned about the rich ... These he calls to repentance ... We believe that God is at work in our world turning hopeless and evil situations to good so that God's Kingdom may come and God's Will may be done on earth as it is in heaven".
A message of hope
"The people need to hear it said again and again that God is with them and that 'the hope of the poor is never brought to nothing'" (). Also, while the oppressors must be called to repentance, "they must also be given something to hope for. At present they have false hopes ... Can the Christian message of hope not help them in this matter"?
Chapter Five: Challenge to Action
God sides with the oppressed
The Church's call to action must consider that the struggle against Apartheid is generally waged by the poor and oppressed, who are part of the church already.
Church unity is a matter of joining in the struggle. "For those Christians who find themselves on the side of the oppressor or sitting on the fence, [the way forward is] to cross over to the other side to be united in faith and action".
Liberation, however, it should be noted, does not come on a silver platter.
Participation in the struggle
"Criticism [of the way the struggle is being waged] will sometimes be necessary but encouragement and support will be (sic) also be necessary. In other words, ... move beyond a mere 'ambulance ministry' to a ministry of involvement and participation".
Transforming church action
The traditional life, ritual, and actions of the church must be re-envisaged in the light of the kairos. "The repentance we preach must be named. It is repentance for our share of the guilt for the suffering and oppression in our country".
Special campaigns
Special church action and campaigns must be in "consultation, co-ordination and co-operation" with the people's political organization, rather than a 'new, third force' that duplicates what already exists.
Civil disobedience
"In the first place, the Church cannot collaborate with tyranny... Secondly, the Church should not only pray for a change of government, it should also mobilize its members in every parish to begin to think and work and plan for a change of government in South Africa". At times, the KD contends, this will mean getting involved in civil disobedience.
Moral guidance
People look to the church for moral guidance, and this position of influence must be taken seriously. "There must be no misunderstanding about the moral duty of all who are oppressed to resist oppression and to struggle for liberation and justice. The Church will also find that at times it does need to curb excesses and to appeal to the consciences of those who act thoughtlessly and wildly".
Conclusion
"As we said in the beginning, there is nothing final about this document nor even about this second edition. Our hope is that it will continue to stimulate discussion, debate, reflection and prayer, but, above all, that it will lead to action ... We pray that God will help all of us to translate the challenge of our times into action."
Reactions
Although the Apartheid State was not directly addressed in the KD, the government reacted strongly against it. A government spokesperson rejected it in a speech in Parliament, denouncing it as a call for violence, and calling for its prohibition ('banning') by the government. An Inkatha political magazine, the Clarion Call, similarly attacked it as a theological document that supported the 'violence of the ANC' (African National Congress). However, to the surprise of many observers at the time, the KD was never banned by the Apartheid government.
Within the churches in South Africa, and indeed worldwide, the KD led to intense and often heated debates. The unpolished nature of this radical document allowed many critics to disengage from real debate. For example, Markus Barth and Helmut Blanke make a rather brief, disparaging remark, which seems to be based on a reading of the KD that is significantly at variance with its substance. In the KD, Barth and Blanke claim, "it is the starved, exploited, oppressed people whose cause, as it were, by definition is righteous, while all political, economical, and ecclesiastical wielders of institutionalized power are depicted as instruments of the devil."
A crucial part of the debate was the distinction made between state, church, and prophetic theology. The distinction between 'church' and 'prophetic' theology, where the former was explicitly rejected by the KD, caused furious debates. Many took issue with the KD's qualified criticism of central theological concepts like reconciliation.
Another aspect of this debate, especially in South Africa, was the question of violence: not the violence of the state, but the supposed use of violence to resist and indeed overthrow the state. As the summary above shows, this was not a central part of the KD, but it nevertheless became a focus for the debate. This focus on violence soon came to eclipse much of the rest of the KD. The publication of the book Theology & Violence is testimony to that debate; it attempted both to ground it in biblical, historical, ethical and theological reflections, and to 'move on', as Frank Chikane in his contribution to that book called it.
The influence and effect of the KD was such that attempts were made in a number of contexts to create similarly 'revolutionary' documents to challenge the churches' attitude to particular issues. None of these were remotely as successful as the KD. For example, in South Africa again, a group in the ICT attempted to address the sharply rising and complex violence in 1990 with a 'new Kairos document'. Several years later, some theologians in Europe tried to address global economics as 'the new Kairos'. Perhaps the most successful attempt to follow in the footsteps of the KD was the 'Latin American KD', called The Road to Damascus, written by Central American theologians and published in April 1988. However, the KD was successful in influencing black evangelicals and Pentecostals to come up with their own declarations in the context of Apartheid.
See also
Kairos Palestine
References
Notes
Bibliography
(Contains: The Kairos document; Kairos Central America: a challenge to the churches of the world; and The Road to Damascus: Kairos and conversion)
Further reading
External links
Opposition to apartheid in South Africa
Liberation theology
20th-century Christian texts
|
Verticordia auriculata is a flowering plant in the myrtle family, Myrtaceae, and is endemic to the south-west of Western Australia. It is a small, multi-branched shrub with small leaves and spikes of pink to magenta-coloured flowers in late spring to early summer, and it is widespread in the wheatbelt.
Description
Verticordia auriculata is a highly branched shrub with a single stem at the base, growing to a height of and a width of . Its leaves are broadly elliptic in shape, long, dished and have short hairs along their edges.
The flowers are scented and arranged in spike-like groups near the ends of the branches, each flower on a stalk long. The floral cup is top-shaped, long, and has 5 ribs and a pitted surface. The sepals are pale pink to magenta, long, with 4 or 5 feather-like lobes and prominent, silvery appendages. The petals are egg-shaped, pink to magenta, long, slightly rough to touch and have a thread-like fringe. The style is about long, S-shaped and has hairs about long. Flowering time is from October to January.
Taxonomy and naming
Verticordia auriculata was first formally described by Alex George in 1991 and the description was published in Nuytsia from specimens collected near Perenjori. The specific epithet (auriculata) is derived from a Latin word meaning "having ear-like appendages" referring to the appendages on the sepals.
George placed this species in subgenus Eperephes, section Verticordella along with V. pennigera, V. halophila, V. blepharophylla, V. lindleyi, V. carinata, V. drummondii, V. wonganensis, V. paludosa, V. luteola, V. attenuata, V. tumida, V. mitodes, V. centipeda, V. bifimbriata, V. pholidophylla, V. spicata and V. hughanii.
Distribution and habitat
This verticordia grows in sand, often over other substrates, often in association with other verticordias, in heath or shrubland. It is widespread in areas between Mullewa, Yalgoo, Moonijin and Mukinbudin in the Avon Wheatbelt, Geraldton Sandplains and Yalgoo biogeographic regions.
Conservation
Verticordia auriculata is classified as "not threatened" by the Western Australian Government Department of Parks and Wildlife.
Use in horticulture
In cultivation V. auriculata is usually a compact shrub with scented flowers, making it an attractive garden plant, but it has proven difficult to establish. It seems to prefer sand with some gravel added but will not tolerate phosphorus-containing fertiliser. Further experiments need to be undertaken to establish its requirements for horticulture. It has been propagated from seed and from cuttings but fungal diseases can cause problems for young plants.
References
auriculata
Rosids of Western Australia
Eudicots of Western Australia
Plants described in 1991
|
The Yugoslav Ice Hockey League was the top ice hockey league in the old Yugoslavia.
In 1939, Yugoslavia became a member of the International Ice Hockey Federation. That same year, the country held its first national championship, with Ilirija emerging as champion. For many years, teams from Slovenia dominated the league, although Serbian and Croatian teams were also successful during the late 1980s and early 1990s, just before the breakup of Yugoslavia.
The league folded in 1991, when the country split. Since then, Serbia, Croatia, Slovenia and Bosnia and Herzegovina have had their own national leagues.
Yugoslav League champions
1936–37 : Ilirija*
1937–38 : Ilirija*
1938–39 : Ilirija
1939–40 : Ilirija
1940–41 : Ilirija
1941–42 – 1945–46 : Not played
1946–47 : Mladost
1947–48 : Partizan
1948–49 : Mladost
1949–50 : Not played
1950–51 : Partizan
1951–52 : Partizan
1952–53 : Partizan
1953–54 : Partizan
1954–55 : Partizan
1955–56 : SD Zagreb
1956–57 : Jesenice
1957–58 : Jesenice
1958–59 : Jesenice
1959–60 : Jesenice
1960–61 : Jesenice
1961–62 : Jesenice
1962–63 : Jesenice
1963–64 : Jesenice
1964–65 : Jesenice
1965–66 : Jesenice
1966–67 : Jesenice
1967–68 : Jesenice
1968–69 : Jesenice
1969–70 : Jesenice
1970–71 : Jesenice
1971–72 : Olimpija
1972–73 : Jesenice
1973–74 : Olimpija
1974–75 : Olimpija
1975–76 : Olimpija
1976–77 : Jesenice
1977–78 : Jesenice
1978–79 : Olimpija
1979–80 : Olimpija
1980–81 : Jesenice
1981–82 : Jesenice
1982–83 : Olimpija
1983–84 : Olimpija
1984–85 : Jesenice
1985–86 : Partizan
1986–87 : Jesenice
1987–88 : Jesenice
1988–89 : Medveščak
1989–90 : Medveščak
1990–91 : Medveščak
* – by default as the only club in the competition.
Yugoslav Cup winners
1966 : Partizan
1967 : Jesenice
1968 : Jesenice
1969 : Olimpija
1970 : Jesenice
1971 : Jesenice
1972 : Olimpija
1973 : Jesenice
1974 : Jesenice
1975 : Olimpija
1976 : Jesenice
1977 : Jesenice
1978 – 1985 : Not played
1986 : Partizan
1987 : Olimpija
1988 : Medveščak
1989 : Medveščak
1990 : Medveščak
1991 : Medveščak
Teams
Below is a list of teams that had participated. A number of these participated for only a few seasons, while others participated for many.
Jesenice
Olimpija
Bled
Slavija
Kranjska Gora
Celje
Maribor
Tivoli
Gorenje Velenje
Prevalje
Partizan
Red Star
Beograd
Tašmajdan
Vojvodina
Spartak Subotica
Medveščak
SD Zagreb
Mladost
Sisak
Varaždin
Karlovac
Bosna
Makoteks Skopje
Vardar Skopje
See also
Slohokej League
Slovenian Ice Hockey League
Serbian Hockey League
Croatian Ice Hockey League
Bosnia and Herzegovina Hockey League
Panonian League
References
Book Total Hockey 2nd Edition (2000), Dan Diamond, Total Sports Publishing
External links
AZ Hockey – Yugoslavia
Ice hockey competitions in Yugoslavia
Defunct ice hockey leagues in Europe
Ice hockey leagues in Croatia
Ice hockey leagues in Serbia
Ice hockey leagues in Slovenia
Ice hockey in Bosnia and Herzegovina
1939 establishments in Yugoslavia
1991 disestablishments in Yugoslavia
Sports leagues established in 1939
Sports leagues disestablished in 1991
|
4-Vinylbenzyl chloride is an organic compound with the formula ClCH2C6H4CH=CH2. It is a bifunctional molecule, featuring both a vinyl group and a benzylic chloride group. It is a colorless liquid that is typically stored with a stabilizer to suppress polymerization.
In combination with styrene, vinylbenzyl chloride is used as a comonomer in the production of chloromethylated polystyrene. It is produced by the chlorination of vinyltoluene. Often vinyltoluene consists of a mixture of 3- and 4-vinyl isomers, in which case the vinylbenzyl chloride will also be produced as a mixture of isomers.
References
Monomers
Vinylbenzenes
Organochlorides
|
```swift
// Sources/SwiftProtobuf/BinaryDecodingError.swift - Protobuf binary decoding errors
//
//
// See LICENSE.txt for license information:
// path_to_url
//
// your_sha256_hash-------------
///
/// Protobuf binary format decoding errors
///
// your_sha256_hash-------------
/// Describes errors that can occur when decoding a message from binary format.
public enum BinaryDecodingError: Error {
/// Extraneous data remained after decoding should have been complete.
case trailingGarbage
/// The decoder unexpectedly reached the end of the data before it was
/// expected.
case truncated
/// A string field was not encoded as valid UTF-8.
case invalidUTF8
/// The binary data was malformed in some way, such as an invalid wire format
/// or field tag.
case malformedProtobuf
/// The definition of the message or one of its nested messages has required
/// fields but the binary data did not include values for them. You must pass
/// `partial: true` during decoding if you wish to explicitly ignore missing
/// required fields.
case missingRequiredFields
/// An internal error happened while decoding. If this is ever encountered,
/// please file an issue with SwiftProtobuf with as much details as possible
/// for what happened (proto definitions, bytes being decoded (if possible)).
case internalExtensionError
/// Reached the nesting limit for messages within messages while decoding.
case messageDepthLimit
}
```
|
```c++
/// @ref core
/// @file glm/detail/type_mat3x4.hpp
#pragma once
#include "../fwd.hpp"
#include "type_vec3.hpp"
#include "type_vec4.hpp"
#include "type_mat.hpp"
#include <limits>
#include <cstddef>
namespace glm
{
template<typename T, qualifier Q>
struct mat<3, 4, T, Q>
{
typedef vec<4, T, Q> col_type;
typedef vec<3, T, Q> row_type;
typedef mat<3, 4, T, Q> type;
typedef mat<4, 3, T, Q> transpose_type;
typedef T value_type;
private:
col_type value[3];
public:
// -- Accesses --
typedef length_t length_type;
GLM_FUNC_DECL static GLM_CONSTEXPR length_type length() { return 3; }
GLM_FUNC_DECL col_type & operator[](length_type i);
GLM_FUNC_DECL col_type const& operator[](length_type i) const;
// -- Constructors --
GLM_FUNC_DECL GLM_CONSTEXPR_CTOR_CXX14 mat() GLM_DEFAULT_CTOR;
GLM_FUNC_DECL GLM_CONSTEXPR_CTOR_CXX14 mat(mat<3, 4, T, Q> const& m) GLM_DEFAULT;
template<qualifier P>
GLM_FUNC_DECL GLM_CONSTEXPR_CTOR_CXX14 mat(mat<3, 4, T, P> const& m);
GLM_FUNC_DECL explicit GLM_CONSTEXPR_CTOR_CXX14 mat(T scalar);
GLM_FUNC_DECL GLM_CONSTEXPR_CTOR_CXX14 mat(
T x0, T y0, T z0, T w0,
T x1, T y1, T z1, T w1,
T x2, T y2, T z2, T w2);
GLM_FUNC_DECL GLM_CONSTEXPR_CTOR_CXX14 mat(
col_type const& v0,
col_type const& v1,
col_type const& v2);
// -- Conversions --
template<
typename X1, typename Y1, typename Z1, typename W1,
typename X2, typename Y2, typename Z2, typename W2,
typename X3, typename Y3, typename Z3, typename W3>
GLM_FUNC_DECL GLM_CONSTEXPR_CTOR_CXX14 mat(
X1 x1, Y1 y1, Z1 z1, W1 w1,
X2 x2, Y2 y2, Z2 z2, W2 w2,
X3 x3, Y3 y3, Z3 z3, W3 w3);
template<typename V1, typename V2, typename V3>
GLM_FUNC_DECL GLM_CONSTEXPR_CTOR_CXX14 mat(
vec<4, V1, Q> const& v1,
vec<4, V2, Q> const& v2,
vec<4, V3, Q> const& v3);
// -- Matrix conversions --
template<typename U, qualifier P>
GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR_CTOR_CXX14 mat(mat<3, 4, U, P> const& m);
GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR_CTOR_CXX14 mat(mat<2, 2, T, Q> const& x);
GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR_CTOR_CXX14 mat(mat<3, 3, T, Q> const& x);
GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR_CTOR_CXX14 mat(mat<4, 4, T, Q> const& x);
GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR_CTOR_CXX14 mat(mat<2, 3, T, Q> const& x);
GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR_CTOR_CXX14 mat(mat<3, 2, T, Q> const& x);
GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR_CTOR_CXX14 mat(mat<2, 4, T, Q> const& x);
GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR_CTOR_CXX14 mat(mat<4, 2, T, Q> const& x);
GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR_CTOR_CXX14 mat(mat<4, 3, T, Q> const& x);
// -- Unary arithmetic operators --
GLM_FUNC_DECL GLM_CONSTEXPR_CXX14 mat<3, 4, T, Q> & operator=(mat<3, 4, T, Q> const& m) GLM_DEFAULT;
template<typename U>
GLM_FUNC_DECL GLM_CONSTEXPR_CXX14 mat<3, 4, T, Q> & operator=(mat<3, 4, U, Q> const& m);
template<typename U>
GLM_FUNC_DECL mat<3, 4, T, Q> & operator+=(U s);
template<typename U>
GLM_FUNC_DECL mat<3, 4, T, Q> & operator+=(mat<3, 4, U, Q> const& m);
template<typename U>
GLM_FUNC_DECL mat<3, 4, T, Q> & operator-=(U s);
template<typename U>
GLM_FUNC_DECL mat<3, 4, T, Q> & operator-=(mat<3, 4, U, Q> const& m);
template<typename U>
GLM_FUNC_DECL mat<3, 4, T, Q> & operator*=(U s);
template<typename U>
GLM_FUNC_DECL mat<3, 4, T, Q> & operator/=(U s);
// -- Increment and decrement operators --
GLM_FUNC_DECL mat<3, 4, T, Q> & operator++();
GLM_FUNC_DECL mat<3, 4, T, Q> & operator--();
GLM_FUNC_DECL mat<3, 4, T, Q> operator++(int);
GLM_FUNC_DECL mat<3, 4, T, Q> operator--(int);
};
// -- Unary operators --
template<typename T, qualifier Q>
GLM_FUNC_DECL mat<3, 4, T, Q> operator+(mat<3, 4, T, Q> const& m);
template<typename T, qualifier Q>
GLM_FUNC_DECL mat<3, 4, T, Q> operator-(mat<3, 4, T, Q> const& m);
// -- Binary operators --
template<typename T, qualifier Q>
GLM_FUNC_DECL mat<3, 4, T, Q> operator+(mat<3, 4, T, Q> const& m, T scalar);
template<typename T, qualifier Q>
GLM_FUNC_DECL mat<3, 4, T, Q> operator+(mat<3, 4, T, Q> const& m1, mat<3, 4, T, Q> const& m2);
template<typename T, qualifier Q>
GLM_FUNC_DECL mat<3, 4, T, Q> operator-(mat<3, 4, T, Q> const& m, T scalar);
template<typename T, qualifier Q>
GLM_FUNC_DECL mat<3, 4, T, Q> operator-(mat<3, 4, T, Q> const& m1, mat<3, 4, T, Q> const& m2);
template<typename T, qualifier Q>
GLM_FUNC_DECL mat<3, 4, T, Q> operator*(mat<3, 4, T, Q> const& m, T scalar);
template<typename T, qualifier Q>
GLM_FUNC_DECL mat<3, 4, T, Q> operator*(T scalar, mat<3, 4, T, Q> const& m);
template<typename T, qualifier Q>
GLM_FUNC_DECL typename mat<3, 4, T, Q>::col_type operator*(mat<3, 4, T, Q> const& m, typename mat<3, 4, T, Q>::row_type const& v);
template<typename T, qualifier Q>
GLM_FUNC_DECL typename mat<3, 4, T, Q>::row_type operator*(typename mat<3, 4, T, Q>::col_type const& v, mat<3, 4, T, Q> const& m);
template<typename T, qualifier Q>
GLM_FUNC_DECL mat<4, 4, T, Q> operator*(mat<3, 4, T, Q> const& m1, mat<4, 3, T, Q> const& m2);
template<typename T, qualifier Q>
GLM_FUNC_DECL mat<2, 4, T, Q> operator*(mat<3, 4, T, Q> const& m1, mat<2, 3, T, Q> const& m2);
template<typename T, qualifier Q>
GLM_FUNC_DECL mat<3, 4, T, Q> operator*(mat<3, 4, T, Q> const& m1, mat<3, 3, T, Q> const& m2);
template<typename T, qualifier Q>
GLM_FUNC_DECL mat<3, 4, T, Q> operator/(mat<3, 4, T, Q> const& m, T scalar);
template<typename T, qualifier Q>
GLM_FUNC_DECL mat<3, 4, T, Q> operator/(T scalar, mat<3, 4, T, Q> const& m);
// -- Boolean operators --
template<typename T, qualifier Q>
GLM_FUNC_DECL bool operator==(mat<3, 4, T, Q> const& m1, mat<3, 4, T, Q> const& m2);
template<typename T, qualifier Q>
GLM_FUNC_DECL bool operator!=(mat<3, 4, T, Q> const& m1, mat<3, 4, T, Q> const& m2);
}//namespace glm
#ifndef GLM_EXTERNAL_TEMPLATE
#include "type_mat3x4.inl"
#endif
```
|
Rapid Redux (foaled in Kentucky on February 24, 2006) is an American Thoroughbred racehorse who set an American record with 22 consecutive wins in 2012. The winter-born gelding was his sire Pleasantly Perfect's first runner.
As a yearling, Rapid Redux was sold for $85,000 at Keeneland's September 2007 sale. His best finish at two was a distant third in the three-horse Tyro Stakes at Monmouth Park. By three, he was running in claiming races.
Robert Cole, a Baltimore County native, claimed Rapid Redux at Penn National Race Course for $6,250 on October 13, 2010. The horse's win streak began on December 2, 2010, at the same track.
Trained by David J. Wells (based at Penn National), Rapid Redux won races at seven different tracks during the streak, at distances from five furlongs to 1 1/8 miles, under seven different riders.
2011
Rapid Redux won 19 straight races in 2011, continuing his streak that began in 2010 and reached 22 races in a row in January 2012. The gelding's running style is to get quickly in front and stay there.
By winning 19 races in a single season, the chestnut gelding equaled Citation's modern-day United States record; he also matched the 19-race win streaks of Pepper's Pride and Zenyatta. But while Citation was an American Triple Crown winner and is now in the Hall of Fame, Zenyatta ran in graded stakes races for all but her first two starts, and Pepper's Pride won stakes races (though only in her birth state of New Mexico), Rapid Redux's accomplishments came in starter allowance races.
A day after it was announced that the now 6-year-old chestnut would receive the Eclipse Special Award for his accomplishments in 2011, he was named winner of the Secretariat Vox Populi Award ("Voice of the People"). The award, established by Penny Chenery in 2010 and first given to Zenyatta, recognizes a horse who has added in great measure to the sport of horse racing, distinguishing himself or herself by reaching out to the public in a positive and popular way. Four horses were considered for this honor: Havre de Grace, Uncle Mo, Goldikova and Rapid Redux. The winner was determined by the public’s online poll results, opinions offered by the Vox Populi Committee, and input from Chenery. Rapid Redux received 39% of the votes.
In awarding Rapid Redux, Mrs. Chenery said: "Thousands of races are staged in America every year for every level of competition and thousands of horses win them, but rarely the same horse. So for one horse to beat his field 19 times in one year is flat out phenomenal..."
On November 21, 2011, at Mountaineer Park, Rapid Redux established the North American record of 20 wins in a row.
2012
Rapid Redux began 2012 at Laurel Park by winning his 22nd straight race, after which he was retired. In March 2012, the Maryland Horse Industry Board honored Rapid Redux with its “Touch of Class” Award. In May 2012, he arrived at Old Friends Equine in Georgetown, Kentucky.
See also
List of leading Thoroughbred racehorses
Notes and references
External links
Rapid Redux, pedigree & stats
Rapid Redux's Facebook page
Rapid Redux in his 22nd win
2006 racehorse births
Racehorses trained in the United States
Racehorses bred in Kentucky
Eclipse Award winners
Thoroughbred family 2-d
Old Friends Equine Retirement
|
The Cobra Event is a 1998 thriller novel by Richard Preston describing an attempted bioterrorism attack on the United States. The perpetrator of the attack has genetically engineered a virus, called "Cobra", that fuses the incurable and highly contagious common cold with one of the world's most virulent diseases, smallpox. The disease that results from the virus, called brainpox in the novel, has symptoms that mimic those of Lesch–Nyhan syndrome and the common cold. The book is divided between descriptions of the virus and the government's attempt to stop the imminent threat posed by it.
Plot summary
The book is divided into six sections. The first section, called "Trial", starts with a teenaged girl named Kate Moran who dies violently one day in school. The next section, titled "1969", describes tests done in the 1960s by the U.S. government involving weaponized viruses. The third section, "Diagnosis", describes the autopsy of Kate Moran and introduces the key characters of Dr. Alice Austen, Mark Littleberry, and Will Hopkins. The last three sections—"Decision", "Reachdeep", and "The Operation"—describe these three characters' journeys to discover the source of the lethal Cobra virus.
"Cobra" and its effects
The virus described in the novel is a fictional chimera that attacks the human brain. The infective agent, code-named "Cobra" by the protagonists, is a recombinant virus made from modified variants of the nuclear polyhedrosis virus (which normally infects moths and butterflies), rhinovirus, and smallpox.
The infection initially presents common cold-like symptoms and a characteristic blistering process in the nose and mouth, before invading the nervous system. Although not as contagious as the influenza virus, it spreads rapidly through the same vectors as the common cold, mainly via airborne particulate matter coming into contact with the mucous membranes of the respiratory system. Although tussis (coughing) is the primary source of these particles, the inclusion of nuclear polyhedrosis virus allows Cobra to form crystals, which can be easily processed into a fine powder. This powder is used as a delivery mechanism in the novel's terrorist attacks.
The optic nerves of the eye, accessed through the eyelid, and olfactory nerves of the nose provide a direct pathway for the neurotropic Cobra virus to spread to the central nervous system, where the virus takes root. Once present in brain matter, the virus begins to replicate exponentially. Infected brain cells experience growth of viral crystals in their nuclei, eventually leading to lysis of the cell. The brain stem, which controls the basic functions of life, is heavily affected by this growth. The Cobra virus also knocks out the gene for the enzyme hypoxanthine-guanine phosphoribosyltransferase, whose absence causes Lesch–Nyhan syndrome. As a result, patients experience both autocannibalistic urges and increased aggression toward others. These neurological symptoms develop within a matter of hours, eventually resulting in death due to the severe damage to the brain stem. Cobra is so aggressive in its growth that autopsies reveal a brain almost liquefied by cell lysis. Cobra is shown to have a case fatality rate of almost 98%, with only one survivor out of 43 people infected.
Impact of the book
President Bill Clinton was reportedly sufficiently impressed by the scenarios described in the book that he asked aides and officials for closer study and suggested more funding for research into bioterrorism threats. However, some variation exists in the assorted accounts of this episode in his administration, about his degree of concern, who was asked to help, the depth of inquiry, the formal status of his orders, and the magnitude of expense involved.
References
Biological warfare
1998 American novels
American thriller novels
Novels by Richard Preston
Fictional viruses
Fictional microorganisms
Bioterrorism in fiction
Novels about mass murder
|
```rust
use std::cell::RefCell;
use std::cmp::{max, min};
use std::iter;
use std::ops::Range;
use serde_json::Value;
use crate::annotations::{AnnotationStore, Annotations, ToAnnotation};
use crate::client::{Client, Update, UpdateOp};
use crate::edit_types::ViewEvent;
use crate::find::{Find, FindStatus};
use crate::line_cache_shadow::{self, LineCacheShadow, RenderPlan, RenderTactic};
use crate::line_offset::LineOffset;
use crate::linewrap::{InvalLines, Lines, VisualLine, WrapWidth};
use crate::movement::{region_movement, selection_movement, Movement};
use crate::plugins::PluginId;
use crate::rpc::{FindQuery, GestureType, MouseAction, SelectionGranularity, SelectionModifier};
use crate::selection::{Affinity, InsertDrift, SelRegion, Selection};
use crate::styles::{Style, ThemeStyleMap};
use crate::tabs::{BufferId, Counter, ViewId};
use crate::width_cache::WidthCache;
use crate::word_boundaries::WordCursor;
use xi_rope::spans::Spans;
use xi_rope::{Cursor, Interval, LinesMetric, Rope, RopeDelta};
use xi_trace::trace_block;
type StyleMap = RefCell<ThemeStyleMap>;
/// A flag used to indicate when legacy actions should modify selections
const FLAG_SELECT: u64 = 2;
/// Size of batches as number of bytes used during incremental find.
const FIND_BATCH_SIZE: usize = 500000;
/// A view to a buffer. It is the buffer plus additional information
/// like line breaks and selection state.
pub struct View {
view_id: ViewId,
buffer_id: BufferId,
/// Tracks whether this view has been scheduled to render.
/// We attempt to reduce duplicate renders by setting a small timeout
/// after an edit is applied, to allow batching with any plugin updates.
pending_render: bool,
size: Size,
/// The selection state for this view. Invariant: non-empty.
selection: Selection,
drag_state: Option<DragState>,
/// vertical scroll position
first_line: usize,
/// height of visible portion
height: usize,
lines: Lines,
/// Front end's line cache state for this view. See the `LineCacheShadow`
/// description for the invariant.
lc_shadow: LineCacheShadow,
/// New offset to be scrolled into position after an edit.
scroll_to: Option<usize>,
/// The state for finding text for this view.
/// Each instance represents a separate search query.
find: Vec<Find>,
/// Tracks the IDs for additional search queries in find.
find_id_counter: Counter,
/// Tracks whether there have been changes in find results or find parameters.
/// This is used to determine whether FindStatus should be sent to the frontend.
find_changed: FindStatusChange,
/// Tracks the progress of incremental find.
find_progress: FindProgress,
/// Tracks whether find highlights should be rendered.
/// Highlights are only rendered when search dialog is open.
highlight_find: bool,
/// The state for replacing matches for this view.
replace: Option<Replace>,
/// Tracks whether the replacement string or replace parameters changed.
replace_changed: bool,
/// Annotations provided by plugins.
annotations: AnnotationStore,
}
/// Indicates what changed in the find state.
#[derive(PartialEq, Debug)]
enum FindStatusChange {
/// None of the find parameters or number of matches changed.
None,
/// Find parameters and number of matches changed.
All,
/// Only number of matches changed
Matches,
}
/// Indicates the state of the incremental find process.
#[derive(PartialEq, Debug, Clone)]
enum FindProgress {
/// Incremental find is done/not running.
Ready,
/// The find process just started.
Started,
/// Incremental find is in progress. Keeps track of the already-searched range.
InProgress(Range<usize>),
}
/// Contains replacement string and replace options.
#[derive(Debug, Default, PartialEq, Serialize, Deserialize, Clone)]
pub struct Replace {
/// Replacement string.
pub chars: String,
pub preserve_case: bool,
}
/// A size, in pixel units (not display pixels).
#[derive(Debug, Default, PartialEq, Serialize, Deserialize, Clone)]
pub struct Size {
pub width: f64,
pub height: f64,
}
/// State required to resolve a drag gesture into a selection.
struct DragState {
/// All the selection regions other than the one being dragged.
base_sel: Selection,
/// Start of the region selected when drag was started (region is
/// assumed to be forward).
min: usize,
/// End of the region selected when drag was started.
max: usize,
granularity: SelectionGranularity,
}
impl View {
pub fn new(view_id: ViewId, buffer_id: BufferId) -> View {
View {
view_id,
buffer_id,
pending_render: false,
selection: SelRegion::caret(0).into(),
scroll_to: Some(0),
size: Size::default(),
drag_state: None,
first_line: 0,
height: 10,
lines: Lines::default(),
lc_shadow: LineCacheShadow::default(),
find: Vec::new(),
find_id_counter: Counter::default(),
find_changed: FindStatusChange::None,
find_progress: FindProgress::Ready,
highlight_find: false,
replace: None,
replace_changed: false,
annotations: AnnotationStore::new(),
}
}
pub(crate) fn get_buffer_id(&self) -> BufferId {
self.buffer_id
}
pub(crate) fn get_view_id(&self) -> ViewId {
self.view_id
}
pub(crate) fn get_lines(&self) -> &Lines {
&self.lines
}
pub(crate) fn get_replace(&self) -> Option<Replace> {
self.replace.clone()
}
pub(crate) fn set_has_pending_render(&mut self, pending: bool) {
self.pending_render = pending
}
pub(crate) fn has_pending_render(&self) -> bool {
self.pending_render
}
pub(crate) fn update_wrap_settings(&mut self, text: &Rope, wrap_cols: usize, word_wrap: bool) {
let wrap_width = match (word_wrap, wrap_cols) {
(true, _) => WrapWidth::Width(self.size.width),
(false, 0) => WrapWidth::None,
(false, cols) => WrapWidth::Bytes(cols),
};
self.lines.set_wrap_width(text, wrap_width);
}
pub(crate) fn needs_more_wrap(&self) -> bool {
!self.lines.is_converged()
}
pub(crate) fn needs_wrap_in_visible_region(&self, text: &Rope) -> bool {
if self.lines.is_converged() {
false
} else {
let visible_region = self.interval_of_visible_region(text);
self.lines.interval_needs_wrap(visible_region)
}
}
pub(crate) fn find_in_progress(&self) -> bool {
matches!(self.find_progress, FindProgress::InProgress(_) | FindProgress::Started)
}
pub(crate) fn do_edit(&mut self, text: &Rope, cmd: ViewEvent) {
use self::ViewEvent::*;
match cmd {
Move(movement) => self.do_move(text, movement, false),
ModifySelection(movement) => self.do_move(text, movement, true),
SelectAll => self.select_all(text),
Scroll(range) => self.set_scroll(range.first, range.last),
AddSelectionAbove => self.add_selection_by_movement(text, Movement::UpExactPosition),
AddSelectionBelow => self.add_selection_by_movement(text, Movement::DownExactPosition),
Gesture { line, col, ty } => self.do_gesture(text, line, col, ty),
GotoLine { line } => self.goto_line(text, line),
Find { chars, case_sensitive, regex, whole_words } => {
let id = self.find.first().map(|q| q.id());
let query_changes = FindQuery { id, chars, case_sensitive, regex, whole_words };
self.set_find(text, [query_changes].to_vec())
}
MultiFind { queries } => self.set_find(text, queries),
FindNext { wrap_around, allow_same, modify_selection } => {
self.do_find_next(text, false, wrap_around, allow_same, &modify_selection)
}
FindPrevious { wrap_around, allow_same, modify_selection } => {
self.do_find_next(text, true, wrap_around, allow_same, &modify_selection)
}
FindAll => self.do_find_all(text),
Click(MouseAction { line, column, flags, click_count }) => {
// Deprecated (kept for client compatibility):
// should be removed in favor of do_gesture
warn!("Usage of click is deprecated; use do_gesture");
if (flags & FLAG_SELECT) != 0 {
self.do_gesture(
text,
line,
column,
GestureType::SelectExtend { granularity: SelectionGranularity::Point },
)
} else if click_count == Some(2) {
self.do_gesture(text, line, column, GestureType::WordSelect)
} else if click_count == Some(3) {
self.do_gesture(text, line, column, GestureType::LineSelect)
} else {
self.do_gesture(text, line, column, GestureType::PointSelect)
}
}
Drag(MouseAction { line, column, .. }) => {
warn!("Usage of drag is deprecated; use gesture instead");
self.do_gesture(text, line, column, GestureType::Drag)
}
CollapseSelections => self.collapse_selections(text),
HighlightFind { visible } => {
self.highlight_find = visible;
self.find_changed = FindStatusChange::All;
self.set_dirty(text);
}
SelectionForFind { case_sensitive } => self.do_selection_for_find(text, case_sensitive),
Replace { chars, preserve_case } => self.do_set_replace(chars, preserve_case),
SelectionForReplace => self.do_selection_for_replace(text),
SelectionIntoLines => self.do_split_selection_into_lines(text),
}
}
fn do_gesture(&mut self, text: &Rope, line: u64, col: u64, ty: GestureType) {
let line = line as usize;
let col = col as usize;
let offset = self.line_col_to_offset(text, line, col);
match ty {
GestureType::Select { granularity, multi } => {
self.select(text, offset, granularity, multi)
}
GestureType::SelectExtend { granularity } => {
self.extend_selection(text, offset, granularity)
}
GestureType::Drag => self.do_drag(text, offset, Affinity::default()),
_ => {
warn!("Deprecated gesture type sent to do_gesture method");
}
}
}
fn goto_line(&mut self, text: &Rope, line: u64) {
let offset = self.line_col_to_offset(text, line as usize, 0);
self.set_selection(text, SelRegion::caret(offset));
}
pub fn set_size(&mut self, size: Size) {
self.size = size;
}
pub fn set_scroll(&mut self, first: i64, last: i64) {
let first = max(first, 0) as usize;
let last = max(last, 0) as usize;
self.first_line = first;
self.height = last - first;
}
pub fn scroll_height(&self) -> usize {
self.height
}
fn scroll_to_cursor(&mut self, text: &Rope) {
let end = self.sel_regions().last().unwrap().end;
let line = self.line_of_offset(text, end);
if line < self.first_line {
self.first_line = line;
} else if self.first_line + self.height <= line {
self.first_line = line - (self.height - 1);
}
// We somewhat arbitrarily choose the last region for setting the old-style
// selection state, and for scrolling it into view if needed. This choice can
// likely be improved.
self.scroll_to = Some(end);
}
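// Illustrative sketch (hypothetical helper, not part of the real `View` API):
// the scroll-clamping rule used by `scroll_to_cursor` above, restated as a
// pure function so the three cases are easy to test in isolation. Assumes
// `height >= 1`, as the method above does.
fn demo_clamp_scroll(first_line: usize, height: usize, line: usize) -> usize {
    if line < first_line {
        // cursor is above the visible window: scroll up so its line is first
        line
    } else if first_line + height <= line {
        // cursor is below the window: scroll down so its line is the last visible
        line - (height - 1)
    } else {
        // cursor is already visible: keep the current scroll position
        first_line
    }
}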
/// Removes any selection present at the given offset.
/// Returns true if a selection was removed, false otherwise.
pub fn deselect_at_offset(&mut self, text: &Rope, offset: usize) -> bool {
if !self.selection.regions_in_range(offset, offset).is_empty() {
let mut sel = self.selection.clone();
sel.delete_range(offset, offset, true);
if !sel.is_empty() {
self.drag_state = None;
self.set_selection_raw(text, sel);
return true;
}
}
false
}
/// Move the selection by the given movement. Return value is the offset of
/// a point that should be scrolled into view.
///
/// If `modify` is `true`, the selections are modified, otherwise the results
/// of individual region movements become carets.
pub fn do_move(&mut self, text: &Rope, movement: Movement, modify: bool) {
self.drag_state = None;
let new_sel =
selection_movement(movement, &self.selection, self, self.scroll_height(), text, modify);
self.set_selection(text, new_sel);
}
/// Set the selection to a new value.
pub fn set_selection<S: Into<Selection>>(&mut self, text: &Rope, sel: S) {
self.set_selection_raw(text, sel.into());
self.scroll_to_cursor(text);
}
/// Sets the selection to a new value, without invalidating.
fn set_selection_for_edit(&mut self, text: &Rope, sel: Selection) {
self.selection = sel;
self.scroll_to_cursor(text);
}
/// Sets the selection to a new value, invalidating the line cache as needed.
/// This function does not perform any scrolling.
fn set_selection_raw(&mut self, text: &Rope, sel: Selection) {
self.invalidate_selection(text);
self.selection = sel;
self.invalidate_selection(text);
}
/// Invalidate the current selection. Note that we could be even more
/// fine-grained in the case of multiple cursors, but we also want this
/// method to be fast even when the selection is large.
fn invalidate_selection(&mut self, text: &Rope) {
// TODO: refine for upstream (caret appears on prev line)
let first_line = self.line_of_offset(text, self.selection.first().unwrap().min());
let last_line = self.line_of_offset(text, self.selection.last().unwrap().max()) + 1;
let all_caret = self.selection.iter().all(|region| region.is_caret());
let invalid = if all_caret {
line_cache_shadow::CURSOR_VALID
} else {
line_cache_shadow::CURSOR_VALID | line_cache_shadow::STYLES_VALID
};
self.lc_shadow.partial_invalidate(first_line, last_line, invalid);
}
fn add_selection_by_movement(&mut self, text: &Rope, movement: Movement) {
let mut sel = Selection::new();
for &region in self.sel_regions() {
sel.add_region(region);
let new_region =
region_movement(movement, region, self, self.scroll_height(), text, false);
sel.add_region(new_region);
}
self.set_selection(text, sel);
}
// TODO: insert from keyboard or input method shouldn't break undo group,
/// Invalidates the styles of the given range (start and end are offsets within
/// the text).
pub fn invalidate_styles(&mut self, text: &Rope, start: usize, end: usize) {
let first_line = self.line_of_offset(text, start);
let (mut last_line, last_col) = self.offset_to_line_col(text, end);
last_line += if last_col > 0 { 1 } else { 0 };
self.lc_shadow.partial_invalidate(first_line, last_line, line_cache_shadow::STYLES_VALID);
}
pub fn update_annotations(
&mut self,
plugin: PluginId,
interval: Interval,
annotations: Annotations,
) {
self.annotations.update(plugin, interval, annotations)
}
/// Select entire buffer.
///
/// Note: unlike movement based selection, this does not scroll.
pub fn select_all(&mut self, text: &Rope) {
let selection = SelRegion::new(0, text.len()).into();
self.set_selection_raw(text, selection);
}
/// Finds the unit of text containing the given offset.
fn unit(&self, text: &Rope, offset: usize, granularity: SelectionGranularity) -> Interval {
match granularity {
SelectionGranularity::Point => Interval::new(offset, offset),
SelectionGranularity::Word => {
let mut word_cursor = WordCursor::new(text, offset);
let (start, end) = word_cursor.select_word();
Interval::new(start, end)
}
SelectionGranularity::Line => {
let (line, _) = self.offset_to_line_col(text, offset);
let (start, end) = self.lines.logical_line_range(text, line);
Interval::new(start, end)
}
}
}
/// Selects text with a certain granularity and supports multi_selection
fn select(
&mut self,
text: &Rope,
offset: usize,
granularity: SelectionGranularity,
multi: bool,
) {
// If multi-select is enabled, toggle existing regions
if multi
&& granularity == SelectionGranularity::Point
&& self.deselect_at_offset(text, offset)
{
return;
}
let region = self.unit(text, offset, granularity).into();
let base_sel = match multi {
true => self.selection.clone(),
false => Selection::new(),
};
let mut selection = base_sel.clone();
selection.add_region(region);
self.set_selection(text, selection);
self.drag_state =
Some(DragState { base_sel, min: region.start, max: region.end, granularity });
}
/// Extends an existing selection (e.g. when the user performs Shift+click).
pub fn extend_selection(
&mut self,
text: &Rope,
offset: usize,
granularity: SelectionGranularity,
) {
if self.sel_regions().is_empty() {
return;
}
let (base_sel, last) = {
let mut base = Selection::new();
let (last, rest) = self.sel_regions().split_last().unwrap();
for &region in rest {
base.add_region(region);
}
(base, *last)
};
let mut sel = base_sel.clone();
self.drag_state =
Some(DragState { base_sel, min: last.start, max: last.start, granularity });
let start = (last.start, last.start);
let new_region = self.range_region(text, start, offset, granularity);
// TODO: small nit, merged region should be backward if end < start.
// This could be done by explicitly overriding, or by tweaking the
// merge logic.
sel.add_region(new_region);
self.set_selection(text, sel);
}
/// Splits current selections into lines.
fn do_split_selection_into_lines(&mut self, text: &Rope) {
let mut selection = Selection::new();
for region in self.selection.iter() {
if region.is_caret() {
selection.add_region(SelRegion::caret(region.max()));
} else {
let mut cursor = Cursor::new(text, region.min());
while cursor.pos() < region.max() {
let sel_start = cursor.pos();
let end_of_line = match cursor.next::<LinesMetric>() {
Some(end) if end >= region.max() => max(0, region.max() - 1),
Some(end) => max(0, end - 1),
None if cursor.pos() == text.len() => cursor.pos(),
_ => break,
};
selection.add_region(SelRegion::new(sel_start, end_of_line));
}
}
}
self.set_selection_raw(text, selection);
}
/// Does a drag gesture, setting the selection from a combination of the drag
/// state and new offset.
fn do_drag(&mut self, text: &Rope, offset: usize, affinity: Affinity) {
let new_sel = self.drag_state.as_ref().map(|drag_state| {
let mut sel = drag_state.base_sel.clone();
let start = (drag_state.min, drag_state.max);
let new_region = self.range_region(text, start, offset, drag_state.granularity);
sel.add_region(new_region.with_horiz(None).with_affinity(affinity));
sel
});
if let Some(sel) = new_sel {
self.set_selection(text, sel);
}
}
/// Creates a `SelRegion` for range select or drag operations.
pub fn range_region(
&self,
text: &Rope,
start: (usize, usize),
offset: usize,
granularity: SelectionGranularity,
) -> SelRegion {
let (min_start, max_start) = start;
let end = self.unit(text, offset, granularity);
let (min_end, max_end) = (end.start, end.end);
if offset >= min_start {
SelRegion::new(min_start, max_end)
} else {
SelRegion::new(max_start, min_end)
}
}
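// Hypothetical standalone restatement of the endpoint rule above (not part of
// the real `View` API): when the pointer is at or past the anchor unit's start,
// the selection runs forward from the anchor's min to the pointed-at unit's
// end; otherwise it runs from the anchor's max back to that unit's start.
fn demo_range_endpoints(
    start: (usize, usize), // (min_start, max_start) of the anchor unit
    unit: (usize, usize),  // (min_end, max_end) of the unit under the pointer
    offset: usize,         // current pointer offset
) -> (usize, usize) {
    let (min_start, max_start) = start;
    let (min_end, max_end) = unit;
    if offset >= min_start {
        (min_start, max_end) // forward drag
    } else {
        (max_start, min_end) // backward drag
    }
}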
/// Returns the regions of the current selection.
pub fn sel_regions(&self) -> &[SelRegion] {
&self.selection
}
/// Collapse all selections in this view into a single caret
pub fn collapse_selections(&mut self, text: &Rope) {
let mut sel = self.selection.clone();
sel.collapse();
self.set_selection(text, sel);
}
/// Determines whether the offset is in any selection (counting carets and
/// selection edges).
pub fn is_point_in_selection(&self, offset: usize) -> bool {
!self.selection.regions_in_range(offset, offset).is_empty()
}
// Encode a single line with its styles and cursors in JSON.
// If "text" is not specified, don't add "text" to the output.
// If "style_spans" are not specified, don't add "styles" to the output.
fn encode_line(
&self,
client: &Client,
styles: &StyleMap,
line: VisualLine,
text: Option<&Rope>,
style_spans: Option<&Spans<Style>>,
last_pos: usize,
) -> Value {
let start_pos = line.interval.start;
let pos = line.interval.end;
let mut cursors = Vec::new();
let mut selections = Vec::new();
for region in self.selection.regions_in_range(start_pos, pos) {
// cursor
let c = region.end;
if (c > start_pos && c < pos)
|| (!region.is_upstream() && c == start_pos)
|| (region.is_upstream() && c == pos)
|| (c == pos && c == last_pos)
{
cursors.push(c - start_pos);
}
// selection with interior
let sel_start_ix = clamp(region.min(), start_pos, pos) - start_pos;
let sel_end_ix = clamp(region.max(), start_pos, pos) - start_pos;
if sel_end_ix > sel_start_ix {
selections.push((sel_start_ix, sel_end_ix));
}
}
let mut hls = Vec::new();
if self.highlight_find {
for find in &self.find {
let mut cur_hls = Vec::new();
for region in find.occurrences().regions_in_range(start_pos, pos) {
let sel_start_ix = clamp(region.min(), start_pos, pos) - start_pos;
let sel_end_ix = clamp(region.max(), start_pos, pos) - start_pos;
if sel_end_ix > sel_start_ix {
cur_hls.push((sel_start_ix, sel_end_ix));
}
}
hls.push(cur_hls);
}
}
let mut result = json!({});
if let Some(text) = text {
result["text"] = json!(text.slice_to_cow(start_pos..pos));
}
if let Some(style_spans) = style_spans {
result["styles"] = json!(self.encode_styles(
client,
styles,
start_pos,
pos,
&selections,
&hls,
style_spans
));
}
if !cursors.is_empty() {
result["cursor"] = json!(cursors);
}
if let Some(line_num) = line.line_num {
result["ln"] = json!(line_num);
}
result
}
pub fn encode_styles(
&self,
client: &Client,
styles: &StyleMap,
start: usize,
end: usize,
sel: &[(usize, usize)],
hls: &Vec<Vec<(usize, usize)>>,
style_spans: &Spans<Style>,
) -> Vec<isize> {
let mut encoded_styles = Vec::new();
assert!(start <= end, "{} {}", start, end);
let style_spans = style_spans.subseq(Interval::new(start, end));
let mut ix = 0;
// we add the special find highlights (1 to N) and selection (0) styles first.
// We add selection after find because we want it to be preferred if the
// same span exists in both sets (as when there is an active selection)
for (index, cur_find_hls) in hls.iter().enumerate() {
for &(sel_start, sel_end) in cur_find_hls {
encoded_styles.push((sel_start as isize) - ix);
encoded_styles.push(sel_end as isize - sel_start as isize);
encoded_styles.push(index as isize + 1);
ix = sel_end as isize;
}
}
for &(sel_start, sel_end) in sel {
encoded_styles.push((sel_start as isize) - ix);
encoded_styles.push(sel_end as isize - sel_start as isize);
encoded_styles.push(0);
ix = sel_end as isize;
}
for (iv, style) in style_spans.iter() {
let style_id = self.get_or_def_style_id(client, styles, style);
encoded_styles.push((iv.start() as isize) - ix);
encoded_styles.push(iv.end() as isize - iv.start() as isize);
encoded_styles.push(style_id as isize);
ix = iv.end() as isize;
}
encoded_styles
}
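// Hypothetical minimal model of the triplet scheme used by `encode_styles`
// above (a sketch, not the real encoder): each span is emitted as
// [start relative to the previous span's end, length, style id], where find
// highlights use ids 1..=N and the selection uses id 0.
fn demo_encode_triplets(spans: &[(usize, usize)], style_id: isize) -> Vec<isize> {
    let mut out = Vec::new();
    let mut ix: isize = 0; // end offset of the previously encoded span
    for &(start, end) in spans {
        out.push(start as isize - ix);    // relative start (may be negative)
        out.push((end - start) as isize); // span length
        out.push(style_id);
        ix = end as isize;
    }
    out
}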
fn get_or_def_style_id(&self, client: &Client, style_map: &StyleMap, style: &Style) -> usize {
let mut style_map = style_map.borrow_mut();
if let Some(ix) = style_map.lookup(style) {
return ix;
}
let ix = style_map.add(style);
let style = style_map.merge_with_default(style);
client.def_style(&style.to_json(ix));
ix
}
fn send_update_for_plan(
&mut self,
text: &Rope,
client: &Client,
styles: &StyleMap,
style_spans: &Spans<Style>,
plan: &RenderPlan,
pristine: bool,
) {
// every time the current visible range changes, annotations are sent to the frontend
let start_off = self.offset_of_line(text, self.first_line);
let end_off = self.offset_of_line(text, self.first_line + self.height + 2);
let visible_range = Interval::new(start_off, end_off);
let selection_annotations =
self.selection.get_annotations(visible_range, self, text).to_json();
let find_annotations =
self.find.iter().map(|f| f.get_annotations(visible_range, self, text).to_json());
let plugin_annotations =
self.annotations.iter_range(self, text, visible_range).map(|a| a.to_json());
let annotations = iter::once(selection_annotations)
.chain(find_annotations)
.chain(plugin_annotations)
.collect::<Vec<_>>();
if !self.lc_shadow.needs_render(plan) {
let total_lines = self.line_of_offset(text, text.len()) + 1;
let update =
Update { ops: vec![UpdateOp::copy(total_lines, 1)], pristine, annotations };
client.update_view(self.view_id, &update);
return;
}
// send updated find status only if there have been changes
if self.find_changed != FindStatusChange::None {
let matches_only = self.find_changed == FindStatusChange::Matches;
client.find_status(self.view_id, &json!(self.find_status(text, matches_only)));
self.find_changed = FindStatusChange::None;
}
// send updated replace status if changed
if self.replace_changed {
if let Some(replace) = self.get_replace() {
client.replace_status(self.view_id, &json!(replace))
}
}
let mut b = line_cache_shadow::Builder::new();
let mut ops = Vec::new();
let mut line_num = 0; // tracks old line cache
for seg in self.lc_shadow.iter_with_plan(plan) {
match seg.tactic {
RenderTactic::Discard => {
ops.push(UpdateOp::invalidate(seg.n));
b.add_span(seg.n, 0, 0);
}
RenderTactic::Preserve | RenderTactic::Render => {
// Depending on the state of TEXT_VALID, STYLES_VALID and
// CURSOR_VALID, perform one of the following actions:
//
// - All the three are valid => send the "copy" op
// (+leading "skip" to catch up with "ln" to update);
//
// - Text and styles are valid, cursors are not => same,
// but send an "update" op instead of "copy" to move
// the cursors;
//
// - Text or styles are invalid:
// => send "invalidate" if RenderTactic is "Preserve";
// => send "skip"+"insert" (recreate the lines) if
// RenderTactic is "Render".
if (seg.validity & line_cache_shadow::TEXT_VALID) != 0
&& (seg.validity & line_cache_shadow::STYLES_VALID) != 0
{
let n_skip = seg.their_line_num - line_num;
if n_skip > 0 {
ops.push(UpdateOp::skip(n_skip));
}
let line_offset = self.offset_of_line(text, seg.our_line_num);
let logical_line = text.line_of_offset(line_offset);
if (seg.validity & line_cache_shadow::CURSOR_VALID) != 0 {
// ALL_VALID; copy lines as-is
ops.push(UpdateOp::copy(seg.n, logical_line + 1));
} else {
// !CURSOR_VALID; update cursors
let start_line = seg.our_line_num;
let encoded_lines = self
.lines
.iter_lines(text, start_line)
.take(seg.n)
.map(|l| {
self.encode_line(
client,
styles,
l,
/* text = */ None,
/* style_spans = */ None,
text.len(),
)
})
.collect::<Vec<_>>();
let logical_line_opt =
if logical_line == 0 { None } else { Some(logical_line + 1) };
ops.push(UpdateOp::update(encoded_lines, logical_line_opt));
}
b.add_span(seg.n, seg.our_line_num, seg.validity);
line_num = seg.their_line_num + seg.n;
} else if seg.tactic == RenderTactic::Preserve {
ops.push(UpdateOp::invalidate(seg.n));
b.add_span(seg.n, 0, 0);
} else if seg.tactic == RenderTactic::Render {
let start_line = seg.our_line_num;
let encoded_lines = self
.lines
.iter_lines(text, start_line)
.take(seg.n)
.map(|l| {
self.encode_line(
client,
styles,
l,
Some(text),
Some(style_spans),
text.len(),
)
})
.collect::<Vec<_>>();
debug_assert_eq!(encoded_lines.len(), seg.n);
ops.push(UpdateOp::insert(encoded_lines));
b.add_span(seg.n, seg.our_line_num, line_cache_shadow::ALL_VALID);
}
}
}
}
self.lc_shadow = b.build();
for find in &mut self.find {
find.set_hls_dirty(false)
}
let update = Update { ops, pristine, annotations };
client.update_view(self.view_id, &update);
}
/// Returns the current find status (number of results and search parameters)
/// for each query, to be sent to the frontend.
pub fn find_status(&self, text: &Rope, matches_only: bool) -> Vec<FindStatus> {
self.find
.iter()
.map(|find| find.find_status(self, text, matches_only))
.collect::<Vec<FindStatus>>()
}
/// Update front-end with any changes to view since the last time sent.
/// The `pristine` argument indicates whether or not the buffer has
/// unsaved changes.
pub fn render_if_dirty(
&mut self,
text: &Rope,
client: &Client,
styles: &StyleMap,
style_spans: &Spans<Style>,
pristine: bool,
) {
let height = self.line_of_offset(text, text.len()) + 1;
let plan = RenderPlan::create(height, self.first_line, self.height);
self.send_update_for_plan(text, client, styles, style_spans, &plan, pristine);
if let Some(new_scroll_pos) = self.scroll_to.take() {
let (line, col) = self.offset_to_line_col(text, new_scroll_pos);
client.scroll_to(self.view_id, line, col);
}
}
// Send the requested lines even if they're outside the current scroll region.
pub fn request_lines(
&mut self,
text: &Rope,
client: &Client,
styles: &StyleMap,
style_spans: &Spans<Style>,
first_line: usize,
last_line: usize,
pristine: bool,
) {
let height = self.line_of_offset(text, text.len()) + 1;
let mut plan = RenderPlan::create(height, self.first_line, self.height);
plan.request_lines(first_line, last_line);
self.send_update_for_plan(text, client, styles, style_spans, &plan, pristine);
}
/// Invalidates the front-end's entire line cache, forcing a full render at the next
/// update cycle. This should be a last resort; updates should generally cause
/// finer-grained invalidation.
pub fn set_dirty(&mut self, text: &Rope) {
let height = self.line_of_offset(text, text.len()) + 1;
let mut b = line_cache_shadow::Builder::new();
b.add_span(height, 0, 0);
b.set_dirty(true);
self.lc_shadow = b.build();
}
/// Returns the byte range of the currently visible lines.
fn interval_of_visible_region(&self, text: &Rope) -> Interval {
let start = self.offset_of_line(text, self.first_line);
let end = self.offset_of_line(text, self.first_line + self.height + 1);
Interval::new(start, end)
}
/// Generate line breaks, based on current settings. Currently batch-mode,
/// and currently in a debugging state.
pub(crate) fn rewrap(
&mut self,
text: &Rope,
width_cache: &mut WidthCache,
client: &Client,
spans: &Spans<Style>,
) {
let _t = trace_block("View::rewrap", &["core"]);
let visible = self.first_line..self.first_line + self.height;
let inval = self.lines.rewrap_chunk(text, width_cache, client, spans, visible);
if let Some(InvalLines { start_line, inval_count, new_count }) = inval {
self.lc_shadow.edit(start_line, start_line + inval_count, new_count);
}
}
/// Updates the view after the text has been modified by the given `delta`.
/// This method is responsible for updating the cursors, and also for
/// recomputing line wraps.
pub fn after_edit(
&mut self,
text: &Rope,
last_text: &Rope,
delta: &RopeDelta,
client: &Client,
width_cache: &mut WidthCache,
drift: InsertDrift,
) {
let visible = self.first_line..self.first_line + self.height;
match self.lines.after_edit(text, last_text, delta, width_cache, client, visible) {
Some(InvalLines { start_line, inval_count, new_count }) => {
self.lc_shadow.edit(start_line, start_line + inval_count, new_count);
}
None => self.set_dirty(text),
}
// Any edit cancels a drag. This is good behavior for edits initiated through
// the front-end, but perhaps not for async edits.
self.drag_state = None;
// all annotations that come after the edit need to be invalidated
let (iv, _) = delta.summary();
self.annotations.invalidate(iv);
// update only find highlights affected by change
for find in &mut self.find {
find.update_highlights(text, delta);
self.find_changed = FindStatusChange::All;
}
// Note: for committing plugin edits, we probably want to know the priority
// of the delta so we can set the cursor before or after the edit, as needed.
let new_sel = self.selection.apply_delta(delta, true, drift);
self.set_selection_for_edit(text, new_sel);
}
fn do_selection_for_find(&mut self, text: &Rope, case_sensitive: bool) {
// set last selection or word under current cursor as search query
let search_query = match self.selection.last() {
Some(region) => {
if !region.is_caret() {
text.slice_to_cow(region)
} else {
let (start, end) = {
let mut word_cursor = WordCursor::new(text, region.max());
word_cursor.select_word()
};
text.slice_to_cow(start..end)
}
}
_ => return,
};
self.set_dirty(text);
// set selection as search query for the first find if no additional search queries
// are used; otherwise add a new find with the selection as its search query
if self.find.len() != 1 {
self.add_find();
}
self.find.last_mut().unwrap().set_find(&search_query, case_sensitive, false, true);
self.find_progress = FindProgress::Started;
}
fn add_find(&mut self) {
let id = self.find_id_counter.next();
self.find.push(Find::new(id));
}
fn set_find(&mut self, text: &Rope, queries: Vec<FindQuery>) {
// checks if at least one query has been changed, otherwise we don't need to rerun find
let mut find_changed = queries.len() != self.find.len();
// remove deleted queries
self.find.retain(|f| queries.iter().any(|q| q.id == Some(f.id())));
for query in &queries {
let pos = match query.id {
Some(id) => {
// update existing query
match self.find.iter().position(|f| f.id() == id) {
Some(p) => p,
None => return,
}
}
None => {
// add new query
self.add_find();
self.find.len() - 1
}
};
if self.find[pos].set_find(
&query.chars.clone(),
query.case_sensitive,
query.regex,
query.whole_words,
) {
find_changed = true;
}
}
if find_changed {
self.set_dirty(text);
self.find_progress = FindProgress::Started;
}
}
pub fn do_find(&mut self, text: &Rope) {
let search_range = match &self.find_progress.clone() {
FindProgress::Started => {
// start incremental find on visible region
let start = self.offset_of_line(text, self.first_line);
let end = min(text.len(), start + FIND_BATCH_SIZE);
self.find_changed = FindStatusChange::Matches;
self.find_progress = FindProgress::InProgress(Range { start, end });
Some((start, end))
}
FindProgress::InProgress(searched_range) => {
if searched_range.start == 0 && searched_range.end >= text.len() {
// the entire text has been searched
// end find by executing multi-line regex queries on entire text
// stop incremental find
self.find_progress = FindProgress::Ready;
self.find_changed = FindStatusChange::All;
Some((0, text.len()))
} else {
self.find_changed = FindStatusChange::Matches;
// expand find to un-searched regions
let start_off = self.offset_of_line(text, self.first_line);
// If there is unsearched text before the visible region, we want to include it in this search operation
let search_preceding_range = start_off.saturating_sub(searched_range.start)
< searched_range.end.saturating_sub(start_off)
&& searched_range.start > 0;
if search_preceding_range || searched_range.end >= text.len() {
let start = searched_range.start.saturating_sub(FIND_BATCH_SIZE);
self.find_progress =
FindProgress::InProgress(Range { start, end: searched_range.end });
Some((start, searched_range.start))
} else if searched_range.end < text.len() {
let end = min(text.len(), searched_range.end + FIND_BATCH_SIZE);
self.find_progress =
FindProgress::InProgress(Range { start: searched_range.start, end });
Some((searched_range.end, end))
} else {
self.find_changed = FindStatusChange::All;
None
}
}
}
_ => {
self.find_changed = FindStatusChange::None;
None
}
};
if let Some((search_range_start, search_range_end)) = search_range {
for query in &mut self.find {
if !query.is_multiline_regex() {
query.update_find(text, search_range_start, search_range_end, true);
} else {
// only execute multi-line regex queries if we are searching the entire text (last step)
if search_range_start == 0 && search_range_end == text.len() {
query.update_find(text, search_range_start, search_range_end, true);
}
}
}
}
}
/// Selects the next find match.
pub fn do_find_next(
&mut self,
text: &Rope,
reverse: bool,
wrap: bool,
allow_same: bool,
modify_selection: &SelectionModifier,
) {
self.select_next_occurrence(text, reverse, false, allow_same, modify_selection);
if self.scroll_to.is_none() && wrap {
self.select_next_occurrence(text, reverse, true, allow_same, modify_selection);
}
}
/// Selects all find matches.
pub fn do_find_all(&mut self, text: &Rope) {
let mut selection = Selection::new();
for find in &self.find {
for &occurrence in find.occurrences().iter() {
selection.add_region(occurrence);
}
}
if !selection.is_empty() {
// todo: invalidate so that nothing selected is accidentally replaced
self.set_selection(text, selection);
}
}
/// Select the next occurrence relative to the last cursor. `reverse` determines whether the
/// next occurrence before (`true`) or after (`false`) the last cursor is selected. `wrapped`
/// indicates a search for the next occurrence past the end of the file.
pub fn select_next_occurrence(
&mut self,
text: &Rope,
reverse: bool,
wrapped: bool,
_allow_same: bool,
modify_selection: &SelectionModifier,
) {
let (cur_start, cur_end) = match self.selection.last() {
Some(sel) => (sel.min(), sel.max()),
_ => (0, 0),
};
// multiple queries; select closest occurrence
let closest_occurrence = self
.find
.iter()
.flat_map(|x| x.next_occurrence(text, reverse, wrapped, &self.selection))
.min_by_key(|x| match reverse {
true if x.end > cur_end => 2 * text.len() - x.end,
true => cur_end - x.end,
false if x.start < cur_start => x.start + text.len(),
false => x.start - cur_start,
});
if let Some(occ) = closest_occurrence {
match modify_selection {
SelectionModifier::Set => self.set_selection(text, occ),
SelectionModifier::Add => {
let mut selection = self.selection.clone();
selection.add_region(occ);
self.set_selection(text, selection);
}
SelectionModifier::AddRemovingCurrent => {
let mut selection = self.selection.clone();
if let Some(last_selection) = self.selection.last() {
if !last_selection.is_caret() {
selection.delete_range(
last_selection.min(),
last_selection.max(),
false,
);
}
}
selection.add_region(occ);
self.set_selection(text, selection);
}
_ => {}
}
}
}
fn do_set_replace(&mut self, chars: String, preserve_case: bool) {
self.replace = Some(Replace { chars, preserve_case });
self.replace_changed = true;
}
fn do_selection_for_replace(&mut self, text: &Rope) {
// set last selection or word under current cursor as replacement string
let replacement = match self.selection.last() {
Some(region) => {
if !region.is_caret() {
text.slice_to_cow(region)
} else {
let (start, end) = {
let mut word_cursor = WordCursor::new(text, region.max());
word_cursor.select_word()
};
text.slice_to_cow(start..end)
}
}
_ => return,
};
self.set_dirty(text);
self.do_set_replace(replacement.into_owned(), false);
}
pub fn get_caret_offset(&self) -> Option<usize> {
match self.selection.len() {
1 if self.selection[0].is_caret() => {
let offset = self.selection[0].start;
Some(offset)
}
_ => None,
}
}
}
impl View {
/// Exposed for benchmarking
#[doc(hidden)]
pub fn debug_force_rewrap_cols(&mut self, text: &Rope, cols: usize) {
use xi_rpc::test_utils::DummyPeer;
let spans: Spans<Style> = Spans::default();
let mut width_cache = WidthCache::new();
let client = Client::new(Box::new(DummyPeer));
self.update_wrap_settings(text, cols, false);
self.rewrap(text, &mut width_cache, &client, &spans);
}
}
impl LineOffset for View {
fn offset_of_line(&self, text: &Rope, line: usize) -> usize {
self.lines.offset_of_visual_line(text, line)
}
fn line_of_offset(&self, text: &Rope, offset: usize) -> usize {
self.lines.visual_line_of_offset(text, offset)
}
}
// utility function to clamp a value within the given range
fn clamp(x: usize, min: usize, max: usize) -> usize {
if x < min {
min
} else if x < max {
x
} else {
max
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::rpc::FindQuery;
#[test]
fn incremental_find_update() {
let mut view = View::new(1.into(), BufferId::new(2));
let mut s = String::new();
for _ in 0..(FIND_BATCH_SIZE - 2) {
s += "x";
}
s += "aaaaaa";
for _ in 0..(FIND_BATCH_SIZE) {
s += "x";
}
s += "aaaaaa";
assert_eq!(view.find_in_progress(), false);
let text = Rope::from(&s);
view.do_edit(
&text,
ViewEvent::Find {
chars: "aaaaaa".to_string(),
case_sensitive: false,
regex: false,
whole_words: false,
},
);
view.do_find(&text);
assert_eq!(view.find_in_progress(), true);
view.do_find_all(&text);
assert_eq!(view.sel_regions().len(), 1);
assert_eq!(
view.sel_regions().first(),
Some(&SelRegion::new(FIND_BATCH_SIZE - 2, FIND_BATCH_SIZE + 6 - 2))
);
view.do_find(&text);
assert_eq!(view.find_in_progress(), true);
view.do_find_all(&text);
assert_eq!(view.sel_regions().len(), 2);
}
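// Illustrative sketch (not part of the original suite) of how `do_find`
// decides which unsearched side to expand next: it prefers the region
// before the viewport when that region is closer to the visible offset
// than the remaining searched text ahead of it.
#[test]
fn find_expansion_direction_sketch() {
let prefers_preceding = |searched: std::ops::Range<usize>, start_off: usize| {
start_off.saturating_sub(searched.start) < searched.end.saturating_sub(start_off)
&& searched.start > 0
};
// viewport near the start of the searched window: expand backwards first
assert!(prefers_preceding(100..10_000, 150));
// searched window already begins at offset 0: nothing precedes it
assert!(!prefers_preceding(0..10_000, 150));
}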
#[test]
fn incremental_find_codepoint_boundary() {
let mut view = View::new(1.into(), BufferId::new(2));
let mut s = String::new();
for _ in 0..(FIND_BATCH_SIZE + 2) {
s += "£"; // multi-byte codepoint, so a batch boundary can fall inside it
}
assert_eq!(view.find_in_progress(), false);
let text = Rope::from(&s);
view.do_edit(
&text,
ViewEvent::Find {
chars: "a".to_string(),
case_sensitive: false,
regex: false,
whole_words: false,
},
);
view.do_find(&text);
assert_eq!(view.find_in_progress(), true);
view.do_find_all(&text);
assert_eq!(view.sel_regions().len(), 1); // cursor
}
#[test]
fn selection_for_find() {
let mut view = View::new(1.into(), BufferId::new(2));
let text = Rope::from("hello hello world\n");
view.set_selection(&text, SelRegion::new(6, 11));
view.do_edit(&text, ViewEvent::SelectionForFind { case_sensitive: false });
view.do_find(&text);
view.do_find_all(&text);
assert_eq!(view.sel_regions().len(), 2);
}
#[test]
fn find_next() {
let mut view = View::new(1.into(), BufferId::new(2));
let text = Rope::from("hello hello world\n");
view.do_edit(
&text,
ViewEvent::Find {
chars: "foo".to_string(),
case_sensitive: false,
regex: false,
whole_words: false,
},
);
view.do_find(&text);
view.do_find_next(&text, false, true, false, &SelectionModifier::Set);
assert_eq!(view.sel_regions().len(), 1);
assert_eq!(view.sel_regions().first(), Some(&SelRegion::new(0, 0))); // caret
view.do_edit(
&text,
ViewEvent::Find {
chars: "hello".to_string(),
case_sensitive: false,
regex: false,
whole_words: false,
},
);
view.do_find(&text);
assert_eq!(view.sel_regions().len(), 1);
view.do_find_next(&text, false, true, false, &SelectionModifier::Set);
assert_eq!(view.sel_regions().first(), Some(&SelRegion::new(0, 5)));
view.do_find_next(&text, false, true, false, &SelectionModifier::Set);
assert_eq!(view.sel_regions().first(), Some(&SelRegion::new(6, 11)));
view.do_find_next(&text, false, true, false, &SelectionModifier::Set);
assert_eq!(view.sel_regions().first(), Some(&SelRegion::new(0, 5)));
view.do_find_next(&text, true, true, false, &SelectionModifier::Set);
assert_eq!(view.sel_regions().first(), Some(&SelRegion::new(6, 11)));
view.do_find_next(&text, true, true, false, &SelectionModifier::Add);
assert_eq!(view.sel_regions().len(), 2);
view.do_find_next(&text, true, true, false, &SelectionModifier::AddRemovingCurrent);
assert_eq!(view.sel_regions().len(), 1);
view.do_find_next(&text, true, true, false, &SelectionModifier::None);
assert_eq!(view.sel_regions().len(), 1);
}
#[test]
fn find_all() {
let mut view = View::new(1.into(), BufferId::new(2));
let text = Rope::from("hello hello world\n hello!");
view.do_edit(
&text,
ViewEvent::Find {
chars: "foo".to_string(),
case_sensitive: false,
regex: false,
whole_words: false,
},
);
view.do_find(&text);
view.do_find_all(&text);
assert_eq!(view.sel_regions().len(), 1); // caret
view.do_edit(
&text,
ViewEvent::Find {
chars: "hello".to_string(),
case_sensitive: false,
regex: false,
whole_words: false,
},
);
view.do_find(&text);
view.do_find_all(&text);
assert_eq!(view.sel_regions().len(), 3);
view.do_edit(
&text,
ViewEvent::Find {
chars: "foo".to_string(),
case_sensitive: false,
regex: false,
whole_words: false,
},
);
view.do_find(&text);
view.do_find_all(&text);
assert_eq!(view.sel_regions().len(), 3);
}
#[test]
fn multi_queries_find_next() {
let mut view = View::new(1.into(), BufferId::new(2));
let text = Rope::from("hello hello world\n hello!");
let query1 = FindQuery {
id: None,
chars: "hello".to_string(),
case_sensitive: false,
regex: false,
whole_words: false,
};
let query2 = FindQuery {
id: None,
chars: "o world".to_string(),
case_sensitive: false,
regex: false,
whole_words: false,
};
view.do_edit(&text, ViewEvent::MultiFind { queries: vec![query1, query2] });
view.do_find(&text);
view.do_find_next(&text, false, true, false, &SelectionModifier::Set);
assert_eq!(view.sel_regions().first(), Some(&SelRegion::new(0, 5)));
view.do_find_next(&text, false, true, false, &SelectionModifier::Set);
assert_eq!(view.sel_regions().first(), Some(&SelRegion::new(6, 11)));
view.do_find_next(&text, false, true, false, &SelectionModifier::Set);
assert_eq!(view.sel_regions().first(), Some(&SelRegion::new(10, 17)));
}
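// Illustrative sketch (not from the original suite) of the wrap-around
// distance key that `select_next_occurrence` minimizes when `reverse` is
// true: matches ending before the cursor rank by proximity, while matches
// past it (reachable only by wrapping) are pushed behind all of them.
#[test]
fn reverse_wraparound_key_sketch() {
let text_len = 20usize;
let cur_end = 10usize;
let key = |end: usize| if end > cur_end { 2 * text_len - end } else { cur_end - end };
// the nearest preceding match beats a farther preceding match...
assert!(key(9) < key(4));
// ...and beats any match that would require wrapping past the cursor
assert!(key(9) < key(15));
}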
#[test]
fn multi_queries_find_all() {
let mut view = View::new(1.into(), BufferId::new(2));
let text = Rope::from("hello hello world\n hello!");
let query1 = FindQuery {
id: None,
chars: "hello".to_string(),
case_sensitive: false,
regex: false,
whole_words: false,
};
let query2 = FindQuery {
id: None,
chars: "world".to_string(),
case_sensitive: false,
regex: false,
whole_words: false,
};
view.do_edit(&text, ViewEvent::MultiFind { queries: vec![query1, query2] });
view.do_find(&text);
view.do_find_all(&text);
assert_eq!(view.sel_regions().len(), 4);
}
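// Self-contained sketch (hypothetical spans, not the real encoder) of the
// (start-delta, length, style-id) triple layout produced by `encode_styles`:
// each span's start is encoded relative to the end of the previous span.
#[test]
fn style_triple_encoding_sketch() {
let spans = [(2usize, 5usize, 0isize), (7, 9, 3)];
let mut encoded = Vec::new();
let mut ix = 0isize;
for &(start, end, id) in &spans {
encoded.push(start as isize - ix);
encoded.push(end as isize - start as isize);
encoded.push(id);
ix = end as isize;
}
assert_eq!(encoded, vec![2, 3, 0, 2, 2, 3]);
}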
}
```
|
```ruby
require_relative "../../../spec_helper"
platform_is :windows do
require 'win32ole'
describe "WIN32OLE_METHOD#visible?" do
before :each do
ole_type = WIN32OLE_TYPE.new("Microsoft Shell Controls And Automation", "Shell")
@m_browse_for_folder = WIN32OLE_METHOD.new(ole_type, "BrowseForFolder")
end
it "raises ArgumentError if argument is given" do
-> { @m_browse_for_folder.visible?(1) }.should raise_error ArgumentError
end
it "returns true for Shell Control's 'BrowseForFolder' method" do
@m_browse_for_folder.visible?.should be_true
end
end
end
```
|
Kenya–Venezuela relations are the bilateral relations between Kenya and Venezuela. Both nations are members of the Group of 77 and the United Nations.
History
In April 1970, Kenya and Venezuela established diplomatic relations. Since the establishment of diplomatic relations between both nations, relations have taken place primarily in multinational organizations such as at the United Nations.
In September 2009, Kenyan Vice President Kalonzo Musyoka visited Venezuela to attend the 2nd Africa–South America Summit, held on Isla de Margarita. During his visit, both nations signed an Agreement for Energy Cooperation to promote and develop cooperation in the energy and petroleum industry, covering the exploitation, production, storage, transportation, refining and distribution of hydrocarbons.
In the past, the Venezuelan government has promoted the "Sponsor a school in Africa" initiative, constructing primary schools in Kakamega and in Kajiado County and providing educational materials to support the project's success and continued cooperation. Several Kenyan students have also traveled to Venezuela for advanced studies at Venezuelan universities under the scholarship program of the Gran Mariscal de Ayacucho Foundation (Fundayacucho). The Venezuelan government has also promoted sustainable rice production in several Kenyan counties.
In April 2020, both nations celebrated 50 years of diplomatic relations.
Murder of Venezuelan ambassador to Kenya
In 2012, the acting Ambassador of Venezuela to Kenya, Olga Fonseca, was found dead at the embassy residence two weeks after taking up the post. Her death was ruled a murder.
It is thought that staff at the residence had complained of sexual abuse by Fonseca's predecessor, and that Fonseca was trying to fire the staff in order to quash the internal investigation. The exact motive and who was responsible for Fonseca's death remained unclear for years.
In January 2023, a Kenyan court found former Venezuelan diplomat Dwight Sagaray guilty of the 2012 killing of acting Ambassador Fonseca. Sagaray and three Kenyan nationals were sentenced to 20 years in prison on July 14, 2023.
Diplomatic missions
Kenya is accredited to Venezuela from its embassy in Brasília, Brazil.
Venezuela has an embassy in Nairobi.
References
Kenya–Venezuela relations
|
```c++
#include "Matcher.h"
#include "MsaFilter.h"
#include "Parameters.h"
#include "PSSMCalculator.h"
#include "DBReader.h"
#include "DBWriter.h"
#include "FileUtil.h"
#include "CompressedA3M.h"
#include "MathUtil.h"
#include "kseq.h"
#include "KSeqBufferReader.h"
KSEQ_INIT(kseq_buffer_t*, kseq_buffer_reader)
#ifdef OPENMP
#include <omp.h>
#endif
enum {
MSA_CA3M = 0,
MSA_A3M = 1,
MSA_STOCKHOLM = 2
};
void setMsa2ProfileDefaults(Parameters *p) {
p->msaType = MSA_STOCKHOLM;
}
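// Illustrative, self-contained sketch of the single-pass header/sequence
// state machine used below to find the longest member sequence and the
// member count of a FASTA-formatted MSA. The function name is hypothetical;
// it is not part of the MMseqs2 API.
static void sketchCountMsa(const char *data, size_t len, unsigned int &maxLen, unsigned int &count) {
bool inHeader = false;
unsigned int seqLength = 0;
maxLen = 0;
count = 0;
for (size_t i = 0; i < len; ++i) {
switch (data[i]) {
case '>': // a header starts a new member
if (seqLength > maxLen) {
maxLen = seqLength;
}
seqLength = 0;
inHeader = true;
count++;
break;
case '\n': // a newline ends a header line
inHeader = false;
break;
default: // everything else counts as sequence (may span lines)
if (!inHeader) {
seqLength++;
}
break;
}
}
// account for the last sequence in the entry
if (!inHeader && seqLength > maxLen) {
maxLen = seqLength;
}
}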
int msa2profile(int argc, const char **argv, const Command &command) {
Parameters &par = Parameters::getInstance();
setMsa2ProfileDefaults(&par);
par.parseParameters(argc, argv, command, true, 0, MMseqsParameter::COMMAND_PROFILE);
std::vector<std::string> qid_str_vec = Util::split(par.qid, ",");
std::vector<int> qid_vec;
for (size_t qid_idx = 0; qid_idx < qid_str_vec.size(); qid_idx++) {
float qid_float = strtod(qid_str_vec[qid_idx].c_str(), NULL);
qid_vec.push_back(static_cast<int>(qid_float*100));
}
std::sort(qid_vec.begin(), qid_vec.end());
std::string msaData = par.db1;
std::string msaIndex = par.db1Index;
DBReader<unsigned int> *headerReader = NULL, *sequenceReader = NULL;
if (par.msaType == 0) {
msaData = par.db1 + "_ca3m.ffdata";
msaIndex = par.db1 + "_ca3m.ffindex";
std::string msaHeaderData = par.db1 + "_header.ffdata";
std::string msaHeaderIndex = par.db1 + "_header.ffindex";
std::string msaSequenceData = par.db1 + "_sequence.ffdata";
std::string msaSequenceIndex = par.db1 + "_sequence.ffindex";
headerReader = new DBReader<unsigned int>(msaHeaderData.c_str(), msaHeaderIndex.c_str(), par.threads, DBReader<unsigned int>::USE_INDEX|DBReader<unsigned int>::USE_DATA);
headerReader->open(DBReader<unsigned int>::SORT_BY_LINE);
sequenceReader = new DBReader<unsigned int>(msaSequenceData.c_str(), msaSequenceIndex.c_str(), par.threads, DBReader<unsigned int>::USE_INDEX|DBReader<unsigned int>::USE_DATA);
sequenceReader->open(DBReader<unsigned int>::SORT_BY_LINE);
}
unsigned int mode = DBReader<unsigned int>::USE_INDEX|DBReader<unsigned int>::USE_DATA;
std::string lookupFile = msaData + ".lookup";
if (FileUtil::fileExists(lookupFile.c_str())) {
mode |= DBReader<unsigned int>::USE_LOOKUP;
}
DBReader<unsigned int> qDbr(msaData.c_str(), msaIndex.c_str(), par.threads, mode);
qDbr.open(DBReader<unsigned int>::LINEAR_ACCCESS);
Debug(Debug::INFO) << "Finding maximum sequence length and set size.\n";
unsigned int maxSeqLength = 0;
unsigned int maxSetSize = 0;
#pragma omp parallel
{
unsigned int thread_idx = 0;
#ifdef OPENMP
thread_idx = (unsigned int) omp_get_thread_num();
#endif
#pragma omp for schedule(dynamic, 10) reduction(max:maxSeqLength, maxSetSize)
for (size_t id = 0; id < qDbr.getSize(); id++) {
bool inHeader = false;
unsigned int setSize = 0;
unsigned int seqLength = 0;
char *entryData = qDbr.getData(id, thread_idx);
for (size_t i = 0; i < qDbr.getEntryLen(id); ++i) {
// state machine to get the max sequence length and set size from MSA
switch (entryData[i]) {
case '>':
if (seqLength > maxSeqLength) {
maxSeqLength = seqLength;
}
seqLength = 0;
inHeader = true;
setSize++;
break;
case '\n':
if (inHeader) {
inHeader = false;
}
break;
default:
if (!inHeader) {
seqLength++;
}
break;
}
}
// don't forget the last entry in an MSA
if (!inHeader && seqLength > 0) {
if (seqLength > maxSeqLength) {
maxSeqLength = seqLength;
}
setSize++;
}
if (setSize > maxSetSize) {
maxSetSize = setSize;
}
}
}
// round up to a multiple of (VECSIZE_INT * 4), with one extra vector of padding, for SIMD memory alignment
maxSeqLength = (maxSeqLength) / (VECSIZE_INT * 4) + 2;
maxSeqLength *= (VECSIZE_INT * 4);
unsigned int threads = (unsigned int) par.threads;
int type = Parameters::DBTYPE_HMM_PROFILE;
if (par.pcmode == Parameters::PCMODE_CONTEXT_SPECIFIC) {
type = DBReader<unsigned int>::setExtendedDbtype(type, Parameters::DBTYPE_EXTENDED_CONTEXT_PSEUDO_COUNTS);
}
DBWriter resultWriter(par.db2.c_str(), par.db2Index.c_str(), threads, par.compressed, type);
resultWriter.open();
DBWriter headerWriter(par.hdr2.c_str(), par.hdr2Index.c_str(), threads, par.compressed, Parameters::DBTYPE_GENERIC_DB);
headerWriter.open();
SubstitutionMatrix subMat(par.scoringMatrixFile.values.aminoacid().c_str(), 2.0f, -0.2f);
Debug::Progress progress(qDbr.getSize());
#pragma omp parallel
{
unsigned int thread_idx = 0;
#ifdef OPENMP
thread_idx = (unsigned int) omp_get_thread_num();
#endif
PSSMCalculator calculator(
&subMat, maxSeqLength + 1, maxSetSize, par.pcmode, par.pca, par.pcb
#ifdef GAP_POS_SCORING
, par.gapOpen.values.aminoacid()
, par.gapPseudoCount
#endif
);
Sequence sequence(maxSeqLength + 1, Parameters::DBTYPE_AMINO_ACIDS, &subMat, 0, false, par.compBiasCorrection != 0);
char *msaContent = (char*) mem_align(ALIGN_INT, sizeof(char) * (maxSeqLength + 1) * maxSetSize);
float *seqWeight = new float[maxSetSize];
float *pNullBuffer = new float[maxSeqLength + 1];
bool *maskedColumns = new bool[maxSeqLength + 1];
std::string result;
result.reserve((par.maxSeqLen + 1) * Sequence::PROFILE_READIN_SIZE * sizeof(char));
kseq_buffer_t d;
kseq_t *seq = kseq_init(&d);
char **msaSequences = (char**) mem_align(ALIGN_INT, sizeof(char*) * maxSetSize);
std::vector<Matcher::result_t> alnResults;
alnResults.reserve(maxSetSize);
const bool maskByFirst = par.matchMode == 0;
const float matchRatio = par.matchRatio;
MsaFilter filter(maxSeqLength + 1, maxSetSize, &subMat, par.gapOpen.values.aminoacid(), par.gapExtend.values.aminoacid());
#pragma omp for schedule(dynamic, 1)
for (size_t id = 0; id < qDbr.getSize(); ++id) {
progress.updateProgress();
unsigned int queryKey = qDbr.getDbKey(id);
size_t msaPos = 0;
unsigned int setSize = 0;
unsigned int centerLengthWithGaps = 0;
unsigned int maskedCount = 0;
bool fastaError = false;
char *entryData = qDbr.getData(id, thread_idx);
size_t entryLength = qDbr.getEntryLen(id);
std::string msa;
std::string backtrace;
if (par.msaType == MSA_CA3M) {
msa = CompressedA3M::extractA3M(entryData, entryLength - 2, *sequenceReader, *headerReader, thread_idx);
d.buffer = const_cast<char*>(msa.c_str());
d.length = msa.length();
par.msaType = MSA_A3M;
} else {
d.buffer = entryData;
d.length = entryLength - 1;
}
d.position = 0;
// remove comment line that makes kseq_read fail
if (d.length) {
if (d.buffer[0] == '#') {
size_t pos = 0;
while (pos < d.length && d.buffer[pos] != '\n') {
pos++;
}
if (pos < d.length) {
pos++;
d.buffer += pos;
d.length -= pos;
} else {
d.buffer += pos;
d.length = 0;
}
}
}
// allow skipping the first sequence, e.g. when it is a consensus sequence
if (par.skipQuery == true) {
kseq_read(seq);
}
while (kseq_read(seq) >= 0) {
if (seq->name.l == 0 || seq->seq.l == 0) {
Debug(Debug::WARNING) << "Invalid fasta sequence " << setSize << " in entry " << queryKey << "\n";
fastaError = true;
break;
}
if (seq->seq.l > maxSeqLength) {
Debug(Debug::WARNING) << "Member sequence " << setSize << " in entry " << queryKey << " too long\n";
fastaError = true;
break;
}
if ((par.msaType == 0 || par.msaType == 1) && strncmp("ss_", seq->name.s, strlen("ss_")) == 0) {
continue;
}
// first sequence is always the query
if (setSize == 0) {
centerLengthWithGaps = seq->seq.l;
backtrace.reserve(centerLengthWithGaps);
if (maskByFirst == true) {
for (size_t i = 0; i < centerLengthWithGaps; ++i) {
if (seq->seq.s[i] == '-') {
maskedColumns[i] = true;
maskedCount++;
} else {
maskedColumns[i] = false;
}
}
}
if ((mode & DBReader<unsigned int>::USE_LOOKUP) == 0) {
std::string header(seq->name.s);
if (seq->comment.l > 0) {
header.append(" ");
header.append(seq->comment.s);
}
header.append("\n");
headerWriter.writeData(header.c_str(), header.size(), queryKey, thread_idx);
}
}
sequence.mapSequence(0, 0, seq->seq.s, seq->seq.l);
msaSequences[setSize] = msaContent + msaPos;
for (size_t i = 0; i < centerLengthWithGaps; ++i) {
if (maskByFirst == true && maskedColumns[i] == true) {
continue;
}
// skip a3m lower letters
if (par.msaType == MSA_A3M && islower(seq->seq.s[i])) {
continue;
}
msaContent[msaPos++] = (seq->seq.s[i] == '-') ? (int)MultipleAlignment::GAP : sequence.numSequence[i];
}
// construct backtrace for all but the query sequence
// (alnResults is only consumed by computePSSMFromMSA under GAP_POS_SCORING below,
// so the construction is guarded the same way instead of being dead-coded out)
#ifdef GAP_POS_SCORING
if (setSize > 0) {
backtrace.clear();
for (size_t i = 0; i < centerLengthWithGaps; ++i) {
bool isMaskedColumn = (maskByFirst && maskedColumns[i]);
if (seq->seq.s[i] == '-' && isMaskedColumn) {
continue;
}
if (seq->seq.s[i] == '-') {
backtrace.push_back('I');
}
else if (isMaskedColumn || (!maskByFirst && msaSequences[0][i] == MultipleAlignment::GAP)
|| (par.msaType == MSA_A3M && islower(seq->seq.s[i]))) {
backtrace.push_back('D');
}
else {
backtrace.push_back('M');
}
}
alnResults.emplace_back(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, backtrace);
}
#endif
// fill up the sequence buffer for the SIMD profile calculation
size_t rowSize = msaPos / (VECSIZE_INT*4);
rowSize = (rowSize+1) * (VECSIZE_INT*4);
while(msaPos < rowSize) {
msaContent[msaPos++] = MultipleAlignment::GAP;
}
setSize++;
}
kseq_rewind(seq);
if (fastaError == true) {
Debug(Debug::WARNING) << "Invalid msa " << id << "! Skipping entry.\n";
continue;
}
if (setSize == 0) {
Debug(Debug::WARNING) << "Empty msa " << id << "! Skipping entry.\n";
continue;
}
if (maskByFirst == false) {
PSSMCalculator::computeSequenceWeights(seqWeight, centerLengthWithGaps,
setSize, const_cast<const char**>(msaSequences));
// Replace GAP with ENDGAP for all end gaps
// ENDGAPs are ignored for counting percentage (multi-domain proteins)
for (unsigned int k = 0; k < setSize; ++k) {
for (unsigned int i = 0; i < centerLengthWithGaps && msaSequences[k][i] == MultipleAlignment::GAP; ++i)
msaSequences[k][i] = MultipleAlignment::ENDGAP;
for (unsigned int i = centerLengthWithGaps - 1; msaSequences[k][i] == MultipleAlignment::GAP; i--)
msaSequences[k][i] = MultipleAlignment::ENDGAP;
}
for (unsigned int l = 0; l < centerLengthWithGaps; l++) {
float res = 0;
float gap = 0;
// Add up percentage of gaps
for (unsigned int k = 0; k < setSize; ++k) {
if (msaSequences[k][l] < MultipleAlignment::GAP) {
res += seqWeight[k];
} else if (msaSequences[k][l] != MultipleAlignment::ENDGAP) {
gap += seqWeight[k];
} else if (msaSequences[k][l] == MultipleAlignment::ENDGAP) {
msaSequences[k][l] = MultipleAlignment::GAP;
}
}
maskedColumns[l] = (gap / (res + gap)) > matchRatio;
maskedCount += maskedColumns[l] ? 1 : 0;
}
for (unsigned int k = 0; k < setSize; ++k) {
unsigned int currentCol = 0;
for (unsigned int l = 0; l < centerLengthWithGaps; ++l) {
if (maskedColumns[l] == false) {
msaSequences[k][currentCol++] = msaSequences[k][l];
}
}
for (unsigned int l = currentCol; l < centerLengthWithGaps; ++l) {
msaSequences[k][l] = MultipleAlignment::GAP;
}
}
/* update backtraces
// TODO: check if this works for a3m as well (probably not...)
for (unsigned int k = 0; k < setSize - 1; ++k) {
std::string::iterator readIt = alnResults[k].backtrace.begin();
std::string::iterator writeIt = readIt;
for (unsigned int l = 0; l < centerLengthWithGaps; ++l) {
if (!maskedColumns[l]) {
*writeIt = *readIt;
++readIt;
++writeIt;
} else {
if (*readIt == 'D') {
*writeIt = 'D';
++writeIt;
}
++readIt;
}
}
alnResults[k].backtrace.erase(writeIt, alnResults[k].backtrace.end());
} */
}
unsigned int centerLength = centerLengthWithGaps - maskedCount;
MultipleAlignment::MSAResult msaResult(centerLength, centerLength, setSize, msaSequences);
size_t filteredSetSize = setSize;
if (par.filterMsa == 1) {
filteredSetSize = filter.filter(setSize, centerLength, static_cast<int>(par.covMSAThr * 100),
qid_vec, par.qsc,
static_cast<int>(par.filterMaxSeqId * 100), par.Ndiff, par.filterMinEnable,
(const char **) msaSequences, true);
}
PSSMCalculator::Profile pssmRes =
calculator.computePSSMFromMSA(filteredSetSize, msaResult.centerLength,
(const char **) msaResult.msaSequence,
#ifdef GAP_POS_SCORING
alnResults,
#endif
par.wg, 0.0);
if (par.compBiasCorrection == true) {
SubstitutionMatrix::calcGlobalAaBiasCorrection(&subMat, pssmRes.pssm, pNullBuffer,
Sequence::PROFILE_AA_SIZE,
centerLength);
}
pssmRes.toBuffer((const unsigned char*)msaSequences[0], centerLength, subMat, result);
if (mode & DBReader<unsigned int>::USE_LOOKUP) {
size_t lookupId = qDbr.getLookupIdByKey(queryKey);
std::string header = qDbr.getLookupEntryName(lookupId);
header.append(1, '\n');
headerWriter.writeData(header.c_str(), header.length(), queryKey, thread_idx);
}
resultWriter.writeData(result.c_str(), result.length(), queryKey, thread_idx);
result.clear();
}
kseq_destroy(seq);
free(msaSequences);
free(msaContent);
delete[] pNullBuffer;
delete[] maskedColumns;
delete[] seqWeight;
}
headerWriter.close(true);
resultWriter.close(true);
qDbr.close();
DBReader<unsigned int>::copyDb(par.db1, par.db2, (DBFiles::Files)(DBFiles::LOOKUP | DBFiles::SOURCE));
if (sequenceReader != NULL) {
sequenceReader->close();
delete sequenceReader;
}
if (headerReader != NULL) {
headerReader->close();
delete headerReader;
}
return EXIT_SUCCESS;
}
```
```c
/* ssl/ssl_sess.c */
/* Copyright (C) 1995-1998 Eric Young (eay@cryptsoft.com)
 * All rights reserved.
*
* This package is an SSL implementation written
* by Eric Young (eay@cryptsoft.com).
* The implementation was written so as to conform with Netscapes SSL.
*
* This library is free for commercial and non-commercial use as long as
* the following conditions are aheared to. The following conditions
* apply to all code found in this distribution, be it the RC4, RSA,
* lhash, DES, etc., code; not just the SSL code. The SSL documentation
* included with this distribution is covered by the same copyright terms
* except that the holder is Tim Hudson (tjh@cryptsoft.com).
 *
 * Copyright remains Eric Young's, and as such any Copyright notices in
 * the code are not to be removed.
* If this package is used in a product, Eric Young should be given attribution
* as the author of the parts of the library used.
* This can be in the form of a textual message at program startup or
* in documentation (online or textual) provided with the package.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. All advertising materials mentioning features or use of this software
* must display the following acknowledgement:
* "This product includes cryptographic software written by
* Eric Young (eay@cryptsoft.com)"
* The word 'cryptographic' can be left out if the rouines from the library
* being used are not cryptographic related :-).
* 4. If you include any Windows specific code (or a derivative thereof) from
* the apps directory (application code) you must include an acknowledgement:
* "This product includes software written by Tim Hudson (tjh@cryptsoft.com)"
*
* THIS SOFTWARE IS PROVIDED BY ERIC YOUNG ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* The licence and distribution terms for any publically available version or
* derivative of this code cannot be changed. i.e. this code cannot simply be
* copied and put under another distribution licence
* [including the GNU Public Licence.]
*/
/* ====================================================================
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
*
* 3. All advertising materials mentioning features or use of this
* software must display the following acknowledgment:
* "This product includes software developed by the OpenSSL Project
 * for use in the OpenSSL Toolkit. (http://www.openssl.org/)"
*
* 4. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to
* endorse or promote products derived from this software without
* prior written permission. For written permission, please contact
* openssl-core@openssl.org.
*
* 5. Products derived from this software may not be called "OpenSSL"
* nor may "OpenSSL" appear in their names without prior written
* permission of the OpenSSL Project.
*
* 6. Redistributions of any form whatsoever must retain the following
* acknowledgment:
* "This product includes software developed by the OpenSSL Project
 * for use in the OpenSSL Toolkit (http://www.openssl.org/)"
*
* THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY
* EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR
* ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
* ====================================================================
*
* This product includes cryptographic software written by Eric Young
* (eay@cryptsoft.com). This product includes software written by Tim
* Hudson (tjh@cryptsoft.com).
*
*/
/* ====================================================================
*
* The portions of the attached software ("Contribution") is developed by
* Nokia Corporation and is licensed pursuant to the OpenSSL open source
* license.
*
* The Contribution, originally written by Mika Kousa and Pasi Eronen of
* Nokia Corporation, consists of the "PSK" (Pre-Shared Key) ciphersuites
* support (see RFC 4279) to OpenSSL.
*
* No patent licenses or other rights except those expressly stated in
* the OpenSSL open source license shall be deemed granted or received
* expressly, by implication, estoppel, or otherwise.
*
* No assurances are provided by Nokia that the Contribution does not
* infringe the patent or other intellectual property rights of any third
* party or that the license provides you with all the necessary rights
* to make use of the Contribution.
*
* THE SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND. IN
* ADDITION TO THE DISCLAIMERS INCLUDED IN THE LICENSE, NOKIA
* SPECIFICALLY DISCLAIMS ANY LIABILITY FOR CLAIMS BROUGHT BY YOU OR ANY
* OTHER ENTITY BASED ON INFRINGEMENT OF INTELLECTUAL PROPERTY RIGHTS OR
* OTHERWISE.
*/
#include <stdio.h>
#include <openssl/lhash.h>
#include <openssl/rand.h>
#ifndef OPENSSL_NO_ENGINE
# include <openssl/engine.h>
#endif
#include "ssl_locl.h"
static void SSL_SESSION_list_remove(SSL_CTX *ctx, SSL_SESSION *s);
static void SSL_SESSION_list_add(SSL_CTX *ctx, SSL_SESSION *s);
static int remove_session_lock(SSL_CTX *ctx, SSL_SESSION *c, int lck);
SSL_SESSION *SSL_get_session(const SSL *ssl)
/* aka SSL_get0_session; gets 0 objects, just returns a copy of the pointer */
{
return (ssl->session);
}
SSL_SESSION *SSL_get1_session(SSL *ssl)
/* variant of SSL_get_session: caller really gets something */
{
SSL_SESSION *sess;
/*
* Need to lock this all up rather than just use CRYPTO_add so that
* somebody doesn't free ssl->session between when we check it's non-null
* and when we up the reference count.
*/
CRYPTO_w_lock(CRYPTO_LOCK_SSL_SESSION);
sess = ssl->session;
if (sess)
sess->references++;
CRYPTO_w_unlock(CRYPTO_LOCK_SSL_SESSION);
return (sess);
}
int SSL_SESSION_get_ex_new_index(long argl, void *argp,
CRYPTO_EX_new *new_func,
CRYPTO_EX_dup *dup_func,
CRYPTO_EX_free *free_func)
{
return CRYPTO_get_ex_new_index(CRYPTO_EX_INDEX_SSL_SESSION, argl, argp,
new_func, dup_func, free_func);
}
int SSL_SESSION_set_ex_data(SSL_SESSION *s, int idx, void *arg)
{
return (CRYPTO_set_ex_data(&s->ex_data, idx, arg));
}
void *SSL_SESSION_get_ex_data(const SSL_SESSION *s, int idx)
{
return (CRYPTO_get_ex_data(&s->ex_data, idx));
}
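/*
 * Usage sketch for the ex_data accessors above (illustrative only, not part
 * of the original ssl_sess.c; the index variable and the "app state" label
 * are hypothetical). An application reserves an index once, then attaches
 * and retrieves per-session data through it. Guarded by #if 0 so it is not
 * compiled into the library.
 */
#if 0
static int app_ex_idx = -1;

static void app_tag_session(SSL_SESSION *sess, void *app_state)
{
    if (app_ex_idx == -1)
        app_ex_idx = SSL_SESSION_get_ex_new_index(0, "app state",
                                                  NULL, NULL, NULL);
    SSL_SESSION_set_ex_data(sess, app_ex_idx, app_state);
}

static void *app_get_session_tag(const SSL_SESSION *sess)
{
    return SSL_SESSION_get_ex_data(sess, app_ex_idx);
}
#endif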
SSL_SESSION *SSL_SESSION_new(void)
{
SSL_SESSION *ss;
ss = (SSL_SESSION *)OPENSSL_malloc(sizeof(SSL_SESSION));
if (ss == NULL) {
SSLerr(SSL_F_SSL_SESSION_NEW, ERR_R_MALLOC_FAILURE);
return (0);
}
memset(ss, 0, sizeof(SSL_SESSION));
ss->verify_result = 1; /* avoid 0 (= X509_V_OK) just in case */
ss->references = 1;
ss->timeout = 60 * 5 + 4; /* 5 minute timeout by default */
ss->time = (unsigned long)time(NULL);
ss->prev = NULL;
ss->next = NULL;
ss->compress_meth = 0;
#ifndef OPENSSL_NO_TLSEXT
ss->tlsext_hostname = NULL;
# ifndef OPENSSL_NO_EC
ss->tlsext_ecpointformatlist_length = 0;
ss->tlsext_ecpointformatlist = NULL;
ss->tlsext_ellipticcurvelist_length = 0;
ss->tlsext_ellipticcurvelist = NULL;
# endif
#endif
CRYPTO_new_ex_data(CRYPTO_EX_INDEX_SSL_SESSION, ss, &ss->ex_data);
#ifndef OPENSSL_NO_PSK
ss->psk_identity_hint = NULL;
ss->psk_identity = NULL;
#endif
#ifndef OPENSSL_NO_SRP
ss->srp_username = NULL;
#endif
return (ss);
}
/*
* Create a new SSL_SESSION and duplicate the contents of |src| into it. If
* ticket == 0 then no ticket information is duplicated, otherwise it is.
*/
SSL_SESSION *ssl_session_dup(SSL_SESSION *src, int ticket)
{
SSL_SESSION *dest;
dest = OPENSSL_malloc(sizeof(*src));
if (dest == NULL) {
goto err;
}
memcpy(dest, src, sizeof(*dest));
/*
* Set the various pointers to NULL so that we can call SSL_SESSION_free in
* the case of an error whilst halfway through constructing dest
*/
#ifndef OPENSSL_NO_PSK
dest->psk_identity_hint = NULL;
dest->psk_identity = NULL;
#endif
dest->ciphers = NULL;
#ifndef OPENSSL_NO_TLSEXT
dest->tlsext_hostname = NULL;
# ifndef OPENSSL_NO_EC
dest->tlsext_ecpointformatlist = NULL;
dest->tlsext_ellipticcurvelist = NULL;
# endif
dest->tlsext_tick = NULL;
#endif
#ifndef OPENSSL_NO_SRP
dest->srp_username = NULL;
#endif
memset(&dest->ex_data, 0, sizeof(dest->ex_data));
/* We deliberately don't copy the prev and next pointers */
dest->prev = NULL;
dest->next = NULL;
dest->references = 1;
if (src->sess_cert != NULL)
CRYPTO_add(&src->sess_cert->references, 1, CRYPTO_LOCK_SSL_SESS_CERT);
if (src->peer != NULL)
CRYPTO_add(&src->peer->references, 1, CRYPTO_LOCK_X509);
#ifndef OPENSSL_NO_PSK
if (src->psk_identity_hint) {
dest->psk_identity_hint = BUF_strdup(src->psk_identity_hint);
if (dest->psk_identity_hint == NULL) {
goto err;
}
}
if (src->psk_identity) {
dest->psk_identity = BUF_strdup(src->psk_identity);
if (dest->psk_identity == NULL) {
goto err;
}
}
#endif
if(src->ciphers != NULL) {
dest->ciphers = sk_SSL_CIPHER_dup(src->ciphers);
if (dest->ciphers == NULL)
goto err;
}
if (!CRYPTO_dup_ex_data(CRYPTO_EX_INDEX_SSL_SESSION,
&dest->ex_data, &src->ex_data)) {
goto err;
}
#ifndef OPENSSL_NO_TLSEXT
if (src->tlsext_hostname) {
dest->tlsext_hostname = BUF_strdup(src->tlsext_hostname);
if (dest->tlsext_hostname == NULL) {
goto err;
}
}
# ifndef OPENSSL_NO_EC
if (src->tlsext_ecpointformatlist) {
dest->tlsext_ecpointformatlist =
BUF_memdup(src->tlsext_ecpointformatlist,
src->tlsext_ecpointformatlist_length);
if (dest->tlsext_ecpointformatlist == NULL)
goto err;
}
if (src->tlsext_ellipticcurvelist) {
dest->tlsext_ellipticcurvelist =
BUF_memdup(src->tlsext_ellipticcurvelist,
src->tlsext_ellipticcurvelist_length);
if (dest->tlsext_ellipticcurvelist == NULL)
goto err;
}
# endif
if (ticket != 0) {
dest->tlsext_tick = BUF_memdup(src->tlsext_tick, src->tlsext_ticklen);
if(dest->tlsext_tick == NULL)
goto err;
} else {
dest->tlsext_tick_lifetime_hint = 0;
dest->tlsext_ticklen = 0;
}
#endif
#ifndef OPENSSL_NO_SRP
if (src->srp_username) {
dest->srp_username = BUF_strdup(src->srp_username);
if (dest->srp_username == NULL) {
goto err;
}
}
#endif
return dest;
err:
SSLerr(SSL_F_SSL_SESSION_DUP, ERR_R_MALLOC_FAILURE);
SSL_SESSION_free(dest);
return NULL;
}
const unsigned char *SSL_SESSION_get_id(const SSL_SESSION *s,
unsigned int *len)
{
if (len)
*len = s->session_id_length;
return s->session_id;
}
unsigned int SSL_SESSION_get_compress_id(const SSL_SESSION *s)
{
return s->compress_meth;
}
/*
* Even with SSLv2, we have 16 bytes (128 bits) of session ID space.
* SSLv3/TLSv1 has 32 bytes (256 bits). As such, filling the ID with random
* gunk repeatedly until we have no conflict is going to complete in one
* iteration pretty much "most" of the time (btw: understatement). So, if it
* takes us 10 iterations and we still can't avoid a conflict - well that's a
* reasonable point to call it quits. Either the RAND code is broken or
* someone is trying to open roughly very close to 2^128 (or 2^256) SSL
* sessions to our server. How you might store that many sessions is perhaps
* a more interesting question ...
*/
#define MAX_SESS_ID_ATTEMPTS 10
static int def_generate_session_id(const SSL *ssl, unsigned char *id,
unsigned int *id_len)
{
unsigned int retry = 0;
do
if (RAND_pseudo_bytes(id, *id_len) <= 0)
return 0;
while (SSL_has_matching_session_id(ssl, id, *id_len) &&
(++retry < MAX_SESS_ID_ATTEMPTS)) ;
if (retry < MAX_SESS_ID_ATTEMPTS)
return 1;
/* else - woops a session_id match */
/*
* XXX We should also check the external cache -- but the probability of
* a collision is negligible, and we could not prevent the concurrent
* creation of sessions with identical IDs since we currently don't have
* means to atomically check whether a session ID already exists and make
* a reservation for it if it does not (this problem applies to the
* internal cache as well).
*/
return 0;
}
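/*
 * Shape of the retry loop above, as a compact sketch (illustrative, not part
 * of ssl_sess.c; fill_random() and id_in_use() are hypothetical stand-ins
 * for RAND_pseudo_bytes() and SSL_has_matching_session_id()). The brace-less
 * do/while above can be hard to read: it draws fresh IDs until one is unused
 * or MAX_SESS_ID_ATTEMPTS draws have all collided, then gives up rather than
 * spin forever on a broken RNG.
 */
#if 0
static int pick_unique_id(unsigned char *id, unsigned int id_len)
{
    unsigned int retry = 0;
    do {
        if (!fill_random(id, id_len))
            return 0;           /* RNG failure: report it, don't loop */
    } while (id_in_use(id, id_len) && ++retry < MAX_SESS_ID_ATTEMPTS);
    return retry < MAX_SESS_ID_ATTEMPTS;
}
#endif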
int ssl_get_new_session(SSL *s, int session)
{
/* This gets used by clients and servers. */
unsigned int tmp;
SSL_SESSION *ss = NULL;
GEN_SESSION_CB cb = def_generate_session_id;
if ((ss = SSL_SESSION_new()) == NULL)
return (0);
/* If the context has a default timeout, use it */
if (s->session_ctx->session_timeout == 0)
ss->timeout = SSL_get_default_timeout(s);
else
ss->timeout = s->session_ctx->session_timeout;
if (s->session != NULL) {
SSL_SESSION_free(s->session);
s->session = NULL;
}
if (session) {
if (s->version == SSL2_VERSION) {
ss->ssl_version = SSL2_VERSION;
ss->session_id_length = SSL2_SSL_SESSION_ID_LENGTH;
} else if (s->version == SSL3_VERSION) {
ss->ssl_version = SSL3_VERSION;
ss->session_id_length = SSL3_SSL_SESSION_ID_LENGTH;
} else if (s->version == TLS1_VERSION) {
ss->ssl_version = TLS1_VERSION;
ss->session_id_length = SSL3_SSL_SESSION_ID_LENGTH;
} else if (s->version == TLS1_1_VERSION) {
ss->ssl_version = TLS1_1_VERSION;
ss->session_id_length = SSL3_SSL_SESSION_ID_LENGTH;
} else if (s->version == TLS1_2_VERSION) {
ss->ssl_version = TLS1_2_VERSION;
ss->session_id_length = SSL3_SSL_SESSION_ID_LENGTH;
} else if (s->version == DTLS1_BAD_VER) {
ss->ssl_version = DTLS1_BAD_VER;
ss->session_id_length = SSL3_SSL_SESSION_ID_LENGTH;
} else if (s->version == DTLS1_VERSION) {
ss->ssl_version = DTLS1_VERSION;
ss->session_id_length = SSL3_SSL_SESSION_ID_LENGTH;
} else if (s->version == DTLS1_2_VERSION) {
ss->ssl_version = DTLS1_2_VERSION;
ss->session_id_length = SSL3_SSL_SESSION_ID_LENGTH;
} else {
SSLerr(SSL_F_SSL_GET_NEW_SESSION, SSL_R_UNSUPPORTED_SSL_VERSION);
SSL_SESSION_free(ss);
return (0);
}
#ifndef OPENSSL_NO_TLSEXT
/*-
* If RFC5077 ticket, use empty session ID (as server).
* Note that:
* (a) ssl_get_prev_session() does lookahead into the
* ClientHello extensions to find the session ticket.
* When ssl_get_prev_session() fails, s3_srvr.c calls
* ssl_get_new_session() in ssl3_get_client_hello().
* At that point, it has not yet parsed the extensions,
* however, because of the lookahead, it already knows
* whether a ticket is expected or not.
*
* (b) s3_clnt.c calls ssl_get_new_session() before parsing
* ServerHello extensions, and before recording the session
* ID received from the server, so this block is a noop.
*/
if (s->tlsext_ticket_expected) {
ss->session_id_length = 0;
goto sess_id_done;
}
#endif
/* Choose which callback will set the session ID */
CRYPTO_r_lock(CRYPTO_LOCK_SSL_CTX);
if (s->generate_session_id)
cb = s->generate_session_id;
else if (s->session_ctx->generate_session_id)
cb = s->session_ctx->generate_session_id;
CRYPTO_r_unlock(CRYPTO_LOCK_SSL_CTX);
/* Choose a session ID */
tmp = ss->session_id_length;
if (!cb(s, ss->session_id, &tmp)) {
/* The callback failed */
SSLerr(SSL_F_SSL_GET_NEW_SESSION,
SSL_R_SSL_SESSION_ID_CALLBACK_FAILED);
SSL_SESSION_free(ss);
return (0);
}
/*
* Don't allow the callback to set the session length to zero. nor
* set it higher than it was.
*/
if (!tmp || (tmp > ss->session_id_length)) {
/* The callback set an illegal length */
SSLerr(SSL_F_SSL_GET_NEW_SESSION,
SSL_R_SSL_SESSION_ID_HAS_BAD_LENGTH);
SSL_SESSION_free(ss);
return (0);
}
/* If the session length was shrunk and we're SSLv2, pad it */
if ((tmp < ss->session_id_length) && (s->version == SSL2_VERSION))
memset(ss->session_id + tmp, 0, ss->session_id_length - tmp);
else
ss->session_id_length = tmp;
/* Finally, check for a conflict */
if (SSL_has_matching_session_id(s, ss->session_id,
ss->session_id_length)) {
SSLerr(SSL_F_SSL_GET_NEW_SESSION, SSL_R_SSL_SESSION_ID_CONFLICT);
SSL_SESSION_free(ss);
return (0);
}
#ifndef OPENSSL_NO_TLSEXT
sess_id_done:
if (s->tlsext_hostname) {
ss->tlsext_hostname = BUF_strdup(s->tlsext_hostname);
if (ss->tlsext_hostname == NULL) {
SSLerr(SSL_F_SSL_GET_NEW_SESSION, ERR_R_INTERNAL_ERROR);
SSL_SESSION_free(ss);
return 0;
}
}
#endif
} else {
ss->session_id_length = 0;
}
if (s->sid_ctx_length > sizeof ss->sid_ctx) {
SSLerr(SSL_F_SSL_GET_NEW_SESSION, ERR_R_INTERNAL_ERROR);
SSL_SESSION_free(ss);
return 0;
}
memcpy(ss->sid_ctx, s->sid_ctx, s->sid_ctx_length);
ss->sid_ctx_length = s->sid_ctx_length;
s->session = ss;
ss->ssl_version = s->version;
ss->verify_result = X509_V_OK;
return (1);
}
/*-
* ssl_get_prev attempts to find an SSL_SESSION to be used to resume this
* connection. It is only called by servers.
*
* session_id: points at the session ID in the ClientHello. This code will
* read past the end of this in order to parse out the session ticket
* extension, if any.
* len: the length of the session ID.
* limit: a pointer to the first byte after the ClientHello.
*
* Returns:
* -1: error
* 0: a session may have been found.
*
* Side effects:
* - If a session is found then s->session is pointed at it (after freeing an
* existing session if need be) and s->verify_result is set from the session.
* - Both for new and resumed sessions, s->tlsext_ticket_expected is set to 1
* if the server should issue a new session ticket (to 0 otherwise).
*/
int ssl_get_prev_session(SSL *s, unsigned char *session_id, int len,
const unsigned char *limit)
{
/* This is used only by servers. */
SSL_SESSION *ret = NULL;
int fatal = 0;
int try_session_cache = 1;
#ifndef OPENSSL_NO_TLSEXT
int r;
#endif
if (session_id + len > limit) {
fatal = 1;
goto err;
}
if (len == 0)
try_session_cache = 0;
#ifndef OPENSSL_NO_TLSEXT
/* sets s->tlsext_ticket_expected */
r = tls1_process_ticket(s, session_id, len, limit, &ret);
switch (r) {
case -1: /* Error during processing */
fatal = 1;
goto err;
case 0: /* No ticket found */
case 1: /* Zero length ticket found */
break; /* Ok to carry on processing session id. */
case 2: /* Ticket found but not decrypted. */
case 3: /* Ticket decrypted, *ret has been set. */
try_session_cache = 0;
break;
default:
abort();
}
#endif
if (try_session_cache &&
ret == NULL &&
!(s->session_ctx->session_cache_mode &
SSL_SESS_CACHE_NO_INTERNAL_LOOKUP)) {
SSL_SESSION data;
data.ssl_version = s->version;
data.session_id_length = len;
if (len == 0)
return 0;
memcpy(data.session_id, session_id, len);
CRYPTO_r_lock(CRYPTO_LOCK_SSL_CTX);
ret = lh_SSL_SESSION_retrieve(s->session_ctx->sessions, &data);
if (ret != NULL) {
/* don't allow other threads to steal it: */
CRYPTO_add(&ret->references, 1, CRYPTO_LOCK_SSL_SESSION);
}
CRYPTO_r_unlock(CRYPTO_LOCK_SSL_CTX);
if (ret == NULL)
s->session_ctx->stats.sess_miss++;
}
if (try_session_cache &&
ret == NULL && s->session_ctx->get_session_cb != NULL) {
int copy = 1;
if ((ret = s->session_ctx->get_session_cb(s, session_id, len, &copy))) {
s->session_ctx->stats.sess_cb_hit++;
/*
* Increment reference count now if the session callback asks us
* to do so (note that if the session structures returned by the
* callback are shared between threads, it must handle the
* reference count itself [i.e. copy == 0], or things won't be
* thread-safe).
*/
if (copy)
CRYPTO_add(&ret->references, 1, CRYPTO_LOCK_SSL_SESSION);
/*
* Add the externally cached session to the internal cache as
* well if and only if we are supposed to.
*/
if (!
(s->session_ctx->session_cache_mode &
SSL_SESS_CACHE_NO_INTERNAL_STORE))
/*
* The following should not return 1, otherwise, things are
* very strange
*/
SSL_CTX_add_session(s->session_ctx, ret);
}
}
if (ret == NULL)
goto err;
/* Now ret is non-NULL and we own one of its reference counts. */
if (ret->sid_ctx_length != s->sid_ctx_length
|| memcmp(ret->sid_ctx, s->sid_ctx, ret->sid_ctx_length)) {
/*
* We have the session requested by the client, but we don't want to
* use it in this context.
*/
goto err; /* treat like cache miss */
}
if ((s->verify_mode & SSL_VERIFY_PEER) && s->sid_ctx_length == 0) {
/*
* We can't be sure if this session is being used out of context,
* which is especially important for SSL_VERIFY_PEER. The application
* should have used SSL[_CTX]_set_session_id_context. For this error
* case, we generate an error instead of treating the event like a
* cache miss (otherwise it would be easy for applications to
* effectively disable the session cache by accident without anyone
* noticing).
*/
SSLerr(SSL_F_SSL_GET_PREV_SESSION,
SSL_R_SESSION_ID_CONTEXT_UNINITIALIZED);
fatal = 1;
goto err;
}
if (ret->cipher == NULL) {
unsigned char buf[5], *p;
unsigned long l;
p = buf;
l = ret->cipher_id;
l2n(l, p);
if ((ret->ssl_version >> 8) >= SSL3_VERSION_MAJOR)
ret->cipher = ssl_get_cipher_by_char(s, &(buf[2]));
else
ret->cipher = ssl_get_cipher_by_char(s, &(buf[1]));
if (ret->cipher == NULL)
goto err;
}
if (ret->timeout < (long)(time(NULL) - ret->time)) { /* timeout */
s->session_ctx->stats.sess_timeout++;
if (try_session_cache) {
/* session was from the cache, so remove it */
SSL_CTX_remove_session(s->session_ctx, ret);
}
goto err;
}
s->session_ctx->stats.sess_hit++;
if (s->session != NULL)
SSL_SESSION_free(s->session);
s->session = ret;
s->verify_result = s->session->verify_result;
return 1;
err:
if (ret != NULL) {
SSL_SESSION_free(ret);
#ifndef OPENSSL_NO_TLSEXT
if (!try_session_cache) {
/*
* The session was from a ticket, so we should issue a ticket for
* the new session
*/
s->tlsext_ticket_expected = 1;
}
#endif
}
if (fatal)
return -1;
else
return 0;
}
int SSL_CTX_add_session(SSL_CTX *ctx, SSL_SESSION *c)
{
int ret = 0;
SSL_SESSION *s;
/*
* add just 1 reference count for the SSL_CTX's session cache even though
* it has two ways of access: each session is in a doubly linked list and
* an lhash
*/
CRYPTO_add(&c->references, 1, CRYPTO_LOCK_SSL_SESSION);
/*
* if session c is in already in cache, we take back the increment later
*/
CRYPTO_w_lock(CRYPTO_LOCK_SSL_CTX);
s = lh_SSL_SESSION_insert(ctx->sessions, c);
/*
* s != NULL iff we already had a session with the given PID. In this
* case, s == c should hold (then we did not really modify
* ctx->sessions), or we're in trouble.
*/
if (s != NULL && s != c) {
/* We *are* in trouble ... */
SSL_SESSION_list_remove(ctx, s);
SSL_SESSION_free(s);
/*
* ... so pretend the other session did not exist in cache (we cannot
* handle two SSL_SESSION structures with identical session ID in the
* same cache, which could happen e.g. when two threads concurrently
* obtain the same session from an external cache)
*/
s = NULL;
}
/* Put at the head of the queue unless it is already in the cache */
if (s == NULL)
SSL_SESSION_list_add(ctx, c);
if (s != NULL) {
/*
* existing cache entry -- decrement previously incremented reference
* count because it already takes into account the cache
*/
SSL_SESSION_free(s); /* s == c */
ret = 0;
} else {
/*
* new cache entry -- remove old ones if cache has become too large
*/
ret = 1;
if (SSL_CTX_sess_get_cache_size(ctx) > 0) {
while (SSL_CTX_sess_number(ctx) >
SSL_CTX_sess_get_cache_size(ctx)) {
if (!remove_session_lock(ctx, ctx->session_cache_tail, 0))
break;
else
ctx->stats.sess_cache_full++;
}
}
}
CRYPTO_w_unlock(CRYPTO_LOCK_SSL_CTX);
return (ret);
}
int SSL_CTX_remove_session(SSL_CTX *ctx, SSL_SESSION *c)
{
return remove_session_lock(ctx, c, 1);
}
static int remove_session_lock(SSL_CTX *ctx, SSL_SESSION *c, int lck)
{
SSL_SESSION *r;
int ret = 0;
if ((c != NULL) && (c->session_id_length != 0)) {
if (lck)
CRYPTO_w_lock(CRYPTO_LOCK_SSL_CTX);
if ((r = lh_SSL_SESSION_retrieve(ctx->sessions, c)) == c) {
ret = 1;
r = lh_SSL_SESSION_delete(ctx->sessions, c);
SSL_SESSION_list_remove(ctx, c);
}
if (lck)
CRYPTO_w_unlock(CRYPTO_LOCK_SSL_CTX);
if (ret) {
r->not_resumable = 1;
if (ctx->remove_session_cb != NULL)
ctx->remove_session_cb(ctx, r);
SSL_SESSION_free(r);
}
} else
ret = 0;
return (ret);
}
void SSL_SESSION_free(SSL_SESSION *ss)
{
int i;
if (ss == NULL)
return;
i = CRYPTO_add(&ss->references, -1, CRYPTO_LOCK_SSL_SESSION);
#ifdef REF_PRINT
REF_PRINT("SSL_SESSION", ss);
#endif
if (i > 0)
return;
#ifdef REF_CHECK
if (i < 0) {
fprintf(stderr, "SSL_SESSION_free, bad reference count\n");
abort(); /* ok */
}
#endif
CRYPTO_free_ex_data(CRYPTO_EX_INDEX_SSL_SESSION, ss, &ss->ex_data);
OPENSSL_cleanse(ss->key_arg, sizeof ss->key_arg);
OPENSSL_cleanse(ss->master_key, sizeof ss->master_key);
OPENSSL_cleanse(ss->session_id, sizeof ss->session_id);
if (ss->sess_cert != NULL)
ssl_sess_cert_free(ss->sess_cert);
if (ss->peer != NULL)
X509_free(ss->peer);
if (ss->ciphers != NULL)
sk_SSL_CIPHER_free(ss->ciphers);
#ifndef OPENSSL_NO_TLSEXT
if (ss->tlsext_hostname != NULL)
OPENSSL_free(ss->tlsext_hostname);
if (ss->tlsext_tick != NULL)
OPENSSL_free(ss->tlsext_tick);
# ifndef OPENSSL_NO_EC
ss->tlsext_ecpointformatlist_length = 0;
if (ss->tlsext_ecpointformatlist != NULL)
OPENSSL_free(ss->tlsext_ecpointformatlist);
ss->tlsext_ellipticcurvelist_length = 0;
if (ss->tlsext_ellipticcurvelist != NULL)
OPENSSL_free(ss->tlsext_ellipticcurvelist);
# endif /* OPENSSL_NO_EC */
#endif
#ifndef OPENSSL_NO_PSK
if (ss->psk_identity_hint != NULL)
OPENSSL_free(ss->psk_identity_hint);
if (ss->psk_identity != NULL)
OPENSSL_free(ss->psk_identity);
#endif
#ifndef OPENSSL_NO_SRP
if (ss->srp_username != NULL)
OPENSSL_free(ss->srp_username);
#endif
OPENSSL_cleanse(ss, sizeof(*ss));
OPENSSL_free(ss);
}
int SSL_set_session(SSL *s, SSL_SESSION *session)
{
int ret = 0;
const SSL_METHOD *meth;
if (session != NULL) {
meth = s->ctx->method->get_ssl_method(session->ssl_version);
if (meth == NULL)
meth = s->method->get_ssl_method(session->ssl_version);
if (meth == NULL) {
SSLerr(SSL_F_SSL_SET_SESSION, SSL_R_UNABLE_TO_FIND_SSL_METHOD);
return (0);
}
if (meth != s->method) {
if (!SSL_set_ssl_method(s, meth))
return (0);
}
#ifndef OPENSSL_NO_KRB5
if (s->kssl_ctx && !s->kssl_ctx->client_princ &&
session->krb5_client_princ_len > 0) {
s->kssl_ctx->client_princ =
(char *)OPENSSL_malloc(session->krb5_client_princ_len + 1);
if (s->kssl_ctx->client_princ != NULL) {
memcpy(s->kssl_ctx->client_princ, session->krb5_client_princ,
session->krb5_client_princ_len);
s->kssl_ctx->client_princ[session->krb5_client_princ_len] = '\0';
}
}
#endif /* OPENSSL_NO_KRB5 */
/* CRYPTO_w_lock(CRYPTO_LOCK_SSL); */
CRYPTO_add(&session->references, 1, CRYPTO_LOCK_SSL_SESSION);
if (s->session != NULL)
SSL_SESSION_free(s->session);
s->session = session;
s->verify_result = s->session->verify_result;
/* CRYPTO_w_unlock(CRYPTO_LOCK_SSL); */
ret = 1;
} else {
if (s->session != NULL) {
SSL_SESSION_free(s->session);
s->session = NULL;
}
meth = s->ctx->method;
if (meth != s->method) {
if (!SSL_set_ssl_method(s, meth))
return (0);
}
ret = 1;
}
return (ret);
}
long SSL_SESSION_set_timeout(SSL_SESSION *s, long t)
{
if (s == NULL)
return (0);
s->timeout = t;
return (1);
}
long SSL_SESSION_get_timeout(const SSL_SESSION *s)
{
if (s == NULL)
return (0);
return (s->timeout);
}
long SSL_SESSION_get_time(const SSL_SESSION *s)
{
if (s == NULL)
return (0);
return (s->time);
}
long SSL_SESSION_set_time(SSL_SESSION *s, long t)
{
if (s == NULL)
return (0);
s->time = t;
return (t);
}
X509 *SSL_SESSION_get0_peer(SSL_SESSION *s)
{
return s->peer;
}
int openssl_SSL_SESSION_set1_id_context(SSL_SESSION *s, const unsigned char *sid_ctx,
unsigned int sid_ctx_len)
{
if (sid_ctx_len > SSL_MAX_SID_CTX_LENGTH) {
SSLerr(SSL_F_SSL_SESSION_SET1_ID_CONTEXT,
SSL_R_SSL_SESSION_ID_CONTEXT_TOO_LONG);
return 0;
}
s->sid_ctx_length = sid_ctx_len;
memcpy(s->sid_ctx, sid_ctx, sid_ctx_len);
return 1;
}
long SSL_CTX_set_timeout(SSL_CTX *s, long t)
{
long l;
if (s == NULL)
return (0);
l = s->session_timeout;
s->session_timeout = t;
return (l);
}
long SSL_CTX_get_timeout(const SSL_CTX *s)
{
if (s == NULL)
return (0);
return (s->session_timeout);
}
#ifndef OPENSSL_NO_TLSEXT
int SSL_set_session_secret_cb(SSL *s,
int (*tls_session_secret_cb) (SSL *s,
void *secret,
int *secret_len,
STACK_OF(SSL_CIPHER)
*peer_ciphers,
SSL_CIPHER
**cipher,
void *arg),
void *arg)
{
if (s == NULL)
return (0);
s->tls_session_secret_cb = tls_session_secret_cb;
s->tls_session_secret_cb_arg = arg;
return (1);
}
int SSL_set_session_ticket_ext_cb(SSL *s, tls_session_ticket_ext_cb_fn cb,
void *arg)
{
if (s == NULL)
return (0);
s->tls_session_ticket_ext_cb = cb;
s->tls_session_ticket_ext_cb_arg = arg;
return (1);
}
int SSL_set_session_ticket_ext(SSL *s, void *ext_data, int ext_len)
{
if (s->version >= TLS1_VERSION) {
if (s->tlsext_session_ticket) {
OPENSSL_free(s->tlsext_session_ticket);
s->tlsext_session_ticket = NULL;
}
s->tlsext_session_ticket =
OPENSSL_malloc(sizeof(TLS_SESSION_TICKET_EXT) + ext_len);
if (!s->tlsext_session_ticket) {
SSLerr(SSL_F_SSL_SET_SESSION_TICKET_EXT, ERR_R_MALLOC_FAILURE);
return 0;
}
if (ext_data) {
s->tlsext_session_ticket->length = ext_len;
s->tlsext_session_ticket->data = s->tlsext_session_ticket + 1;
memcpy(s->tlsext_session_ticket->data, ext_data, ext_len);
} else {
s->tlsext_session_ticket->length = 0;
s->tlsext_session_ticket->data = NULL;
}
return 1;
}
return 0;
}
#endif /* OPENSSL_NO_TLSEXT */
typedef struct timeout_param_st {
SSL_CTX *ctx;
long time;
LHASH_OF(SSL_SESSION) *cache;
} TIMEOUT_PARAM;
static void timeout_doall_arg(SSL_SESSION *s, TIMEOUT_PARAM *p)
{
if ((p->time == 0) || (p->time > (s->time + s->timeout))) { /* timeout */
/*
* The reason we don't call SSL_CTX_remove_session() is to save on
* locking overhead
*/
(void)lh_SSL_SESSION_delete(p->cache, s);
SSL_SESSION_list_remove(p->ctx, s);
s->not_resumable = 1;
if (p->ctx->remove_session_cb != NULL)
p->ctx->remove_session_cb(p->ctx, s);
SSL_SESSION_free(s);
}
}
static IMPLEMENT_LHASH_DOALL_ARG_FN(timeout, SSL_SESSION, TIMEOUT_PARAM)
void SSL_CTX_flush_sessions(SSL_CTX *s, long t)
{
unsigned long i;
TIMEOUT_PARAM tp;
tp.ctx = s;
tp.cache = s->sessions;
if (tp.cache == NULL)
return;
tp.time = t;
CRYPTO_w_lock(CRYPTO_LOCK_SSL_CTX);
i = CHECKED_LHASH_OF(SSL_SESSION, tp.cache)->down_load;
CHECKED_LHASH_OF(SSL_SESSION, tp.cache)->down_load = 0;
lh_SSL_SESSION_doall_arg(tp.cache, LHASH_DOALL_ARG_FN(timeout),
TIMEOUT_PARAM, &tp);
CHECKED_LHASH_OF(SSL_SESSION, tp.cache)->down_load = i;
CRYPTO_w_unlock(CRYPTO_LOCK_SSL_CTX);
}
int ssl_clear_bad_session(SSL *s)
{
if ((s->session != NULL) &&
!(s->shutdown & SSL_SENT_SHUTDOWN) &&
!(SSL_in_init(s) || SSL_in_before(s))) {
SSL_CTX_remove_session(s->ctx, s->session);
return (1);
} else
return (0);
}
/* locked by SSL_CTX in the calling function */
static void SSL_SESSION_list_remove(SSL_CTX *ctx, SSL_SESSION *s)
{
if ((s->next == NULL) || (s->prev == NULL))
return;
if (s->next == (SSL_SESSION *)&(ctx->session_cache_tail)) {
/* last element in list */
if (s->prev == (SSL_SESSION *)&(ctx->session_cache_head)) {
/* only one element in list */
ctx->session_cache_head = NULL;
ctx->session_cache_tail = NULL;
} else {
ctx->session_cache_tail = s->prev;
s->prev->next = (SSL_SESSION *)&(ctx->session_cache_tail);
}
} else {
if (s->prev == (SSL_SESSION *)&(ctx->session_cache_head)) {
/* first element in list */
ctx->session_cache_head = s->next;
s->next->prev = (SSL_SESSION *)&(ctx->session_cache_head);
} else {
/* middle of list */
s->next->prev = s->prev;
s->prev->next = s->next;
}
}
s->prev = s->next = NULL;
}
static void SSL_SESSION_list_add(SSL_CTX *ctx, SSL_SESSION *s)
{
if ((s->next != NULL) && (s->prev != NULL))
SSL_SESSION_list_remove(ctx, s);
if (ctx->session_cache_head == NULL) {
ctx->session_cache_head = s;
ctx->session_cache_tail = s;
s->prev = (SSL_SESSION *)&(ctx->session_cache_head);
s->next = (SSL_SESSION *)&(ctx->session_cache_tail);
} else {
s->next = ctx->session_cache_head;
s->next->prev = s;
s->prev = (SSL_SESSION *)&(ctx->session_cache_head);
ctx->session_cache_head = s;
}
}
void SSL_CTX_sess_set_new_cb(SSL_CTX *ctx,
int (*cb) (struct ssl_st *ssl,
SSL_SESSION *sess))
{
ctx->new_session_cb = cb;
}
int (*SSL_CTX_sess_get_new_cb(SSL_CTX *ctx)) (SSL *ssl, SSL_SESSION *sess) {
return ctx->new_session_cb;
}
void SSL_CTX_sess_set_remove_cb(SSL_CTX *ctx,
void (*cb) (SSL_CTX *ctx, SSL_SESSION *sess))
{
ctx->remove_session_cb = cb;
}
void (*SSL_CTX_sess_get_remove_cb(SSL_CTX *ctx)) (SSL_CTX *ctx,
SSL_SESSION *sess) {
return ctx->remove_session_cb;
}
void SSL_CTX_sess_set_get_cb(SSL_CTX *ctx,
SSL_SESSION *(*cb) (struct ssl_st *ssl,
unsigned char *data, int len,
int *copy))
{
ctx->get_session_cb = cb;
}
SSL_SESSION *(*SSL_CTX_sess_get_get_cb(SSL_CTX *ctx)) (SSL *ssl,
unsigned char *data,
int len, int *copy) {
return ctx->get_session_cb;
}
void SSL_CTX_set_info_callback(SSL_CTX *ctx,
void (*cb) (const SSL *ssl, int type, int val))
{
ctx->info_callback = cb;
}
void (*SSL_CTX_get_info_callback(SSL_CTX *ctx)) (const SSL *ssl, int type,
int val) {
return ctx->info_callback;
}
void SSL_CTX_set_client_cert_cb(SSL_CTX *ctx,
int (*cb) (SSL *ssl, X509 **x509,
EVP_PKEY **pkey))
{
ctx->client_cert_cb = cb;
}
int (*SSL_CTX_get_client_cert_cb(SSL_CTX *ctx)) (SSL *ssl, X509 **x509,
EVP_PKEY **pkey) {
return ctx->client_cert_cb;
}
#ifndef OPENSSL_NO_ENGINE
int SSL_CTX_set_client_cert_engine(SSL_CTX *ctx, ENGINE *e)
{
if (!ENGINE_init(e)) {
SSLerr(SSL_F_SSL_CTX_SET_CLIENT_CERT_ENGINE, ERR_R_ENGINE_LIB);
return 0;
}
if (!ENGINE_get_ssl_client_cert_function(e)) {
SSLerr(SSL_F_SSL_CTX_SET_CLIENT_CERT_ENGINE,
SSL_R_NO_CLIENT_CERT_METHOD);
ENGINE_finish(e);
return 0;
}
ctx->client_cert_engine = e;
return 1;
}
#endif
void SSL_CTX_set_cookie_generate_cb(SSL_CTX *ctx,
int (*cb) (SSL *ssl,
unsigned char *cookie,
unsigned int *cookie_len))
{
ctx->app_gen_cookie_cb = cb;
}
void SSL_CTX_set_cookie_verify_cb(SSL_CTX *ctx,
int (*cb) (SSL *ssl, unsigned char *cookie,
unsigned int cookie_len))
{
ctx->app_verify_cookie_cb = cb;
}
IMPLEMENT_PEM_rw(SSL_SESSION, SSL_SESSION, PEM_STRING_SSL_SESSION,
SSL_SESSION)
```
|
Adams Square (1879–1963) was a square in downtown Boston, Massachusetts. Now demolished, it was formerly located on the site of the current Boston City Hall in Government Center.
History
The square was a product of the 1873–4 extension of Washington Street to Haymarket Square, which created a large open space at the junction of Cornhill, Brattle, Washington, and Devonshire Streets. In 1879 the city decided to erect a statue of the Patriot and statesman Samuel Adams at this spot, and the area was accordingly given the name Adams Square that same year. During its early history the square was part of a thriving retail district near the northern end of Washington Street and was the home of Leopold Morse & Co., one of the largest clothing retailers in the city.
In 1898 Adams Square became a stop along the Tremont Street Subway (the predecessor to the MBTA Green Line) with the opening of Adams Square Station, whose large granite head house became the principal architectural feature of the area. Subsequent alterations to the square in the early twentieth century were undertaken in an effort to relieve congestion caused by increasing automobile traffic. In 1928 the city removed the Adams statue and relocated it to adjacent Dock Square in order to improve traffic flow, and three years later the original head house of the subway station was torn down to increase driver visibility and replaced with a significantly smaller entranceway.
In the mid-20th century the square was targeted for urban renewal as part of the Government Center project. It was demolished in 1963 and replaced with Boston City Hall.
See also
Dock Square
Scollay Square
Notes
References
External links
Adams Square - The Curse of the Bambino Musical
Washington Street, Adams Square, Facing North, in 1954-1959 - Perpetual Form of the City, via Dome
Former buildings and structures in Boston
Squares in Boston
Financial District, Boston
19th century in Boston
Government Center, Boston
|
```typescript
import { TextDocument } from 'vscode-languageserver-textdocument';
import { getFileFsPath } from '../utils/paths';
import { Definition } from 'vscode-languageserver-types';
import { LanguageModes } from '../embeddedSupport/languageModes';
/**
* State associated with a specific Vue file
* The state is shared between different modes
*/
export interface VueFileInfo {
/**
* The default export component info from script section
*/
componentInfo: ComponentInfo;
}
export interface ComponentInfo {
name?: string;
definition?: Definition;
insertInOptionAPIPos?: number;
componentsDefine?: {
start: number;
end: number;
insertPos: number;
};
childComponents?: ChildComponent[];
emits?: EmitInfo[];
/**
* Todo: Extract type info in cases like
* props: {
* foo: String
* }
*/
props?: PropInfo[];
data?: DataInfo[];
computed?: ComputedInfo[];
methods?: MethodInfo[];
}
export interface ChildComponent {
name: string;
documentation?: string;
definition?: {
path: string;
start: number;
end: number;
};
global: boolean;
info?: VueFileInfo;
}
export interface EmitInfo {
name: string;
/**
* `true` if
* emits: {
* foo: (...) => {...}
* }
*
* `false` if
* - `emits: ['foo']`
* - `@Emit()`
* - `emits: { foo: null }`
*/
hasValidator: boolean;
documentation?: string;
typeString?: string;
}
export interface PropInfo {
name: string;
/**
* `true` if
* props: {
* foo: { ... }
* }
*
* `false` if
* - `props: ['foo']`
* - `props: { foo: String }`
*
*/
hasObjectValidator: boolean;
required: boolean;
isBoundToModel: boolean;
documentation?: string;
typeString?: string;
}
export interface DataInfo {
name: string;
documentation?: string;
}
export interface ComputedInfo {
name: string;
documentation?: string;
}
export interface MethodInfo {
name: string;
documentation?: string;
}
export class VueInfoService {
private languageModes: LanguageModes;
private vueFileInfo: Map<string, VueFileInfo> = new Map();
constructor() {}
init(languageModes: LanguageModes) {
this.languageModes = languageModes;
}
updateInfo(doc: TextDocument, info: VueFileInfo) {
this.vueFileInfo.set(getFileFsPath(doc.uri), info);
}
getInfo(doc: TextDocument) {
this.languageModes.getAllLanguageModeRangesInDocument(doc).forEach(m => {
if (m.mode.updateFileInfo) {
m.mode.updateFileInfo(doc);
}
});
return this.vueFileInfo.get(getFileFsPath(doc.uri));
}
}
```
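For context, the caching behavior of `VueInfoService` can be illustrated with a stripped-down, self-contained sketch; the `InfoCache` class and file paths below are hypothetical stand-ins, not part of the Vetur codebase:

```typescript
// Minimal stand-in for VueInfoService's cache: a map from a file system path
// to the parsed VueFileInfo, without the language-mode plumbing.
interface ComponentInfo {
  name?: string;
}

interface VueFileInfo {
  componentInfo: ComponentInfo;
}

class InfoCache {
  private vueFileInfo: Map<string, VueFileInfo> = new Map();

  // Mirrors updateInfo(doc, info): store the latest analysis for a file.
  updateInfo(fsPath: string, info: VueFileInfo): void {
    this.vueFileInfo.set(fsPath, info);
  }

  // Mirrors getInfo(doc): return the cached analysis, if any.
  getInfo(fsPath: string): VueFileInfo | undefined {
    return this.vueFileInfo.get(fsPath);
  }
}

const cache = new InfoCache();
cache.updateInfo('/project/src/App.vue', { componentInfo: { name: 'App' } });
console.log(cache.getInfo('/project/src/App.vue')?.componentInfo.name); // prints "App"
```

The real service additionally asks each language mode to refresh its file info before a lookup, but the cache itself is just this keyed map.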
|
Vice Admiral Sir Raymond Shayle Hawkins KCB (21 December 1909 – 18 October 1987) was a Royal Navy officer who went on to be Fourth Sea Lord.
Naval career
Born on 21 December 1909 and educated at Bedford School, Raymond Hawkins joined the Royal Navy in 1927, serving aboard in 1932 and aboard in 1933. He served with Submarines between 1935 and 1943, and was promoted to lieutenant commander in 1940. He served aboard in 1943, as assistant naval attaché in Paris in 1954, and as commanding officer of HMS St Vincent in 1957. He was appointed director of marine engineering in 1961 and Fourth Sea Lord and Vice Controller of the Navy in 1963. Promoted to vice admiral, his job title changed to Chief of Naval Supplies and Transport and Vice-Controller of the Navy in 1964. He retired in 1967.
In retirement, Sir Raymond Hawkins was appointed director of engineering for English Electric Diesels. He died on 18 October 1987.
References
1909 births
1987 deaths
People educated at Bedford School
Royal Navy vice admirals
Knights Commander of the Order of the Bath
Lords of the Admiralty
|
```ruby
require_relative '../../../spec_helper'
require 'matrix'
describe "Matrix::Scalar#+" do
it "needs to be reviewed for spec completeness"
end
```
|
Dinwiddie High School is a secondary school in Dinwiddie County, Virginia, United States. It is the only high school in the county.
History
The Mann Act in 1906 provided for a system of high schools across the state. High schools were eventually built for white students in the county at
Midway (1911 – 1965),
Sunnyside (1912 – 1930),
Dinwiddie (1913 – 1965),
Darvills (1914 – 1942),
McKenney (1916 – 1930).
These were all consolidated into Dinwiddie County High School in 1965.
Campus
In 2008, Dinwiddie High School moved to a new building on a campus across the street from its former building. The new building serves students in grades 9 through 12 with a capacity of 1,600; the move was intended to ease overcrowding and accommodate future population growth in the region. The former high school building is now Dinwiddie Middle School for grades 6 through 8. Both schools are nicknamed the Generals, or the Gens, and although the two schools are separate, their student bodies remain closely connected. The campuses' close proximity allows for the sharing of amenities such as the football field. The high school has also made campus improvements, such as remodeling the flowerbeds around the school with new mulch and colorful flowers. In the 2018–19 school year, the school introduced a new class period called a "Gen Block", which gives students about an hour midway through the day for activities of their own choosing, such as coloring, cooking, or sewing, offering a break from graded work.
Athletics
Attended the 2000 Virginia State Football Championship at University of Richmond Stadium against Heritage Newport News, losing 42-7.
Attended the 2008 Virginia State Football Championship at Lane Stadium against the Phoebus High School Phantoms, losing 37-13.
Attended the 2013 Virginia State Football Championship at Williams Stadium against the Sherando High School Warriors, winning 56-14.
Attended the 2016 Virginia State Football Championship at Zable Stadium against the Salem High School Spartans, losing 31-27.
Since Dinwiddie won the state championship in 2013, football has become a focal point for the county. Recent fundraisers have improved field conditions, and surrounding businesses have given the program more publicity and support in exchange for advertising.
Notable alumni
Jim Austin, Former MLB player (Milwaukee Brewers)
Mike Christopher, Former MLB player (Los Angeles Dodgers, Cleveland Indians, Detroit Tigers)
Curtis Wilkerson, Former MLB player (Texas Rangers, Chicago Cubs, Pittsburgh Pirates, Kansas City Royals)
Notable faculty
Thomas G. Pullen, former president University of Baltimore
References
External links
Official site
http://www.dinwiddie.k12.va.us/
Schools in Dinwiddie County, Virginia
Public high schools in Virginia
|
eCRM, or electronic customer relationship management, a term coined by Oscar Gomes, encompasses all standard CRM functions with the use of the net environment, i.e., intranet, extranet and internet. Electronic CRM concerns all forms of managing relationships with customers through the use of information technology (IT).
eCRM processes include data collection, data aggregation, and customer interaction. Compared to traditional CRM, the integrated information that eCRM provides for intraorganizational collaboration makes communicating with customers more efficient.
From RM to CRM
The concept of relationship marketing (RM) was established by marketing professor Leonard Berry in 1983. He considered it to consist of attracting, maintaining and enhancing customer relationships within organizations.
In the years that followed, companies were engaging more and more in a meaningful dialogue with individual customers. In doing so, new organizational forms as well as technologies were used, eventually resulting in what we know as customer relationship management.
The main difference between CRM and e-CRM is that the former makes no explicit use of Internet-based technology, whereas the latter uses information technology (IT) in implementing RM strategies.
The essence of CRM
The exact meaning of CRM is still the subject of heavy discussion. However, the overall goal can be seen as effectively managing differentiated relationships with all customers and communicating with them on an individual basis. The underlying thought is that companies realize they can supercharge profits by acknowledging that different groups of customers vary widely in their behavior, desires, and responsiveness to marketing.
Loyal customers not only provide companies with sustained revenue but also help attract new customers. To reinforce customer trust and create additional customer sources, firms use CRM to maintain relationships in the two general categories of B2B (business-to-business) and B2C (business-to-customer or business-to-consumer). Because needs and behaviors differ between B2B and B2C customers, CRM should be implemented from the respective viewpoints.
Differences from CRM
Major differences between CRM and eCRM:
Customer contacts
CRM – Contact with customer made through the retail store, phone, and fax.
eCRM – All of the traditional methods are used in addition to Internet, email, wireless, and PDA technologies.
System interface
CRM – Implements the use of ERP systems, emphasis is on the back-end.
eCRM – Geared more toward front end, which interacts with the back-end through use of ERP systems, data warehouses, and data marts.
System overhead (client computers)
CRM – The client must download various applications to view the web-enabled applications. They would have to be rewritten for different platforms.
eCRM – Does not have these requirements because the client uses the browser.
Customization and personalization of information
CRM – Views differ based on the audience, and personalized views are not available. Individual personalization requires program changes.
eCRM – Personalized individual views based on purchase history and preferences. Individuals have the ability to customize their view.
System focus
CRM – System (created for internal use) designed based on job function and products. Web applications designed for a single department or business unit.
eCRM – System (created for external use) designed based on customer needs. Web application designed for enterprise-wide use.
System maintenance and modification
CRM – Implementation takes more time, and maintenance is more expensive, because the system exists at different locations and on various servers.
eCRM – Reduction in time and cost. Implementation and maintenance can take place at one location and on one server.
eCRM
As the Internet is becoming more and more important in business life, many companies consider it an opportunity to reduce customer-service costs, tighten customer relationships and, most importantly, further personalize marketing messages and enable mass customization. eCRM is being adopted by companies because it increases customer loyalty and customer retention by improving customer satisfaction, one of the objectives of eCRM. E-loyalty results in long-term profits for online retailers because they incur lower costs in recruiting new customers and enjoy an increase in customer retention.
Together with the creation of sales force automation (SFA), where electronic methods were used to gather data and analyze customer information, the trend of the upcoming Internet can be seen as the foundation of what we know as eCRM today.
Implementing the eCRM process involves a three-step life cycle:
Data collection: gathering customer preference information both actively (e.g., questionnaire answers) and passively (e.g., browsing records) via the website, email, and questionnaires.
Data aggregation: filtering and analyzing the collected data according to the firm's specific needs for serving its customers.
Customer interaction: providing the appropriate feedback to customers according to their needs.
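As a rough illustration, the three-step life cycle can be sketched as a small data pipeline; the event shape and the feedback rule below are invented for illustration and are not a real eCRM product API:

```typescript
// Hypothetical shape for a raw customer touch-point event.
interface CustomerEvent {
  customerId: string;
  channel: 'web' | 'email' | 'survey';
  data: string;
}

// Step 1 – data collection: gather raw events from electronic touch points.
const collected: CustomerEvent[] = [
  { customerId: 'c1', channel: 'web', data: 'viewed:pricing' },
  { customerId: 'c1', channel: 'survey', data: 'prefers:email' },
  { customerId: 'c2', channel: 'email', data: 'clicked:promo' },
];

// Step 2 – data aggregation: filter and group events per customer.
const byCustomer = new Map<string, string[]>();
for (const e of collected) {
  const profile = byCustomer.get(e.customerId) ?? [];
  profile.push(e.data);
  byCustomer.set(e.customerId, profile);
}

// Step 3 – customer interaction: feed the aggregate back as a tailored response.
function feedback(customerId: string): string {
  const profile = byCustomer.get(customerId) ?? [];
  return profile.includes('prefers:email')
    ? 'send personalized email'
    : 'show on-site offer';
}

console.log(feedback('c1')); // prints "send personalized email"
console.log(feedback('c2')); // prints "show on-site offer"
```

The point of the sketch is only the flow: raw events in, per-customer aggregates in the middle, individualized interaction out.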
eCRM can be defined as activities to manage customer relationships by using the Internet, web browsers or other electronic touch points.
The challenge hereby is to offer communication and information on the right topic, in the right amount, and at the right time that fits the customer's specific needs.
Strategy components
When enterprises integrate their customer information, there are three eCRM strategy components:
Operational: Because information is shared, business processes should put the customer's needs first and be implemented seamlessly. This avoids bothering customers repeatedly and eliminates redundant processes.
Analytical: Analysis helps the company maintain a long-term relationship with customers.
Collaborative: Due to improved communication technology, different departments in a company (intraorganizational) or business partners (interorganizational) can share information and work together more efficiently. (Nenad Jukic et al., 2003)
Implementing and integrating
Non-electronic solution
Several CRM software packages exist that can help companies in deploying CRM activities. Besides choosing one of these packages, companies can also choose to design and build their own solutions. In order to implement CRM in an effective way, one needs to consider the following factors:
Create a customer-focused culture in the organization.
Adopt customer-based managers to assess satisfaction.
Develop an end-to-end process to serve customers.
Recommend questions to be asked to help a customer solve a problem.
Track all aspects of selling to customers, as well as prospects.
Furthermore, CRM solutions are more effective once they are integrated with other information systems used by the company. Examples are transaction processing systems (TPS) that process data in real time, which can then be sent to the sales and finance departments in order to recalculate inventory and financial position quickly and accurately. Once this information is transferred back to the CRM software and services, it could prevent customers from placing an order in the belief that an item is in stock when it is not.
Cloud solution
Today, more and more enterprise CRM systems move to cloud computing solutions, "up from 8 percent of the CRM market in 2005 to 20 percent of the market in 2008, according to Gartner". By moving the management system into the cloud, companies can pay per use to manage, maintain, and upgrade the system cost-efficiently, and connect with their customers in a streamlined way. In a cloud-based CRM system, transactions can be recorded in the CRM database immediately.
Some cloud-based enterprise CRM systems are web-based, so customers do not need to install an additional interface, and activities with businesses can be updated in real time. People may communicate on mobile devices to get efficient service. Furthermore, customer/case experience and interaction feedback are other ways in which CRM collaboration and information integration within the corporate organization improve businesses' services.
There are many cloud CRM services for enterprises to use; here are some hints for choosing the right CRM system:
Assess your company's needs: enterprise CRM systems differ in the features they offer.
Take advantage of free trials: compare and familiarize yourself with each of the options.
Do the math: estimate the cost of the customer strategy against the company budget.
Consider mobile options: some systems, such as Salesforce.com, can be combined with mobile device applications.
Ask about security: consider whether the cloud CRM solution provides as much protection as your own system.
Make sure the sales team is on board: as the front line of the enterprise, the sales team should find the launched CRM system helpful.
Know your exit strategy: understand the exit mechanism to maintain flexibility.
vCRM
Channels through which companies can communicate with their customers are growing by the day, and as a result, customers' time and attention have turned into a major challenge.
One of the reasons eCRM is so popular nowadays is that digital channels can create unique and positive experiences – not just transactions – for customers.
An extreme but increasingly popular example of creating experiences in order to establish customer service is the use of Virtual Worlds, such as Second Life. Through this so-called vCRM, companies are able to create synergies between virtual and physical channels and reach a very wide consumer base. However, given the newness of the technology, most companies are still struggling to identify effective entries in Virtual Worlds.
Its highly interactive character, which allows companies to respond directly to any customer's requests or problems, is another feature of eCRM that helps companies establish and sustain long-term customer relationships.
Furthermore, Information Technology has helped companies to even further differentiate between customers and address a personal message or service. Some examples of tools used in eCRM:
Personalized Web Pages where customers are recognized and their preferences are shown.
Customized products or services.
CRM programs should be directed towards customer value that competitors cannot match. However, in a world where almost every company is connected to the Internet, eCRM has become a requirement for survival, not just a competitive advantage.
Different levels
In defining the scope of eCRM, three different levels can be distinguished:
Foundational services:
This includes the minimum necessary services such as web site effectiveness and responsiveness as well as order fulfillment.
Customer-centered services:
These services include order tracking, product configuration and customization as well as security/trust.
Value-added services:
These are extra services such as online auctions and online training and education.
Self-services are becoming increasingly important in CRM activities. The rise of the Internet and eCRM has boosted the options for self-service activities.
A critical success factor is the integration of such activities into traditional channels. An example was Ford's plan to sell cars directly to customers via its web site, which provoked an outcry among its dealer network.
CRM activities are mainly of two different types. Reactive service is where the customer has a problem and contacts the company. Proactive service is where the manager has decided not to wait for the customer to contact the firm, but to be aggressive and contact the customer himself in order to establish a dialogue and solve problems.
Steps to eCRM Success
Many factors play a part in ensuring that the implementation of any level of eCRM is successful. One obvious measure is the system's ability to add value to the existing business. There are four suggested implementation steps that affect the viability of a project like this:
Developing customer-centric strategies
Redesigning workflow management systems
Re-engineering work processes
Supporting with the right technologies
Mobile CRM
One subset of electronic CRM is mobile CRM (mCRM). This is defined as "services that aim at nurturing customer relationships, acquiring or maintaining customers, support marketing, sales or services processes, and use wireless networks as the medium of delivery to the customers." However, since communication is the central aspect of customer relationship activities, many opt for the following definition of mCRM: "communication, either one-way or interactive, which is related to sales, marketing and customer service activities conducted through a mobile medium for the purpose of building and maintaining customer relationships between a company and its customer(s)."
eCRM allows customers to access company services from more and more places, since Internet access points are increasing by the day. mCRM, however, takes this one step further and allows customers or managers to access the systems from, for instance, a mobile phone or PDA with Internet access, resulting in high flexibility.
Since mCRM is not able to provide a complete range of customer relationship activities it should be integrated in the complete CRM system.
There are three main reasons that mobile CRM is becoming so popular. The first is that the devices consumer use are improving in multiple ways that allow for this advancement. Displays are larger and clearer and access times on networks are improving overall. Secondly, the users are also becoming more sophisticated. The technology to them is nothing new so it is easy to adapt. Lastly, the software being developed for these applications has become worthwhile and useful to end users.
There are four basic steps that a company should follow to implement a mobile CRM system. By following these and also keeping the IT department, the end users and management in agreement, the outcome can be beneficial for all.
Step 1 – Needs analysis phase: This is the point to take your time and understand all the technical needs and desires of each of the users and stakeholders. It also has to be kept in mind that the mobile CRM system must be able to grow and change with the business.
Step 2 – Mobile design phase: This is the next critical phase that will show all the technical concerns that need to be addressed. A few main things to consider are screen size, device storage and security.
Step 3 – Mobile application testing phase: This step is mostly to ensure that the users and stakeholders all approve of the new system.
Step 4 – Rollout phase: This is when the new system is implemented but also when training on the final product is done with all users.
Advantages of mobile CRM
The mobile channel creates a more personal direct connection with customers.
It is continuously active and allows necessary individuals to take action quickly using the information.
Typically it is an opt-in-only channel, which allows for high-quality responsiveness.
Overall it supports loyalty between the customer and company, which improves and strengthens relationships.
Failures
Designing, creating and implementing IT projects has always been risky, not only because of the amount of money involved, but also because of the high chances of failure. However, a positive trend can be seen, indicating that CRM failures dropped from a failure rate of 80% in 1998 to about 40% in 2003.
Some of the major issues relating to CRM failure are the following:
Difficulty in measuring and valuing intangible benefits.
Failure to identify and focus on specific business problems.
Lack of active senior management sponsorship.
Poor user acceptance.
Trying to automate a poorly defined process.
Failure rates in CRM from 2001 to 2009:
2001: 50%, according to Gartner Group
2002: 70%, according to Butler Group
2003: 69.3%, according to Selling Power, CSO Forum
2004: 18%, according to AMR Research
2005: 31%, according to AMR Research
2006: 29%, according to AMR Research
2007: 56%, according to the Economist Intelligence Unit
2009: 47%, according to Forrester Research
Differing measurement criteria and methods of the research groups make it difficult to compare these rates. Most of these rates were based on customer response pertaining to questions on the success of CRM implementations.
Privacy
The effective and efficient employment of CRM activities cannot be discussed without addressing safety and privacy. CRM systems depend on databases in which all kinds of customer data are stored. In general, the following rule applies: the more data, the better the service companies can deliver to individual customers.
Well-known examples of these concerns are conducting credit-card transactions online and the phenomenon known as 'cookies', used on the Internet to track a user's information and behavior.
The design and the quality of the website are two very important aspects that influence the level of trust customers experience and their willingness or reluctance to conduct a transaction or leave personal information.
Privacy policies can be ineffective in relaying to customers how much of their information is being used. In a recent study by The University of Pennsylvania and University of California, it was revealed that over half the respondents have an incorrect understanding of how their information is being used. They believe that, if a company has a privacy policy, they will not share the customer's information with third party companies without the customer's express consent. Therefore, if marketers want to use consumer information for advertising purposes, they must clearly illustrate the ways in which they will use the customer's information and present the benefits of this in order to acquire the customer's consent. Privacy concerns are being addressed more and more. Legislation is being proposed that regulates the use of personal data. Also, Internet policy officials are calling for more performance measures of privacy policies.
Statistics on privacy:
38% of retailers do not discuss privacy in their sign-up or welcome email
About 50% of major online retailers discuss privacy concerns during the email subscription process
As the use of the Internet, electronic CRM solutions, and even the existence of e-business are rising, so are the efforts to further develop the systems being used and to increase their safety for customers, in order to further reap the benefits of their use.
See also
Customer relationship management
Comparison of CRM systems
Customer lifecycle management
B2B
B2C
Cloud computing
Enterprise resource planning
Notes
Further reading
Romano, Nicholas C. and Fjermestad, Jerry L. (2009). Preface to the focus theme on eCRM. Electronic Markets 19(2-3), 69-70.
Hwang, Yujong (2009). The impact of uncertainty avoidance, social norms and innovativeness on trust and ease of use in electronic customer relationship management. Electronic Markets 19(2-3), 89-98.
Hadaya, Pierre and Cassivi, Luc (2009). Collaborative e-product development and product innovation in a demand-driven network: the moderating role of eCRM. Electronic Markets 19(2-3), 71-87.
Customer relationship management software
|
Balampur is a village in the Bhopal district of Madhya Pradesh, India. It is located in the Huzur tehsil and the Phanda block. Bhadbhadaghat is the nearest railway station.
Demographics
According to the 2011 census of India, Balampur has 611 households. The effective literacy rate (i.e. the literacy rate of population excluding children aged 6 and below) is 65.3%.
References
Villages in Huzur tehsil
|
```c++
// Use, modification and distribution are subject to the
// LICENSE_1_0.txt or copy at path_to_url
#ifndef BOOST_MATH_TUPLE_HPP_INCLUDED
#define BOOST_MATH_TUPLE_HPP_INCLUDED
#include <boost/config.hpp>
#include <boost/tr1/detail/config.hpp> // for BOOST_HAS_TR1_TUPLE
#ifndef BOOST_NO_CXX11_HDR_TUPLE
#include <tuple>
namespace boost{ namespace math{
using ::std::tuple;
// [6.1.3.2] Tuple creation functions
using ::std::ignore;
using ::std::make_tuple;
using ::std::tie;
using ::std::get;
// [6.1.3.3] Tuple helper classes
using ::std::tuple_size;
using ::std::tuple_element;
}}
#elif defined(BOOST_HAS_TR1_TUPLE)
#include <boost/tr1/tuple.hpp>
namespace boost{ namespace math{
using ::std::tr1::tuple;
// [6.1.3.2] Tuple creation functions
using ::std::tr1::ignore;
using ::std::tr1::make_tuple;
using ::std::tr1::tie;
using ::std::tr1::get;
// [6.1.3.3] Tuple helper classes
using ::std::tr1::tuple_size;
using ::std::tr1::tuple_element;
}}
#elif (defined(__BORLANDC__) && (__BORLANDC__ <= 0x600)) || (defined(_MSC_VER) && (_MSC_VER < 1310)) || defined(__IBMCPP__)
#include <boost/tuple/tuple.hpp>
#include <boost/tuple/tuple_comparison.hpp>
#include <boost/type_traits/integral_constant.hpp>
namespace boost{ namespace math{
using ::boost::tuple;
// [6.1.3.2] Tuple creation functions
using ::boost::tuples::ignore;
using ::boost::make_tuple;
using ::boost::tie;
// [6.1.3.3] Tuple helper classes
template <class T>
struct tuple_size
: public ::boost::integral_constant
< ::std::size_t, ::boost::tuples::length<T>::value>
{};
template < int I, class T>
struct tuple_element
{
typedef typename boost::tuples::element<I,T>::type type;
};
#if !BOOST_WORKAROUND(__BORLANDC__, < 0x0582)
// [6.1.3.4] Element access
using ::boost::get;
#endif
} } // namespaces
#else
#include <boost/fusion/include/tuple.hpp>
#include <boost/fusion/include/std_pair.hpp>
namespace boost{ namespace math{
using ::boost::fusion::tuple;
// [6.1.3.2] Tuple creation functions
using ::boost::fusion::ignore;
using ::boost::fusion::make_tuple;
using ::boost::fusion::tie;
using ::boost::fusion::get;
// [6.1.3.3] Tuple helper classes
using ::boost::fusion::tuple_size;
using ::boost::fusion::tuple_element;
}}
#endif
#endif
```
|
```python
from web3 import (
Web3,
)
from web3.providers import (
AutoProvider,
BaseProvider,
)
class ConnectedProvider(BaseProvider):
def is_connected(self, show_traceback: bool = False):
return True
class DisconnectedProvider(BaseProvider):
def is_connected(self, show_traceback: bool = False):
return False
def test_is_connected_connected():
"""
Web3.is_connected() returns True when connected to a node.
"""
w3 = Web3(ConnectedProvider())
assert w3.is_connected() is True
def test_is_connected_disconnected():
"""
Web3.is_connected() returns False when configured with a provider
that's not connected to a node.
"""
w3 = Web3(DisconnectedProvider())
assert w3.is_connected() is False
def test_autoprovider_detection():
def no_provider():
return None
def must_not_call():
raise AssertionError
auto = AutoProvider(
[
no_provider,
DisconnectedProvider,
ConnectedProvider,
must_not_call,
]
)
w3 = Web3(auto)
assert w3.is_connected()
assert isinstance(auto._active_provider, ConnectedProvider)
```
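The fallback behavior these tests exercise can be sketched independently of web3. The helper below is illustrative only (`pick_first_connected` and this `Provider` class are not web3 APIs): it walks a list of candidate factories, skips those that yield nothing or a disconnected provider, and stops at the first connected one, so later candidates are never constructed.

```python
class Provider:
    """Minimal stand-in for a web3 BaseProvider (illustrative only)."""

    def __init__(self, connected):
        self._connected = connected

    def is_connected(self):
        return self._connected


def must_not_call():
    # Reaching this factory would mean the search kept going
    # past an already-connected provider.
    raise AssertionError("later candidates must not be constructed")


def pick_first_connected(candidates):
    """Return the first candidate whose factory yields a connected provider.

    Factories may return None (skipped) or a disconnected provider
    (passed over); the search stops at the first connected one.
    """
    for factory in candidates:
        provider = factory()
        if provider is not None and provider.is_connected():
            return provider
    return None


active = pick_first_connected([
    lambda: None,
    lambda: Provider(connected=False),
    lambda: Provider(connected=True),
    must_not_call,
])
assert active is not None and active.is_connected()
```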
|
Marcin Lijewski (born 21 September 1977) is a former Polish handball player and the current coach of Poland.
As a player he received a silver medal with the Polish team at the 2007 World Men's Handball Championship in Germany and a bronze medal at the 2009 World Men's Handball Championship in Croatia. He participated at the 2008 Summer Olympics, where Poland finished 5th.
Sporting achievements
State awards
2007 Gold Cross of Merit
Personal life
His brother Krzysztof Lijewski is also a former handball player and current assistant coach at Industria Kielce.
References
External links
1977 births
Living people
21st-century Polish people
People from Krotoszyn
Sportspeople from Greater Poland Voivodeship
Polish male handball players
Wisła Płock (handball) players
SG Flensburg-Handewitt players
Handball-Bundesliga players
Expatriate handball players
Polish expatriate sportspeople in Germany
Handball players at the 2008 Summer Olympics
Olympic handball players for Poland
Polish expatriate sportspeople in Iran
Polish handball coaches
Handball coaches of international teams
|
The Message is the debut studio album by American hip hop group Grandmaster Flash and the Furious Five, released on October 3, 1982 by Sugar Hill Records. It features the influential title track and hip hop single "The Message".
Release and reception
The Message was released in October 1982 by Sugar Hill Records. The album charted at number 53 in the United States and at number 77 in the United Kingdom.
Reviewing in December 1982 for The New York Times, Robert Palmer hailed The Message as the year's best album and explained that while the emerging rap genre had often been criticized for confining itself to "bragging and boasting ... The Message is different. It's a gritty, plain-spoken, vividly cinematic portrait of black street life...social realism has rarely worked well in a pop-music context, but The Message is an utterly convincing cry of frustration and despair that cannot be ignored." Robert Christgau ranked it as the 21st best album of 1982 on his list for The Village Voice's annual Pazz & Jop critics' poll. In Christgau's Record Guide: The '80s (1990), he wrote that, although "She's Fresh" is the "only instant killer", each song's attempt to experiment and "touch a lot of bases with a broad demographic ... justifies itself".
According to music journalist Tom Breihan, The Message was a "singles-plus filler cash-in" that proved "a fascinating time capsule of rap's early attempts with the album format" as well as "a full-length artistic breakthrough, a rap album that earned respect on its own terms". In a retrospective review, AllMusic's Ron Wynn called it the "ultimate peak" for Grandmaster Flash and the Furious Five, naming the title track as its highlight. Miles Marshall Lewis, reviewing the album's 2002 British reissue in The New Rolling Stone Album Guide (2004), cited "The Adventures of Grandmaster Flash on the Wheels of Steel" as the "clincher" and "the only prime-period example of Flash's ability to set and shatter moods, with his turntables and faders running through a collage of at least 10 records that sound like hundreds." Mark Richardson from Pitchfork said that The Message featured "two absolutely essential songs"—the title track and "Scorpio," which he dubbed "the greatest early electro track." However, he felt the rest of the songs were inferior. The album was also included in the book 1001 Albums You Must Hear Before You Die.
Track listing
Sample credits
"She's Fresh" contains samples from "It's Just Begun" by The Jimmy Castor Bunch and "The Lovomaniacs" by Boobie Knight.
"It's Nasty" contains samples from "Genius of Love" by Tom Tom Club.
"It's a Shame" contains samples from "Mt. Airy Groove" by Pieces Of A Dream.
"The Adventures of Grandmaster Flash on the Wheels of Steel" contains samples from "Good Times" by Chic, "Apache" by The Incredible Bongo Band, "Rapture" by Blondie, "Another One Bites the Dust" by Queen, "8th Wonder" by The Sugarhill Gang, "Monster Jam" by Sequence, "Glow of Love" by Change and "Life Story" by The Hellers.
Personnel
Grandmaster Flash (Joseph Saddler) – turntables, drum programming, Flashformer transform DJ device, background vocals
Keef Cowboy (Keith Wiggins) – lead and background vocals, writer and arranger
Grandmaster Melle Mel (Melvin Glover) – lead and background vocals, writer and arranger
The Kidd Creole (Nathaniel Glover Jr.) – lead and background vocals, writer and arranger
Scorpio (Eddie Morris) – lead and background vocals, writer and arranger
Rahiem (Guy Todd Williams) – lead and background vocals, writer and arranger
Doug Wimbish - bass
Skip McDonald - guitar
Reggie Griffin, Jiggs, Sylvia Robinson - Prophet Sequential
Gary Henry, Dwain Mitchell - keyboards
Keith Leblanc - drums
Ed Fletcher - percussion
Chops Horn Section - brass
Charts
Album
Singles
References
Bibliography
External links
The group's official website
The Kidd Creole's Official Website
The Message (Adobe Flash) at Radio3Net (streamed copy where licensed)
1982 debut albums
Grandmaster Flash and the Furious Five albums
Electro albums by American artists
Albums produced by Grandmaster Flash
Sugar Hill Records (hip hop label) albums
|
Popowo is a village in the administrative district of Gmina Międzychód, within Międzychód County, Greater Poland Voivodeship, in west-central Poland. It lies approximately east of Międzychód and west of the regional capital Poznań.
References
Villages in Międzychód County
|
```c
#include <ctype.h>
#include <stdlib.h>
#include <string.h>
#include <zephyr/device.h>
#include <zephyr/drivers/sensor.h>
#include <zephyr/kernel.h>
#include <zephyr/logging/log.h>
#include <zephyr/rtio/rtio.h>
#include <zephyr/shell/shell.h>
#include <zephyr/sys/iterable_sections.h>
#include <zephyr/sys/util.h>
#include "sensor_shell.h"
LOG_MODULE_REGISTER(sensor_shell, CONFIG_SENSOR_LOG_LEVEL);
#define SENSOR_GET_HELP \
"Get sensor data. Channel names are optional. All channels are read " \
"when no channels are provided. Syntax:\n" \
"<device_name> <channel name 0> .. <channel name N>"
#define SENSOR_STREAM_HELP \
"Start/stop streaming sensor data. Data ready trigger will be used if no triggers " \
"are provided. Syntax:\n" \
"<device_name> on|off <trigger name> incl|drop|nop"
#define SENSOR_ATTR_GET_HELP \
"Get the sensor's channel attribute. Syntax:\n" \
"<device_name> [<channel_name 0> <attribute_name 0> .. " \
"<channel_name N> <attribute_name N>]"
#define SENSOR_ATTR_SET_HELP \
"Set the sensor's channel attribute.\n" \
"<device_name> <channel_name> <attribute_name> <value>"
#define SENSOR_INFO_HELP "Get sensor info, such as vendor and model name, for all sensors."
#define SENSOR_TRIG_HELP \
"Get or set the trigger type on a sensor. Currently only supports `data_ready`.\n" \
"<device_name> <on/off> <trigger_name>"
static const char *sensor_channel_name[SENSOR_CHAN_COMMON_COUNT] = {
[SENSOR_CHAN_ACCEL_X] = "accel_x",
[SENSOR_CHAN_ACCEL_Y] = "accel_y",
[SENSOR_CHAN_ACCEL_Z] = "accel_z",
[SENSOR_CHAN_ACCEL_XYZ] = "accel_xyz",
[SENSOR_CHAN_GYRO_X] = "gyro_x",
[SENSOR_CHAN_GYRO_Y] = "gyro_y",
[SENSOR_CHAN_GYRO_Z] = "gyro_z",
[SENSOR_CHAN_GYRO_XYZ] = "gyro_xyz",
[SENSOR_CHAN_MAGN_X] = "magn_x",
[SENSOR_CHAN_MAGN_Y] = "magn_y",
[SENSOR_CHAN_MAGN_Z] = "magn_z",
[SENSOR_CHAN_MAGN_XYZ] = "magn_xyz",
[SENSOR_CHAN_DIE_TEMP] = "die_temp",
[SENSOR_CHAN_AMBIENT_TEMP] = "ambient_temp",
[SENSOR_CHAN_PRESS] = "press",
[SENSOR_CHAN_PROX] = "prox",
[SENSOR_CHAN_HUMIDITY] = "humidity",
[SENSOR_CHAN_LIGHT] = "light",
[SENSOR_CHAN_IR] = "ir",
[SENSOR_CHAN_RED] = "red",
[SENSOR_CHAN_GREEN] = "green",
[SENSOR_CHAN_BLUE] = "blue",
[SENSOR_CHAN_ALTITUDE] = "altitude",
[SENSOR_CHAN_PM_1_0] = "pm_1_0",
[SENSOR_CHAN_PM_2_5] = "pm_2_5",
[SENSOR_CHAN_PM_10] = "pm_10",
[SENSOR_CHAN_DISTANCE] = "distance",
[SENSOR_CHAN_CO2] = "co2",
[SENSOR_CHAN_O2] = "o2",
[SENSOR_CHAN_VOC] = "voc",
[SENSOR_CHAN_GAS_RES] = "gas_resistance",
[SENSOR_CHAN_VOLTAGE] = "voltage",
[SENSOR_CHAN_VSHUNT] = "vshunt",
[SENSOR_CHAN_CURRENT] = "current",
[SENSOR_CHAN_POWER] = "power",
[SENSOR_CHAN_RESISTANCE] = "resistance",
[SENSOR_CHAN_ROTATION] = "rotation",
[SENSOR_CHAN_POS_DX] = "pos_dx",
[SENSOR_CHAN_POS_DY] = "pos_dy",
[SENSOR_CHAN_POS_DZ] = "pos_dz",
[SENSOR_CHAN_POS_DXYZ] = "pos_dxyz",
[SENSOR_CHAN_RPM] = "rpm",
[SENSOR_CHAN_GAUGE_VOLTAGE] = "gauge_voltage",
[SENSOR_CHAN_GAUGE_AVG_CURRENT] = "gauge_avg_current",
[SENSOR_CHAN_GAUGE_STDBY_CURRENT] = "gauge_stdby_current",
[SENSOR_CHAN_GAUGE_MAX_LOAD_CURRENT] = "gauge_max_load_current",
[SENSOR_CHAN_GAUGE_TEMP] = "gauge_temp",
[SENSOR_CHAN_GAUGE_STATE_OF_CHARGE] = "gauge_state_of_charge",
[SENSOR_CHAN_GAUGE_FULL_CHARGE_CAPACITY] = "gauge_full_cap",
[SENSOR_CHAN_GAUGE_REMAINING_CHARGE_CAPACITY] = "gauge_remaining_cap",
[SENSOR_CHAN_GAUGE_NOM_AVAIL_CAPACITY] = "gauge_nominal_cap",
[SENSOR_CHAN_GAUGE_FULL_AVAIL_CAPACITY] = "gauge_full_avail_cap",
[SENSOR_CHAN_GAUGE_AVG_POWER] = "gauge_avg_power",
[SENSOR_CHAN_GAUGE_STATE_OF_HEALTH] = "gauge_state_of_health",
[SENSOR_CHAN_GAUGE_TIME_TO_EMPTY] = "gauge_time_to_empty",
[SENSOR_CHAN_GAUGE_TIME_TO_FULL] = "gauge_time_to_full",
[SENSOR_CHAN_GAUGE_CYCLE_COUNT] = "gauge_cycle_count",
[SENSOR_CHAN_GAUGE_DESIGN_VOLTAGE] = "gauge_design_voltage",
[SENSOR_CHAN_GAUGE_DESIRED_VOLTAGE] = "gauge_desired_voltage",
[SENSOR_CHAN_GAUGE_DESIRED_CHARGING_CURRENT] = "gauge_desired_charging_current",
[SENSOR_CHAN_ALL] = "all",
};
static const char *sensor_attribute_name[SENSOR_ATTR_COMMON_COUNT] = {
[SENSOR_ATTR_SAMPLING_FREQUENCY] = "sampling_frequency",
[SENSOR_ATTR_LOWER_THRESH] = "lower_thresh",
[SENSOR_ATTR_UPPER_THRESH] = "upper_thresh",
[SENSOR_ATTR_SLOPE_TH] = "slope_th",
[SENSOR_ATTR_SLOPE_DUR] = "slope_dur",
[SENSOR_ATTR_HYSTERESIS] = "hysteresis",
[SENSOR_ATTR_OVERSAMPLING] = "oversampling",
[SENSOR_ATTR_FULL_SCALE] = "full_scale",
[SENSOR_ATTR_OFFSET] = "offset",
[SENSOR_ATTR_CALIB_TARGET] = "calib_target",
[SENSOR_ATTR_CONFIGURATION] = "configuration",
[SENSOR_ATTR_CALIBRATION] = "calibration",
[SENSOR_ATTR_FEATURE_MASK] = "feature_mask",
[SENSOR_ATTR_ALERT] = "alert",
[SENSOR_ATTR_FF_DUR] = "ff_dur",
[SENSOR_ATTR_BATCH_DURATION] = "batch_dur",
};
enum sample_stats_state {
SAMPLE_STATS_STATE_UNINITIALIZED = 0,
SAMPLE_STATS_STATE_ENABLED,
SAMPLE_STATS_STATE_DISABLED,
};
struct sample_stats {
int64_t accumulator;
uint64_t sample_window_start;
uint32_t count;
enum sample_stats_state state;
};
static struct sample_stats sensor_stats[CONFIG_SENSOR_SHELL_MAX_TRIGGER_DEVICES][SENSOR_CHAN_ALL];
static const struct device *sensor_trigger_devices[CONFIG_SENSOR_SHELL_MAX_TRIGGER_DEVICES];
static bool device_is_sensor(const struct device *dev)
{
#ifdef CONFIG_SENSOR_INFO
STRUCT_SECTION_FOREACH(sensor_info, sensor) {
if (sensor->dev == dev) {
return true;
}
}
return false;
#else
return true;
#endif /* CONFIG_SENSOR_INFO */
}
static int find_sensor_trigger_device(const struct device *sensor)
{
for (int i = 0; i < CONFIG_SENSOR_SHELL_MAX_TRIGGER_DEVICES; i++) {
if (sensor_trigger_devices[i] == sensor) {
return i;
}
}
return -1;
}
/* Forward declaration */
static void data_ready_trigger_handler(const struct device *sensor,
const struct sensor_trigger *trigger);
#define TRIGGER_DATA_ENTRY(trig_enum, str_name, handler_func) \
[(trig_enum)] = {.name = #str_name, \
.handler = (handler_func), \
.trigger = {.chan = SENSOR_CHAN_ALL, .type = (trig_enum)}}
/**
* @brief This table stores a mapping of string trigger names along with the sensor_trigger struct
* that gets passed to the driver to enable that trigger, plus a function pointer to a handler. If
* that pointer is NULL, this indicates there is not currently support for that trigger type in the
* sensor shell.
*/
static const struct {
const char *name;
sensor_trigger_handler_t handler;
struct sensor_trigger trigger;
} sensor_trigger_table[SENSOR_TRIG_COMMON_COUNT] = {
TRIGGER_DATA_ENTRY(SENSOR_TRIG_TIMER, timer, NULL),
TRIGGER_DATA_ENTRY(SENSOR_TRIG_DATA_READY, data_ready, data_ready_trigger_handler),
TRIGGER_DATA_ENTRY(SENSOR_TRIG_DELTA, delta, NULL),
TRIGGER_DATA_ENTRY(SENSOR_TRIG_NEAR_FAR, near_far, NULL),
TRIGGER_DATA_ENTRY(SENSOR_TRIG_THRESHOLD, threshold, NULL),
TRIGGER_DATA_ENTRY(SENSOR_TRIG_TAP, tap, NULL),
TRIGGER_DATA_ENTRY(SENSOR_TRIG_DOUBLE_TAP, double_tap, NULL),
TRIGGER_DATA_ENTRY(SENSOR_TRIG_FREEFALL, freefall, NULL),
TRIGGER_DATA_ENTRY(SENSOR_TRIG_MOTION, motion, NULL),
TRIGGER_DATA_ENTRY(SENSOR_TRIG_STATIONARY, stationary, NULL),
TRIGGER_DATA_ENTRY(SENSOR_TRIG_FIFO_WATERMARK, fifo_wm, NULL),
TRIGGER_DATA_ENTRY(SENSOR_TRIG_FIFO_FULL, fifo_full, NULL),
};
/**
* Lookup the sensor trigger data by name
*
* @param name The name of the trigger
* @return < 0 on error
* @return >= 0 if found
*/
static int sensor_trigger_name_lookup(const char *name)
{
for (int i = 0; i < ARRAY_SIZE(sensor_trigger_table); ++i) {
if (strcmp(name, sensor_trigger_table[i].name) == 0) {
return i;
}
}
return -1;
}
enum dynamic_command_context {
NONE,
CTX_GET,
CTX_ATTR_GET_SET,
CTX_STREAM_ON_OFF,
};
static enum dynamic_command_context current_cmd_ctx = NONE;
/* Mutex for accessing shared RTIO/IODEV data structures */
K_MUTEX_DEFINE(cmd_get_mutex);
/* Create a single common config for one-shot reading */
static struct sensor_chan_spec iodev_sensor_shell_channels[SENSOR_CHAN_ALL];
static struct sensor_read_config iodev_sensor_shell_read_config = {
.sensor = NULL,
.is_streaming = false,
.channels = iodev_sensor_shell_channels,
.count = 0,
.max = ARRAY_SIZE(iodev_sensor_shell_channels),
};
RTIO_IODEV_DEFINE(iodev_sensor_shell_read, &__sensor_iodev_api, &iodev_sensor_shell_read_config);
/* Create the RTIO context to service the reading */
RTIO_DEFINE_WITH_MEMPOOL(sensor_read_rtio, 8, 8, 32, 64, 4);
static int parse_named_int(const char *name, const char *haystack[], size_t count)
{
	char *endptr;
	int i;
	/* Attempt to parse channel name as a number first */
	i = strtoul(name, &endptr, 0);
	if (*endptr == '\0') {
		return i;
	}
	/* Channel name is not a number, look it up */
	for (i = 0; i < count; i++) {
		if (strcmp(name, haystack[i]) == 0) {
			return i;
		}
	}
	return -ENOTSUP;
}
static int parse_sensor_value(const char *val_str, struct sensor_value *out)
{
const bool is_negative = val_str[0] == '-';
const char *decimal_pos = strchr(val_str, '.');
long value;
char *endptr;
/* Parse int portion */
value = strtol(val_str, &endptr, 0);
if (*endptr != '\0' && *endptr != '.') {
return -EINVAL;
}
if (value > INT32_MAX || value < INT32_MIN) {
return -EINVAL;
}
out->val1 = (int32_t)value;
if (decimal_pos == NULL) {
return 0;
}
/* Parse the decimal portion */
value = strtoul(decimal_pos + 1, &endptr, 0);
if (*endptr != '\0') {
return -EINVAL;
}
while (value < 100000) {
value *= 10;
}
if (value > INT32_C(999999)) {
return -EINVAL;
}
out->val2 = (int32_t)value;
if (is_negative) {
out->val2 *= -1;
}
return 0;
}
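/*
 * Illustrative examples of the parsing above (assumed semantics; val2 is in
 * micro-units per the struct sensor_value convention, and is left untouched
 * when the string has no fractional part, so callers should zero-initialize):
 *   "12.5"  -> val1 = 12,  val2 = 500000
 *   "-1.25" -> val1 = -1,  val2 = -250000
 *   "100"   -> val1 = 100, val2 unchanged
 */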
void sensor_shell_processing_callback(int result, uint8_t *buf, uint32_t buf_len, void *userdata)
{
struct sensor_shell_processing_context *ctx = userdata;
const struct sensor_decoder_api *decoder;
uint8_t decoded_buffer[128];
struct {
uint64_t base_timestamp_ns;
int count;
uint64_t timestamp_delta;
int64_t values[3];
int8_t shift;
} accumulator_buffer;
int rc;
ARG_UNUSED(buf_len);
if (result < 0) {
shell_error(ctx->sh, "Read failed");
return;
}
rc = sensor_get_decoder(ctx->dev, &decoder);
if (rc != 0) {
shell_error(ctx->sh, "Failed to get decoder for '%s'", ctx->dev->name);
return;
}
for (int trigger = 0; decoder->has_trigger != NULL && trigger < SENSOR_TRIG_COMMON_COUNT;
++trigger) {
if (!decoder->has_trigger(buf, trigger)) {
continue;
}
shell_info(ctx->sh, "Trigger (%d / %s) detected", trigger,
(sensor_trigger_table[trigger].name == NULL
? "UNKNOWN"
: sensor_trigger_table[trigger].name));
}
for (struct sensor_chan_spec ch = {0, 0}; ch.chan_type < SENSOR_CHAN_ALL; ch.chan_type++) {
uint32_t fit = 0;
size_t base_size;
size_t frame_size;
uint16_t frame_count;
/* Channels with multi-axis equivalents are skipped */
switch (ch.chan_type) {
case SENSOR_CHAN_ACCEL_X:
case SENSOR_CHAN_ACCEL_Y:
case SENSOR_CHAN_ACCEL_Z:
case SENSOR_CHAN_GYRO_X:
case SENSOR_CHAN_GYRO_Y:
case SENSOR_CHAN_GYRO_Z:
case SENSOR_CHAN_MAGN_X:
case SENSOR_CHAN_MAGN_Y:
case SENSOR_CHAN_MAGN_Z:
case SENSOR_CHAN_POS_DX:
case SENSOR_CHAN_POS_DY:
case SENSOR_CHAN_POS_DZ:
continue;
}
rc = decoder->get_size_info(ch, &base_size, &frame_size);
if (rc != 0) {
LOG_DBG("skipping unsupported channel %s:%d",
sensor_channel_name[ch.chan_type], ch.chan_idx);
/* Channel not supported, skipping */
continue;
}
if (base_size > ARRAY_SIZE(decoded_buffer)) {
shell_error(ctx->sh,
"Channel (type %d, idx %d) requires %zu bytes to decode, but "
"only %zu are available",
ch.chan_type, ch.chan_idx, base_size,
ARRAY_SIZE(decoded_buffer));
continue;
}
while (decoder->get_frame_count(buf, ch, &frame_count) == 0) {
LOG_DBG("decoding %d frames from channel %s:%d",
frame_count, sensor_channel_name[ch.chan_type], ch.chan_idx);
fit = 0;
memset(&accumulator_buffer, 0, sizeof(accumulator_buffer));
while (decoder->decode(buf, ch, &fit, 1, decoded_buffer) > 0) {
switch (ch.chan_type) {
case SENSOR_CHAN_ACCEL_XYZ:
case SENSOR_CHAN_GYRO_XYZ:
case SENSOR_CHAN_MAGN_XYZ:
case SENSOR_CHAN_POS_DXYZ: {
struct sensor_three_axis_data *data =
(struct sensor_three_axis_data *)decoded_buffer;
if (accumulator_buffer.count == 0) {
accumulator_buffer.base_timestamp_ns =
data->header.base_timestamp_ns;
}
accumulator_buffer.count++;
accumulator_buffer.shift = data->shift;
accumulator_buffer.timestamp_delta +=
data->readings[0].timestamp_delta;
accumulator_buffer.values[0] += data->readings[0].values[0];
accumulator_buffer.values[1] += data->readings[0].values[1];
accumulator_buffer.values[2] += data->readings[0].values[2];
break;
}
case SENSOR_CHAN_PROX: {
struct sensor_byte_data *data =
(struct sensor_byte_data *)decoded_buffer;
if (accumulator_buffer.count == 0) {
accumulator_buffer.base_timestamp_ns =
data->header.base_timestamp_ns;
}
accumulator_buffer.count++;
accumulator_buffer.timestamp_delta +=
data->readings[0].timestamp_delta;
accumulator_buffer.values[0] += data->readings[0].is_near;
break;
}
default: {
struct sensor_q31_data *data =
(struct sensor_q31_data *)decoded_buffer;
if (accumulator_buffer.count == 0) {
accumulator_buffer.base_timestamp_ns =
data->header.base_timestamp_ns;
}
accumulator_buffer.count++;
accumulator_buffer.shift = data->shift;
accumulator_buffer.timestamp_delta +=
data->readings[0].timestamp_delta;
accumulator_buffer.values[0] += data->readings[0].value;
break;
}
}
}
/* Print the accumulated value average */
switch (ch.chan_type) {
case SENSOR_CHAN_ACCEL_XYZ:
case SENSOR_CHAN_GYRO_XYZ:
case SENSOR_CHAN_MAGN_XYZ:
case SENSOR_CHAN_POS_DXYZ: {
struct sensor_three_axis_data *data =
(struct sensor_three_axis_data *)decoded_buffer;
data->header.base_timestamp_ns =
accumulator_buffer.base_timestamp_ns;
data->header.reading_count = 1;
data->shift = accumulator_buffer.shift;
data->readings[0].timestamp_delta =
(uint32_t)(accumulator_buffer.timestamp_delta /
accumulator_buffer.count);
data->readings[0].values[0] = (q31_t)(accumulator_buffer.values[0] /
accumulator_buffer.count);
data->readings[0].values[1] = (q31_t)(accumulator_buffer.values[1] /
accumulator_buffer.count);
data->readings[0].values[2] = (q31_t)(accumulator_buffer.values[2] /
accumulator_buffer.count);
shell_info(ctx->sh,
"channel type=%d(%s) index=%d shift=%d num_samples=%d "
"value=%" PRIsensor_three_axis_data,
ch.chan_type, sensor_channel_name[ch.chan_type],
ch.chan_idx, data->shift, accumulator_buffer.count,
PRIsensor_three_axis_data_arg(*data, 0));
break;
}
case SENSOR_CHAN_PROX: {
struct sensor_byte_data *data =
(struct sensor_byte_data *)decoded_buffer;
data->header.base_timestamp_ns =
accumulator_buffer.base_timestamp_ns;
data->header.reading_count = 1;
data->readings[0].timestamp_delta =
(uint32_t)(accumulator_buffer.timestamp_delta /
accumulator_buffer.count);
data->readings[0].is_near =
accumulator_buffer.values[0] / accumulator_buffer.count;
shell_info(ctx->sh,
"channel type=%d(%s) index=%d num_samples=%d "
"value=%" PRIsensor_byte_data(is_near),
ch.chan_type, sensor_channel_name[ch.chan_type],
ch.chan_idx, accumulator_buffer.count,
PRIsensor_byte_data_arg(*data, 0, is_near));
break;
}
default: {
struct sensor_q31_data *data =
(struct sensor_q31_data *)decoded_buffer;
data->header.base_timestamp_ns =
accumulator_buffer.base_timestamp_ns;
data->header.reading_count = 1;
data->shift = accumulator_buffer.shift;
data->readings[0].timestamp_delta =
(uint32_t)(accumulator_buffer.timestamp_delta /
accumulator_buffer.count);
data->readings[0].value = (q31_t)(accumulator_buffer.values[0] /
accumulator_buffer.count);
shell_info(ctx->sh,
"channel type=%d(%s) index=%d shift=%d num_samples=%d "
"value=%" PRIsensor_q31_data,
ch.chan_type,
(ch.chan_type >= ARRAY_SIZE(sensor_channel_name))
? ""
: sensor_channel_name[ch.chan_type],
ch.chan_idx,
data->shift, accumulator_buffer.count,
PRIsensor_q31_data_arg(*data, 0));
}
}
++ch.chan_idx;
}
ch.chan_idx = 0;
}
}
static int cmd_get_sensor(const struct shell *sh, size_t argc, char *argv[])
{
static struct sensor_shell_processing_context ctx;
const struct device *dev;
int count = 0;
int err;
err = k_mutex_lock(&cmd_get_mutex, K_NO_WAIT);
if (err < 0) {
shell_error(sh, "Another sensor reading in progress");
return err;
}
dev = device_get_binding(argv[1]);
if (dev == NULL) {
shell_error(sh, "Device unknown (%s)", argv[1]);
k_mutex_unlock(&cmd_get_mutex);
return -ENODEV;
}
if (!device_is_sensor(dev)) {
shell_error(sh, "Device is not a sensor (%s)", argv[1]);
k_mutex_unlock(&cmd_get_mutex);
return -ENODEV;
}
if (argc == 2) {
/* read all channel types */
for (int i = 0; i < ARRAY_SIZE(iodev_sensor_shell_channels); ++i) {
if (SENSOR_CHANNEL_3_AXIS(i)) {
continue;
}
iodev_sensor_shell_channels[count++] = (struct sensor_chan_spec){i, 0};
}
} else {
/* read specific channels */
for (int i = 2; i < argc; ++i) {
int chan = parse_named_int(argv[i], sensor_channel_name,
ARRAY_SIZE(sensor_channel_name));
if (chan < 0) {
shell_error(sh, "Failed to read channel (%s)", argv[i]);
continue;
}
iodev_sensor_shell_channels[count++] =
(struct sensor_chan_spec){chan, 0};
}
}
if (count == 0) {
shell_error(sh, "No channels to read, bailing");
k_mutex_unlock(&cmd_get_mutex);
return -EINVAL;
}
iodev_sensor_shell_read_config.sensor = dev;
iodev_sensor_shell_read_config.count = count;
ctx.dev = dev;
ctx.sh = sh;
err = sensor_read_async_mempool(&iodev_sensor_shell_read, &sensor_read_rtio, &ctx);
if (err < 0) {
shell_error(sh, "Failed to read sensor: %d", err);
}
if (!IS_ENABLED(CONFIG_SENSOR_SHELL_STREAM)) {
/*
* Streaming enables a thread that polls the RTIO context, so if it's enabled, we
* don't need a blocking read here.
*/
sensor_processing_with_callback(&sensor_read_rtio,
sensor_shell_processing_callback);
}
k_mutex_unlock(&cmd_get_mutex);
return 0;
}
static int cmd_sensor_attr_set(const struct shell *shell_ptr, size_t argc, char *argv[])
{
const struct device *dev;
int rc;
dev = device_get_binding(argv[1]);
if (dev == NULL) {
shell_error(shell_ptr, "Device unknown (%s)", argv[1]);
return -ENODEV;
}
	if (!device_is_sensor(dev)) {
		shell_error(shell_ptr, "Device is not a sensor (%s)", argv[1]);
		return -ENODEV;
	}
for (size_t i = 2; i < argc; i += 3) {
int channel = parse_named_int(argv[i], sensor_channel_name,
ARRAY_SIZE(sensor_channel_name));
int attr = parse_named_int(argv[i + 1], sensor_attribute_name,
ARRAY_SIZE(sensor_attribute_name));
struct sensor_value value = {0};
if (channel < 0) {
shell_error(shell_ptr, "Channel '%s' unknown", argv[i]);
return -EINVAL;
}
if (attr < 0) {
shell_error(shell_ptr, "Attribute '%s' unknown", argv[i + 1]);
return -EINVAL;
}
if (parse_sensor_value(argv[i + 2], &value)) {
shell_error(shell_ptr, "Sensor value '%s' invalid", argv[i + 2]);
return -EINVAL;
}
rc = sensor_attr_set(dev, channel, attr, &value);
if (rc) {
shell_error(shell_ptr, "Failed to set channel(%s) attribute(%s): %d",
sensor_channel_name[channel], sensor_attribute_name[attr], rc);
continue;
}
shell_info(shell_ptr, "%s channel=%s, attr=%s set to value=%s", dev->name,
sensor_channel_name[channel], sensor_attribute_name[attr], argv[i + 2]);
}
return 0;
}
static void cmd_sensor_attr_get_handler(const struct shell *shell_ptr, const struct device *dev,
const char *channel_name, const char *attr_name,
bool print_missing_attribute)
{
int channel =
parse_named_int(channel_name, sensor_channel_name, ARRAY_SIZE(sensor_channel_name));
int attr = parse_named_int(attr_name, sensor_attribute_name,
ARRAY_SIZE(sensor_attribute_name));
struct sensor_value value = {0};
int rc;
if (channel < 0) {
shell_error(shell_ptr, "Channel '%s' unknown", channel_name);
return;
}
if (attr < 0) {
shell_error(shell_ptr, "Attribute '%s' unknown", attr_name);
return;
}
rc = sensor_attr_get(dev, channel, attr, &value);
if (rc != 0) {
if (rc == -EINVAL && !print_missing_attribute) {
return;
}
shell_error(shell_ptr, "Failed to get channel(%s) attribute(%s): %d",
sensor_channel_name[channel], sensor_attribute_name[attr], rc);
return;
}
shell_info(shell_ptr, "%s(channel=%s, attr=%s) value=%.6f", dev->name,
sensor_channel_name[channel], sensor_attribute_name[attr],
sensor_value_to_double(&value));
}
static int cmd_sensor_attr_get(const struct shell *shell_ptr, size_t argc, char *argv[])
{
const struct device *dev;
dev = device_get_binding(argv[1]);
if (dev == NULL) {
shell_error(shell_ptr, "Device unknown (%s)", argv[1]);
return -ENODEV;
}
if (!device_is_sensor(dev)) {
shell_error(shell_ptr, "Device is not a sensor (%s)", argv[1]);
return -ENODEV;
}
if (argc > 2) {
for (size_t i = 2; i < argc; i += 2) {
cmd_sensor_attr_get_handler(shell_ptr, dev, argv[i], argv[i + 1],
/*print_missing_attribute=*/true);
}
} else {
for (size_t channel_idx = 0; channel_idx < ARRAY_SIZE(sensor_channel_name);
++channel_idx) {
for (size_t attr_idx = 0; attr_idx < ARRAY_SIZE(sensor_attribute_name);
++attr_idx) {
cmd_sensor_attr_get_handler(shell_ptr, dev,
sensor_channel_name[channel_idx],
sensor_attribute_name[attr_idx],
/*print_missing_attribute=*/false);
}
}
}
return 0;
}
static void channel_name_get(size_t idx, struct shell_static_entry *entry);
SHELL_DYNAMIC_CMD_CREATE(dsub_channel_name, channel_name_get);
static void attribute_name_get(size_t idx, struct shell_static_entry *entry);
SHELL_DYNAMIC_CMD_CREATE(dsub_attribute_name, attribute_name_get);
static void channel_name_get(size_t idx, struct shell_static_entry *entry)
{
int cnt = 0;
entry->syntax = NULL;
entry->handler = NULL;
entry->help = NULL;
if (current_cmd_ctx == CTX_GET) {
entry->subcmd = &dsub_channel_name;
} else if (current_cmd_ctx == CTX_ATTR_GET_SET) {
entry->subcmd = &dsub_attribute_name;
} else {
entry->subcmd = NULL;
}
for (int i = 0; i < ARRAY_SIZE(sensor_channel_name); i++) {
if (sensor_channel_name[i] != NULL) {
if (cnt == idx) {
entry->syntax = sensor_channel_name[i];
break;
}
cnt++;
}
}
}
static void attribute_name_get(size_t idx, struct shell_static_entry *entry)
{
int cnt = 0;
entry->syntax = NULL;
entry->handler = NULL;
entry->help = NULL;
entry->subcmd = &dsub_channel_name;
for (int i = 0; i < ARRAY_SIZE(sensor_attribute_name); i++) {
if (sensor_attribute_name[i] != NULL) {
if (cnt == idx) {
entry->syntax = sensor_attribute_name[i];
break;
}
cnt++;
}
}
}
static void trigger_opt_get_for_stream(size_t idx, struct shell_static_entry *entry);
SHELL_DYNAMIC_CMD_CREATE(dsub_trigger_opt_get_for_stream, trigger_opt_get_for_stream);
static void trigger_opt_get_for_stream(size_t idx, struct shell_static_entry *entry)
{
entry->syntax = NULL;
entry->handler = NULL;
entry->help = NULL;
entry->subcmd = NULL;
switch (idx) {
case SENSOR_STREAM_DATA_INCLUDE:
entry->syntax = "incl";
break;
case SENSOR_STREAM_DATA_DROP:
entry->syntax = "drop";
break;
case SENSOR_STREAM_DATA_NOP:
entry->syntax = "nop";
break;
}
}
static void trigger_name_get_for_stream(size_t idx, struct shell_static_entry *entry);
SHELL_DYNAMIC_CMD_CREATE(dsub_trigger_name_for_stream, trigger_name_get_for_stream);
static void trigger_name_get_for_stream(size_t idx, struct shell_static_entry *entry)
{
int cnt = 0;
entry->syntax = NULL;
entry->handler = NULL;
entry->help = NULL;
entry->subcmd = &dsub_trigger_opt_get_for_stream;
for (int i = 0; i < ARRAY_SIZE(sensor_trigger_table); i++) {
if (sensor_trigger_table[i].name != NULL) {
if (cnt == idx) {
entry->syntax = sensor_trigger_table[i].name;
break;
}
cnt++;
}
}
}
static void stream_on_off(size_t idx, struct shell_static_entry *entry)
{
entry->syntax = NULL;
entry->handler = NULL;
entry->help = NULL;
if (idx == 0) {
entry->syntax = "on";
entry->subcmd = &dsub_trigger_name_for_stream;
} else if (idx == 1) {
entry->syntax = "off";
entry->subcmd = NULL;
}
}
SHELL_DYNAMIC_CMD_CREATE(dsub_stream_on_off, stream_on_off);
static void device_name_get(size_t idx, struct shell_static_entry *entry);
SHELL_DYNAMIC_CMD_CREATE(dsub_device_name, device_name_get);
static void device_name_get(size_t idx, struct shell_static_entry *entry)
{
const struct device *dev = shell_device_lookup(idx, NULL);
current_cmd_ctx = CTX_GET;
entry->syntax = (dev != NULL) ? dev->name : NULL;
entry->handler = NULL;
entry->help = NULL;
entry->subcmd = &dsub_channel_name;
}
static void device_name_get_for_attr(size_t idx, struct shell_static_entry *entry)
{
const struct device *dev = shell_device_lookup(idx, NULL);
current_cmd_ctx = CTX_ATTR_GET_SET;
entry->syntax = (dev != NULL) ? dev->name : NULL;
entry->handler = NULL;
entry->help = NULL;
entry->subcmd = &dsub_channel_name;
}
SHELL_DYNAMIC_CMD_CREATE(dsub_device_name_for_attr, device_name_get_for_attr);
static void trigger_name_get(size_t idx, struct shell_static_entry *entry)
{
int cnt = 0;
entry->syntax = NULL;
entry->handler = NULL;
entry->help = NULL;
entry->subcmd = NULL;
for (int i = 0; i < ARRAY_SIZE(sensor_trigger_table); i++) {
if (sensor_trigger_table[i].name != NULL) {
if (cnt == idx) {
entry->syntax = sensor_trigger_table[i].name;
break;
}
cnt++;
}
}
}
SHELL_DYNAMIC_CMD_CREATE(dsub_trigger_name, trigger_name_get);
static void trigger_on_off_get(size_t idx, struct shell_static_entry *entry)
{
entry->handler = NULL;
entry->help = NULL;
entry->subcmd = &dsub_trigger_name;
switch (idx) {
case 0:
entry->syntax = "on";
break;
case 1:
entry->syntax = "off";
break;
default:
entry->syntax = NULL;
break;
}
}
SHELL_DYNAMIC_CMD_CREATE(dsub_trigger_onoff, trigger_on_off_get);
static void device_name_get_for_trigger(size_t idx, struct shell_static_entry *entry)
{
const struct device *dev = shell_device_lookup(idx, NULL);
entry->syntax = (dev != NULL) ? dev->name : NULL;
entry->handler = NULL;
entry->help = NULL;
entry->subcmd = &dsub_trigger_onoff;
}
SHELL_DYNAMIC_CMD_CREATE(dsub_trigger, device_name_get_for_trigger);
static void device_name_get_for_stream(size_t idx, struct shell_static_entry *entry)
{
const struct device *dev = shell_device_lookup(idx, NULL);
current_cmd_ctx = CTX_STREAM_ON_OFF;
entry->syntax = (dev != NULL) ? dev->name : NULL;
entry->handler = NULL;
entry->help = NULL;
entry->subcmd = &dsub_stream_on_off;
}
SHELL_DYNAMIC_CMD_CREATE(dsub_device_name_for_stream, device_name_get_for_stream);
static int cmd_get_sensor_info(const struct shell *sh, size_t argc, char **argv)
{
ARG_UNUSED(argc);
ARG_UNUSED(argv);
#ifdef CONFIG_SENSOR_INFO
const char *null_str = "(null)";
STRUCT_SECTION_FOREACH(sensor_info, sensor) {
shell_print(sh,
"device name: %s, vendor: %s, model: %s, "
"friendly name: %s",
sensor->dev->name, sensor->vendor ? sensor->vendor : null_str,
sensor->model ? sensor->model : null_str,
sensor->friendly_name ? sensor->friendly_name : null_str);
}
return 0;
#else
return -EINVAL;
#endif
}
static void data_ready_trigger_handler(const struct device *sensor,
const struct sensor_trigger *trigger)
{
const int64_t now = k_uptime_get();
struct sensor_value value;
int sensor_idx = find_sensor_trigger_device(sensor);
struct sample_stats *stats;
int sensor_name_len_before_at;
const char *sensor_name;
if (sensor_idx < 0) {
LOG_ERR("Unable to find sensor trigger device");
return;
}
stats = sensor_stats[sensor_idx];
sensor_name = sensor_trigger_devices[sensor_idx]->name;
if (sensor_name) {
const char *at = strchr(sensor_name, '@');

/* Guard against names without an '@': print the whole name. */
sensor_name_len_before_at =
(at != NULL) ? (int)(at - sensor_name) : (int)strlen(sensor_name);
} else {
sensor_name_len_before_at = 0;
}
if (sensor_sample_fetch(sensor)) {
LOG_ERR("Failed to fetch samples on data ready handler");
}
for (int i = 0; i < SENSOR_CHAN_ALL; ++i) {
int rc;
/* Skip disabled channels */
if (stats[i].state == SAMPLE_STATS_STATE_DISABLED) {
continue;
}
/* Skip 3 axis channels */
if (SENSOR_CHANNEL_3_AXIS(i)) {
continue;
}
rc = sensor_channel_get(sensor, i, &value);
if (stats[i].state == SAMPLE_STATS_STATE_UNINITIALIZED) {
if (rc == -ENOTSUP) {
/*
* Stop reading this channel if the driver told us
* it's not supported.
*/
stats[i].state = SAMPLE_STATS_STATE_DISABLED;
} else if (rc == 0) {
stats[i].state = SAMPLE_STATS_STATE_ENABLED;
}
}
if (rc != 0) {
/* Skip on any error. */
continue;
}
/* Do something with the data */
stats[i].accumulator += value.val1 * INT64_C(1000000) + value.val2;
if (stats[i].count++ == 0) {
stats[i].sample_window_start = now;
} else if (now > stats[i].sample_window_start +
CONFIG_SENSOR_SHELL_TRIG_PRINT_TIMEOUT_MS) {
int64_t micro_value = stats[i].accumulator / stats[i].count;
value.val1 = micro_value / 1000000;
value.val2 = (int32_t)llabs(micro_value - (value.val1 * 1000000));
LOG_INF("sensor=%.*s, chan=%s, num_samples=%u, data=%d.%06d",
sensor_name_len_before_at, sensor_name,
sensor_channel_name[i],
stats[i].count,
value.val1, value.val2);
stats[i].accumulator = 0;
stats[i].count = 0;
}
}
}
static int cmd_trig_sensor(const struct shell *sh, size_t argc, char **argv)
{
const struct device *dev;
int trigger;
bool trigger_enabled = false;
int err;
if (argc < 4) {
shell_error(sh, "Wrong number of args");
return -EINVAL;
}
/* Parse device name */
dev = device_get_binding(argv[1]);
if (dev == NULL) {
shell_error(sh, "Device unknown (%s)", argv[1]);
return -ENODEV;
}
/* Map the trigger string to an enum value */
trigger = sensor_trigger_name_lookup(argv[3]);
if (trigger < 0 || sensor_trigger_table[trigger].handler == NULL) {
shell_error(sh, "Unsupported trigger type (%s)", argv[3]);
return -ENOTSUP;
}
/* Parse on/off */
if (strcmp(argv[2], "on") == 0) {
/* find a free entry in sensor_trigger_devices[] */
int sensor_idx = find_sensor_trigger_device(NULL);
if (sensor_idx < 0) {
shell_error(sh, "Unable to support more simultaneous sensor trigger"
" devices");
err = -ENOTSUP;
} else {
struct sample_stats *stats = sensor_stats[sensor_idx];
sensor_trigger_devices[sensor_idx] = dev;
/* reset stats state to UNINITIALIZED */
for (unsigned int ch = 0; ch < SENSOR_CHAN_ALL; ch++) {
stats[ch].state = SAMPLE_STATS_STATE_UNINITIALIZED;
}
err = sensor_trigger_set(dev, &sensor_trigger_table[trigger].trigger,
sensor_trigger_table[trigger].handler);
trigger_enabled = true;
}
} else if (strcmp(argv[2], "off") == 0) {
/* Clear the handler for the given trigger on this device */
err = sensor_trigger_set(dev, &sensor_trigger_table[trigger].trigger, NULL);
if (!err) {
/* find entry in sensor_trigger_devices[] and free it */
int sensor_idx = find_sensor_trigger_device(dev);
if (sensor_idx < 0) {
shell_error(sh, "Unable to find sensor device in trigger array");
} else {
sensor_trigger_devices[sensor_idx] = NULL;
}
}
} else {
shell_error(sh, "Pass 'on' or 'off' to enable/disable trigger");
return -EINVAL;
}
if (err) {
shell_error(sh, "Error while setting trigger %d on device %s (%d)", trigger,
argv[1], err);
} else {
shell_info(sh, "%s trigger idx=%d %s on device %s",
trigger_enabled ? "Enabled" : "Disabled", trigger,
sensor_trigger_table[trigger].name, argv[1]);
}
return err;
}
/* clang-format off */
SHELL_STATIC_SUBCMD_SET_CREATE(sub_sensor,
SHELL_CMD_ARG(get, &dsub_device_name, SENSOR_GET_HELP, cmd_get_sensor,
2, 255),
SHELL_CMD_ARG(attr_set, &dsub_device_name_for_attr, SENSOR_ATTR_SET_HELP,
cmd_sensor_attr_set, 2, 255),
SHELL_CMD_ARG(attr_get, &dsub_device_name_for_attr, SENSOR_ATTR_GET_HELP,
cmd_sensor_attr_get, 2, 255),
SHELL_COND_CMD(CONFIG_SENSOR_SHELL_STREAM, stream, &dsub_device_name_for_stream,
SENSOR_STREAM_HELP, cmd_sensor_stream),
SHELL_COND_CMD(CONFIG_SENSOR_INFO, info, NULL, SENSOR_INFO_HELP,
cmd_get_sensor_info),
SHELL_CMD_ARG(trig, &dsub_trigger, SENSOR_TRIG_HELP, cmd_trig_sensor,
2, 255),
SHELL_SUBCMD_SET_END
);
/* clang-format on */
SHELL_CMD_REGISTER(sensor, &sub_sensor, "Sensor commands", NULL);
```
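The statistics path in `data_ready_trigger_handler` accumulates each sample as `val1 * 1000000 + val2` and later splits the mean back into the two fixed-point parts of Zephyr's `sensor_value` (integer part plus microparts). A standalone sketch of that conversion in plain C++ (the struct and function names here are illustrative, not Zephyr APIs):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// Illustrative stand-in for Zephyr's sensor_value: val1 holds the integer
// part, val2 the fractional part in millionths.
struct SensorValue { int32_t val1; int32_t val2; };

// Average samples by accumulating in micro-units, then splitting the mean
// back into integer and fractional parts, as the trigger handler does.
SensorValue average(const SensorValue *samples, int count) {
    int64_t acc = 0;
    for (int i = 0; i < count; ++i) {
        acc += samples[i].val1 * INT64_C(1000000) + samples[i].val2;
    }
    int64_t micro = acc / count;
    SensorValue out;
    out.val1 = static_cast<int32_t>(micro / 1000000);
    out.val2 = static_cast<int32_t>(llabs(micro - out.val1 * INT64_C(1000000)));
    return out;
}
```

Accumulating in 64-bit micro-units avoids losing the fractional part across samples and only divides once, when the window is reported.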
|
```cpp
// -*- C++ -*-
//===your_sha256_hash------===//
//
// See path_to_url for license information.
//
//===your_sha256_hash------===//
#ifndef _LIBCPP___ITERATOR_ITERATOR_H
#define _LIBCPP___ITERATOR_ITERATOR_H
#include <__config>
#include <cstddef>
#if !defined(_LIBCPP_HAS_NO_PRAGMA_SYSTEM_HEADER)
# pragma GCC system_header
#endif
_LIBCPP_BEGIN_NAMESPACE_STD
template<class _Category, class _Tp, class _Distance = ptrdiff_t,
class _Pointer = _Tp*, class _Reference = _Tp&>
struct _LIBCPP_TEMPLATE_VIS _LIBCPP_DEPRECATED_IN_CXX17 iterator
{
typedef _Tp value_type;
typedef _Distance difference_type;
typedef _Pointer pointer;
typedef _Reference reference;
typedef _Category iterator_category;
};
_LIBCPP_END_NAMESPACE_STD
#endif // _LIBCPP___ITERATOR_ITERATOR_H
```
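The `std::iterator` base class above exists purely to inject the five member typedefs (`value_type`, `difference_type`, `pointer`, `reference`, `iterator_category`) into a derived iterator. A minimal sketch of how client code used it (the `Counter` type is illustrative; the base is deprecated since C++17, so new code should declare the typedefs directly):

```cpp
#include <cassert>
#include <iterator>

// A trivial forward iterator that counts upward. Deriving from std::iterator
// supplies the member typedefs; this pattern is deprecated in C++17 but
// still compiles (with a deprecation warning).
struct Counter : std::iterator<std::forward_iterator_tag, int> {
    int v = 0;
    int operator*() const { return v; }
    Counter &operator++() { ++v; return *this; }
    bool operator==(const Counter &o) const { return v == o.v; }
    bool operator!=(const Counter &o) const { return v != o.v; }
};
```

Because the typedefs come from the base, `std::iterator_traits<Counter>::value_type` resolves to `int` without any boilerplate in `Counter` itself.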
|
Bismuth vanadate is the inorganic compound with the formula BiVO4. It is a bright yellow solid that is widely studied as a visible-light photocatalyst, with a narrow band gap of less than 2.4 eV. It is a representative of "complex inorganic colored pigments", or CICPs; more specifically, bismuth vanadate is a mixed-metal oxide. It is also known under the Colour Index International as C.I. Pigment Yellow 184. It occurs naturally as the rare minerals pucherite, clinobisvanite, and dreyerite.
History and uses
Bismuth vanadate is a bright yellow powder that may have a slight green tint. When used as a pigment, it has high chroma and excellent hiding power. In nature, bismuth vanadate can be found as the minerals pucherite, clinobisvanite, and dreyerite, depending on the particular polymorph formed. Its synthesis was first recorded in a pharmaceutical patent in 1924, and it came into common use as a pigment in the mid-1980s. Today it is manufactured across the world for pigment use.
Properties
Most commercial bismuth vanadate pigments are based on the monoclinic (clinobisvanite) and tetragonal (dreyerite) structures, though in the past, two-phase systems with a 4:3 ratio of bismuth vanadate to bismuth molybdate (Bi2MoO6) were used.
As a photocatalyst
BiVO4 has received much attention as a photocatalyst for water splitting and for remediation.
In the monoclinic phase, BiVO4 is an n-type photoactive semiconductor with a bandgap of 2.4 eV, which has been investigated for water splitting after doping with W and Mo. BiVO4 photoanodes have demonstrated record solar-to-hydrogen (STH) conversion efficiencies of 5.2% for flat films and 8.2% for WO3@BiVO4 core-shell nanorods (the highest for a metal-oxide photoelectrode), with the advantage of being a very simple and cheap material.
Production
While most CICPs are formed exclusively through high temperature calcination, bismuth vanadate can be formed from a series of pH controlled precipitation reactions. These reactions can be carried out with or without the presence of molybdenum depending on the desired final phase. It is also possible to start with the parent oxides (Bi2O3 and V2O5) and perform a high temperature calcination to achieve a pure product.
References
Vanadates
Bismuth compounds
Inorganic pigments
|
```javascript
/**
* @license Apache-2.0
*
*
*
* path_to_url
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*/
'use strict';
// MODULES //
var request = require( './request.js' );
// MAIN //
/**
* Returns a mock `http` module.
*
* @private
* @param {(Error|null)} err - error object
* @param {NonNegativeInteger} [statusCode=201] - status code
* @returns {Object} mock `http` module
*/
function http( err, statusCode ) {
var obj = {};
obj.request = request( obj, err, statusCode );
return obj;
}
// EXPORTS //
module.exports = http;
```
|
The Story of Cinderella is an Italian-Japanese 1996 anime television series based on the fairy tale of the same name by Charles Perrault and the Brothers Grimm. It was produced by Tatsunoko Productions and Mondo TV. The series originally aired from April 4 to October 3, 1996, comprising 26 episodes.
Summary
The Story of Cinderella opens as Cinderella's life changes for the worse when her widower father leaves on a business trip. No sooner is he out of sight than Cinderella's stepmother unceremoniously moves her own two daughters into Cinderella's room, throws out her things, hands her a servant's dress, and puts her to harsh menial labor. The series covers Cinderella's trials and tribulations as she tries to adapt to her new life while suffering the abuse of her stepmother and two stepsisters. All the while, her fairy godmother, Paulette, subtly watches over her and tries to influence events to fix Cinderella's life without her noticing. One of her first acts is to grant several of the household animals the power of speech, giving Cinderella companions in her dog Patch, a pair of mice named Chuchu and Bingo, and a bird named Pappy, who provide her company as well as help with her chores. The biggest twist in the series is that Cinderella meets her Prince Charming early; here he is the roguish Prince Charles, who has a habit of sneaking out of the castle and meets Cinderella by accident while disguised as a commoner. The two have a few misunderstandings before becoming friends and begin having adventures together. Meanwhile, the villainous Duke Zaral plots against the royal family throughout the story, at times working Cinderella into his machinations. The series eventually culminates in the ball with which the fairy tale ends, but with its own unique twist.
Plot
Cinderella is the only daughter of a rich, widowed duke. Her mother died when she was young, leaving behind only a few keepsakes for Cinderella to remember her by. The duke has remarried, giving Cinderella a new stepmother and two stepsisters.
The story begins when Cinderella’s father leaves on a long business trip. No sooner has he departed, however, than her stepfamily forces her to move to the attic and puts her to work doing all the household chores. Paulette, Cinderella’s fairy godmother, secretly observes this change in circumstances and uses her magic to make Cinderella’s life easier by giving four animals the power of speech: her dog, Patch, two mice, Chuchu and Bingo, and a bird, Pappy. These animals help Cinderella and keep an eye on her well-being for Paulette throughout the series.
One day, Cinderella sneaks into town and meets a pageboy who claims he serves the prince of the Emerald Castle. Cinderella quickly realizes he’s lying and dubs him a fibber. Unknown to her, however, the boy is Prince Charles himself in disguise; he sneaks into town using his page’s identity because he finds his lessons and princely duties boring. Cinderella’s stepmother tries to exploit Cinderella’s connection with the prince’s page in order to marry one of her own daughters to the prince. Though the ploy fails, thanks to her meddling Cinderella eventually forgives Charles for lying to her. The two form a friendship that slowly begins to deepen into romance. While Cinderella grows more used to life as a servant, Charles begins to appreciate the importance of his own duties after witnessing her struggles firsthand.
Meanwhile, Duke Zaral is also trying to marry his daughter off to Prince Charles. Isabel is initially infatuated with Charles, but eventually realizes he does not love her and instead chooses to elope with a childhood friend.
As Cinderella and Charles go on more adventures together, they stumble across a plot to overthrow the King and Queen. Charles eventually discovers that Zaral is responsible. He succeeds in stopping the coup with Cinderella’s help, but reveals his true identity to her in the process. Cinderella, heartbroken, ends their friendship, assuming that the prince would never marry a servant girl, especially after she repeatedly called him a liar.
In the aftermath of Zaral’s coup, Charles’s parents decide he is ready to take the throne and throw a ball in his honor, with every girl in the kingdom invited. Cinderella decides to go in order to properly say goodbye, choosing to wear her mother’s dress for the occasion. However, her stepfamily mocks the outfit for being out of style and destroy her invitation before leaving without her. Paulette appears and reveals herself as Cinderella’s fairy godmother. Using her magic, she fixes Cinderella’s dress and invitation, as well as conjuring a carriage for her. However, she warns that the magic will only last until midnight, so Cinderella needs to leave before then.
At the ball, no one recognizes Cinderella. Charles, attracted to Cinderella because he finds her familiar, spends the whole evening with her. Cinderella loses track of time and is forced to rush out just before midnight, losing a shoe in the process.
Soon after the ball, Charles’s page, Alex, begins taking the lost shoe house to house in search of the woman from the ball. Cinderella initially refuses to try the shoe on, but agrees after her animal friends reveal that she has the second shoe. She is brought back to the palace, and she and Charles become engaged.
On the day of the wedding, Zaral gives Charles poison and kidnaps Cinderella. He drags her to the top of the castle clock tower and attempts to bargain her life for the kingdom. He is interrupted by Charles, who faked his poisoning, and the two have a fierce battle which ends with Zaral falling from the tower to his death.
With peace restored, Cinderella and Charles finally marry and live happily ever after.
Cast
Maria Kawamura as Cinderella
Masami Kikuchi as Prince Charles
Toshiko Sawada as Cinderella's stepmother
Keiko Konno as Catherine, Cinderella's first stepsister
Akiko Matsukuma as Jeanne, Cinderella's second stepsister
Yuuko Mita as Paulette, the Fairy Godmother
Ken Narita as Alex, Charles' best friend
Tomohiro Tsuboi as Bingo, the male mouse
Yayoi Nakazawa as Chuchu, the female mouse
Tsutomu Tsuji as Wanda/Patch, the dog
Aki Matsushita as Pappy, the bird
Tamao Hayashi as Misha, the cat
Akemi Okamura as Isabelle, Charles' ex-fiancée
Yutaka Nakano as The King, Charles' father
Atsuko Yuya as The Queen, Charles' mother
Kazuhiro Nakata as Zaral/Zarel, Isabelle's father
Themes
Soundtrack
Cinderella (Original Soundtrack) consists of 26 tracks that were used as the background music of the TV series. The whole album was composed by John Sposito, and the lyrics were written by Paola Granatelli. The theme song from the album, titled "Cenerentola", was sung by Erica Gaura.
Track listing
Episodes
References
1996 anime television series debuts
NHK original programming
Shōjo manga
Tatsunoko Production
Works based on Cinderella
Anime and manga based on fairy tales
Italian children's animated adventure television series
Italian children's animated fantasy television series
Japanese children's animated adventure television series
Japanese children's animated fantasy television series
|
Neoxaline is a bioactive alkaloid isolated from Aspergillus japonicus. It is an antimitotic agent and shows weak inhibitory activity against blood platelet aggregation, as well as stimulation of the central nervous system. It has been synthesized through the "highly stereoselective introduction of a reverse prenyl group to create a quaternary carbon stereocenter using (−)-3a-hydroxyfuroindoline as a building block, construction of the indoline spiroaminal via cautious stepwise oxidations with cyclizations from the indoline, assembly of (Z)-dehydrohistidine, and photoisomerization of unnatural (Z)-neoxaline to the natural (E)-neoxaline."
See also
Satoshi Ōmura
References
Piperidine alkaloids
Pyrrolidine alkaloids
Mitotic inhibitors
|
The Mathew Mound (designated 33-HA-122) is a Native American mound in the southwestern part of the U.S. state of Ohio. Located off Oak Road near the village of Evendale, the mound is believed to have been built by members of the Adena or Hopewell peoples during the Woodland period.
For many years, local legend held that the mound was the burial site of a historic Native American known as "Opekasit"; the legend lent itself to the milk-processing operation run by the mound's owners, which was known for many years as the Opekasit Dairy. The mound is high with a diameter of slightly more than . Having never been excavated, it is a prime example of a burial mound from the Woodland period. Based on the excavation of similar mounds, it is expected that the Mathew Mound contains evidence of funerary practices such as grave goods and the remains of buried bodies. It is one of two archaeological sites in the vicinity: the other, a small campsite designated 33-HA-125, is a small fraction of a mile west of the mound, but excavation yielded nothing significant. Because the mound is a potentially valuable archaeological site, it was listed on the National Register of Historic Places in 1975.
References
Archaeological sites in Hamilton County, Ohio
National Register of Historic Places in Hamilton County, Ohio
Archaeological sites on the National Register of Historic Places in Ohio
Woodland period
Mounds in Ohio
|
```c++
#include <vespa/vespalib/testkit/test_kit.h>
#include <vespa/vespalib/process/process.h>
#include <vespa/vespalib/util/stringfmt.h>
#include <vespa/searchcommon/common/schema.h>
#include <vespa/searchlib/fef/indexproperties.h>
#include <vespa/searchlib/fef/onnx_model.h>
#include <cstdarg>
#include <cstdio>
#include <cstdlib>
#include <initializer_list>
#include <map>
#include <set>
#include <string>
#include <vector>
const char *prog = "../../../apps/verify_ranksetup/vespa-verify-ranksetup-bin";
const std::string gen_dir("generated");
const char *valid_feature = "value(0)";
const char *invalid_feature = "invalid_feature_name and format";
using namespace search::fef::indexproperties;
using namespace search::index;
using search::fef::OnnxModel;
using search::index::schema::CollectionType;
using search::index::schema::DataType;
using vespalib::make_string_short::fmt;
enum class SearchMode { INDEXED, STREAMING, BOTH };
struct Writer {
FILE *file;
explicit Writer(const std::string &file_name) {
file = fopen(file_name.c_str(), "w");
ASSERT_TRUE(file != nullptr);
}
void fmt(const char *format, ...) const __attribute__((format(printf,2,3)))
{
va_list ap;
va_start(ap, format);
vfprintf(file, format, ap);
va_end(ap);
}
~Writer() { fclose(file); }
};
void verify_dir() {
std::string pwd(getenv("PWD"));
ASSERT_NOT_EQUAL(pwd.find("searchcore/src/tests/proton/verify_ranksetup"), pwd.npos);
}
//your_sha256_hash-------------
struct Attribute {
std::string dataType;
std::string collectionType;
std::string imported;
Attribute(const std::string &dataType_,
const std::string &collectionType_,
const std::string &imported_)
: dataType(dataType_), collectionType(collectionType_), imported(imported_)
{}
~Attribute();
};
Attribute::~Attribute() = default;
struct Setup {
std::map<std::string,std::pair<std::string,std::string> > indexes;
std::map<std::string,Attribute> attributes;
std::map<std::string,std::string> properties;
std::map<std::string,std::string> constants;
std::vector<bool> extra_profiles;
std::map<std::string,std::string> ranking_expressions;
std::map<std::string,OnnxModel> onnx_models;
Setup();
~Setup();
void add_onnx_model(OnnxModel model) {
onnx_models.insert_or_assign(model.name(), std::move(model));
}
void index(const std::string &name, schema::DataType data_type,
schema::CollectionType collection_type)
{
indexes[name].first = schema::getTypeName(data_type);
indexes[name].second = schema::getTypeName(collection_type);
}
void attribute(const std::string &name, schema::DataType data_type,
schema::CollectionType collection_type, bool imported = false)
{
attributes.emplace(name, Attribute(schema::getTypeName(data_type),
schema::getTypeName(collection_type),
(imported ? "true" : "false")));
}
void property(const std::string &name, const std::string &val) {
properties[name] = val;
}
void query_feature_type(const std::string &name, const std::string &type) {
property(fmt("vespa.type.query.%s", name.c_str()), type);
}
void query_feature_default_value(const std::string &name, const std::string &expr) {
property(fmt("query(%s)", name.c_str()), expr);
}
void rank_expr(const std::string &name, const std::string &expr) {
property(fmt("rankingExpression(%s).rankingScript", name.c_str()), expr);
}
void ext_rank_expr(const std::string &name, const std::string &file) {
auto expr_name = fmt("my_expr_%s", name.c_str());
property(fmt("rankingExpression(%s).expressionName", name.c_str()), expr_name);
ranking_expressions.insert_or_assign(expr_name, TEST_PATH(file));
}
void first_phase(const std::string &feature) {
property(rank::FirstPhase::NAME, feature);
}
void second_phase(const std::string &feature) {
property(rank::SecondPhase::NAME, feature);
}
void match_feature(const std::string &feature) {
property(match::Feature::NAME, feature);
}
void summary_feature(const std::string &feature) {
property(summary::Feature::NAME, feature);
}
void dump_feature(const std::string &feature) {
property(dump::Feature::NAME, feature);
}
void good_profile() {
extra_profiles.push_back(true);
}
void bad_profile() {
extra_profiles.push_back(false);
}
void write_attributes(const Writer &out) {
out.fmt("attribute[%zu]\n", attributes.size());
auto pos = attributes.begin();
for (size_t i = 0; pos != attributes.end(); ++pos, ++i) {
out.fmt("attribute[%zu].name \"%s\"\n", i, pos->first.c_str());
out.fmt("attribute[%zu].datatype %s\n", i, pos->second.dataType.c_str());
out.fmt("attribute[%zu].collectiontype %s\n", i, pos->second.collectionType.c_str());
out.fmt("attribute[%zu].imported %s\n", i, pos->second.imported.c_str());
}
}
void write_indexschema(const Writer &out) {
out.fmt("indexfield[%zu]\n", indexes.size());
auto pos = indexes.begin();
for (size_t i = 0; pos != indexes.end(); ++pos, ++i) {
out.fmt("indexfield[%zu].name \"%s\"\n", i, pos->first.c_str());
out.fmt("indexfield[%zu].datatype %s\n", i, pos->second.first.c_str());
out.fmt("indexfield[%zu].collectiontype %s\n", i, pos->second.second.c_str());
}
}
void write_vsmfield(const Writer &out, size_t idx, std::string name, std::string dataType) {
out.fmt("fieldspec[%zu].name \"%s\"\n", idx, name.c_str());
if (dataType == "STRING") {
out.fmt("fieldspec[%zu].searchmethod AUTOUTF8\n", idx);
out.fmt("fieldspec[%zu].normalize LOWERCASE\n", idx);
} else {
out.fmt("fieldspec[%zu].searchmethod %s\n", idx, dataType.c_str());
}
}
void write_vsmfields(const Writer &out) {
std::set<std::string> allFields;
size_t i = 0;
for (const auto & field : indexes) {
write_vsmfield(out, i, field.first, field.second.first);
out.fmt("fieldspec[%zu].fieldtype INDEX\n", i);
i++;
allFields.insert(field.first);
}
for (const auto & field : attributes) {
if (allFields.count(field.first) != 0) continue;
write_vsmfield(out, i, field.first, field.second.dataType);
out.fmt("fieldspec[%zu].fieldtype ATTRIBUTE\n", i);
i++;
allFields.insert(field.first);
}
out.fmt("documenttype[0].name \"foobar\"\n");
size_t j = 0;
for (const auto & field : allFields) {
out.fmt("documenttype[0].index[%zu].name \"%s\"\n", j, field.c_str());
out.fmt("documenttype[0].index[%zu].field[0].name \"%s\"\n", j, field.c_str());
j++;
}
}
void write_rank_profiles(const Writer &out) {
out.fmt("rankprofile[%zu]\n", extra_profiles.size() + 1);
out.fmt("rankprofile[0].name \"default\"\n");
out.fmt("rankprofile[0].fef.property[%zu]\n", properties.size());
auto pos = properties.begin();
for (size_t i = 0; pos != properties.end(); ++pos, ++i) {
out.fmt("rankprofile[0].fef.property[%zu].name \"%s\"\n", i, pos->first.c_str());
out.fmt("rankprofile[0].fef.property[%zu].value \"%s\"\n", i, pos->second.c_str());
}
for (size_t i = 1; i < (extra_profiles.size() + 1); ++i) {
out.fmt("rankprofile[%zu].name \"extra_%zu\"\n", i, i);
out.fmt("rankprofile[%zu].fef.property[%zu].name \"%s\"\n", i, i, rank::FirstPhase::NAME.c_str());
out.fmt("rankprofile[%zu].fef.property[%zu].value \"%s\"\n", i, i, extra_profiles[i-1]?valid_feature:invalid_feature);
}
}
void write_ranking_constants(const Writer &out) {
size_t idx = 0;
for (const auto &entry: constants) {
out.fmt("constant[%zu].name \"%s\"\n", idx, entry.first.c_str());
out.fmt("constant[%zu].fileref \"12345\"\n", idx);
out.fmt("constant[%zu].type \"%s\"\n", idx, entry.second.c_str());
++idx;
}
}
void write_ranking_expressions(const Writer &out) {
size_t idx = 0;
for (const auto &entry: ranking_expressions) {
out.fmt("expression[%zu].name \"%s\"\n", idx, entry.first.c_str());
out.fmt("expression[%zu].fileref \"expr_ref_%zu\"\n", idx, idx);
++idx;
}
}
void write_onnx_models(const Writer &out) {
size_t idx = 0;
for (const auto &entry: onnx_models) {
out.fmt("model[%zu].name \"%s\"\n", idx, entry.second.name().c_str());
out.fmt("model[%zu].fileref \"onnx_ref_%zu\"\n", idx, idx);
size_t idx2 = 0;
for (const auto &input: entry.second.inspect_input_features()) {
out.fmt("model[%zu].input[%zu].name \"%s\"\n", idx, idx2, input.first.c_str());
out.fmt("model[%zu].input[%zu].source \"%s\"\n", idx, idx2, input.second.c_str());
++idx2;
}
idx2 = 0;
for (const auto &output: entry.second.inspect_output_names()) {
out.fmt("model[%zu].output[%zu].name \"%s\"\n", idx, idx2, output.first.c_str());
out.fmt("model[%zu].output[%zu].as \"%s\"\n", idx, idx2, output.second.c_str());
++idx2;
}
out.fmt("model[%zu].dry_run_on_setup %s\n", idx, entry.second.dry_run_on_setup() ? "true" : "false");
++idx;
}
}
void write_self_cfg(const Writer &out) {
size_t idx = 0;
for (const auto &entry: ranking_expressions) {
out.fmt("file[%zu].ref \"expr_ref_%zu\"\n", idx, idx);
out.fmt("file[%zu].path \"%s\"\n", idx, entry.second.c_str());
++idx;
}
idx = 0;
for (const auto &entry: onnx_models) {
out.fmt("file[%zu].ref \"onnx_ref_%zu\"\n", idx, idx);
out.fmt("file[%zu].path \"%s\"\n", idx, entry.second.file_path().c_str());
++idx;
}
}
void generate() {
write_attributes(Writer(gen_dir + "/attributes.cfg"));
write_indexschema(Writer(gen_dir + "/indexschema.cfg"));
write_vsmfields(Writer(gen_dir + "/vsmfields.cfg"));
write_rank_profiles(Writer(gen_dir + "/rank-profiles.cfg"));
write_ranking_constants(Writer(gen_dir + "/ranking-constants.cfg"));
write_ranking_expressions(Writer(gen_dir + "/ranking-expressions.cfg"));
write_onnx_models(Writer(gen_dir + "/onnx-models.cfg"));
write_self_cfg(Writer(gen_dir + "/verify-ranksetup.cfg"));
}
bool verify(SearchMode mode = SearchMode::BOTH) {
if (mode == SearchMode::BOTH) {
bool res_indexed = verify_mode(SearchMode::INDEXED);
bool res_streaming = verify_mode(SearchMode::STREAMING);
EXPECT_EQUAL(res_indexed, res_streaming);
return res_indexed;
} else {
return verify_mode(mode);
}
}
bool verify_mode(SearchMode mode) {
generate();
vespalib::Process process(fmt("%s dir:%s%s", prog, gen_dir.c_str(),
(mode == SearchMode::STREAMING ? " -S" : "")),
true);
for (auto line = process.read_line(); !line.empty(); line = process.read_line()) {
fprintf(stderr, "> %s\n", line.c_str());
}
return (process.join() == 0);
}
void verify_valid(std::initializer_list<std::string> features, SearchMode mode = SearchMode::BOTH) {
for (const std::string &f: features) {
first_phase(f);
if (!EXPECT_TRUE(verify(mode))) {
fprintf(stderr, "--> feature '%s' was invalid (should be valid)\n", f.c_str());
}
}
}
void verify_invalid(std::initializer_list<std::string> features, SearchMode mode = SearchMode::BOTH) {
for (const std::string &f: features) {
first_phase(f);
if (!EXPECT_TRUE(!verify(mode))) {
fprintf(stderr, "--> feature '%s' was valid (should be invalid)\n", f.c_str());
}
}
}
};
Setup::Setup()
: indexes(),
attributes(),
properties(),
extra_profiles()
{
verify_dir();
}
Setup::~Setup() = default;
//your_sha256_hash-------------
struct EmptySetup : Setup {};
struct SimpleSetup : Setup {
SimpleSetup() : Setup() {
index("title", DataType::STRING, CollectionType::SINGLE);
index("list", DataType::STRING, CollectionType::ARRAY);
index("keywords", DataType::STRING, CollectionType::WEIGHTEDSET);
attribute("date", DataType::INT32, CollectionType::SINGLE);
attribute("pos_zcurve", DataType::INT64, CollectionType::SINGLE);
attribute("imported_attr", DataType::INT32, CollectionType::SINGLE, true);
constants["my_tensor"] = "tensor(x{},y{})";
}
};
struct OnnxSetup : Setup {
OnnxSetup() : Setup() {
add_onnx_model(OnnxModel("simple", TEST_PATH("../../../../../eval/src/tests/tensor/onnx_wrapper/simple.onnx")));
add_onnx_model(std::move(OnnxModel("mapped", TEST_PATH("../../../../../eval/src/tests/tensor/onnx_wrapper/simple.onnx"))
.input_feature("query_tensor", "rankingExpression(qt)")
.input_feature("attribute_tensor", "rankingExpression(at)")
.input_feature("bias_tensor", "rankingExpression(bt)")
.output_name("output", "result")));
add_onnx_model(std::move(OnnxModel("fragile", TEST_PATH("../../../../../searchlib/src/tests/features/onnx_feature/fragile.onnx"))
.dry_run_on_setup(true)));
add_onnx_model(std::move(OnnxModel("unfragile", TEST_PATH("../../../../../searchlib/src/tests/features/onnx_feature/fragile.onnx"))
.dry_run_on_setup(false)));
}
};
struct ShadowSetup : Setup {
ShadowSetup() : Setup() {
index("both", DataType::STRING, CollectionType::SINGLE);
attribute("both", DataType::STRING, CollectionType::SINGLE);
}
};
TEST_F("print usage", Setup()) {
EXPECT_TRUE(!vespalib::Process::run(fmt("%s", prog)));
}
TEST_F("setup output directory", Setup()) {
ASSERT_TRUE(vespalib::Process::run(fmt("rm -rf %s", gen_dir.c_str())));
ASSERT_TRUE(vespalib::Process::run(fmt("mkdir %s", gen_dir.c_str())));
}
//your_sha256_hash-------------
TEST_F("require that empty setup passes validation", EmptySetup()) {
EXPECT_TRUE(f.verify());
}
TEST_F("require that we can verify multiple rank profiles", SimpleSetup()) {
f.first_phase(valid_feature);
f.good_profile();
EXPECT_TRUE(f.verify());
f.bad_profile();
EXPECT_TRUE(!f.verify());
}
TEST_F("require that first phase can break validation", SimpleSetup()) {
f.first_phase(invalid_feature);
EXPECT_TRUE(!f.verify());
}
TEST_F("require that second phase can break validation", SimpleSetup()) {
f.second_phase(invalid_feature);
EXPECT_TRUE(!f.verify());
}
TEST_F("require that match features can break validation", SimpleSetup()) {
f.match_feature(invalid_feature);
EXPECT_TRUE(!f.verify());
}
TEST_F("require that summary features can break validation", SimpleSetup()) {
f.summary_feature(invalid_feature);
EXPECT_TRUE(!f.verify());
}
TEST_F("require that dump features can break validation", SimpleSetup()) {
f.dump_feature(invalid_feature);
EXPECT_TRUE(!f.verify());
}
//your_sha256_hash-------------
TEST_F("require that fieldMatch feature requires single value field", SimpleSetup()) {
f.verify_invalid({"fieldMatch(keywords)", "fieldMatch(list)"}, SearchMode::INDEXED);
f.verify_valid({"fieldMatch(title)"});
}
TEST_F("require that age feature requires attribute parameter", SimpleSetup()) {
f.verify_invalid({"age(unknown)", "age(title)"}, SearchMode::INDEXED);
f.verify_valid({"age(date)"});
}
TEST_F("require that nativeRank can be used on any valid field", SimpleSetup()) {
f.verify_invalid({"nativeRank(unknown)"});
f.verify_valid({"nativeRank", "nativeRank(title)", "nativeRank(date)", "nativeRank(title,date)"});
}
TEST_F("require that nativeAttributeMatch requires attribute parameter", SimpleSetup()) {
f.verify_invalid({"nativeAttributeMatch(unknown)", "nativeAttributeMatch(title)", "nativeAttributeMatch(title,date)"}, SearchMode::INDEXED);
f.verify_valid({"nativeAttributeMatch", "nativeAttributeMatch(date)"});
}
TEST_F("require that shadowed attributes can be used", ShadowSetup()) {
f.verify_valid({"attribute(both)"});
}
TEST_F("require that ranking constants can be used", SimpleSetup()) {
f.verify_valid({"constant(my_tensor)"});
}
TEST_F("require that undefined ranking constants cannot be used", SimpleSetup()) {
f.verify_invalid({"constant(bogus_tensor)"});
}
TEST_F("require that ranking expressions can be verified", SimpleSetup()) {
f.rank_expr("my_expr", "constant(my_tensor)+attribute(date)");
f.verify_valid({"rankingExpression(my_expr)"});
}
//your_sha256_hash-------------
TEST_F("require that tensor join is supported", SimpleSetup()) {
f.rank_expr("my_expr", "join(constant(my_tensor),attribute(date),f(t,d)(t+d))");
f.verify_valid({"rankingExpression(my_expr)"});
}
TEST_F("require that nested tensor join is not supported", SimpleSetup()) {
f.rank_expr("my_expr", "join(constant(my_tensor),attribute(date),f(t,d)(join(t,d,f(x,y)(x+y))))");
f.verify_invalid({"rankingExpression(my_expr)"});
}
TEST_F("require that imported attribute field can be used by rank feature", SimpleSetup()) {
f.verify_valid({"attribute(imported_attr)"});
}
//your_sha256_hash-------------
TEST_F("require that external ranking expression can be verified", SimpleSetup()) {
f.ext_rank_expr("my_expr", "good_ranking_expression");
f.verify_valid({"rankingExpression(my_expr)"});
}
TEST_F("require that external ranking expression can fail verification", SimpleSetup()) {
f.ext_rank_expr("my_expr", "bad_ranking_expression");
f.verify_invalid({"rankingExpression(my_expr)"});
}
TEST_F("require that missing expression file fails verification", SimpleSetup()) {
f.ext_rank_expr("my_expr", "missing_ranking_expression_file");
f.verify_invalid({"rankingExpression(my_expr)"});
}
//your_sha256_hash-------------
TEST_F("require that onnx model can be verified", OnnxSetup()) {
f.rank_expr("query_tensor", "tensor<float>(a[1],b[4]):[[1,2,3,4]]");
f.rank_expr("attribute_tensor", "tensor<float>(a[4],b[1]):[[5],[6],[7],[8]]");
f.rank_expr("bias_tensor", "tensor<float>(a[1],b[1]):[[9]]");
f.verify_valid({"onnx(simple)"});
}
TEST_F("require that onnx model can be verified with old name", OnnxSetup()) {
f.rank_expr("query_tensor", "tensor<float>(a[1],b[4]):[[1,2,3,4]]");
f.rank_expr("attribute_tensor", "tensor<float>(a[4],b[1]):[[5],[6],[7],[8]]");
f.rank_expr("bias_tensor", "tensor<float>(a[1],b[1]):[[9]]");
f.verify_valid({"onnxModel(simple)"});
}
TEST_F("require that input type mismatch makes onnx model fail verification", OnnxSetup()) {
f.rank_expr("query_tensor", "tensor<float>(a[1],b[3]):[[1,2,3]]"); // <- 3 vs 4
f.rank_expr("attribute_tensor", "tensor<float>(a[4],b[1]):[[5],[6],[7],[8]]");
f.rank_expr("bias_tensor", "tensor<float>(a[1],b[1]):[[9]]");
f.verify_invalid({"onnx(simple)"});
}
TEST_F("require that onnx model can have inputs and outputs mapped", OnnxSetup()) {
f.rank_expr("qt", "tensor<float>(a[1],b[4]):[[1,2,3,4]]");
f.rank_expr("at", "tensor<float>(a[4],b[1]):[[5],[6],[7],[8]]");
f.rank_expr("bt", "tensor<float>(a[1],b[1]):[[9]]");
f.verify_valid({"onnx(mapped).result"});
}
TEST_F("require that fragile model can pass verification", OnnxSetup()) {
f.rank_expr("in1", "tensor<float>(a[2]):[1,2]");
f.rank_expr("in2", "tensor<float>(a[2]):[3,4]");
f.verify_valid({"onnx(fragile)"});
}
TEST_F("require that broken fragile model fails verification", OnnxSetup()) {
f.rank_expr("in1", "tensor<float>(a[2]):[1,2]");
f.rank_expr("in2", "tensor<float>(a[3]):[3,4,31515]");
f.verify_invalid({"onnx(fragile)"});
}
TEST_F("require that broken fragile model without dry-run passes verification", OnnxSetup()) {
f.rank_expr("in1", "tensor<float>(a[2]):[1,2]");
f.rank_expr("in2", "tensor<float>(a[3]):[3,4,31515]");
f.verify_valid({"onnx(unfragile)"});
}
//your_sha256_hash-------------
TEST_F("require that query tensor can have default value", SimpleSetup()) {
f.query_feature_type("foo", "tensor(x[3])");
f.query_feature_default_value("foo", "tensor(x[3])(x+1)");
f.verify_valid({"query(foo)"});
}
TEST_F("require that query tensor default value must have appropriate type", SimpleSetup()) {
f.query_feature_type("foo", "tensor(y[3])");
f.query_feature_default_value("foo", "tensor(x[3])(x+1)");
f.verify_invalid({"query(foo)"});
}
TEST_F("require that query tensor default value must be a valid expression", SimpleSetup()) {
f.query_feature_type("foo", "tensor(x[3])");
f.query_feature_default_value("foo", "this expression is not parseable");
f.verify_invalid({"query(foo)"});
}
TEST_F("require that query tensor default value expression does not need parameters", SimpleSetup()) {
f.query_feature_type("foo", "tensor(x[3])");
f.query_feature_default_value("foo", "externalSymbol");
f.verify_invalid({"query(foo)"});
}
//your_sha256_hash-------------
TEST_F("require that zcurve distance can be set up", SimpleSetup()) {
f.verify_valid({"distance(pos)"});
}
TEST_F("require that zcurve distance must be backed by an attribute", SimpleSetup()) {
f.verify_invalid({"distance(unknown)"});
}
//your_sha256_hash-------------
TEST_F("cleanup files", Setup()) {
ASSERT_TRUE(vespalib::Process::run(fmt("rm -rf %s", gen_dir.c_str())));
}
TEST_MAIN() { TEST_RUN_ALL(); }
```
|
Michael McGahey (29 May 1925 – 30 January 1999) was a Scottish miners' leader and Communist. He had a distinctive gravelly voice, and described himself as "a product of my class and my movement".
Early life
His father, John McGahey, worked in the mines at Shotts, North Lanarkshire, when Mick was born. John was a founder member of the Communist Party of Great Britain and took an active part in the 1926 General Strike. The family moved to Cambuslang in search of work, and it was here that McGahey went to school.
McGahey started work at age 14 at the Gateside Colliery, and continued to work as a miner for the next 25 years. He followed his father into the Communist Party and the National Union of Mineworkers (NUM), remaining in the NUM all his life and in the Communist Party until its dissolution in 1991.
Trade unionist and communist
He became chairman of the local branch of his union when he was only eighteen and thereafter progressed through its echelons, though never quite reaching the national presidency. He was elected to the Scottish Executive of the NUM in 1958, becoming president of the Scottish area in 1967. He was regarded as a highly competent operator but his strongly militant line was opposed by others in the Union. He was defeated in the 1971 elections for National President by Joe Gormley.
McGahey was, however, elected National Vice-president of the NUM in 1972. He made similar progress in the Communist Party of Great Britain (CPGB), being elected to its Executive in 1971. He remained a member until the CPGB dissolved in 1991 and then joined its successor in Scotland, the Communist Party of Scotland. He was reportedly the subject of phone tapping by the UK security service MI5, whose transcribers found him difficult to understand because of his accent and the effects of alcohol consumption.
He came to the attention of the public during the miners' strikes of 1972 and 1974. He later claimed these were purely industrial disputes, made political by the then Prime Minister, Edward Heath. Nevertheless, he took a characteristically militant line, opposing some of the tactics of Gormley, accusing him of "ballotitis" and swearing he would not be "constitutionalised" out of a national strike. Gormley, it was later claimed, postponed his own retirement until 1981, by which time McGahey was over 55, too old under union rules to stand for president.
He played a smaller role (mostly on Scottish affairs) during the 1984–1985 miners' strike, as he was nearing retirement. He opposed the holding of a national ballot and favoured letting regions make their own decisions on whether to strike. He saw the appointment of Ian MacGregor as chair of the National Coal Board as a "declaration of war". James Cowan, then deputy chairman of the NCB, claimed that McGahey warned him to retire in 1983 to protect his health, as he feared that a "bloody" strike was inevitable after MacGregor's appointment and that there would be conflict between different regions of the NUM.
Surveillance by MI5 on McGahey during the 1984–85 strike found that he was "extremely angry and embarrassed" about Scargill's links with the Libyan regime, but that he was "happy to take part, with Scargill and other NUM leaders, in contacts with Soviet representatives".
After the strike, McGahey became more critical of Arthur Scargill and argued against the growing concentration of power within the NUM in the national leadership at the expense of the regional areas. He expressed regret over the use of violent picketing in Nottinghamshire and the divisions that this caused amongst mineworkers, saying:
I am not sure we handled it all correctly. The mass intrusion of pickets into Notts, not just Yorkshiremen; I accept some responsibility for that, and so will the left have to. I think if as an executive we had approached Notts without pickets, it might have been different. Because I reject, I have made this clear since the strike, that 25 or 30,000 Notts miners, their wives and families and communities are scabs and blacklegs. I refuse to accept this. We did alienate them during the strike.
McGahey consistently insisted that the NUM find a way to reconcile with the Union of Democratic Mineworkers (a Midlands-based breakaway from the NUM). Scargill referred to this insistence as a "tragedy that people from the far north should pontificate about what we should be doing to win back members for the NUM."
Comments after the death of Ian MacGregor
On the death of Ian MacGregor (chair of the National Coal Board during the 1984–85 strike), McGahey said, "It's no loss to people of my ilk. MacGregor was a vicious, anti-trades unionist, anti-working class person, recruited by the Tory government quite deliberately for the purpose of destroying trade unionism in the mining industry. I will not suffer any grief, nor will I in any way cry over the loss of Ian MacGregor."
Personal life and memorials
He married Catherine Young in 1954 with whom he had two daughters and a son.
A heavy smoker for most of his life, McGahey suffered in later years from chronic emphysema and pneumoconiosis.
A significant memorial, in the form of mine workings, stands to him at the east end of Cambuslang Main Street and there have been calls in the Scottish Parliament for a more national memorial.
On 28 April 2006, in Bonnyrigg, ex-UNISON general secretary Rodney Bickerstaffe unveiled a memorial to mark the 10th anniversary of McGahey's address to the Midlothian TUC Workers' Memorial Day event in George V Park. McGahey's son was present.
Bickerstaffe described McGahey as a "working class hero" who never lost touch with his roots and socialist values. He listed some of McGahey's sayings which were just as relevant today. "We are a movement not a monument", he quoted as a reminder of the need to continue to move and to fight on, and finished by saying "We know the reasons why Michael never became NUM President, but whether he had stayed as a steward or a delegate he would still have had a major impact on the movement".
References
1925 births
1999 deaths
Communist Party of Great Britain members
Scottish communists
Scottish trade unionists
People from Cambuslang
Members of the General Council of the Trades Union Congress
People from Shotts
Scottish miners
Vice Presidents of the National Union of Mineworkers (Great Britain)
|
Hyperaspidius marginatus is a species of lady beetle in the family Coccinellidae. It is found in North America.
References
Further reading
Coccinellidae
Articles created by Qbugbot
Beetles described in 1933
|
The Pale Horseman is the second historical novel in the Saxon Stories by Bernard Cornwell, published in 2005. It is set in 9th century Wessex and Cornwall.
Plot summary
876 – 878: Lord Uhtred of Bebbanburg arrives at King Alfred of Wessex's court to proclaim the defeat of the forces of Danish chieftain and warrior Ubba Lothbrokson, as well as his killing of Ubba himself in single combat, only to find that his enemy Ealdorman Odda the Younger has lied, denying he had any part in the great victory. Uhtred is so enraged that he draws his sword in the king's presence, and is forced to do penance. This strengthens Alfred's dislike and distrust of him.
Alfred makes peace with the Danish king Guthrum, rather than take advantage of the victory, much to Uhtred's disgust. Uhtred goes home, but eventually becomes bored and goes off raiding into Cornwall. He comes across a settlement ruled by Peredur, who hires Uhtred and his men to fight an enemy. Only later does Uhtred realize he has been tricked; his opponent is not some half-trained gang, but rather the Dane Svein of the White Horse and his band of seasoned warriors. Uhtred and Svein ally, kill Peredur and pillage his settlement. Uhtred carries off one of Peredur's wives, the shadow queen Iseult, who is believed to have supernatural powers. A monk named Asser, who was at Peredur's court, witnesses the betrayal and escapes. Uhtred and Svein then part ways. On his way home, Uhtred captures a Danish ship laden with treasure. He returns to his estate and pious wife Mildrith, using his share of the treasure to build a great hall and pay his large debt to the Church.
The Witan summons Uhtred to an audience with King Alfred in Cippanhamm, where he is accused, based on the testimony of Asser, of using the king's ship to raid the Britons, with whom Wessex is at peace, and of joining Svein in attacking the Cynuit abbey. Steapa Snotor, one of Odda the Younger's warriors, says he too saw Uhtred at the abbey. They decide to settle the dispute with a trial by combat to the death between Uhtred and Steapa. The duel is cut short when Guthrum breaks his word and launches a surprise attack. Everyone flees. Uhtred, Leofric, and Iseult hide in the fields until nightfall, when they enter Cippanhamm and rescue a friend, the whore Eanflæd, as well as a beautiful nun named Hild. The five of them wander for a few weeks until they reach the swamps of Athelney.
At the edge of the marsh, Uhtred rescues a monk from Guthrum's men, only to discover that the monk is actually Alfred. While Uhtred briefly consorts with childhood friends, the distraught king prays and considers going into exile, but with Uhtred's encouragement decides to stay and fight. For a few months, they hide in the swamp, spreading the word that Alfred is still alive; slowly men come to join them.
When Svein anchors his fleet at the mouth of the River Parret close to their hideout, Uhtred tricks the men Svein left to guard his ships and burns all but one. Without his ships, Svein is forced to join his rival, Guthrum. This is what Alfred wants: an opportunity for one decisive battle against both Danish invaders.
Alfred raises those fyrds that have remained loyal, but is still outnumbered. Furthermore, all of the Danes are trained warriors, while only a portion of Alfred's men are. Nevertheless, they win the Battle of Ethandun, with Uhtred playing a pivotal role, and Alfred's kingdom is saved.
Characters in "The Pale Horseman"
Uhtred - the protagonist, narrator
King Alfred of Wessex (Alfred the Great) - the King of Wessex
Leofric - Captain of the Heahengel, one of the ships of Wessex
Iseult - A Briton shadow queen from Cornwall
Father Beocca - Priest and family friend
Guthrum the Unlucky - Danish King
Svein of the White Horse - Danish chieftain
Haesten - Captured Dane freed by Uhtred, later joins Guthrum. Haesten is a historical character.
Ragnar Ragnarsson - Uhtred's best friend
Odda the Younger - Son of Odda the Elder, Ealdorman of Defnascir
Steapa Snotor (the Clever) - Odda the Younger's bodyguard
Father Pyrlig - A Welsh priest and former warrior
Ælswith - Alfred's wife who dislikes Uhtred
Eanflæd - whore rescued by Uhtred in Cippanhamm
Æthelflæd - Alfred's daughter
Æthelwold - Alfred's nephew and friend of Uhtred
Hild - Nun rescued at Alfred's behest in Cippanhamm
Mildrith - Uhtred's pious West Saxon wife
Brother Asser - Welsh monk and enemy of Uhtred
Release details
2005, UK, HarperCollins , Pub date 3 October 2005, hardback
2005, UK, HarperCollins , Pub date 3 October 2005, audio cassette (Kati Nicholl editor, Jamie Glover narrator)
2005, UK, HarperCollins , Pub date 3 October 2005, audio CD (Kati Nicholl editor, Jamie Glover narrator)
2006, UK, HarperLargePrint , Pub date ? January 2006, paperback (large print)
2006, UK, HarperCollins , Pub date 22 May 2006, paperback (forthcoming edition)
See also
Four Horsemen of the Apocalypse#White Horse
2005 British novels
The Saxon Stories
HarperCollins books
|
Psilaxis is a genus of sea snails, marine gastropod mollusks in the family Architectonicidae, the staircase shells or sundials.
Species
Species within the genus Psilaxis include:
Psilaxis clertoni Tenório, Barros, Francisco & Silva, 2011
Psilaxis krebsii (Mörch, 1875)
Psilaxis oxytropis (A. Adams, 1855)
Psilaxis radiatus (Röding, 1798)
References
Bieler R. (1993). Architectonicidae of the Indo-Pacific (Mollusca, Gastropoda). Abhandlungen des Naturwissenschaftlichen Vereins in Hamburg (NF) 30: 1-376 [15 December]. page(s): 116
Rolán E., 2005. Malacological Fauna From The Cape Verde Archipelago. Part 1, Polyplacophora and Gastropoda.
External links
Woodring W.P.B. (1928). Miocene mollusks from Bowden, Jamaica. 2. Gastropods and discussion of results. Carnegie Institution of Washington Publication. 385: vii + 564 pp., 40 pls.
Architectonicidae
|
```c
/*your_sha256_hash---------
*
* FILE
* fe-misc.c
*
* DESCRIPTION
* miscellaneous useful functions
*
* The communication routines here are analogous to the ones in
* backend/libpq/pqcomm.c and backend/libpq/pqformat.c, but operate
* in the considerably different environment of the frontend libpq.
* In particular, we work with a bare nonblock-mode socket, rather than
* a stdio stream, so that we can avoid unwanted blocking of the application.
*
* XXX: MOVE DEBUG PRINTOUT TO HIGHER LEVEL. As is, block and restart
* will cause repeat printouts.
*
* We must speak the same transmitted data representations as the backend
* routines.
*
*
*
* IDENTIFICATION
* src/interfaces/libpq/fe-misc.c
*
*your_sha256_hash---------
*/
#include "postgres_fe.h"
#include <signal.h>
#include <time.h>
#ifdef WIN32
#include "win32.h"
#else
#include <unistd.h>
#include <sys/time.h>
#endif
#ifdef HAVE_POLL_H
#include <poll.h>
#endif
#ifdef HAVE_SYS_SELECT_H
#include <sys/select.h>
#endif
#include "libpq-fe.h"
#include "libpq-int.h"
#include "mb/pg_wchar.h"
#include "pg_config_paths.h"
#include "port/pg_bswap.h"
static int pqPutMsgBytes(const void *buf, size_t len, PGconn *conn);
static int pqSendSome(PGconn *conn, int len);
static int pqSocketCheck(PGconn *conn, int forRead, int forWrite,
time_t end_time);
static int pqSocketPoll(int sock, int forRead, int forWrite, time_t end_time);
/*
* PQlibVersion: return the libpq version number
*/
int
PQlibVersion(void)
{
return PG_VERSION_NUM;
}
/*
* pqGetc: get 1 character from the connection
*
* All these routines return 0 on success, EOF on error.
* Note that for the Get routines, EOF only means there is not enough
* data in the buffer, not that there is necessarily a hard error.
*/
int
pqGetc(char *result, PGconn *conn)
{
if (conn->inCursor >= conn->inEnd)
return EOF;
*result = conn->inBuffer[conn->inCursor++];
return 0;
}
/*
* pqPutc: write 1 char to the current message
*/
int
pqPutc(char c, PGconn *conn)
{
if (pqPutMsgBytes(&c, 1, conn))
return EOF;
return 0;
}
/*
* pqGets[_append]:
* get a null-terminated string from the connection,
* and store it in an expansible PQExpBuffer.
* If we run out of memory, all of the string is still read,
* but the excess characters are silently discarded.
*/
static int
pqGets_internal(PQExpBuffer buf, PGconn *conn, bool resetbuffer)
{
/* Copy conn data to locals for faster search loop */
char *inBuffer = conn->inBuffer;
int inCursor = conn->inCursor;
int inEnd = conn->inEnd;
int slen;
while (inCursor < inEnd && inBuffer[inCursor])
inCursor++;
if (inCursor >= inEnd)
return EOF;
slen = inCursor - conn->inCursor;
if (resetbuffer)
resetPQExpBuffer(buf);
appendBinaryPQExpBuffer(buf, inBuffer + conn->inCursor, slen);
conn->inCursor = ++inCursor;
return 0;
}
int
pqGets(PQExpBuffer buf, PGconn *conn)
{
return pqGets_internal(buf, conn, true);
}
int
pqGets_append(PQExpBuffer buf, PGconn *conn)
{
return pqGets_internal(buf, conn, false);
}
/*
* pqPuts: write a null-terminated string to the current message
*/
int
pqPuts(const char *s, PGconn *conn)
{
if (pqPutMsgBytes(s, strlen(s) + 1, conn))
return EOF;
return 0;
}
/*
* pqGetnchar:
* get a string of exactly len bytes in buffer s, no null termination
*/
int
pqGetnchar(char *s, size_t len, PGconn *conn)
{
if (len > (size_t) (conn->inEnd - conn->inCursor))
return EOF;
memcpy(s, conn->inBuffer + conn->inCursor, len);
/* no terminating null */
conn->inCursor += len;
return 0;
}
/*
* pqSkipnchar:
* skip over len bytes in input buffer.
*
* Note: this is primarily useful for its debug output, which should
* be exactly the same as for pqGetnchar. We assume the data in question
* will actually be used, but just isn't getting copied anywhere as yet.
*/
int
pqSkipnchar(size_t len, PGconn *conn)
{
if (len > (size_t) (conn->inEnd - conn->inCursor))
return EOF;
conn->inCursor += len;
return 0;
}
/*
* pqPutnchar:
* write exactly len bytes to the current message
*/
int
pqPutnchar(const char *s, size_t len, PGconn *conn)
{
if (pqPutMsgBytes(s, len, conn))
return EOF;
return 0;
}
/*
* pqGetInt
* read a 2 or 4 byte integer and convert from network byte order
* to local byte order
*/
int
pqGetInt(int *result, size_t bytes, PGconn *conn)
{
uint16 tmp2;
uint32 tmp4;
switch (bytes)
{
case 2:
if (conn->inCursor + 2 > conn->inEnd)
return EOF;
memcpy(&tmp2, conn->inBuffer + conn->inCursor, 2);
conn->inCursor += 2;
*result = (int) pg_ntoh16(tmp2);
break;
case 4:
if (conn->inCursor + 4 > conn->inEnd)
return EOF;
memcpy(&tmp4, conn->inBuffer + conn->inCursor, 4);
conn->inCursor += 4;
*result = (int) pg_ntoh32(tmp4);
break;
default:
pqInternalNotice(&conn->noticeHooks,
"integer of size %lu not supported by pqGetInt",
(unsigned long) bytes);
return EOF;
}
return 0;
}
/*
* pqPutInt
* write an integer of 2 or 4 bytes, converting from host byte order
* to network byte order.
*/
int
pqPutInt(int value, size_t bytes, PGconn *conn)
{
uint16 tmp2;
uint32 tmp4;
switch (bytes)
{
case 2:
tmp2 = pg_hton16((uint16) value);
if (pqPutMsgBytes((const char *) &tmp2, 2, conn))
return EOF;
break;
case 4:
tmp4 = pg_hton32((uint32) value);
if (pqPutMsgBytes((const char *) &tmp4, 4, conn))
return EOF;
break;
default:
pqInternalNotice(&conn->noticeHooks,
"integer of size %lu not supported by pqPutInt",
(unsigned long) bytes);
return EOF;
}
return 0;
}
/*
* Make sure conn's output buffer can hold bytes_needed bytes (caller must
* include already-stored data into the value!)
*
* Returns 0 on success, EOF if failed to enlarge buffer
*/
int
pqCheckOutBufferSpace(size_t bytes_needed, PGconn *conn)
{
int newsize = conn->outBufSize;
char *newbuf;
/* Quick exit if we have enough space */
if (bytes_needed <= (size_t) newsize)
return 0;
/*
* If we need to enlarge the buffer, we first try to double it in size; if
* that doesn't work, enlarge in multiples of 8K. This avoids thrashing
* the malloc pool by repeated small enlargements.
*
* Note: tests for newsize > 0 are to catch integer overflow.
*/
do
{
newsize *= 2;
} while (newsize > 0 && bytes_needed > (size_t) newsize);
if (newsize > 0 && bytes_needed <= (size_t) newsize)
{
newbuf = realloc(conn->outBuffer, newsize);
if (newbuf)
{
/* realloc succeeded */
conn->outBuffer = newbuf;
conn->outBufSize = newsize;
return 0;
}
}
newsize = conn->outBufSize;
do
{
newsize += 8192;
} while (newsize > 0 && bytes_needed > (size_t) newsize);
if (newsize > 0 && bytes_needed <= (size_t) newsize)
{
newbuf = realloc(conn->outBuffer, newsize);
if (newbuf)
{
/* realloc succeeded */
conn->outBuffer = newbuf;
conn->outBufSize = newsize;
return 0;
}
}
/* realloc failed. Probably out of memory */
appendPQExpBufferStr(&conn->errorMessage,
"cannot allocate memory for output buffer\n");
return EOF;
}
/*
* Make sure conn's input buffer can hold bytes_needed bytes (caller must
* include already-stored data into the value!)
*
* Returns 0 on success, EOF if failed to enlarge buffer
*/
int
pqCheckInBufferSpace(size_t bytes_needed, PGconn *conn)
{
int newsize = conn->inBufSize;
char *newbuf;
/* Quick exit if we have enough space */
if (bytes_needed <= (size_t) newsize)
return 0;
/*
* Before concluding that we need to enlarge the buffer, left-justify
* whatever is in it and recheck. The caller's value of bytes_needed
* includes any data to the left of inStart, but we can delete that in
* preference to enlarging the buffer. It's slightly ugly to have this
* function do this, but it's better than making callers worry about it.
*/
bytes_needed -= conn->inStart;
if (conn->inStart < conn->inEnd)
{
if (conn->inStart > 0)
{
memmove(conn->inBuffer, conn->inBuffer + conn->inStart,
conn->inEnd - conn->inStart);
conn->inEnd -= conn->inStart;
conn->inCursor -= conn->inStart;
conn->inStart = 0;
}
}
else
{
/* buffer is logically empty, reset it */
conn->inStart = conn->inCursor = conn->inEnd = 0;
}
/* Recheck whether we have enough space */
if (bytes_needed <= (size_t) newsize)
return 0;
/*
* If we need to enlarge the buffer, we first try to double it in size; if
* that doesn't work, enlarge in multiples of 8K. This avoids thrashing
* the malloc pool by repeated small enlargements.
*
* Note: tests for newsize > 0 are to catch integer overflow.
*/
do
{
newsize *= 2;
} while (newsize > 0 && bytes_needed > (size_t) newsize);
if (newsize > 0 && bytes_needed <= (size_t) newsize)
{
newbuf = realloc(conn->inBuffer, newsize);
if (newbuf)
{
/* realloc succeeded */
conn->inBuffer = newbuf;
conn->inBufSize = newsize;
return 0;
}
}
newsize = conn->inBufSize;
do
{
newsize += 8192;
} while (newsize > 0 && bytes_needed > (size_t) newsize);
if (newsize > 0 && bytes_needed <= (size_t) newsize)
{
newbuf = realloc(conn->inBuffer, newsize);
if (newbuf)
{
/* realloc succeeded */
conn->inBuffer = newbuf;
conn->inBufSize = newsize;
return 0;
}
}
/* realloc failed. Probably out of memory */
appendPQExpBufferStr(&conn->errorMessage,
"cannot allocate memory for input buffer\n");
return EOF;
}
/*
* pqPutMsgStart: begin construction of a message to the server
*
* msg_type is the message type byte, or 0 for a message without type byte
* (only startup messages have no type byte)
*
* Returns 0 on success, EOF on error
*
* The idea here is that we construct the message in conn->outBuffer,
* beginning just past any data already in outBuffer (ie, at
* outBuffer+outCount). We enlarge the buffer as needed to hold the message.
* When the message is complete, we fill in the length word (if needed) and
* then advance outCount past the message, making it eligible to send.
*
* The state variable conn->outMsgStart points to the incomplete message's
* length word: it is either outCount or outCount+1 depending on whether
* there is a type byte. The state variable conn->outMsgEnd is the end of
* the data collected so far.
*/
int
pqPutMsgStart(char msg_type, PGconn *conn)
{
int lenPos;
int endPos;
/* allow room for message type byte */
if (msg_type)
endPos = conn->outCount + 1;
else
endPos = conn->outCount;
/* do we want a length word? */
lenPos = endPos;
/* allow room for message length */
endPos += 4;
/* make sure there is room for message header */
if (pqCheckOutBufferSpace(endPos, conn))
return EOF;
/* okay, save the message type byte if any */
if (msg_type)
conn->outBuffer[conn->outCount] = msg_type;
/* set up the message pointers */
conn->outMsgStart = lenPos;
conn->outMsgEnd = endPos;
/* length word, if needed, will be filled in by pqPutMsgEnd */
return 0;
}
/*
* pqPutMsgBytes: add bytes to a partially-constructed message
*
* Returns 0 on success, EOF on error
*/
static int
pqPutMsgBytes(const void *buf, size_t len, PGconn *conn)
{
/* make sure there is room for it */
if (pqCheckOutBufferSpace(conn->outMsgEnd + len, conn))
return EOF;
/* okay, save the data */
memcpy(conn->outBuffer + conn->outMsgEnd, buf, len);
conn->outMsgEnd += len;
/* no Pfdebug call here, caller should do it */
return 0;
}
/*
* pqPutMsgEnd: finish constructing a message and possibly send it
*
* Returns 0 on success, EOF on error
*
* We don't actually send anything here unless we've accumulated at least
* 8K worth of data (the typical size of a pipe buffer on Unix systems).
* This avoids sending small partial packets. The caller must use pqFlush
* when it's important to flush all the data out to the server.
*/
int
pqPutMsgEnd(PGconn *conn)
{
/* Fill in length word if needed */
if (conn->outMsgStart >= 0)
{
uint32 msgLen = conn->outMsgEnd - conn->outMsgStart;
msgLen = pg_hton32(msgLen);
memcpy(conn->outBuffer + conn->outMsgStart, &msgLen, 4);
}
/* trace client-to-server message */
if (conn->Pfdebug)
{
if (conn->outCount < conn->outMsgStart)
pqTraceOutputMessage(conn, conn->outBuffer + conn->outCount, true);
else
pqTraceOutputNoTypeByteMessage(conn,
conn->outBuffer + conn->outMsgStart);
}
/* Make message eligible to send */
conn->outCount = conn->outMsgEnd;
if (conn->outCount >= 8192)
{
int toSend = conn->outCount - (conn->outCount % 8192);
if (pqSendSome(conn, toSend) < 0)
return EOF;
/* in nonblock mode, don't complain if unable to send it all */
}
return 0;
}
/* ----------
* pqReadData: read more data, if any is available
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
* -1: error detected (including EOF = connection closure);
* conn->errorMessage set
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
*/
int
pqReadData(PGconn *conn)
{
int someread = 0;
int nread;
if (conn->sock == PGINVALID_SOCKET)
{
appendPQExpBufferStr(&conn->errorMessage,
libpq_gettext("connection not open\n"));
return -1;
}
/* Left-justify any data in the buffer to make room */
if (conn->inStart < conn->inEnd)
{
if (conn->inStart > 0)
{
memmove(conn->inBuffer, conn->inBuffer + conn->inStart,
conn->inEnd - conn->inStart);
conn->inEnd -= conn->inStart;
conn->inCursor -= conn->inStart;
conn->inStart = 0;
}
}
else
{
/* buffer is logically empty, reset it */
conn->inStart = conn->inCursor = conn->inEnd = 0;
}
/*
* If the buffer is fairly full, enlarge it. We need to be able to enlarge
* the buffer in case a single message exceeds the initial buffer size. We
* enlarge before filling the buffer entirely so as to avoid asking the
* kernel for a partial packet. The magic constant here should be large
* enough for a TCP packet or Unix pipe bufferload. 8K is the usual pipe
* buffer size, so...
*/
if (conn->inBufSize - conn->inEnd < 8192)
{
if (pqCheckInBufferSpace(conn->inEnd + (size_t) 8192, conn))
{
/*
* We don't insist that the enlarge worked, but we need some room
*/
if (conn->inBufSize - conn->inEnd < 100)
return -1; /* errorMessage already set */
}
}
/* OK, try to read some data */
retry3:
nread = pqsecure_read(conn, conn->inBuffer + conn->inEnd,
conn->inBufSize - conn->inEnd);
if (nread < 0)
{
switch (SOCK_ERRNO)
{
case EINTR:
goto retry3;
/* Some systems return EAGAIN/EWOULDBLOCK for no data */
#ifdef EAGAIN
case EAGAIN:
return someread;
#endif
#if defined(EWOULDBLOCK) && (!defined(EAGAIN) || (EWOULDBLOCK != EAGAIN))
case EWOULDBLOCK:
return someread;
#endif
/* We might get ECONNRESET etc here if connection failed */
case ALL_CONNECTION_FAILURE_ERRNOS:
goto definitelyFailed;
default:
/* pqsecure_read set the error message for us */
return -1;
}
}
if (nread > 0)
{
conn->inEnd += nread;
/*
* Hack to deal with the fact that some kernels will only give us back
* 1 packet per recv() call, even if we asked for more and there is
* more available. If it looks like we are reading a long message,
* loop back to recv() again immediately, until we run out of data or
* buffer space. Without this, the block-and-restart behavior of
* libpq's higher levels leads to O(N^2) performance on long messages.
*
* Since we left-justified the data above, conn->inEnd gives the
* amount of data already read in the current message. We consider
* the message "long" once we have acquired 32k ...
*/
if (conn->inEnd > 32768 &&
(conn->inBufSize - conn->inEnd) >= 8192)
{
someread = 1;
goto retry3;
}
return 1;
}
if (someread)
return 1; /* got a zero read after successful tries */
/*
* A return value of 0 could mean just that no data is now available, or
* it could mean EOF --- that is, the server has closed the connection.
* Since we have the socket in nonblock mode, the only way to tell the
* difference is to see if select() is saying that the file is ready.
* Grumble. Fortunately, we don't expect this path to be taken much,
* since in normal practice we should not be trying to read data unless
* the file selected for reading already.
*
* In SSL mode it's even worse: SSL_read() could say WANT_READ and then
* data could arrive before we make the pqReadReady() test, but the second
* SSL_read() could still say WANT_READ because the data received was not
* a complete SSL record. So we must play dumb and assume there is more
* data, relying on the SSL layer to detect true EOF.
*/
#ifdef USE_SSL
if (conn->ssl_in_use)
return 0;
#endif
switch (pqReadReady(conn))
{
case 0:
/* definitely no data available */
return 0;
case 1:
/* ready for read */
break;
default:
/* we override pqReadReady's message with something more useful */
goto definitelyEOF;
}
/*
* Still not sure that it's EOF, because some data could have just
* arrived.
*/
retry4:
nread = pqsecure_read(conn, conn->inBuffer + conn->inEnd,
conn->inBufSize - conn->inEnd);
if (nread < 0)
{
switch (SOCK_ERRNO)
{
case EINTR:
goto retry4;
/* Some systems return EAGAIN/EWOULDBLOCK for no data */
#ifdef EAGAIN
case EAGAIN:
return 0;
#endif
#if defined(EWOULDBLOCK) && (!defined(EAGAIN) || (EWOULDBLOCK != EAGAIN))
case EWOULDBLOCK:
return 0;
#endif
/* We might get ECONNRESET etc here if connection failed */
case ALL_CONNECTION_FAILURE_ERRNOS:
goto definitelyFailed;
default:
/* pqsecure_read set the error message for us */
return -1;
}
}
if (nread > 0)
{
conn->inEnd += nread;
return 1;
}
/*
* OK, we are getting a zero read even though select() says ready. This
* means the connection has been closed. Cope.
*/
definitelyEOF:
appendPQExpBufferStr(&conn->errorMessage,
libpq_gettext("server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.\n"));
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
return -1;
}
/*
* pqSendSome: send data waiting in the output buffer.
*
* len is how much to try to send (typically equal to outCount, but may
* be less).
*
* Return 0 on success, -1 on failure and 1 when not all data could be sent
* because the socket would block and the connection is non-blocking.
*
* Note that this is also responsible for consuming data from the socket
* (putting it in conn->inBuffer) in any situation where we can't send
* all the specified data immediately.
*
* Upon write failure, conn->write_failed is set and the error message is
* saved in conn->write_err_msg, but we clear the output buffer and return
* zero anyway; this is because callers should soldier on until it's possible
* to read from the server and check for an error message. write_err_msg
* should be reported only when we are unable to obtain a server error first.
* (Thus, a -1 result is returned only for an internal *read* failure.)
*/
static int
pqSendSome(PGconn *conn, int len)
{
char *ptr = conn->outBuffer;
int remaining = conn->outCount;
int oldmsglen = conn->errorMessage.len;
int result = 0;
/*
* If we already had a write failure, we will never again try to send data
* on that connection. Even if the kernel would let us, we've probably
* lost message boundary sync with the server. conn->write_failed
* therefore persists until the connection is reset, and we just discard
* all data presented to be written. However, as long as we still have a
* valid socket, we should continue to absorb data from the backend, so
* that we can collect any final error messages.
*/
if (conn->write_failed)
{
/* conn->write_err_msg should be set up already */
conn->outCount = 0;
/* Absorb input data if any, and detect socket closure */
if (conn->sock != PGINVALID_SOCKET)
{
if (pqReadData(conn) < 0)
return -1;
}
return 0;
}
if (conn->sock == PGINVALID_SOCKET)
{
conn->write_failed = true;
/* Insert error message into conn->write_err_msg, if possible */
/* (strdup failure is OK, we'll cope later) */
conn->write_err_msg = strdup(libpq_gettext("connection not open\n"));
/* Discard queued data; no chance it'll ever be sent */
conn->outCount = 0;
return 0;
}
/* while there's still data to send */
while (len > 0)
{
int sent;
#ifndef WIN32
sent = pqsecure_write(conn, ptr, len);
#else
/*
* Windows can fail on large sends, per KB article Q201213. The
* failure-point appears to be different in different versions of
* Windows, but 64k should always be safe.
*/
sent = pqsecure_write(conn, ptr, Min(len, 65536));
#endif
if (sent < 0)
{
/* Anything except EAGAIN/EWOULDBLOCK/EINTR is trouble */
switch (SOCK_ERRNO)
{
#ifdef EAGAIN
case EAGAIN:
break;
#endif
#if defined(EWOULDBLOCK) && (!defined(EAGAIN) || (EWOULDBLOCK != EAGAIN))
case EWOULDBLOCK:
break;
#endif
case EINTR:
continue;
default:
/* pqsecure_write set the error message for us */
conn->write_failed = true;
/*
* Transfer error message to conn->write_err_msg, if
* possible (strdup failure is OK, we'll cope later).
*
* We only want to transfer whatever has been appended to
* conn->errorMessage since we entered this routine.
*/
if (!PQExpBufferBroken(&conn->errorMessage))
{
conn->write_err_msg = strdup(conn->errorMessage.data +
oldmsglen);
conn->errorMessage.len = oldmsglen;
conn->errorMessage.data[oldmsglen] = '\0';
}
/* Discard queued data; no chance it'll ever be sent */
conn->outCount = 0;
/* Absorb input data if any, and detect socket closure */
if (conn->sock != PGINVALID_SOCKET)
{
if (pqReadData(conn) < 0)
return -1;
}
return 0;
}
}
else
{
ptr += sent;
len -= sent;
remaining -= sent;
}
if (len > 0)
{
/*
* We didn't send it all, wait till we can send more.
*
* There are scenarios in which we can't send data because the
* communications channel is full, but we cannot expect the server
* to clear the channel eventually because it's blocked trying to
* send data to us. (This can happen when we are sending a large
* amount of COPY data, and the server has generated lots of
* NOTICE responses.) To avoid a deadlock situation, we must be
* prepared to accept and buffer incoming data before we try
* again. Furthermore, it is possible that such incoming data
* might not arrive until after we've gone to sleep. Therefore,
* we wait for either read ready or write ready.
*
* In non-blocking mode, we don't wait here directly, but return 1
* to indicate that data is still pending. The caller should wait
* for both read and write ready conditions, and call
* PQconsumeInput() on read ready, but just in case it doesn't, we
* call pqReadData() ourselves before returning. That's not
* enough if the data has not arrived yet, but it's the best we
* can do, and works pretty well in practice. (The documentation
* used to say that you only need to wait for write-ready, so
* there are still plenty of applications like that out there.)
*
* Note that errors here don't result in write_failed becoming
* set.
*/
if (pqReadData(conn) < 0)
{
result = -1; /* error message already set up */
break;
}
if (pqIsnonblocking(conn))
{
result = 1;
break;
}
if (pqWait(true, true, conn))
{
result = -1;
break;
}
}
}
/* shift the remaining contents of the buffer */
if (remaining > 0)
memmove(conn->outBuffer, ptr, remaining);
conn->outCount = remaining;
return result;
}
/*
* pqFlush: send any data waiting in the output buffer
*
* Return 0 on success, -1 on failure and 1 when not all data could be sent
* because the socket would block and the connection is non-blocking.
* (See pqSendSome comments about how failure should be handled.)
*/
int
pqFlush(PGconn *conn)
{
if (conn->outCount > 0)
{
if (conn->Pfdebug)
fflush(conn->Pfdebug);
return pqSendSome(conn, conn->outCount);
}
return 0;
}
/*
* pqWait: wait until we can read or write the connection socket
*
* JAB: If SSL enabled and used and forRead, buffered bytes short-circuit the
* call to select().
*
* We also stop waiting and return if the kernel flags an exception condition
* on the socket. The actual error condition will be detected and reported
* when the caller tries to read or write the socket.
*/
int
pqWait(int forRead, int forWrite, PGconn *conn)
{
return pqWaitTimed(forRead, forWrite, conn, (time_t) -1);
}
/*
* pqWaitTimed: wait, but not past finish_time.
*
* finish_time = ((time_t) -1) disables the wait limit.
*
* Returns -1 on failure, 0 if the socket is readable/writable, 1 if it timed out.
*/
int
pqWaitTimed(int forRead, int forWrite, PGconn *conn, time_t finish_time)
{
int result;
result = pqSocketCheck(conn, forRead, forWrite, finish_time);
if (result < 0)
return -1; /* errorMessage is already set */
if (result == 0)
{
appendPQExpBufferStr(&conn->errorMessage,
libpq_gettext("timeout expired\n"));
return 1;
}
return 0;
}
/*
* pqReadReady: is select() saying the file is ready to read?
* Returns -1 on failure, 0 if not ready, 1 if ready.
*/
int
pqReadReady(PGconn *conn)
{
return pqSocketCheck(conn, 1, 0, (time_t) 0);
}
/*
* pqWriteReady: is select() saying the file is ready to write?
* Returns -1 on failure, 0 if not ready, 1 if ready.
*/
int
pqWriteReady(PGconn *conn)
{
return pqSocketCheck(conn, 0, 1, (time_t) 0);
}
/*
* Checks a socket, using poll or select, for data to be read, written,
* or both. Returns >0 if one or more conditions are met, 0 if it timed
* out, -1 if an error occurred.
*
* If SSL is in use, the SSL buffer is checked prior to checking the socket
* for read data directly.
*/
static int
pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
{
int result;
if (!conn)
return -1;
if (conn->sock == PGINVALID_SOCKET)
{
appendPQExpBufferStr(&conn->errorMessage,
libpq_gettext("invalid socket\n"));
return -1;
}
#ifdef USE_SSL
/* Check for SSL library buffering read bytes */
if (forRead && conn->ssl_in_use && pgtls_read_pending(conn))
{
/* short-circuit the select */
return 1;
}
#endif
/* We will retry as long as we get EINTR */
do
result = pqSocketPoll(conn->sock, forRead, forWrite, end_time);
while (result < 0 && SOCK_ERRNO == EINTR);
if (result < 0)
{
char sebuf[PG_STRERROR_R_BUFLEN];
appendPQExpBuffer(&conn->errorMessage,
libpq_gettext("%s() failed: %s\n"),
"select",
SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
}
return result;
}
/*
* Check a file descriptor for read and/or write data, possibly waiting.
* If neither forRead nor forWrite are set, immediately return a timeout
* condition (without waiting). Return >0 if condition is met, 0
* if a timeout occurred, -1 if an error or interrupt occurred.
*
* Timeout is infinite if end_time is -1. Timeout is immediate (no blocking)
* if end_time is 0 (or indeed, any time before now).
*/
static int
pqSocketPoll(int sock, int forRead, int forWrite, time_t end_time)
{
/* We use poll(2) if available, otherwise select(2) */
#ifdef HAVE_POLL
struct pollfd input_fd;
int timeout_ms;
if (!forRead && !forWrite)
return 0;
input_fd.fd = sock;
input_fd.events = POLLERR;
input_fd.revents = 0;
if (forRead)
input_fd.events |= POLLIN;
if (forWrite)
input_fd.events |= POLLOUT;
/* Compute appropriate timeout interval */
if (end_time == ((time_t) -1))
timeout_ms = -1;
else
{
time_t now = time(NULL);
if (end_time > now)
timeout_ms = (end_time - now) * 1000;
else
timeout_ms = 0;
}
return poll(&input_fd, 1, timeout_ms);
#else /* !HAVE_POLL */
fd_set input_mask;
fd_set output_mask;
fd_set except_mask;
struct timeval timeout;
struct timeval *ptr_timeout;
if (!forRead && !forWrite)
return 0;
FD_ZERO(&input_mask);
FD_ZERO(&output_mask);
FD_ZERO(&except_mask);
if (forRead)
FD_SET(sock, &input_mask);
if (forWrite)
FD_SET(sock, &output_mask);
FD_SET(sock, &except_mask);
/* Compute appropriate timeout interval */
if (end_time == ((time_t) -1))
ptr_timeout = NULL;
else
{
time_t now = time(NULL);
if (end_time > now)
timeout.tv_sec = end_time - now;
else
timeout.tv_sec = 0;
timeout.tv_usec = 0;
ptr_timeout = &timeout;
}
return select(sock + 1, &input_mask, &output_mask,
&except_mask, ptr_timeout);
#endif /* HAVE_POLL */
}
/*
* A couple of "miscellaneous" multibyte related functions. They used
* to be in fe-print.c but that file is doomed.
*/
/*
* Returns the byte length of the character beginning at s, using the
* specified encoding.
*
* Caution: when dealing with text that is not certainly valid in the
* specified encoding, the result may exceed the actual remaining
* string length. Callers that are not prepared to deal with that
* should use PQmblenBounded() instead.
*/
int
PQmblen(const char *s, int encoding)
{
return pg_encoding_mblen(encoding, s);
}
/*
* Returns the byte length of the character beginning at s, using the
* specified encoding; but not more than the distance to end of string.
*/
int
PQmblenBounded(const char *s, int encoding)
{
return strnlen(s, pg_encoding_mblen(encoding, s));
}
/*
* Returns the display length of the character beginning at s, using the
* specified encoding.
*/
int
PQdsplen(const char *s, int encoding)
{
return pg_encoding_dsplen(encoding, s);
}
/*
* Get encoding id from environment variable PGCLIENTENCODING.
*/
int
PQenv2encoding(void)
{
char *str;
int encoding = PG_SQL_ASCII;
str = getenv("PGCLIENTENCODING");
if (str && *str != '\0')
{
encoding = pg_char_to_encoding(str);
if (encoding < 0)
encoding = PG_SQL_ASCII;
}
return encoding;
}
#ifdef ENABLE_NLS
static void
libpq_binddomain(void)
{
/*
* If multiple threads come through here at about the same time, it's okay
* for more than one of them to call bindtextdomain(). But it's not okay
* for any of them to return to caller before bindtextdomain() is
* complete, so don't set the flag till that's done. Use "volatile" just
* to be sure the compiler doesn't try to get cute.
*/
static volatile bool already_bound = false;
if (!already_bound)
{
/* bindtextdomain() does not preserve errno */
#ifdef WIN32
int save_errno = GetLastError();
#else
int save_errno = errno;
#endif
const char *ldir;
/* No relocatable lookup here because the binary could be anywhere */
ldir = getenv("PGLOCALEDIR");
if (!ldir)
ldir = LOCALEDIR;
bindtextdomain(PG_TEXTDOMAIN("libpq"), ldir);
already_bound = true;
#ifdef WIN32
SetLastError(save_errno);
#else
errno = save_errno;
#endif
}
}
char *
libpq_gettext(const char *msgid)
{
libpq_binddomain();
return dgettext(PG_TEXTDOMAIN("libpq"), msgid);
}
char *
libpq_ngettext(const char *msgid, const char *msgid_plural, unsigned long n)
{
libpq_binddomain();
return dngettext(PG_TEXTDOMAIN("libpq"), msgid, msgid_plural, n);
}
#endif /* ENABLE_NLS */
```
|
Alexandros Tzorvas (; born 12 August 1982) is a Greek former professional footballer who played as a goalkeeper.
Club career
Early career
He began his career in the renowned Panathinaikos football academy (the Domazos academy) but, despite his potential, he found it hard to break into the first team, as the more experienced Antonis Nikopolidis, Konstantinos Chalkias and Stefanos Kotsolis provided more than full cover for the goalkeeping position. Because of this strong competition, and since the club still counted on him for the future, over the next four years he was loaned to the affiliated clubs Agios Nikolaos, Markopoulo and Thrasyvoulos in order to gain experience.
In 2005, Panathinaikos recalled him from loan as cover for Mario Galinović and Pierre Ebede.
OFI Crete
A lack of first-team opportunities led to his transfer to OFI, with Orestis Karnezis – also a highly rated goalkeeping prospect – moving the opposite way. A streak of very good performances quickly made him a starter for the Cretan club, even though his transfer was viewed with suspicion by OFI fans, mainly because of what they saw as the one-sided, "colonial"-style agreements their team was signing with Panathinaikos. In 2008, Tzorvas was named the second-best Greek goalkeeper, behind the veteran Antonis Nikopolidis.
Return to Panathinaikos
During the 2008–09 season, Panathinaikos recalled him from OFI, with Arkadiusz Malarz moving the opposite way. Although Mario Galinović was considered the first choice for the goalkeeper position in 2008–09, Tzorvas got his chance as early as October, starting several matches in the Greek Superleague and the UEFA Champions League.
From the 2009–10 Super League season, beginning with the match against Aris Thessaloniki F.C., he was regarded as the first choice for the goalkeeper position, mainly due to poor performances by his teammate Galinović.
Tzorvas is a die-hard fan of Panathinaikos. Once, in a game against Olympiakos at Karaiskákis Stadium, he went in front of the partisan opposition fans and kissed the shamrock badge on his shirt.
Italy
On 26 August 2011 he moved to Italy, joining Serie A club Palermo for €700,000. He agreed a two-year contract and took the No. 33 jersey, replacing Salvatore Sirigu, who had left weeks earlier to join Paris Saint-Germain FC. He made his debut for the rosanero on 11 September 2011, in a 4–3 Serie A win against Inter. Originally signed as the starting goalkeeper, he fell out of favour with head coach Devis Mangia after some unimpressive performances and ultimately lost his place to Francesco Benussi. He remained second choice under Bortolo Mutti, who replaced Mangia as Palermo boss in December, and was then firmly relegated to the bench after Palermo signed the promising Italian international Emiliano Viviano from Genoa during the January 2012 transfer window.
In the summer of 2012, Italian media reported that Tzorvas might not get the chance to prove his Palermo critics wrong: La Gazzetta dello Sport linked him with league rivals Udinese, who were looking to fill the void between the sticks after the sale of Samir Handanovic to Inter Milan and an injury to expected starter Željko Brkić. A move away from the Sicilian club would have offered a fresh start after a disastrous first season in Italy, in which a shaky Palermo back line was partly responsible for his struggles; with Udinese set to compete in the play-off round of the UEFA Champions League, the northeastern Italian club appeared an attractive destination despite the departure of several key players.
In the summer of 2012 he instead moved to Genoa on a permanent deal for €600,000, signing a one-year contract, with youth-team striker Daniel Jara Martínez joining on loan as part of an exchange that sent Swiss defender Steve von Bergen to Palermo for €1.7 million. He served as back-up to expected starter Sébastien Frey while competing with Canadian goalkeeper Robert Stillo for a place on the bench. On 17 March 2013, Tzorvas made his Genoa debut away to Fiorentina in a difficult match, smothering a Stevan Jovetić snapshot before a fierce David Pizarro effort took a massive deflection, looped over him and landed on the crossbar.
Apollon Smyrnis
On 9 July 2013, Tzorvas returned to Greece as a free agent, joining newly promoted Super League side Apollon Smyrni on a two-year deal, the club confirmed. The Athenians were returning to the Greek top flight for the first time in 13 years and made Tzorvas the latest in a string of summer acquisitions.
On 18 August 2013, he made his debut with the club in a 2–1 home win against Aris.
Tzorvas's decision to return to Greece was not an easy one, given the gulf between Serie A and the Super League, but his enormous desire to regain his place in the Greece national football team led him to sign for a club widely tipped for relegation. He was eventually included in Fernando Santos's squad for a friendly against South Korea.
NorthEast United
On 1 September 2014, he signed a contract with NorthEast United FC to play in the Indian Super League. In his first match, against Kerala Blasters FC, he kept a clean sheet and was named man of the match. On 23 December 2014 he left the club; the reason for his departure after only two appearances was not disclosed. Following his spell at NorthEast United, the Greek press reported in March 2015 that Tzorvas was considering a return to India, having been approached by two unidentified Indian clubs with significant offers.
International career
On 21 March 2008, Otto Rehhagel called up Tzorvas as third-choice goalkeeper for the friendly against Portugal on 26 March in Frankfurt, Germany. After Antonis Nikopolidis retired from the national team, Tzorvas became Greece's second-choice goalkeeper, and he was named in the 23-man squad for UEFA Euro 2008. He made his senior international debut for Greece on 19 November 2008 in a friendly match against 2006 FIFA World Cup champions Italy at the Karaiskákis Stadium in Piraeus.
One of Tzorvas's greatest career accomplishments to date came in Greece's 2009 home-and-away World Cup play-off against Ukraine. Tzorvas turned away the Ukrainian attack for the entire 180 minutes across the two legs in Athens and Donetsk, preserving the Greeks' 1–0 aggregate victory that sent the team to the 2010 FIFA World Cup in South Africa.
At the 2010 FIFA World Cup, he was the starting goalkeeper in all three group matches, beating off competition from Kostas Chalkias and Michalis Sifakis. At the UEFA Euro 2012, however, Kostas Chalkias took over his spot, and Michalis Sifakis played in the final group stage match against Russia and the quarter-final against Germany.
Honours
Club
Panathinaikos
Super League Greece: 2009–10
Greek Cup: 2009–10
References
External links
OnSports profile
1982 births
Living people
Men's association football goalkeepers
Greece men's international footballers
Greece men's under-21 international footballers
Footballers from Athens
Greek men's footballers
Thrasyvoulos F.C. players
OFI Crete F.C. players
Panathinaikos F.C. players
Super League Greece players
Genoa CFC players
Apollon Smyrnis F.C. players
NorthEast United FC players
Serie A players
UEFA Euro 2008 players
2010 FIFA World Cup players
UEFA Euro 2012 players
Expatriate men's footballers in Italy
Expatriate men's footballers in India
Greek expatriate men's footballers
Greek expatriate sportspeople in Italy
|
```scala
package io.buoyant.namer.consul
import com.twitter.finagle.Stack
import com.twitter.finagle.buoyant.{ClientAuth, TlsClientConfig}
import com.twitter.finagle.util.LoadService
import io.buoyant.config.Parser
import io.buoyant.config.types.Port
import io.buoyant.consul.v1.{ConsistencyMode, HealthStatus}
import io.buoyant.namer.{NamerConfig, NamerInitializer}
import org.scalatest.FunSuite
class ConsulTest extends FunSuite {
test("sanity") {
// ensure it doesn't totally blowup
val _ = ConsulConfig(None, None, None, None, None, None, None, None, None, None, None, None, None, None).newNamer(Stack.Params.empty)
}
test("service registration") {
assert(LoadService[NamerInitializer]().exists(_.isInstanceOf[ConsulInitializer]))
}
test("parse minimal config") {
val yaml = s"""
|kind: io.l5d.consul
""".stripMargin
val mapper = Parser.objectMapper(yaml, Iterable(Seq(ConsulInitializer)))
val consul = mapper.readValue[NamerConfig](yaml).asInstanceOf[ConsulConfig]
assert(consul.setHost.isEmpty)
assert(consul.includeTag.isEmpty)
assert(!consul.disabled)
}
test("parse all options config") {
val yaml = s"""
|kind: io.l5d.consul
|host: consul.site.biz
|port: 8600
|token: some-token
|includeTag: true
|useHealthCheck: true
|fixedLengthStreamedAfterKB: 5192
|healthStatuses:
| - warning
|setHost: true
|consistencyMode: stale
|failFast: true
|preferServiceAddress: false
|weights:
| - tag: primary
| weight: 100
|tls:
| disableValidation: false
| commonName: consul.io
| trustCertsBundle: /certificates/cacerts-bundle.pem
| clientAuth:
| certPath: /certificates/cert.pem
| keyPath: /certificates/key.pem
|transferMetadata: true
""".stripMargin
val mapper = Parser.objectMapper(yaml, Iterable(Seq(ConsulInitializer)))
val consul = mapper.readValue[NamerConfig](yaml).asInstanceOf[ConsulConfig]
assert(consul.host == Some("consul.site.biz"))
assert(consul.port == Some(Port(8600)))
assert(consul.token == Some("some-token"))
assert(consul.useHealthCheck == Some(true))
assert(consul.healthStatuses == Some(Set(HealthStatus.Warning)))
assert(consul.setHost == Some(true))
assert(consul.includeTag == Some(true))
assert(consul.consistencyMode == Some(ConsistencyMode.Stale))
assert(consul.failFast == Some(true))
assert(consul.preferServiceAddress == Some(false))
assert(consul.fixedLengthStreamedAfterKB == Some(5192))
assert(consul.weights == Some(Seq(TagWeight("primary", 100.0))))
val clientAuth = ClientAuth("/certificates/cert.pem", None, "/certificates/key.pem")
val tlsConfig = TlsClientConfig(None, Some(false), Some("consul.io"), None, Some("/certificates/cacerts-bundle.pem"), Some(clientAuth))
assert(consul.tls == Some(tlsConfig))
assert(!consul.disabled)
assert(consul.transferMetadata == Some(true))
}
}
```
|
My Date with a Vampire II is a 2000 Hong Kong television series produced by Asia Television (ATV) as a follow-up to My Date with a Vampire (1998), though it serves as a reboot of the original rather than a direct continuation. It was followed by My Date with a Vampire III in 2004. The series starred many cast members from the first season in similar roles. Like the first season, My Date with a Vampire II blends aspects of the Chinese "hopping" corpses of jiangshi fiction with those of western vampires, while incorporating elements of Chinese mythology, modern horror legends, and eschatology.
Plot
The second season opens with a duel between Fong Kwok-wah, a Chinese guerrilla fighter, and Yamamoto Kazuo, a major of the Imperial Japanese Army, during the Second Sino-Japanese War as shown in the first season. They attract the attention of Cheung-San, the progenitor of all vampires, who interrupts their duel. Just as Cheung-San is about to bite the two men and turn them into vampires, sorceress Ma Dan-na appears and drives Cheung-San away, thus sparing Fong and Yamamoto from their fates. Shortly after that, Fong agrees to help Ma hunt down and destroy Cheung-San. They succeed in trapping Cheung-San but he breaks free and bites Fong and the boy Fuk-sang, turning them into vampires.
Fong and Fuk-sang are still alive in the present day (2000) and their physical appearances have not changed in more than 60 years. Before Fong became a vampire, he had already started a family, so he now has a grandson, Tin-yau. After Tin-yau dies in an incident in England, Fong takes over his grandson's identity. He meets and starts a romance with Ma Siu-ling, a descendant of Ma Dan-na and the heiress to the Ma clan, who have dedicated themselves to ridding the world of evil supernatural beings.
Nüwa, the goddess who created humankind in Chinese mythology, feels very disappointed and heartbroken after seeing how her creations have been corrupted by evil, so she plans to end the world on 2 January 2001. Cheung-San, who has lived long before the world came into existence, is in love with Nüwa. Fong, Ma and their allies learn about Nüwa's plan and intend to stop her from ending the world. However, that brings them into conflict with not only the goddess herself, but also Cheung-San and his followers.
Vampires
See My Date with a Vampire#Vampires for a description of the vampires depicted in this television series.
Cast
Eric Wan as Fong Kwok-wah (), the protagonist, formerly a Chinese guerrilla fighter during the Second Sino-Japanese War, who was bitten by Cheung-San and became a second-generation vampire with superhuman speed as his special power. He takes on the identity of his grandson, Fong Tin-yau (), after the latter is killed in England.
Eric Wan also portrayed Fong Chung-tong (), Fong Kwok-wah's previous incarnation as a general serving under Qin Shi Huang.
Joey Meng as Ma Siu-ling (), the heiress to the Ma clan and Fong Tin-yau's lover.
Joey Meng also portrayed Ma Dan-na (), Ma Siu-ling's grandaunt and predecessor.
Joey Meng also portrayed Ma Ling-yi (), Ma Siu-ling's previous incarnation and founder of the Ma clan who served as a sorceress under Qin Shi Huang.
Simon Yam as Cheung-San (), the primary antagonist and progenitor of all vampires. Having lived long before the world came into existence, he is in love with Nüwa and would do anything to protect her from harm. At one point, he disguised himself as a normal man, Geung Chan-tso (), in order to better understand human nature.
Ruby Wong as Nüwa (), the goddess who created humankind in Chinese mythology. She decides to end the world after seeing how people have been corrupted by evil.
Angie Cheong as Ma Ding-dong (), Ma Siu-ling's aunt and Ma Dan-na's niece. She had a romantic relationship with Geung Chan-tso at one point without knowing that he was actually Cheung-San.
Kristy Yang as Wong Jan-jan (), Ma Siu-ling's close friend who starts a romantic relationship with Sze-to Fan-yan.
Kristy Yang also appears in flashbacks as Yamamoto Yuki (), Yamamoto Kazuo's wife.
Kenneth Chan as Yamamoto Kazuo (), a major of the Imperial Japanese Army and Fong Kwok-wah's rival. Although he committed suicide, a younger clone of him was accidentally created and lives on as Sze-to Fan-yan (), who later becomes a second-generation vampire after Cheung-San bites him.
Cheung Kwok-kuen as Fuk-sang (), a boy who was bitten by Cheung-San and became a second-generation vampire with the special ability to temporarily change people's appearances.
Wong Shee-tong as Ho Ying-kau (), a sorcerer who is an ally of Ma and Fong. He is the successor of Mo Siu-fong, the protagonist in Vampire Expert.
Chapman To as Kam Ching-chung (), Ma Siu-ling's apprentice who starts a romance with the ghost Sadako.
Pinky Cheung as Kam Mei-loi (), a distant relative of Kam Ching-chung. She falls in love with Domoto Sei, who bites her and turns her into a fourth-generation vampire to save her life after she is mortally wounded. Her special power allows her to fire bullets from her fingers.
Berg Ng as Domoto Sei (), Yamamoto's maternal grandson who, after being bitten by Lei Wai-si, becomes a third-generation vampire with the special power to enter people's dreams. He starts a romance with Kam Mei-loi and has a son, El Niño, with her.
Joey Leung as Domoto El Niño (), Domoto Sei and Kam Mei-loi's son. As he is the offspring of two vampires, he is born with supernatural powers, including extraordinarily high IQ. He helps Fong and Ma open the Pangu Tomb, which contains a secret weapon that can destroy Nüwa.
Sin Ho-ying as Master Peacock (), a Japanese Buddhist monk from Mount Kōya in Japan.
Saeki Hinako as Fujihara Sadako (), a vengeful ghost trapped in a computer network by Crow and forced to kill 3,000 people to fulfil a curse.
Wong Mei-fan as Yuen Mung-mung (), Wong Jan-jan and Ma Siu-ling's neighbour.
Lau Shek-yin as Unlucky Ghost (), a ghost who brings bad luck to those around him.
Chan Wai-ming as Small Mi (), one of Fuk-sang's pet cats who can take on human form after consuming a magic pearl by accident.
Juliana Yiu as Big Mi (), one of Fuk-sang's pet cats who, like Small Mi, can take on human form.
Tse Kwan-ho as Larry, a vampire who owns an ancient castle in England. He turns out to be Qin Shi Huang, who got his wish to be immortal when Xu Fu bit him and turned him into a vampire centuries ago.
Belinda Hamnett as Si-nga (), Larry's fiancée who became a fourth-generation vampire 50 years ago after Larry bit her to save her life when she was mortally wounded.
Ricky Chan as Kei-nok (), one of Blue Strength's henchmen. He is actually Xu Fu (), the alchemist sent by Qin Shi Huang to find the key to immortality. He found Cheung-San, who bit him and turned him into a second-generation vampire.
Yip Leung-choi as Lei Wai-si () / Crow (), one of Blue Strength's henchmen. 400 years ago, he was bitten by Cheung-San and became a second-generation vampire.
Ma Chung-tak as Blue Strength (), one of the Five Colours Ambassadors who represents power.
Wong Ngoi-yiu as Red Wave (), one of the Five Colours Ambassadors who represents confusion.
Elena Kong as Black Rain (), one of the Five Colours Ambassadors who represents wrath.
Wong Mei as White Vixen (), one of the Five Colours Ambassadors who represents obsession and has the ability to travel through time. She disguises herself as a woman named Pak Sum-mei ().
Lee Yun-kei as Sunny, Fong's colleague in the police force. He turns out to be Yellow (), one of the Five Colours Ambassadors who represents envy.
Asuka Higuchi as Ming-yat (), a mysterious lady who reveals that Pangu is actually a clan of divine beings who are also first-generation vampires.
Lam Chi-ho as Yau Chi-kit (), the vice CEO of an IT company and Ma Siu-ling's ex-classmate.
Lee Fei as Jenny, a vampire who serves under Larry.
Cheng Syu-fung as Lau Hoi (), Fong's boss in the police force.
Chung Yeung as Peter, Ma Ding-dong's university classmate who was killed by Cheung-San and had his soul trapped inside a mirror.
Wong Mei-ki as Mary, Chu Wing-fuk's daughter and Fuk-sang's classmate.
Keung Ho-men as Chu Wing-fuk (), Pak Sum-mei's fiancé who turns out to be a serial killer who murdered all his former lovers in order to seize their fortunes for himself.
See also
List of vampire television series
External links
My Date with a Vampire II official page on ATV's website
1999 Hong Kong television series debuts
Asia Television original programming
Vampires in television
Jiangshi fiction
Romantic fantasy television series
|
Modernista! was a creative and communications agency based in Boston, Massachusetts. The agency represented Sears, Showtime,
Product (RED), the National Park Foundation, Nickelodeon, Food Should Taste Good, Chug, Sophos, Jack Wills, Boulder Digital Works, Doc to Dock, Stop Handgun Violence, the Art Institute of Boston, General Motors, and TIAA-CREF.
In its later years, the agency increasingly focused on interactive design. It also involved itself in other types of creative projects, including music videos, magazine redesign, and alternate reality gaming. According to co-founder Gary Koepke, the agency modified its business model and operated less as an advertising agency than as a creative think tank.
History
The agency was launched in January 2000 by Gary Koepke and Lance Jensen, both from marketing/advertising backgrounds; it has since closed. Jensen left the company in December 2010 and went on to work for Hill Holliday.
In October 2000, Modernista! was appointed agency of record for General Motors’ HUMMER, and in 2006 it was named the agency for GM’s Cadillac. The relationship with both brands ended when GM went bankrupt in 2009. Following this, Modernista! revamped its organization and became much more focused on digital marketing and communications than it had been.
Activities
According to Bruce Horovitz, writing in the November 4, 2008 issue of USA Today, Modernista! was known "for linking music from often obscure artists with compelling visuals in its TV spots." In the print arena, one of the agency’s most visible projects involved the display of advertisements for the Stop Handgun Violence gun-control organization on a 252-foot-long billboard erected along the Massachusetts Turnpike in Boston. Digital projects included web sites and microsites for Sears, HUMMER, Cadillac, Nickelodeon, Boulder Digital Works, Product (RED), the National Park Foundation, and the Art Institute of Boston.
In efforts outside traditional advertising, Modernista! created music videos for David Bowie ("Slow Burn," 2006) and U2 ("Window in the Skies," 2007) and designed computer visuals for concerts by trance DJ Paul Oakenfold. In 2008, the agency redesigned BusinessWeek magazine. It has also done strategic planning consulting for the Bill & Melinda Gates Foundation. In 2010, the agency produced an alternate reality game leading up to season five of the Dexter TV show.
In January 2004, Modernista! was named "Regional Agency of the Year" by Adweek.
Controversies
In March 2008, Modernista! gained attention for a redesign of its own website, which displayed only a small red navigation menu overlaid on the upper left corner of social-site pages that contained information about or from Modernista!, including Facebook, Flickr, and YouTube. With each new visit to www.modernista.com, one of these social sites was chosen at random to provide the agency’s "launch page." In addition to the social sites, the agency’s blog page could appear on a random basis. In effect, outside of its blog, Modernista! referred visitors exclusively to social media to learn about the agency.
At one time, this "siteless site" (which won a Webby award in 2009) could utilize the Wikipedia entry for Modernista! as a landing page. Wikipedia’s Jimmy Wales asked Modernista! to stop using its Wikipedia article in this fashion. Following this, Wikipedia implemented technology that made it difficult to make such use of an entry, and the Modernista! site stopped routing to Wikipedia.
References
Advertising agencies of the United States
Marketing companies established in 2000
Companies based in Boston
Economy of Detroit
|
```protobuf
/*
path_to_url
Unless required by applicable law or agreed to in writing, software
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*/
syntax = "proto3";
package falco.version;
option go_package = "github.com/falcosecurity/client-go/pkg/api/version";
// This service defines an RPC call
// to request the Falco version.
service service {
rpc version(request) returns (response);
}
// The `request` message is an empty one.
message request
{
}
// The `response` message contains the version of Falco.
// It provides the whole version as a string and also
// its parts as per the semver 2.0 specification (path_to_url).
message response
{
// falco version
string version = 1;
uint32 major = 2;
uint32 minor = 3;
uint32 patch = 4;
string prerelease = 5;
string build = 6;
// falco engine version
uint32 engine_minor = 7;
string engine_fields_checksum = 8;
uint32 engine_major = 9;
uint32 engine_patch = 10;
string engine_version = 11;
}
```
|
```makefile
#/
# @license Apache-2.0
#
#
#
# path_to_url
#
# Unless required by applicable law or agreed to in writing, software
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#/
# VARIABLES #
# Define the path of the executable for [node-gyp][1].
#
# [1]: path_to_url
NODE_GYP ?= $(BIN_DIR)/node-gyp
# Define command-line options when invoking node-gyp.
NODE_GYP_FLAGS ?=
# Define GYP "defines":
ifndef NODE_GYP_DEFINES
NODE_GYP_DEFINES := fortran_compiler=$(FC)
ifneq (, $(BLAS))
NODE_GYP_DEFINES += blas=$(BLAS)
ifdef BLAS_DIR
NODE_GYP_DEFINES += blas_dir=$(BLAS_DIR)
endif
endif
endif
# Define an add-on package pattern filter:
ifndef NODE_ADDONS_PATTERN
node_addons_pattern := **/package.json
else
node_addons_pattern := "**/$(NODE_ADDONS_PATTERN)/**/package.json"
endif
# Define command-line flags when listing add-ons:
install_node_addons_list_addons_flags := "--pattern $(node_addons_pattern)"
# RULES #
#/
# Installs Node.js native add-ons.
#
# ## Notes
#
# - When `FAST_FAIL` is `0`, if unable to install a native add-on, the recipe prints an error message and proceeds to try installing the next add-on.
#
# @param {integer} [FAST_FAIL] - flag indicating whether to immediately exit if unable to install a native add-on (default: 1)
#
# @example
# make install-node-addons
#
# @example
# FAST_FAIL=0 make install-node-addons
#
# @example
# FAST_FAIL=1 make install-node-addons
#/
install-node-addons: $(NODE_MODULES) clean-node-addons
ifneq ($(FAST_FAIL), 0)
$(QUIET) $(MAKE) LIST_PKGS_ADDONS_FLAGS=$(install_node_addons_list_addons_flags) -f $(this_file) list-pkgs-addons | while read -r pkg; do \
if echo "$$pkg" | grep -v '^\/.*\|^[a-zA-Z]:.*' >/dev/null; then \
continue; \
fi; \
echo ''; \
echo "Building add-on: $$pkg"; \
cd $$pkg && \
MAKEFLAGS= \
NODE_PATH="$(NODE_PATH)" \
GYP_DEFINES="$(NODE_GYP_DEFINES)" \
$(NODE_GYP) $(NODE_GYP_FLAGS) rebuild \
|| { echo "Error: failed to build add-on: $$pkg"; exit 1; }; \
done
else
$(QUIET) $(MAKE) LIST_PKGS_ADDONS_FLAGS=$(install_node_addons_list_addons_flags) -f $(this_file) list-pkgs-addons | while read -r pkg; do \
if echo "$$pkg" | grep -v '^\/.*\|^[a-zA-Z]:.*' >/dev/null; then \
continue; \
fi; \
echo ''; \
echo "Building add-on: $$pkg"; \
cd $$pkg && \
MAKEFLAGS= \
NODE_PATH="$(NODE_PATH)" \
GYP_DEFINES="$(NODE_GYP_DEFINES)" \
$(NODE_GYP) $(NODE_GYP_FLAGS) rebuild \
|| { echo "Error: failed to build add-on: $$pkg"; continue; }; \
done
endif
.PHONY: install-node-addons
#/
# Removes all compiled and generated files for Node.js native add-ons.
#
# ## Notes
#
# - When `FAST_FAIL` is `0`, if unable to clean a native add-on, the recipe prints an error message and proceeds to try cleaning the next add-on.
#
# @param {integer} [FAST_FAIL] - flag indicating whether to immediately exit if unable to clean a native add-on (default: 1)
#
# @example
# make clean-node-addons
#
# @example
# FAST_FAIL=0 make clean-node-addons
#
# @example
# FAST_FAIL=1 make clean-node-addons
#/
clean-node-addons: $(NODE_MODULES)
ifneq ($(FAST_FAIL), 0)
$(QUIET) $(MAKE) LIST_PKGS_ADDONS_FLAGS=$(install_node_addons_list_addons_flags) -f $(this_file) list-pkgs-addons | while read -r pkg; do \
if echo "$$pkg" | grep -v '^\/.*\|^[a-zA-Z]:.*' >/dev/null; then \
continue; \
fi; \
echo ''; \
echo "Cleaning add-on: $$pkg"; \
cd $$pkg/src && $(MAKE) clean && \
cd $$pkg && $(NODE_GYP) clean \
|| { echo "Error: failed to clean add-on: $$pkg"; exit 1; }; \
done
else
$(QUIET) $(MAKE) LIST_PKGS_ADDONS_FLAGS=$(install_node_addons_list_addons_flags) -f $(this_file) list-pkgs-addons | while read -r pkg; do \
if echo "$$pkg" | grep -v '^\/.*\|^[a-zA-Z]:.*' >/dev/null; then \
continue; \
fi; \
echo ''; \
echo "Cleaning add-on: $$pkg"; \
cd $$pkg/src && $(MAKE) clean && \
cd $$pkg && $(NODE_GYP) clean \
|| { echo "Error: failed to clean add-on: $$pkg"; continue; }; \
done
endif
.PHONY: clean-node-addons
```
|
What's On Kyiv or What's On Kiev was a weekly, then monthly, then online English-language magazine published in Ukraine's capital Kyiv which covered both Kyiv and Ukraine at large. The magazine is now defunct.
History
What's On was founded in 1999; its first Editor-in-Chief was Amanda Pitt. Peter Dickinson then edited between 2001 and 2007, followed by Neil Campbell. Campbell handed over editorial duties in 2011 to Lana Nicole Niland. What's On was then owned by PAN Publishing, which also published Panorama, the in-flight magazine of Ukraine International Airlines. What's On was read widely in the expatriate (business and diplomatic) community, and distributed free of charge directly to embassies and businesses, as well as around the city. It was also read by large numbers of English-speaking Ukrainians. The magazine featured news articles as well as pieces on Ukrainian society, culture, politics, history, business and showbusiness, and interviews with prominent people either Ukrainian or connected to Ukraine. What's On also had a travel and nightlife section, as well as restaurant reviews and full Kyiv entertainment listings.
In 2012, British journalist Graham Phillips worked at the magazine. What's On Kyiv ceased regular publication after Euromaidan in 2013/14, which the magazine had supported. In the summer of 2014 a special edition of What's On was published, the "Chronicle of a Revolution" - a compendium of the news and photo coverage and some feature articles that had been published in What's On during the 12-week period of Euromaidan.
After a break from early 2014, What's On returned in September 2017 as a monthly publication owned by Outpost Publishing, with Lana Nicole Niland as owner and Editor-in-Chief. In 2019 the magazine went online only, and in late 2020 it announced on its official website that it was "here in Kyiv and will be back in business with and for you soon". Since then there have been no further publications, and as of 2022 What's On is defunct.
See also
Kyiv Post
The Kyiv Independent
References
External links
1999 establishments in Ukraine
English-language magazines
Local interest magazines
Magazines established in 1999
Magazines published in Kyiv
Monthly magazines
|
Phlyctaenius is an extinct genus of placoderm fish which lived during the Devonian period in what is now New Brunswick, Canada. It was named by Traquair (1890) as a replacement for Phlyctaenium Zittel (1879), which was preoccupied.
One species, P. anglicus, known from remains found in England and Wales, was initially described by Traquair (1890). It was moved to Heightingtonaspis when that genus was erected by White (1969).
References
Placoderms of North America
Phlyctaeniidae
Devonian animals
Late Devonian animals
Fossil taxa described in 1890
Fossil taxa described in 1879
|
Delton Stevano Wohon (born September 16, 1992) is an Indonesian former footballer.
Club statistics
References
External links
1992 births
Men's association football midfielders
Living people
Footballers from Jakarta
Minahasa people
Indonesian men's footballers
Indonesian Premier Division players
Liga 1 (Indonesia) players
Persija Jakarta players
Celebest F.C. players
|
```javascript
const router = require('express').Router();
const UserController = require('../../controller/user.controller');
router.post('/register-user', UserController.register);
router.post('/login', UserController.login);
router.post('/google-sign-in', UserController.googleSignIn);
router.post('/change-password', UserController.changePassword);
module.exports = router;
```
|
```go
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
//
// path_to_url
//
// Unless required by applicable law or agreed to in writing, software
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
package decimal256_test
import (
"fmt"
"math"
"math/big"
"strings"
"testing"
"github.com/apache/arrow/go/v18/arrow/decimal256"
"github.com/stretchr/testify/assert"
)
func TestFromU64(t *testing.T) {
for _, tc := range []struct {
v uint64
want decimal256.Num
sign int
}{
{0, decimal256.New(0, 0, 0, 0), 0},
{1, decimal256.New(0, 0, 0, 1), +1},
{2, decimal256.New(0, 0, 0, 2), +1},
{math.MaxInt64, decimal256.New(0, 0, 0, math.MaxInt64), +1},
{math.MaxUint64, decimal256.New(0, 0, 0, math.MaxUint64), +1},
} {
t.Run(fmt.Sprintf("%+0#x", tc.v), func(t *testing.T) {
v := decimal256.FromU64(tc.v)
ref := new(big.Int).SetUint64(tc.v)
if got, want := v, tc.want; got != want {
t.Fatalf("invalid value. got=%+0#x, want=%+0#x (big-int=%+0#x)", got, want, ref)
}
if got, want := v.Sign(), tc.sign; got != want {
t.Fatalf("invalid sign for %+0#x: got=%v, want=%v", v, got, want)
}
if got, want := v.Sign(), ref.Sign(); got != want {
t.Fatalf("invalid sign for %+0#x: got=%v, want=%v", v, got, want)
}
if got, want := v.Array(), tc.want.Array(); got != want {
t.Fatalf("invalid array: got=%+0#v, want=%+0#v", got, want)
}
})
}
}
func u64Cnv(i int64) uint64 { return uint64(i) }
func TestFromI64(t *testing.T) {
for _, tc := range []struct {
v int64
want decimal256.Num
sign int
}{
{0, decimal256.New(0, 0, 0, 0), 0},
{1, decimal256.New(0, 0, 0, 1), 1},
{2, decimal256.New(0, 0, 0, 2), 1},
{math.MaxInt64, decimal256.New(0, 0, 0, math.MaxInt64), 1},
{math.MinInt64, decimal256.New(math.MaxUint64, math.MaxUint64, math.MaxUint64, u64Cnv(math.MinInt64)), -1},
} {
t.Run(fmt.Sprintf("%+0#x", tc.v), func(t *testing.T) {
v := decimal256.FromI64(tc.v)
ref := big.NewInt(tc.v)
if got, want := v, tc.want; got != want {
t.Fatalf("invalid value. got=%+0#x, want=%+0#x (big-int=%+0#x)", got, want, ref)
}
if got, want := v.Sign(), tc.sign; got != want {
t.Fatalf("invalid sign for %+0#x: got=%v, want=%v", v, got, want)
}
if got, want := v.Sign(), ref.Sign(); got != want {
t.Fatalf("invalid sign for %+0#x: got=%v, want=%v", v, got, want)
}
if got, want := v.Array(), tc.want.Array(); got != want {
t.Fatalf("invalid array: got=%+0#v, want=%+0#v", got, want)
}
})
}
}
func TestAdd(t *testing.T) {
for _, tc := range []struct {
n decimal256.Num
rhs decimal256.Num
want decimal256.Num
}{
{decimal256.New(0, 0, 0, 1), decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, 0, 3)},
{decimal256.New(0, 0, 1, 0), decimal256.New(0, 0, 2, 0), decimal256.New(0, 0, 3, 0)},
{decimal256.New(0, 1, 0, 0), decimal256.New(0, 2, 0, 0), decimal256.New(0, 3, 0, 0)},
{decimal256.New(1, 0, 0, 0), decimal256.New(2, 0, 0, 0), decimal256.New(3, 0, 0, 0)},
{decimal256.New(0, 0, 2, 1), decimal256.New(0, 0, 1, 2), decimal256.New(0, 0, 3, 3)},
{decimal256.New(0, 2, 1, 0), decimal256.New(0, 1, 2, 0), decimal256.New(0, 3, 3, 0)},
{decimal256.New(2, 1, 0, 0), decimal256.New(1, 2, 0, 0), decimal256.New(3, 3, 0, 0)},
{decimal256.New(0, 0, 0, 1), decimal256.New(0, 0, 0, math.MaxUint64), decimal256.New(0, 0, 1, 0)},
{decimal256.New(0, 0, 0, math.MaxUint64), decimal256.New(0, 0, 0, 1), decimal256.New(0, 0, 1, 0)},
{decimal256.New(0, 0, 1, 0), decimal256.New(0, 0, math.MaxUint64, 0), decimal256.New(0, 1, 0, 0)},
{decimal256.New(0, 0, math.MaxUint64, 0), decimal256.New(0, 0, 1, 0), decimal256.New(0, 1, 0, 0)},
{decimal256.New(0, 1, 0, 0), decimal256.New(0, math.MaxUint64, 0, 0), decimal256.New(1, 0, 0, 0)},
{decimal256.New(0, math.MaxUint64, 0, 0), decimal256.New(0, 1, 0, 0), decimal256.New(1, 0, 0, 0)},
{decimal256.New(0, 0, 0, 1), decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 0, 1)},
{decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 0, 1), decimal256.New(0, 0, 0, 1)},
{decimal256.New(0, 0, 1, 0), decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 1, 0)},
{decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 1, 0), decimal256.New(0, 0, 1, 0)},
{decimal256.New(0, 1, 0, 0), decimal256.New(0, 0, 0, 0), decimal256.New(0, 1, 0, 0)},
{decimal256.New(0, 0, 0, 0), decimal256.New(0, 1, 0, 0), decimal256.New(0, 1, 0, 0)},
{decimal256.New(1, 0, 0, 0), decimal256.New(0, 0, 0, 0), decimal256.New(1, 0, 0, 0)},
{decimal256.New(0, 0, 0, 0), decimal256.New(1, 0, 0, 0), decimal256.New(1, 0, 0, 0)},
} {
t.Run("add", func(t *testing.T) {
n := tc.n.Add(tc.rhs)
if got, want := n, tc.want; got != want {
t.Fatalf("invalid value. got=%v, want=%v", got, want)
}
})
}
}
func TestSub(t *testing.T) {
for _, tc := range []struct {
n decimal256.Num
rhs decimal256.Num
want decimal256.Num
}{
{decimal256.New(0, 0, 0, 3), decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, 0, 1)},
{decimal256.New(0, 0, 3, 0), decimal256.New(0, 0, 2, 0), decimal256.New(0, 0, 1, 0)},
{decimal256.New(0, 3, 0, 0), decimal256.New(0, 2, 0, 0), decimal256.New(0, 1, 0, 0)},
{decimal256.New(3, 0, 0, 0), decimal256.New(2, 0, 0, 0), decimal256.New(1, 0, 0, 0)},
{decimal256.New(0, 0, 3, 3), decimal256.New(0, 0, 1, 2), decimal256.New(0, 0, 2, 1)},
{decimal256.New(0, 3, 3, 0), decimal256.New(0, 1, 2, 0), decimal256.New(0, 2, 1, 0)},
{decimal256.New(3, 3, 0, 0), decimal256.New(1, 2, 0, 0), decimal256.New(2, 1, 0, 0)},
{decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 0, math.MaxUint64), decimal256.New(math.MaxUint64, math.MaxUint64, math.MaxUint64, 1)},
{decimal256.New(0, 0, 1, 0), decimal256.New(0, 0, 0, math.MaxUint64), decimal256.New(0, 0, 0, 1)},
{decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, math.MaxUint64, 0), decimal256.New(math.MaxUint64, math.MaxUint64, 1, 0)},
{decimal256.New(0, 1, 0, 0), decimal256.New(0, 0, math.MaxUint64, 0), decimal256.New(0, 0, 1, 0)},
{decimal256.New(0, 0, 0, 0), decimal256.New(0, math.MaxUint64, 0, 0), decimal256.New(math.MaxUint64, 1, 0, 0)},
{decimal256.New(1, 0, 0, 0), decimal256.New(0, math.MaxUint64, 0, 0), decimal256.New(0, 1, 0, 0)},
{decimal256.New(0, 0, 0, 1), decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 0, 1)},
{decimal256.New(0, 0, 1, 0), decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 1, 0)},
{decimal256.New(0, 1, 0, 0), decimal256.New(0, 0, 0, 0), decimal256.New(0, 1, 0, 0)},
{decimal256.New(1, 0, 0, 0), decimal256.New(0, 0, 0, 0), decimal256.New(1, 0, 0, 0)},
{decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 0, 1), decimal256.New(math.MaxUint64, math.MaxUint64, math.MaxUint64, math.MaxUint64)},
{decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 1, 0), decimal256.New(math.MaxUint64, math.MaxUint64, math.MaxUint64, 0)},
{decimal256.New(0, 0, 0, 0), decimal256.New(0, 1, 0, 0), decimal256.New(math.MaxUint64, math.MaxUint64, 0, 0)},
{decimal256.New(0, 0, 0, 0), decimal256.New(1, 0, 0, 0), decimal256.New(math.MaxUint64, 0, 0, 0)},
} {
t.Run("sub", func(t *testing.T) {
n := tc.n.Sub(tc.rhs)
if got, want := n, tc.want; got != want {
t.Fatalf("invalid value. got=%v, want=%v", got, want)
}
})
}
}
func TestMul(t *testing.T) {
for _, tc := range []struct {
n decimal256.Num
rhs decimal256.Num
want decimal256.Num
}{
{decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, 0, 3), decimal256.New(0, 0, 0, 6)},
{decimal256.New(0, 0, 2, 0), decimal256.New(0, 0, 0, 3), decimal256.New(0, 0, 6, 0)},
{decimal256.New(0, 2, 0, 0), decimal256.New(0, 0, 0, 3), decimal256.New(0, 6, 0, 0)},
{decimal256.New(2, 0, 0, 0), decimal256.New(0, 0, 0, 3), decimal256.New(6, 0, 0, 0)},
{decimal256.New(0, 0, 3, 3), decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, 6, 6)},
{decimal256.New(0, 3, 3, 0), decimal256.New(0, 0, 0, 2), decimal256.New(0, 6, 6, 0)},
{decimal256.New(3, 3, 0, 0), decimal256.New(0, 0, 0, 2), decimal256.New(6, 6, 0, 0)},
{decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, 3, 3), decimal256.New(0, 0, 6, 6)},
{decimal256.New(0, 0, 2, 0), decimal256.New(0, 0, 3, 3), decimal256.New(0, 6, 6, 0)},
{decimal256.New(0, 2, 0, 0), decimal256.New(0, 0, 3, 3), decimal256.New(6, 6, 0, 0)},
{decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, 0, math.MaxUint64), decimal256.New(0, 0, 1, math.MaxUint64-1)},
{decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, math.MaxUint64, 0), decimal256.New(0, 1, math.MaxUint64-1, 0)},
{decimal256.New(0, 0, 0, 2), decimal256.New(0, math.MaxUint64, 0, 0), decimal256.New(1, math.MaxUint64-1, 0, 0)},
{decimal256.New(0, 0, 0, 1), decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 0, 0)},
{decimal256.New(0, 0, 1, 0), decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 0, 0)},
{decimal256.New(0, 1, 0, 0), decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 0, 0)},
{decimal256.New(1, 0, 0, 0), decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 0, 0)},
{decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 0, 1), decimal256.New(0, 0, 0, 0)},
{decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 1, 0), decimal256.New(0, 0, 0, 0)},
{decimal256.New(0, 0, 0, 0), decimal256.New(0, 1, 0, 0), decimal256.New(0, 0, 0, 0)},
{decimal256.New(0, 0, 0, 0), decimal256.New(1, 0, 0, 0), decimal256.New(0, 0, 0, 0)},
} {
t.Run("mul", func(t *testing.T) {
n := tc.n.Mul(tc.rhs)
if got, want := n, tc.want; got != want {
t.Fatalf("invalid value. got=%v, want=%v", got, want)
}
})
}
}
func TestDiv(t *testing.T) {
for _, tc := range []struct {
n decimal256.Num
rhs decimal256.Num
want_res decimal256.Num
want_rem decimal256.Num
}{
{decimal256.New(0, 0, 0, 3), decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, 0, 1), decimal256.New(0, 0, 0, 1)},
{decimal256.New(0, 0, 3, 0), decimal256.New(0, 0, 2, 0), decimal256.New(0, 0, 0, 1), decimal256.New(0, 0, 1, 0)},
{decimal256.New(0, 3, 0, 0), decimal256.New(0, 2, 0, 0), decimal256.New(0, 0, 0, 1), decimal256.New(0, 1, 0, 0)},
{decimal256.New(3, 0, 0, 0), decimal256.New(2, 0, 0, 0), decimal256.New(0, 0, 0, 1), decimal256.New(1, 0, 0, 0)},
{decimal256.New(0, 0, 3, 2), decimal256.New(0, 0, 2, 3), decimal256.New(0, 0, 0, 1), decimal256.New(0, 0, 0, math.MaxUint64)},
{decimal256.New(0, 3, 2, 0), decimal256.New(0, 2, 3, 0), decimal256.New(0, 0, 0, 1), decimal256.New(0, 0, math.MaxUint64, 0)},
{decimal256.New(3, 2, 0, 0), decimal256.New(2, 3, 0, 0), decimal256.New(0, 0, 0, 1), decimal256.New(0, math.MaxUint64, 0, 0)},
{decimal256.New(0, 0, 0, math.MaxUint64), decimal256.New(0, 0, 0, 1), decimal256.New(0, 0, 0, math.MaxUint64), decimal256.New(0, 0, 0, 0)},
{decimal256.New(0, 0, math.MaxUint64, 0), decimal256.New(0, 0, 0, 1), decimal256.New(0, 0, math.MaxUint64, 0), decimal256.New(0, 0, 0, 0)},
{decimal256.New(0, math.MaxUint64, 0, 0), decimal256.New(0, 0, 0, 1), decimal256.New(0, math.MaxUint64, 0, 0), decimal256.New(0, 0, 0, 0)},
{decimal256.New(math.MaxUint64, 0, 0, 0), decimal256.New(0, 0, 0, 1), decimal256.New(math.MaxUint64, 0, 0, 0), decimal256.New(0, 0, 0, 0)},
{decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 0, 1), decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 0, 0)},
} {
t.Run("div", func(t *testing.T) {
res, rem := tc.n.Div(tc.rhs)
if got, want := res, tc.want_res; got != want {
t.Fatalf("invalid res value. got=%v, want=%v", got, want)
}
if got, want := rem, tc.want_rem; got != want {
t.Fatalf("invalid rem value. got=%v, want=%v", got, want)
}
})
}
}
func TestPow(t *testing.T) {
for _, tc := range []struct {
n decimal256.Num
rhs decimal256.Num
want decimal256.Num
}{
{decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, 0, 3), decimal256.New(0, 0, 0, 8)},
{decimal256.New(0, 0, 2, 0), decimal256.New(0, 0, 0, 3), decimal256.New(8, 0, 0, 0)},
{decimal256.New(0, 0, 2, 2), decimal256.New(0, 0, 0, 3), decimal256.New(8, 24, 24, 8)},
{decimal256.New(0, 0, 0, 1), decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 0, 1)},
{decimal256.New(0, 0, 0, 0), decimal256.New(0, 0, 0, 1), decimal256.New(0, 0, 0, 0)},
} {
t.Run("pow", func(t *testing.T) {
n := tc.n.Pow(tc.rhs)
if got, want := n, tc.want; got != want {
t.Fatalf("invalid value. got=%v, want=%v", got, want)
}
})
}
}
func TestMax(t *testing.T) {
for _, tc := range []struct {
n decimal256.Num
rhs []decimal256.Num
want decimal256.Num
}{
{decimal256.New(0, 0, 0, 2), []decimal256.Num{decimal256.New(8, 4, 2, 1), decimal256.New(9, 0, 0, 8), decimal256.New(0, 17, 0, 0)}, decimal256.New(9, 0, 0, 8)},
{decimal256.New(0, 0, 0, 10), []decimal256.Num{decimal256.New(0, 4, 0, 1), decimal256.New(0, 0, 0, 8), decimal256.New(0, 0, 3, 0)}, decimal256.New(0, 4, 0, 1)},
} {
t.Run("max", func(t *testing.T) {
n := decimal256.Max(tc.n, tc.rhs...)
if got, want := n, tc.want; got != want {
t.Fatalf("invalid value. got=%v, want=%v", got, want)
}
})
}
}
func TestMin(t *testing.T) {
for _, tc := range []struct {
n decimal256.Num
rhs []decimal256.Num
want decimal256.Num
}{
{decimal256.New(0, 0, 0, 2), []decimal256.Num{decimal256.New(8, 4, 2, 1), decimal256.New(9, 0, 0, 8), decimal256.New(0, 17, 0, 0)}, decimal256.New(0, 0, 0, 2)},
{decimal256.New(0, 0, 0, 10), []decimal256.Num{decimal256.New(0, 4, 0, 1), decimal256.New(0, 0, 0, 8), decimal256.New(0, 0, 3, 0)}, decimal256.New(0, 0, 0, 8)},
} {
t.Run("min", func(t *testing.T) {
n := decimal256.Min(tc.n, tc.rhs...)
if got, want := n, tc.want; got != want {
t.Fatalf("invalid value. got=%v, want=%v", got, want)
}
})
}
}
func TestGreater(t *testing.T) {
for _, tc := range []struct {
n decimal256.Num
rhs decimal256.Num
want bool
}{
{decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, 0, 1), true},
{decimal256.New(0, 0, 2, 0), decimal256.New(0, 0, 1, 0), true},
{decimal256.New(0, 2, 0, 0), decimal256.New(0, 1, 0, 0), true},
{decimal256.New(2, 0, 0, 0), decimal256.New(1, 0, 0, 0), true},
{decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, 0, 3), false},
{decimal256.New(0, 0, 2, 0), decimal256.New(0, 0, 3, 0), false},
{decimal256.New(0, 2, 0, 0), decimal256.New(0, 3, 0, 0), false},
{decimal256.New(2, 0, 0, 0), decimal256.New(3, 0, 0, 0), false},
{decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, 0, 2), false},
{decimal256.New(0, 0, 2, 0), decimal256.New(0, 0, 2, 0), false},
{decimal256.New(0, 2, 0, 0), decimal256.New(0, 2, 0, 0), false},
{decimal256.New(2, 0, 0, 0), decimal256.New(2, 0, 0, 0), false},
{decimal256.New(0, 0, 2, math.MaxUint64), decimal256.New(0, 0, 2, 1), true},
{decimal256.New(0, 2, math.MaxUint64, 0), decimal256.New(0, 2, 1, 0), true},
{decimal256.New(2, math.MaxUint64, 0, 0), decimal256.New(2, 1, 0, 0), true},
{decimal256.New(0, 0, 2, math.MaxUint64), decimal256.New(0, 0, 3, 1), false},
{decimal256.New(0, 2, math.MaxUint64, 0), decimal256.New(0, 3, 1, 0), false},
{decimal256.New(2, math.MaxUint64, 0, 0), decimal256.New(3, 1, 0, 0), false},
{decimal256.New(0, 0, 2, math.MaxUint64), decimal256.New(0, 0, 2, math.MaxUint64), false},
{decimal256.New(0, 2, math.MaxUint64, 0), decimal256.New(0, 2, math.MaxUint64, 0), false},
{decimal256.New(2, math.MaxUint64, 0, 0), decimal256.New(2, math.MaxUint64, 0, 0), false},
} {
t.Run("greater", func(t *testing.T) {
n := tc.n.Greater(tc.rhs)
if got, want := n, tc.want; got != want {
t.Fatalf("invalid value. got=%v, want=%v", got, want)
}
})
}
}
func TestLess(t *testing.T) {
for _, tc := range []struct {
n decimal256.Num
rhs decimal256.Num
want bool
}{
{decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, 0, 1), false},
{decimal256.New(0, 0, 2, 0), decimal256.New(0, 0, 1, 0), false},
{decimal256.New(0, 2, 0, 0), decimal256.New(0, 1, 0, 0), false},
{decimal256.New(2, 0, 0, 0), decimal256.New(1, 0, 0, 0), false},
{decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, 0, 3), true},
{decimal256.New(0, 0, 2, 0), decimal256.New(0, 0, 3, 0), true},
{decimal256.New(0, 2, 0, 0), decimal256.New(0, 3, 0, 0), true},
{decimal256.New(2, 0, 0, 0), decimal256.New(3, 0, 0, 0), true},
{decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, 0, 2), false},
{decimal256.New(0, 0, 2, 0), decimal256.New(0, 0, 2, 0), false},
{decimal256.New(0, 2, 0, 0), decimal256.New(0, 2, 0, 0), false},
{decimal256.New(2, 0, 0, 0), decimal256.New(2, 0, 0, 0), false},
{decimal256.New(0, 0, 2, math.MaxUint64), decimal256.New(0, 0, 2, 1), false},
{decimal256.New(0, 2, math.MaxUint64, 0), decimal256.New(0, 2, 1, 0), false},
{decimal256.New(2, math.MaxUint64, 0, 0), decimal256.New(2, 1, 0, 0), false},
{decimal256.New(0, 0, 2, math.MaxUint64), decimal256.New(0, 0, 3, 1), true},
{decimal256.New(0, 2, math.MaxUint64, 0), decimal256.New(0, 3, 1, 0), true},
{decimal256.New(2, math.MaxUint64, 0, 0), decimal256.New(3, 1, 0, 0), true},
{decimal256.New(0, 0, 2, math.MaxUint64), decimal256.New(0, 0, 2, math.MaxUint64), false},
{decimal256.New(0, 2, math.MaxUint64, 0), decimal256.New(0, 2, math.MaxUint64, 0), false},
{decimal256.New(2, math.MaxUint64, 0, 0), decimal256.New(2, math.MaxUint64, 0, 0), false},
} {
t.Run("less", func(t *testing.T) {
n := tc.n.Less(tc.rhs)
if got, want := n, tc.want; got != want {
t.Fatalf("invalid value. got=%v, want=%v", got, want)
}
})
}
}
func TestCmp(t *testing.T) {
for _, tc := range []struct {
n decimal256.Num
rhs decimal256.Num
want int
}{
{decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, 0, 1), 1},
{decimal256.New(0, 0, 2, 0), decimal256.New(0, 0, 1, 0), 1},
{decimal256.New(0, 2, 0, 0), decimal256.New(0, 1, 0, 0), 1},
{decimal256.New(2, 0, 0, 0), decimal256.New(1, 0, 0, 0), 1},
{decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, 0, 3), -1},
{decimal256.New(0, 0, 2, 0), decimal256.New(0, 0, 3, 0), -1},
{decimal256.New(0, 2, 0, 0), decimal256.New(0, 3, 0, 0), -1},
{decimal256.New(2, 0, 0, 0), decimal256.New(3, 0, 0, 0), -1},
{decimal256.New(0, 0, 0, 2), decimal256.New(0, 0, 0, 2), 0},
{decimal256.New(0, 0, 2, 0), decimal256.New(0, 0, 2, 0), 0},
{decimal256.New(0, 2, 0, 0), decimal256.New(0, 2, 0, 0), 0},
{decimal256.New(2, 0, 0, 0), decimal256.New(2, 0, 0, 0), 0},
{decimal256.New(0, 0, 2, math.MaxUint64), decimal256.New(0, 0, 2, 1), 1},
{decimal256.New(0, 2, math.MaxUint64, 0), decimal256.New(0, 2, 1, 0), 1},
{decimal256.New(2, math.MaxUint64, 0, 0), decimal256.New(2, 1, 0, 0), 1},
{decimal256.New(0, 0, 2, math.MaxUint64), decimal256.New(0, 0, 3, 1), -1},
{decimal256.New(0, 2, math.MaxUint64, 0), decimal256.New(0, 3, 1, 0), -1},
{decimal256.New(2, math.MaxUint64, 0, 0), decimal256.New(3, 1, 0, 0), -1},
{decimal256.New(0, 0, 2, math.MaxUint64), decimal256.New(0, 0, 2, math.MaxUint64), 0},
{decimal256.New(0, 2, math.MaxUint64, 0), decimal256.New(0, 2, math.MaxUint64, 0), 0},
{decimal256.New(2, math.MaxUint64, 0, 0), decimal256.New(2, math.MaxUint64, 0, 0), 0},
} {
t.Run("cmp", func(t *testing.T) {
n := tc.n.Cmp(tc.rhs)
if got, want := n, tc.want; got != want {
t.Fatalf("invalid value. got=%v, want=%v", got, want)
}
})
}
}
func TestDecimalToBigInt(t *testing.T) {
tests := []struct {
arr [4]uint64
exp string
}{
		{[4]uint64{0, 10084168908774762496, 12965995782233477362, 159309191113245227}, "your_sha256_hash000000000000"},
		{[4]uint64{0, 8362575164934789120, 5480748291476074253, 18287434882596306388}, "your_sha256_hash0000000000000"},
		{[4]uint64{0, 0, 0, 0}, "0"},
		{[4]uint64{17877984925544397504, 5352188884907840935, 234631617561833724, 196678011949953713}, "your_sha256_hash567890123456"},
		{[4]uint64{568759148165154112, 13094555188801710680, 18212112456147717891, 18250066061759597902}, "your_sha256_hash4567890123456"},
}
for _, tc := range tests {
t.Run("", func(t *testing.T) {
n := decimal256.New(tc.arr[3], tc.arr[2], tc.arr[1], tc.arr[0])
bi := n.BigInt()
assert.Equal(t, tc.exp, bi.String())
n2 := decimal256.FromBigInt(bi)
assert.Equal(t, n2.Array(), n.Array())
})
}
}
func TestDecimalFromFloat(t *testing.T) {
tests := []struct {
val float64
precision, scale int32
expected string
}{
{0, 1, 0, "0"},
{math.Copysign(0, -1), 1, 0, "0"},
{0, 19, 4, "0.0000"},
{math.Copysign(0, -1), 19, 4, "0.0000"},
{123.0, 7, 4, "123.0000"},
{-123, 7, 4, "-123.0000"},
{456.78, 7, 4, "456.7800"},
{-456.78, 7, 4, "-456.7800"},
{456.784, 5, 2, "456.78"},
{-456.784, 5, 2, "-456.78"},
{456.786, 5, 2, "456.79"},
{-456.786, 5, 2, "-456.79"},
{999.99, 5, 2, "999.99"},
{-999.99, 5, 2, "-999.99"},
{123, 19, 0, "123"},
{-123, 19, 0, "-123"},
{123.4, 19, 0, "123"},
{-123.4, 19, 0, "-123"},
{123.6, 19, 0, "124"},
{-123.6, 19, 0, "-124"},
// 2**62
{4.611686018427387904e+18, 19, 0, "4611686018427387904"},
{-4.611686018427387904e+18, 19, 0, "-4611686018427387904"},
// 2**63
{9.223372036854775808e+18, 19, 0, "9223372036854775808"},
{-9.223372036854775808e+18, 19, 0, "-9223372036854775808"},
// 2**64
{1.8446744073709551616e+19, 20, 0, "18446744073709551616"},
{-1.8446744073709551616e+19, 20, 0, "-18446744073709551616"},
		{9.999999999999999e+75, 76, 0, "your_sha256_hash105414230016"},
		{-9.999999999999999e+75, 76, 0, "your_sha256_hash2105414230016"},
}
t.Run("float64", func(t *testing.T) {
for _, tt := range tests {
t.Run(tt.expected, func(t *testing.T) {
n, err := decimal256.FromFloat64(tt.val, tt.precision, tt.scale)
assert.NoError(t, err)
assert.Equal(t, tt.expected, big.NewFloat(n.ToFloat64(tt.scale)).Text('f', int(tt.scale)))
})
}
t.Run("large values", func(t *testing.T) {
// test entire float64 range
for scale := int32(-308); scale <= 308; scale++ {
val := math.Pow10(int(scale))
n, err := decimal256.FromFloat64(val, 1, -scale)
assert.NoError(t, err)
assert.Equal(t, "1", n.BigInt().String())
}
for scale := int32(-307); scale <= 306; scale++ {
val := 123 * math.Pow10(int(scale))
n, err := decimal256.FromFloat64(val, 2, -scale-1)
assert.NoError(t, err)
assert.Equal(t, "12", n.BigInt().String())
n, err = decimal256.FromFloat64(val, 3, -scale)
assert.NoError(t, err)
assert.Equal(t, "123", n.BigInt().String())
n, err = decimal256.FromFloat64(val, 4, -scale+1)
assert.NoError(t, err)
assert.Equal(t, "1230", n.BigInt().String())
}
})
})
t.Run("float32", func(t *testing.T) {
for _, tt := range tests {
if tt.precision > 38 {
continue
}
t.Run(tt.expected, func(t *testing.T) {
n, err := decimal256.FromFloat32(float32(tt.val), tt.precision, tt.scale)
assert.NoError(t, err)
assert.Equal(t, tt.expected, big.NewFloat(float64(n.ToFloat32(tt.scale))).Text('f', int(tt.scale)))
})
}
t.Run("large values", func(t *testing.T) {
// test entire float32 range
for scale := int32(-38); scale <= 38; scale++ {
val := float32(math.Pow10(int(scale)))
n, err := decimal256.FromFloat32(val, 1, -scale)
assert.NoError(t, err)
assert.Equal(t, "1", n.BigInt().String())
}
for scale := int32(-37); scale <= 36; scale++ {
val := 123 * float32(math.Pow10(int(scale)))
n, err := decimal256.FromFloat32(val, 2, -scale-1)
assert.NoError(t, err)
assert.Equal(t, "12", n.BigInt().String())
n, err = decimal256.FromFloat32(val, 3, -scale)
assert.NoError(t, err)
assert.Equal(t, "123", n.BigInt().String())
n, err = decimal256.FromFloat32(val, 4, -scale+1)
assert.NoError(t, err)
assert.Equal(t, "1230", n.BigInt().String())
}
})
})
}
func TestFromString(t *testing.T) {
tests := []struct {
s string
expected int64
expectedScale int32
}{
{"12.3", 123, 1},
{"0.00123", 123, 5},
{"1.23e-8", 123, 10},
{"-1.23E-8", -123, 10},
{"1.23e+3", 1230, 0},
{"-1.23E+3", -1230, 0},
{"1.23e+5", 123000, 0},
{"1.2345E+7", 12345000, 0},
{"1.23e-8", 123, 10},
{"-1.23E-8", -123, 10},
{"1.23E+3", 1230, 0},
{"-1.23e+3", -1230, 0},
{"1.23e+5", 123000, 0},
{"1.2345e+7", 12345000, 0},
{"0000000", 0, 0},
{"000.0000", 0, 4},
{".00000", 0, 5},
{"1e1", 10, 0},
{"+234.567", 234567, 3},
{"1e-37", 1, 37},
{"2112.33", 211233, 2},
{"-2112.33", -211233, 2},
{"12E2", 12, -2},
}
for _, tt := range tests {
t.Run(fmt.Sprintf("%s_%d", tt.s, tt.expectedScale), func(t *testing.T) {
n, err := decimal256.FromString(tt.s, 35, tt.expectedScale)
assert.NoError(t, err)
ex := decimal256.FromI64(tt.expected)
assert.Equal(t, ex, n)
})
}
}
// Test issues from GH-38395
func TestToString(t *testing.T) {
	const decStr = "your_sha256_hash740461239999"
integer, _ := (&big.Int{}).SetString(decStr, 10)
dec := decimal256.FromBigInt(integer)
expected := "0." + decStr
assert.Equal(t, expected, dec.ToString(int32(len(decStr))))
assert.Equal(t, decStr+"0000", dec.ToString(-4))
}
// Test issues from GH-38395
func TestHexFromString(t *testing.T) {
const decStr = "11111111111111111111111111111111111111.00000000000000000000000000000000000000"
num, err := decimal256.FromString(decStr, 76, 38)
if err != nil {
t.Error(err)
} else if decStr != num.ToString(38) {
t.Errorf("expected: %s, actual: %s\n", decStr, num.ToString(38))
actualCoeff := num.BigInt()
expectedCoeff, _ := (&big.Int{}).SetString(strings.Replace(decStr, ".", "", -1), 10)
t.Errorf("expected(hex): %X, actual(hex): %X\n", expectedCoeff.Bytes(), actualCoeff.Bytes())
}
}
func TestBitLen(t *testing.T) {
n := decimal256.GetScaleMultiplier(76)
b := n.BigInt()
b.Mul(b, big.NewInt(25))
assert.Greater(t, b.BitLen(), 255)
assert.Panics(t, func() {
decimal256.FromBigInt(b)
})
_, err := decimal256.FromString(b.String(), decimal256.MaxPrecision, 0)
assert.ErrorContains(t, err, "bitlen too large for decimal256")
_, err = decimal256.FromString(b.String(), decimal256.MaxPrecision, -1)
assert.ErrorContains(t, err, "bitlen too large for decimal256")
}
```
|
Kambur is a mountain in Suðuroy, Faroe Islands, located on the northern side of the village of Porkeri. Kambur is also visible from Hov.
References
External links
Visitsuduroy.fo, The Tourist Information Center
Porkeri.fo, The municipality of Porkeri.
Mountains of the Faroe Islands
Suðuroy
|
Benjamin Silliman (August 8, 1779 – November 24, 1864) was an early American chemist and science educator. He was one of the first American professors of science, at Yale College, the first person to use the process of fractional distillation in America, and a founder of the American Journal of Science, the oldest continuously published scientific journal in the United States.
Early life
Silliman was born in a tavern in North Stratford, now Trumbull, Connecticut, to Mary (Fish) Silliman (widow of John Noyes) and General Gold Selleck Silliman. He was born in August 1779, several months after British forces had taken his father prisoner and his mother had fled their home in Fairfield, Connecticut, to escape the 2,000 British troops who burned Fairfield's center to the ground.
Silliman was educated at Yale, receiving a B.A. degree in 1796 and an M.A. in 1799. He studied law with Simeon Baldwin from 1798 to 1799 and served as a tutor at Yale from 1799 to 1802. He was admitted to the bar in 1802. That same year he was hired by Yale President Timothy Dwight IV as a professor of chemistry and natural history. Silliman, who had never studied chemistry, prepared for the job by studying with Professor James Woodhouse at the University of Pennsylvania in Philadelphia. In 1804, he delivered his first lectures in chemistry, which were also the first science lectures ever given at Yale. In 1805, he traveled to the University of Edinburgh for further study.
Career
Returning to New Haven, he studied its geology. His chemical analysis of a meteorite that fell in 1807 near Weston, Connecticut, was the first published scientific account of an American meteorite. He lectured publicly at New Haven in 1808 and identified the constituent elements of many minerals. Sometime around 1818, Ephraim Lane took samples of rocks he had found at an area called Saganawamps, now a part of the Old Mine Park Archeological Site in Trumbull, Connecticut, to Silliman for identification. Silliman reported in his new American Journal of Science, a publication covering all the natural sciences but with an emphasis on geology, that he had identified tungsten, tellurium, topaz and fluorite in the rocks. He played a major role in the discovery of the first fossil fishes found in the United States. In 1837, the first (and at the time only) prismatic barite ore of tungsten in the United States was discovered at the mine. The mineral sillimanite was named after Silliman in 1850. Upon the founding of Yale's medical school, he also taught there as one of its founding faculty members.
In 1833 he discussed the relationship of Flood geology to the Genesis account, and also wrote about this topic in 1840.
Silliman was an early supporter of coeducation in the Ivy League. Although Yale would not admit women as students until more than 100 years later, he allowed young women into his lecture classes. His efforts convinced Frederick Barnard, later President of Columbia College, that women ought to be admitted as students. "The elder Silliman, during the entire period of his distinguished career as a Professor of Chemistry, Geology and Mineralogy in Yale College, was accustomed every year to admit to his lecture-courses classes of young women from the schools of New Haven. In that institution the undersigned had an opportunity to observe, as a student, the effect of the practice, similar to that which he afterward created for himself in Alabama, as a teacher. The results in both instances, so far as they went, were good; and they went far enough to make it evident that if the presence of young women in college, instead of being occasional, should be constant, they would be better."
American historian David McCullough mentions in his book about early 19th-century Americans in Paris that in 1825, while on a tour of Europe conferring with other scientists, Professor Benjamin Silliman encountered his former Yale science student Samuel F. B. Morse in the Louvre. McCullough also relates that Silliman would later become president of the college.
As professor emeritus, he delivered lectures on geology at Yale until 1855. Benjamin Silliman Sr. had been the first person in America to use the process of fractional distillation, and, in 1854, his son Benjamin Silliman Jr. became the first person to fractionate petroleum by distillation.
In 1864, Silliman noted oil seeps in the Ojai, California area. In 1866, this led to the start of oil exploration and development in the Ojai Basin.
Like his son-in-law James Dana, Silliman was a Christian. In an address delivered before the Association of American Geologists he spoke in favor of old-earth creationism, stating:
In the same line of thought, he posed arguments against atheism and materialism.
1807 meteor
At 6:30 on the morning of December 14, 1807, a blazing fireball, about two-thirds the apparent size of the Moon, was seen traveling southwards by early risers in Vermont and Massachusetts. Three loud explosions were heard over the town of Weston in Fairfield County, Connecticut. Stone fragments fell in at least six places. The largest and only unbroken stone of the Weston fall, which weighed 36.5 pounds (16.5 kilograms), was found some days after Silliman and his colleague Kingsley had spent several fruitless hours hunting for it. The owner, a Trumbull farmer named Elijah Seeley, was urged to present it to Yale by local people who had met the professors during their investigation, but he insisted on putting it up for sale. It was purchased by Colonel George Gibbs for his large and famous collection of minerals; when the collection became the property of Yale in 1825, Silliman finally acquired this stone, the only specimen of the Weston meteorite that remains in the Yale Peabody Museum collection today.
Personal life
His first marriage was on September 17, 1809, to Harriet Trumbull, daughter of Connecticut governor Jonathan Trumbull, Jr., himself the son of Governor Jonathan Trumbull, Sr., a hero of the American Revolution. Silliman and his wife had four children: one daughter married Professor Oliver P. Hubbard; another married Professor James Dwight Dana (Silliman's doctoral student until 1833 and assistant from 1836 to 1837); and the youngest daughter, Julia, married Edward Whiting Gilman, brother of Yale graduate and educator Daniel Coit Gilman. His son Benjamin Silliman Jr., also a professor of chemistry at Yale, wrote a report that convinced investors to back George Bissell's seminal search for oil. His second marriage was in 1851 to Mrs. Sarah Isabella (McClellan) Webb, daughter of John McClellan. Silliman died at New Haven and is buried in Grove Street Cemetery.
Legacy
Silliman deemed slavery an "enormous evil". He favored colonization of free African Americans in Liberia, serving as a board member of the Connecticut Colonization Society between 1828 and 1835. He was elected a member of the American Antiquarian Society in 1813, and an Associate Fellow of the American Academy of Arts and Sciences in 1815. Silliman founded and edited the American Journal of Science, and was appointed one of the corporate members of the National Academy of Sciences by the United States Congress. He was also a member of the American Association for the Advancement of Science.
Things named for him
Silliman College, one of Yale's residential colleges, is named for him, as is the mineral sillimanite.
In Sequoia National Park, Mount Silliman is named for him, as is Silliman Pass, a creek and two lakes below the summit of Mount Silliman.
See also
Connecticut Academy of Arts and Sciences
Petroleum
Notes
References
Further reading
External links
Yale University on Silliman
The Yale Standard on Silliman
On his abolitionism
Sillimanite
1779 births
1864 deaths
Alumni of the University of Edinburgh
American chemists
American mineralogists
Burials at Grove Street Cemetery
Fellows of the American Academy of Arts and Sciences
Members of the American Antiquarian Society
Members of the United States National Academy of Sciences
Yale University alumni
Yale University faculty
People from Trumbull, Connecticut
Silliman family
|
```go
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package typeutil
import (
"encoding/json"
"testing"
"github.com/stretchr/testify/require"
)
func TestStringSliceJSON(t *testing.T) {
re := require.New(t)
b := StringSlice([]string{"zone", "rack"})
o, err := json.Marshal(b)
re.NoError(err)
re.Equal("\"zone,rack\"", string(o))
var nb StringSlice
err = json.Unmarshal(o, &nb)
re.NoError(err)
re.Equal(b, nb)
}
func TestEmpty(t *testing.T) {
re := require.New(t)
ss := StringSlice([]string{})
b, err := json.Marshal(ss)
re.NoError(err)
re.Equal("\"\"", string(b))
var ss2 StringSlice
re.NoError(ss2.UnmarshalJSON(b))
re.Equal(ss, ss2)
}
```
|
The Mercedes-Benz CLR was a set of racing cars developed for Mercedes-Benz through a collaboration with in-house tuning division Mercedes-AMG and motorsports specialists HWA GmbH. Designed to meet Le Mans Grand Touring Prototype (LMGTP) regulations, the CLRs were intended to compete in sports car events during 1999, most notably at the 24 Hours of Le Mans, which Mercedes had last won in 1989. It was the third in a series of sports cars raced by Mercedes, following the CLK GTRs and CLK LMs that had debuted in 1997 and 1998 respectively. Like its predecessors, the CLR retained elements of Mercedes-Benz's production cars, including a V8 engine loosely based on the Mercedes M119 as well as a front fascia, headlamps and grille inspired by the then-new Mercedes flagship CL-Class. The CLR's bodywork was lower in overall height than that used on the CLKs to produce less drag.
Three CLRs were entered for Le Mans in 1999 after the team performed nearly of testing. The cars suffered aerodynamic instabilities along the circuit's long high-speed straight sections. The car of Australian Mark Webber became airborne and crashed in qualifying, requiring it to be rebuilt. Webber and the repaired CLR returned to the track in a final practice session on the morning of the race, but during its first lap around the circuit, the car once again became airborne and landed on its roof. Mercedes withdrew the damaged CLR but chose to continue in the race despite the accidents. The remaining cars were hastily altered and the drivers were given instructions to avoid closely following other cars.
Nearly four hours into the race Scotsman Peter Dumbreck was battling amongst the race leaders when his CLR suffered the same instability and became airborne, this time vaulting the circuit's safety barriers, crashing into trees and then coming to rest in an open field after several somersaults. This and earlier incidents led Mercedes not only to withdraw its remaining car from the event immediately, but also to cancel the entire CLR programme and move the company out of sports car racing. The accidents led to changes in the regulations dictating the design of Le Mans racing cars as well as alterations to the circuit itself to increase safety.
Background
In 1996 Mercedes-Benz's motorsports programmes included support for cars in Formula One, IndyCar, and the International Touring Car Championship (ITC). Following the collapse of the ITC at the end of the 1996 season, Mercedes' attention shifted to a new international series, the FIA GT Championship. Racing partners AMG were tasked with developing a design to meet the Fédération Internationale de l'Automobile's GT1 regulations for the new championship. The new cars, known as CLK GTRs, were designed for use both as racing and road cars available to the public, as series regulations required the racing cars to be based on production models. The CLK GTRs were successful in their debut season, winning seven of eleven races and earning both the drivers' and teams' championships.
For the 1998 season AMG refined the CLK GTR's design with the launch of the new CLK LM. A major change for the new design was the replacement of the CLK GTR's V12 engine with a smaller V8, thought by Mercedes to be better suited to longer endurance events such as the 24 Hours of Le Mans, a race not part of the FIA GT calendar. Despite earning pole position for Le Mans, the new cars were unreliable and both lasted less than three hours before retiring with mechanical failure. The race was won by Mercedes' FIA GT rivals Porsche. Mercedes did go on to win its second straight FIA GT Championship later that year after winning all ten races.
After the dominance of Mercedes, most of the GT1 class competitors chose not to return to the FIA GT Championship for 1999, leading the FIA to eliminate the category from the series. The Automobile Club de l'Ouest (ACO), organisers of the 24 Hours of Le Mans, chose to follow the FIA's lead and no longer allow GT1 category cars to enter Le Mans. While FIA GT concentrated solely on its lower GT2 category in 1999, the ACO created a new category of race car known as a Le Mans Grand Touring Prototype (LMGTP). The LMGTP regulations for closed-cockpit cars were similar to the former GT1 regulations but shared many elements with the ACO's existing open-cockpit Le Mans Prototype (LMP) category. Mercedes, no longer able to compete in the FIA GT Championship with the CLK LMs, chose to concentrate on the ACO's new LMGTP category.
Development
Work on designing a new car to meet LMGTP regulations began in September 1998 as Mercedes was closing out its second FIA GT Championship season. Development was led by HWA GmbH, the motorsports division of AMG, which became an independent company the following year. The LMGTP rules did not require road versions of the cars to be built, so Gerhard Ungar, chief designer for HWA, was free to develop the CLR without concern for road legality issues or the inclusion of driver comforts. The transition from GT1 to LMGTP also allowed a decrease in the minimum allowed weight, from to . The new design had a much smaller cockpit monocoque made from carbon fibre and aluminium honeycomb. The monocoque derived the design of its lower half from the CLK LM's combination of carbon fibre and steel tube frame, but required a full carbon and aluminium upper half because of new load tests mandatory for LMGTP cockpits. The bodywork of the CLR was also shorter in overall height compared to the CLK LM, while the nose was substantially lower and flatter than its predecessor due to a shorter wheelbase allowing longer overhangs. Aerodynamic development on the design was carried out at the University of Stuttgart's wind tunnel and assisted by the aerodynamic specialists Fondmetal Technologies. Aerodynamic emphasis was placed on low drag for maximum top speed. Mercedes-Benz's brand image was also retained with the reuse of CLK-Class styled tail lights from the CLK LM and a front fascia, grille and headlamps based on the then-new CL-Class.
The engine for the CLR was also a variant of the design used on the CLK LM. The GT108C 32 valve naturally aspirated V8 engine was loosely based on the M119 engine used in Mercedes-Benz road cars at that time. A previous variant of the M119 had won Le Mans for Mercedes in 1989. Displacement was increased from to to compensate for the new air restrictor limitations in the LMGTP category, which allowed the engine to produce approximately . The first engine was completed and began testing in December 1998. The Xtrac 6-speed sequential gearbox came directly from the CLK LM, while Bridgestone continued as the team's tyre supplier. The suspension setup from the CLK LM was largely carried over to the CLR, although a central spring was added to the rear suspension.
Mercedes publicly announced its CLR programme in February 1999 just days before the first car began private testing at California Speedway in the United States. Testing continued into March at California as well as Homestead-Miami Speedway in Florida before the team moved to the Circuit de Nevers Magny-Cours in France. At Magny-Cours three CLRs completed a 30-hour test session covering . On 20 April the CLR was shown to the press for the first time during a test session at the Hockenheimring in Germany. By that point in the development process the CLRs had covered in testing without any major failures.
Preparation
The initial schedule for the CLRs consisted of participation in the May pre-qualifying and testing session at Le Mans in preparation for the race in June. At the team's Hockenheim test session plans were announced to enter several races after Le Mans. The first, scheduled for July, was to be an exhibition event consisting of two races at the Norisring street circuit in Nuremberg, Germany. Mercedes planned to enter four CLRs in the event. The team would then end its season with the final three races of the American Le Mans Series: the 10-hour Petit Le Mans endurance race at Road Atlanta and shorter races at Laguna Seca Raceway and Las Vegas Motor Speedway. More than 200 personnel from Mercedes-Benz and HWA formed the crew for the three cars, although the team was officially known as AMG-Mercedes.
As part of its launch announcement in February 1999, Mercedes named nine drivers to the team. Retained from the FIA GT programme were Christophe Bouchut, Jean-Marc Gounon, Bernd Schneider, Marcel Tiemann, and Mark Webber. Nick Heidfeld, then a test driver for the McLaren Mercedes Formula One team, was added to the team for his first experience with sports cars. Former Macau Grand Prix winner and All-Japan Formula Three champion Peter Dumbreck also came from an open wheel racing background. Pedro Lamy, 1998 FIA GT2 Champion, was drafted from the Oreca Chrysler team to participate at Le Mans and in the Deutsche Tourenwagen Masters for Mercedes, while Franck Lagorce transferred from Nissan's Le Mans squad. Darren Turner, also a test driver for McLaren, served as the team's reserve driver for Le Mans.
Le Mans
Practice and qualifying
By winning the 1998 FIA GT Championship, Mercedes were awarded a single guaranteed entry for Le Mans, which was assigned to Gounon, Tiemann, and Webber in CLR No. 4. Bouchut, Dumbreck, and Heidfeld in the No. 5 and Lagorce, Lamy, and Schneider in the No. 6 entries would have to pre-qualify for the event, while No. 4 was free to use the pre-qualifying session for testing purposes. Pre-qualifying involved all 62 entry applicant teams setting lap times over a long session. The final entry for Le Mans would be made up of 48 cars, combining guaranteed entries and the fastest cars in pre-qualifying within their respective classes; the prototype category, combining LMP and LMGTP cars, only allowed 28 cars from 31 entries. Competitors in the prototype category for 1999 included factory-supported LMGTP programmes from Toyota and Audi, and LMP entries from Nissan, BMW, Audi, and Panoz. Toyota set the fastest pre-qualifying time overall, followed by Panoz and BMW. Mercedes No. 6 was the sixth fastest car, while Nos. 4 and 5 were 14th and 15th respectively. Although the cars succeeded in passing pre-qualifying, one CLR suffered a setback when a suspension linkage was torn from the front of the monocoque. The suspension failure was the first major fault suffered by the CLRs since their testing debut in February.
Several weeks after pre-qualifying, Mercedes returned for two days of practice and qualifying sessions in the week leading up to the race. The sessions would set the starting grid for the race based on the fastest overall lap time by each car. At the end of the first day, Mercedes' entries were fifth, sixth, and eighth on the provisional grid. Toyota led the session, over four seconds ahead of the fastest Mercedes. Early in the second day of sessions, Webber, driving CLR No. 4, was following the Audi R8R of Frank Biela through the portion of the circuit connecting Mulsanne Corner and the Indianapolis complex when he moved out of the Audi's slipstream to overtake. The CLR suddenly lifted its nose and front wheels off the circuit and became airborne, flipping upwards and somersaulting backwards before rotating onto its side. The car impacted the tarmac with its right side while perpendicular to the circuit then flipped back onto its wheels before skidding into the safety barriers on the side of the circuit. Webber was extracted from the car by track marshals and taken to a nearby hospital suffering from a sore neck, chest, and back. The accident occurred in an area not generally accessible to the public and was not seen by television cameras.
Due to the accident, the No. 4 car was unable to improve its qualifying time from the previous day, which relegated the car to tenth on the starting grid as competitors improved their times; Mercedes No. 5 also did not improve its lap time and finished the session seventh. Bernd Schneider was able to go quicker than his time from the previous day with a 3:31.541 lap for the No. 6 car. Toyota took pole position with a 3:29.930 lap, while Schneider's car was placed fourth on the final starting grid. The wreckage of the CLR No. 4 was returned to Mercedes at the end of the qualifying session and the team issued a press release confirming that they could repair the car before the start of the race two days later. A spare CLR monocoque, taken from a test car, was used to rebuild the No. 4. Webber was able to recover from his injuries by spending the following day in physical training and was cleared on Saturday morning to participate in the race.
Warm-up
On the morning of the race, a warm-up session lasting half an hour was held as a final preparation for the teams. Mercedes No. 4, repaired after its Thursday accident, joined its two teammates on the circuit as the session began. Webber was once again driving the No. 4 car as the trio made their way down the Mulsanne Straight. Approaching Mulsanne Corner, Webber trailed his two teammates but was approximately behind a Chrysler Viper GTS-R entered by Team Oreca. Cresting a hill at the approach to the corner, Webber's car lifted its nose into the air once again and rose over above the track, somersaulting backwards before twisting towards its right and hitting the tarmac with the right rear of the car while inverted, shedding its engine cover, rear wing, and nose. The car skidded on its roof into a run-off area just short of the roundabout next to the Mulsanne Corner before coming to a halt. Marshals were eventually able to right the Mercedes and extract Webber, who sustained no major injuries. Television cameras located at Mulsanne Corner captured the aftermath of the accident and broadcast pictures of the CLR on its roof to the worldwide audience. Photographers in the same location also captured the car as it flipped. The ACO later published these photographs in its 1999 yearbook.
Mercedes immediately withdrew CLR No. 4 from the event as the race was only a few hours from beginning. Norbert Haug, head of Mercedes-Benz's motorsport activities, contacted Adrian Newey, chief aerodynamicist of the McLaren Formula One team, for consultation on modifying the remaining CLRs to prevent further accidents. The drivers were also consulted on whether they believed the cars were too dangerous to race; Bouchut felt that the front of the car could become light at high speeds and voiced his concerns to the team, but other drivers had not felt this issue with the cars. Mercedes opted to make modifications to the front bodywork of the two remaining cars by adding dive planes to the fenders for increased downforce but sacrificing overall top speed. The drivers were also instructed not to follow other cars too closely.
Race
With only two CLRs remaining, Mercedes started from the fourth and seventh place grid positions. Schneider was able to move into third place behind the two Toyotas in the opening laps while Bouchut progressed to fourth. The Toyotas made pit stops first, followed by Schneider and Bouchut, then the two BMWs. One of the Toyotas eventually suffered transmission issues which dropped it down in the field, leaving the top six positions to be swapped amongst the two remaining Toyotas, two Mercedes, and two BMWs as they made pit stops on different schedules. Driver changes during later pit stops had Lagorce getting in the No. 6 to replace Schneider, while Dumbreck replaced Bouchut in the No. 5. Schneider reported that, despite some initial problems dealing with the car's new aerodynamics, it was running well by the end of his stint.
Towards the close of the fourth hour of the race, Dumbreck's Mercedes came into contact with the GTS-class Porsche 911 GT2 of the Estoril Racing team at the Ford Chicanes, but continued with no apparent damage. On lap 76 Dumbreck was in third place and catching Thierry Boutsen's Toyota in second place. The Mercedes and Toyota were nose to tail on the run from Mulsanne Corner to Indianapolis at nearly with both drivers partially blinded by the setting sun ahead of them. At a slight right kink in the straight, Dumbreck's CLR ran over the small apex kerbing and suddenly lifted its front wheels from the ground before somersaulting backwards as the entire car became airborne. The Mercedes rotated three times as it flew in the air, reaching a height of nearly . The car continued its trajectory as the circuit curved to the right, clearing a marshaling post and the safety barrier on the left side of the track and missing a large advertising billboard bridging the track just ahead of it. Television cameras broadcasting the live world feed captured the CLR's aerobatics before it went out of view behind trees. The car impacted the ground in an area of woods alongside the circuit that had been cut and cleared only two weeks prior and was inaccessible to spectators. The car dug a rut in the dirt as it continued to tumble in the clearing. The impact forced a tree limb to penetrate the monocoque between the driver's seat and fuel tank. The CLR came to rest right side up and track marshals rushed to the stopped car. Track officials quickly slowed the race with caution flags and safety cars to dispatch recovery vehicles. Dumbreck was knocked unconscious after the initial impact but awoke and climbed from the car, where he was found by the marshals and local Gendarmerie officers in the area. Because Le Mans uses public roads, the officers gave Dumbreck a breathalyser test; he was then transported by ambulance to a local hospital for examination before being released.
At the end of the 76th lap Lagorce was ordered by the team to bring the remaining CLR directly to its garage; upon the car's arrival AMG-Mercedes shut the last of its three garage doors signifying its official retirement from the event. National rivals BMW went on to win the race the following day.
Aftermath
Following the race the ACO and the Fédération Française du Sport Automobile (FFSA) national motorsport body investigated the incidents. The FFSA questioned the ACO's decision to allow Mercedes to continue to compete after the two accidents prior to the race start, but the ACO argued that there were no indications that the problems that befell CLR No. 4 were shared by the other Mercedes entries. The ACO concluded that the design of the CLR, with the longest front and rear overhang amongst the prototype field, was the cause of the problem. A Porsche 911 GT1, similar in design to the CLR, had suffered a nearly identical accident the year before at Road Atlanta in the United States. The ACO changed the regulations for the LMGTP category in 2000, decreasing the allowable length of overhang. The FIA also instructed its Advisory Expert Group to develop new regulations to prevent similar airborne accidents in other racing cars. The LMGTP class itself was abandoned by the teams in 2000 as Toyota cancelled its programme and Audi concentrated on open-cockpit LMP cars; the class reappeared in 2001.
Peter Dumbreck, in response to his accident, initially blamed the height of the kerbs he had run on when his car became airborne, but Mercedes-Benz responded by stating that blame did not lie with the circuit. The kerbs, as well as the entire Le Mans circuit, were all approved by the FIA. After the 2000 race the ACO and the French government made modifications to the Route nationale 138 which forms the Mulsanne Straight, by decreasing the height of a hill by on the approach to the Mulsanne Corner where Webber had his warm-up accident.
Before the race had concluded, Mercedes-Benz addressed criticism from other drivers and teams of its decisions. Haug believed that the team's data from Webber's practice incident had been adequately analysed and that the drivers did not feel there were problems with their cars in traffic that could cause the same incidents, prompting his decision to continue. He also stated his belief that contact between CLR No. 5 and the Estoril Porsche may have damaged the front diffuser and led to the aerodynamic instabilities. Shortly after Le Mans, Mercedes conducted its own examination of the accidents by running the remaining CLR on an airfield to verify wind tunnel data. Although no conclusions were published by Mercedes, the company cancelled the rest of its 1999 programme, withdrawing from the Norisring exhibition event and the final three rounds of the American Le Mans Series. The team's change in plans for the Norisring eventually led to the entire event being cancelled over a lack of manufacturer involvement. Mercedes returned to touring car racing from 2000 onwards, and has not participated at Le Mans in any capacity since 1999.
Despite the failure of the CLR project, Christophe Bouchut felt that the cars were his favorite to drive in a 21-year career at Le Mans, praising the cars' handling and technology. Following the damage to CLRs Nos. 4 and 5 during the Le Mans week, the remaining car has rarely been seen but has begun to make reappearances in recent years. As part of a 2008 celebration for the retirement of Bernd Schneider, CLR No. 6 was publicly displayed in Sankt Ingbert, Germany. The car appeared in the hands of a private owner in 2009 at a Modena Trackdays event held at the Nürburgring and was driven on the circuit.
The failures of the CLRs have become lore for Le Mans and motorsport in general. Speed Channel, as part of its tenth anniversary, named its broadcast of Dumbreck's accident as the fourth most memorable moment in the network's history. Road & Track magazine's list of the ten most infamous crashes at Le Mans named Webber's warm-up accident as seventh and Dumbreck's crash as second.
See also
1955 Le Mans disaster
Mercedes-Benz CLK GTR
Mercedes-Benz in motorsport
Notes
References
Citations
Book
Video
External links
Technical analysis of the Mercedes-Benz CLR flips
24 Hours of Le Mans race cars
Le Mans Prototypes
CLR
Auto racing controversies
|
Steve Frame is a fictional character from the NBC daytime soap opera Another World.
He was first portrayed by George Reinholt from 1968 to 1975 (Reinholt returned in 1989 for the show's 25th anniversary playing Steve as a ghost) and David Canary from 1981 to 1983. During the six-year gap between Reinholt and Canary, Steve was presumed dead in a helicopter crash in Australia. When David Canary assumed the role, it was explained that Steve had plastic surgery on his face but had amnesia all those years until he returned to Bay City.
Character history
Born in 1940 in Chadwell, Oklahoma, Steven Frame was one of eight children born to Henry and Jenny Frame. His siblings included sisters Emma Frame Ordway, Janice Frame and Sharlene Frame Watts, and brothers Vince Frame, Willis Frame, Jason Frame, and Henry Frame Jr. A self-made millionaire land developer and half-owner of the Bay City Bangles, the town's football team, Steven Frame first came to Bay City in 1967 on business and kept a low profile; he did not make his society debut until the following year, when he attended the wedding of District Attorney Walter Curtain to Lenore Moore on July 1, 1968.
At the reception held at the Bay City Country Club, Steve introduced himself to Lenore's best friend and bridesmaid, Alice Matthews (Jacqueline Courtney). It was also at the reception that he met Alice's scheming, money-hungry sister-in-law Rachel Matthews (Robin Strasser). Steve and Alice were instantly smitten with each other and began dating. Steve fell head over heels for Alice, who was the only person to call him by his full first name, Steven. Alice was also delighted when he hired her father Jim to handle the accounting for Steve's Bay City office. Though she was encouraged by Rachel not to let her handsome and rich suitor get away, Alice was completely unaware at first of Rachel's true intentions: she wanted Steve for herself and was determined to have him, and even went as far as to find any excuse to double-date with Alice and Steve with her husband, Russ Matthews (Sam Groom), Alice's brother and a newly interned doctor.
Rachel couldn't understand what Steve saw in Alice. She was too sweet and too nice. Rachel believed he needed a real woman, and she was determined to be that woman to him. Steve had something in common with Rachel that he didn't have with Alice: they both came from similarly poor backgrounds, though Steve's was far more impoverished than Rachel's (a 1972 flashback episode suggests that Steve grew up in a broken family in which his father was abusive). One night Rachel visited Steve in his apartment; he was depressed after a fight he had with Alice. Taking full advantage of Steve's vulnerability, Rachel seduced him. Rachel destroyed Steve and Alice's wedding plans when, on the night of Alice and Steve's engagement party, she confronted Alice and told her that not only did she love Steve but that she was carrying his baby. A cold and calm Alice later confronted Steve and asked him about his infidelity and if it was possible that he was the father of Rachel's baby. Steve finally admitted that it could be possible. Rachel gave birth to a boy and named him James Gerald "Jamie" Matthews and passed him off as Russ's child. Russ eventually found out the truth and divorced Rachel.
As the 1970s began, Steve and Alice finally married in September 1971. Their marriage lasted until June 1973 after endless obstacles ripped them apart. Rachel (now played by Victoria Wyndham) was still determined to have Steve for herself and happily used their son as a weapon. Alice suffered a miscarriage and left Bay City for New York City. There she found employment as a private nurse to a young boy named Dennis Carrington (Mike Hammett). Steve married Rachel in order to give Jamie (Robert Doran) a stable family, but he still longed for Alice. Alice had moved back to Bay City and Steve begged her not to marry Dennis's father, Eliot (James Douglas). Alice agreed to give him a second chance. Steve asked Rachel for a divorce, but she would not give it to him. Desperate, Steve resorted to bribing Rachel's father, Gerald Davis, to testify on his behalf when he sued Rachel for divorce, which was eventually granted with her gaining full custody of Jamie.
Bribing his father-in-law would soon come back to haunt Steve when Gerald mouthed off to John Randolph (Michael M. Ryan), Steve's attorney and brother-in-law (John's wife Pat was Alice's sister) that Steve bribed him to testify, thinking John was in on Steve's scheme. John informed the police of Gerald's accusation towards Steve. As a result of John's betrayal, Steve was arrested, but was given permission to remarry Alice on May 4, 1974 (Another World's 10th anniversary).
Steve was ordered to report to prison the very next day after their wedding and serve a sentence of six months. Shortly after Steve's incarceration, Alice suffered a mental breakdown. She was eventually committed to a sanitarium and once released was determined to reunite with Steve. Rachel was still furious over Steve going back to Alice and threatened to kick Alice out of the home that Steve had built for her, believing she was entitled to live there as the mother of his son. Several months later when Rachel planned to move in, she was shocked to see Steve waiting for her, accompanied by her stepfather Gil McGowan (Dolph Sweet), the town police chief, into whose custody Steve had been temporarily released. Steve put an end to her moving in right then and there.
When Steve was officially released from prison he went to see Alice, but she rejected him. Rachel continued to scheme to keep them apart, but Steve and Alice eventually got back together. In May 1975, just as she and Steve were getting their lives back in order, Alice, and all of Bay City, received tragic news: Steve was presumed dead in a helicopter crash in Australia, where he had gone on business. It was also at this time that both women in Steve's life were undergoing major changes: Alice was in the process of adopting an orphaned girl named Sally Spencer, and Rachel, with the love of her current husband, magazine publisher Mac Cory (Douglass Watson), was beginning to renounce her evil ways to become a compassionate and loving woman (though they had their own rocky relationship). Alice (now played by Susan Harney) tried her best to move on, but never really found true love. She left Bay City for a time, but eventually returned.
Six years later, in October 1981, a mysterious businessman named Edward Black (David Canary) arrived in Bay City, studying pictures and news clippings on Alice (now played by Linda Borgeson). Alice was at that time engaged to Mac Cory, who was divorced from Rachel. But it soon became clear that Edward Black was not the man he claimed to be. At a formal party, Black dropped a bombshell that shocked all of Bay City: he was actually Steve Frame! Apparently Steve had survived the crash in Australia years earlier, but had suffered from amnesia and had plastic surgery on his face. He had returned not only to be with Alice and Jamie (now played by Richard Bekins), but also to resume his position as head of Frame Construction. Alice broke off her engagement to Mac after Steve's return, and she and Steve tried to resume their relationship; but after Steve was trapped at one of his construction sites with Rachel, their old attraction was rekindled and Steve and Rachel reunited. Rachel and Steve were driving to the airport (on their way to get remarried) when they crashed their car. Steve was killed and Rachel survived, but was blinded. Six months later, having regained her sight and realizing that she never stopped loving Mac, Rachel remarried him for the third (and last) time in a double ceremony with Mac's son Sandy Cory (Christopher Rich) and Sandy's bride, Blaine Ewing (Laura Malone).
Though Jamie loved Mac as a father, he never forgot Steve. When Jamie (now played by Laurence Lau) welcomed a son with his wife Vicky (Anne Heche), they named him Steven.
Six years after Steve's death, he would return again, but as a ghost (George Reinholt reprised his role for Another World's 25th anniversary) to help Rachel, who was having an out-of-body experience while her body was lying unconscious, overcome with gas, in the engine room of a yacht where the 25th anniversary party for Cory Publishing was being held. She was being enticed by the spirit of Steve's evil sister Janice Frame (Christine Jones) to die, but Steve arrived and told Rachel that it wasn't her time to die yet and that she needed to go back, which she did.
Later, Steve visited Jamie, who was struggling over the fact Vicky had lied to him and that he might not be Steven's father. Steve gave Jamie some words of wisdom and told him how proud he was of the man he turned out to be. Steve told his only son he loved him and faded away forever into eternity.
References
http://www.anotherworldhomepage.com/1steve.html
Television characters introduced in 1968
Another World (TV series) characters
|
```java
/**
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * path_to_url
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.thingsboard.server.service.edge.rpc.processor.device;
import org.assertj.core.api.Assertions;
import org.junit.jupiter.api.BeforeEach;
import org.thingsboard.server.common.data.Device;
import org.thingsboard.server.common.data.DeviceProfile;
import org.thingsboard.server.common.data.DeviceProfileType;
import org.thingsboard.server.common.data.DeviceTransportType;
import org.thingsboard.server.common.data.device.profile.DeviceProfileData;
import org.thingsboard.server.common.data.edge.EdgeEvent;
import org.thingsboard.server.common.data.edge.EdgeEventActionType;
import org.thingsboard.server.common.data.id.DashboardId;
import org.thingsboard.server.common.data.id.DeviceId;
import org.thingsboard.server.common.data.id.DeviceProfileId;
import org.thingsboard.server.common.data.id.EdgeId;
import org.thingsboard.server.common.data.id.RuleChainId;
import org.thingsboard.server.common.data.id.TenantId;
import org.thingsboard.server.common.data.security.DeviceCredentials;
import org.thingsboard.server.gen.edge.v1.DeviceProfileUpdateMsg;
import org.thingsboard.server.gen.edge.v1.DownlinkMsg;
import org.thingsboard.server.service.edge.rpc.processor.BaseEdgeProcessorTest;
import java.util.UUID;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.mockito.BDDMockito.willReturn;
public abstract class AbstractDeviceProcessorTest extends BaseEdgeProcessorTest {
protected DeviceId deviceId;
protected DeviceProfileId deviceProfileId;
protected DeviceProfile deviceProfile;
@BeforeEach
public void setUp() {
edgeId = new EdgeId(UUID.randomUUID());
tenantId = new TenantId(UUID.randomUUID());
deviceId = new DeviceId(UUID.randomUUID());
deviceProfileId = new DeviceProfileId(UUID.randomUUID());
deviceProfile = new DeviceProfile();
deviceProfile.setId(deviceProfileId);
deviceProfile.setName("DeviceProfile");
deviceProfile.setDefault(true);
deviceProfile.setType(DeviceProfileType.DEFAULT);
DeviceProfileData deviceProfileData = new DeviceProfileData();
deviceProfile.setProfileData(deviceProfileData);
deviceProfile.setTransportType(DeviceTransportType.DEFAULT);
DeviceCredentials deviceCredentials = new DeviceCredentials();
deviceCredentials.setDeviceId(deviceId);
Device device = new Device();
device.setDeviceProfileId(deviceProfileId);
device.setId(deviceId);
device.setName("Device");
device.setType(deviceProfile.getName());
edgeEvent = new EdgeEvent();
edgeEvent.setTenantId(tenantId);
edgeEvent.setAction(EdgeEventActionType.ADDED);
willReturn(device).given(deviceService).findDeviceById(tenantId, deviceId);
willReturn(deviceProfile).given(deviceProfileService).findDeviceProfileById(tenantId, deviceProfileId);
willReturn(deviceCredentials).given(deviceCredentialsService).findDeviceCredentialsByDeviceId(tenantId, deviceId);
}
protected void updateDeviceProfileDefaultFields(long expectedDashboardIdMSB, long expectedDashboardIdLSB,
long expectedRuleChainIdMSB, long expectedRuleChainIdLSB) {
DashboardId dashboardId = getDashboardId(expectedDashboardIdMSB, expectedDashboardIdLSB);
RuleChainId ruleChainId = getRuleChainId(expectedRuleChainIdMSB, expectedRuleChainIdLSB);
deviceProfile.setDefaultDashboardId(dashboardId);
deviceProfile.setDefaultEdgeRuleChainId(ruleChainId);
}
protected void verify(DownlinkMsg downlinkMsg, long expectedDashboardIdMSB, long expectedDashboardIdLSB,
long expectedRuleChainIdMSB, long expectedRuleChainIdLSB) {
DeviceProfileUpdateMsg deviceProfileUpdateMsg = downlinkMsg.getDeviceProfileUpdateMsgList().get(0);
assertNotNull(deviceProfileUpdateMsg);
Assertions.assertThat(deviceProfileUpdateMsg.getDefaultDashboardIdMSB()).isEqualTo(expectedDashboardIdMSB);
Assertions.assertThat(deviceProfileUpdateMsg.getDefaultDashboardIdLSB()).isEqualTo(expectedDashboardIdLSB);
Assertions.assertThat(deviceProfileUpdateMsg.getDefaultRuleChainIdMSB()).isEqualTo(expectedRuleChainIdMSB);
Assertions.assertThat(deviceProfileUpdateMsg.getDefaultRuleChainIdLSB()).isEqualTo(expectedRuleChainIdLSB);
}
}
```
|
```xml
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" ToolsVersion="4.0" xmlns="path_to_url">
<ItemGroup Label="ProjectConfigurations">
<ProjectConfiguration Include="Debug|Win32">
<Configuration>Debug</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|Win32">
<Configuration>Release</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
</ItemGroup>
<PropertyGroup Label="Globals">
<ProjectGuid>{DD9C9854-3882-42B9-BFA2-C6CEBFCE3529}</ProjectGuid>
<RootNamespace>Test_set_interval_set</RootNamespace>
<Keyword>Win32Proj</Keyword>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<CharacterSet>Unicode</CharacterSet>
<WholeProgramOptimization>true</WholeProgramOptimization>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
<ImportGroup Label="ExtensionSettings">
</ImportGroup>
<ImportGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'" Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'" Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<PropertyGroup Label="UserMacros" />
<PropertyGroup>
<_ProjectFileVersion>10.0.30319.1</_ProjectFileVersion>
<OutDir Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">../../../../bin/debug/\</OutDir>
<IntDir Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">../../../../bin/obj/$(ProjectName)/debug/\</IntDir>
<LinkIncremental Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">true</LinkIncremental>
<OutDir Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">../../../../bin/release/\</OutDir>
<IntDir Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">../../../../bin/obj/$(ProjectName)/release/\</IntDir>
<LinkIncremental Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">false</LinkIncremental>
</PropertyGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<ClCompile>
<Optimization>Disabled</Optimization>
<AdditionalIncludeDirectories>../../../../;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<MinimalRebuild>true</MinimalRebuild>
<BasicRuntimeChecks>EnableFastChecks</BasicRuntimeChecks>
<RuntimeLibrary>MultiThreadedDebugDLL</RuntimeLibrary>
<PrecompiledHeader>
</PrecompiledHeader>
<WarningLevel>Level3</WarningLevel>
<DebugInformationFormat>EditAndContinue</DebugInformationFormat>
</ClCompile>
<Link>
<OutputFile>../../../../bin/debug/$(ProjectName).exe</OutputFile>
<AdditionalLibraryDirectories>../../../../lib; ../../../../stage/lib;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<GenerateDebugInformation>true</GenerateDebugInformation>
<SubSystem>Console</SubSystem>
<RandomizedBaseAddress>false</RandomizedBaseAddress>
<DataExecutionPrevention>
</DataExecutionPrevention>
<TargetMachine>MachineX86</TargetMachine>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<ClCompile>
<AdditionalIncludeDirectories>../../../../;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<RuntimeLibrary>MultiThreadedDLL</RuntimeLibrary>
<PrecompiledHeader>
</PrecompiledHeader>
<WarningLevel>Level4</WarningLevel>
<DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
</ClCompile>
<Link>
<OutputFile>../../../../bin/release/$(ProjectName).exe</OutputFile>
<AdditionalLibraryDirectories>../../../../lib; ../../../../stage/lib;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<GenerateDebugInformation>true</GenerateDebugInformation>
<SubSystem>Console</SubSystem>
<OptimizeReferences>true</OptimizeReferences>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<RandomizedBaseAddress>false</RandomizedBaseAddress>
<DataExecutionPrevention>
</DataExecutionPrevention>
<TargetMachine>MachineX86</TargetMachine>
</Link>
</ItemDefinitionGroup>
<ItemGroup>
<ClCompile Include="test_set_interval_set.cpp" />
</ItemGroup>
<ItemGroup>
<ClInclude Include="..\test_type_lists.hpp" />
</ItemGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
<ImportGroup Label="ExtensionTargets">
</ImportGroup>
</Project>
```
|
```html
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "path_to_url">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=US-ASCII">
<title>Struct equal_to</title>
<link rel="stylesheet" href="../../../../../doc/src/boostbook.css" type="text/css">
<meta name="generator" content="DocBook XSL Stylesheets V1.79.1">
<link rel="home" href="../../../index.html" title="The Boost C++ Libraries BoostBook Documentation Subset">
<link rel="up" href="../../../proto/reference.html#header.boost.proto.tags_hpp" title="Header <boost/proto/tags.hpp>">
<link rel="prev" href="greater_equal.html" title="Struct greater_equal">
<link rel="next" href="not_equal_to.html" title="Struct not_equal_to">
</head>
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
<table cellpadding="2" width="100%"><tr>
<td valign="top"><img alt="Boost C++ Libraries" width="277" height="86" src="../../../../../boost.png"></td>
<td align="center"><a href="../../../../../index.html">Home</a></td>
<td align="center"><a href="../../../../../libs/libraries.htm">Libraries</a></td>
<td align="center"><a href="path_to_url">People</a></td>
<td align="center"><a href="path_to_url">FAQ</a></td>
<td align="center"><a href="../../../../../more/index.htm">More</a></td>
</tr></table>
<hr>
<div class="spirit-nav">
<a accesskey="p" href="greater_equal.html"><img src="../../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../../../proto/reference.html#header.boost.proto.tags_hpp"><img src="../../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../../index.html"><img src="../../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="not_equal_to.html"><img src="../../../../../doc/src/images/next.png" alt="Next"></a>
</div>
<div class="refentry">
<a name="boost.proto.tag.equal_to"></a><div class="titlepage"></div>
<div class="refnamediv">
<h2><span class="refentrytitle">Struct equal_to</span></h2>
<p>boost::proto::tag::equal_to — Tag type for the binary == operator. </p>
</div>
<h2 xmlns:rev="path_to_url~gregod/boost/tools/doc/revision" class="refsynopsisdiv-title">Synopsis</h2>
<div xmlns:rev="path_to_url~gregod/boost/tools/doc/revision" class="refsynopsisdiv"><pre class="synopsis"><span class="comment">// In header: <<a class="link" href="../../../proto/reference.html#header.boost.proto.tags_hpp" title="Header <boost/proto/tags.hpp>">boost/proto/tags.hpp</a>>
</span>
<span class="keyword">struct</span> <a class="link" href="equal_to.html" title="Struct equal_to">equal_to</a> <span class="special">{</span>
<span class="special">}</span><span class="special">;</span></pre></div>
</div>
<table xmlns:rev="path_to_url~gregod/boost/tools/doc/revision" width="100%"><tr>
<td align="left"></td>
<td align="right"><div class="copyright-footer"><p>Distributed under the Boost Software License, Version 1.0. (See accompanying
file LICENSE_1_0.txt or copy at <a href="path_to_url" target="_top">path_to_url</a>)
</p>
</div></td>
</tr></table>
<hr>
<div class="spirit-nav">
<a accesskey="p" href="greater_equal.html"><img src="../../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../../../proto/reference.html#header.boost.proto.tags_hpp"><img src="../../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../../index.html"><img src="../../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="not_equal_to.html"><img src="../../../../../doc/src/images/next.png" alt="Next"></a>
</div>
</body>
</html>
```
|
Babutsa is a British–Turkish Cypriot music band. The group is named after the opuntia (prickly pear), a fruit particular to Cyprus. The band is formed of three soloists: Peri Aziz, Ali Sönmez, and Soner Türsoy. Babutsa sings popular traditional Turkish Cypriot folk songs. They have managed to attract the attention not only of Cypriots around the world, but also of many people in Turkey, who are often unfamiliar with Cypriot music. They have thus found considerable success in Cyprus, the UK, Australia, and Turkey, especially with their first single Yanayım Yanayım.
Members
Ali Sönmez (born in 1962, London, England) is of Turkish Cypriot descent.
Soner Türsoy (born in 1965, Famagusta, Cyprus) immigrated to London in 1981.
Discography
Albums
London Calling (2009)
Singles
Yanayım Yanayım (2009)
Tabi Güzelim (2010)
References
Musical trios
Turkish Cypriot musical groups
|
South Carolina Highway 135 (SC 135) is a state highway in the U.S. state of South Carolina. The highway travels through mostly rural areas of Pickens County.
Route description
SC 135 begins at an intersection with U.S. Route 178 (US 178, Moorefield Memorial Highway) southeast of Liberty within Pickens County. It travels to the northeast and crosses Carmel Creek before entering Easley. The highway has a brief concurrency with SC 8 (Pelzer Highway). A very short distance after SC 8 splits off onto South 5th Street, the highway has an interchange with U.S. Route 123 (US 123, Calhoun Memorial Highway). It passes a U.S. post office and Gettys Middle School before intersecting SC 93 (Main Street). It immediately crosses over railroad tracks and turns right onto NE Main Street. One block later, it turns left onto North A Street. It crosses over Georges Creek and Mud Dog Branch before leaving the city limits. SC 135 travels in a fairly northerly direction and crosses over Burdine Creek. It intersects SC 183 (Farrs Bridge Road) and SC 186 (Earls Bridge Road). It begins traveling to the north-northwest and crosses over Shoal Creek. It curves to the northwest and crosses over Carpenter Creek. South of Pumpkintown, it crosses over Adams Creek and meets its northern terminus, another intersection with SC 8 (Pumpkintown Highway).
Major intersections
See also
References
External links
SC 135 South Carolina Hwy Index
135
Transportation in Pickens County, South Carolina
|
Michael Hermosillo (born January 17, 1995) is an American professional baseball outfielder in the New York Yankees organization. He previously played for the Los Angeles Angels and Chicago Cubs. Hermosillo was drafted by the Angels in the 28th round of the 2013 Major League Baseball draft. He made his MLB debut in 2018.
Early life
Hermosillo was born in Mesa, Arizona. He attended Ottawa Township High School in Ottawa, Illinois. Along with baseball, he also played football in high school and had committed to the University of Illinois to play college football and college baseball.
Hermosillo participated in the Chicago White Sox Double Duty Classic in 2012 and 2013.
Career
Los Angeles Angels
Hermosillo was drafted by the Los Angeles Angels in the 28th round of the 2013 Major League Baseball draft. He signed with the Angels rather than attend Illinois. He made his professional debut with the Arizona League Angels that same year and played in 11 games.
Hermosillo played 2014 with the Orem Owlz where he batted .244 with three home runs and 23 RBIs in 54 games, 2015 with Orem and the Burlington Bees where he slashed .231/.344/.263 in 93 games, and 2016 with Burlington and the Inland Empire 66ers where he batted .317/.402/.467 with six home runs and 39 RBIs in 77 games. After the 2016 season he played in the Arizona Fall League.
He started 2017 with Inland Empire and was promoted to the Mobile BayBears and Salt Lake Bees during the season. In 120 total games between the three teams, he batted .267 with nine home runs, 44 RBIs, and 35 stolen bases. The Angels added him to their 40-man roster after the season.
He began 2018 with Salt Lake, for whom he batted .267/.357/.480 with 12 home runs and 10 stolen bases for the season.
He made his Major League debut on May 18, 2018. He was called back up on June 3 in place of Kole Calhoun. In 2018 with the Angels he batted .211/.274/.333 with 1 home run and 1 RBI in 57 at bats. In 2019 for Los Angeles, Hermosillo hit .139/.304/.222 with 3 RBI and no homers in 36 at-bats. In 2020, Hermosillo only stepped up to the plate 10 times, and had 2 hits, 2 RBI and a stolen base before he was designated for assignment on August 23, 2020. He became a free agent on November 2, 2020.
Chicago Cubs
On December 2, 2020, Hermosillo signed a minor league contract with the Chicago Cubs organization. He was assigned to the Triple-A Iowa Cubs to begin the season. On August 17, 2021, Hermosillo's contract was selected by the Cubs. Appearing in 16 games, Hermosillo hit .194/.237/.500 with 3 home runs and 7 RBI. On November 30, Hermosillo was non-tendered by the Cubs, making him a free agent.
On December 1, 2021, Hermosillo re-signed with the Cubs on a one-year contract. He suffered a quadriceps strain on May 8, and was transferred to the 60-day injured list on June 30. He was activated from the injured list on September 6. Hermosillo was designated for assignment by Chicago on September 27, and sent outright to Triple–A Iowa on September 30. On October 15, Hermosillo elected to become a free agent.
New York Yankees
On December 16, 2022, Hermosillo signed a minor league deal with the New York Yankees.
References
External links
1995 births
Arizona Complex League Cubs players
Arizona League Angels players
Baseball players from Arizona
Burlington Bees players
Chicago Cubs players
Inland Empire 66ers players
Iowa Cubs players
Living people
Los Angeles Angels players
Major League Baseball outfielders
Mobile BayBears players
Orem Owlz players
Salt Lake Bees players
Scottsdale Scorpions players
Sportspeople from Mesa, Arizona
Scranton/Wilkes-Barre RailRiders players
|
```java
/*
*
*
* path_to_url
*
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*
*/
package com.ctrip.framework.apollo.openapi.v1.controller;
import com.ctrip.framework.apollo.openapi.api.ClusterOpenApiService;
import com.ctrip.framework.apollo.portal.spi.UserService;
import java.util.Objects;
import javax.servlet.http.HttpServletRequest;
import javax.validation.Valid;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import com.ctrip.framework.apollo.common.exception.BadRequestException;
import com.ctrip.framework.apollo.common.utils.InputValidator;
import com.ctrip.framework.apollo.common.utils.RequestPrecondition;
import com.ctrip.framework.apollo.core.utils.StringUtils;
import com.ctrip.framework.apollo.openapi.dto.OpenClusterDTO;
@RestController("openapiClusterController")
@RequestMapping("/openapi/v1/envs/{env}")
public class ClusterController {
private final UserService userService;
private final ClusterOpenApiService clusterOpenApiService;
public ClusterController(
UserService userService,
ClusterOpenApiService clusterOpenApiService) {
this.userService = userService;
this.clusterOpenApiService = clusterOpenApiService;
}
@GetMapping(value = "apps/{appId}/clusters/{clusterName:.+}")
public OpenClusterDTO getCluster(@PathVariable("appId") String appId, @PathVariable String env,
@PathVariable("clusterName") String clusterName) {
return this.clusterOpenApiService.getCluster(appId, env, clusterName);
}
@PreAuthorize(value = "@consumerPermissionValidator.hasCreateClusterPermission(#request, #appId)")
@PostMapping(value = "apps/{appId}/clusters")
public OpenClusterDTO createCluster(@PathVariable String appId, @PathVariable String env,
@Valid @RequestBody OpenClusterDTO cluster, HttpServletRequest request) {
if (!Objects.equals(appId, cluster.getAppId())) {
throw new BadRequestException(
"AppId not equal. AppId in path = %s, AppId in payload = %s", appId, cluster.getAppId());
}
String clusterName = cluster.getName();
String operator = cluster.getDataChangeCreatedBy();
RequestPrecondition.checkArguments(!StringUtils.isContainEmpty(clusterName, operator),
"name and dataChangeCreatedBy should not be null or empty");
if (!InputValidator.isValidClusterNamespace(clusterName)) {
throw BadRequestException.invalidClusterNameFormat(InputValidator.INVALID_CLUSTER_NAMESPACE_MESSAGE);
}
if (userService.findByUserId(operator) == null) {
throw BadRequestException.userNotExists(operator);
}
return this.clusterOpenApiService.createCluster(env, cluster);
}
}
```
|
```typescript
import type { Resolution } from '@pnpm/resolver-base'
import type { Fetchers, FetchFunction, DirectoryFetcher, GitFetcher } from '@pnpm/fetcher-base'
export function pickFetcher (fetcherByHostingType: Partial<Fetchers>, resolution: Resolution): FetchFunction | DirectoryFetcher | GitFetcher {
let fetcherType = resolution.type
if (resolution.type == null) {
if (resolution.tarball.startsWith('file:')) {
fetcherType = 'localTarball'
} else if (isGitHostedPkgUrl(resolution.tarball)) {
fetcherType = 'gitHostedTarball'
} else {
fetcherType = 'remoteTarball'
}
}
const fetch = fetcherByHostingType[fetcherType! as keyof Fetchers]
if (!fetch) {
throw new Error(`Fetching for dependency type "${resolution.type ?? 'undefined'}" is not supported`)
}
return fetch
}
export function isGitHostedPkgUrl (url: string): boolean {
return (
    url.startsWith('path_to_url') ||
    url.startsWith('path_to_url') ||
    url.startsWith('path_to_url')
) && url.includes('tar.gz')
}
```
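The dispatch rules in `pickFetcher` can be exercised in isolation. Below is a minimal JavaScript sketch of the same branch logic using plain objects; the git-host URL prefixes are illustrative assumptions, since the original constants are elided as `path_to_url` in the listing above.

```javascript
// Sketch of pickFetcher's type dispatch (illustrative, not the pnpm API).
function fetcherTypeFor (resolution) {
  if (resolution.type != null) return resolution.type
  if (resolution.tarball.startsWith('file:')) return 'localTarball'
  if (isGitHosted(resolution.tarball)) return 'gitHostedTarball'
  return 'remoteTarball'
}

// Assumed hosts; the real URL prefixes are elided in the listing above.
function isGitHosted (url) {
  return (
    url.startsWith('https://codeload.github.com/') ||
    url.startsWith('https://bitbucket.org/') ||
    url.startsWith('https://gitlab.com/')
  ) && url.includes('tar.gz')
}

console.log(fetcherTypeFor({ type: 'git' }))                // "git"
console.log(fetcherTypeFor({ tarball: 'file:../pkg.tgz' })) // "localTarball"
```

An explicit `resolution.type` always wins; the tarball URL is only inspected as a fallback.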
|
```java
package com.shuyu.gsyvideoplayer.render.effect;
import android.opengl.GLSurfaceView;
import com.shuyu.gsyvideoplayer.render.view.GSYVideoGLView.ShaderInterface;
/**
* Converts video to Sepia tone.
*
* @author sheraz.khilji
*/
public class SepiaEffect implements ShaderInterface {
/**
* Initialize Effect
*/
public SepiaEffect() {
}
@Override
public String getShader(GLSurfaceView mGlSurfaceView) {
float[] weights = {805.0f / 2048.0f, 715.0f / 2048.0f,
557.0f / 2048.0f, 1575.0f / 2048.0f, 1405.0f / 2048.0f,
1097.0f / 2048.0f, 387.0f / 2048.0f, 344.0f / 2048.0f,
268.0f / 2048.0f};
String[] matrixString = new String[9];
matrixString[0] = " matrix[0][0]=" + weights[0] + ";\n";
matrixString[1] = " matrix[0][1]=" + weights[1] + ";\n";
matrixString[2] = " matrix[0][2]=" + weights[2] + ";\n";
matrixString[3] = " matrix[1][0]=" + weights[3] + ";\n";
matrixString[4] = " matrix[1][1]=" + weights[4] + ";\n";
matrixString[5] = " matrix[1][2]=" + weights[5] + ";\n";
matrixString[6] = " matrix[2][0]=" + weights[6] + ";\n";
matrixString[7] = " matrix[2][1]=" + weights[7] + ";\n";
matrixString[8] = " matrix[2][2]=" + weights[8] + ";\n";
String shader = "#extension GL_OES_EGL_image_external : require\n"
+ "precision mediump float;\n"
+ "uniform samplerExternalOES sTexture;\n" + " mat3 matrix;\n"
+ "varying vec2 vTextureCoord;\n" + "void main() {\n"
+ matrixString[0] + matrixString[1] + matrixString[2]
+ matrixString[3] + matrixString[4] + matrixString[5]
+ matrixString[6] + matrixString[7] + matrixString[8]
+ " vec4 color = texture2D(sTexture, vTextureCoord);\n"
+ " vec3 new_color = min(matrix * color.rgb, 1.0);\n"
+ " gl_FragColor = vec4(new_color.rgb, color.a);\n" + "}\n";
return shader;
}
}
```
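Since the fragment shader clamps `matrix * color.rgb` to 1.0, the effect of the weights can be sanity-checked on the CPU. Here is a small JavaScript sketch; note that GLSL `mat3` indexing is column-major, so the `matrix[c][r]` assignments above mean row `r` of the product uses `weights[r]`, `weights[r + 3]`, and `weights[r + 6]`.

```javascript
// Sanity check of the sepia weights above, applied outside any GL context.
const w = [805, 715, 557, 1575, 1405, 1097, 387, 344, 268].map(x => x / 2048)

function sepia ([r, g, b]) {
  return [
    w[0] * r + w[3] * g + w[6] * b, // row 0 of matrix * rgb
    w[1] * r + w[4] * g + w[7] * b, // row 1
    w[2] * r + w[5] * g + w[8] * b, // row 2
  ].map(x => Math.min(x, 1))        // mirrors min(matrix * color.rgb, 1.0)
}

// A pure-white pixel keeps full red and green but loses some blue,
// which produces the warm sepia cast.
console.log(sepia([1, 1, 1]))
```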
|
Victoria Dillard (born September 20, 1967) is an American advocate for Parkinson's disease research. She is also a former television and film actress who is best known for her co-starring roles as Janelle Cooper in the ABC sitcom Spin City, as one of the royal bathers in the 1988 Eddie Murphy romantic comedy Coming to America, and as the wife of Denzel Washington's main character in the 1991 action thriller film Ricochet.
Life and career
Dillard was born in New York City. She began performing at the age of five with the Dance Theatre of Harlem. She worked with the company until she was eighteen, appearing in such productions as Porgy and Bess at the Metropolitan Opera. She then went on tour in A Funny Thing Happened on the Way to the Forum with Mickey Rooney.
Dillard's most notable television role was as Janelle Cooper in the ABC sitcom Spin City. She stayed on the show for four seasons before leaving in 2000. Her other television credits include Star Trek: The Next Generation, Seinfeld, Roc, L.A. Law, Chicago Hope, Martin, Moesha, Family Law, Law & Order, Law & Order: Criminal Intent and other series.
Some of her film credits are Coming to America (1988), Deep Cover (1992), The Glass Shield (1994), Internal Affairs (1990), Out-of-Sync (1995) with LL Cool J, and Ricochet (1991) with Denzel Washington. Dillard also appeared as Betty Shabazz in the 2001 film Ali, her last film to date.
Dillard was featured in the November 1988 Playboy issue in the article "Sex In Cinema 1988", referencing her brief topless appearance in the beginning of Coming to America.
Personal life
Dillard currently lives in New York City. She dances in her free time and writes screenplays and plays for the stage. Dillard dated actor Laurence Fishburne beginning in 1992, when the two met on the set of the film Deep Cover. Their relationship ended in 1995.
In 2005, at the age of 38, Dillard was diagnosed with Parkinson's disease. It is the same disease that afflicts her former Spin City co-star, actor Michael J. Fox. She has since become an advocate for Parkinson's disease research and treatments.
Filmography
Film
Television
References
External links
1967 births
20th-century American actresses
21st-century American actresses
Actresses from New York City
African-American actresses
American female dancers
American film actresses
American stage actresses
American television actresses
Living people
Dancers from New York (state)
20th-century African-American women
20th-century African-American people
21st-century African-American women
21st-century African-American people
People with Parkinson's disease
|
Novirhabdovirus is a genus of the family Rhabdoviridae containing viruses known to infect aquatic hosts. They can be transmitted from fish to fish or by waterborne virus, as well as through contaminated eggs. Replication and thermal inactivation temperatures are generally lower than for other rhabdoviruses, given the cold-blooded nature of their hosts. Hosts include a large and growing range of marine and freshwater fish.
A common characteristic among novirhabdoviruses is the NV gene, an approximately 500-nucleotide-long gene located between the glycoprotein (G) and polymerase (L) genes. The expected protein encoded by the NV gene is not found in the virions, leading to its being named a "nonvirion" (NV) protein. This is the origin of the genus name Novirhabdovirus.
References
Genus: Novirhabdovirus. ICTV Report. Academic Press. Retrieved on 2020-09-02.
ICTVdB Virus Description: 01.062.0.06. Novirhabdovirus. ICTVdB - The Universal Virus Database, version 3. Retrieved on 2007-07-15.
External links
Further reading
Fish viral diseases
Novirhabdoviruses
Virus genera
|
```javascript
module.exports = {
re: [
/^https?:\/\/vimeo\.com\/moogaloop\.swf\?clip_id=(\d+)/i
],
// direct link to old swf, for example,
// path_to_url
getLink: function(urlMatch, cb) {
cb({
redirect: "path_to_url" + urlMatch[1]
});
}
};
```
|
Euclidia tarsalis is a moth of the family Erebidae found in Sri Lanka.
References
Moths described in 1865
Euclidia
|
Oskar (Oscar) Hans Antze (24 October 1878 – 23 April 1962) was a German chess player.
Antze was born in Cologne, the son of a physician. After his Abitur he studied medicine at the University of Marburg, the University of Kiel, and the Humboldt University of Berlin, receiving a doctorate (Dr. med.). From 1900 to 1962 he ran a medical practice in Bremen.
He shared 1st with Hugo Süchting at Kiel 1900 (Quadrangular); took 4th at Hamburg 1905 (Quadrangular); took 4th at Bremen 1906 (Quadrangular); won at Leipzig 1913.
After World War I, Dr. Antze tied for 3rd–5th at Bad Oeynhausen 1922 (22nd DSB–Congress, Ehrhardt Post won); took 6th at Hannover 1926 (Aron Nimzowitsch won); drew a short match with Efim Bogoljubow (1 : 1) at Bremen 1927; tied for 8th–9th at Duisburg 1929 (26th DSB–Congress, Carl Ahues won); took 4th at Bremen 1933 (Quadrangular); and took 8th at Bad Aachen 1934 (2nd GER-ch, Carl Carls won). He died in Bremen in 1962.
References
1878 births
1962 deaths
German chess players
|
```glsl
precision highp float;
attribute vec3 a_cubeVertexPosition;
uniform vec3 u_translation;
uniform vec3 u_scale;
uniform mat4 u_viewMatrix;
uniform mat4 u_projectionMatrix;
varying vec3 v_cubePosition;
void main () {
v_cubePosition = a_cubeVertexPosition;
gl_Position = u_projectionMatrix * u_viewMatrix * vec4(a_cubeVertexPosition * u_scale + u_translation, 1.0);
}
```
|
Femoral pores are a part of a holocrine secretory gland found on the inside of the thighs of certain lizards and amphisbaenians which releases pheromones to attract mates or mark territory. In certain species only the male has these pores and in other species, both sexes have them, with the male's being larger. Femoral pores appear as a series of pits or holes within a row of scales on the ventral portion of the animal's thigh.
Femoral pores are present in all genera in the families Cordylidae, Crotaphytidae, Hoplocercidae, Iguanidae, Phrynosomatidae, and Xantusiidae. They are absent in all genera in the families Anguidae, Chamaeleonidae, Dibamidae, Helodermatidae, Scincidae, Xenosauridae, and Varanidae. Their presence in other lizards and amphisbaenians is quite variable: some geckos (Phelsuma, for example) have these pores, while others in the same family do not.
In the desert iguana (Dipsosaurus dorsalis), the waxy lipids released from the femoral pores absorb ultraviolet (UV) wavelengths, making them visible to species which can detect UV light. According to tests performed on the green iguana, the variation in the chemicals released by the femoral pores can help to determine the age, sex, and individual identity of the animal in question. Male leopard geckos (Eublepharis macularius) actually taste the secretions by flicking their tongues; if a male determines that the other gecko in question is a male, the two will fight.
In certain species such as geckos, the females lack femoral pores altogether. In most families of lizards that have femoral pores, notably the iguanids, both sexes have them, but the males' pores tend to be much larger than those of females of the same size and age. In these instances they are used as a marker for sexual dimorphism.
The number of femoral pores varies considerably among species. For example, the number of pores in male lizards of the family Lacertidae can range between zero (e.g. Meroles anchietae) and 32 (e.g. Gallotia galloti) per limb. Also, shrub-climbing species tend to have fewer femoral pores than species inhabiting other substrates (such as sandy and rocky substrate), suggesting a role of the environment on the evolution of the chemical signaling apparatus in lacertid lizards.
References
Reptile anatomy
|
```c++
#include <Analyzer/AggregationUtils.h>
#include <Analyzer/InDepthQueryTreeVisitor.h>
#include <Analyzer/FunctionNode.h>
#include <Analyzer/Utils.h>
namespace DB
{
namespace ErrorCodes
{
extern const int ILLEGAL_AGGREGATION;
}
namespace
{
class CollectAggregateFunctionNodesVisitor : public ConstInDepthQueryTreeVisitor<CollectAggregateFunctionNodesVisitor>
{
public:
explicit CollectAggregateFunctionNodesVisitor(QueryTreeNodes * aggregate_function_nodes_)
: aggregate_function_nodes(aggregate_function_nodes_)
{}
explicit CollectAggregateFunctionNodesVisitor(String assert_no_aggregates_place_message_)
: assert_no_aggregates_place_message(std::move(assert_no_aggregates_place_message_))
{}
explicit CollectAggregateFunctionNodesVisitor(bool only_check_)
: only_check(only_check_)
{}
void visitImpl(const QueryTreeNodePtr & node)
{
if (only_check && has_aggregate_functions)
return;
auto * function_node = node->as<FunctionNode>();
if (!function_node || !function_node->isAggregateFunction())
return;
if (!assert_no_aggregates_place_message.empty())
throw Exception(ErrorCodes::ILLEGAL_AGGREGATION,
"Aggregate function {} is found {} in query",
function_node->formatASTForErrorMessage(),
assert_no_aggregates_place_message);
if (aggregate_function_nodes)
aggregate_function_nodes->push_back(node);
has_aggregate_functions = true;
}
bool needChildVisit(const QueryTreeNodePtr &, const QueryTreeNodePtr & child_node) const
{
if (only_check && has_aggregate_functions)
return false;
auto child_node_type = child_node->getNodeType();
return !(child_node_type == QueryTreeNodeType::QUERY || child_node_type == QueryTreeNodeType::UNION);
}
bool hasAggregateFunctions() const
{
return has_aggregate_functions;
}
private:
String assert_no_aggregates_place_message;
QueryTreeNodes * aggregate_function_nodes = nullptr;
bool only_check = false;
bool has_aggregate_functions = false;
};
}
QueryTreeNodes collectAggregateFunctionNodes(const QueryTreeNodePtr & node)
{
QueryTreeNodes result;
CollectAggregateFunctionNodesVisitor visitor(&result);
visitor.visit(node);
return result;
}
void collectAggregateFunctionNodes(const QueryTreeNodePtr & node, QueryTreeNodes & result)
{
CollectAggregateFunctionNodesVisitor visitor(&result);
visitor.visit(node);
}
bool hasAggregateFunctionNodes(const QueryTreeNodePtr & node)
{
CollectAggregateFunctionNodesVisitor visitor(true /*only_check*/);
visitor.visit(node);
return visitor.hasAggregateFunctions();
}
void assertNoAggregateFunctionNodes(const QueryTreeNodePtr & node, const String & assert_no_aggregates_place_message)
{
CollectAggregateFunctionNodesVisitor visitor(assert_no_aggregates_place_message);
visitor.visit(node);
}
void assertNoGroupingFunctionNodes(const QueryTreeNodePtr & node, const String & assert_no_grouping_function_place_message)
{
assertNoFunctionNodes(node, "grouping", ErrorCodes::ILLEGAL_AGGREGATION, "GROUPING", assert_no_grouping_function_place_message);
}
}
```
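The visitor above folds three modes into one class: collecting aggregate nodes, asserting their absence, and a boolean existence check with early exit. The traversal logic can be sketched compactly in JavaScript over a plain object tree; the node shapes here are illustrative, not the ClickHouse query-tree API.

```javascript
// Compact analogue of the collector above over a plain object tree.
function collectAggregates (root, { onlyCheck = false } = {}) {
  const found = []
  const visit = (node) => {
    if (onlyCheck && found.length > 0) return // mirrors the only_check early exit
    if (node.kind === 'aggregateFunction') found.push(node)
    for (const child of node.children ?? []) {
      // mirrors needChildVisit: never descend into subqueries or unions
      if (child.kind === 'query' || child.kind === 'union') continue
      visit(child)
    }
  }
  visit(root)
  return found
}

const tree = {
  kind: 'query',
  children: [
    { kind: 'aggregateFunction', name: 'sum' },
    { kind: 'query', children: [{ kind: 'aggregateFunction', name: 'avg' }] },
  ],
}
console.log(collectAggregates(tree).length) // 1: the nested subquery's avg is skipped
```

Stopping at subquery and union boundaries is what keeps an outer query from claiming aggregates that belong to an inner scope.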
|
Michelangelo, nicknamed Mike or Mikey, is a superhero and one of the four main characters of the Teenage Mutant Ninja Turtles comics and all related media. Michelangelo is the most naturally gifted of the four brothers but prefers to have a good time rather than train. The most jocular and energetic of the team, he is shown to be rather immature; he is known for his wisecracks, quick-wit, optimism, and love of skateboarding and pizza. He is usually depicted wearing an orange eye mask. His signature weapons are a single or dual nunchaku, though he has also been portrayed using other weapons, such as a grappling hook, manriki-gusari, kusarigama, tonfa, and a three-section staff (in some action figures). He is commonly portrayed in media as speaking with a California accent.
Michelangelo was given a much bigger role in the 1987 cartoon series and subsequent series and films, directed at a younger audience, than in the more serious Mirage comic books, which were aimed at an older audience. He often coins most of the team's catchphrases, such as "Cowabunga!" and "Booyakasha!" in the 2012 series. Like all of the brothers, he is named after an Italian Renaissance artist; in this case, he is named after Michelangelo. His name was originally spelled "Michaelangelo" by the original creators, likely misspelling his namesake's name through confusion with "Michael". In the Mirage comics, all four of the Turtles wear red masks, but to tell them apart, he was given an orange mask in subsequent media.
Comic books
Mirage Comics
In the Mirage comic books, Michelangelo was initially depicted as fun-loving, carefree, and happy-go-lucky, and, while not as aggressive as Raphael, always ready to fight. He is much more serious-natured in the comic book than in the film incarnations, which cast his character as a perpetually "dude"-talking teenager. It was Michelangelo's one-shot in this series that fleshed out most of the traits that have become synonymous with the character, such as his playfulness, empathy, and easygoing nature. In the one-shot story, Michelangelo adopts a stray cat (which he names Klunk) and also stops thieves from stealing toys meant for orphaned children.
After their defeat at the hands of the Foot Clan, the Turtles, Splinter, April O'Neil, and Casey Jones retreat to a farmhouse in Northampton, Massachusetts which used to belong to Casey's grandmother. While there, April is worried to note that Michelangelo is not himself. He spends his days in the barn taking out his aggression on a punching bag. A scene shows him lashing out at his surroundings and repeatedly punching the wall of the barn until it breaks, then collapsing on it despondently, anger spent. The end of the story implies that Michelangelo's sorrow and frustration have been resolved, as subsequent issues restore Michelangelo's more relaxed, optimistic personality.
It is during the group's time at the farm we learn that Michelangelo also has an interest in comic books, specifically ones involving superheroes such as "The Justice Force" (comic book heroes based on The Justice League and The Fantastic Four). He also finds solace in writing fiction and has produced a story depicting himself as a rōnin in Feudal Japan (vol. 1 Issue #17).
In the story arc City at War, Michelangelo instantly bonds with Casey Jones' adopted daughter Shadow, who nicknames him "Rooish". In the second volume, the Turtles decide to try to live apart from one another. Michelangelo, the social creature that he is, moves in with April and Casey so that he can be close to Shadow. Throughout the first two volumes, Michelangelo seemed to act as the peacemaker of the team. These stories also laid the foundations which demonstrated his closeness with his brother Donatello, their laid-back natures separating them from the more contentious Leonardo and Raphael.
In Volume 4, Michelangelo gets a job as a tour guide showing alien visitors around Earth. His first (and only) tourist is the Regenta (or "princess") Seri of the Styracodon race. Michelangelo convinces Seri to sneak away from her bodyguards so that he can take her on a tour of the northwest coast of the US. Their relationship becomes more intimate, and Seri delivers eggs that will soon hatch into children. Unfortunately, before he has much time to come to terms with this news, Seri's bodyguards become aware of his machinations. They attack Michelangelo and transport him back to their homeworld, where he is placed in prison, kept away from Seri and their unborn children. With the help of a Triceraton prisoner named Azokk, he manages to escape and is rescued by a group of Triceratons who came to rescue Azokk. The ordeal results in Mikey's personality taking a very dark turn, having been hardened by the cruelty of the extrajudicial prison and Azokk's death. Upon the Triceratons learning of Azokk's death, they declare war on the Styracodons and carry out a genocidal assault against them. Michelangelo, embittered by Seri's apparent betrayal, joins forces with the Triceratons and gladly aids them in acts of genocide against the Styracodons.
Michelangelo was not given an especially large role in Volumes 1 and 2, did little to advance the plot and was often not portrayed as an especially skilled fighter. His relatively small role was probably due to the need to establish Leonardo's role as "leader" along with the fact that Donatello was Peter Laird's favorite Turtle, and Raphael was Kevin Eastman's favorite.
This incarnation of Michelangelo appeared in the Turtles Forever crossover special voiced by Bradford Cameron.
Image Comics
In the comics published by Image Comics, retroactively setting the Mirage Comics series in the Image Universe, Michelangelo's interest in writing is expanded upon and he is established as a writer of fiction and poetry. During this series, Michelangelo develops a romantic relationship with Horridus, whom he credits as his muse in writing. But the relationship would not last (as the story may have begun after Rapture's death): she started staying with Officer Dragon and developed an attraction to him, though Dragon did not know it himself. Michelangelo was heartbroken when Sara dumped him. An early issue has him selling his first poem to a poetry digest. As the comic continued, Michelangelo's career as a writer gradually expanded. In the final issue, he has published his first novel, a romance called "A Rose Among the Thorns". April mentions that the book was already going back for a second printing and that she had heard that Oprah Winfrey loved it, which "practically guarantees that it will be a best-seller." Michelangelo is the only Turtle who did not end up disfigured in some way in this series; Leonardo lost his left hand and had it replaced by a steel cap with a retractable blade, Raphael was facially disfigured after being shot in the face, and Donatello was transformed into a cyborg after being shot and thrown out of a helicopter. Mikey asked Sara for her hand in marriage but was denied. Michelangelo was last seen arriving at the Northampton farmhouse with his family in the final issue.
Archie Comics
In the Archie Comics series, Michelangelo was initially presented very similarly to his 1987 cartoon portrayal, understandably, considering that the comic started as an adaptation of the popular animated series. As the series progressed, Archie's "Michaelangelo" was presented as more mature than the cartoon version, and this version also developed an interest in poetry. During a battle, he was temporarily blinded and later captured by the US military, whereupon he was interrogated and tortured. He was eventually rescued by his family and saved the life of the man who tortured him.
One of his many skills in the Archie comics was the ability to communicate with animals. In a storyline set in the future, Michelangelo is shown to have become an artist whose main job is running an orphanage.
IDW Comics
Michelangelo, like his brothers, was reincarnated into a turtle after being slain by Oroku Saki. He is also the youngest turtle, shown in his human form as the only baby among his brothers. He is more mature in this incarnation than in others, but still sociable, bonding for example with the Mighty Mutanimals and with pizza boy Woody Dirkins. Michelangelo's friendship with Woody waned, however: in Teenage Mutant Ninja Turtles Issue 15, after Slash frightened Woody, Woody left a note for Michelangelo stating that he could not handle the troubles that followed the Turtles. Michelangelo is good friends with all the Mutanimals but can be unsure about Old Hob, remembering how Hob had been their enemy. He was also occasionally anxious about them, showing concern in Issue 68, in which Hob told the Turtles that Slash had been mind-controlled, had taken out all the Mutanimals, and had helped capture them for the Earth Protection Force, and again in Issue 70, in which Michelangelo wondered whether Slash would be okay.

Michelangelo is also the most sensitive turtle: in Issue 50, for example, he could not handle Splinter accepting the leadership of the Foot Clan. In this incarnation, Michelangelo is romantically interested in the Neutrino Princess Trib rather than Kala from the 1987 TV series. His kindness found him supporting Donatello's ideas for just about everything, as when Raphael introduced Donatello's tracking device (embedded in a shuriken) in the "Secret History of the Foot Clan". The after-effects of the war left Michelangelo distraught, as he had been harsh with Raphael while they were mourning Donnie's injuries. He also shows signs of intelligence when he decodes the ancient Japanese text in the "Ashi no Himitsu" book that tells the secrets of the Foot Clan. Furthermore, he was not afraid to confront Splinter when Splinter had taken the orphans with him, to prevent them from training as Foot Ninja. Eventually, he convinced Splinter to listen to him and won their debate. Nonetheless, Michelangelo is still as fun-loving a turtle as in other bodacious versions of himself.
Michelangelo is also the titular character of the miniseries Teenage Mutant Ninja Turtles: The Last Ronin. Set in a dark future where New York has been conquered by the Shredder's grandson Hiroto and the Foot Clan, Michelangelo is the last of the turtles alive; Raphael drowned trying to kill Karai after her attack nearly killed Splinter (who was left in a permanent coma by the same attack), Leonardo and Casey Jones were blown up by Baxter Stockman's Mousers helping their allies escape his attack, and Donatello and Splinter were betrayed and assassinated during attempted peace talks. Michelangelo returns to New York after years of training with his brothers' weapons, but his initial assault fails and he nearly commits seppuku from shame, only surviving because he passed out due to blood loss before he could finish the job. He was rescued by the still-alive April O'Neil and her daughter Casey, joining forces with their resistance movement and becoming a sensei figure to Casey, who has a degree of physical enhancement as she inherited mutagen from her parents' constant exposure to the Turtles. Mikey ultimately kills Hiroto in a final duel at the cost of his own life. He leaves his journal with April and Casey, who are shown making plans to mutate a new quartet of turtles to continue the legacy of their fallen friends.
Television
1987 animated series
Michelangelo's persona became strongly established in the 1987 animated series. He was often seen as a "Party Dude", a label that, though accurate for the 1987 series (whose theme song gave him the title), accounts for only part of his personality. As the "party dude", he usually did not have much input in the team's plans, although he was still just as active as his brothers. He typically spent much of his time joking and socializing with other characters. He is most associated with the expression "Cowabunga", which became a pop culture phenomenon.
Michelangelo had a fondness for pizza even beyond that of the other Turtles; in the Season 3 episode "Cowabunga Shredhead", his pizza cravings annoyed the others so much that Splinter hypnotized him into refusing and denouncing pizza whenever the very word was mentioned, although the hypnosis was lifted at the end of the episode. He was essentially a provider of comic relief, alongside Raphael.
Michelangelo began the series with his trademark nunchaku as his weapons, but the controversy surrounding the weapons in the United Kingdom led to scenes of their use being edited out of the local broadcast of the series. To compensate for this, the American showrunners dropped the nunchaku from the series entirely in the fourth season, replacing them with a grappling hook called the "Turtle Line" that served as Mikey's signature weapon for the rest of the show's run.
Michelangelo also received his distinctive voice, which has been imitated in other portrayals of him, in this series. Employing a "surfer slang" vocabulary, he customarily spoke with a unique hybrid of a Californian surfer accent (not unlike the speech of the Spicoli character from Fast Times at Ridgemont High) and what may have been a stereotypical "stoner" accent, though no reference is ever made to drugs in the series, and the voicing likely simply emphasized his laid-back and somewhat innocent attitude. In fact, Michelangelo appeared in a 1990 animated special, Cartoon All-Stars to the Rescue, alongside other famous cartoon characters, intended to inform children about the dangers of substance abuse. Additionally, all four Turtles were official "spokes-turtles" of the "Just Say No" anti-drug campaign, despite accusations that at least one of them exhibited characteristics of a stoner. During one of these anti-drug PSAs, Michelangelo suggests to a kid being tempted with marijuana that he should "get a pizza" to go with it before the idea is shot down by Donatello. Michelangelo's trademark phrase in this series is the famous "Cowabunga".
Michelangelo's voice actor was Townsend Coleman in the 1987 series and Johnny Castro in the 25th-anniversary movie Turtles Forever. Michelangelo also made a couple of appearances in the 2012 series in the episode "The Manhattan Project": he and the other turtles, along with Casey and April, are seen through a portal by their 2012 counterparts walking on a road, and he made a speaking cameo along with the other turtles at the end of the episode, when a space worm from the 2012 dimension started terrorizing the street. All four turtles see the worm and spring into action while shouting their famous catchphrase, "Cowabunga". Townsend Coleman reprised his role as Michelangelo for the cameo. This marked the first time in over 28 years that the 1987 TMNT cast returned to their roles, with the sole exception of Rob Paulsen, who had returned to the TMNT franchise as Donatello in the 2012 series. The 1987 turtles also had a crossover with the 2012 turtles in the season 4 episode "Trans-Dimensional Turtles" and then in the three-part series finale "Wanted: Bebop & Rocksteady".
Coming Out of Their Shells tour
The live-action "Coming Out of Their Shells" concert tour, whose initial show at Radio City Music Hall was released on pay-per-view and later VHS, saw Michelangelo take the role of the band's lead singer and guitarist. The character retained his fun-loving attitude and was the most vocal member of the band in the non-musical spoken segments. During the "Cowabunga" song portion of the show, Michelangelo claims that he sees Raphael as his very best friend and that Raphael wrote the music for all the Turtles' songs while Michelangelo wrote the lyrics. This is somewhat in keeping with the Mirage and Image comics' depictions of Michelangelo as a writer and poet. During the event, near the show's climax, when the Turtles begin to feel as though their efforts to defeat Shredder's plan to steal all the music in the world are doomed to failure, Michelangelo sings "Follow Your Heart", an exclusive ballad not included in the cassette or CD releases for the show. The song brings the Turtles around, and the team immediately begins making plans to defeat Shredder's machine and army, which ultimately succeed.

The Making of VHS tape, which acts as an "in-universe" supplement to the Radio City Music Hall show and treats the Turtles as real people rather than fictional characters, follows up on the idea that Michelangelo and Raphael had been the driving force behind the band and also indicates that they had stumbled onto the idea by pure accident. The two state in the video that they had simply been playing around with some sewer pipes and singing while waiting for a pizza to be delivered. Realizing they had been making music and finding they enjoyed the experience, they continued to pursue music as a hobby and eventually brought Leonardo and Donatello in on it, leading to the band depicted in the concerts.
1997 live-action series
In the live-action series, Ninja Turtles: The Next Mutation, Michelangelo was played by Jarred Blancard, and voiced by Kirby Morrow. In the crossover episode with Power Rangers in Space, "Shell Shocked", Michelangelo is voiced by Tony Oliver instead.
2003 animated series
In the 2003 TV series, Michelangelo is voiced by Wayne Grayson and speaks with a California accent. Known as "Mikey" to his brothers, his personality is more akin to the Mirage comics than to the 1987 show. Still the comic relief, he often makes statements that spoof pop culture, although he uses less surfer slang than in the 1987 cartoon. His trademark nunchaku are once again his primary weapons, but he has used other weapons, such as grappling hooks and those of his brothers. He is slightly more immature than in the Mirage comics, particularly apparent in his high-pitched scream, but he undergoes character development and becomes more mature as the series progresses. Unlike other incarnations, he was often more reluctant to fight, and he likes to tease and annoy his older brothers, especially Raphael, to whom Michelangelo is the foil. In fact, a running gag is that whenever Michelangelo says or does something stupid, usually involving a catchphrase from the 1987 show, one of his brothers (usually Raphael) will slap him on the head. Other characters, such as Master Splinter and the Ancient One, have picked up this habit, usually when he disobeys. In the Fast Forward episode "Timing is Everything", when Michelangelo was talking too much, Splinter flipped over the seat he was sitting on, causing him to fall to the ground, replying afterwards that "Somebody had to do it. It was... time."
Raphael and Michelangelo mostly have a love-hate relationship in which Michelangelo frequently antagonizes Raphael (it especially hurts Raphael's pride that Michelangelo has bested him more than once, both times because Raphael's anger got in the way), but Raphael shows that he cares about him whenever he is in danger. Michelangelo also seems to be very close to Donatello, and the two are often paired together when Leonardo and Raphael are either arguing or training. Mikey is often the one most enthusiastic about Donatello's newest finished invention and will jump straight into "helping" Don with its first test run, though his "help" is not greatly appreciated by the latter and often generates more trouble. At one point in Season 2, Donatello even invented a hovercraft (a flying skateboard) for Michelangelo as a means of keeping him quiet while Donatello himself worked. Even though his older brothers are irritated with him, he is still a lovable baby brother to them. When Mikey was a kid, he and his brothers went with Master Splinter to Japan to bury Master Yoshi's ashes next to his beloved. While there, they helped Master Splinter and the Ancient One win a fight against a ghost sent by the Foot Mystics to revive the Demon Shredder, and that is when they gained their ninja masks. This happens in the episode "Fathers and Sons".
Michelangelo is the youngest brother, described by the Ancient One as "the one with the brightest fire", the one with the most potential. Master Splinter also claims a few times that Michelangelo possesses the most "raw talent" of all the brothers. However, both Master Splinter and the Ancient One say that because of Michelangelo's lack of focus and interest in training, he will probably never meet his full potential. Michelangelo also says many times in this series that he does not wish to be as serious and focused as his eldest brother, Leonardo, and he tells his other brothers to "chillax" and not be so serious all the time.
His agility and speed also play a bigger role than in the comics or other TV series and movies. In training runs and fights, Michelangelo is clearly the fastest of the four, with his brothers constantly having to catch up with him. His speed also features in the first season when Michelangelo and Raphael fight: Michelangelo keeps taunting Raphael, but because of his speed, Raphael cannot tackle him. This is shown more than once. His speed was the main reason he won the Battle Nexus, along with his ability to "get under people's skin" and taunt them. He is also portrayed as the best gymnast of the four, partly because Master Splinter always sends him off to do backflips or extra training as punishment for goofing off or losing focus. Both of these abilities allow Michelangelo to taunt his opponents and beat them quickly without getting hurt in the process.
He has shown innocent empathy for others as shown in particular by his adoption of Klunk the stray kitten, to whom he is very close and also by his relationship with Leatherhead. It is his initial awareness of Leatherhead's humanity which ends up forging the bond between the crocodile and the other Turtles. He enjoys Leatherhead's company, although he can tease him on occasion against his better judgment. However, he cares greatly for the crocodile and is quick and willing to forgive and reassure him when a rampaging Leatherhead injures him in a blind, nightmarish rage in the episode 'Hunted.' Leatherhead also appears to care greatly for Michelangelo and is distraught when he believes he has fatally injured him, but delighted to discover that his fears are unfounded when he finds out that the turtle is alive and well.
As in the Mirage comics, Michelangelo is an avid fan of comic book superheroes. In some episodes, he takes on the role of a costumed superhero called "Turtle Titan" and befriends other superheroes such as the "Silver Sentry" and the "Justice Force". As Turtle Titan, Michelangelo uses grappling hooks as both a weapon and a mode of transportation.
Although not shown as particularly focused on ninjutsu, preferring to spend his time reading comics or watching movies, he is quite an effective fighter. In the Season 2 finale, he became the Battle Nexus Champion, considered the best fighter in the multiverse, but his initial victory was due in part to several very lucky breaks (with Raphael even referring to it as "sheer dumb luck, emphasis on the 'dumb' part"), including Splinter withdrawing to allow his sons to progress and Leonardo being eliminated by poisoning (although Mikey did defeat Raphael, and Donatello was knocked out earlier). Michelangelo later won a rematch against the last finalist and earned a medal of honor for his behavior during the battle, his ninjutsu prowess being spurred when the opponent revealed that he intended to kill Raphael, Leonardo, and Donatello as well after killing Michelangelo in battle, prompting Michelangelo to recall Leonardo's words that if one of them went down, all of them would go down.
In the crossover movie Turtles Forever, Mikey is the only Turtle who initially likes their comedic 1987 counterparts (he is especially fascinated by the initials on their belt buckles). However, he eventually gets tired of their laid-back attitudes and yells at them when they do not take the 2003 incarnation of Shredder seriously, proving that even this version of Mikey has his limits when it comes to fooling around.
In early profiles of the 2003 animated series, Michelangelo is regarded as both the most athletic of the four and the one possessing the single greatest potential in the martial arts, although his lack of mental focus on training prevents him from reaching his fullest potential (these are admittedly old profiles and may reflect early plans for the character, though he has certainly proven he is an effective fighter when need be). This profile statement was repeated in Michelangelo's profile in the lead-up to the debut of the Fast Forward season, which began airing on July 29, 2006.
2012 animated series
In the new 2012 series, Michelangelo uses both a kusarigama and nunchaku. His character design was updated as well, making him slightly shorter than his brothers and giving him dark freckles as well as shorter tails on his mask. Mikey is voiced by Greg Cipes.
Mikey is the youngest brother and is prone to goofing off rather than focusing on his training. However, he is still an impressive fighter. His natural affinity for the martial arts is present in his ability to learn moves after seeing them only a few times. He also has the ability to fight without "thinking", once deflecting Splinter's blows while listening to music with his eyes closed, something that Donatello had to learn (although he was unable to do this when directly asked to fight without thinking in their first fight with the telepathic Victor Falco). Mikey also has a rare ability to sense something off in people and mutants, shown with a Kraang robot of April's mother, dubbed the "Mom-Thing", and with two of the Mighty Mutanimals, Slash and Dr. Rockwell, after they escape capture from the Shredder.
Mikey seems to respect Leonardo, and although there are few scenes of them interacting in the first two seasons, it is clear that Leonardo cares about his youngest brother. In season three, however, the two of them go on missions together. While looking for Karai one evening, they worked together to rescue their brothers, April, and Casey from Bebop and Rocksteady. After Karai is captured by Shredder, Leo volunteers to go solo to find a way to help her; Mikey tags along and assures Leo that he is there for him. When Mikey gets eaten by the MegaShredder, Leo fights with all of his might to save him, and after Mikey escapes from inside the monster, Leo is overjoyed to see him alive and proud of the work they have done.

Raphael is normally seen as the one Mikey loves to hang out with, usually to Raph's ire. Although Raph repeatedly gives Mikey a beat-down for his antics (usually in the form of slapping him to make him shut up), he occasionally shows a more sensitive side when Mikey is feeling down. Particularly at the end of season 2 and the first half of season 3, Raphael is much kinder to Michelangelo and often refers to him affectionately as "little brother". He is also less inclined to slap Mikey, instead flicking him with a finger as a sort of "warning".

Of all his brothers, Mikey seems to be the closest to Donatello. This probably stems from the fact that, although they are very different, they both share a curiosity about things outside ninjutsu. Mikey greatly admires Donnie's new gadgets and is normally the first to test them out. Mikey additionally enjoys creating names for all the mutated forms of villains in the series, coining ones like "Dogpound" and "Fishface". He is even upset with Donnie for deciding on a name (Newtralizer) before Mikey gets the chance, and Donatello yells at Mikey when he attempts to name the mutated Kirby O'Neil "Wingnut": "You are NOT giving Mr. O'Neil a monster name!"
Although Mikey is not the fastest of mind and has a tendency to make mistakes, it is very clear that all of his brothers care very deeply for him. They all show great distress when he gets hurt or is in trouble, and anger at whoever threatens him, for example when Baxter Stockman, in his more advanced armor, throws Mikey around. More often than not, Raphael is the one to shout Mikey's name first and attack whoever has harmed his little brother, and he was distraught when Mikey was injured during Raphael's brief period as leader of the team.
Master Splinter has yet to reveal what it is Michelangelo must improve on, other than a need to focus. In the first Nickelodeon TMNT comic, Master Splinter states that Mikey has more raw talent than his three brothers combined; in a later episode, he says that Mikey has his "challenges" too. Even if Mikey gives his brothers grief, they will always love him and have his back. It is heavily implied that this version of Mikey has some form of ADHD, and his original bio even stated as much.
Unlike past incarnations, Mikey does not say his famous "Cowabunga" catchphrase, but instead says "BOOYAKASHA!" He usually does this when accomplishing something or attacking an enemy. In the episode "Meet Mondo Gecko" the character Jason (AKA Mondo Gecko) uses 'cowabunga' as his own catchphrase, and Michelangelo says it is "too old school" but later asks if he can use it from time to time. In the episode "Journey to the Center of Mikey's Mind", he says "Booyakabunga" (a mixture of both Booyakasha and Cowabunga).
Michelangelo also has an affinity for animals in this incarnation, as well as a fear of squirrels. Whilst he is more interested in comics and cartoon shows (such as Crognard the Barbarian, a parody of Thundarr the Barbarian, and Super Robo Mecha Force Five, a parody of both Voltron and Super Robot Monkey Team Hyperforce Go!, the latter of which was also created by the series creator Ciro Nieli) than animals, he adopted a cat April found on the street after the cat ate some mutagen Mikey had spilled ice cream in, resulting in the adorable and delicious Ice Cream Kitty, most likely this incarnation's version of Klunk. Michelangelo's relationship with Ice Cream Kitty parallels Raphael's with pre-mutation Spike, wherein he will tell Ice Cream Kitty his innermost feelings and fears and not feel judged for them, as his brothers often find Mikey's concerns and thoughts too silly to be worth their time, though said concerns end up being true quite often, such as the existence of the Kraang in "Rise of the Turtles Pt1". Bearing this in mind, however, Michelangelo does not seem to have any special relationships with animals as he has in other incarnations, though he still cares for animals deeply, as exhibited by his caring for April's chickens whilst the Turtles are at April's old farmhouse at the start of season 3. In season 4, when traveling with his brothers, April, Casey, and a robot called the Fugitoid, Mikey's mind is invaded by microscopic aliens, but his brothers chase the robots through a strange world in Mikey's subconscious and defeat them, returning Mikey's mind to normal.
2018 animated series
In the 2018 animated series, Rise of the Teenage Mutant Ninja Turtles, Brandon Mychal Smith voices Michelangelo, the youngest brother, depicted as a 13-year-old preteen and described as "an artist and awesome skateboarder with a wild, colorful, and imaginative personality." This incarnation of Michelangelo lacks a surfer accent and likes to refer to himself as the artist of the group. He possesses a more childlike naivety and also has a strong passion for cooking.
Michelangelo uses a mystic kusari-fundo, the fiery end of which features a spooky face. With it, he is able to lift and throw massive objects, such as docked ships and vehicles, in a battle against the Shredder. Mikey often uses the weapon to show his agility, swinging around the city with it as a grappling hook, and he uses it in combination attacks with his ninja brothers, swinging them toward an opponent at high speed so they can deliver a swift blow.
In this series, Michelangelo's character design is based on a box turtle, and he is one of the only brothers able to retract his limbs into his shell. This can be seen in the first episode, "Mystic Mayhem", when Raphael grabs his shell and throws him.
Michelangelo often acts as the peacemaker, and it is very important to him that his family and extended family be on good terms with one another. This can be seen when he invites his brothers and father Splinter to a meal in the hopes that they will reconcile with Draxum.
In Rise of the Teenage Mutant Ninja Turtles: The Movie, Mikey mostly takes a backseat, looking out for Donnie and trying to unlock the mystic powers that his future self used to send Casey Jones back in time. The effort pays off when Mikey unlocks the powers and uses them to rescue Leonardo from the prison dimension.
Movies
Original trilogy (1990–1993)
Michelangelo is depicted in the live-action movies as the easy-going, free-spirited turtle. His movie catchphrases include "I love being a turtle!" and "Cowabunga!" Owing to his popularity with children, he is given many lines and comes up with several (slightly outrageous) plans to advance the plot. He speaks with a distinctive California accent (which was imitated in later versions of TMNT). In the first movie, he and Donatello were regularly paired together while Leonardo and Raphael were arguing. In the first movie and the first sequel, Teenage Mutant Ninja Turtles II: The Secret of the Ooze, he was portrayed by Michelan Sisti; in the second sequel, Teenage Mutant Ninja Turtles III, he was portrayed by David Fraser. In all three movies, he was voiced by The Brady Bunch alum Robbie Rist.
2007 film
In TMNT, Mikey has taken to performing at children's birthday parties as "Cowabunga Carl" to help make ends meet for his family. It becomes apparent early in the film that the physical and emotional absence of his older brothers has finally begun to affect the outgoing Michelangelo; when he returns home from work, announces his presence, and no one acknowledges him, he sighs, "Whatever... this place used to be fun." Unlike his other incarnations, the 2007 Mikey seems to draw the most emotional support from Donatello instead of his oldest brothers, Leonardo and Raphael. Upon Leonardo's return from Central America, Mikey gives his oldest brother an enthusiastic hug, falling over the couch and tripping over furniture in his excitement. Mikey's lively and innocent demeanor returns in full force when he is in the protective presence of his three brothers again, his good-natured jokes and brief commentaries lightening even hard situations. Clearly, despite the hardship his family has recently experienced, Michelangelo has retained much of his usual goofy, laid-back personality and remains the main source of comic relief. He is voiced by Mikey Kelley. He is a skilled skateboarder in this movie, able to complete many tricks underground.
Reboot (2014–2016)
Michelangelo appears in Teenage Mutant Ninja Turtles, portrayed by Noel Fisher. In the film, he is the youngest and the jokester of the group, who loves watching viral videos, playing video games, and skateboarding. Michelangelo tries to find humor in any situation, but he proves to be a great fighter if someone messes with him and his brothers. He is not afraid to express his true feelings and has a crush on April, who does not return his feelings. In the end, when the Turtles are leaving in their van after Mikey blows up Vern Fenwick's car, Mikey tries to impress April by singing The Turtles' "Happy Together". He appears in the sequel, Teenage Mutant Ninja Turtles: Out of the Shadows, with Fisher reprising the role.
DC crossover film
Michelangelo appears in the direct-to-video crossover film Batman vs. Teenage Mutant Ninja Turtles, voiced by Kyle Mooney.
Mutant Mayhem
Michelangelo appears in Teenage Mutant Ninja Turtles: Mutant Mayhem, the first computer-animated Teenage Mutant Ninja Turtles film since TMNT (2007), voiced by real-life teenager Shamon Brown Jr. Much like his other counterparts, he is a jokester and the most interested in being part of the real world. He becomes close friends with Mondo Gecko due to their similarly loose and carefree personalities.
Video games
In the video games based on the 1987 animated series, Michelangelo is virtually identical to Leonardo on every level except attack range. However, to reflect his flashy personality, he was changed and became the most agile Turtle in the video games based on the 2003 animated series while Raphael was the least skilled. In TMNT: Smash Up, he is voiced by Wayne Grayson.
Michelangelo is one of the main playable characters in Teenage Mutant Ninja Turtles: Out of the Shadows, where he is voiced by Pierce Cravens. Michelangelo also appears in the 2014 film-based game, voiced by Peter Oldring.
Michelangelo is featured as one of the playable characters from Teenage Mutant Ninja Turtles as DLC in Injustice 2, voiced by Ryan Cooper. While Leonardo is the default turtle, Michelangelo, Raphael, and Donatello can be selected in the same way as other premier-skin characters.
Michelangelo is featured as a TMNT season pass in Smite as a Mercury skin, voiced by Nick Landis. He is also available as a skin in Brawlhalla.
Michelangelo is also a main playable character in the sequel to Turtles in Time, titled Teenage Mutant Ninja Turtles: Shredder's Revenge. In the game, Michelangelo is now the fastest turtle, with low range and average power, as opposed to the original, in which he has average speed, average range, and high power. This is the first official Teenage Mutant Ninja Turtles game in which he is voiced by his original voice actor, Townsend Coleman.
Michelangelo also appeared in Nickelodeon All-Star Brawl, with Townsend Coleman reprising his role.
Spelling
The character's name was originally spelled as "Michaelangelo", with an additional "a". This spelling was used until 2001 with Volume 4 of the comic series from Mirage Studios when the spelling was officially changed to "Michelangelo". The 1996 anime also used the "Michelangelo" spelling. On the TMNT 2007 movie teaser poster featuring Michelangelo, the character's name is spelled "Michaelangelo", though the movie uses the proper spelling of the name in its credits.
Klunk
Klunk is Michelangelo's pet cat. He first appeared in the Michelangelo microseries and was killed after being hit by a car in Tales of the TMNT vol. 2, issue 9. Shortly after, the Turtles discovered that Klunk had mated with an alley cat and had kittens.
Klunk also appears in a few episodes of the 2003 cartoon, starting with "The Christmas Aliens". Based on the revamped character designs in the Back to the Sewer season, this version of Klunk appears to be female, whereas the Klunk from the Mirage comics was confirmed to be male.
References
External links
Michelangelo profile on official TMNT web site
Michelangelo Biography – TMNT Community Site
Who is the youngest Teenage Mutant Ninja Turtle?
Animal superheroes
Child superheroes
Comic martial artists
Comics characters introduced in 1984
Comics characters who can move at superhuman speeds
Comics characters with accelerated healing
Comics characters with superhuman strength
Fictional chain fighters
Fictional characters from New York City
Fictional characters with attention deficit hyperactivity disorder
Fictional chefs
Fictional comedians
Fictional kobudōka
Fictional mutants
Fictional ninja
Fictional Ninjutsu practitioners
Fictional nunchakuka
Fictional pacifists
Fictional poets
Fictional skateboarders
Fictional stick-fighters
Fictional surfers
Fictional turtles
Fictional writers
Fighting game characters
Male characters in comics
Superheroes who are adopted
Teenage characters in comics
Teenage characters in television
Teenage Mutant Ninja Turtles characters
Teenage superheroes
Vigilante characters in comics
|
```emacs-lisp
;;; scimax-org-radio-checkbox.el --- Org radio checkboxes
;; * radio checkboxes
;;; Commentary:
;; This library provides a radio checkbox for org-mode. A radio checkbox is a
;; checkbox list where only one box can be checked, i.e. a radio button choice.
;;
;; This code achieves that by using a C-c C-c hook to toggle the item checked,
;; and remove the others. To facilitate using the checked value in code,
;; `scimax-get-radio-list-value' is provided to get the value from a radio
;; checkbox list by its name.
;;; Code:
(defun scimax-in-radio-list-p ()
"Return radio list if in one, else nil."
(interactive)
(let* ((element (org-element-context))
(radio-list (cond
;; on an item. easy.
((and (eq 'item (car element))
(member
":radio"
(org-element-property
:attr_org
(org-element-property :parent element))))
(org-element-property :parent element))
;; on an item paragraph
((and (eq 'paragraph (car element))
(eq 'item (car (org-element-property :parent element)))
(member
":radio"
(org-element-property
:attr_org
(org-element-property
:parent
(org-element-property :parent element)))))
(org-element-property
:parent
(org-element-property :parent element)))
;; not on an item or item paragraph
(t
nil))))
radio-list))
(defun scimax-radio-CcCc ()
"Hook function for C-cC-c to work in radio checklists."
(interactive)
(let ((radio-list (scimax-in-radio-list-p))
(p (point)))
(when radio-list
;; clear all boxes
(save-excursion
(mapc (lambda (el)
(goto-char (car el))
(when (re-search-forward "\\[X\\]" (line-end-position) t)
(replace-match "[ ]")))
(org-element-property :structure radio-list))
;; Now figure out where to put the new X
(mapc (lambda (el)
(when (and (> p (car el))
(< p (car (last el))))
(goto-char (car el))
(when (re-search-forward "\\[ \\]" (line-end-position) t)
(replace-match "[X]"))))
(org-element-property :structure radio-list)))
;; Return t so `org-ctrl-c-ctrl-c-hook' stops here and C-c C-c is treated as handled.
t)))
(add-hook 'org-ctrl-c-ctrl-c-hook 'scimax-radio-CcCc)
;; this works with mouse checking.
(add-hook 'org-checkbox-statistics-hook 'scimax-radio-CcCc)
(defun scimax-org-get-plain-list (name)
"Get the org-element representation of a plain-list named NAME."
(catch 'found
(org-element-map
(org-element-parse-buffer)
'plain-list
(lambda (plain-list)
(when
(string= name (org-element-property :name plain-list))
(throw 'found plain-list))))))
(defun scimax-get-radio-list-value (name)
"Return the value of the checked item in a radio list named NAME."
(save-excursion
(cl-loop for el in (org-element-property
:structure
(scimax-org-get-plain-list name))
if (string= (nth 4 el) "[X]")
return (progn
(let ((item (buffer-substring (car el) (car (last el)))))
(string-match "\\[X\\]\\(.*\\)$" item)
(match-string-no-properties 1 item))))))
(provide 'scimax-org-radio-checkbox)
;;; scimax-org-radio-checkbox.el ends here
```
|
Wes Freed (April 25, 1964 – September 4, 2022) was an American artist working in an Americana idiom. His works appeared on album covers for Lauren Hoffman and numerous American rock bands, including Cracker and the Drive-By Truckers.
Early life
Freed was born in the Shenandoah Valley, Virginia, on April 25, 1964. During his high school years, he served as secretary of his school's Future Farmers of America chapter. Severely injured in a cattle chute accident while in high school, Freed spent his recovery drawing, developing the poetic country noir and southern gothic style that would later evoke the dark and lonesome figures of bootleggers, mechanics, and haunts grappling with a transitioning rural South of the 1970s. He considered moving to New York to become an artist, but instead relocated to Richmond, Virginia, in 1983 to study painting and printmaking at Virginia Commonwealth University. He remained in Richmond until his death.
Career
In addition to his art, Freed played in Dirt Ball, an alternative country band based in Richmond. He served as its lead singer starting in 1986. He also played with other local groups, such as the Shiners (a spin-off from Dirt Ball), Mudd Helmet, the Mutant Drones, and the MagBats. It was during his time with Dirt Ball and Mudd Helmet that Freed designed show posters, adopting an "outsider" style that would influence his later works.
Freed became acquainted with the Drive-By Truckers (DBT) in 1997, when they were invited to play in the first Capital City Barndance, a show designed to recall the long-running Old Dominion Barn Dance while adding folk and rock artists to traditional country. The event began a long working relationship between Freed and the band. As DBT grew in fame and sales, Freed's art adorned their backdrops, drum kits, and road cases, leading to his first album cover for the band, Southern Rock Opera. He ultimately designed ten album covers for the Truckers, and later identified the cover art of The Dirty South (2004) as his personal favorite. Freed worked in marker, watercolor, and acrylic paint, typically on wood. He continued to design posters, T-shirts, backdrops, and miscellaneous merchandise for DBT, as well as the artwork for the 2009 documentary about the band, The Secret to a Happy Ending. The final cover he designed for the band was for Welcome 2 Club XIII (2022).
Apart from his work with DBT, Freed also collaborated with Lauren Hoffman and Cracker, and painted prolifically on commission and as gifts for friends, family, and fans around the world. In 2019 he released The Art of Wes Freed – Paints, Posters, Pin-ups and Possums, a coffee-table book compiling his most notable works. Many friends and fans carry his art with them as tattoos.
Personal life
Freed was married three times, to Kat, Kak, and Jyl, and is survived by his partner, Jackie.
Freed died on September 4, 2022, at the age of 58, nine months after he was diagnosed with colorectal cancer.
References
Further reading
External links
1964 births
2022 deaths
American folk musicians
Album-cover and concert-poster artists
Artists from Virginia
Musicians from Virginia
Outsider artists
20th-century American people
21st-century American people
|
Bassane is a settlement in Senegal.
External links
PEPAM
Populated places in the Bignona Department
Arrondissement of Sindian
|
Gerardo Octavio Solís Gómez (born November 19, 1957 in Guadalajara, Jalisco) is a Mexican politician, member of the National Action Party who has served as substitute Governor of Jalisco.
Biography
Solís Gómez served in the cabinet of Francisco Javier Ramírez Acuña as Secretary General of Government. The Congress of Jalisco designated him substitute Governor of Jalisco when Ramírez Acuña left that position to serve in the cabinet of Felipe Calderón.
Solís took office as Governor on November 21, 2006.
References
1957 births
Living people
Governors of Jalisco
National Action Party (Mexico) politicians
Politicians from Guadalajara, Jalisco
|
1,2-Bis(dichlorophosphino)benzene is an organophosphorus compound with the formula C6H4(PCl2)2. A viscous colorless liquid, it is a precursor to chelating diphosphines of the type C6H4(PR2)2.
It is prepared from 1,2-dibromobenzene by sequential lithiation followed by treatment with (Et2N)2PCl (Et = ethyl), which affords C6H4[P(NEt2)2]2. This species is finally cleaved with hydrogen chloride:
C6H4[P(NEt2)2]2 + 8 HCl → C6H4(PCl2)2 + 4 Et2NH2Cl
Related compounds
1,2-Bis(dichlorophosphino)ethane
References
Phosphines
|
```go
package ledis
import (
"errors"
"regexp"
"github.com/siddontang/ledisdb/store"
)
var errDataType = errors.New("error data type")
var errMetaKey = errors.New("error meta key")
// Scan scans the data. If inclusive is true, the scan range is [cursor, +inf); otherwise (cursor, +inf).
func (db *DB) Scan(dataType DataType, cursor []byte, count int, inclusive bool, match string) ([][]byte, error) {
storeDataType, err := getDataStoreType(dataType)
if err != nil {
return nil, err
}
return db.scanGeneric(storeDataType, cursor, count, inclusive, match, false)
}
// RevScan scans the data in reverse. If inclusive is true, the scan range is (-inf, cursor]; otherwise (-inf, cursor).
func (db *DB) RevScan(dataType DataType, cursor []byte, count int, inclusive bool, match string) ([][]byte, error) {
storeDataType, err := getDataStoreType(dataType)
if err != nil {
return nil, err
}
return db.scanGeneric(storeDataType, cursor, count, inclusive, match, true)
}
func getDataStoreType(dataType DataType) (byte, error) {
var storeDataType byte
switch dataType {
case KV:
storeDataType = KVType
case LIST:
storeDataType = LMetaType
case HASH:
storeDataType = HSizeType
case SET:
storeDataType = SSizeType
case ZSET:
storeDataType = ZSizeType
default:
return 0, errDataType
}
return storeDataType, nil
}
func buildMatchRegexp(match string) (*regexp.Regexp, error) {
var err error
var r *regexp.Regexp
if len(match) > 0 {
if r, err = regexp.Compile(match); err != nil {
return nil, err
}
}
return r, nil
}
func (db *DB) buildScanIterator(minKey []byte, maxKey []byte, inclusive bool, reverse bool) *store.RangeLimitIterator {
tp := store.RangeOpen
if !reverse {
if inclusive {
tp = store.RangeROpen
}
} else {
if inclusive {
tp = store.RangeLOpen
}
}
var it *store.RangeLimitIterator
if !reverse {
it = db.bucket.RangeIterator(minKey, maxKey, tp)
} else {
it = db.bucket.RevRangeIterator(minKey, maxKey, tp)
}
return it
}
func (db *DB) buildScanKeyRange(storeDataType byte, key []byte, reverse bool) (minKey []byte, maxKey []byte, err error) {
if !reverse {
if minKey, err = db.encodeScanMinKey(storeDataType, key); err != nil {
return
}
if maxKey, err = db.encodeScanMaxKey(storeDataType, nil); err != nil {
return
}
} else {
if minKey, err = db.encodeScanMinKey(storeDataType, nil); err != nil {
return
}
if maxKey, err = db.encodeScanMaxKey(storeDataType, key); err != nil {
return
}
}
return
}
func checkScanCount(count int) int {
if count <= 0 {
count = defaultScanCount
}
return count
}
func (db *DB) scanGeneric(storeDataType byte, key []byte, count int,
inclusive bool, match string, reverse bool) ([][]byte, error) {
r, err := buildMatchRegexp(match)
if err != nil {
return nil, err
}
minKey, maxKey, err := db.buildScanKeyRange(storeDataType, key, reverse)
if err != nil {
return nil, err
}
count = checkScanCount(count)
it := db.buildScanIterator(minKey, maxKey, inclusive, reverse)
v := make([][]byte, 0, count)
for i := 0; it.Valid() && i < count; it.Next() {
if k, err := db.decodeScanKey(storeDataType, it.Key()); err != nil {
continue
} else if r != nil && !r.Match(k) {
continue
} else {
v = append(v, k)
i++
}
}
it.Close()
return v, nil
}
func (db *DB) encodeScanMinKey(storeDataType byte, key []byte) ([]byte, error) {
return db.encodeScanKey(storeDataType, key)
}
func (db *DB) encodeScanMaxKey(storeDataType byte, key []byte) ([]byte, error) {
if len(key) > 0 {
return db.encodeScanKey(storeDataType, key)
}
k, err := db.encodeScanKey(storeDataType, nil)
if err != nil {
return nil, err
}
k[len(k)-1] = storeDataType + 1
return k, nil
}
func (db *DB) encodeScanKey(storeDataType byte, key []byte) ([]byte, error) {
switch storeDataType {
case KVType:
return db.encodeKVKey(key), nil
case LMetaType:
return db.lEncodeMetaKey(key), nil
case HSizeType:
return db.hEncodeSizeKey(key), nil
case ZSizeType:
return db.zEncodeSizeKey(key), nil
case SSizeType:
return db.sEncodeSizeKey(key), nil
default:
return nil, errDataType
}
}
func (db *DB) decodeScanKey(storeDataType byte, ek []byte) (key []byte, err error) {
switch storeDataType {
case KVType:
key, err = db.decodeKVKey(ek)
case LMetaType:
key, err = db.lDecodeMetaKey(ek)
case HSizeType:
key, err = db.hDecodeSizeKey(ek)
case ZSizeType:
key, err = db.zDecodeSizeKey(ek)
case SSizeType:
key, err = db.sDecodeSizeKey(ek)
default:
err = errDataType
}
return
}
// for special data scan
func (db *DB) buildDataScanKeyRange(storeDataType byte, key []byte, cursor []byte, reverse bool) (minKey []byte, maxKey []byte, err error) {
if !reverse {
if minKey, err = db.encodeDataScanMinKey(storeDataType, key, cursor); err != nil {
return
}
if maxKey, err = db.encodeDataScanMaxKey(storeDataType, key, nil); err != nil {
return
}
} else {
if minKey, err = db.encodeDataScanMinKey(storeDataType, key, nil); err != nil {
return
}
if maxKey, err = db.encodeDataScanMaxKey(storeDataType, key, cursor); err != nil {
return
}
}
return
}
func (db *DB) encodeDataScanMinKey(storeDataType byte, key []byte, cursor []byte) ([]byte, error) {
return db.encodeDataScanKey(storeDataType, key, cursor)
}
func (db *DB) encodeDataScanMaxKey(storeDataType byte, key []byte, cursor []byte) ([]byte, error) {
if len(cursor) > 0 {
return db.encodeDataScanKey(storeDataType, key, cursor)
}
k, err := db.encodeDataScanKey(storeDataType, key, nil)
if err != nil {
return nil, err
}
// Here the last byte is the start separator; bump it to the stop separator.
k[len(k)-1] = k[len(k)-1] + 1
return k, nil
}
func (db *DB) encodeDataScanKey(storeDataType byte, key []byte, cursor []byte) ([]byte, error) {
switch storeDataType {
case HashType:
return db.hEncodeHashKey(key, cursor), nil
case ZSetType:
return db.zEncodeSetKey(key, cursor), nil
case SetType:
return db.sEncodeSetKey(key, cursor), nil
default:
return nil, errDataType
}
}
func (db *DB) buildDataScanIterator(storeDataType byte, key []byte, cursor []byte, count int,
inclusive bool, reverse bool) (*store.RangeLimitIterator, error) {
if err := checkKeySize(key); err != nil {
return nil, err
}
minKey, maxKey, err := db.buildDataScanKeyRange(storeDataType, key, cursor, reverse)
if err != nil {
return nil, err
}
it := db.buildScanIterator(minKey, maxKey, inclusive, reverse)
return it, nil
}
func (db *DB) hScanGeneric(key []byte, cursor []byte, count int, inclusive bool, match string, reverse bool) ([]FVPair, error) {
count = checkScanCount(count)
r, err := buildMatchRegexp(match)
if err != nil {
return nil, err
}
v := make([]FVPair, 0, count)
it, err := db.buildDataScanIterator(HashType, key, cursor, count, inclusive, reverse)
if err != nil {
return nil, err
}
defer it.Close()
for i := 0; it.Valid() && i < count; it.Next() {
_, f, err := db.hDecodeHashKey(it.Key())
if err != nil {
return nil, err
} else if r != nil && !r.Match(f) {
continue
}
v = append(v, FVPair{Field: f, Value: it.Value()})
i++
}
return v, nil
}
// HScan scans data for hash.
func (db *DB) HScan(key []byte, cursor []byte, count int, inclusive bool, match string) ([]FVPair, error) {
return db.hScanGeneric(key, cursor, count, inclusive, match, false)
}
// HRevScan reversed scans data for hash.
func (db *DB) HRevScan(key []byte, cursor []byte, count int, inclusive bool, match string) ([]FVPair, error) {
return db.hScanGeneric(key, cursor, count, inclusive, match, true)
}
func (db *DB) sScanGeneric(key []byte, cursor []byte, count int, inclusive bool, match string, reverse bool) ([][]byte, error) {
count = checkScanCount(count)
r, err := buildMatchRegexp(match)
if err != nil {
return nil, err
}
v := make([][]byte, 0, count)
it, err := db.buildDataScanIterator(SetType, key, cursor, count, inclusive, reverse)
if err != nil {
return nil, err
}
defer it.Close()
for i := 0; it.Valid() && i < count; it.Next() {
_, m, err := db.sDecodeSetKey(it.Key())
if err != nil {
return nil, err
} else if r != nil && !r.Match(m) {
continue
}
v = append(v, m)
i++
}
return v, nil
}
// SScan scans data for set.
func (db *DB) SScan(key []byte, cursor []byte, count int, inclusive bool, match string) ([][]byte, error) {
return db.sScanGeneric(key, cursor, count, inclusive, match, false)
}
// SRevScan scans data reversed for set.
func (db *DB) SRevScan(key []byte, cursor []byte, count int, inclusive bool, match string) ([][]byte, error) {
return db.sScanGeneric(key, cursor, count, inclusive, match, true)
}
func (db *DB) zScanGeneric(key []byte, cursor []byte, count int, inclusive bool, match string, reverse bool) ([]ScorePair, error) {
count = checkScanCount(count)
r, err := buildMatchRegexp(match)
if err != nil {
return nil, err
}
v := make([]ScorePair, 0, count)
it, err := db.buildDataScanIterator(ZSetType, key, cursor, count, inclusive, reverse)
if err != nil {
return nil, err
}
defer it.Close()
for i := 0; it.Valid() && i < count; it.Next() {
_, m, err := db.zDecodeSetKey(it.Key())
if err != nil {
return nil, err
} else if r != nil && !r.Match(m) {
continue
}
score, err := Int64(it.Value(), nil)
if err != nil {
return nil, err
}
v = append(v, ScorePair{Score: score, Member: m})
i++
}
return v, nil
}
// ZScan scans data for zset.
func (db *DB) ZScan(key []byte, cursor []byte, count int, inclusive bool, match string) ([]ScorePair, error) {
return db.zScanGeneric(key, cursor, count, inclusive, match, false)
}
// ZRevScan scans data reversed for zset.
func (db *DB) ZRevScan(key []byte, cursor []byte, count int, inclusive bool, match string) ([]ScorePair, error) {
return db.zScanGeneric(key, cursor, count, inclusive, match, true)
}
```
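The `encodeScanMaxKey` and `encodeDataScanMaxKey` functions above bound a prefix scan by incrementing the last byte of the encoded prefix, turning it into an exclusive upper bound. A minimal standalone sketch of that trick (the helper name is hypothetical; it ignores the 0xff-overflow case, which the code above can ignore because the bumped byte is a small type tag or separator):

```go
package main

import (
	"bytes"
	"fmt"
)

// prefixUpperBound returns a copy of prefix with its last byte incremented.
// Every key that starts with prefix sorts below the returned bound, so the
// half-open range [prefix, bound) covers exactly the keys sharing that prefix.
// Caveat: a trailing 0xff byte would wrap to 0x00; general-purpose code must
// handle that case.
func prefixUpperBound(prefix []byte) []byte {
	bound := append([]byte(nil), prefix...)
	bound[len(bound)-1]++
	return bound
}

func main() {
	lo := []byte("user:")
	hi := prefixUpperBound(lo)
	fmt.Printf("%q\n", hi)                                // "user;" -- ':'+1
	fmt.Println(bytes.Compare([]byte("user:42"), hi) < 0) // true: inside the range
	fmt.Println(bytes.Compare([]byte("uses"), hi) < 0)    // false: past the range
}
```

Passing [lo, hi) to a range iterator then yields only keys with the given prefix, which is exactly how the scan functions above restrict a scan to one data type or one hash/set/zset key.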
|
Lissonomimus auratopilosus is a species of beetle in the family Cerambycidae. It was described by Di Iorio in 1998.
References
Trachyderini
Beetles described in 1998
|
Back slang is an English coded language in which the written word is spoken phonemically backwards.
Usage
Back slang is thought to have originated in Victorian England. It was used mainly by market sellers, such as butchers and greengrocers, for private conversations behind their customers' backs and to pass off lower-quality goods to less-observant customers. The first published reference to it was in 1851, in Henry Mayhew's London Labour and the London Poor. Some back slang has entered Standard English: the term yob, for example, was originally back slang for "boy". Not every word reverses cleanly, however. English makes frequent use of diphthongs, which cannot simply be reversed, so the convention slightly alters the straightforward reversal. An example is trousers, whose diphthong ou is replaced with wo in the back slang form reswort.
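As a rough sketch of the idea in Go: back slang reverses phonemes, not letters, so naive string reversal is only an approximation, though it happens to give the right result for simple words such as boy → yob.

```go
package main

import "fmt"

// reverseWord returns the letters of a word in reverse order -- a crude
// approximation of back slang, which actually reverses phonemes rather
// than letters (so diphthongs and digraphs are not handled correctly).
func reverseWord(w string) string {
	r := []rune(w)
	for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
		r[i], r[j] = r[j], r[i]
	}
	return string(r)
}

func main() {
	fmt.Println(reverseWord("boy"))    // yob
	fmt.Println(reverseWord("police")) // ecilop
}
```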
Back slang is said to be used in prisons by inmates to make it more difficult for prison wardens to listen into prisoners' conversations and find out what is being said.
In 2010, back slang was reported to have been adopted for the sake of privacy on foreign tennis courts by the young English players Laura Robson and Heather Watson.
Other languages
Other languages have similar coded forms but reversing the order of syllables rather than phonemes. These include:
French verlan, in which e.g. français [fʁɑ̃sɛ] becomes céfran [sefʁɑ̃];
French louchébem, which relies on syllables inversion too, but also adds extra syllables;
Greek podana (e.g. the word βυζί becomes ζυβί);
isiXhosa and isiZulu Ilwimi/Ulwimi, used mostly by teenagers and often called "high school language";
Japanese tougo (倒語), where moras of a word are inverted and vowels sometimes become long vowels (hara, “stomach”, becomes raaha);
Romanian Totoiana, in which syllables of Romanian words are inverted so that other Romanian speakers can not understand it;
Lunfardo, a Spanish argot spoken in Argentina, includes words in vesre (from revés, literally "backwards");
Šatrovački, a Serbo-Croatian-Bosnian slang system;
19th century Swedish (e.g. the word fika, which means approximately "coffee break").
See also
Costermonger (British street vendors from whom back slang originates)
Pig Latin
References
External links
Victorian Web Article
John Burkardt's list of back slang words at Florida State University
English-based argots
English language in England
Language games
British slang
|
```go
// _ _
// __ _____ __ ___ ___ __ _| |_ ___
// \ \ /\ / / _ \/ _` \ \ / / |/ _` | __/ _ \
// \ V V / __/ (_| |\ V /| | (_| | || __/
// \_/\_/ \___|\__,_| \_/ |_|\__,_|\__\___|
//
//
// CONTACT: hello@weaviate.io
//
package cluster
import (
"context"
"github.com/sirupsen/logrus"
cmd "github.com/weaviate/weaviate/cluster/proto/api"
)
// LeaderWithID is used to return the current leader address and ID of the cluster.
// It may return empty strings if there is no current leader or the leader is unknown.
func (s *Raft) LeaderWithID() (string, string) {
addr, id := s.store.LeaderWithID()
return string(addr), string(id)
}
// Join adds a node with the given id and address to the cluster, forwarding
// the request to the current leader if this node is not the leader.
func (s *Raft) Join(ctx context.Context, id, addr string, voter bool) error {
s.log.WithFields(logrus.Fields{
"id": id,
"address": addr,
"voter": voter,
}).Debug("membership.join")
if s.store.IsLeader() {
return s.store.Join(id, addr, voter)
}
leader := s.store.Leader()
if leader == "" {
return s.leaderErr()
}
req := &cmd.JoinPeerRequest{Id: id, Address: addr, Voter: voter}
_, err := s.cl.Join(ctx, leader, req)
return err
}
// Remove deletes the node with the given id from the cluster, forwarding
// the request to the current leader if this node is not the leader.
func (s *Raft) Remove(ctx context.Context, id string) error {
s.log.WithField("id", id).Debug("membership.remove")
if s.store.IsLeader() {
return s.store.Remove(id)
}
leader := s.store.Leader()
if leader == "" {
return s.leaderErr()
}
req := &cmd.RemovePeerRequest{Id: id}
_, err := s.cl.Remove(ctx, leader, req)
return err
}
// Stats returns statistics about the underlying store.
func (s *Raft) Stats() map[string]any {
s.log.Debug("membership.stats")
return s.store.Stats()
}
```
|
The Beaver Pass Shelter is in North Cascades National Park, in the U.S. state of Washington. Constructed by the United States Forest Service in 1938, the shelter was inherited by the National Park Service when North Cascades National Park was dedicated in 1968. Beaver Pass Shelter was placed on the National Register of Historic Places in 1989.
Beaver Pass Shelter is a wood-framed structure, sheathed in wood shake siding on three sides, and open to the front which faces east. The shelter is wide at front and deep. The front roofline extends above the ridgeline, somewhat overhanging the back shed roof.
References
Park buildings and structures on the National Register of Historic Places in Washington (state)
Buildings and structures completed in 1938
Buildings and structures in Whatcom County, Washington
National Register of Historic Places in North Cascades National Park
National Register of Historic Places in Whatcom County, Washington
|
```shell
#!/bin/bash
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree. An additional grant
# of patent rights can be found in the PATENTS file in the same directory.
set -e
if [ -z "$GIT" ]
then
GIT="git"
fi
# Print out the colored progress info so that it can be brainlessly
# distinguished by users.
function title() {
echo -e "\033[1;32m$*\033[0m"
}
usage="Create new RocksDB version and prepare it for the release process\n"
usage+="USAGE: ./make_new_version.sh <version>"
# -- Pre-check
if [[ $# -lt 1 ]]; then
echo -e "$usage"
exit 1
fi
ROCKSDB_VERSION=$1
GIT_BRANCH=$($GIT rev-parse --abbrev-ref HEAD)
echo $GIT_BRANCH
if [ "$GIT_BRANCH" != "master" ]; then
echo "Error: Current branch is '$GIT_BRANCH', Please switch to master branch."
exit 1
fi
title "Adding new tag for this release ..."
BRANCH="$ROCKSDB_VERSION.fb"
$GIT checkout -b $BRANCH
# Setting up the proxy for remote repo access
title "Pushing new branch to remote repo ..."
git push origin --set-upstream $BRANCH
title "Branch $BRANCH is pushed to github;"
```
|
The Chicago Opera House was a theater complex in Chicago, Illinois, designed by the architectural firm of Cobb and Frost. The building took its cue from the Metropolitan Opera of New York as a mixed-use building: it housed both a theater and unrelated offices, the latter used to subsidize the cost of the theater. The theater itself was located in the middle of the complex, with office structures flanking each side. The entire complex was known as the "Chicago Opera House Block" and was located at the southwest corner of West Washington Avenue and North Clark Street.
The Chicago Opera House was opened to the public on August 18, 1885. The first performance in the new theater was of Hamlet starring Thomas W. Keene. From 1887 to 1890, the Chicago Opera House served as the official observation location for recording the climate of the city of Chicago by the National Weather Service.
The theater suffered a fire in December 1888, which mainly damaged portions of the roof. However, the roof was repaired, and most of the exterior of the building remained undamaged. During its existence, the Chicago Opera House was the site of the premiere of several successful musicals such as Sinbad and The Arabian Nights.
The last performance at the building was the stage play The Escape by Paul Armstrong (later made into a film, now lost, by D.W. Griffith in 1914). Demolition on The Chicago Opera House began May 5, 1913. The site is currently occupied by the Burnham Center (formerly known as the Conway Building), completed in 1915.
Construction
The idea for the Chicago Opera House came from Scottish-born newspaperman and financier David Henderson. Henderson "planned the scheme and the stock – 550,000 – was subscribed in six weeks. Thus Chicago had the first fireproof, steel constructed, electric lighted theatre in the country." The construction of the Chicago Opera House was one of the earliest examples of general contracting, run by George A. Fuller. Upon completion, the masonry-clad building was 10 stories tall.
References
1885 establishments in Illinois
1912 disestablishments in Illinois
Buildings and structures demolished in 1913
Demolished buildings and structures in Chicago
Demolished theatres in Illinois
Demolished music venues in the United States
Music venues completed in 1885
Music venues in Chicago
Opera houses in Illinois
Theatres in Chicago
Theatres completed in 1885
Former theatres in the United States
|
'Ibdis (, ‘Ibdis; ) was a Palestinian village in the Gaza Subdistrict, located northeast of Gaza City. It was situated on flat ground on the coastal plain at an elevation of above sea level, and bordered by a wadi that bore its name on its eastern side. In 1945, Ibdis had a population of 540 and a land area of 4,593 dunams, of which 18 dunams were built-up areas.
History
Tombs, dating to the sixth and seventh century CE, and Byzantine ceramics have been found here.
12th-century Crusader church endowments and land deeds mention Latin settlement in the village, calling it Hebde. 'Ibdis was also inhabited in the 15th century; Mamluk records mention its endowment as a waqf.
Ottoman era
Under the Ottomans, in the 1596 tax records, Ibdis was a village in the nahiya of Gaza, part of the Sanjak of Gaza, with a population of 35 households, an estimated 193 people, all Muslims. The villagers paid a fixed tax rate of 33.3% on various products, including wheat, barley, sesame, fruits, vineyards, beehives, and goats; a total of 8,100 akçe. Half of the revenue went to a waqf.
In 1838, it was noted as a village 'Abdis, located in the Gaza district.
Socin found from an official Ottoman village list from about 1870 that Ibdis had 12 houses and a population of 53, though the count included only men. Hartmann found that Abdis had 15 houses.
In 1882, the PEF's Survey of Western Palestine described it as a mid-sized village standing on open ground.
British Mandate era
During the British Mandate period, its houses were built of adobe brick and separated by narrow alleys. Toward the end of the Mandate period, new homes were constructed along the two roads that linked it with Majdal and the Jaffa Road. The village's Muslim community obtained water for domestic use from a deep well. However, because the number of drilled wells was limited, the residents relied largely on rainfall for their crops. Ibdis was well known in the Gaza region for its quality grains, including wheat, barley, and sorghum. In the later period, fruit trees were grown, including grapes, apricots, and oranges.
In the 1922 census of Palestine, conducted by the British Mandate authorities, ‘Abdis had a population of 319, all Muslims, increasing in the 1931 census to 425, still all Muslims, in 62 houses.
In the 1945 statistics Ibdis had a population of 540, all Muslims, with a total of 4,593 dunams of land, according to an official land and population survey. Of this, 149 dunams were used for plantations and irrigable land, 4,307 for cereals, while 18 dunams were built-up land.
1948 War, aftermath
The daily Palestinian newspaper Filastin reported in mid-February 1948 that Israeli forces arrived at Ibdis in three large vehicles on the evening of February 17. They were engaged by the local militia, and a clash ensued that went on for over an hour, until the attackers retreated to Negba. According to the account, none of the residents were injured.
On July 8, as the first truce of the 1948 Arab–Israeli War was about to end, Israel's Givati Brigade moved on the southern front to link up with Israeli forces in the Negev. Although they did not succeed in this mission, they managed to capture numerous villages in the area, including Ibdis. The Third Battalion of the brigade attacked the village at night, resulting in a "long battle" with two companies of the Egyptian Army stationed there. The Israelis "only finished cleaning the position by the hours of the morning", according to Haganah accounts. It is unclear whether the inhabitants of Ibdis were expelled at that time, but the Haganah claims military equipment was taken from the Egyptians.
Egyptian forces tried to recapture the village on July 10, but failed after suffering "heavy losses" against the Israeli forces stationed there. According to the Haganah, the second Israeli victory at Ibdis was a turning point in the Givati advance, since from then on the brigade's forces did not withdraw from a single position until the end of the war. There was another failed attempt to capture the village on July 12. Egyptian president Gamal Abdel Nasser, who was a junior officer on this front, recalled: "On the first day of the truce the enemy [Israeli forces] moved against the Arab village of 'Ibdis which interpenetrated our lines".
Following the war the area was incorporated into the State of Israel. Merkaz Shapira was established nearby in 1948 and cultivates some land near the village site, but was, by 1992, not on Ibdis lands.
See also
Depopulated Palestinian locations in Israel
References
Bibliography
Nasser, Gamal Abdel (1973). "Nasser's Memoirs of the First Palestine War". Journal of Palestine Studies 2, no. 2 (Winter 1973): 3–32.
External links
Welcome To 'Ibdis
Ibdis, Zochrot
Survey of Western Palestine, Map 16: IAA, Wikimedia commons
'Ibdis from the Khalil Sakakini Cultural Center
Arab villages depopulated during the 1948 Arab–Israeli War
District of Gaza
|
The "Virgin Islands March" is the regional anthem of the United States Virgin Islands. The song was composed by Sam Williams and U.S. Virgin Island native Alton Adams in the 1920s. It served as an unofficial regional anthem of the U.S. Virgin Islands until 1963, when it was officially recognized by Legislative Act.
The song itself is a brisk martial march, consisting of an introductory instrumental section followed by a very cheerful melody. The Guardian reporter Alex Marshall compared it favorably to some national anthems, suggesting that it was reminiscent of the music of the Disney film Mary Poppins.
Since the U.S. Virgin Islands is a U.S. insular territory, its national anthem remains "The Star-Spangled Banner". During international sporting events, however, only the "Virgin Islands March" is played.
Lyrics
On most occasions, the first verse followed by the last verse is sung.
References
External links
MIDI version
North American anthems
Anthems of insular areas of the United States
United States Virgin Islands culture
1920s songs
National anthems
|
National Highway 113 (NH 113) is a National Highway in North East India that connects Hawa Camp and Kibithu in Arunachal Pradesh. It is a secondary route of National Highway 13 and runs entirely within the state of Arunachal Pradesh. Kibithu lies at the last road head in the extreme northeast of India.
Route
NH113 connects Hawacamp, Hayuliang, Hawai and Kibithu in the state of Arunachal Pradesh in India.
Junctions
Terminal near Hawacamp.
See also
List of National Highways in India (by Highway Number)
National Highways Development Project
References
External links
NH 113 on OpenStreetMap
National highways in India
National Highways in Arunachal Pradesh
|
```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "path_to_url">
<html xmlns="path_to_url">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>MEGA 2.0</title>
<meta http-equiv="X-UA-Compatible" content="IE=Edge" />
<link rel="stylesheet" type="text/css" href="../css/style.css" />
<link rel="stylesheet" type="text/css" href="../css/dialogs.css" />
<link rel="stylesheet" type="text/css" href="../css/buttons.css" />
<link rel="stylesheet" type="text/css" href="../css/toast.css" />
<script type="text/javascript" src="../js/vendor/jquery-2.2.1.js"></script>
<script type="text/javascript" src="../js/vendor/jquery.jscrollpane.js"></script>
<script type="text/javascript" src="../js/vendor/jquery.qrcode.js"></script>
<script type="text/javascript" src="../js/vendor/qrcode.js"></script>
<!--<script type="text/javascript" src="../js/jquery.mousewheel.js"></script>
<script type="text/javascript" src="../js/jquery.tokeninput.js"></script>-->
<script type="text/javascript">
$(document).ready(function () {
function dialogPositioning() {
$('.fm-dialog').css('margin-top', '-' + $('.fm-dialog').outerHeight() / 2 + 'px');
}
function shareContactsScroll() {
$('.share-dialog-contacts').jScrollPane({ enableKeyboardNavigation: false, showArrows: true, arrowSize: 8, animateScroll: true });
}
dialogPositioning();
shareContactsScroll();
//$('.share-multiple-input').tokenInput([
// { id: 7, name: "Ruby" },
// { id: 11, name: "Python" },
// { id: 13, name: "JavaScript" },
// { id: 17, name: "ActionScript" },
// { id: 19, name: "Scheme" },
// { id: 23, name: "Lisp" },
// { id: 29, name: "C#" },
// { id: 31, name: "Fortran" },
// { id: 37, name: "Visual Basic" },
// { id: 41, name: "C" },
// { id: 43, name: "C++" },
// { id: 47, name: "Java" }
//], {
// theme: "mega"
// });
$('.share-dialog-permissions').unbind('click');
$('.share-dialog-permissions').bind('click', function () {
var m = $('.permissions-menu'),
scrollPos = 0;
if ($('.share-dialog-contacts .jspPane').length) // jQuery objects are always truthy; check .length instead
scrollPos = $('.share-dialog-contacts .jspPane').position().top;
m.removeClass('search-permissions');
if ($(this).attr('class').indexOf('active') == -1) {
var x = $(this).position().left + 50;
var y = $(this).position().top + 14 + scrollPos;
m.css({
'left': x,
'top': y
});
m.removeClass('hidden');
$(this).addClass('active');
} else {
m.addClass('hidden');
$(this).removeClass('active');
}
});
$('.permissions-icon').unbind('click');
$('.permissions-icon').bind('click', function () {
var m = $('.permissions-menu');
if ($(this).attr('class').indexOf('active') == -1) {
var x = $(this).position().left + 12;
var y = $(this).position().top + 8;
m.css({
'left': x,
'top': y
});
m.removeClass('hidden');
$(this).addClass('active');
m.addClass('search-permissions');
} else {
m.addClass('hidden');
$(this).removeClass('active');
m.removeClass('search-permissions');
}
});
$('.permissions-menu-item').unbind('click');
$('.permissions-menu-item').bind('click', function () {
if ($(this).attr('class').indexOf('active') == -1) {
$(this).parent().find('.permissions-menu-item.active').removeClass('active');
var p = $(this).attr('class').replace('permissions-menu-item', '').split(" ").join("");
if ($(this).attr('class').indexOf('search-permissions') == -1) {
$('.permissions-icon').removeClass('read-only read-and-write full-access');
$('.permissions-icon').addClass(p);
}
$(this).addClass('active');
} else
$(this).removeClass('active');
$('.permissions-menu').addClass('hidden');
$('.permissions-icon.active').removeClass('active');
$('.share-dialog-permissions.active').removeClass('active');
});
//Pending info block
$('.pending-indicator').bind('mouseover', function () {
var x = $(this).position().left,
y = $(this).position().top,
infoBlock = $('.share-pending-info'),
scrollPos = 0;
if ($('.share-dialog-contacts .jspPane').length) // jQuery objects are always truthy; check .length instead
scrollPos = $('.share-dialog-contacts .jspPane').position().top;
var infoHeight = infoBlock.outerHeight(); // declare locally to avoid an implicit global
infoBlock.css({
'left': x,
'top': y - infoHeight + scrollPos
});
infoBlock.fadeIn(200);
});
$('.pending-indicator').bind('mouseout', function () {
$('.share-pending-info').fadeOut(200);
});
//Personal message
$('.share-message textarea').bind('focus', function () {
var $this = $(this);
$('.share-message').addClass('active');
if ($this.val() == 'Include personal message...') {
$this.select();
window.setTimeout(function () {
$this.select();
}, 1);
function mouseUpHandler() {
$this.off("mouseup", mouseUpHandler);
return false;
}
$this.mouseup(mouseUpHandler);
}
});
$('.share-message textarea').bind('blur', function () {
var $this = $(this);
$('.share-message').removeClass('active');
});
function shareMessageResizing() {
var txt = $('.share-message textarea'),
txtHeight = txt.outerHeight(),
hiddenDiv = $('.share-message-hidden'),
pane = $('.share-message-scrolling'),
content = txt.val(),
api;
content = content.replace(/\n/g, '<br />');
hiddenDiv.html(content + '<br/>');
if (txtHeight != hiddenDiv.outerHeight()) {
txt.height(hiddenDiv.outerHeight());
if ($('.share-message-textarea').outerHeight() >= 50) {
pane.jScrollPane({ enableKeyboardNavigation: false, showArrows: true, arrowSize: 5 });
api = pane.data('jsp');
txt.blur();
txt.focus();
api.scrollByY(0);
}
else {
api = pane.data('jsp');
if (api) {
api.destroy();
txt.blur();
txt.focus();
}
}
}
}
$('.share-message textarea').on('keyup', function () {
shareMessageResizing();
});
$('.qr-dialog-label .dialog-feature-toggle').on('click', function () {
var me = $(this);
if (me.hasClass('toggle-on')) {
me.find('.dialog-feature-switch').animate({ marginLeft: '2px' }, 150, 'swing', function () {
me.removeClass('toggle-on');
});
}
else {
me.find('.dialog-feature-switch').animate({ marginLeft: '22px' }, 150, 'swing', function () {
me.addClass('toggle-on');
});
}
});
var QRoptions = {
width: 222,
height: 222,
correctLevel: QRErrorCorrectLevel.H, // high
foreground: '#D90007',
text: 'path_to_url' // closing quote was missing, which is a syntax error
};
// Render the QR code
$('.qr-icon-big').text('').qrcode(QRoptions);
});
</script>
</head>
<body id="bodyel" class="bottom-pages">
<div class="fm-dialog-overlay"></div>
<div class="fm-dialog qr-contact">
<div class="fm-dialog-header">
<div class="fm-dialog-title">MEGA Contact</div>
<div class="fm-dialog-close"></div>
<div class="clear"></div>
</div>
<div class="qr-dialog-content-block">
<div class="avatar-wrapper avatar">
<img src="path_to_url" />
</div>
<div class="qr-contact-name">Khaled Daifallah</div>
<div class="qr-contact-email">kd@mega.co.nz</div>
<div class="clear"></div>
<div class="default-red-button right links-button " id="qr-ctn-add">
<div class="big-btn-txt">Add Contact</div>
</div>
<div class="qr-ct-exist hidden">Contact exists in your contact list </div>
</div>
</div>
</body>
</html>
```
|
```javascript
module.exports = require('./data.json')
```
|
rEFInd is a boot manager for UEFI and EFI-based machines. It can be used to boot multiple operating systems that are installed on a single non-volatile device. It also provides a way to launch UEFI applications.
It was forked from the discontinued rEFIt in 2012, with 0.2.0 as its first release.
rEFInd supports the x86, x86-64, and AArch64 architectures.
Features
rEFInd has several features:
Automatic operating system detection.
Customisable OS launch options.
Graphical or text mode, with customisable themes.
Mac-specific features, including spoofing the boot process to enable secondary video chipsets on some Macs.
Linux-specific features, including autodetecting the EFI stub loader to boot the Linux kernel directly, and using fstab in lieu of a rEFInd configuration file for boot order.
Support for Secure Boot.
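As a hedged illustration of the configuration file mentioned above, a manual boot stanza in rEFInd's refind.conf might look like the following; the volume label, kernel path, and kernel options shown are assumptions, not taken from any particular system:

```
# Hypothetical refind.conf stanza booting a Linux kernel via its EFI stub
menuentry "Linux (EFI stub)" {
    icon     /EFI/refind/icons/os_linux.png
    volume   "EFI System Partition"
    loader   /vmlinuz-linux
    initrd   /initramfs-linux.img
    options  "root=/dev/sda2 rw quiet"
}
```

Such a stanza is only needed when the automatic detection described above is insufficient; otherwise rEFInd builds its menu without any configuration.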
Adoption
rEFInd is the default Unified Extensible Firmware Interface (UEFI) boot manager for TrueOS.
rEFInd is included in official repositories of major Linux distributions.
Development
GNU-EFI and TianoCore are supported as the main development platforms for writing binary UEFI applications in C that can be launched directly from the rEFInd GUI menu. Typical purposes of such an EFI application include fixing boot problems and programmatically modifying settings within the UEFI environment, tasks that would otherwise be performed from within the BIOS on a PC without UEFI.
rEFInd can be built with either GNU-EFI or TianoCore EDK2/UDK.
Fork
RefindPlus is a fork of rEFInd that adds several features and improvements for Mac devices, specifically the MacPro3,1 and MacPro5,1 and the equivalent Xserve models.
See also
GNU GRUB - Another boot loader for Unix-like systems
Comparison of boot loaders
References
Free boot loaders
Free system software
Macintosh firmware
Software using the BSD license
Software forks
|