The mobile proton in polyalanine peptides.
Ion mobility measurements have been performed for protonated polyalanine peptides (A10 + H+, A15 + H+, A20 + H+, A25 + H+, and A15NH2 + H+) as a function of temperature using a new high-temperature drift tube. Peaks due to helices and globules were found at room temperature for all peptides, except for A10 + H+ (where only the globule is present). As the temperature is increased, the helix and globule peaks broaden and merge to give a single narrow peak. This indicates that the two conformations interconvert rapidly at elevated temperatures. The positions of the merged peaks show that A15 + H+ and A15NH2 + H+ spend most of their time as globules when heated, while A20 + H+ and A25 + H+ spend most of their time as helices. The helix/globule transitions are almost certainly accompanied by intramolecular proton transfer, and so these results suggest that the proton becomes mobile (able to migrate freely along the backbone) at around 450 K. The peptides dissociate as the temperature is increased further to give predominantly the bn+, b(n-1)+, b(n-2)+, ... series of fragment ions. There is a correlation between the ease of fragmentation and the time spent in the helical conformation for the An + H+ peptides. Helix formation promotes dissociation because it pools the proton at the C-terminus, where it is required for dissociation to give the observed products. In addition to the helix and globule, an antiparallel helical dimer is observed for the larger peptides. The dimer can be collisionally dissociated by injection into the drift tube at elevated kinetic energies.
About me
I am a video game developer from Madrid who likes to dig into every aspect of video game creation. I love video games for their unexplored potential and the challenge they offer. I enjoy doing Game Jams with my friends whenever I can.
I like to swim, ride my mountain bike and, of course, play video games. I love listening to music, especially Jazz and Rock.
My passion for video games probably started later than it should have. My first console was the PlayStation 1, but I only got one when I was about 14 years old. I never had a memory card, so my only chance of completing a game was to go through it in a single session. Later on, I got a Nintendo DS, and that is when I started to see the true potential of video games and realised that I would like to spend my life making them.
I did not decide to study a degree in video games until the last moment, though. I had always thought about studying IT and then doing a master's degree in video games, but in the end I opted for the specialised degree because, although it could be riskier, it had a more artistic approach and provided a more varied profile.
Since university, I have been working at different companies and on various projects of my own, most of them with the crew of Cronista.
// Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "paddle/fluid/operators/distributed/request_handler_impl.h"
#include <iostream>
#include <string>
#include <vector>
#include "paddle/fluid/framework/data_type.h"
#include "paddle/fluid/framework/lod_tensor.h"
#include "paddle/fluid/framework/scope.h"
#include "paddle/fluid/framework/selected_rows.h"
#include "paddle/fluid/framework/variable_helper.h"
#include "paddle/fluid/operators/distributed/rpc_server.h"
#include "paddle/fluid/string/piece.h"
#include "paddle/fluid/string/printf.h"
#include "paddle/fluid/string/split.h"
#include "paddle/fluid/operators/distributed/async_sparse_param_update_recorder.h"
#include "paddle/fluid/operators/distributed/heart_beat_monitor.h"
#include "paddle/fluid/operators/distributed/large_scale_kv.h"
namespace paddle {
namespace operators {
namespace distributed {
// define LOOKUP_TABLE_PATH for checkpoint notify to save lookup table variables
// to directory specified.
constexpr char LOOKUP_TABLE_PATH[] = "kLookupTablePath";
bool RequestSendHandler::Handle(const std::string &varname,
framework::Scope *scope,
framework::Variable *invar,
framework::Variable **outvar,
const int trainer_id,
const std::string &out_var_name,
const std::string &table_name) {
VLOG(4) << "RequestSendHandler:" << varname;
// Sync
if (varname == BATCH_BARRIER_MESSAGE) {
VLOG(3) << "sync: recv BATCH_BARRIER_MESSAGE";
rpc_server_->IncreaseBatchBarrier(kRequestSend);
} else if (varname == COMPLETE_MESSAGE) {
VLOG(3) << "sync: recv complete message";
if (HeartBeatMonitor::GetInstance() != nullptr) {
HeartBeatMonitor::GetInstance()->Update(trainer_id, "", COMPLETED);
}
rpc_server_->Complete();
} else {
// Async
if (distributed_mode_ != DistributedMode::kSync) {
VLOG(3) << "async process var: " << varname;
if (varname == BATCH_BARRIER_MESSAGE) {
PADDLE_THROW(
"async mode should not recv BATCH_BARRIER_MESSAGE or "
"COMPLETE_MESSAGE");
}
HeartBeatMonitor::GetInstance()->Update(trainer_id, varname, RUNNING);
std::string run_varname = varname;
string::Piece part_piece("@PIECE");
string::Piece var_name_piece = string::Piece(varname);
if (string::Contains(var_name_piece, part_piece)) {
auto varname_splits = paddle::string::Split(varname, '@');
PADDLE_ENFORCE_EQ(varname_splits.size(), 3);
run_varname = varname_splits[0];
scope->Rename(varname, run_varname);
}
auto *var = scope->FindVar(run_varname);
// for sparse ids
if (var->IsType<framework::SelectedRows>()) {
if (distributed_mode_ == DistributedMode::kAsync ||
distributed_mode_ == DistributedMode::kHalfAsync) {
auto *ins = distributed::LargeScaleKV::GetInstance();
if (ins->GradInLargeScale(run_varname)) {
auto *large_scale_var = ins->GetByGrad(run_varname);
for (auto name : large_scale_var->CachedVarnames()) {
scope->Var(name);
}
}
}
if (distributed_mode_ == DistributedMode::kGeo) {
if (AsyncSparseParamUpdateRecorder::GetInstance()->HasGrad(
run_varname)) {
auto &grad_slr =
scope->FindVar(run_varname)->Get<framework::SelectedRows>();
AsyncSparseParamUpdateRecorder::GetInstance()->Update(
run_varname, grad_slr.rows());
}
}
}
executor_->RunPreparedContext((*grad_to_prepared_ctx_)[run_varname].get(),
scope);
return true;
} else { // sync
rpc_server_->WaitCond(kRequestSend);
VLOG(3) << "sync: processing received var: " << varname;
PADDLE_ENFORCE_NOT_NULL(
invar, platform::errors::NotFound(
"sync: Can not find server side var %s.", varname));
}
}
return true;
}
bool RequestGetHandler::Handle(const std::string &varname,
framework::Scope *scope,
framework::Variable *invar,
framework::Variable **outvar,
const int trainer_id,
const std::string &out_var_name,
const std::string &table_name) {
VLOG(3) << "RequestGetHandler:" << varname
<< " out_var_name: " << out_var_name << " trainer_id: " << trainer_id
<< " table_name: " << table_name;
if (distributed_mode_ == DistributedMode::kSync) {
if (varname == FETCH_BARRIER_MESSAGE) {
VLOG(3) << "sync: recv fetch barrier message";
rpc_server_->IncreaseBatchBarrier(kRequestGet);
} else {
rpc_server_->WaitCond(kRequestGet);
*outvar = scope_->FindVar(varname);
}
} else {
if (varname != FETCH_BARRIER_MESSAGE && varname != COMPLETE_MESSAGE) {
if (enable_dc_asgd_) {
// NOTE: the format is determined by distribute_transpiler.py
std::string param_bak_name =
string::Sprintf("%s.trainer_%d_bak", varname, trainer_id);
VLOG(3) << "getting " << param_bak_name << " trainer_id " << trainer_id;
auto var = scope_->FindVar(varname);
auto t_orig = var->Get<framework::LoDTensor>();
auto param_bak = scope_->Var(param_bak_name);
auto t = param_bak->GetMutable<framework::LoDTensor>();
t->mutable_data(dev_ctx_->GetPlace(), t_orig.type());
VLOG(3) << "copying " << varname << " to " << param_bak_name;
framework::TensorCopy(t_orig, dev_ctx_->GetPlace(), t);
}
if (distributed_mode_ == DistributedMode::kGeo &&
AsyncSparseParamUpdateRecorder::GetInstance()->HasParam(varname) &&
!table_name.empty()) {
VLOG(3) << "AsyncSparseParamUpdateRecorder " << varname << " exist ";
std::vector<int64_t> updated_rows;
AsyncSparseParamUpdateRecorder::GetInstance()->GetAndClear(
varname, trainer_id, &updated_rows);
if (VLOG_IS_ON(3)) {
std::ostringstream sstream;
sstream << "[";
for (auto &row_id : updated_rows) {
sstream << row_id << ", ";
}
sstream << "]";
VLOG(3) << "updated_rows size: " << updated_rows.size() << " "
<< sstream.str();
}
auto &origin_tensor =
scope_->FindVar(varname)->Get<framework::LoDTensor>();
auto *origin_tensor_data = origin_tensor.data<float>();
auto &dims = origin_tensor.dims();
*outvar = scope->Var();
auto *out_slr = (*outvar)->GetMutable<framework::SelectedRows>();
out_slr->set_rows(updated_rows);
out_slr->set_height(dims[0]);
auto out_dims = framework::make_ddim(
{static_cast<int64_t>(updated_rows.size()), dims[1]});
auto *data = out_slr->mutable_value()->mutable_data<float>(
out_dims, origin_tensor.place());
auto width = dims[1];
for (size_t i = 0; i < updated_rows.size(); ++i) {
PADDLE_ENFORCE_LT(updated_rows[i], dims[0]);
memcpy(data + i * width, origin_tensor_data + updated_rows[i] * width,
sizeof(float) * width);
}
} else {
*outvar = scope_->FindVar(varname);
}
}
}
return true;
}
bool RequestGetNoBarrierHandler::Handle(const std::string &varname,
framework::Scope *scope,
framework::Variable *invar,
framework::Variable **outvar,
const int trainer_id,
const std::string &out_var_name,
const std::string &table_name) {
VLOG(4) << "RequestGetNoBarrierHandler:" << varname
<< " out_var_name: " << out_var_name;
// get var from pserver immediately without barriers
string::Piece without_barrier_piece(WITHOUT_BARRIER_MESSAGE);
string::Piece var_name_piece = string::Piece(varname);
if (string::Contains(var_name_piece, without_barrier_piece)) {
var_name_piece = string::TrimSuffix(var_name_piece, without_barrier_piece);
VLOG(4) << "Get var " << var_name_piece << " with "
<< WITHOUT_BARRIER_MESSAGE;
*outvar = scope_->FindVar(var_name_piece.ToString());
return true;
} else {
PADDLE_THROW("GetNoBarrier must contain %s", WITHOUT_BARRIER_MESSAGE);
}
return true;
}
bool RequestPrefetchHandler::Handle(const std::string &varname,
framework::Scope *scope,
framework::Variable *invar,
framework::Variable **outvar,
const int trainer_id,
const std::string &out_var_name,
const std::string &table_name) {
VLOG(4) << "RequestPrefetchHandler " << varname;
(*outvar)->GetMutable<framework::LoDTensor>();
VLOG(1) << "Prefetch "
<< "tablename: " << table_name << " ids:" << varname
<< " out: " << out_var_name;
paddle::platform::CPUPlace cpu_place;
auto *ins = distributed::LargeScaleKV::GetInstance();
if (ins->ParamInLargeScale(table_name)) {
auto lookup_table_op = PullLargeScaleOp(table_name, varname, out_var_name);
lookup_table_op->Run(*scope, cpu_place);
} else {
auto lookup_table_op =
BuildLookupTableOp(table_name, varname, out_var_name);
lookup_table_op->Run(*scope, cpu_place);
}
return true;
}
bool RequestCheckpointHandler::Handle(const std::string &varname,
framework::Scope *scope,
framework::Variable *invar,
framework::Variable **outvar,
const int trainer_id,
const std::string &out_var_name,
const std::string &table_name) {
VLOG(4) << "receive save var " << varname << " with path " << out_var_name;
auto *ins = distributed::LargeScaleKV::GetInstance();
ins->Get(varname)->Save(out_var_name);
// auto checkpoint_op = BuildCheckpointOp(varname, out_var_name);
// paddle::platform::CPUPlace cpu_place;
// checkpoint_op->Run(*scope_, cpu_place);
return true;
}
bool RequestNotifyHandler::Handle(const std::string &varname,
framework::Scope *scope,
framework::Variable *invar,
framework::Variable **outvar,
const int trainer_id,
const std::string &out_var_name,
const std::string &table_name) {
VLOG(3) << "RequestNotifyHandler: " << varname
<< ", trainer_id: " << trainer_id;
string::Piece decay_piece(STEP_COUNTER);
string::Piece var_name_piece = string::Piece(varname);
if (string::Contains(var_name_piece, decay_piece)) {
VLOG(3) << "LearningRate Decay Counter Update";
auto *send_var = scope->FindVar(varname);
auto send_var_tensor = send_var->Get<framework::LoDTensor>();
auto *send_value =
send_var_tensor.mutable_data<int64_t>(send_var_tensor.place());
auto counter = decay_counters.at(trainer_id);
counter += send_value[0];
decay_counters.at(trainer_id) = counter;
auto *global_step_var = this->scope()->FindVar(LEARNING_RATE_DECAY_COUNTER);
if (global_step_var == nullptr) {
PADDLE_THROW(platform::errors::InvalidArgument(
"can not find LEARNING_RATE_DECAY_COUNTER "));
}
auto *tensor = global_step_var->GetMutable<framework::LoDTensor>();
auto *value = tensor->mutable_data<int64_t>(platform::CPUPlace());
auto global_counter = 0;
for (auto &trainer_counter : decay_counters) {
global_counter += trainer_counter.second;
}
value[0] = global_counter;
if (lr_decay_prepared_ctx_.get() == nullptr) {
PADDLE_THROW(platform::errors::InvalidArgument(
"can not find decay block for executor"));
}
executor_->RunPreparedContext(lr_decay_prepared_ctx_.get(), scope_);
}
return true;
}
bool RequestSendAndRecvHandler::Handle(const std::string &varname,
framework::Scope *Scope,
framework::Variable *var,
framework::Variable **outvar,
const int trainer_id,
const std::string &out_var_name,
const std::string &table_name) {
VLOG(3) << "SendAndRecvHandle: " << varname
<< " out_var_name: " << out_var_name
<< " , trainer_id: " << trainer_id;
executor_->RunPreparedContext((*grad_to_prepared_ctx_)[varname].get(), Scope);
*outvar = Scope->FindVar(out_var_name);
return true;
}
} // namespace distributed
} // namespace operators
} // namespace paddle
Speed Kills
Trailer
In this controversial story, the hero finds himself facing a very difficult life. The millionaire Ben Aronov begins a life full of clashes as he struggles to sustain a double life entangled with the law and the most dangerous drug lords in the region.
This movie not only refuses to tell a story; it undermines the viewers' attempts to interpret the narrative for themselves. It's a montage run amok. It's a film stitched out of senseless scenes, devoid of tension or even logic.
Q:
ERROR: Execution of script failed! $ is not defined
I've been using my script in Tampermonkey (Firefox) for 6 months.
Now I get this error; the script works only partially, and not every time.
I don't know why.
I have tried various suggestions about jQuery, but none of them work.
Top of my script:
(function() {
'use strict';
function RPP(){
RedeemRPProduct('fp_bonus_100');
}
var body = $('body'); //I think error is here..
var points = {};
The script still works some of the time. When it fails, this error appears in the console:
ERROR: Execution of script 'script-name' failed! $ is not defined
What happened?
Thanks
A:
I think the cause is that the website administrators removed jQuery from their code. With modern JavaScript, jQuery is largely unnecessary for things like selectors, so I recommend simply removing the jQuery references from your script and using the normal DOM API instead:
To access body:
document.body
To find an element using selector:
document.querySelector("div#my_div")
Alternatively, you could include jQuery in your script using an @require directive.
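As an illustrative sketch, a Tampermonkey metadata block that pulls jQuery in with @require could look like this; the @name and @match values are placeholders, and the jQuery URL points at the official CDN. With @require, jQuery is loaded into your script's scope even if the page itself no longer ships it:

```javascript
// ==UserScript==
// @name     script-name
// @match    https://example.com/*
// @require  https://code.jquery.com/jquery-3.6.0.min.js
// @grant    none
// ==/UserScript==
```

Note that @require is only read from the metadata block at the top of the script, so it must appear before any code.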
Former White House Chief of Staff James Baker and other Republican insiders are lobbying the White House to enact a carbon tax, much to the dismay of conservatives.
Research from the Heritage Foundation shows that a carbon tax does little to help the environment or the average American.
A Heritage Foundation study analyzed a $36 per ton tax on carbon, similar to Baker’s $40 tax per ton on carbon. The think tank study suggests that the effects of a carbon tax would be:
an average shortfall of nearly 400,000 jobs.
an average manufacturing shortfall of over 200,000 jobs.
a total income loss of more than $20,000 for a family of four.
an aggregate gross domestic product loss of over $2.5 trillion.
an increase in household electricity expenditures between 13 and 20 percent.
A carbon tax would not reduce global temperatures much either. According to a Heritage Foundation study, if the United States were to cut all carbon emissions immediately, there would only be a .137 degree Celsius drop in global temperature. If all industrialized nations eliminated all carbon emissions, there would only be a .278 degree Celsius drop in global temperature. Climate activists such as EPA administrators Lisa Jackson and Gina McCarthy, and former Secretary of State John Kerry, admit that a substantial reduction in American carbon emissions will not substantially impact global CO2 levels.
United Nations climate chief Christiana Figueres admitted there are ulterior motives to enacting a carbon tax. Figueres told reporters, “This is probably the most difficult task we have ever given ourselves, which is to intentionally transform the economic development model, for the first time in human history.” She continued, “This is the first time in the history of mankind that we are setting ourselves the task of intentionally, within a defined period of time, to change the economic development model that has been reigning for at least 150 years–since the industrial revolution.”
Q:
FlatList at the middle of the screen doesn't show full content
As I said above, my problem is that my FlatList, which is positioned in the middle of the screen and extends to the bottom, is not showing the whole content of the last card. Like this:
As you can see, the last card is missing the image and the 3 action buttons that the card above has. So my question is: what style can I give to the FlatList, or does it have a property that solves this problem?
Thanks in advance for your help
EDIT
This is my code (the relevant part, I think):
return (
<View>
<View style={styles.topHeader}>
<View style={styles.imageProfile}>
{this.state.isLoaded ?
this.props.isLogged ?
<View>
<View style={styles.imgContainer} style={{ alignItems: "center", marginBottom: 20 }}>
<Image style={styles.userImg} style={{ width: 100, height: 100, borderRadius: 50 }} source={{ uri: "data:image/png;base64," + this.state.user.picture }} />
</View>
<Text style={styles.name}>{this.state.user.name}</Text>
<Text style={styles.subheaderText}>{arrayCounters}</Text>
</View>
:
<View>
<View style={styles.imgContainer} style={{ alignItems: "center", marginBottom: 20 }}>
<Image style={styles.userImg} style={{ width: 100, height: 100, borderRadius: 50 }} source={require("../../assets/no-user.jpg")} />
</View>
<TouchableOpacity onPress={this.logIn}>
<Text style={styles.subheaderText}>Iniciar sesión</Text>
</TouchableOpacity>
</View>
: null}
{!this.state.isLoaded &&
<View style={styles.loading}>
<ActivityIndicator size="large" color="#F5DA49" />
</View>
}
</View>
</View>
<View style={styles.tabsContainer}>
<TouchableOpacity onPress={() => this.changeTab(1)} style={[styles.tab, this.state.tabSelected === 1 && styles.tabSelected]}>
<Text style={styles.tabText}>PÚBLICACIONES</Text>
</TouchableOpacity>
<TouchableOpacity onPress={() => this.changeTab(2)} style={[styles.tab, this.state.tabSelected === 2 && styles.tabSelected]}>
<Text style={styles.tabText}>FAVORITOS</Text>
</TouchableOpacity>
</View>
<View>
<View>
{this.state.isLoadedMyPosts && this.state.tabSelected === 2 &&
this.state.favoritesPosts &&
<FlatList
data={this.state.favoritesPosts}
renderItem={({ item, separators }) => (
<PostItem key={item._id} item={item} isTabFavorites=
{true} removedFav={this.removedFav.bind(this)} />
)}
keyExtractor={item => item._id}
onEndReachedThreshold={0.5}
/>
}
The styles:
const styles = StyleSheet.create({
noPosts: {
alignItems: 'center',
position: "relative",
marginTop: 50
},
textNoPosts: {
marginTop: 20,
fontSize: 20
},
name: {
fontSize: 18,
color: "#FFF",
marginBottom: 5
},
tabText: {
color: "#262628",
fontSize: 20
},
tabsContainer: {
width: width,
flexDirection: "row",
marginBottom: 10
},
tab: {
width: width / 2,
backgroundColor: "#FFF",
alignItems: "center",
paddingVertical: 15
},
tabSelected: {
borderBottomColor: '#F5DA49',
borderBottomWidth: 4
},
loadingPosts: {
position: 'absolute',
left: 0,
right: 0,
top: 120,
justifyContent: 'center',
alignItems: 'center'
},
loading: {
position: 'absolute',
left: 0,
right: 0,
top: 0,
bottom: 0,
opacity: 0.5,
backgroundColor: 'black',
justifyContent: 'center',
alignItems: 'center'
},
topHeader: {
backgroundColor: "#262628",
width: width,
height: 200
},
imageProfile: {
justifyContent: "center",
alignItems: "center",
height: 200
},
userImg: {
borderRadius: 50,
alignItems: "center"
},
subheaderText: {
color: "#fff"
},
imgContainer: {
}
});
A:
Wrap the whole of the view in a ScrollView. I guess that should work.
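A related sketch of the layout fix, under the caveat that React Native warns against nesting a FlatList inside a plain ScrollView ("VirtualizedLists should never be nested..."): a common alternative is to give the FlatList's ancestor views flex: 1 so the list can measure its available height, and to reserve room for the last card through contentContainerStyle. The padding value here is an assumption:

```jsx
// Hypothetical layout sketch: flex: 1 on the outer container lets the
// FlatList compute how much vertical space it actually has, and
// contentContainerStyle leaves room for the last card's image/buttons.
<View style={{ flex: 1 }}>
  <View style={styles.topHeader}>{/* header as before */}</View>
  <View style={styles.tabsContainer}>{/* tabs as before */}</View>
  <FlatList
    style={{ flex: 1 }}
    contentContainerStyle={{ paddingBottom: 24 }} // assumed value
    data={this.state.favoritesPosts}
    renderItem={({ item }) => <PostItem key={item._id} item={item} />}
    keyExtractor={item => item._id}
  />
</View>
```

Without a bounded height (flex: 1 or an explicit height), a FlatList inside a plain View can render taller than the screen and clip its last row.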
The mother of a convicted Tasmanian pimp has asked prison authorities to provide protection for her son.
Gary John Devine is serving a 10-year jail sentence at Risdon Prison for his part in prostituting a 12-year-old girl.
He placed the newspaper advertisement, acted as the girl's pimp and ran the underage prostitution business from his unit.
Gloria Devine wants the 51-year-old moved from medium security to the prison in Hobart for people on remand because he is in danger at Risdon.
"He said he gets a lot of things thrown at him and called names."
Mrs Devine says she does not condone her son's actions but believes he is at risk because his crimes involved a child.
"He's got the rights to be protected more than any other prisoner with what's been going on."
Mrs Devine says she's contacted prison management to request her son's transfer.
She wants him moved to the Hobart Reception Prison which holds remand prisoners.
"Well medium security is just in a large block of units all in one, you can walk from one yard to another. There's no-one stopping anyone getting out from one yard to another if they want to do anything to anyone," she said.
"Everyone knows what goes on at the prisons... you don't have to be a genius to work that out, everyone knows and that is a concern."
Sources say that another Eastern Conference team contracted to stay at the Trump SoHo in New York this season has likewise already decided to switch to a different property in Manhattan when its current contract expires at season’s end and that the Trump association is among the factors for the switch.
Seven other teams told ESPN.com on Tuesday that they are still currently scheduled to stay at Trump-branded properties this season.
As a matter of privacy, ESPN has chosen not to name those eight teams in total so as not to publicly identify where they will be staying on this season’s trips for games against the New York Knicks, Brooklyn Nets or Chicago Bulls.
Trump does not currently hold an equity stake in the Trump SoHo hotel, but his company still owns and operates the Chicago property.
Sources say that the Bucks stayed at the Trump International Tower and Hotel in Chicago on a preseason trip to play the Bulls in early October, but met with complications when they tried to make an 11th-hour cancellation. Milwaukee, sources say, has already made other arrangements for its regular-season road games against the Bulls.
The Grizzlies and Mavericks, sources say, have stayed at the Trump SoHo in the past but opted during the offseason to book new New York hotels for this season.
Grizzlies coach David Fizdale, speaking after the team’s shootaround Wednesday in Los Angeles, insisted that where Memphis stays or doesn’t stay is not politically motivated.
“No truth to that,” Fizdale told the Memphis Commercial Appeal on Wednesday. “Our decisions as to what hotels we stay in are made long before any of this election stuff took place. It’s no story.”
Bucks co-owner Marc Lasry and Cuban are both high-profile supporters of Democratic nominee Hillary Clinton, who last week lost the race to be the United States’ 45th president to Trump.
Since the election, three prominent NBA coaches — Detroit’s Stan Van Gundy, Golden State’s Steve Kerr and San Antonio’s Gregg Popovich — have publicly addressed the unease that they and some of their players feel in the wake of Trump’s election, starting with the dismay Van Gundy voiced that the country’s new president, in his words, is “openly and brazenly racist and misogynistic and ethnocentric.”
Geoffrey Sserunkuma
Geoffrey Sserunkuma (born 7 June 1983) is a Ugandan international footballer who plays for Wakiso Giants FC and the Uganda national team (the "Cranes") as a striker.
Club career
Operating as a striker, Sserunkuma played for Police Jinja. He enjoyed success at Kampala City Council FC before a transfer to Ethiopian Premier League club Saint George SA in July 2007. In summer 2008, he left the Addis Ababa club and moved to Bloemfontein Celtic. In July 2009, he left Bloemfontein Celtic and completed a move to Vasco da Gama, after a falling out with Celtic manager Owen da Gama.
Bidvest Wits
On 6 April 2010, Sserunkuma signed for Bidvest Wits agreeing a two-year deal with the club.
Vasco da Gama
However, he returned to Vasco da Gama the following season, playing in the second tier following the club's relegation from the top flight.
Lweza Football Club
In 2015, Sserunkuma joined Lweza FC, where he played for a season and scored eight goals.
Kampala City Council
In July 2016, Sserunkuma joined Kampala City Council FC from Lweza Football Club; this was his second stint at the Lugogo-based club, following his first era during the 2004 and 2006 seasons.
Sserunkuma opened his goal account with a debut strike against JMC Hippos on Friday 22 August 2016 as the Kampala City Council FC edged their visitors 2-1 at Phillip Omondi Stadium, Lugogo.
In the 2016/2017 season, Sserunkuma was the first player to hit double figures; his goal in the third minute against BUL FC was his 10th of the season. He last featured for Kampala City Council in the 2017 Uganda Cup final against Paidha Black Angels FC in Arua, where he scored his last goal. Sserunkuma scored 31 goals in all competitions for KCCA FC that season and helped the team win their first ever domestic double.
Buildcon F.C
In July 2017 Sserunkuma joined Buildcon F.C. On 12 August, Sserunkuma scored his first goal for Buildcon F.C against Lusaka Dynamos in a league match played at Levy Mwanawasa Stadium.
Napster
He played for Napster FC for a season.
Wakiso Giants FC
On 7 August 2019, Sserunkuma joined Wakiso Giants FC.
International career
He first began playing for the Cranes in 2002.
He was part of the Uganda Cranes team that participated in the 2016 African Nations Championship tournament in Rwanda and scored against Zimbabwe in their 1-1 draw. Sserunkuma was one of six locally based players in the Cranes squad that represented Uganda at the 2017 Africa Cup of Nations in Gabon.
International statistics
International goals
Scores and results list Uganda's goal tally first.
Honors and achievements
Club
Kampala Capital City Authority FC
Ugandan Super League: 2017
Uganda Cup: 2017
Individual
Uganda Super League top scorer (1): 2016-2017
Uganda Super League MVP: 2016-2017
Uganda Super League Fans' Player of the Year: 2016-2017
Kawowo Sports Best XI of the 2016-17 Uganda Premier League:
Most Valuable Player: 2017
Player of the Year: 2017
Fans' Player of the Year: 2017
References
External links
Category:1983 births
Category:Living people
Category:Association football forwards
Category:Ugandan footballers
Category:Uganda international footballers
Category:Sportspeople from Kampala
Category:Ugandan expatriate footballers
Category:Expatriate soccer players in South Africa
Category:Expatriate footballers in Ethiopia
Category:Ugandan expatriates in South Africa
Category:Saint George SC players
Category:Vasco da Gama (South Africa) players
Category:Bidvest Wits F.C. players
Category:Bloemfontein Celtic F.C. players
Category:Kampala Capital City Authority FC players
Category:2017 Africa Cup of Nations players
Category:Wakiso Giants FC players
A new scapula retractor for posterolateral thoracotomy.
A new retractor is presented for use in performing posterolateral thoracotomy. Its advantages are stabilizing the tip of the scapula and preventing it from protruding over the intercostal space to be incised.
<transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY" transition_id="0">
<synapse id="0">
<action_set>
<rsc_op id="10" operation="monitor" operation_key="main_rsc_monitor_10000" on_node="srv03" on_node_uuid="e2ffc1ed-3ebe-47e2-b51b-b0f04b454311">
<primitive id="main_rsc" class="ocf" provider="pacemaker" type="Dummy"/>
<attributes CRM_meta_interval="10000" CRM_meta_name="monitor" CRM_meta_on_fail="restart" CRM_meta_on_node="srv03" CRM_meta_on_node_uuid="e2ffc1ed-3ebe-47e2-b51b-b0f04b454311" CRM_meta_timeout="30000" />
</rsc_op>
</action_set>
<inputs>
<trigger>
<rsc_op id="9" operation="start" operation_key="main_rsc_start_0" on_node="srv03" on_node_uuid="e2ffc1ed-3ebe-47e2-b51b-b0f04b454311"/>
</trigger>
</inputs>
</synapse>
<synapse id="1">
<action_set>
<rsc_op id="9" operation="start" operation_key="main_rsc_start_0" on_node="srv03" on_node_uuid="e2ffc1ed-3ebe-47e2-b51b-b0f04b454311">
<primitive id="main_rsc" class="ocf" provider="pacemaker" type="Dummy"/>
<attributes CRM_meta_name="start" CRM_meta_on_fail="restart" CRM_meta_on_node="srv03" CRM_meta_on_node_uuid="e2ffc1ed-3ebe-47e2-b51b-b0f04b454311" CRM_meta_timeout="60000" />
</rsc_op>
</action_set>
<inputs>
<trigger>
<rsc_op id="8" operation="stop" operation_key="main_rsc_stop_0" on_node="srv01" on_node_uuid="45f985d7-e7c8-4834-b01b-16b99526672b"/>
</trigger>
</inputs>
</synapse>
<synapse id="2">
<action_set>
<rsc_op id="8" operation="stop" operation_key="main_rsc_stop_0" on_node="srv01" on_node_uuid="45f985d7-e7c8-4834-b01b-16b99526672b">
<primitive id="main_rsc" class="ocf" provider="pacemaker" type="Dummy"/>
<attributes CRM_meta_name="stop" CRM_meta_on_fail="block" CRM_meta_on_node="srv01" CRM_meta_on_node_uuid="45f985d7-e7c8-4834-b01b-16b99526672b" CRM_meta_timeout="60000" />
</rsc_op>
</action_set>
<inputs/>
</synapse>
<synapse id="3">
<action_set>
<rsc_op id="21" operation="stop" operation_key="prmPingd:0_stop_0" on_node="srv01" on_node_uuid="45f985d7-e7c8-4834-b01b-16b99526672b">
<primitive id="prmPingd" long-id="prmPingd:0" class="ocf" provider="pacemaker" type="ping"/>
<attributes CRM_meta_clone="0" CRM_meta_clone_max="3" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_name="stop" CRM_meta_notify="false" CRM_meta_on_fail="ignore" CRM_meta_on_node="srv01" CRM_meta_on_node_uuid="45f985d7-e7c8-4834-b01b-16b99526672b" CRM_meta_timeout="60000" host_list="192.168.40.1"/>
</rsc_op>
</action_set>
<inputs>
<trigger>
<pseudo_event id="28" operation="stop" operation_key="clnPingd_stop_0"/>
</trigger>
</inputs>
</synapse>
<synapse id="4" priority="1000000">
<action_set>
<pseudo_event id="29" operation="stopped" operation_key="clnPingd_stopped_0">
<attributes CRM_meta_clone_max="3" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" />
</pseudo_event>
</action_set>
<inputs>
<trigger>
<rsc_op id="21" operation="stop" operation_key="prmPingd:0_stop_0" on_node="srv01" on_node_uuid="45f985d7-e7c8-4834-b01b-16b99526672b"/>
</trigger>
<trigger>
<pseudo_event id="28" operation="stop" operation_key="clnPingd_stop_0"/>
</trigger>
</inputs>
</synapse>
<synapse id="5">
<action_set>
<pseudo_event id="28" operation="stop" operation_key="clnPingd_stop_0">
<attributes CRM_meta_clone_max="3" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_notify="false" CRM_meta_timeout="20000" />
</pseudo_event>
</action_set>
<inputs/>
</synapse>
</transition_graph>
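A transition graph like the one above encodes an ordering of cluster actions: each `synapse` fires its `action_set` only after every action listed in its `inputs/trigger` elements has completed. This can be inspected mechanically; the sketch below assumes only the element names shown above and uses a trimmed, hypothetical graph for illustration.

```python
# Sketch: extract the action ordering from a Pacemaker-style transition graph.
# The XML below is a trimmed, hypothetical fragment mirroring the
# synapse/action_set/inputs structure shown above, not the full graph.
import xml.etree.ElementTree as ET

GRAPH = """
<transition_graph>
  <synapse id="2">
    <action_set>
      <rsc_op id="8" operation="stop" operation_key="main_rsc_stop_0"/>
    </action_set>
    <inputs/>
  </synapse>
  <synapse id="5">
    <action_set>
      <pseudo_event id="28" operation="stop" operation_key="clnPingd_stop_0"/>
    </action_set>
    <inputs>
      <trigger>
        <rsc_op id="8" operation="stop" operation_key="main_rsc_stop_0"/>
      </trigger>
    </inputs>
  </synapse>
</transition_graph>
"""

def dependencies(xml_text):
    """Map each synapse id to (its action keys, the action keys it waits on)."""
    root = ET.fromstring(xml_text)
    deps = {}
    for synapse in root.findall("synapse"):
        actions = [a.get("operation_key") for a in synapse.find("action_set")]
        triggers = [t.get("operation_key")
                    for trig in synapse.findall("inputs/trigger")
                    for t in trig]
        deps[synapse.get("id")] = (actions, triggers)
    return deps

deps = dependencies(GRAPH)
print(deps)
```

Applied to a full graph, the trigger lists give the partial order that the cluster transition must respect (here, the pseudo stop event waits on `main_rsc_stop_0`).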
Search for missing cashier: Witness talks
A former "person of interest," who has since been cleared in the disappearance of gas station cashier Jessica Heeringa, discusses his interviews with police and his insights into the case with WOOD's Leon Hendrix.
Limerick duo nominated for Player of the Year Award
Munster’s Conor Murray and Keith Earls have been nominated for the Rugby Players Ireland player of the year award.
The Limerick men are joined on the shortlist by Leinster duo Jonathan Sexton and Tadhg Furlong.
Murray, who currently holds the award, has had a landmark year, playing a huge part in the British & Irish Lions’ drawn series with New Zealand in the summer before landing the Grand Slam with Ireland.
Keith Earls has played some of his finest rugby for province and country this season. The Moyross man played a major role for Ireland in the Six Nations before suffering a knee injury against England in the last round of the competition.
UL Bohemians’ Ciara Griffin has been nominated for the women’s player of the year award.
Former Munster man Duncan Casey is also in with a chance of winning an award.
The former Glenstal Abbey student has been nominated for the Medal of Excellence – awarded to the individual who has shown exceptional commitment to the game of rugby both on and off the field.
A drug-resistant malaria superbug is on the rise — here’s why that’s so concerning
Malaria parasites carried by mosquitoes in the Greater Mekong region have started to become resistant to the main drugs used to treat the disease.
Malaria still kills more than 400,000 people a year, and the spread of a resistant strain like this could kill millions.
Southeast Asia is where drug-resistant malaria tends to emerge before spreading to other parts of the world.
In a hot and humid part of the world a threat that could kill millions is growing.
A mutated malaria parasite carried by mosquitoes in the Greater Mekong region of western Cambodia, northeastern Thailand, southern Laos, and the south of Vietnam has started to become a dominant malaria parasite in that region.
And this particular line of malaria parasite is resistant to not just one but two of the most effective drugs we have for treating the devastating illness.
This is a "sinister development" that "presents one of the greatest threats to the control and elimination of malaria," a group of researchers from Thailand, Vietnam, and the UK wrote in a recent letter to the medical journal The Lancet.
Medicine is an ever-evolving battle. Scientists find a treatment that's able to kill a pathogen; any members of that pathogen species able to survive the treatment then pass their resistance on, making the treatment less useful. New treatments are found or developed and the process repeats — unless the bug can be contained using other strategies (like alternating between different types of treatment so that no one form of resistance becomes dominant) or unless that bug can be wiped out completely.
The most well-known example of this comes from the ongoing fight against bacterial infections. Bacteria are developing resistance to medicine faster than we're developing new antibiotics, threatening to return the world to a pre-antibiotic era where a simple scratch could become fatal (the risks from any surgery become astounding in that scenario) — something experts call a growing threat that could kill 10 million people a year by 2050.
The same thing is happening with our medications for the malaria parasite.
From Southeast Asia to Africa
While the development of resistance isn't new, this particular case is disturbing. In the past, malaria-causing mosquitoes developed resistance to pesticides like DDT and to chloroquine, once a widely used malaria treatment that's now not effective for the form of the disease that kills the most people.
The resistant strain was first discovered near Pailin in 2008, but it has now spread throughout the region. (Image: The Lancet)
There are five different types of malaria-causing parasite, and each carries a version of the disease with unique levels of severity and associated difficulties in treatment. The one that's developing resistance in the Mekong is called Plasmodium falciparum. It's both the most common and most fatal, accounting for 90% of the more than 400,000 annual deaths that malaria causes.
According to the World Health Organization, malaria still infects 212 million people a year, meaning that a strain that our most effective drugs can't treat would have devastating consequences. Approximately 92% of malaria deaths occur in Sub-Saharan Africa. But the distance between there and Southeast Asia is no reassurance.
"Almost always, drug resistance has emerged in Southeast Asia and jumped to Africa," Janice Culpepper, a senior program officer on the Malaria Program Strategy Team in Global Health at the Bill & Melinda Gates Foundation, explained in a conversation with reporters that Business Insider participated in last fall. The biodiversity hotspot in the Mekong region is a place where new mutations like drug resistance tend to appear, but successful species there can spread to other parts of the world easily.
In this location, medical professionals had been using a combination of the drugs artemisinin (which was becoming less effective) and piperaquine as a combination therapy, with the idea that it was hard for parasites to develop resistance to two drugs at once.
But in this case, a strain of Plasmodium falciparum that was already resistant to artemisinin has developed resistance to piperaquine too.
"It's a race against the clock — we have to eliminate it before malaria becomes untreatable again and we see a lot of deaths," Arjen Dondorp, head of a malaria research unit in Bangkok and one author of the Lancet letter, told the BBC.
In theory, malaria can be eliminated from a region. A number of countries have done so over the years. But in this case, with the resistant strain spreading, time is short.
"The evolution and subsequent transnational spread of this single fit multidrug-resistant malaria parasite lineage is of international concern," the authors of the letter wrote.
---
abstract: |
We have observed exclusive $\gamma\gamma$ production in proton-antiproton collisions at $\sqrt{s}=1.96$ TeV, using data from 1.11 $\pm$ 0.07 fb$^{-1}$ integrated luminosity taken by the Run II Collider Detector at Fermilab. We selected events with two electromagnetic showers, each with transverse energy $E_T > 2.5$ GeV and pseudorapidity $|\eta| <$ 1.0, with no other particles detected in $-7.4 <
\eta < +7.4$. The two showers have similar $E_T$ and azimuthal angle separation $\Delta\phi \sim \pi$; 34 events have two charged particle tracks, consistent with the QED process $p \bar{p} \rightarrow p + e^+e^- + \bar{p}$ by two-photon exchange, while 43 events have no charged tracks. The number of these events that are exclusive $\pi^0\pi^0$ is consistent with zero and is $<$ 15 at 95% C.L. The cross section for $p\bar{p} \to
p+\gamma\gamma+\bar{p}$ with $|\eta(\gamma)| < 1.0$ and $E_T(\gamma) > 2.5$ GeV is $2.48\,^{+0.40}_{-0.35}(\mathrm{stat})\,^{+0.40}_{-0.51}(\mathrm{syst})\,\mathrm{pb}$.
author:
- 'T. Aaltonen'
- 'M.G. Albrow'
- 'B. Álvarez González$^z$'
- 'S. Amerio'
- 'D. Amidei'
- 'A. Anastassov$^x$'
- 'A. Annovi'
- 'J. Antos'
- 'G. Apollinari'
- 'J.A. Appel'
- 'T. Arisawa'
- 'A. Artikov'
- 'J. Asaadi'
- 'W. Ashmanskas'
- 'B. Auerbach'
- 'A. Aurisano'
- 'F. Azfar'
- 'W. Badgett'
- 'T. Bae'
- 'A. Barbaro-Galtieri'
- 'V.E. Barnes'
- 'B.A. Barnett'
- 'P. Barria$^{hh}$'
- 'P. Bartos'
- 'M. Bauce$^{ff}$'
- 'F. Bedeschi'
- 'S. Behari'
- 'G. Bellettini$^{gg}$'
- 'J. Bellinger'
- 'D. Benjamin'
- 'A. Beretvas'
- 'A. Bhatti'
- 'D. Bisello$^{ff}$'
- 'I. Bizjak'
- 'K.R. Bland'
- 'B. Blumenfeld'
- 'A. Bocci'
- 'A. Bodek'
- 'D. Bortoletto'
- 'J. Boudreau'
- 'A. Boveia'
- 'L. Brigliadori$^{ee}$'
- 'C. Bromberg'
- 'E. Brucken'
- 'J. Budagov'
- 'H.S. Budd'
- 'K. Burkett'
- 'G. Busetto$^{ff}$'
- 'P. Bussey'
- 'A. Buzatu'
- 'A. Calamba'
- 'C. Calancha'
- 'S. Camarda'
- 'M. Campanelli'
- 'M. Campbell'
- 'F. Canelli$^{11}$'
- 'B. Carls'
- 'D. Carlsmith'
- 'R. Carosi'
- 'S. Carrillo$^m$'
- 'S. Carron'
- 'B. Casal$^k$'
- 'M. Casarsa'
- 'A. Castro$^{ee}$'
- 'P. Catastini'
- 'D. Cauz'
- 'V. Cavaliere'
- 'M. Cavalli-Sforza'
- 'A. Cerri$^f$'
- 'L. Cerrito$^s$'
- 'Y.C. Chen'
- 'M. Chertok'
- 'G. Chiarelli'
- 'G. Chlachidze'
- 'F. Chlebana'
- 'K. Cho'
- 'D. Chokheli'
- 'W.H. Chung'
- 'Y.S. Chung'
- 'M.A. Ciocci$^{hh}$'
- 'A. Clark'
- 'C. Clarke'
- 'G. Compostella$^{ff}$'
- 'M.E. Convery'
- 'J. Conway'
- 'M.Corbo'
- 'M. Cordelli'
- 'C.A. Cox'
- 'D.J. Cox'
- 'F. Crescioli$^{gg}$'
- 'J. Cuevas$^z$'
- 'R. Culbertson'
- 'D. Dagenhart'
- 'N. d’Ascenzo$^w$'
- 'M. Datta'
- 'P. de Barbaro'
- 'M. Dell’Orso$^{gg}$'
- 'L. Demortier'
- 'M. Deninno'
- 'F. Devoto'
- 'M. d’Errico$^{ff}$'
- 'A. Di Canto$^{gg}$'
- 'B. Di Ruzza'
- 'J.R. Dittmann'
- 'M. D’Onofrio'
- 'S. Donati$^{gg}$'
- 'P. Dong'
- 'M. Dorigo'
- 'T. Dorigo'
- 'K. Ebina'
- 'A. Elagin'
- 'A. Eppig'
- 'R. Erbacher'
- 'S. Errede'
- 'N. Ershaidat$^{dd}$'
- 'R. Eusebi'
- 'S. Farrington'
- 'M. Feindt'
- 'J.P. Fernandez'
- 'R. Field'
- 'G. Flanagan$^u$'
- 'R. Forrest'
- 'M.J. Frank'
- 'M. Franklin'
- 'J.C. Freeman'
- 'Y. Funakoshi'
- 'I. Furic'
- 'M. Gallinaro'
- 'J.E. Garcia'
- 'A.F. Garfinkel'
- 'P. Garosi$^{hh}$'
- 'H. Gerberich'
- 'E. Gerchtein'
- 'S. Giagu'
- 'V. Giakoumopoulou'
- 'P. Giannetti'
- 'K. Gibson'
- 'C.M. Ginsburg'
- 'N. Giokaris'
- 'P. Giromini'
- 'G. Giurgiu'
- 'V. Glagolev'
- 'D. Glenzinski'
- 'M. Gold'
- 'D. Goldin'
- 'N. Goldschmidt'
- 'A. Golossanov'
- 'G. Gomez'
- 'G. Gomez-Ceballos'
- 'M. Goncharov'
- 'O. González'
- 'I. Gorelov'
- 'A.T. Goshaw'
- 'K. Goulianos'
- 'S. Grinstein'
- 'C. Grosso-Pilcher'
- 'R.C. Group$^{53}$'
- 'J. Guimaraes da Costa'
- 'S.R. Hahn'
- 'E. Halkiadakis'
- 'A. Hamaguchi'
- 'J.Y. Han'
- 'F. Happacher'
- 'K. Hara'
- 'D. Hare'
- 'M. Hare'
- 'R.F. Harr'
- 'K. Hatakeyama'
- 'C. Hays'
- 'M. Heck'
- 'J. Heinrich'
- 'M. Herndon'
- 'S. Hewamanage'
- 'A. Hocker'
- 'W. Hopkins$^g$'
- 'D. Horn'
- 'S. Hou'
- 'R.E. Hughes'
- 'M. Hurwitz'
- 'U. Husemann'
- 'N. Hussain'
- 'M. Hussein'
- 'J. Huston'
- 'G. Introzzi'
- 'M. Iori$^{jj}$'
- 'A. Ivanov$^p$'
- 'E. James'
- 'D. Jang'
- 'B. Jayatilaka'
- 'E.J. Jeon'
- 'S. Jindariani'
- 'M. Jones'
- 'K.K. Joo'
- 'S.Y. Jun'
- 'T.R. Junk'
- 'T. Kamon$^{25}$'
- 'P.E. Karchin'
- 'A. Kasmi'
- 'Y. Kato$^o$'
- 'W. Ketchum'
- 'J. Keung'
- 'V. Khotilovich'
- 'B. Kilminster'
- 'D.H. Kim'
- 'H.S. Kim'
- 'J.E. Kim'
- 'M.J. Kim'
- 'S.B. Kim'
- 'S.H. Kim'
- 'Y.K. Kim'
- 'Y.J. Kim'
- 'N. Kimura'
- 'M. Kirby'
- 'S. Klimenko'
- 'K. Knoepfel'
- 'K. Kondo'
- 'D.J. Kong'
- 'J. Konigsberg'
- 'A.V. Kotwal'
- 'M. Kreps'
- 'J. Kroll'
- 'D. Krop'
- 'M. Kruse'
- 'V. Krutelyov$^c$'
- 'T. Kuhr'
- 'M. Kurata'
- 'S. Kwang'
- 'A.T. Laasanen'
- 'S. Lami'
- 'S. Lammel'
- 'M. Lancaster'
- 'R.L. Lander'
- 'K. Lannon$^y$'
- 'A. Lath'
- 'G. Latino$^{hh}$'
- 'T. LeCompte'
- 'E. Lee'
- 'H.S. Lee$^q$'
- 'J.S. Lee'
- 'S.W. Lee$^{bb}$'
- 'S. Leo$^{gg}$'
- 'S. Leone'
- 'J.D. Lewis'
- 'A. Limosani$^t$'
- 'C.-J. Lin'
- 'M. Lindgren'
- 'E. Lipeles'
- 'A. Lister'
- 'D.O. Litvintsev'
- 'C. Liu'
- 'H. Liu'
- 'Q. Liu'
- 'T. Liu'
- 'S. Lockwitz'
- 'A. Loginov'
- 'D. Lucchesi$^{ff}$'
- 'J. Lueck'
- 'P. Lujan'
- 'P. Lukens'
- 'G. Lungu'
- 'J. Lys'
- 'R. Lysak$^e$'
- 'R. Madrak'
- 'K. Maeshima'
- 'P. Maestro$^{hh}$'
- 'S. Malik'
- 'G. Manca$^a$'
- 'A. Manousakis-Katsikakis'
- 'F. Margaroli'
- 'C. Marino'
- 'M. Martínez'
- 'P. Mastrandrea'
- 'K. Matera'
- 'M.E. Mattson'
- 'A. Mazzacane'
- 'P. Mazzanti'
- 'K.S. McFarland'
- 'P. McIntyre'
- 'R. McNulty$^j$'
- 'A. Mehta'
- 'P. Mehtala'
- 'C. Mesropian'
- 'T. Miao'
- 'D. Mietlicki'
- 'A. Mitra'
- 'H. Miyake'
- 'S. Moed'
- 'N. Moggi'
- 'M.N. Mondragon$^m$'
- 'C.S. Moon'
- 'R. Moore'
- 'M.J. Morello$^{ii}$'
- 'J. Morlock'
- 'P. Movilla Fernandez'
- 'A. Mukherjee'
- 'Th. Muller'
- 'P. Murat'
- 'M. Mussini$^{ee}$'
- 'J. Nachtman$^n$'
- 'Y. Nagai'
- 'J. Naganoma'
- 'I. Nakano'
- 'A. Napier'
- 'J. Nett'
- 'C. Neu'
- 'M.S. Neubauer'
- 'J. Nielsen$^d$'
- 'L. Nodulman'
- 'S.Y. Noh'
- 'O. Norniella'
- 'L. Oakes'
- 'S.H. Oh'
- 'Y.D. Oh'
- 'I. Oksuzian'
- 'T. Okusawa'
- 'R. Orava'
- 'L. Ortolan'
- 'S. Pagan Griso$^{ff}$'
- 'C. Pagliarone'
- 'E. Palencia$^f$'
- 'V. Papadimitriou'
- 'A.A. Paramonov'
- 'J. Patrick'
- 'G. Pauletta$^{kk}$'
- 'M. Paulini'
- 'C. Paus'
- 'D.E. Pellett'
- 'A. Penzo'
- 'T.J. Phillips'
- 'G. Piacentino'
- 'E. Pianori'
- 'J. Pilot'
- 'K. Pitts'
- 'C. Plager'
- 'L. Pondrom'
- 'S. Poprocki$^g$'
- 'K. Potamianos'
- 'F. Prokoshin$^{cc}$'
- 'A. Pranko'
- 'F. Ptohos$^h$'
- 'G. Punzi$^{gg}$'
- 'A. Rahaman'
- 'V. Ramakrishnan'
- 'N. Ranjan'
- 'I. Redondo'
- 'P. Renton'
- 'M. Rescigno'
- 'T. Riddick'
- 'F. Rimondi$^{ee}$'
- 'L. Ristori$^{42}$'
- 'A. Robson'
- 'T. Rodrigo'
- 'T. Rodriguez'
- 'E. Rogers'
- 'S. Rolli$^i$'
- 'R. Roser'
- 'F. Ruffini$^{hh}$'
- 'A. Ruiz'
- 'J. Russ'
- 'V. Rusu'
- 'A. Safonov'
- 'W.K. Sakumoto'
- 'Y. Sakurai'
- 'L. Santi$^{kk}$'
- 'K. Sato'
- 'V. Saveliev$^w$'
- 'A. Savoy-Navarro$^{aa}$'
- 'P. Schlabach'
- 'A. Schmidt'
- 'E.E. Schmidt'
- 'T. Schwarz'
- 'L. Scodellaro'
- 'A. Scribano$^{hh}$'
- 'F. Scuri'
- 'S. Seidel'
- 'Y. Seiya'
- 'A. Semenov'
- 'F. Sforza$^{hh}$'
- 'S.Z. Shalhout'
- 'T. Shears'
- 'P.F. Shepard'
- 'M. Shimojima$^v$'
- 'M. Shochet'
- 'I. Shreyber-Tecker'
- 'A. Simonenko'
- 'P. Sinervo'
- 'K. Sliwa'
- 'J.R. Smith'
- 'F.D. Snider'
- 'A. Soha'
- 'V. Sorin'
- 'H. Song'
- 'P. Squillacioti$^{hh}$'
- 'M. Stancari'
- 'R. St. Denis'
- 'B. Stelzer'
- 'O. Stelzer-Chilton'
- 'D. Stentz$^x$'
- 'J. Strologas'
- 'G.L. Strycker'
- 'Y. Sudo'
- 'A. Sukhanov'
- 'I. Suslov'
- 'K. Takemasa'
- 'Y. Takeuchi'
- 'J. Tang'
- 'M. Tecchio'
- 'P.K. Teng'
- 'J. Thom$^g$'
- 'J. Thome'
- 'G.A. Thompson'
- 'E. Thomson'
- 'D. Toback'
- 'S. Tokar'
- 'K. Tollefson'
- 'T. Tomura'
- 'D. Tonelli'
- 'S. Torre'
- 'D. Torretta'
- 'P. Totaro'
- 'M. Trovato$^{ii}$'
- 'F. Ukegawa'
- 'S. Uozumi'
- 'A. Varganov'
- 'F. Vázquez$^m$'
- 'G. Velev'
- 'C. Vellidis'
- 'M. Vidal'
- 'I. Vila'
- 'R. Vilar'
- 'J. Vizán'
- 'M. Vogel'
- 'G. Volpi'
- 'P. Wagner'
- 'R.L. Wagner'
- 'T. Wakisaka'
- 'R. Wallny'
- 'S.M. Wang'
- 'A. Warburton'
- 'D. Waters'
- 'W.C. Wester III'
- 'D. Whiteson$^b$'
- 'A.B. Wicklund'
- 'E. Wicklund'
- 'S. Wilbur'
- 'F. Wick'
- 'H.H. Williams'
- 'J.S. Wilson'
- 'P. Wilson'
- 'B.L. Winer'
- 'P. Wittich$^g$'
- 'S. Wolbers'
- 'H. Wolfe'
- 'T. Wright'
- 'X. Wu'
- 'Z. Wu'
- 'K. Yamamoto'
- 'D. Yamato'
- 'T. Yang'
- 'U.K. Yang$^r$'
- 'Y.C. Yang'
- 'W.-M. Yao'
- 'G.P. Yeh'
- 'K. Yi$^n$'
- 'J. Yoh'
- 'K. Yorita'
- 'T. Yoshida$^l$'
- 'G.B. Yu'
- 'I. Yu'
- 'S.S. Yu'
- 'J.C. Yun'
- 'A. Zanetti'
- 'Y. Zeng'
- 'C. Zhou'
- 'S. Zucchelli$^{ee}$'
date: 'September 21, 2011'
title: ' Observation of Exclusive [$\gamma\gamma$]{} Production in [$p\bar{p}$]{} Collisions at [$\sqrt{s}=1.96$]{} TeV '
---
[^1]
[^2]
In proton-(anti)proton collisions, two direct high-$E_T$ photons can be produced at leading order by $q\bar{q}$$\,\rightarrow\,$${\gamma}{\gamma}$ and by $gg$$\,\rightarrow\,$${\gamma}{\gamma}$ through a quark loop. In the latter case it is possible for another gluon exchange to cancel the color of the fusing gluons, allowing the (anti)proton to emerge intact with no hadrons produced. For $p\bar{p}$ collisions, this is the “exclusive" process $p\bar{p}$$\,\rightarrow\,$$p$$\,+\,$$\gamma\gamma$$\,+\,$$\bar{p}$, for which the leading order diagram is shown in Fig. \[fig:cepDiagrams\]a [@cdfloi; @exclgg1]. The outgoing (anti)proton has nearly the beam momentum, and transverse momentum $p_T \lesssim 1$ GeV/c, having emitted a pair of gluons in a color singlet. There is a pseudorapidity gap $\Delta\eta >$ 6 adjacent to the (anti)proton. In Regge theory this is diffractive scattering via pomeron [@forshawross; @donnachie], $\rm I\kern +0.53em\llap P$, exchange. The cross section for $|\eta(\gamma)| < 1.0$ and transverse energy $E_T(\gamma)
> 2.5$ GeV is predicted [@KMRgg; @KMRgg2] to be $\sigma({\gamma}{\gamma})_{\mathrm{exclusive}} \sim$ 0.2 - 2 pb, depending on the low-$x$ (unintegrated) gluon density. Additional uncertainties come from the cross section for $g+g$$\,\rightarrow\,$${\gamma}+{\gamma}$, the probability that no hadrons are produced by additional parton interactions (rapidity gap survival factor and Sudakov suppression [@sudakov]), and the probability that neither proton dissociates (e.g., $p$$\,\rightarrow\,$$p\
\pi^+\pi^-$) [@KMRgg]. The calculation is also imprecise because of the low $Q^2$, the squared 4-momentum transfer. The total theoretical uncertainty on the cross section can be estimated to be a factor $^{\times{3}}_{\div{3}}$ [@exchiggs]. Apart from its intrinsic interest for QCD, the process tests the theory of exclusive Higgs boson production [@cdfloi; @exclgg1; @KMRgg; @bialaslandshoff; @schafer; @exchiggs; @harland; @forshaw; @acf] $p+p$$\,\rightarrow\,$$p+H+p$, Fig. \[fig:cepDiagrams\]b, which may be detectable at the LHC. The leading order processes $gg$$\,\rightarrow\,$$\gamma\gamma$ and $gg$$\,\rightarrow\,$$H$ are calculable perturbatively, but the more uncertain elements of the exclusive processes (mainly the unintegrated gluon densities, the Sudakov suppression, and the gap survival probability) are common to both (see Fig. \[fig:cepDiagrams\]). For a 120 GeV standard model Higgs boson the exclusive cross section at $\sqrt{s} = 7$ TeV is 3 fb with a factor $^{\times{3}}_{\div{3}}$ uncertainty [@exchiggs].
![Leading order diagrams for central exclusive production in $p(\bar{p})-p$ collisions: a) exclusive $\gamma\gamma$ production in $\bar{p}-p$ collisions; b) exclusive Higgs boson production in $p-p$ collisions. Note the screening gluon that cancels the color flow from the interacting gluons. \[fig:cepDiagrams\] ](figure1.eps){width="38.00000%"}
Processes other than $gg$$\,\rightarrow\,$$\gamma\gamma$ can produce an exclusive $\gamma\gamma$ final state. Contributions from $q\bar{q}$$\,\rightarrow\,$$\gamma\gamma$ and $\gamma\gamma$$\,\rightarrow\,$$\gamma\gamma$ are respectively $<$ 5% and $<$ 1% of $gg$$\,\rightarrow\,$$\gamma\gamma$ [@KMRgg]. Backgrounds to exclusive $\gamma\gamma$ events to be considered are $\pi^0\pi^0$ and $\eta\eta$, with each meson decaying to two photons, of which one is not detected. We also consider events where one or both protons dissociate, e.g., $p$$\,\rightarrow\,$$p\,\pi^+\pi^-$, to be background. These backgrounds are small.
We previously published a search for exclusive $\gamma\gamma$ production, finding three candidate events with $E_T({\gamma}) >$ 5 GeV and $|\eta|
<$ 1.0, using data from 532 pb$^{-1}$ of integrated luminosity [@cdfgg1]. The prediction of Ref. [@KMRgg] was 0.8$^{+1.6}_{-0.5}$ events. Two events had a single narrow electromagnetic (EM) shower on each side, as expected for ${\gamma}{\gamma}$, but no observation could be claimed. This Letter reports the observation of 43 events with a contamination of $< 15 \:
\pi^0\pi^0$ events (at 95% C.L.), after we lowered the trigger threshold on the EM showers from 4 GeV to 2 GeV and collected data from another 1.11 fb$^{-1}$ of integrated luminosity. We used the QED process $p + \bar{p} \rightarrow p + \gamma^*\gamma^* + \bar{p} \rightarrow p + e^+e^- + \bar{p}$ in the same data set, for which the cross section is well known, as a check of the analysis.
The data were collected by the Collider Detector at Fermilab, CDF II, at the Tevatron, with $p\bar{p}$ collisions at $\sqrt{s}$ = 1.96 TeV. The CDF II detector is a general purpose detector described elsewhere [@cdf]; here we give a brief summary of the detector components used in this analysis. Surrounding the beam pipe is a tracking system consisting of a silicon microstrip detector, a cylindrical drift chamber (COT) [@cot], and a solenoid providing a 1.4 Tesla magnetic field. The tracking system is fully efficient at reconstructing isolated tracks with $p_T \geq$ 1 GeV/c and $|\eta|<1$. It is surrounded by the central and end-plug calorimeters covering the range $|\eta|<3.6$. Both calorimeters have separate EM and hadronic compartments. A proportional wire chamber (CES) [@balka], with orthogonal anode wires and cathode strips, is embedded in the central EM calorimeter, covering the region of $|\eta| < 1.1$, at a depth of six radiation lengths. It allows a measurement of the number and shape, in both $\eta$ and azimuth $\phi$, of EM showers (clusters of wires or strips). The anode-wire pitch (in $\phi$) is 1.5 cm and the cathode-strip pitch varies with $\eta$ from 1.7 cm to 2.0 cm. The CES provides a means of distinguishing single photon showers from $\pi^0 \rightarrow
\gamma\gamma$ up to $E_T(\pi^0) \sim$ 8 GeV. The region $3.6<|\eta|<5.2$ is covered by a lead-liquid scintillator calorimeter called the Miniplug [@miniplug]. At higher pseudorapidities, $5.4<|\eta|<7.4$, scintillation counters, called beam shower counters (BSC-1/2/3), are located on each side of the CDF detector. Gas Cherenkov detectors, with 48 photomultipliers per side, covering 3.7 $<|\eta|<$ 4.7, detect charged particles, and were also used to determine the luminosity with a 6% uncertainty [@clc].
The data were recorded using a three-level on-line event selection system (trigger). At the first level we required one EM cluster with $E_T >$ 2 GeV and $|\eta| < 2.1$ and no signal above noise in the BSC-1 counters ($|\eta| = 5.4 - 5.9)$. This rapidity gap requirement rejected a large fraction of inelastic collisions as well as most events with more than one interaction (pileup). A second EM cluster with similar properties was required at level two. A level three trigger selected events with two calorimeter showers consistent with coming from electrons or photons: i.e., passing the requirement (cut) that the ratio of shower energy in the hadronic (HAD) calorimeter to that in the EM (HAD:EM) be less than 0.125, and that the signal shape in the CES is consistent with a single shower.
We now describe the offline selection of events, with two isolated EM showers and no other particles except the outgoing $p$ and $\bar{p}$, which were not detected. Two central, $|\eta|<1$, EM showers were required with $E_T >$ 2.5 GeV to avoid trigger threshold inefficiencies. The energy resolution is $dE/E \sim 8\%$ from test beam studies and *in situ* $p/E$ matching for electrons. A refined HAD:EM ratio cut of $< 0.055+0.000\:45E$ was applied, as well as an acoplanarity cut of $|\pi-\Delta\phi|<0.6$. The trigger selection efficiency for single photons was measured using data collected with an interaction trigger (minimum bias). The BSC-1 gap trigger was taken to be 100% efficient as the BSC-1 trigger threshold was clearly above the noise level and the offline selection criteria. We measured an overall trigger efficiency of $\varepsilon_{\mathrm{trig}}=92\%\pm2\%$(syst). A weighting process was necessary due to the different slope in $E_T$ of the minimum bias probe data compared to the signal. The trigger efficiency did not show any $\eta$ or $\phi$ dependence for $|\eta|<1$. Monte Carlo signal simulation data samples were generated using the <span style="font-variant:small-caps;">superchic</span> program (version 1.3) [@harland; @superchic] based on recent developments of the Durham KMR model [@exclgg1]. The Monte Carlo samples were passed through a simulation of the detector, <span style="font-variant:small-caps;">cdfsim</span> 6.1.4.m including <span style="font-variant:small-caps;">geant</span> version 3.21/14 [@geant]. The systematic error was estimated by using the binwise uncertainty of the efficiency in the weighting process of the signal Monte Carlo sample. 
Taking into account a combined detector and offline reconstruction efficiency of $\varepsilon_{\mathrm{rec}}=55\%\pm3\%$(syst), and a photon identification efficiency of $\varepsilon_{\mathrm{id}}=93\%\pm1\%$(syst), we obtained a photon-pair efficiency $\varepsilon_{\mathrm{pho}}=\varepsilon_{\mathrm{trig}}^2\,\varepsilon_{\mathrm{rec}}\,\varepsilon_{\mathrm{id}}^2=40\%\pm3\%$(syst). The systematic uncertainties of the reconstruction and identification efficiency were estimated by shifting kinematical input parameters over a reasonable interval motivated by the dominating EM-energy-scale uncertainty [@escale]. The offline selection then required that no activity other than these two showers (or clusters of showers) occurred in the entire detector, $|\eta| < 7.4$. We used the same procedure as in our earlier study of exclusive $e^+e^-$ events [@cdfee], searching all the calorimeters for any signal above noise levels, determined using noninteraction events; 99.2% of such events have no tower (out of 480) with $E_T > 125$ MeV. We also required the CLC counters and the more forward BSC counters to have signals consistent with only noise. Events triggered only on a bunch crossing (zero bias) showed that the exclusive efficiency, $\varepsilon_{\mathrm{excl}}$, defined as the factor to be applied to the delivered luminosity to account for the requirement of no pileup, is $\varepsilon_{\mathrm{excl}} = 6.8\%\pm0.4\%$(syst). The probability $P(0)$ of a zero-bias event satisfying all the exclusivity cuts, i.e., having no detected inelastic interaction, is $P(0)=A\exp(-\bar{n})=A\exp(-L_{\mathrm{x}} \sigma_{\mathrm{vis}})$, where $L_{\mathrm{x}}$ is the single bunch crossing luminosity (cm$^{-2}$) and $\sigma_{\mathrm{vis}}$ is the visible cross section; $\sigma_{\mathrm{vis}}=\sigma_{\mathrm{inel}}$ if every inelastic collision is detected. We find $\sigma_{\mathrm{vis}}=67 \pm 6$ mb. In the absence of noise (above our chosen thresholds) $A = 1.0$; we find $A = 0.98 \pm 0.02$. We checked that the rate of candidate events, corrected for the exclusive efficiency, was constant during data taking (one year). The systematic uncertainty was estimated using the spread in slope parameters from fits to data in different time periods.
-------------------------------------------------- --------------------------------------------------------
Integrated luminosity $\mathcal{L}_\mathsf{int}$ $1.11\pm0.07$ fb$^{-1}$
Exclusive efficiency $0.068\pm0.004\,\mathrm{(syst)}$
Exclusive $\gamma\gamma$
Events 43
Photon-pair efficiency $0.40\pm0.02\,\mathrm{(stat)}\pm0.03\,\mathrm{(syst)}$
Probability of no conversions $0.57\pm0.06$ (syst)
$\pi^0\pi^0$ b/g (events) 0.0, $< 15$ (95% C.L.)
Dissociation b/g (events) $0.14\pm0.14\,\mathrm{(syst)}$
Exclusive $e^+e^-$
Events 34
Electron-pair efficiency $0.33\pm0.01\,\mathrm{(stat)}\pm0.02\,\mathrm{(syst)}$
Probability of no radiation $0.42\pm0.08\,\mathrm{(syst)}$
Dissociation b/g (events) $3.8\pm0.4\,\mathrm{(stat)}\pm0.9\,\mathrm{(syst)}$
-------------------------------------------------- --------------------------------------------------------
: Summary of parameters used for the measurement of the exclusive photon-pair cross section for $E_{T}(\gamma)>$ 2.5 GeV and $|\eta(\gamma)| <
1.0$. Values for the $e^+e^-$ control study are also given. Note that b/g stands for background.\[summarystatgg\]
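As a rough consistency check, the quoted cross sections can be approximately recovered from the table entries, assuming the simple combination $\sigma = (N - N_{\mathrm{bg}})/(\varepsilon_{\mathrm{pair}}\,\varepsilon_{\mathrm{excl}}\,P\,\mathcal{L}_{\mathrm{int}})$. This combination is an assumption made for illustration; the analysis's full correction scheme may differ in detail.

```python
# Back-of-envelope reproduction of the quoted cross sections from the table
# values, assuming sigma = (N - N_bg) / (eps_pair * eps_excl * P * L_int).
# This simple combination is an assumption; the paper's treatment may differ.

L_INT = 1110.0   # integrated luminosity, pb^-1 (1.11 fb^-1)
EPS_EXCL = 0.068 # exclusive (no-pileup) efficiency

def cross_section_pb(n_events, n_bg, eps_pair, p_no_extra):
    """Corrected cross section in pb under the assumed combination above."""
    return (n_events - n_bg) / (eps_pair * EPS_EXCL * p_no_extra * L_INT)

sigma_gg = cross_section_pb(43, 0.14, 0.40, 0.57)  # exclusive gamma-gamma
sigma_ee = cross_section_pb(34, 3.8, 0.33, 0.42)   # exclusive e+e- (QED check)
print(round(sigma_gg, 2), round(sigma_ee, 2))  # -> 2.49 2.89 pb
```

Both values land within a few percent of the quoted $2.48$ pb and $2.88$ pb, well inside the stated uncertainties.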
![The $e^+e^-$ candidates: invariant mass distribution (a). The two-photon candidates: invariant mass distribution (b), $|\pi-\Delta\phi|$ distribution (c), and $p_T$ distribution of the two photons (d). All error bars are statistical. The MC predictions for $\gamma\gamma$ are normalized to data. The QED prediction for $e^+e^-$ is normalized to the delivered luminosity and efficiencies. The MC samples for the QED process were generated with the <span style="font-variant:small-caps;">lpair</span> program [@lpair]. \[fig:kinGGEE\] ](figure2a.eps "fig:"){width="24.00000%"} ![](figure2b.eps "fig:"){width="24.00000%"}
![](figure2c.eps "fig:"){width="24.00000%"} ![](figure2d.eps "fig:"){width="24.00000%"}
The selection of 81 events passing all cuts was made without reference to the track detectors. We found that 34 have exactly two oppositely charged tracks, 43 have no tracks in the COT, and four are in neither class. Visual inspection of the latter showed that two had photon conversions, and two were likely to be $e^+e^-$ events with bremsstrahlung. These numbers are consistent with expectations from the detector simulation. The tracks in the 34 two-track events agree in all aspects with the QED process $p+\bar{p} \rightarrow p + e^+e^- + \bar{p}$ via two virtual photons, previously observed in CDF [@cdfee; @cdfz]. The calorimeter shower energies are consistent with the momenta measured from the tracks. Kinematic distributions, after detector simulation, are as expected. The mass $M(e^+e^-)$ distribution is presented in Fig. 2a, together with the QED prediction normalized to the delivered luminosity and efficiencies, showing that the cross section agrees with the QED prediction in both magnitude and shape. We measured a cross section of $\sigma_{e^+e^-,\mathrm{exclusive}}(|\eta(e)|<1,\,E_T(e)>2.5\,\mathrm{GeV}) =
2.88\,^{+0.57}_{-0.48}(\mathrm{stat})\pm0.63(\mathrm{syst})$ pb, compared to $3.25\pm0.07$ pb (QED, [@lpair]). The systematic uncertainties for the QED study are mostly identical to the photon case. Distinct from photons, electrons leave tracks in the tracking detectors and may radiate. The systematic uncertainty on the radiation probability was estimated by varying the exclusivity cuts by $\pm10$%. This $e^+e^-$ sample provides a valuable check of the exclusive $\gamma\gamma$ analysis.
The 43 events with no tracks have the kinematic properties expected for exclusive $\gamma\gamma$ production [@superchic]. In particular the $M(\gamma\gamma)$ distribution \[Fig. 2b\] extending up to 15 GeV/c$^2$ is as expected, as well as the acoplanarity $\pi - \Delta \phi(\gamma\gamma)$ \[Fig. 2c\] and the 2-vector sum of $p_T$ \[Fig. 2d\]; in these plots \[unlike Fig. 2a\] the <span style="font-variant:small-caps;">superchic</span> Monte Carlo prediction is normalized to the same number of events as the data. An important issue is whether some of these events could be $\pi^0\pi^0$, rather than $\gamma\gamma$. Note that $\gamma\pi^0$ events are forbidden by *C* parity. The CES chambers give information on the number of EM showers. The minimum opening angle $\Delta\theta_{min}$ between the two photons from $\pi^0$ decay is $2\tan^{-1}\left(\frac{m(\pi)}{p(\pi)}\right)$ = 3.1$^\circ$ for $p(\pi)$ = 5 GeV, well separated in the CES chambers, which have a granularity $< 0.5^\circ$. A $\pi^0$ can fake a $\gamma$ only if one photon ranges out before the CES, or falls in an inactive region (8%) of the detector. All of the 68 $e^{\pm}$ events in our sample, with similar energies, had matching showers in the CES chambers. A <span style="font-variant:small-caps;">geant</span> [@geant] simulation predicts the probability that a photon in our energy range produces a shower to be $\gtrsim98.3$%. We summed the number of reconstructed CES showers in the event, mostly 2 or 3 as shown in Fig. \[fig:pi0BG\] (left).
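As a quick numerical check on the opening-angle argument above, the formula $\Delta\theta_{min} = 2\tan^{-1}(m(\pi)/p(\pi))$ can be evaluated directly (a sketch, taking $m(\pi^0) \approx 0.135$ GeV/c$^2$, a value not stated in the text):

```python
import math

def min_opening_angle_deg(m_pi: float, p_pi: float) -> float:
    """Minimum opening angle (degrees) between the two photons
    from a pi0 -> gamma gamma decay with momentum p_pi (GeV/c)."""
    return 2.0 * math.degrees(math.atan(m_pi / p_pi))

# m(pi0) ~ 0.135 GeV/c^2; p(pi0) = 5 GeV/c as in the text
angle = min_opening_angle_deg(0.135, 5.0)
print(f"{angle:.2f} deg")  # ~3.09 deg, consistent with the quoted 3.1 deg
```

This confirms that two such photons are well separated given the quoted CES granularity of $< 0.5^\circ$.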
![Estimate of $\pi^0\pi^0$ background fraction in the candidate sample. Distribution of reconstructed CES showers per event for data compared to $\gamma\gamma$ and $\pi^0\pi^0$ Monte Carlo simulations (a). Background fraction estimate using Pearson’s $\chi^2$ test to fit the composition hypothesis to the data distribution (b). \[fig:pi0BG\] ](figure3a.eps "fig:"){width="24.00000%"} ![](figure3b.eps "fig:"){width="24.00000%"}
The distribution agrees very well with the $\gamma\gamma$ simulation, and strongly disagrees with the $\pi^0\pi^0$ simulation. Fitting to the sum of the two components gives a best fit to the fraction $F(\pi^0\pi^0)$ = 0.0, with a 95% C.L. upper limit of 15 events. Since this result was obtained, a new calculation of exclusive $\pi^0\pi^0$ production [@hkms] has predicted $\sigma_{excl}(\pi^0\pi^0)$ = 6 - 24 fb for $E_T(\pi^0) > 2.5$ GeV and $|\eta|
< 1.0$, $\lesssim0.01$ of our measured exclusive $\gamma\gamma$ cross section. In the cross section calculation we take this background to be zero. Exclusive $\eta\eta$ production is also expected to be negligible. The only other significant background could be undetected proton dissociation, about 10% for the QED $e^+e^-$ process but $<$1% for $\pom \:
+\,\pom\,\rightarrow {\gamma}+ {\gamma}$ [@KMRgg; @KMRchi; @vkpriv]. The cross section for both photons with $E_T(\gamma) > 2.5$ GeV and $|\eta(\gamma)| < 1.0$ and no other produced particles is given by: $$\sigma_{\gamma\gamma,\mathrm{exclusive}}= \frac{N(\mathrm{candidates}) - N(\mathrm{background})}{\mathcal{L}_\mathsf{int} \cdot \varepsilon \cdot \varepsilon_{excl}}\mathrm{,}$$ where $\varepsilon$ is the product of the trigger, reconstruction, identification, and conversion efficiencies (22.8%) in Table \[summarystatgg\]. The systematic uncertainty on the conversion probability was estimated by varying the exclusivity cuts by $\pm10$%. We find $\sigma_{\gamma\gamma,\mathrm{excl}}\,(|\eta(\gamma)|<1,E_T(\gamma)>2.5~
\mathrm{GeV}) =
2.48\,^{+0.40}_{-0.35}(\mathrm{stat})\,^{+0.40}_{-0.51}(\mathrm{syst})$ pb. The theoretical prediction [@harland] is strongly dependent on the low-$x$ gluon density, having central values 1.42 pb (<span style="font-variant:small-caps;">mstw08lo</span>) or 0.35 pb (<span style="font-variant:small-caps;">mrst99</span>), with other uncertainties estimated to be a factor of about $^{\times3}_{\div3}$ [@vkpriv]. A comparison of our measurement with the only theoretical prediction available to date is shown in Fig. \[fig:expvsth\]. The rates of $e^+e^-$ and $\gamma\gamma$ events with $E_T(e/\gamma) > 5$ GeV are consistent with those in our earlier studies [@cdfee; @cdfgg1].
![Comparison of the measured cross section for the exclusive $\gamma\gamma$ production in $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV with theoretical predictions [@harland].\[fig:expvsth\]](figure4.eps){width="36.00000%"}
In conclusion, we have observed the exclusive production of two high-$E_T$ photons in proton-antiproton collisions, which constitutes the first observation of this process in hadron-hadron collisions. The cross section is in agreement with the only theoretical prediction, based on $g+g \rightarrow
\gamma+\gamma$, with another gluon exchanged to cancel the color and with the $p$ and $\bar{p}$ emerging intact. If a Higgs boson exists, it should be produced by the same mechanism (see Fig. \[fig:cepDiagrams\]), and the cross sections are related.
We thank the Fermilab staff and the technical staffs of the participating institutions for their vital contributions. This work was supported by the U.S. Department of Energy and National Science Foundation; the Italian Istituto Nazionale di Fisica Nucleare; the Ministry of Education, Culture, Sports, Science and Technology of Japan; the Natural Sciences and Engineering Research Council of Canada; the National Science Council of the Republic of China; the Swiss National Science Foundation; the A.P. Sloan Foundation; the Bundesministerium für Bildung und Forschung, Germany; the Korean World Class University Program, the National Research Foundation of Korea; the Science and Technology Facilities Council and the Royal Society, UK; the Institut National de Physique Nucleaire et Physique des Particules/CNRS; the Russian Foundation for Basic Research; the Ministerio de Ciencia e Innovación, and Programa Consolider-Ingenio 2010, Spain; the Slovak R&D Agency; the Academy of Finland; and the Australian Research Council (ARC). We also thank V.A. Khoze, M.G. Ryskin, and L.A. Harland-Lang for many valuable discussions.
[99]{} M.G. Albrow *et al.*, arXiv:hep-ex/0511057 (2001). V.A. Khoze, A.D. Martin, and M.G. Ryskin, Eur. Phys. J. C **23**, 311 (2002), and references therein. J.R. Forshaw and D.A. Ross, *Quantum Chromodynamics and the Pomeron*, (Cambridge University Press, Cambridge, U.K., 1997). S. Donnachie, G. Dosch, P.V. Landshoff, and O. Nachtmann, *Pomeron Physics and QCD*, (Cambridge University Press, Cambridge, U.K., 2002). V.A. Khoze *et al.*, Eur. Phys. J. C **38**, 475 (2005). V.A. Khoze, A.D. Martin, and M.G. Ryskin, Eur. Phys. J. C **14**, 525 (2000). The Sudakov factor suppresses real gluon radiation that could fill the rapidity gaps. V.A. Khoze, A.D. Martin, and M.G. Ryskin, Eur. Phys. J. C **26**, 229 (2002) and references therein. A. Bialas and P.V. Landshoff, Phys. Lett. B [**256**]{}, 540 (1991). A. Schafer, O. Nachtmann and R. Schopf, Phys. Lett. B [**249**]{}, 331 (1990). L.A. Harland-Lang, V.A. Khoze, M.G. Ryskin, and W.J. Stirling, Eur. Phys. J. C **69**, 179 (2010). T.D. Coughlin and J.R. Forshaw, J. High Energy Phys. 01 (2010) 121. M.G. Albrow, T.D. Coughlin, and J.R. Forshaw, Prog. Part. Nucl. Phys. **65**, 149 (2010). T. Aaltonen *et al.* (CDF Collaboration), Phys. Rev. Lett. [**99**]{}, (2007) 242002. D. Acosta *et al.* (CDF Collaboration), Phys. Rev. D **71**, 032001 (2005) and references therein; D. Amidei *et al.* (CDF Collaboration), Nucl. Instrum. Methods **350**, 73 (1994); F. Abe *et al.* (CDF Collaboration), Phys.Rev. D **50**, 2966 (1994). A. Affolder *et al.* (CDF Collaboration), Nucl. Instrum. Methods Phys. Res. Sect. A **526**, 249 (2004). L. Balka *et al.*, Nucl. Instrum. Methods A [**267**]{}, 272 (1988). M. Gallinaro *et al.*, IEEE Trans. Nucl. Sci. [**52**]{}, 879 (2005). D. Acosta *et al.*, Nucl. Instrum. Methods A [**494**]{}, 57 (2002). <span style="font-variant:small-caps;">superchic</span> Monte Carlo Event Generator,\
http://projects.hepforge.org/superchic/ <span style="font-variant:small-caps;">geant</span>, detector simulation and simulation tool, CERN Program Library Long Writeup W5013 (1993). A. Bhatti [*et al.*]{} (CDF Collaboration), Nucl. Instrum. Methods A [**566**]{}, 375 (2006). A. Abulencia *et al.* (CDF Collaboration), Phys. Rev. Lett. [**98**]{}, 112001 (2007). T. Aaltonen *et al.* (CDF Collaboration), Phys. Rev. Lett. [**102**]{}, 222002 (2009). J. Vermaseren, Nucl. Phys. **B229**, 347 (1983). L.A. Harland-Lang, V.A. Khoze, M.G. Ryskin, and W.J. Stirling, Eur. Phys. J. C [**71**]{} 1714 (2011). V.A. Khoze, A.D. Martin, M.G. Ryskin and W.J. Stirling, Eur. Phys. J. C [**35**]{}, 211 (2004). V.A. Khoze and M.G. Ryskin, private communication.
[^1]: Deceased
[^2]: With visitors from $^a$Istituto Nazionale di Fisica Nucleare, Sezione di Cagliari, 09042 Monserrato (Cagliari), Italy, $^b$University of CA Irvine, Irvine, CA 92697, USA, $^c$University of CA Santa Barbara, Santa Barbara, CA 93106, USA, $^d$University of CA Santa Cruz, Santa Cruz, CA 95064, USA, $^e$Institute of Physics, Academy of Sciences of the Czech Republic, Czech Republic, $^f$CERN, CH-1211 Geneva, Switzerland, $^g$Cornell University, Ithaca, NY 14853, USA, $^h$University of Cyprus, Nicosia CY-1678, Cyprus, $^i$Office of Science, U.S. Department of Energy, Washington, DC 20585, USA, $^j$University College Dublin, Dublin 4, Ireland, $^k$ETH, 8092 Zurich, Switzerland, $^l$University of Fukui, Fukui City, Fukui Prefecture, Japan 910-0017, $^m$Universidad Iberoamericana, Mexico D.F., Mexico, $^n$University of Iowa, Iowa City, IA 52242, USA, $^o$Kinki University, Higashi-Osaka City, Japan 577-8502, $^p$Kansas State University, Manhattan, KS 66506, USA, $^q$Korea University, Seoul, 136-713, Korea, $^r$University of Manchester, Manchester M13 9PL, United Kingdom, $^s$Queen Mary, University of London, London, E1 4NS, United Kingdom, $^t$University of Melbourne, Victoria 3010, Australia, $^u$Muons, Inc., Batavia, IL 60510, USA, $^v$Nagasaki Institute of Applied Science, Nagasaki, Japan, $^w$National Research Nuclear University, Moscow, Russia, $^x$Northwestern University, Evanston, IL 60208, USA, $^y$University of Notre Dame, Notre Dame, IN 46556, USA, $^z$Universidad de Oviedo, E-33007 Oviedo, Spain, $^{aa}$CNRS-IN2P3, Paris, F-75205 France, $^{bb}$Texas Tech University, Lubbock, TX 79609, USA, $^{cc}$Universidad Tecnica Federico Santa Maria, 110v Valparaiso, Chile, $^{dd}$Yarmouk University, Irbid 211-63, Jordan.
|
using NUnit.Framework;
using Testura.Code.Models.References;
namespace Testura.Code.Tests.Models.References
{
[TestFixture]
public class VariableReferenceTests
{
[Test]
public void GetLastMember_WhenHavingNoChild_ShouldReturnNull()
{
Assert.IsNull(new VariableReference("test").GetLastMember());
}
[Test]
public void GetLastMember_WhenHavingMember_ShouldReturnMember()
{
var memberReference = new MemberReference("test");
Assert.AreSame(memberReference, new VariableReference("test", memberReference).GetLastMember());
}
[Test]
public void GetLastMember_WhenHavingChainOfMember_ShouldReturnLastMemberInChain()
{
var memberReference = new MemberReference("test");
Assert.AreSame(memberReference, new VariableReference("test", new MethodReference("test", memberReference)).GetLastMember());
}
}
}
|
Jaipur Day Tours
The picturesque capital of Rajasthan, Jaipur is also known as the Pink City, the colour pink being traditionally associated with hospitality in Rajput culture. There is a timeless appeal in the colourful bazaars of Jaipur, where one can shop for Rajasthani handlooms and trinkets. Beautifully laid out gardens and parks, attractive monuments and marvellous heritage hotels, which were once the residences of Maharajas, are worthy of admiration. Not to mention the ambling camels and cheerful people in multi-hued costumes that make your trip to the Pink City a memorable one.
This exclusive tour has been carefully planned to show you one of the most beautiful cities of Rajasthan: Jaipur, known as the Pink City for the pink-coloured walls found throughout the old city. |
FILE PHOTO: India's Foreign Secretary Vijay Gokhale speaks during a media briefing in New Delhi, India, February 26, 2019. REUTERS/Adnan Abidi
NEW DELHI (Reuters) - India meets the criteria for trade concessions that the United States eliminated in June, India’s Foreign Secretary Vijay Gokhale said on Thursday.
The United States removed India from the Generalized System of Preferences (GSP) program that allowed duty-free entry for up to $5.6 billion worth of its annual exports to the United States, citing lack of reciprocal market access.
It is for Washington to decide whether to reinstate the trade concessions under the GSP, Gokhale told a news conference.
Prime Minister Narendra Modi is scheduled to meet U.S. President Donald Trump later this month in the United States. |
1. Field of the Invention
The present invention relates to a display apparatus, a method for driving the display apparatus and electronic equipment. More particularly, the present invention relates to a display apparatus of a flat-panel type, in which pixel circuits each including an electro-optical device are laid out to form a matrix, a method for driving the display apparatus and electronic equipment employing the display apparatus.
2. Description of the Related Art
In recent years, in the field of a display apparatus for displaying an image, a display apparatus of a flat-panel type, in which pixels (or pixel circuits) each including a light emitting device are laid out to form a matrix, has been becoming popular very fast. A light emitting device included in each pixel circuit in the display apparatus of a flat-panel type is an electro-optical device of the so-called current-driven type in which the luminance of a light beam emitted by the device changes in accordance with the magnitude of a current flowing through the device. The development of an organic EL (Electro Luminescence) display apparatus employing such electro-optical devices into a commercial product has been making progress. An example of the electro-optical device of the so-called current-driven type is an organic EL device operating on the basis of a phenomenon in which a light beam is generated by the device when an electric field is applied to an organic film.
The organic EL display apparatus has the following characteristics. The organic EL device employed in the display apparatus can be driven by an applied voltage not exceeding 10 V, so the power consumption of the device is low. In addition, since the organic EL device emits light itself, the organic EL display apparatus is capable of displaying an image with higher visibility than a liquid crystal display apparatus, which displays an image by controlling the intensity of light generated by a light source known as a backlight passing through the liquid crystal cell included in every pixel circuit. On top of that, the organic EL display apparatus can easily be made light and thin because it does not need illumination members such as the backlight which is necessary for the liquid crystal display apparatus. Furthermore, the organic EL device responds extremely fast, with a response time of the order of several microseconds, so no residual image is generated when displaying a moving image.
Much like the liquid crystal display apparatus, either a passive matrix method or an active matrix method can be adopted to drive the organic EL display apparatus. However, even though an organic EL display apparatus adopting the passive matrix method has a simple structure, it raises problems such as difficulty in implementing a large, high-resolution display screen. For these reasons, organic EL display apparatus adopting an active matrix method are being developed aggressively. In accordance with this active matrix method, an active device is provided in the same pixel circuit as the electro-optical device and is used to control the current flowing through the electro-optical device. An example of the active device is an insulated-gate field effect transistor, generally a TFT (thin film transistor).
By the way, the I-V characteristic (that is, the current-voltage characteristic) of an organic EL device is known to deteriorate with the lapse of time in the so-called aging process. In a pixel circuit employing an N-channel TFT for controlling a current flowing through the organic EL device, the organic EL device is connected to the source of the transistor which is referred to hereafter as a driving transistor. Thus, when the I-V characteristic of the organic EL device deteriorates, a voltage Vgs appearing between the gate and source of the driving transistor changes. As a result, the intensity of a light beam generated by the organic EL device also changes as well.
To put it more concretely, an electric potential appearing at the source of the driving transistor is determined by the operating points of the driving transistor and the organic EL device. When the I-V characteristic of the organic EL device deteriorates, the operating points of the driving transistor and the organic EL device change. Thus, the electric potential appearing at the source of the driving transistor also changes even if a voltage applied to the gate of the transistor after the operating points of the driving transistor and the organic EL device change is sustained at the same level as that before the operating points of the driving transistor and the organic EL device change. Accordingly, the voltage Vgs appearing between the gate and source of the driving transistor also changes as well, causing a current flowing through the transistor and a current flowing through the organic EL device to vary. As a result, since the current flowing through the organic EL device varies, the intensity of a light beam generated by the organic EL device also changes as well.
In addition, in the case of a pixel circuit employing a poly-silicon TFT, not only does the I-V characteristic of the organic EL device deteriorate with the lapse of time, but the threshold voltage Vth of the driving transistor and the mobility μ of a semiconductor film composing the channel of the transistor also change with the lapse of time. In the following description, the mobility μ of a semiconductor film composing the channel of a driving transistor is referred to as the mobility μ of the driving transistor. On top of that, the threshold voltage Vth and mobility μ of the driving transistor each vary from pixel to pixel due to variations in fabrication process. That is to say, the characteristic of the driving transistor varies from pixel to pixel.
If the threshold voltage Vth and mobility μ of the driving transistor each vary from pixel to pixel, the current flowing through the transistor also varies from pixel-to-pixel. Thus, the luminance of a light beam generated by the organic EL device also varies from pixel to pixel even for the same voltage applied to the gate of each driving transistor. As a result, the screen loses uniformity.
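The pixel-to-pixel spread described above can be illustrated with the standard saturation-region current equation for a driving transistor, $I = \frac{1}{2}\,\mu\,k\,(W/L)\,(V_{gs} - V_{th})^2$. The sketch below uses made-up device parameters, not values from this description, to show how the same gate voltage yields different currents when $V_{th}$ and $\mu$ vary:

```python
def drive_current_ua(v_gs, v_th, mobility, k_base=1.0, w_over_l=2.0):
    """Saturation-region drain current (arbitrary units) of the driving
    TFT: I = 0.5 * mu * k * (W/L) * (Vgs - Vth)^2."""
    return 0.5 * mobility * k_base * w_over_l * (v_gs - v_th) ** 2

# Same gate voltage, but Vth and mobility vary from pixel to pixel:
pixel_a = drive_current_ua(v_gs=5.0, v_th=1.0, mobility=1.00)
pixel_b = drive_current_ua(v_gs=5.0, v_th=1.3, mobility=0.95)
print(pixel_a, pixel_b)  # different currents -> different OLED luminance
```

Since the organic EL device's luminance tracks the current through it, this spread translates directly into the screen non-uniformity that the threshold-voltage and mobility correction functions are designed to remove.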
In order to prevent the luminance of a light beam generated by the organic EL device from varying from pixel to pixel even for the same voltage applied to the gate of each driving transistor and, hence, from being affected by deteriorations of the I-V characteristic of the organic EL device and/or changes of the threshold voltage Vth and mobility μ of the driving transistor even if the I-V characteristic deteriorates with the lapse of time and/or the threshold voltage Vth and the mobility μ change with the lapse of time, it is necessary to provide every pixel circuit with a compensation function and a variety of correction functions as is described in documents such as patent reference 1 which is Japanese Patent Laid-open No. 2006-133542. The compensation function is a function to compensate for characteristic variations of the organic EL device. The correction functions include a threshold-voltage correction function and a mobility correction function. The threshold-voltage correction function is a function to make corrections for threshold voltage (Vth) variations of the driving transistor. On the other hand, the mobility correction function is a function to make corrections for mobility (μ) variations of the driving transistor.
As described above, every pixel circuit is provided with the compensation function to compensate for characteristic variations of the organic EL device, the threshold-voltage correction function to make corrections for threshold voltage (Vth) variations of the driving transistor and the mobility correction function to make corrections for mobility (μ) variations of the driving transistor. Thus, it is possible to prevent the luminance of a light beam generated by the organic EL device from varying from pixel to pixel even for the same voltage applied to the gate of each driving transistor and, hence, from being affected by deteriorations of the I-V characteristic of the organic EL device and/or changes of the threshold voltage Vth and mobility μ of the driving transistor even if the I-V characteristic deteriorates with the lapse of time and/or the threshold voltage Vth and the mobility μ change with the lapse of time. |
Wide-neck aneurysms: which technique should we use?
|
President Trump is defending his unsubstantiated claim that former President Obama ordered the wiretapping of Trump Tower.
“Wiretap covers a lot of different things,” Trump said in an interview with Fox News’s Tucker Carlson set to air Wednesday night. “I think you’re going to find some very interesting items coming to the forefront over the next two weeks.”
The comments are Trump’s first on the matter since making the stunning allegation in a string of tweets earlier this month.
Trump referenced the New York Times' reporting when asked about the source of his wiretapping accusations.
Administration officials have cited news reports in their defense of Trump's claims, but no news reports have found that Obama or any White House official called for surveillance of Trump.
"Well, I've been reading about things. I read in, I think it was Jan. 20, a New York Times article where they were talking about wiretapping. There was an article, I think they used that exact term," Trump said in the interview.
The president also cited reporting by Fox News host Bret Baier about wiretapping.
"We will be submitting certain things and I will be perhaps speaking about this next week. But it's right now before the committee and I think I want to leave it there," Trump added.
White House press secretary Sean Spicer issued a statement one day after Trump’s tweets, calling for a congressional probe and saying, “Neither the White House nor the president will comment further.”
When Carlson said Trump could gather evidence himself as president without having to rely on news outlets, Trump said: "I do, I do, but frankly I think we have a lot right now."
The article Trump referenced was published in the Times the day before the inauguration and concerned intercepted communications that were part of the intelligence community's probe into links between Trump associates and Russian officials. The story cites an official who "said intelligence reports based on some of the wiretapped communications had been provided to the White House."
But since then, White House officials have repeatedly spoken about the claim.
Spicer on Tuesday appeared to backtrack when he said the tweets weren’t meant to be taken literally, and that Trump could have been referring to a broad range of surveillance activity.
“He doesn’t really think that President Obama went up and tapped his phone personally,” Spicer said.
Key congressional Republicans have grown frustrated with the lack of evidence produced by the administration to back up the president’s assertions.
“Are you going to take the tweets literally? And if you are, then clearly the president was wrong,” House Intelligence Committee Chairman Devin Nunes (R-Calif.) said Wednesday.
The panel said it might subpoena the Justice Department, which requested on Monday more time to produce evidence relating to Trump’s claim.
The committee initially set a Monday deadline for the panel to turn over the information.
The FBI has told Sens. Lindsey Graham (R-S.C.) and Sheldon Whitehouse (D-R.I.) that it will brief them in response to their demand for any surveillance warrant applications of Trump Tower, Graham said Wednesday.
That response comes after Graham fired a warning shot Tuesday, telling reporters that the bureau was about to "screw up big time" if it didn't respond to his request.
Mallory Shelbourne contributed |
/****************************************************************************
**
** Copyright (C) 2016 The Qt Company Ltd.
** Contact: https://www.qt.io/licensing/
**
** This file is part of the examples of the Qt Toolkit.
**
** $QT_BEGIN_LICENSE:BSD$
** Commercial License Usage
** Licensees holding valid commercial Qt licenses may use this file in
** accordance with the commercial license agreement provided with the
** Software or, alternatively, in accordance with the terms contained in
** a written agreement between you and The Qt Company. For licensing terms
** and conditions see https://www.qt.io/terms-conditions. For further
** information use the contact form at https://www.qt.io/contact-us.
**
** BSD License Usage
** Alternatively, you may use this file under the terms of the BSD license
** as follows:
**
** "Redistribution and use in source and binary forms, with or without
** modification, are permitted provided that the following conditions are
** met:
** * Redistributions of source code must retain the above copyright
** notice, this list of conditions and the following disclaimer.
** * Redistributions in binary form must reproduce the above copyright
** notice, this list of conditions and the following disclaimer in
** the documentation and/or other materials provided with the
** distribution.
** * Neither the name of The Qt Company Ltd nor the names of its
** contributors may be used to endorse or promote products derived
** from this software without specific prior written permission.
**
**
** THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
** "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
** LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
** A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
** OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
** SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
** LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
** DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
** THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
** (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
** OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE."
**
** $QT_END_LICENSE$
**
****************************************************************************/
#ifndef DIAGRAMSCENE_H
#define DIAGRAMSCENE_H
#include "diagramitem.h"
#include "diagramtextitem.h"
#include <QGraphicsScene>
QT_BEGIN_NAMESPACE
class QGraphicsSceneMouseEvent;
class QMenu;
class QPointF;
class QGraphicsLineItem;
class QFont;
class QGraphicsTextItem;
class QColor;
QT_END_NAMESPACE
//! [0]
class DiagramScene : public QGraphicsScene
{
    Q_OBJECT

public:
    enum Mode { InsertItem, InsertLine, InsertText, MoveItem };

    explicit DiagramScene(QMenu *itemMenu, QObject *parent = nullptr);
    QFont font() const { return myFont; }
    QColor textColor() const { return myTextColor; }
    QColor itemColor() const { return myItemColor; }
    QColor lineColor() const { return myLineColor; }
    void setLineColor(const QColor &color);
    void setTextColor(const QColor &color);
    void setItemColor(const QColor &color);
    void setFont(const QFont &font);

public slots:
    void setMode(Mode mode);
    void setItemType(DiagramItem::DiagramType type);
    void editorLostFocus(DiagramTextItem *item);

signals:
    void itemInserted(DiagramItem *item);
    void textInserted(QGraphicsTextItem *item);
    void itemSelected(QGraphicsItem *item);

protected:
    void mousePressEvent(QGraphicsSceneMouseEvent *mouseEvent) override;
    void mouseMoveEvent(QGraphicsSceneMouseEvent *mouseEvent) override;
    void mouseReleaseEvent(QGraphicsSceneMouseEvent *mouseEvent) override;

private:
    bool isItemChange(int type) const;

    DiagramItem::DiagramType myItemType;
    QMenu *myItemMenu;
    Mode myMode;
    bool leftButtonDown;
    QPointF startPoint;
    QGraphicsLineItem *line;
    QFont myFont;
    DiagramTextItem *textItem;
    QColor myTextColor;
    QColor myItemColor;
    QColor myLineColor;
};
//! [0]
#endif // DIAGRAMSCENE_H
Thursday, November 21, 2013
Opportunity: SAIS Ph.D. Program, China-Africa Economic Engagement
I am looking for an excellent candidate to undertake Ph.D. research on China-Africa economic engagement under my supervision, to enter in the Fall of 2014. SAIS offers fully funded Ph.D. fellowships. Candidates must already have an MA degree, ideally in development studies, economics, or international relations.
The ideal candidate will have some background in China-Africa relations, fluency in Chinese including the ability to read Chinese, field experience, and excellent English. Admission will depend on academic excellence (high GPA, excellent GRE scores) and a convincing statement of research interests that includes China's going-out engagement in Africa, broadly defined. Quantitative skills (econometrics) and Portuguese or French would be assets, but are not required.
The deadline for applications is December 15. For more information, and to apply, consult the SAIS Ph.D. program website here.
The dancing eagle at the beginning of the show danced with the judges to lift the spirits and excite the audience. Is that talent? A dancing eagle? Apparently, it was Tyra Banks who was underneath the costume, pranking the judges. I still like when Nick Cannon pranked the judges with his mime act; that was hilarious.
Courtney Hadwin, 13, is a fantastic singer, and she rocks it on stage, and she got the golden buzzer from Howie Mandel. She will be in the top 5, and she will be going to the live show in Hollywood in a matter of weeks. Howie did it again; make sure you vote for her when she makes it to Hollywood. Watch it here.
Sophie Fatu did well, and she did it her way. Cute and adorable. Is she a little too young to compete? We will see.
Noah Guthrie, famous from Glee (2009), the Emmy and Golden Globe winner, played Roderick Meeks. He has a fantastic voice. He will be going to Hollywood again, with a second chance at a music career. Remember to vote for him when he makes it to Hollywood. Plus, I was a fan of the show, and they had great music, almost like a musical. Listen to his hits:
If you haven't heard, American Idol is back, and the votes did not pull through. If you were watching The Voice on NBC, or some other show, then you missed the drama on American Idol. They will repeat it, or you could DVR it, or tape it on your VCR, and watch it later.
American Idol is different: they selected the top ten with the TV audience, and the people who did not vote didn't know it was on. It was on. It's like people don't care; they think it's fixed and rigged. It's not fixed or rigged: if people vote, someone will win. Voting one time would not help; you can vote as many times as you like. They don't have the phone numbers anymore; everything is online, with voting apps too. Just go to AmericanIdol.com and download the voting app.
Sometimes people think their favorite will win anyway, but thinking won't do it; you have to vote. If you don't know how to vote, the simple solution is to go to AmericanIdol.com, register to vote, and vote for your favorite; this is different from the presidential election. You can catch the recaps on YouTube or on the AmericanIdol.com website, then watch the LIVE show on ABC on Sunday nights. You can DVR the shows you like and watch them later; that's how DVR works.
Here are your top ten:
Ada Vox
Catie Turner
Cade Foehner
Dennis Lorenzo
Gabby Garrett
Gabe Hutchison
Maddie Poppe
Michelle Sussets
Michale Woodward
Jurnee
Remember, if you don't vote, your favorite will be gone. And whose fault is that?
Did you watch American Idol? It is on ABC now, not FOX anymore. They have all-new judges: R&B legend Lionel Richie, pop diva Katy Perry and country singer Luke Bryan, with returning host Ryan Seacrest. All of these judges are from the music business, and they know what to look for in a pop singer, not an opera singer. An opera singer could at least have sung a Mariah Carey song.
AMERICAN IDOL – “101 (Auditions)” – The gold standard of all music competition series, “American Idol,” will make its highly anticipated return to television as superstar judges Luke Bryan, Katy Perry and Lionel Richie set out on a journey across the nation to discover a new crop of inspiring talent with a touch of Disney magic, as it premieres its first season at its new home on America’s network, The ABC Television Network, SUNDAY, MARCH 11 (8:00-10:01 p.m. EDT). (ABC/Eric Liebowitz) KOBY (DENVER, CO)
“Truthfully, we are in the music business, Country, Pop and R&B, standing here in front of you, and I am telling you that voice is not going to work for records. We are looking for an Idol for popular music that is unique music,” Lionel said to Koby, an opera singer and musical theater actress.
“I think you know you are quirky, crazy, and you get caught up, and coming down this route you are doing yourself a disservice,” Luke Bryan said.
In other words, it's all about your niche: if you sing opera, you don't sing opera to executives who are in country, pop, and R&B music. Broadway is your choice. If you want exposure, then exposure you got, along with humiliation in front of the judges and the millions of people watching you. Maybe someone will see her, and she will end up on Broadway. Know your niche and your genre of song. Sing a song your voice can sing. Don't embarrass yourself. And don't beg to go to Hollywood, and don't have an attitude; be professional, because they will see right through you. Let the judges decide.
Katy Perry is right: the truth hurts. Simon Cowell was right the whole time when looking for talent. You have to have something to impress the judges; it is not an easy task. You see thousands of bad singers and good ones on YouTube. Some got rejected, and some succeeded. It's hard work. A lesson learned and a life lesson.
“You are 16 years old, you have to understand something: this is not a business of being cute, and you love to sing. It's a mental game, ok? If I follow Katy's path, I don't want to give that opportunity, and you get here, and it will destroy you. I want you to understand that you are going into the fire!” Lionel Richie said to Layla Spring, who is 16 years old and has a beautiful voice. She got wise words from Lionel Richie, but it's up to her. There is drama and pressure in being a teen singer in Hollywood, with lots of hard work; it is not an easy task.
Harper Grace went from being the worst singer, when she sang the national anthem at the age of 11, to WOW: a beautiful, controlled voice and a gorgeous, quirky country girl. She got a golden ticket to Hollywood. People can sing better after they are humiliated and learn to become better singers.
It was an exciting and emotional night on “America's Got Talent.” Only five acts went through, and the rest of the acts are off to new beginnings.
The AGT Judges Read Mean Tweets – America’s Got Talent 2017
Tyra Banks announced the fourth, fifth and sixth places, and they were Colin Cloud, Diavolo, and Kechi. America had to save only one of the acts, and they had thirty minutes to vote on the AGT app. If you haven't voted, download the app, because the finals are next week on America's Got Talent.
Next were Christian Guardino, Merrick Hanna and Angelica Hale. It was intense: Christian was a good singer, and Merrick had fun flying and dancing like a robot. Angelica Hale was saved. She has a great voice, but it was excitement mixed with sadness; Christian and Merrick gave her a hug.
Howie Rocks The Boat with Diavolo – America’s Got Talent 2017 (Extra)
The next round was Mandy Harvey, Celine Tam and the Pompeyo Family. They were all great. Mandy Harvey was saved; Mandy was Simon's golden buzzer. Next week is the final, so the other acts will be competing to be the next America's Got Talent winner.
The next round was Light Balance and In the Stairwell. Light Balance was fantastic, and this time everything worked; it was spectacular to see them perform live.
Now for the Dunkin' Save results: Kechi was in fourth place, and she was safe. Then it was between Diavolo and Colin Cloud, and with a three-to-one vote, Diavolo won this round.
Watch America's Got Talent on Tuesday and Wednesday nights on NBC! Who is going to win? We will find out. Don't forget to vote! Get the app in the Google Play or iTunes store.
It was an incredible night last night on America's Got Talent. Since Hurricane Irma is coming this weekend, we might lose power; I hope not. If we can get a cold front next week in the Carolinas, it could push Hurricane Irma away from us, with no flooding either.
Anyway, Tyra Banks announced the bottom three, and they were Evie Clair, Chase Goehring, and Eric Jones. If you have the America's Got Talent app downloaded on your phone, just vote for your favorite.
Buzzer Buddies
The first group was DaNell Daymon Greater Works, a choir, and they were fantastic. It felt like church day on AGT. I like this church, and comedian Preacher Lawson too. Really? The executive producers put them together. Hallelujah! Preacher Lawson wins.
“I predict that you are the next comedic superstar!” Howie Mandel said. We'll see what happens at the finals, but I, for one, can't wait to see Preacher's next stand-up performance!
The next round was Yoli Mayor and Johnny Manuel; they were fantastic singers. Unfortunately, they were dismissed. Johnny Manuel deserves a second chance in the music business; he has an incredible singing voice. Please, someone, give Johnny Manuel another chance in the music business. However, judge Simon Cowell made a strong point, saying, “You couldn't have done any more, you walk out with your heads held high. Everyone is a winner in this round.” I like this Simon Cowell. He knows what music executives want and which acts can perform on stage.
Then came Mike Yung and Darci Lynne. They were incredible performers, and one won the hearts of America. Eventually, Mike Yung went home, but he may still get a record deal since he is in Hollywood; he has an amazing voice. Darci Lynne is America's favorite: her ventriloquist act was fantastic, and she's an incredible singer and comedienne too. She was saved.
Next up were Billy and Emily England, and Sara and Hero. Remember, don't throw your siblings while you are spinning around; that was incredible and shocking to the judges. The looks on the judges and the audience watching them! I bet many people DVR'd it and are watching it again in a loop. The spinning is over for Billy and Emily, but they were great. Sara and Hero were saved. Catch!
Doughnuts roll for the Dunkin' Save. Evie Clair was saved. That means Eric Jones and Chase Goehring were the bottom two, and the judges had to pick the one who would go to the semi-finals. It was 3 to 1; the judges picked Chase Goehring.
Last night's results show of America's Got Talent made for an exciting night of TV on NBC.
Kechi Okwuchi, a plane crash survivor; Mike Yung, a New York subway singer; Chase Goehring, a singer and songwriter; and the Greater Works choir all advanced to the next round.
The fantastic dance performers of Diavolo, mind reader Colin Cloud, and Sara and Hero, the dog training act, also advanced to the semi-finals.
Kechi sang an emotional cover of Katy Perry's “By the Grace of God.” Mike Yung sang Ed Sheeran's song “Thinking Out Loud.” Goehring sang an original song, “Illusion,” and the Greater Works gospel choir brought down the house with “You're the One That I Want” from Grease.
The other acts and performers will be missed, and they will do terrific things in the future. It was sad to see the amazing acts perform one last time for us.
Mat Franco and Piff the Magic Dragon did magic. Mat used a milk carton that delivers different drinks: beer, red wine, water, lemonade, juice, and milk. It was incredible, and it blew my mind. Piff and his magic dog wrote words on a whiteboard; this dog has better penmanship than my doctor.
First, there were some minor issues on Tuesday night's live show of America's Got Talent; overall everything was good, but unfortunately some things did not go as planned. Maybe it was the eclipse; blame it on the eclipse.
The bottom three were Eric Jones, Evie Clair, and The Masqueraders. They had the lowest number of votes.
The first group was Brobots and Mandroids and Light Balance; both were good. Light Balance had some technical difficulties, so they showed the dress rehearsal performance, and they were fantastic; it was flawless and entertaining. They should put the acts that use technology at the beginning of the show, because it takes time to put things together, even during commercial breaks. Light Balance was saved.
Next up were Mandy Harvey and the Pompeyo Family. Mandy was a fantastic performer, and the Pompeyo Family was a brilliant dog act. I want to get an animal-print onesie for my cat named Yoda. Maybe a Yoda onesie for my cat? Mandy was saved.
The next acts were Johnny Manuel and Demian Aditya. Demian is an illusionist and daredevil. His act on Tuesday night put him inside a box above fiery flaming spikes; the box was up in the air, and when it dropped it would land on the spikes without Demian in it, and he would appear somewhere near the judges. Unfortunately, the box got stuck between the steel pipes; it was supposed to slide and drop onto the fiery spikes. What he should have done is come out and appear someplace near the judges, or scare the crap out of them. Ta-da! He should have a backup plan when doing magic. Johnny Manuel was saved. It is not the end for Demian Aditya; he is a great daredevil and illusionist. Where did he go? My cat Yoda just disappeared. He is using the Force; luckily I had a litter box.
The next group was Celine Tam, Merrick Hanna, and Mirror Image. They were fantastic performers, but one of them had to go, and that was Mirror Image; at least they made it this far, to Hollywood. The guys have charisma, and they are likable. I don't know what the judges Howie Mandel, Heidi Klum, and Mel B saw in them. Maybe they have the potential to become stars in Hollywood, or a hit YouTube channel that would put Jake Paul out of business. Celine Tam and Merrick Hanna were saved.
Now for the Dunkin' Save: people had to vote for Evie Clair, The Masqueraders, or Eric Jones. Evie Clair gave a fantastic performance. The Masqueraders gave a magnificent performance. And Eric Jones walked through the glass in front of Howie Mandel; that was incredible. Evie Clair was saved. Now the judges had to decide which of the two remaining acts would go through, and they were tied. The tiebreaker winner was Eric Jones.
1. Introduction {#sec1-jcm-09-02451}
===============
Adoption of a nationwide screening program and recent advances in endoscopic instruments and techniques have led to the increased detection of early colon cancer (ECC) and a reduction in colorectal cancer (CRC) incidence and mortality \[[@B1-jcm-09-02451]\]. The proportion of CRCs diagnosed as submucosal invasive CRC (T1 CRC) is reported as 15--30% \[[@B2-jcm-09-02451],[@B3-jcm-09-02451]\]. T1 CRC has a satisfactory prognosis, with a 5-year survival rate exceeding 90%; complete cure is achieved with endoscopic resection (ER) and/or radical surgery \[[@B4-jcm-09-02451],[@B5-jcm-09-02451]\].
ECC is defined as the carcinoma confined to the mucosal or submucosal layer, regardless of the presence or absence of lymph node metastasis (LNM) \[[@B6-jcm-09-02451]\]. The intramucosal CRC can be cured by a complete ER with safe and reliable en bloc resection, regardless of the tumor size and macroscopic type. However, 5--10% of patients with T1 CRC have LNM or distant metastasis; therefore, they require additional surgical resection with lymph node dissection (ASR) after ER to ensure complete tumor clearance \[[@B7-jcm-09-02451],[@B8-jcm-09-02451]\]. According to the guidelines issued by the Japanese Society for Cancer of the Colon and Rectum (JSCCR) in 2016 \[[@B9-jcm-09-02451]\], non-curative ER (NC-ER) for T1 CRC is defined based on the presence of at least one of the following criteria: (i) unfavorable histologic subtypes (poorly differentiated adenocarcinoma/mucinous carcinoma/signet ring cell carcinoma), (ii) deep submucosal invasion (submucosal invasion depth (SID) ≥1000 μm in non-pedunculated cancers), (iii) positive lymphovascular invasion (LVI) or (iv) positive or undetermined resection margins. In the absence of these factors, curative endoscopic resection (C-ER) is considered for T1 CRCs after ER \[[@B9-jcm-09-02451]\].
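Purely as an illustration (not part of the original study), the JSCCR curability assessment just described can be expressed as a simple decision rule; the function and parameter names below are invented for this sketch:

```python
def is_curative_er(unfavorable_histology: bool,
                   sid_um: float,
                   lvi_positive: bool,
                   margin_positive_or_undetermined: bool) -> bool:
    """Hypothetical sketch of the JSCCR 2016 criteria for a non-pedunculated
    T1 CRC after ER: the resection is non-curative (NC-ER) if ANY of the
    four risk factors is present; otherwise it is curative (C-ER)."""
    non_curative = (
        unfavorable_histology                # poorly differentiated / mucinous / signet ring cell
        or sid_um >= 1000                    # deep submucosal invasion (SID >= 1000 um)
        or lvi_positive                      # lymphovascular invasion
        or margin_positive_or_undetermined   # positive or undetermined resection margin
    )
    return not non_curative
```

For example, a well-differentiated lesion with an SID of 800 µm, no LVI and clear margins would be classified as C-ER, while the same lesion with an SID of 1500 µm would be NC-ER.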
It is unclear whether ASR or a surveillance-only approach after ER is an adequate treatment option for T1 CRC patients. The policy for managing pathologic T1 CRC after C-ER is surveillance-only; however, ASR is recommended in NC-ER cases due to the risk of LNM, according to the JSCCR guideline \[[@B9-jcm-09-02451]\]. However, 90% of T1 CRCs do not involve LNM; therefore, implementing ASR in all patients with NC-ER may cause overtreatment \[[@B5-jcm-09-02451]\]. Moreover, in actual clinical practice, some of these patients refuse to or cannot undergo surgery for various reasons, such as old age, several significant comorbidities and individual preference. Furthermore, ASR after ER is associated with an overall mortality of 1--5% and morbidity of 30%, especially in elderly people \[[@B10-jcm-09-02451],[@B11-jcm-09-02451]\].
Although histopathologic features predicting the risk of LNM and residual cancer have been elucidated, the long-term outcomes in patients with T1 CRC undergoing ER and the characteristics of recurrence after ER remain unknown. We aimed to assess the long-term outcomes, viz. 5-year overall survival (OS) and recurrence-free survival (RFS) in patients with endoscopically resected T1 CRCs. Additionally, in the NC-ER group, we compared recurrence and the associated risk factors between the ASR and surveillance-only subgroups.
2. Methods {#sec2-jcm-09-02451}
==========
2.1. Patients {#sec2dot1-jcm-09-02451}
-------------
We conducted a retrospective study on 220 patients with T1 CRC treated with ER from January 2007 to December 2017. The exclusion criteria were as follows: (a) surgical resection with LN dissection as the initial treatment (*n* = 129); (b) indeterminate tumor depth (*n* = 70); (c) pedunculated lesions (*n* = 30); (d) previous surgery owing to CRC (*n* = 5); (e) familial history of adenomatous polyposis (*n* = 3); (f) inflammatory bowel disease (*n* = 1); and (g) incomplete follow-up (*n* = 24). All patients, as of their last follow-up in December 2017, had an overall median follow-up period of 44 months (interquartile range \[IQR\] 32--69). All data related to patient background (age and sex), endoscopic features of the lesion (tumor size, location, macroscopic type, resection method and complications), histopathologic features (histologic type, SID, LVI and margin positivity) and follow-up status were obtained from the electronic data records. Data on cause and date of death were retrieved from the Korean Ministry of Statistics. This study was approved by the Institutional Review Board of the Pusan National University Hospital, Busan, South Korea (approval number: H-1902-025-076) and adhered to the Declaration of Helsinki.
2.2. Endoscopic Procedure {#sec2dot2-jcm-09-02451}
-------------------------
The ER procedure included endoscopic mucosal resection (EMR) and endoscopic submucosal dissection (ESD, including hybrid ESD). The EMR and ESD methods have been well-described previously \[[@B12-jcm-09-02451]\]. All procedures were performed by experienced endoscopists (G.A.S and D.H.B) using standard methods.
2.3. Histologic Assessment {#sec2dot3-jcm-09-02451}
--------------------------
All resected specimens were immediately stretched, pinned and fixed in 10% buffered formalin for 12--24 h and serially sectioned into 2-mm slices. All specimen slices were examined microscopically to evaluate resection margins and tumor characteristics, including tumor size, histologic type, SID and LVI. Grade of differentiation was assessed on standard hematoxylin-eosin-stained sections. SID was measured as the distance from the lowest point of the muscularis mucosa (or ulceration surface) to the point of deepest tumor penetration ('classic' method) or, in cases of an irregular (discontinuous or hypertrophic) or absent muscularis mucosae, as the distance from the lowest point of an imaginary line in the plane of the muscularis mucosa to the point of deepest tumor penetration ('alternative' method) \[[@B9-jcm-09-02451]\]. LVI was defined as the presence of clusters of malignant cells within an endothelium-lined vascular channel. Positive lateral and vertical margins were defined as exposure of the carcinoma at the submucosal margin of the resected specimen. Diagnosis was confirmed by a board-certified pathologist with expertise in gastrointestinal pathology (D.Y.P). The study pathologist reviewed all lesions.
2.4. Data Collection and Follow-up {#sec2dot4-jcm-09-02451}
----------------------------------
Local recurrence following ER was defined as either any histologically identified colorectal neoplasia that occurred at the ER scar site or LNM detected using computed tomography (CT). Distant metastases were also detected using CT. Follow-up colonoscopies were performed annually after ER, as recommended in the JSCCR guidelines \[[@B9-jcm-09-02451]\]. Physical examinations, blood tests and contrast-enhanced chest and abdominopelvic CT were recommended every 6 months in the first 3 years, and the patients were subsequently followed up annually to evaluate the presence of LNM or distant metastasis. The start of the follow-up period was defined as the index date for ER, while the end was defined as either the date of death or 31 December 2017, whichever occurred first. Cancer-related deaths were defined as deaths due to CRC; cases involving death not related to cancer were examined to ascertain the cause. Patients were censored at the first occurrence of the outcome of interest, death or end of the study period, whichever came first.
2.5. Statistical Analyses {#sec2dot5-jcm-09-02451}
-------------------------
Descriptive statistics were presented as frequencies (%) for categorical variables and mean (±standard deviation) or median (IQR) for continuous variables. Continuous variables were compared using Student's t-test; categorical variables were compared using Fisher's exact test. *p* \< 0.05 was considered statistically significant. Univariable and multivariable logistic regression analyses were performed to identify the patient- and tumor-related risk factors associated with tumor recurrence after initial ER.
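The two-sided Fisher's exact test used for the categorical comparisons can be computed exactly from the hypergeometric distribution of a 2×2 table. The following is a minimal, dependency-free sketch for illustration only (the function name and example table are invented; the study's actual analyses were not run with this code):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]].
    With all margins fixed, cell `a` follows a hypergeometric distribution;
    the p-value sums the probabilities of every table at least as extreme
    (i.e., no more probable) than the observed one."""
    row1, row2, col1 = a + b, c + d, a + c
    total = comb(row1 + row2, col1)

    def pmf(x):
        # probability of observing x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / total

    p_obs = pmf(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # small tolerance guards against floating-point ties
    return sum(pmf(x) for x in range(lo, hi + 1) if pmf(x) <= p_obs * (1 + 1e-9))
```

For the classic "lady tasting tea" table [[3, 1], [1, 3]], this returns 34/70 ≈ 0.486, matching standard statistical software.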
OS and RFS were retrospectively assessed in each group of patients. RFS was defined as freedom from confirmed recurrence or death from the cancer, whereas OS was defined as freedom from death by any cause. To compare OS and RFS between groups, we constructed Cox regression models and Kaplan--Meier curves, and differences were compared using the log-rank test. We used Cox regression analysis to calculate hazard ratios (HRs) for death and recurrence for the following variables: risk stratification, age, sex, location, tumor size, configuration, resection method, SID, LVI and margin positivity.
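To make the survival machinery concrete, the Kaplan--Meier product-limit estimate behind such curves can be sketched in a few lines. This is an illustration with invented inputs, not the study data or the R code actually used:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.
    `times`: follow-up time for each patient (e.g., months to event or censoring).
    `events`: 1 if the endpoint (recurrence/death) occurred, 0 if censored.
    Returns a list of (event_time, survival_probability) steps."""
    event_times = sorted({t for t, e in zip(times, events) if e})
    s = 1.0
    curve = []
    for t in event_times:
        at_risk = sum(1 for x in times if x >= t)                   # still under observation at t
        d = sum(1 for x, e in zip(times, events) if e and x == t)   # events occurring at t
        s *= (at_risk - d) / at_risk                                # multiply conditional survival
        curve.append((t, s))
    return curve
```

Censored patients contribute to the at-risk counts up to their censoring time but never trigger a drop in the curve, which is what makes the estimator suitable for follow-up data with unequal observation periods.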
All statistical analyses were performed by an independent statistician (Department of Biostatistics, Clinical Trial Center, Biomedical Research Institute, Pusan National University Hospital) using the R statistical package program (version 3.6.0; R Foundation for Statistical Computing, Vienna, Austria).
3. Results {#sec3-jcm-09-02451}
==========
3.1. Patient Characteristics and Clinicopathologic Features of the T1 CRC Treated with ER {#sec3dot1-jcm-09-02451}
-----------------------------------------------------------------------------------------
The clinicopathologic features of all patients (*n* = 220) with endoscopically resected and histologically confirmed T1 CRC are summarized in [Table 1](#jcm-09-02451-t001){ref-type="table"}. The cohort comprised 154 men and 66 women (median age, 65 \[range 30--87\] years).
There were 49 and 171 patients in the C-ER and NC-ER groups, respectively. The causes for undergoing NC-ER were SID \> 1000 µm (*n* = 108, 63.2%); margin positivity (*n* = 54, 31.6%); and LVI (*n* = 11, 6.4%). There were no unfavorable histologic types since these lesions were initially surgically resected. The NC-ER group showed a higher proportion of rectal cancers (20/171, 11.7%) and larger tumor size (15.1 ± 7.1 mm) than the C-ER group, although the differences were insignificant (*p* = 0.175 and *p* = 0.085, respectively). However, lesions in the NC-ER group showed a greater SID (2261.3 ± 1270.9 vs. 599.7 ± 273.4 µm) and a higher proportion of LVI (11/171, 6.4% vs. 0/49, 0%) than those in the C-ER group (*p* = 0.001 and *p* = 0.009, respectively). All lesions with positive margins were included in the NC-ER group, and the difference was significant (31.6% vs. 0%, *p* \< 0.001). There were no significant differences between the groups with respect to age, sex, macroscopic type, resection method and procedure-related adverse events. Tumor recurrence (11/171, 6.4%, *p* = 0.129) and the single cancer-related death were observed only in the NC-ER group.
3.2. Long-Term Outcome of Patients in the C-ER and NC-ER Groups {#sec3dot2-jcm-09-02451}
---------------------------------------------------------------
To evaluate long-term outcomes, patients in the C-ER and NC-ER groups were further divided into the ASR and surveillance-only subgroups ([Figure 1](#jcm-09-02451-f001){ref-type="fig"}).
1. Long-term outcomes in the C-ER group:
No tumor recurrence or cancer-related death was observed. Twelve of 49 (24.5%) patients were concerned about LNM and opted for surgery; however, no LNM or recurrence was observed in the ASR subgroup during follow-up (median, 44 \[IQR: 32--69\] months).
2. Long-term outcomes in the NC-ER group:
In this group, 117 (68.4%) patients underwent ASR, and 54 (31.6%) opted for surveillance-only during follow-up. ASR was performed in 115 patients (98.3%) with an SID ≥ 1000 μm, 11 patients (9.4%) with LVI and 45 patients (38.5%) with margin positivity. The long-term outcomes in the NC-ER group based on treatment strategy are shown in [Figure 2](#jcm-09-02451-f002){ref-type="fig"}. In [Figure 2](#jcm-09-02451-f002){ref-type="fig"}a, the Kaplan--Meier curves of OS showed no significant difference between the ASR and surveillance-only subgroups, with an HR of 2.057 (95% confidence interval \[CI\] 0.689--6.136; *p* = 0.19) for the surveillance-only vs. ASR subgroup. The 5-year OS rates were 75.3% (95% CI 57.8--98.1) and 92.6% (95% CI 86.3--99.2) in the surveillance-only and ASR subgroups, respectively. The Kaplan--Meier curves for RFS are presented in [Figure 2](#jcm-09-02451-f002){ref-type="fig"}b. The ASR subgroup showed better RFS than the surveillance-only subgroup, given the HR of 6.127 (95% CI 1.623--12.134; *p* = 0.0023) for the surveillance-only vs. ASR subgroup. The 5-year RFS rates were 84.0% (95% CI 72.4--97.5) and 97.2% (95% CI 94.3--100) in the surveillance-only and ASR subgroups, respectively.
Additionally, we compared clinicopathologic characteristics between lesions with and without recurrence ([Table 2](#jcm-09-02451-t002){ref-type="table"}) and analyzed the risk factors for recurrence in the surveillance-only subgroup ([Table 3](#jcm-09-02451-t003){ref-type="table"}). Eight of 54 patients (14.8%) showed recurrence. In the multivariate analysis, an SID \> 2500 µm (HR, 7.298; CI, 1.253--42.500; *p* = 0.027) and margin positivity (HR, 7.189; CI, 1.033--50.029; *p* = 0.046) were significantly associated with recurrence. In the ASR subgroup, no variable was found to be a significant risk factor for recurrence in the univariate and multivariate analyses (see [Supplemental Tables S1 and S2](#app1-jcm-09-02451){ref-type="app"}).
3.3. Characteristics of Recurrent T1 CRC {#sec3dot3-jcm-09-02451}
----------------------------------------
The details of patients with cancer recurrence are summarized in [Table 4](#jcm-09-02451-t004){ref-type="table"}. The median time to recurrence was 24.8 (range 4.1--82.2) months. While patients in the C-ER group did not show recurrence, all patients with recurrence (*n* = 11) belonged to the NC-ER group. Local recurrence was observed in eight patients and distant metastasis in three (two with liver metastasis and one with lung and bone metastases). Eight patients belonged to the surveillance-only subgroup (8/54, 14.8%) and three patients to the ASR subgroup (3/117, 2.7%).
In the surveillance-only subgroup, one patient had liver metastasis, which was detected 82.2 months after the index ER. The other seven cases involved local recurrence, observed after a median of 24.8 (IQR 4.1--81.9) months. Of these, the earliest local recurrence was detected 4.1 months after the index ER, which was located in the rectum, with an SID \> 2500 µm and pathologically positive vertical margins. Two recurrence cases were observed after 5 years (one, local recurrence; one, distant metastasis). The remaining six recurrence cases involved local recurrence, detected within 3 years. In the ASR subgroup, all three recurrences (two, distant metastases; one, local recurrence) were observed within 1 year of the index ER. Distant metastases involved liver metastasis and lung and bone metastases. Local recurrence was observed at the anastomosis site of ASR.
In total, 14 patients (14/220, 6.4%) died during the follow-up period, and all deaths occurred in the NC-ER group. There were only three deaths among patients with CRC recurrence; cancer-related death was confirmed in only one patient, with bone metastasis, while the other two patients died of cerebrovascular disease and pneumonia. Among the patients who died without recurrence (*n* = 11), the causes of death were cerebrovascular disease (*n* = 2), accidents (*n* = 2), dementia (*n* = 2), pneumonia (*n* = 2), intestinal infection (*n* = 1), hypopharyngeal cancer (*n* = 1) and lung cancer (*n* = 1).
4. Discussion {#sec4-jcm-09-02451}
=============
The present study revealed two main results regarding the long-term outcomes of T1 CRCs after ER. First, we investigated the long-term outcomes in the C-ER group: there were no tumor recurrences or cancer-related deaths in patients with C-ER. Second, we compared long-term prognosis in the NC-ER group, with or without ASR. The difference in OS between the ASR and surveillance-only subgroups was statistically insignificant. However, the RFS rates differed significantly between the ASR (97.2%) and surveillance-only (84.0%) subgroups. Multivariate analysis indicated that an SID \> 2500 µm and margin positivity were associated with recurrence. These results suggest that a surveillance-only approach can be considered as an alternative to surgery for T1 CRCs in selected patients undergoing NC-ER.
The incidence of T1 CRC has increased steadily, and it accounts for 17% of total CRC cases. Moreover, T1 CRC is confirmed in about 0.6% of cases after ER \[[@B13-jcm-09-02451]\]. However, the long-term outcomes in patients with T1 CRC after ER, such as OS and RFS, and the characteristics and types of tumor recurrence, remain unknown. Several studies have reported histopathologic and prognostic factors for predicting LNM in T1 CRC patients treated with surgical resection and lymph node dissection \[[@B14-jcm-09-02451],[@B15-jcm-09-02451]\]. Despite the high clinical and practical relevance, there are only a few reports on follow-up data after ER for patients with T1 CRC \[[@B4-jcm-09-02451],[@B5-jcm-09-02451],[@B16-jcm-09-02451]\].
In this retrospective cohort study, we examined the long-term outcomes in patients with T1 CRC after ER. Patients in the C-ER group, fulfilling the histopathologic criteria described in the JSCCR guidelines, showed excellent long-term outcomes with ER alone and no tumor recurrence, and none of the patients who underwent ASR showed LNM, consistent with previous reports \[[@B4-jcm-09-02451],[@B17-jcm-09-02451]\]. Therefore, our analysis supports the view that T1 CRC satisfying the curative criteria of the JSCCR guidelines carries no increased risk of recurrence without ASR. However, a surveillance-only strategy should be adopted only after comprehensive evaluation of the histological diagnosis by expert gastrointestinal pathologists. Yoda et al. reported that, after reexamination of the original pathologic specimens, LVI was detected in 0.8% (1/126) of cases earlier classified as C-ER \[[@B5-jcm-09-02451]\].
The investigation of long-term prognosis in the NC-ER group of T1 CRC patients, with or without ASR, is the highlight of our study. The difference in OS between the ASR and surveillance-only subgroups was statistically insignificant. A few studies enrolling large numbers of patients with long follow-up periods have reported results similar to ours. A review of patients with T1 CRC in the SEER database (Surveillance, Epidemiology, and End Results; National Cancer Institute, USA) showed a similar risk of death in the ASR and surveillance-only groups, consistent with our results, after accounting for age and comorbidities and adjusting for propensity quintile \[[@B18-jcm-09-02451]\]. In the report by Yamashita et al., the difference in 5-year OS rates between the ASR and surveillance-only subgroups of patients with NC-ER was statistically insignificant \[[@B19-jcm-09-02451]\]. Surveillance-only with close follow-up after ER in the NC-ER group may therefore serve as a good alternative to surgery, especially in patients of advanced age or with significant comorbidities. Various circumstances should be considered to determine the best approach for patients diagnosed with T1 CRC, and further large cohort studies with long-term follow-up are needed to confirm these benefits.
Further, our analysis provides data on recurrence in the NC-ER group. Among the 171 patients in the NC-ER group, RFS was significantly higher in the ASR subgroup than in the surveillance-only subgroup. Similarly, in a recent meta-analysis, recurrence after ER of T1 CRC was 9.5% in high-risk lesions (poor differentiation, SID \> 1000 µm, LVI, and positive resection margin), higher than in low-risk lesions (1.2%) \[[@B20-jcm-09-02451]\]. This suggests that, with respect to recurrence, ASR is warranted in T1 CRC patients undergoing NC-ER. However, the benefit of ASR (removal of residual tumor and LNM) must be balanced against its risk (postoperative morbidity and mortality), because these two factors are weighed against each other in real-world clinical practice. Benizri et al. \[[@B10-jcm-09-02451]\] reported the benefit and risk rates of ASR after ER to be 10.9% and 25%, respectively. Regarding the benefit of subsequent ASR, Choi et al. \[[@B21-jcm-09-02451]\] reported that 14.3% (24/168) of the T1 CRC patients in the NC-ER group benefited, and Rickert et al. \[[@B22-jcm-09-02451]\] showed a benefit rate of 41% for residual tumor and 8.6% for LNM. Regarding risk, several studies have reported postoperative complication rates of 18.8--31.8% for ASR after ER \[[@B10-jcm-09-02451],[@B11-jcm-09-02451],[@B22-jcm-09-02451]\]. Surgical mortality rates of 1.9--6.5% for colon cancer, 3.2--9.8% for total mesorectal excision for rectal cancer \[[@B23-jcm-09-02451],[@B24-jcm-09-02451]\], and 0.8% for ASR after ER have been reported \[[@B25-jcm-09-02451]\]. Furthermore, ASR may be unnecessary in most patients because the overall rate of LNM in T1 CRCs is only about 10% \[[@B5-jcm-09-02451]\].
Therefore, various circumstances need to be considered to determine the best approach for patients diagnosed with T1 CRC, and a comprehensive treatment decision should be made based on other factors, such as age, significant comorbidities, and physical activity levels in medically fit patients.
Our study also elaborates on recurrence in the ASR and surveillance-only subgroups of the NC-ER group. Recurrence was observed in 14.8% (8/54) of patients in the surveillance-only subgroup. In the multivariate logistic regression analysis, SID ≥ 2500 µm and margin positivity were found to be independent risk factors for recurrence. According to the JSCCR guidelines, SID ≥ 1000 µm is an indication for ASR, and several studies have identified deep SID as the most frequent indication for ASR after ER \[[@B26-jcm-09-02451],[@B27-jcm-09-02451]\]. The deeper the SID, the higher the risk of recurrence; previous meta-analyses reported that SID \> 1000 µm was associated with an increased incidence of metastasis, with relative risks of 3.0--5.93 \[[@B28-jcm-09-02451],[@B29-jcm-09-02451]\]. However, recent studies have shown that even in T1 CRC with SID ≥ 1000 µm, the rate of LNM is only about 1--2% in the absence of other risk factors \[[@B30-jcm-09-02451],[@B31-jcm-09-02451]\]. Consistent with those results, our study identified a cutoff of SID \> 2500 µm, rather than SID \> 1000 µm, as associated with the risk of recurrence. This indicates that a surveillance-only strategy after ER may be a treatment option for colorectal cancer with deep submucosal invasion (pT1b) as the only risk factor. However, lesions with SID ≥ 2500 µm were associated with recurrence, and further studies are warranted to confirm the exact depth related to recurrence. Margin positivity is a major indication for ASR after ER in T1 CRC because it is significantly associated with residual disease and local recurrence owing to possible regrowth of remaining tumor cells \[[@B32-jcm-09-02451]\]. Previous studies have demonstrated that a positive margin, especially the vertical margin, is significantly associated with residual disease in patients with endoscopically resected T1 CRC \[[@B33-jcm-09-02451],[@B34-jcm-09-02451]\].
In this study, we also demonstrated that margin positivity is significantly associated with recurrence; therefore, ASR should be recommended for margin-positive T1 cancer.
This study has some limitations. First, the study was limited by its retrospective nature and the attendant selection bias. However, it would not be ethical to conduct a randomized study comparing the long-term outcomes of surveillance-only vs. ASR in patients with NC-ER. Second, statistical power may have been insufficient owing to the small number of deaths and recurrences; a large population-based cohort study would therefore be the best way to assess long-term outcomes in current practice. Third, histological differentiation status and tumor budding were not considered as risk factors for OS and RFS in the NC-ER group. We could not analyze tumor differentiation because patients with poorly differentiated tumors underwent surgical resection with lymph node dissection as the first-line treatment. Additionally, we could not analyze the effect of tumor budding on OS and recurrence because pathology data were unavailable for about 50% of the patients in the cohort. However, tumor budding has neither been shown to predict adverse prognostic events \[[@B35-jcm-09-02451]\] nor to serve as a reliable indicator in routine clinical practice. Therefore, additional data and analyses are required to determine whether these criteria are independent prognostic factors with reliable utility in pathology laboratories.
In conclusion, T1 CRC classified as C-ER according to the JSCCR guidelines carries no increased risk of recurrence. While OS in the NC-ER group was not affected by ASR, RFS was significantly higher in the ASR subgroup than in the surveillance-only subgroup. SID ≥ 2500 µm and margin positivity were identified as independent risk factors for recurrence. Surveillance-only may be considered as an alternative to surgery for T1 CRCs in selected NC-ER patients.
We thank the Department of Biostatistics, Clinical Trial Center, Biomedical Research Institute and Pusan National University Hospital for their excellent assistance in the statistical analysis.
The following are available online at <https://www.mdpi.com/2077-0383/9/8/2451/s1>, Table S1: Comparison of clinicopathological characteristics in the ASR subgroup of the NC-ER group according to recurrence, Table S2: Risk factors for recurrences in the ASR subgroup of the NC-ER group.
D.H.B.: study concept and design; E.Y.P. and D.H.B.: writing the first draft of the study; E.Y.P., M.W.L. and D.H.B.: acquisition, analysis and interpretation of data; D.Y.P.: histological review, interpretation of data and manuscript review; G.H.K. and G.A.S.: project supervision. All authors interpreted the data and contributed to the writing of the study. All authors have read and agreed to the published version of the manuscript.
This research received no external funding.
The authors declare no conflict of interest.
{#jcm-09-02451-f001}
######
Overall survival and recurrence-free survival according to treatment strategy for the NC-ER group (NC-ER followed by additional surgical resection (ASR) vs. NC-ER with surveillance-only): (**a**) hazard ratios and 95% confidence intervals of overall survival for ASR vs. surveillance-only (upper); (**b**) hazard ratios and 95% confidence intervals of recurrence-free survival for ASR vs. surveillance-only (lower).


jcm-09-02451-t001_Table 1
######
Clinicopathologic features of the 220 submucosal invasive colorectal cancers.
-------------------------------------------------------------------------------------------------------------------
Variable C-ER, *N* (%)\ NC-ER, *N* (%)\ *p*-Value
(Overall *N* = 49) (Overall, *N* = 171)
----------------------------------------------------------- -------------------- ---------------------- -----------
Age (years), mean ± SD 64.5 ± 8.8 63.9 ± 10.0 0.688
Sex, *n* (%) \>0.999
Male 34 (69.4) 120 (70.2)
Female 15 (30.6) 51 (29.8)
Location, *n* (%) 0.175
Colon 47 (95.9) 151 (88.3)
Rectum 2 (4.1) 20 (11.7)
Size, mm, mean ± SD 13.2 ± 6.4 15.1 ± 7.1 0.085
Macroscopic type, *n* (%) 0.190
Sessile 17 (34.7) 87 (50.9)
Flat 9 (18.4) 24 (14.0)
LST-G 6 (12.2) 12 (7.0)
LST-NG 17 (34.7) 48 (28.1)
Resection method, *n* (%) 0.195
ESD 39 (79.6) 152 (88.9)
EMR 10 (20.4) 19 (11.1)
Adverse events 0.444
Acute bleeding, *n* (%) 1 (2.1) 11 (6.6)
Delayed bleeding, *n* (%) 0 (0.0) 2 (1.2)
Perforation, *n* (%) 1 (2.0) 3 (1.8)
Pathology
Well or moderately differentiated 49 (100) 171 (100)
Poorly differentiated/mucinous/signet ring cell carcinoma 0 (0.0) 0 (0.0)
  Submucosal invasion depth (µm), mean ± SD                   599.7 ± 273.4        2261.3 ± 1270.9        0.001
Lymphovascular invasion 0 (0) 11 (6.4) 0.009
Margin positivity 0 (0) 54 (31.6) \<0.001
Recurrence 0 (0) 11 (6.4) 0.129
Death 0 (0) 14 (8.2) 0.043
Cancer-related death 0 (0) 1 (0.6) \>0.999
-------------------------------------------------------------------------------------------------------------------
C-ER---curative endoscopic resection; EMR---endoscopic mucosal resection; ESD---endoscopic submucosal dissection; LST-G---laterally spreading tumor--granular type; LST-NG---laterally spreading tumor--non-granular type; NC-ER---non-curative endoscopic resection; SD---standard deviation.
jcm-09-02451-t002_Table 2
######
Comparison of clinicopathologic characteristics in the surveillance-only subgroup of the non-curative endoscopic resection (NC-ER) group according to recurrence.
-----------------------------------------------------------------------------------------------
Recurrence *N* (%)\ No recurrence *N* (%)\ *p*-Value
(Overall *N* = 8) (Overall *N* =46)
------------------------------------ --------------------- ------------------------ -----------
Age (years), mean ± SD 72.5 ± 8.9 65.2 ± 11.0 0.062
Sex, *n* (%) \>0.999
Male 6 (75.0) 35 (76.1)
Female 2 (25.0) 11 (23.9)
Location, *n* (%) 0.577
Colon 8 (100) 39 (84.8)
Rectum 0 (0) 7 (15.2)
Size (mm), mean ± SD 19.3 ± 9.3 15.0 ± 7.4 0.253
Macroscopic type, *n* (%) 0.896
Sessile 4 (50.0) 17 (37.0)
Flat 1 (12.5) 8 (17.4)
LST-G 0 (0) 6 (13.0)
LST-NG 3 (37.5) 15 (32.6)
Resection method, *n* (%) 0.588
ESD 6 (75.0) 40 (87.0)
EMR 2 (25.0) 6 (13.0)
Pathology
Well and moderately differentiated 8 (100) 46 (100)
  Submucosal invasion depth (µm), mean ± SD   1977 ± 1443           1481 ± 770               0.372
Lymphovascular invasion \>0.999
Positive 0 (0) 5 (10.6)
Negative 8 (100) 41 (89.4)
Margin 0.275
Positive 2 (28.6) 7 (14.9)
Negative 6 (71.4) 39 (85.1)
-----------------------------------------------------------------------------------------------
EMR---endoscopic mucosal resection; ESD---endoscopic submucosal dissection; LST-G---laterally spreading tumor--granular type; LST-NG---laterally spreading tumor--non-granular type; SD---standard deviation; NC-ER---non-curative endoscopic resection.
jcm-09-02451-t003_Table 3
######
Risk factors for recurrence in the surveillance-only subgroup of the NC-ER group.
                              No. of Patients   No. of Events   Univariate (OR, 95% CI, *p*-Value)    Multivariate (OR, 95% CI, *p*-Value)
--------------------------- ----------------- --------------- ------------ --------------- ------- ------- --------------- -------
Age (years)
≥65 36 7 3.328 0.386--28.726 0.245
\<65 18 1 1
Sex
Male 41 6 1
Female 13 2 1.279 0.232--7.037 0.777
Size (mm)
≥15 28 5 0.832 0.167--4.139
\<15 26 3 1 0.822
Submucosal invasion depth
≥2500 10 3 5.383 1.079--26.858 0.040 7.298 1.253--42.500 0.027
\<2500 44 5
Lymphovascular invasion 0.999
Positive 5 0 1
Negative 49 8 0 0.000--Inf
Margin 0.055 0.046
Positive 7 2 5.390 0.965--30.096 7.189 1.033--50.029
Negative 47 6 1 1
CI---confidence interval; OR---odds ratio; NC-ER---non-curative endoscopic resection.
jcm-09-02451-t004_Table 4
######
Details of the 11 patients with recurrence after endoscopic resection.
Patient Sex/Age (years) Location Size, mm Resection Method Resection Type Initial Histology Submucosal Invasion Depth Lympho-Vascular Invasion Margin Status at ER Operation Time to Recurrence, Month Recurrence Type Recurrence Treatment Death Death Cause
--------- ----------------- ---------- ---------- ------------------ ---------------- ------------------- --------------------------- -------------------------- --------------------- ----------- --------------------------- ---------------------------------- ---------------------- ------- -------------------------
1 Female/73 RC 25 ESD En bloc Mode-diff 1450 -- -- Yes 9.1 Distant meta(liver meta) Chemotherapy Yes Cerebrovascular disease
2 Male/72 Rectum 35 ESD Piecemeal Well-diff 2750 -- VM No 4.1 Local recurrence Op. refuse Yes Pneumonia
3 Male/65 LC 15 ESD En bloc Mode-diff 3000 -- VM Yes 9.1 Distant meta(lung and bone meta) Chemotherapy Yes Colon cancer
4 Male/71 RC 30 ESD Piecemeal Well-diff 440 -- VM No 25.9 Local recurrence Endoscopic resection No
5 Male/66 LC 25 EMR Piecemeal Mode-diff 1000 -- VM Yes 10.8 Local recurrence Operation No
6 Female/58 LC 25 ESD En bloc Well-diff 1250 -- -- No 37.8 Local recurrence Operation No
7 Male/66 LC 25 ESD En bloc Mode-diff 1475 -- -- No 82.2 Distant meta(liver meta) Chemotherapy No
8 Male/75 LC 20 EMR En bloc Mode-diff 2250 -- -- No 81.9 Local recurrence Op. refuse No
9 Female/81 LC 14 EMR En bloc Mode-diff 2500 -- -- No 24.8 Local recurrence Operation No
10 Male/87 LC 15 ESD En bloc Mode-diff 4150 -- -- No 12.0 Local recurrence Operation No
11 Male/70 LC 10 EMR En bloc Mode-diff 1000 -- -- No 15.0 Local recurrence Operation No
EMR---endoscopic mucosal resection; ESD---endoscopic submucosal dissection; LC---left colon; RC---right colon; VM---vertical margin positivity; meta---metastasis; Well-diff---well-differentiated; Mode-diff---moderately differentiated.
|
The IWW FJU is a union for all freelance journalists, bloggers, and other writers in the news media. Contact us today! You have nothing to lose but your unpaid invoices!
We’re a group of freelance journalists, bloggers, and other writers in news media from all around the world, organizing to improve our working conditions and assert our rights.
In the tumultuous, insecure world of contemporary news media, more and more of us are forced to work on a freelance basis. While it’s difficult to put a precise estimate on the numbers, self-employed writers make up the majority of the profession in the United States and there are legions of us around the world.
Publicizing this union comes after a months-long organizing effort in which we’ve had one-on-one conversations with hundreds of freelance journalists and group meetings with dozens, discussing the struggles that members of our profession face and how we can collectively overcome them.
Many of us deal with long overdue payments, low rates, vast pay disparities, exploitative contracts and frustrating invoicing systems at publications throughout the industry. While nearly every news outlet relies on freelance labor, few are committed to treating workers with dignity and providing fair compensation.
In order to change these conditions, and to gain power through solidarity, we created the Freelance Journalists Union. The FJU is part of the Industrial Workers of the World, an international, member-run union for all workers, which was established in 1905.
To learn more, contact us today! |
344 S.W.2d 262 (1961)
Melissa MAXWELL, by Her Next Friend and Mother, Claudia Maxwell, Appellant,
v.
Sam FRAZE, Respondent.
No. 23186.
Kansas City Court of Appeals, Missouri.
February 6, 1961.
*263 Maurice E. Benson, Kansas City, for appellant.
Dwight L. Larison (Hogsett, Houts, James, Randall & Hogsett) Kansas City, for respondent.
HUNTER, Presiding Judge.
This is a dog bite case. Plaintiff, Melissa Maxwell, 15 years old, obtained a jury verdict for $100 against defendant, Sam Fraze, because his two year old toy female Boxer dog, "Lady" bit her on her right thumb. The trial court sustained defendant's motion to set aside the jury's verdict and to enter judgment for defendant, and plaintiff appeals from the resultant judgment.
The question presented on this appeal is whether plaintiff made a submissible jury case. The principal controversy as presented by the parties is whether defendant kept a vicious dog after knowledge of her vicious propensities, and whether the injury complained of was the result of any such propensity.
In determining the question of submissibility of the case we must view the evidence in the light most favorable to the plaintiff. As so viewed if no case was made on the issues submitted to the jury then the judgment for defendant must be affirmed; otherwise it should be reversed and the jury's verdict ordered reinstated.
Plaintiff testified that on June 7, 1958, she and her mother went to a friend Golding's house at Lake Lotawana. The dog, "Lady", was around the premises. "Lady" belonged to defendant, a neighbor, who lived about four houses away. Plaintiff had been to Lake Lotawana on numerous previous occasions and was familiar with "Lady". She had often played with "Lady" on other occasions and had fed her. She knew the dog was welcome in the Golding house. She and her mother went into the house and "Lady" followed them in. They were sitting on a porch eating lunch with "Lady" nearby. " * * * we heard voices and the dog heard them, too, * * * and she rushed into the kitchen and tried to get out the door. * * * Well, I heard it start yowling like it was in pain, and I got up to see what was wrong. * * * The dog was crouching on the floor with both its front paws caught on the (screen) door and it was still yelling quite a bit, and I crossed over the kitchen to the door to see what was wrong, and I bent down to see how its feet were caught in the door, and how I could get them out of the door, whether it would be best to open the door or pull the dog out. I decided it wouldn't be a good idea to pull the dog out. * * * so I decided to open the door. I was stooped down with my right hand on my knee and I started to straighten up and open the door with my left hand when the dog bit me. I didn't realize at first it bit me until it went through my thumb nail."
Although she had played with "Lady" before, and "Lady" in play would grab her arm with her teeth "Lady" had never before bitten her. She didn't consider the dog ferocious or mean.
"Q. And you wouldn't have any fear of that dog now, would you? A. If the dog were not in pain or something of that nature. * * *
"Q. You would actually say this dog was just a playful dog, would you not? A. Yes, I would."
*264 Plaintiff's mother, Mrs. Maxwell, testified that when she and her daughter arrived at the Golding home they saw "Lady" and petted and talked to her, and she followed them into the house. They were sitting on the porch, "and all of a sudden we heard this dog howling and yelping, and it seemed to be in extreme pain." Mrs. Maxwell knew the dog was in pain and that it might be dangerous to go near it. Her daughter went to see what was wrong and she did not see her get bitten. After her daughter was bitten Mrs. Maxwell opened the screen door freeing the dog's paws from where they were caught between the bottom of the door and the floor. She was able to do so without getting too close to the dog. The dog remained in the house until defendant came and got her.
Mrs. Maxwell from former visits knew the dog and was friendly with it. She had seen her daughter playing with "Lady" on prior occasions. She had never seen "Lady" attempt to bite anyone. She would not call "Lady" a ferocious or mean dog.
The only evidence relating to any past biting by "Lady" came from plaintiff and two neighbor ladies, all called as witnesses on behalf of plaintiff. They testified that defendant the following day came to see plaintiff and remarked about an earlier occasion when "Lady" had bitten him. Plaintiff testified, "He said the dog had jumped up and bitten him (in a fold of fat) on the side of the stomach. He said he slapped it in an effort to break it of bothering people again. I don't remember if he said in play. I don't believe he did." Mrs. Searle, a neighbor, testified she had heard the above statement by defendant. She didn't regard "Lady" as vicious. "Lady" was at her house every day. She feeds her all the time and is not afraid of her. Mrs. Andrae, a neighbor, testified she also heard defendant's mentioned statement. She thought defendant might have said "Lady" bit him on the side a couple of times. She has been around the dog quite a bit and from her observation "Lady" has a friendly disposition. She has often petted the dog and has never seen her snap or bite at anybody. She would say it was just a playful pet.
Defendant testified and explained the mentioned incident by saying, "I got to rough-housing and running with the dog * * * and had been throwing my arm more or less teasing at her, and she made a pass for my arm and more or less got hung up in my shirt and on my belt, and that is the biting. * * * "Q. Did that leave any marks or any scars or draw any blood or anything? A. No, Sir." Defendant adduced other evidence, not helpful to plaintiff, all to the effect that "Lady" was a friendly, playful dog who had never offered to bite, or injure anyone, including numerous children who frequently played with her.
In an action against the owner or harborer of a dog for injury inflicted by it an essential element of the cause of action is defendant's scienter, i. e. actual or constructive knowledge of the vicious or dangerous propensities of the dog. In numerous Missouri decisions on the subject it is stated, "the gist of the action is the keeping of a vicious dog after knowledge of his vicious propensities." Clinkenbeard v. Reinert, 284 Mo. 569, 578, 225 S.W. 667, 669, 13 A.L.R. 485; State ex rel. Kroger Co. v. Craig, Mo.App., 329 S.W.2d 804; Annotation, 17 A.L.R.2d 459, 460; 3 C.J.S. Animals § 148, p. 1248. Of course, the injury complained of must result from the exercise of the dangerous propensity.
Occasionally decisions appear to pay lip service to the trite phrase that every dog is entitled to one bite, and, inferentially that after one bite his owner or keeper is liable for any additional ones. This is not the law, and is not supported by the decisions. It is not necessary for the dog to have bitten someone before if the dog has demonstrated a vicious propensity for biting. The controlling element is not whether it is a first bite but whether the dog has a vicious propensity for biting known to its keeper. On the other hand, *265 the bare fact of a prior bite does not of itself establish the vicious propensity. The circumstances surrounding the occasion of the biting and its extent demonstrate whether the incident of the prior bite is sufficient evidence or some evidence of a vicious propensity of the dog to inflict injury.
Plaintiff has the burden of proof. As stated in Merritt v. Matchett, 135 Mo.App. 176, 115 S.W. 1066, 1068, "The burden is on the plaintiff to prove that her injury was the direct result of a vice of the animal of which defendant had notice." Plaintiff has admitted that the dog is a friendly one, well known to her, and that she, even after the incident in question would not term it to be a vicious dog. The circumstances to which plaintiff has testified demonstrate that the biting of her occurred when the dog was in severe pain while caught in the door. This incident, as related by plaintiff and by her witnesses does not demonstrate that the dog was vicious or had any vicious propensity. The bite obviously did not arise out of any vicious propensity of the dog, but, instead, arose under circumstances in which any dog, in extreme pain and not understanding the cause of his predicament, might in painful frenzy bite a hand voluntarily placed too near him.
We have examined all of the evidence, from whatever its source, that might aid plaintiff in establishing her cause of action, and after giving her the benefit of it and of all reasonable inferences therefrom have concluded that it is insufficient to permit a jury finding on the required elements of keeping a vicious dog after knowledge, active or constructive, of its vicious propensities and that her injury was the direct result of such vice of the dog. The trial court did not err in entering judgment for defendant.
The judgment is affirmed.
All concur.
|
Orange Sees Green With Silicon Valley Accelerator
From the home of technology titans Facebook, Amazon and Google, the French telecom giant and mobile specialist is priming to grow the global reach of its one-year-old Silicon Valley-based start-up accelerator. |
About Charles River
For over 65 years, Charles River employees have enjoyed rich careers working together to assist in the discovery, development and safe manufacture of new therapies for the patients who need them.
When you join Charles River, you become part of an international family that has had a significant impact on the health and well-being of our families, friends and colleagues across the globe. In the past few years alone we’ve helped our clients with the critical research required to develop new, approved treatments for cancer, weight loss, cystic fibrosis, leukemia, IBS, epilepsy, Cushing’s disease and other conditions.
Working at Charles River provides you with a chance to make a difference in the world. Whether your background is in life sciences, finance, quality, IT, sales or another area, your skills will play an important role in supporting the life-saving and valuable work we perform on behalf of our clients. In return, we’ll offer you opportunities to learn, grow and build a career that you can feel passionate about.
Equal Employment Opportunity
Charles River takes affirmative action to ensure equal employment opportunity for minorities, women, disabled individuals, and covered veterans (recently separated veterans, Armed Forces service medal veterans, disabled veterans, and other protected veterans) in accordance with Executive Order 11246, Section 503 of the Rehabilitation Act of 1973 and the Vietnam Era Veterans' Readjustment Assistance Act of 1974. If you would like more information about our affirmative action program for veterans and disabled individuals, please contact Human Resources.
Charles River is committed to working with and providing reasonable accommodation to individuals with disabilities. If, because of a medical condition or disability, you need a reasonable accommodation for any part of the employment process, please email crrecruitment_US@crl.com or call (781) 222-6244 and let us know the nature of your request and your contact information. Learn More
Join Our Talent Community
Connect With Us
Recent Job Openings
Description: Responsibilities BASIC SUMMARY: Collect and record data in the performance of studies. Responsible for handling and restraining animals, clinical observations, sample collection, monitoring food consumption, animal husbandry, and performing accurate data collection and reporting. ESSENTIAL DUTIES AND RESPONSIBILITIES: Observe animals for general health and overall well being. Collect and record research data and biologica...Reference Code: 151254
Join Us
At Charles River, we take a passionate approach to improving human and animal health. Our motivating force is the knowledge that our high quality of work provides people with the potential to live healthier and better lives. Scientific excellence and outstanding customer service are the hallmarks of Charles River. As one of the world’s largest Contract Research Organizations and a market leader in the provision of product development services, Charles River is able to offer you the chance to build on your skills and knowledge and work with world-class experts. Learn more about the many job opportunities Charles River has available today! |
The Battlefield 1
EA made the announcement on Twitter, alongside an announcement of a patch for the ongoing free game test. The patch disables time limits in the beta's Sinai Desert Conquest mode, meaning games will now run without a time cap until one team reaches the full 250-point limit. Finally, EA announced that the beta would begin "simulating extreme launch situations" to test the game's servers, which may result in downtime. It's not clear whether this stress test will run for the duration of the beta - we've contacted EA for comment on that. The beta has suffered major server downtime already, although that appeared to be the result of a hacking group attack. After the beta ends, it's a short wait for the main event - Battlefield 1 launches worldwide on October 21.
Joe Skrebels is IGN's UK News Editor, and he could spend days on the roof of that one house at Sinai Desert's really remote capture point. Sniper heaven. Follow him on Twitter |
Q:
Unfortunately has stopped
I have an app where the user submits some data in a form which is then sent to a server. I am testing it on a tablet and an Android smartphone (Galaxy S2). On the tablet, as soon as I click on "Submit", the application stops working with the message "Unfortunately has stopped working". This problem is not seen on either the phone or the emulator, which has me stumped.
There is another screen in the app where the user has the option to re-submit the same credentials. There too, the same problem is encountered. The rest of the app works OK. This has led me to conclude that the problem might lie in the way I am sending data to the server. That code snippet is as follows:
//code to send to server should begin here.
HttpClient hc = new DefaultHttpClient();
HttpPost hp = new HttpPost("http://www.mywebsite.com/takeDetails.php");
try {
    // Add your data
    List<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(4);
    String val = "new";
    nameValuePairs.add(new BasicNameValuePair("mode", val));
    nameValuePairs.add(new BasicNameValuePair("name", name));
    nameValuePairs.add(new BasicNameValuePair("number", number));
    nameValuePairs.add(new BasicNameValuePair("email", emailID));
    Log.v(this.toString(), "Email = " + emailID);
    hp.setEntity(new UrlEncodedFormEntity(nameValuePairs));

    // Execute HTTP Post Request
    HttpResponse response = hc.execute(hp);
    //Toast.makeText(getApplicationContext(), "Attempting to register.", Toast.LENGTH_LONG).show();
    String responseBody = EntityUtils.toString(response.getEntity());
    if (responseBody.contains("Success")) {
        Toast.makeText(getApplicationContext(), "Thank you for registering! You will receive an email with your username and password shortly.", Toast.LENGTH_LONG).show();
    } else {
        Toast.makeText(getApplicationContext(), "Attempt to register failed.", Toast.LENGTH_LONG).show();
    }
    Log.v(this.toString(), "HTTP Response = " + responseBody);
} catch (ClientProtocolException e) {
    Log.e(this.toString(), "HTTP protocol error during POST", e);
} catch (IOException e) {
    Log.e(this.toString(), "I/O error during POST", e);
}
Logcat output:
V/com.sriram.htmldisplay.htmlDisplay@4107bef0( 3766): Line read = Name: jguyjfhf
V/com.sriram.htmldisplay.htmlDisplay@4107bef0( 3766): Line read = Number: 668895898
V/com.sriram.htmldisplay.htmlDisplay@4107bef0( 3766): Line read = Email ID:jvjhfhc@ccf.mkj
V/com.sriram.htmldisplay.htmlDisplay@4107bef0( 3766): User details gleaned = Name = jguyjfhf
V/com.sriram.htmldisplay.htmlDisplay@4107bef0( 3766): 668895898
V/com.sriram.htmldisplay.htmlDisplay@4107bef0( 3766): jvjhfhc@ccf.mkj
V/com.sriram.htmldisplay.htmlDisplay@4107bef0( 3766): Email = jvjhfhc@ccf.mkj
D/AndroidRuntime( 3766): Shutting down VM
W/dalvikvm( 3766): threadid=1: thread exiting with uncaught exception (group=0x409f11f8)
E/AndroidRuntime( 3766): FATAL EXCEPTION: main
E/AndroidRuntime( 3766): android.os.NetworkOnMainThreadException
E/AndroidRuntime( 3766): at android.os.StrictMode$AndroidBlockGuardPolicy.onNetwork(StrictMode.java:1099)
E/AndroidRuntime( 3766): at java.net.InetAddress.lookupHostByName(InetAddress.java:391)
E/AndroidRuntime( 3766): at java.net.InetAddress.getAllByNameImpl(InetAddress.java:242)
E/AndroidRuntime( 3766): at java.net.InetAddress.getAllByName(InetAddress.java:220)
E/AndroidRuntime( 3766): at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:137)
E/AndroidRuntime( 3766): at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:164)
E/AndroidRuntime( 3766): at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:119)
E/AndroidRuntime( 3766): at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:360)
E/AndroidRuntime( 3766): at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:555)
E/AndroidRuntime( 3766): at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:487)
E/AndroidRuntime( 3766): at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:465)
E/AndroidRuntime( 3766): at com.sriram.htmldisplay.htmlDisplay.writeSendDetails(htmlDisplay.java:200)
E/AndroidRuntime( 3766): at com.sriram.htmldisplay.htmlDisplay.access$10(htmlDisplay.java:127)
E/AndroidRuntime( 3766): at com.sriram.htmldisplay.htmlDisplay$1.onClick(htmlDisplay.java:110)
E/AndroidRuntime( 3766): at android.view.View.performClick(View.java:3511)
E/AndroidRuntime( 3766): at android.view.View$PerformClick.run(View.java:14105)
E/AndroidRuntime( 3766): at android.os.Handler.handleCallback(Handler.java:605)
E/AndroidRuntime( 3766): at android.os.Handler.dispatchMessage(Handler.java:92)
E/AndroidRuntime( 3766): at android.os.Looper.loop(Looper.java:137)
E/AndroidRuntime( 3766): at android.app.ActivityThread.main(ActivityThread.java:4424)
E/AndroidRuntime( 3766): at java.lang.reflect.Method.invokeNative(Native Method)
E/AndroidRuntime( 3766): at java.lang.reflect.Method.invoke(Method.java:511)
E/AndroidRuntime( 3766): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784)
E/AndroidRuntime( 3766): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551)
E/AndroidRuntime( 3766): at dalvik.system.NativeStart.main(Native Method)
D/dalvikvm( 3766): GC_CONCURRENT freed 290K, 7% free 6697K/7175K, paused 4ms+6ms
W/ActivityManager( 1268): Force finishing activity com.sriram.htmldisplay/.htmlDisplay
D/TabletStatusBar( 1340): hiding the MENU button
W/ActivityManager( 1268): Activity pause timeout for ActivityRecord{41406c60 com.sriram.htmldisplay/.htmlDisplay
My questions:
1. Is there a better way to handle errors from the HTTPClient?
2. Any ideas on what may be causing only the tablet to fail are most welcome.
A:
You're trying to run a network request on the main UI thread. Android has disallowed this since 3.0, because doing so locks up your UI until the request completes, rendering your app unresponsive while the request executes.
You'll either have to run your request in a new Thread or an AsyncTask, to take the load off the UI thread. You can find more info on how to use multiple threads here.
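A minimal sketch of that advice using plain java.util.concurrent (no Android classes, so it compiles anywhere; the class and method names are illustrative). On a real device you would deliver the result back on the main thread, e.g. via runOnUiThread(), instead of blocking on get():

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: run blocking work (such as an HTTP POST) off the calling thread.
public class BackgroundPost {
    // Submit blocking work to a single background thread and get a Future back.
    public static <T> Future<T> submit(Callable<T> work) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            return pool.submit(work);
        } finally {
            pool.shutdown(); // lets queued work finish, then frees the thread
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for hc.execute(hp): pretend the server answered "Success".
        Future<String> response = submit(() -> "Success");
        // get() blocks here; in a real app react to the result in a callback instead.
        System.out.println(response.get()); // prints "Success"
    }
}
```

AsyncTask wraps this same submit-work-then-deliver-result pattern for you, with the delivery step already marshalled back onto the UI thread.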
A:
NetworkOnMainThreadException: The exception that is thrown when an application attempts to perform a networking operation on its main thread.
Add this code in onCreate() (note: this only silences the check; a network call on the main thread will still freeze your UI):
StrictMode.ThreadPolicy policy = new StrictMode.ThreadPolicy.Builder().permitAll().build();
StrictMode.setThreadPolicy(policy);
Or, better, move the HTTP call into an AsyncTask:
class RetreiveFeedTask extends AsyncTask<String, Void, RSSFeed> {
    private Exception exception;

    protected RSSFeed doInBackground(String... urls) {
        try {
            // add all your code here and return the RSSFeed you build
            return null; // placeholder so the method compiles
        } catch (Exception e) {
            this.exception = e;
            return null;
        }
    }

    protected void onPostExecute(RSSFeed feed) {
        // TODO: check this.exception
        // TODO: do something with the feed
    }
}
to execute the AsyncTask:
new RetreiveFeedTask().execute(urlToRssFeed);
Also make sure you have added the permission below in your Android manifest file:
<uses-permission android:name="android.permission.INTERNET"/>
|
Unspeakable Injustice: What Will It Take?
While writing my daily news recap, I had the news on in the background, as I often do. Just as I finished my piece on Roger Stone, whom the government is doing everything to convict despite the complete lack of a crime (much less evidence of one), I heard something I’d been waiting for since 2015: “verdict expected in Kate Steinle case.”
I knew the facts of the case pretty well. I knew that Steinle, her father and a friend were on a San Francisco pier when a bullet struck Steinle’s lower back and tore through her abdomen. I knew that surveillance video showed the shooter, serial illegal immigrant Garcia Zarate, running away from the scene. I knew that gunpowder residue was found on Zarate’s hands right after the murder.
I also knew that the circumstances surrounding the gun’s placement were weird, to say the least. It had been stolen from the Bureau of Land Management, wrapped in a cloth and placed on the pier for some mysterious reason. I knew Zarate had given conflicting statements at the time of his arrest. First, he claimed to be “shooting at a seal,” as if that idiotic story made any sense. Then, he claimed he stepped on the gun, causing it to discharge, which was more plausible since the bullet actually ricocheted before killing Steinle. Finally, after lawyering up on the dime of California taxpayers, he settled on an official story: that he’d found the gun, unwrapped it and it accidentally discharged in the process.
Whatever the real story, one thing is for certain: Zarate pulled the trigger on the gun that killed Kate Steinle. That fact is beyond debate.
Here is another fact not up for debate— Zarate was an ILLEGAL ALIEN CONVICTED FELON who had been deported FIVE TIMES.
As a sane, red-blooded American, that makes my blood boil. To see a young, beautiful girl killed just as her life was beginning is equal parts tragic and maddening, but to know that she was killed by someone who had NO BUSINESS HERE IN THE FIRST PLACE makes it a thousand times worse. We have enough felons of our own. Why in the hell are we allowing others to enter? That we could allow our border to be so porous and then be so lenient toward those who breach it, as San Francisco officials were and are, infuriates me beyond words. I’m past the point of befuddlement. I don’t care how our open border counterparts became so stupid; I’m just sick of others paying the price for their stupidity.
Tonight, the Steinle family paid that price for a second time in the form of a grave injustice. The jury had the option of finding Zarate guilty of an array of charges. If the bullet ricochet showed this to be involuntary manslaughter rather than first degree murder, they had that option. The evidence was right there before them. Zarate picked up a gun, it fired and Kate Steinle is dead as a result. If that’s not at least involuntary manslaughter, WHAT THE HELL IS?
WHAT THE HELL WOULD IT TAKE FOR THE FAR LEFT ZEALOTS WHO INHABIT THE CESSPOOL KNOWN AS SAN FRANCISCO TO PUNISH SOMEONE WHO ISN’T A STRAIGHT WHITE MALE?
Will this finally cause the Left and their enablers on the anti-Trump Right to feel shame? Will the fact that authorities in San Francisco released Zarate from custody prior to this shooting evoke enough shame to finally make us love our citizens more than we hate Donald Trump? If this doesn’t wake up the nation to just how backward our country has become, what will??
What will it take for Congress to pass Kate’s Law? What will it take to assert our sovereignty in a way that allows a father to bring his family for a simple walk on a pier without watching his daughter die in his arms?
As Kate Steinle was leaving this Earth, she looked up at her father with an agonizingly simple request— “Help me.”
Tonight, before God, I plead the same for our country.
|
Treatment of phenolic wastewater in an anaerobic fixed bed reactor (AFBR) - recovery after shock loading.
An anaerobic fixed bed reactor (AFBR) was run for 550 days with a mixed microbial flora to stabilize synthetic wastewater that contained glucose and phenol as main carbon sources. The influent phenol concentration was gradually increased from 2 to 40 mmol/l within 221 days. The microbial flora was able to adapt to this high phenol concentration with an average of 94% phenol removal. Microbial adaptation at such a high phenol concentration is not reported elsewhere. The maximum phenol removal observed before the phenol shock load was 39.47 mmol/l or 3.7 g phenol/l at a hydraulic retention time (HRT) of 2.5 days and an organic loading rate (OLR) of 5.3 g/l.d which amounts to a phenol removal rate of ca. 15.8 mmol phenol/l.d. The chemical oxygen demand (COD) removal before exposing the reactor to a shock load corresponded with phenol removal. A shock load was induced in the reactor by increasing the phenol concentration from 40 to 50 mmol/l in the influent. The maximum phenol removal rate observed after shock load was 18 mmol/l.d at 5.7 g COD/l.d. But this was not a stable rate and a consistent drop in COD and phenol removal was observed for 1 week, followed by a sharp decline and production of fatty acids. Recovery of the reactor was possible only when no feed was provided to the reactor for 1 month and the phenol concentration was increased gradually. When glucose was omitted from the influent, unknown intermediates of anaerobic phenol metabolism were observed for some time. |
Thanks to Wikileaks, we now have the smoking gun email irrefutably proving not only did President Barack Obama know about Hillary Clinton’s non-government-issued email account, he used it in correspondence with her.
Even more damning to the president’s credibility, it appears the lie was a purposeful attempt to protect Clinton’s upcoming bid for the presidency.
Shortly after the New York Times broke the story on March 2, 2015, of Clinton’s use of a personal server to supplant the government system — and, as has been revealed, to thwart transparency — Obama announced publicly his lack of prior knowledge.
“The same time everybody else learned it through news reports,” the president told CBS News White House correspondent Bill Plante, as Zero Hedge reported. “The policy of my administration is to encourage transparency, which is why my emails, the BlackBerry I carry around, all those records are available and archived.
“I’m glad that Hillary’s instructed that those emails about official business need to be disclosed.”
It wouldn’t be a stretch to imagine, however, Obama’s placid public appearance guarded secret internal panic — a panic echoed behind the scenes at the burgeoning Clinton campaign.
An email penned by Clinton campaign spokesman Josh Schwerin to Director of Communications, Jennifer Palmieri, and a few others, calling immediate attention to a tweet by journalist Katherine Miller paints an entirely different picture.
Miller tweeted a snippet of the aforementioned interview in which Plante asked Obama, “Mr. President, When did you first learn that Hillary Clinton used an email system outside the U.S. Government for official business while she was Secretary of State?”
“The same time everybody else learned it,” he responded, “through news reports.”
I have some questions here pic.twitter.com/ufkeoZCx2m — Katherine Miller (@katherinemiller) March 7, 2015
In the email — written a mere five minutes after Miller’s tweet — Schwerin says, “Jen you probably have more on this but it looks like POTUS just said he found out HRC was using her personal email when he saw it in the news.”
That email is then forwarded to aides Cheryl Mills and Heather Samuelson, and Clinton advisor Philippe Reines by Nick Merrill — after which Mills forwards the note to John Podesta with the revealing message:
“we need to clean this up – he has emails from her – they do not say state.gov”
Indeed that one piece of correspondence — sparked by a journalist’s simple question — tells more of the potential depth and breadth of efforts undertaken to preserve Hillary Clinton’s public image in the face of rapidly unfolding scandal.
Prior to the release of this one email, Obama’s knowledge of the private Clinton server could not unassailably be characterized as collusion — but the campaign’s near immediate flurry to erase evidence of the president’s correspondence, coupled with his sober delivery of an outright lie to the American public indisputably proves just that.
In fact, as Zero Hedge also points out, this particularly stunning correspondence explains an email from a prior Wikileaks release. Under the subject heading “Special Category,” Podesta writes to Mills:
“Think we should hold emails to and from potus? That’s the heart of his exec privilege. We could get them to ask for that. They may not care, but I seems like they will.”
Of course, less than a month after these exchanges, Hillary’s server got the Bleachbit treatment — to perhaps clean all that dirty laundry of a presidential lie Obama fed the American public to protect the next chosen leader, and the subsequent innumerable lies and actions undertaken to then cover for all the others.
No wonder Hillary Clinton once sardonically joked about using a drone to assassinate Wikileaks founder Julian Assange.
In any other presidential election year, any one of the items in question would force the nominee to resign from the race in disgrace — and an additional investigation of the sitting president. But this is 2016. Welcome to Aldous Huxley’s America.
|
It's that time again. It's time for our monthly-ish update regarding all things MLS Expansion. For those of you new to these updates, I have a bit of a fascination with MLS expansion (and expansion and contraction of pro sports leagues in general). If you need to get caught up, here are the last few updates.
March (Detroit, San Diego, Miami, Phoenix and more)
February (Big San Diego news, Phoenix, St. Louis and lots more)
January (MLS 2 possibilities, expansion timeline, North Carolina, Miami, Tampa Bay and more)
December (FC Cincinnati, St. Louis, San Diego and more)
General Expansion News
Our sister site, Angels of Parade, has confirmed that LAFC will be the only expansion team that will join MLS in 2018. There had long been talk that teams would be coming in as pairs going forward. In 2015 New York City FC and Orlando City SC came in together and this season Minnesota United and Atlanta United joined as a pair. LAFC will come in all alone because Miami, who was long rumored to be the team coming with them, won’t be ready until 2019 (at the soonest).
St. Louis, Missouri
The biggest news this month is for a potential cross-state rival for Sporting Kansas City in St. Louis FC moving up to MLS. They were relying on two ballot measures to pass. One for the MetroLink passed but the one for $60 million in stadium funding failed. The vote was 53% to 47%, or about 3,000 votes. That means only 58,000 people voted in an area that has 315,000 people.
Jim Kavanaugh, one of the investors in SC STL released a statement that while not declaring the bid dead, nearly does.
“While this is likely the final stage of our journey, we owe it to ourselves and to the thousands of people who believed in this effort, and voted for Proposition 2, to step back for a day or two before making an official announcement. In the short term, we will be thanking supporters and volunteers, both within the city and throughout the region.”
I have been saying for quite a while that insistence on public money was a non-starter. Even though SC STL sounds less than optimistic, there could still be reason for hope. It’s just $60 million in funding that is short (easy for me to say). There had been a second bid by the formerly named Foundry St. Louis that had offered to fill the gap before. Maybe they, or another investor, will step in. It seems silly to give up when they were willing to put up the other $255 million.
There was so much news around this I don’t want to leave anyone out:
San Diego’s plan requires no public funds.
San Diego, California
At the same time that St. Louis was failing their vote the group behind the San Diego bid is putting their plan up for a vote even though they apparently don’t have to. One important difference is that San Diego’s plan requires no public funds. This vote wouldn’t be until November 7, 2017 but MLS plans to make their selection by the end of the year. The decision to go to a vote came after the group got their required signatures, beating the goal by about 40,000 signatures.
It seems like a no-brainer as an independent report from the San Diego Regional Economic Development Corporation has said the plan could have an annual economic impact of $2.8 billion.
More big news for San Diego, they have a team name... sort of.
A lot of effort went into that joke.
San Antonio FC
News has been sparse out of Texas but KSAT in San Antonio put out a story on the club’s expansion bid. They make mention of something that could be a deal breaker for their expansion hopes. The current home of SAFC is Toyota Field and is owned by the city and county. There is a plan in place to add about 10,000 seats to the stadium and that would cost about $100 million. The problem is that money may be partially or fully from the public. Ask St. Louis how that works.
Phoenix Rising FC
Phoenix is another city that is on the rise in the MLS expansion race. They recently had their first home game post-rebranding and it would have to be considered a success. 6,890 fans attended the game inside a stadium with only 6,200 seats. They followed that with 6,330 fans in the second home game. The stadium solution is supposed to be temporary with plans in place to build a 20,000+ seat privately financed climate-controlled stadium once expansion is granted.
If you are behind on PRFC then the Arizona Republic has a nice little story to catch you up on their tale.
Detroit, Michigan
The University of Michigan’s Center for Sport & Policy conducted a study that indicates bringing an MLS stadium to the currently half-finished jail site, instead of finishing the jail, would have a huge economic impact: $2.39 billion, as opposed to the $352 million the jail would generate. In addition, the deal would create 2,106 permanent jobs. This can’t be bad for Detroit trying to get a team.
David Beckham’s Miami United
File this under Las Vegas MLS news if you want but Sports Illustrated is reporting that Beckham doesn’t have the option to switch from Miami to Las Vegas. When the Oakland Raiders were announced to be moving to Vegas there was talk that Beckham may follow. After all, he was in Vegas last year telling the city that with the Raiders potentially coming a MLS team could follow.
That team won’t be his unless he wants to give up the sweet deal he has with Major League Soccer that allows him a franchise for a greatly reduced fee. Apparently that is good for one city only, Miami. Now if they can just get that situation worked out.
Sacramento Republic FC
One by-product of the news that LAFC were coming into the league alone next year was word from FiftyFive.One that Sacramento had been rumored to be coming in with them. Apparently the unrest in the ownership group and submitting a bid without the Republic brand caused them to fall back in line as just another bid. Ouch.
Indy Eleven
The Indy Star has grim news for Indy Eleven’s bid to join MLS. The Star is reporting that without “surprise funding” that “Indy Eleven has no discernible path to join America's premier professional soccer league.”
The dilemma? Indy Eleven are relying on the state legislature, i.e., public funds are needed for their stadium plan. Public funding is definitely a no-go. When will these hopeful investors learn that?
North Carolina FC
NCFC has joined forces with two local area youth clubs, Capitol Area Soccer League and Triangle Futbol Club Alliance. This appears to make them only the second of the 12 franchises vying for MLS admittance that would have any sort of academy. It doesn’t seem to be a true academy in the sense of Sporting KC or FC Dallas, but it’s a start. The youth game is needed and important but a long play when the selection process will be half over by the end of 2017.
LAFC
If the news of coming in alone isn’t enough for you (and it should be considering that means top picks in the SuperDraft, Allocation Order and more) then Yahoo has a little more for you. The team is deep into its search for a coach, and on top of that they are looking to sign players starting this summer. That’s exciting news for the prospective 14,000 season ticket holders.
Corner Kicks (all that other expansion/contraction news):
EXPANSION POWER RANKINGS
Note: Due to the league limiting expansion to just the 12 markets that have applied, I'm going to limit my power rankings to those 12 markets plus Miami. Until Miami actually is the 24th team, I have this sneaky feeling they may get jumped.
1. San Diego, California (Previous Rank: 1)
Beautiful weather. A workable stadium plan. Possibly their biggest competition fell off the map in the last week too. The plan to put their situation to a vote is odd and could backfire, but things look good right now.
2. FC Cincinnati (Previous Rank: 3)
They move up by way of Sacramento potentially botching their chance to get a MLS club. It won’t hurt FCC when they play their home opener this weekend on the 15th against Saint Louis FC. The game should have killer attendance and only help the cause. Still just a privately financed stadium plan away from being a shoo-in.
3. Phoenix Rising FC (Previous Rank: 5)
My personal bias at play? Probably. They are off to a great start with their new owners, new stadium and their plan to privately finance their potential MLS stadium. They are also the biggest market of the 12 clubs up for expansion.
4. Miami, Florida (Previous Rank: 4)
They supposedly will be ready to go in 2019, until they aren’t.
5. Sacramento Republic FC (Previous Rank: 2)
The first news in months is that they blew their chance to join the league with LAFC. The fact that they are being called just another bid doesn’t sound good.
6. Tampa Bay/St. Pete (Previous Rank: 6)
Last I checked they are still the biggest media market bidding for expansion. They are also privately financing their stadium expansion.
7. North Carolina FC (Previous Rank: 7)
They have added a youth setup to their rebrand, stadium news and new NWSL team. They are staying in the news and lingering in a spot that could jump them into the top five at any moment.
8. Detroit, Michigan (Previous Rank: 8)
More good news. Now they just need to acquire that stadium site.
9. Nashville, Tennessee (Previous Rank: 9)
With the club still a year away from their USL debut that has to be the biggest mark against them. They are one of the clubs looking for public funds but the Nashville mayor is on board. Will the voters be?
10. San Antonio FC (Previous Rank: 11)
The news that they may need public funds is a black mark, but they move up simply so St. Louis can move down.
11. St. Louis, Missouri (Previous Rank: 10)
If SC STL could come up with their money (without public funds) I’d put them in second right now. Instead it sounds like they are about to give up altogether.
12. Indy Eleven (Previous Rank: 12)
How are they still behind St. Louis? They are a less desirable location and need public money. It’s not happening.
13. Charlotte, North Carolina (Previous Rank: 13)
Same deal as Indy, but even worse is they are by far the worst bid in North Carolina. Billionaires asking for free money just looks bad. |
MESSAGE FROM THE PRESIDENT
Welcome to the South Dakota School of Mines and Technology!

Our goal is to provide you with an enriching environment in which to continue your education. Simply the best! These three words describe well what the South Dakota School of Mines and Technology has become over the last century. SDSM&T has received numerous prestigious awards in recognition of our academic excellence. These include Barron's Best Buys in College Education and America's 100 Best College Buys; the Kaplan Newsweek College Catalog 2000 singled out SDSM&T as a top school for the academically competitive student, for best co-op programs, for best value for your money, and among schools that are hidden treasures. The SDSM&T Center for Advanced Manufacturing and Production was also recognized by Boeing as the most innovative education program for the year 2000.

You can experience this excellence through our programs offered in all the major areas of engineering and the physical sciences. Degrees are offered at the baccalaureate, master's, and doctoral levels. SDSM&T now offers computer careers in three major areas. The Computer Science curriculum is the only program in South Dakota that is accredited by the Computer Science Commission of the Computer Sciences Accreditation Board. The Computer Engineering curriculum is the only program in the state accredited by the Accreditation Board for Engineering and Technology. We also offer a wide variety of courses in computer and information technology to prepare graduates with the latest developments in distributed networks and system software.

Our graduates have also experienced tremendous success as they enter the job market. During the last year, placement rates for Tech graduates in all engineering and science programs have been more than 90% within six months of graduation. Starting salaries have averaged more than $41,000 for our graduates.

Since 1885 students have found the university, nestled at the entrance of the Black Hills, to be a great place to nurture and more fully develop their educational opportunities, their abilities, their character, and their spirit. We want you to experience the educational opportunities and the friendships that bind the graduates of the South Dakota School of Mines and Technology together, wherever they may be around the world.

We invite you to join the South Dakota Tech family and combine our traditions of excellence with the newest of technology. We want to help you become prepared to be a leader in solving tomorrow's problems in an increasingly complex society.

We look forward to your continued growth and success at the South Dakota School of Mines and Technology!

Sincerely,
Richard J. Gowen
President

SDSM&T 2000/2001 UNDERGRADUATE AND GRADUATE CATALOG/1
|
Q:
Foreign Key references 2 separate tables
I have 2 tables both with primary ids:
CREATE TABLE Table1
( Id INT NOT NULL AUTO_INCREMENT,
CONSTRAINT t1_pkey PRIMARY KEY (Id));
CREATE TABLE Table2
( Id INT NOT NULL AUTO_INCREMENT,
CONSTRAINT t2_pkey PRIMARY KEY (Id));
I have a third table on which I am trying to set up a foreign key:
CREATE TABLE Action
( TableId INT NOT NULL AUTO_INCREMENT,
CONSTRAINT ac_pkey PRIMARY KEY (TableId));
I need to add a foreign key that can reference either table1 or table2 depending on which one has value. Is this possible or am I going to have to setup a parent table for tables 1 and 2?
A:
It is not possible for a foreign key to reference one table or the other.
You could combine table1 and table2 with a type column and then have the combination of id, type be the primary key of the combined table and the foreign key in Action.
You could create a new table that is a parent of both table1 and table2 as well as Action
You could create two separate columns in Action, one that references table1 and the other that references table2, and then create a check constraint that ensures that only one of those is populated.
Which approach you prefer will come down to exactly what you're trying to model.
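A minimal sketch of that last approach. The column names Table1Id and Table2Id are illustrative, not from the question, and the demo runs against SQLite via Python's sqlite3 module so it is self-contained; MySQL 8.0.16+ enforces CHECK constraints in the same way.

```python
import sqlite3

# In-memory database; enable foreign key enforcement (off by default in SQLite).
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE Table1 (Id INTEGER PRIMARY KEY);
    CREATE TABLE Table2 (Id INTEGER PRIMARY KEY);
    CREATE TABLE Action (
        Id INTEGER PRIMARY KEY,
        Table1Id INTEGER REFERENCES Table1(Id),
        Table2Id INTEGER REFERENCES Table2(Id),
        -- exactly one of the two reference columns must be populated
        CHECK ((Table1Id IS NULL) <> (Table2Id IS NULL))
    );
    INSERT INTO Table1 (Id) VALUES (1);
""")

# A row referencing only Table1 satisfies both the FK and the CHECK.
conn.execute("INSERT INTO Action (Id, Table1Id, Table2Id) VALUES (1, 1, NULL)")

# A row populating both reference columns is rejected.
try:
    conn.execute("INSERT INTO Action (Id, Table1Id, Table2Id) VALUES (2, 1, 1)")
    both_allowed = True
except sqlite3.IntegrityError:
    both_allowed = False
print(both_allowed)  # False
```

In MySQL you would keep the AUTO_INCREMENT columns from the question; the two-FK-plus-CHECK pattern itself is the same.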
|
// Copyright (c) Microsoft Open Technologies, Inc. All rights reserved. See License.txt in the project root for license information.

using System.Collections.Generic;
using System.Diagnostics.CodeAnalysis;
using System.Linq;

namespace System.Web.Http.Routing
{
    // Represents a segment of a URI that is not a separator. It contains subsegments such as literals and parameters.
    internal sealed class PathContentSegment : PathSegment
    {
        public PathContentSegment(IList<PathSubsegment> subsegments)
        {
            Subsegments = subsegments;
        }

        [SuppressMessage("Microsoft.Performance", "CA1800:DoNotCastUnnecessarily", Justification = "Not changing original algorithm.")]
        public bool IsCatchAll
        {
            get
            {
                // TODO: Verify this is correct. Maybe add an assert.
                return Subsegments.Any<PathSubsegment>(seg => (seg is PathParameterSubsegment) && ((PathParameterSubsegment)seg).IsCatchAll);
            }
        }

        public IList<PathSubsegment> Subsegments { get; private set; }

#if ROUTE_DEBUGGING
        public override string LiteralText
        {
            get
            {
                List<string> s = new List<string>();
                foreach (PathSubsegment subsegment in Subsegments)
                {
                    s.Add(subsegment.LiteralText);
                }
                return String.Join(String.Empty, s.ToArray());
            }
        }

        public override string ToString()
        {
            List<string> s = new List<string>();
            foreach (PathSubsegment subsegment in Subsegments)
            {
                s.Add(subsegment.ToString());
            }
            return "[ " + String.Join(", ", s.ToArray()) + " ]";
        }
#endif
    }
}
|
Prognostic implications of extracapsular extension of pelvic lymph node metastases in urothelial carcinoma of the bladder.
To determine whether extracapsular extension of pelvic lymph node metastases from urothelial carcinoma of the bladder is of prognostic significance. From a consecutive series of 507 patients with urothelial carcinoma of the bladder preoperatively staged N0M0, 101 of 124 patients with lymph node metastases detected on histologic examination fulfilled the inclusion criteria for this study and were evaluated. All underwent radical cystectomy between 1985 and 2000 with standardized extended bilateral pelvic lymphadenectomy in curative intent and were prospectively followed for recurrence-free (RFS) and overall (OS) survival. Staging was done according to UICC 2002. A total of 2375 lymph nodes were examined. The median number of nodes examined per patient was 22 (range, 10-43). The median number of positive nodes was 2 (range, 1-24). Median RFS and OS were 17 and 21 months (range for both, 1-191), respectively. The 5-year RFS and OS rates were 32% and 30%, respectively. There were 59 patients (58%) with extracapsular extension of lymph node metastases. They had a significantly decreased RFS (median, 12 vs. 60 months, P=0.0003) and OS (median, 16 vs. 60 months, P <0.0001) compared with those with intranodal metastases. There were no significant differences in survival between pN1 and pN2 categories with extracapsular extension of the lymph node metastases (RFS, P=0.70; OS, P=0.65) or those without extension (RFS, P=0.47; OS, P=0.34). On a multivariate analysis, extracapsular extension of lymph node metastases was the strongest negative predictor for RFS. Meticulous lymph node resection and subsequent thorough histologic examination in patients undergoing radical cystectomy for bladder cancer reveals a high incidence of lymph node-positive disease (24%) despite negative preoperative staging. Lymph node metastases with extracapsular extension in pN1 and pN2 stages carry a very poor prognosis. 
Therefore, this feature should be used to designate a separate pN category in the staging system. The discrimination of pN1/pN2 in the UICC 2002 classification seems to be arbitrary and of no significant prognostic relevance.
|
481 P.2d 330 (1971)
The FORTY-SECOND LEGISLATIVE ASSEMBLY of the State of Montana, and Frank Murray, Secretary of State of the State of Montana, Plaintiffs and Relators,
v.
Joseph L. LENNON, Clerk and Recorder of Cascade County, Montana, Defendant and Respondent.
No. 12008.
Supreme Court of Montana.
Submitted February 5, 1971.
Decided February 19, 1971.
*331 Robert L. Woodahl, Atty. Gen., Helena, Charles C. Lovell, Asst. Atty. Gen., argued, Great Falls, John Northey, Asst. Atty. Gen., argued, Helena, for plaintiffs and relators.
J. Fred Bourdeau, County Atty., argued, Great Falls, for defendant and respondent.
HASWELL, Justice.
This is an original proceeding in this Court by the present State Legislature and Secretary of State seeking a declaratory judgment determining certain of their legal rights concerning the calling, election of delegates, and implementation of a constitutional convention for the State of Montana.
The specific legal issues sought to be determined herein are:
1. May state and local officers serve as delegates to the constitutional convention? Is a delegate to the constitutional convention a "state officer"?
2. Does the phrase "elected in the same manner" in section 8, Article XIX of the Constitution of the State of Montana refer only to the constitutional provisions for election of representatives or does it also refer to contemporary statutory provisions for "nomination" and "election" of members of the house of representatives? May the Legislative Assembly provide for nonpartisan nomination and election of delegates to the constitutional convention?
3. If the house of representatives is reapportioned based on the 1970 census, shall the constitutional convention be apportioned on the basis of the house of representatives elected November 3, 1970 or the house of representatives to be elected November 7, 1972?
Plaintiffs and relators in this action are the Forty-second Legislative Assembly of the State of Montana and Frank Murray, the Secretary of State of the State of Montana. Defendant and respondent is Joseph L. Lennon, Clerk and Recorder of Cascade County, Montana. The latter two persons are public officials with prescribed duties concerning elections.
The background of the present controversy is undisputed. The 1969 Montana State Legislature, pursuant to authority contained in Article XIX, section 8 of the Montana Constitution, enacted Chapter 65, Montana Session Laws of 1969, providing for a referendum election on the question of calling a constitutional convention to revise, alter, or amend the Constitution of Montana. This question was submitted to the electors of this state at the general election held on November 3, 1970, at which time 133,482 electors voted in favor of calling such constitutional convention and 71,643 electors voted against it. It then became the duty of the present legislative assembly to provide for the calling of such constitutional convention under *332 Article XIX, section 8 of the Montana Constitution providing in pertinent part:
"* * * if a majority of those voting on the question shall declare in favor of such convention, the legislative assembly shall at its next session provide for the calling thereof."
The next legislative assembly mentioned therein is now in session and constitutionally limited to a session of 60 days. The legislative assembly now has under consideration a proposed constitutional convention enabling act designated House Bill 168 prescribing, among other things, the qualifications and manner of electing delegates to the constitutional convention.
This pending legislation, including permissible amendments thereto, has raised grave and bona fide legal questions concerning the authority and powers of the legislative assembly in enacting the required constitutional convention enabling act. The specific areas of legal controversy are defined and encompassed in the issues submitted to us for determination in this action.
Faced with this dilemma and the necessity of prompt resolution thereof, the legislative assembly enacted Senate Bill 6, now Chapter 3, Montana Session Laws of 1971, approved by the Governor and effective on January 18, 1971. This legislation authorized and directed the attorney general of Montana, on behalf of the legislative assembly and secretary of state, to institute an action in this Court under the Montana Uniform Declaratory Judgments Act, Title 93, Chapter 89, R.C.M. 1947, to determine the legal issues in controversy.
On January 21, 1971 the attorney general petitioned this Court for leave to file a complaint accordingly. The petition was heard by this Court on the same day. Thereafter, on the same day, this Court entered its order granting leave to file such original complaint for declaratory judgment and assumed original jurisdiction of the controversy. Personal service was ordered to be made forthwith on the defendant and respondent clerk and recorder of Cascade County who was required to answer by January 27 with briefs to be filed and oral argument presented at a hearing February 5. Such was duly accomplished.
At the conclusion of the hearing on February 5, this case was submitted to the Court for decision and taken under advisement. The pleadings disclose no factual dispute, presenting only legal issues for determination by this Court. This opinion constitutes the declaratory judgment of this Court determining the legal issues presented for decision.
At the outset, we will briefly discuss the jurisdiction of this Court to entertain an original proceeding under the Montana Uniform Declaratory Judgments Act in the instant case, before proceeding to determination of the ultimate issues involved in the present controversy.
A declaratory judgment action is a proper proceeding in which to reach and answer the legal issues raised in this proceeding. A court of record in Montana is specifically granted the power "to declare rights, status, and other legal relations" of a party (section 93-8901, R.C.M. 1947) which "are affected by a statute" (section 93-8902, R.C.M. 1947) and in which a declaratory judgment "will terminate the controversy or remove an uncertainty" (section 93-8905, R.C.M. 1947). This is precisely the situation that exists in the present case. Here we have a presently existing bona fide, justiciable, legal controversy concerning the authority of the legislative assembly under the constitution and statutes of Montana in enacting mandatory enabling legislation for a constitutional convention. Resolution of the issues presented herein is necessary to eliminate or reduce a multiplicity of future litigation; to prevent interminable delay in the election of delegates, the formation, and the functioning of the constitutional convention; and to eliminate needless expenditure of public funds on procedures that otherwise might subsequently be declared illegal. One of the basic purposes of the Montana Declaratory Judgments Act is to provide a procedure *333 for advance determination of such issues, thereby eliminating these otherwise detrimental results.
Under the circumstances of the present case, an original proceeding for declaratory judgment in the Supreme Court is likewise authorized. Jurisdiction is granted this Court to hear and determine "such other original and remedial writs as may be necessary or proper to the complete exercise of its appellate jurisdiction" (Article VIII, section 3, Montana Constitution). A similar provision exists by statute (section 93-214, R.C.M. 1947), and Montana case law is replete with authority sustaining the original jurisdiction of the Supreme Court in declaratory judgment actions in a variety of situations. State ex rel. Schultz-Lindsay v. Board of Equalization, 145 Mont. 380, 403 P.2d 635; Carey, State Treas. v. McFatridge, 115 Mont. 278, 142 P.2d 229; Gullickson v. Mitchell, 113 Mont. 359, 126 P.2d 1106; Bottomly v. Meagher County, 114 Mont. 220, 133 P.2d 770. The foregoing cases establish the original jurisdiction of the Supreme Court in a declaratory judgment action where legal questions of an emergency nature are presented and ordinary legal procedures will not afford timely or adequate relief. Such is the situation here. We have an urgent emergency situation in view of the mandatory legislation required of the present session of the legislative assembly, the absence of any factual controversy but only pure legal questions that must ultimately be answered by this Court in any event, and ordinary legal procedures that will not afford timely relief.
Directing our attention to the first issue before us for determination, we find that it contains two questions which we answer as follows:
Any state and local officers who are prohibited by the constitution or laws of Montana from holding more than one office may not serve as delegates to the constitutional convention. A delegate to the constitutional convention is a "state officer" holding a public office of a civil nature.
Constitutional prohibitions against certain officers holding more than one office include state senators and representatives "during the term for which [they] shall have been elected", Article V, section 7, Montana Constitution; the governor, lieutenant governor, secretary of state, attorney general, state treasurer, state auditor, and superintendent of public instruction "during [their] term of office", Article VII, section 4, Montana Constitution; and justices of the supreme court and district judges "while [they] remain in the office to which [they have] been elected or appointed", Article VIII, section 35, Montana Constitution. (Bracketed words pluralized).
These restrictions prevent such officers from holding any other "public office" or "civil office" of the state, and these two terms are synonymous. State ex rel. Barney v. Hawkins, 79 Mont. 506, 257 P. 411. This Court has heretofore defined the requirements of a "public office" within the meaning of Montana constitutional proscriptions in Barney as follows:
"After an exhaustive examination of the authorities, we hold that five elements are indispensable in any position of public employment, in order to make it a public office of a civil nature: (1) It must be created by the Constitution or by the Legislature or created by a municipality or other body through authority conferred by the Legislature; (2) it must possess a delegation of a portion of the sovereign power of government, to be exercised for the benefit of the public; (3) the powers conferred, and the duties to be discharged, must be defined, directly or impliedly, by the Legislature or through legislative authority; (4) the duties must be performed independently and without control of a superior power, other than the law, unless they may be those of an inferior or subordinate office, created or authorized by the legislature, and by it placed under the general control of a superior officer or body; *334 (5) it must have some permanency and continuity and not be only temporary or occasional." 79 Mont. 528, 257 P. 418.
It is readily apparent that delegates to a constitutional convention possess the requirements listed in (1), (3), and (4) of Barney.
In our view delegates to a constitutional convention also "possess a delegation of a portion of the sovereign power of government, to be exercised for the benefit of the public" satisfying requirement (2) of Barney. Plaintiffs and relators argue that this requirement is not satisfied, drawing a distinction between officers of the executive, legislative and judicial branches of the state government and delegates to a constitutional convention who act as agents of the people occupying no position in any recognized branch of state government. Our attention has been directed to several cases from other states upholding such distinction under their particular state history and the particular provisions of their state constitutions. These cases are not persuasive as applied to the present controversy in Montana, being distinguishable on the basis of such factors as historical considerations peculiar to such state, legislative precedent, existing rather than proposed legislation, inherent legislative powers to call a constitutional convention, different constitutional provisions, and dissimilar issues presented for decision: State v. Doyle, 138 La. 350, 70 So. 322; Frantz v. Autry, 18 Okl. 561, 91 P. 193; Board of Supervisors of Elections v. Attorney Gen., 246 Md. 417, 229 A.2d 388; Harvey v. Ridgeway (Ark. 1970), 450 S.W.2d 281; Wells v. Bain, 75 Pa. 39, 15 Am.Rep. 563; Baker v. Moorhead, 103 Neb. 811, 174 N.W. 430; and Chenault v. Carter, Ky., 332 S.W.2d 623.
In our view any distinction sought to be drawn in Montana between offices or positions in which the incumbent acts for and exercises powers in behalf of the state government as distinguished from the people is more artificial than real: an illusory distinction without an actual difference. Under the Montana Constitution, there is no distinction between the "sovereign power of government" referred to in Barney and the "sovereign power of the people". All sovereign power emanates from the people. Article III, section 1, of the Montana Constitution provides:
"All political power is vested in and derived from the people; all government of right originates with the people; is founded upon their will only, and is instituted solely for the good of the whole."
A delegate to the constitutional convention exercises sovereign powers of a legislative character of the highest order. That the final product of such legislative authority is subject to referendum renders it no less an exercise of sovereign power. The delegation of unlimited power is not essential to the exercise of sovereign power. To draw a distinction between other state officers and delegates to a constitutional convention, both of whom act as agents of the people exercising sovereign powers in their behalf, is to deny our basic concept of government.
The purpose of the Montana constitutional restrictions against certain officers serving as delegates to a constitutional convention is readily apparent. It is to insure independent consideration by the delegates of the provisions of the new constitution, to reduce concentration of political power at the constitutional convention by eliminating as delegates incumbent office holders, and to foreclose the possibility of such officers creating new offices for themselves or increasing the salaries or compensation of their own offices. See Kederick v. Heintzleman, D.C., 132 F. Supp. 582, for the expression of similar principles in prohibiting a state senator from filing for the position of delegate to the Alaskan constitutional convention. These considerations cannot be given effect unless a delegate to the constitutional convention holds a "public office" thereby placing him within the ambit of constitutional prohibitions.
Requirement (5) of Barney that an office must have some permanency and continuity *335 and not be only temporary or occasional in order to constitute a "public office" is satisfied in the case of a delegate to the constitutional convention. This requirement is a relative matter and must be interpreted in the light of the purposes for which the position was created. A delegate to the constitutional convention holds his position for the entire period of time the constitutional convention is in session. His position is permanent and continuous in the sense that it continuously exists until the duties for which it was created have been completed. It is not temporary or occasional in that it is a full time position for the length of time required for completion of the convention's work. While it is true that constitutional conventions are called but seldom, when a particular constitutional convention is called the delegates are elected for that particular constitutional convention alone and the convention possesses permanency and continuity until its purpose is completed; there is nothing temporary or occasional in the work of its delegates while the convention is in session and carrying out its duties. Contemporary experience notwithstanding, a public position need not be conceived and created in perpetuity in order to qualify as a public office.
Proceeding to the second issue for determination herein, we find it likewise encompasses two related questions which we answer in this manner:
The phrase "elected in the same manner" used in Article XIX, section 8, of the Montana Constitution refers both to constitutional and statutory provisions for "nomination" and "election" of members of the house of representatives. The legislative assembly may not now substantially change the election laws for delegates to the constitutional convention and accordingly may not now provide solely for nonpartisan nomination and election of such delegates.
Article XIX, section 8 of the Montana Constitution provides that the number of delegates to the constitutional convention shall be the same as the house of representatives and that the delegates "shall be elected in the same manner, at the same places, and in the same districts" as state representatives. The Constitution contains further general election requirements applicable to all elections. All elections must be "free and open", Article III, section 5; elections "shall be by ballot", Article IX, section 1; voters must meet certain age, citizenship and residence requirements, Article IX, section 3; and the candidate receiving the highest number of legal votes shall be declared elected, Article IX, section 13.
Statutory election procedures implementing these constitutional election requirements and providing a specific procedure for the election of delegates to the constitutional convention have been enacted and have been in effect at all times pertinent to this controversy. Section 23-3301, R.C.M. 1947, expressly provides that delegates to a constitutional convention are chosen by the same nominating and primary election procedure as are members of the house of representatives. The same section expressly provides for a primary election for delegates to the constitutional convention who will be chosen at the ensuing general election. Section 23-3304, R.C.M. 1947, provides for primary election filing by declaration by any person running for nomination on the ticket of a major political party; and section 23-3318, R.C.M. 1947, provides for filing by nominating petitions by independent candidates and candidates of new or minor political parties. Numerous other statutes exist relating to representative districts and apportionment implementing constitutional requirements.
At issue is whether the phrase requiring that constitutional delegates be "elected in the same manner" as members of the house of representatives appearing in Article XIX, section 8 of the Constitution refers only to constitutional requirements for the election of state representatives, or whether it encompasses both constitutional and statutory requirements for election of state representatives. We hold that the phrase *336 "elected in the same manner" means exactly what it plainly says: that constitutional delegates are required to be elected by the same election procedures applicable to election of members of the house of representatives, without limitation as to the source of such election procedures, be they constitutional or statutory. Had the framers of the Constitution intended to limit this phrase to constitutional requirements only, they would hardly have used this particular language, knowing that the Constitution contained only broad requirements for elections in general without specific constitutional procedures applicable to election of representatives. By their language, coupled with the absence of specific constitutional procedures applicable to the election of representatives, the framers of our Constitution must have intended the requirement to apply to statutory election procedures for representatives to be subsequently enacted by the legislature and amended from time to time. We remain unimpressed with the applicability to Montana of three cited cases from other states to the contrary: Livingston v. Ogilvie, 43 Ill.2d 9, 250 N.E.2d 138; Baker v. Moorhead, 103 Neb. 811, 174 N.W. 430; and In re Opinion of the Justices, 76 N.H. 586, 79 A. 29. These holdings are understandable under their particular state history and their particular constitutional provisions, but their application to Montana in the light of its history and constitutional provisions is entirely unwarranted.
Continuing to the second question propounded on this issue, the point of our holding is simply that the present legislative assembly cannot substantially change the manner of election of delegates to the constitutional convention from those existing at the time of the constitutional convention referendum election, nor provide for a substantially different manner of electing such delegates from that applicable to election of representatives. The question authorized to be submitted to the voters at the constitutional convention referendum was contained in Chapter 65, Montana Session Laws of 1969 "* * * whether the legislative assembly at the 1971 session, and in accordance with Article XIX, section 8 of the Montana constitution, shall call a convention to revise, alter, or amend the constitution of Montana." (Emphasis provided). This question was submitted to the electors.
We have heretofore held that the requirement of Article XIX, section 8 of the Montana Constitution requiring that delegates to the constitutional convention be elected "in the same manner" as members of the house of representatives comprehends statutory as well as constitutional election laws. The voters at the constitutional referendum election cast their votes on the basis of the then existing election laws for representatives and, accordingly, constitutional convention delegates. To now permit these laws to be substantially changed in midstream by this session of the legislative assembly is to permit a retroactive dilution of voting rights and a fundamental abuse of the elective franchise of voters at the constitutional convention referendum election. Article IX, section 9 of the Montana Constitution grants the legislature the power to pass laws "necessary to secure the purity of elections and guard against abuses of the elective franchise." Conversely, by implication, such constitutional provision prohibits the legislature from enacting laws contravening such goals.
At the time of the constitutional convention referendum election, the election laws applicable to nomination and election of members of the house of representatives and constitutional convention delegates provided for partisan filing by candidates of major political parties by declaration, independent filing without party designation by nominating petition, a primary nominating election, and an ensuing general election. The then existing election laws provided for nonpartisan filing, nomination, and election in the case of judicial candidates only. Pending House Bill No. 168 provides for nonpartisan filing by nominating petition only and eliminates filing as a candidate of a political party, eliminates *337 any primary election, and sets up a different manner of nomination and election of delegates to the constitutional convention than those applicable to nomination and election of members of the house of representatives. The legislative assembly can not thus substantially change the then existing election laws applicable to nomination and election of delegates to the constitutional convention.
As heretofore noted, the then existing election laws permitted the filing, nomination and election of "independent" candidates without party designation and these provisions, of course, are applicable to the nomination and election of delegates to the constitutional convention.
The final issue for determination we answer in this manner:
The constitutional convention must be apportioned on the basis of the 1970 census applicable to the apportionment of the house of representatives to be elected November 7, 1972.
Article VI, section 2 of the Montana Constitution provides in pertinent part:
"(1) The senate and house of representatives of the legislative assembly each shall be apportioned on the basis of population.
"(2) The legislative assembly following each census made by the authority of the United States, shall revise and adjust the apportionment for representatives and senators on the basis of such census."
Article III, section 29 of the Montana Constitution states that the provisions of the Constitution are mandatory unless by express words they are declared to be otherwise. The 1970 United States census is now completed. This session of the legislative assembly must reapportion both houses on the basis of the 1970 United States census in accordance with the foregoing Montana constitutional requirements. Such reapportionment necessarily affects the makeup of districts for the election of state senators and representatives and the number to be elected from each district. In short, it affects the manner of election of representatives. And as Article XIX, section 8 of the Montana Constitution requires that delegates to the constitutional convention be elected in the same number, from the same districts, and "in the same manner" as members of the house of representatives, it necessarily requires that delegates to the constitutional convention be apportioned in like manner. Such reapportionment is required of this session of the legislative assembly which must by law adjourn prior to the contemplated election of delegates to the constitutional convention. Accordingly, such delegates must be apportioned on the basis of the 1970 census applicable to the reapportioned house of representatives to be elected November 7, 1972.
At first glance our holding on this issue may appear to conflict with our holding with reference to nonpartisan nomination and election of delegates to the constitutional convention. On more penetrating analysis however, it is clear that there is no conflict. At the election of November 3, 1970, the electors voted on the basis of existing election laws. At that time the existing election laws then on the statute books provided for reapportionment by the current session of the legislative assembly at this time, which necessarily would be prior to the election of delegates to the constitutional convention. But in the case of nonpartisan nomination and election of delegates to the constitutional convention, there were no existing election laws so authorizing or permitting. The distinction appears clear and the holdings harmonious.
An apparent further question that appears, in view of our holding here and as to Issue No. 2, is the time sequence schedule as set up in House Bill No. 168. The legislature proposes to accomplish the election of delegates, the convening of the convention, the completion of the convention's work, and other matters in time for submission to the people at the general election in 1972 of the revised, altered, or amended constitutional proposals. This, in *338 our view, is permissible and does not constitute a "substantial" change from the "same manner" referred to in Article XIX, section 8. It is noted that Article XIX, section 8, refers to "manner", "place" and "district", but not specifically to time. Additionally, the Constitution contemplates special elections. The immediately foregoing discussion is meant in an advisory way only.
A further observation, albeit unsolicited, is that since the referendum uses the language "revise, alter, or amend the constitution" it must have been contemplated that the work of the convention might be partial or total and that the individual parts might be submitted to the people. Therefore each Article might be separately submitted.
A declaratory judgment is hereby entered in accordance with the foregoing opinion with court costs to be paid by the Forty-second Legislative Assembly, pursuant to Section 5, Senate Bill No. 6, Chapter 3, Montana Session Laws of 1971.
JAMES T. HARRISON, C.J., and JOHN C. HARRISON, DALY and CASTLES, JJ., concur.
|
Tuesday, July 15, 2008
First Steps Into Socialism...
And we thought it would be health care...but it's mortgages:
"In a country that holds itself up as a citadel of free enterprise, Washington has morphed from being the lender of last resort into effectively the only resort for home loans for millions of Americans engaged in the largest transactions of their lives.
Before, the government's more modest mission was to make more loans available at lower rates. Now it is to make sure the loans that matter most to middle class Americans are made at all.
The new reality is scorned by libertarians and conservatives, who fear intrusions by the state in the market, and by populists and progressives, who rue a society in which education and housing increasingly rest upon the government's willingness to finance it.
"If you're a socialist, you should be happy," said Michael Lind, a fellow at the New America Foundation, a research institute in Washington. "But you should really wonder whether you want people's ability to pay for housing and college dependent on the motives of people in Washington." (Read the entire article)
One thing I've never understood (besides people looking to strangers to bail them out from under their own poor decisions) is how there can be such a thing as a bad housing market. If the houses are selling for more than they're worth, that's good for the sellers. And if they're selling for their real value, that's good for the buyers. Unlike many markets, with housing either way someone wins... |
Despite what anyone says, TLC is still very much a channel about learning. We used to watch it as kids and learn about Egyptian pyramids or dinosaurs. Now we watch it and learn that America as a country is screwed.
The network that educates us about midgets, rednecks and birthing extraordinaires continues to keep it super classy with a new special later this month called Extreme Cougar Wives. And by extreme, they basically mean grandmas. The show is set to follow three women ranging in age from 53 to 76 and the super messed up younger men that follow them around as part of an examination of cougar relationships. |
Q:
ConTeXt : underbar behavior with subscripts
With ConTeXt, one generally uses \underbar to underline. \underline also exists, but only for mathematics. I also played a bit with the ideas proposed on this wiki page. My tests can be seen in the following MWE.
\definetextbackground[myunderlinebackground]
[location=text,alternative=1,background=,frame=off]
\def\myunderline#1{%
  % trailing % signs prevent the line ends from injecting spurious spaces
  \starttextbackground[myunderlinebackground]%
  #1%
  \stoptextbackground
}
\starttext
Hello, this is \underbar{a test}. And now, this is \underbar{a $a_{e}$ test}.
And this is $\underline{\text{a } a_{e} \text{ test}}$. \\
And this is \myunderline{a $a_{e}$ test}.
\stoptext
However, none of these behaviors suits me. When there is a subscript, I would expect the bar to be broken and to restart immediately after (like the \underbar example, but restarting earlier and not underlining the subscript).
I also notice that \underbar used inside a math environment behaves exactly like \underline.
Is it possible to tune this behavior?
A:
Here is the ConTeXt-adapted OPMac solution. Keep in mind that because this is a box it doesn't break across lines.
\define[1]\underlinee{%
  % Draw a rule under the text, then typeset the text twice: first stroked
  % in white (PDF text render mode 2, line width 1.5) so the widened white
  % copy knocks gaps in the rule around descenders, then normally on top.
  \dontleavehmode\vbox to0pt{\vss
  \hrule height.4pt
  \vskip-\baselineskip \kern2.5pt
  \hbox{\strut\rlap{\color[white]{\pdfliteral{2 Tr 1.5 w}#1\pdfliteral{0 Tr 0 w}}}#1}
}}
\starttext
\underlinee{a $a_{e}$ test}
\stoptext
|
Petition Text
At least four labour activists remain in criminal detention following a recent crackdown on labour organisations. From 3 - 5 December, labour NGOs based in Guangdong province were targeted in a harsh and unexpected wave of detentions. At least four labour NGOs have been targeted and 25 NGO staff and volunteers have been detained and questioned by police, seven of whom either remain in detention or cannot be contacted. These include Panyu Workers’ Centre director Zeng Feiyang and staff member Zhu Xiaomei; Foshan Nanfeiyan Social Work Services Organization director He Xiaobo; labour activists Peng Jiayong, Deng Xiaoming, Meng Han, and Tang Jian. Four individuals - Zeng Feiyang, He Xiaobo, Zhu Xiaomei and Deng Xiaoming - are confirmed as being in detention.
The Chinese government purports to advance the “rule of law” within its borders and promotes the idea of a civilized and peaceful rise internationally. However, local governments abuse their power, using violence and arrests to repress and intimidate labour organizations, preventing Chinese workers from pursuing fundamental labour rights, including freedom of association and the right to strike and collective bargaining.
As organisations and individuals working on labour rights, we call on the Chinese government to: |
# Contributing guidelines
## Preparing pull requests
1. Follow the [formatting/style settings](.vscode/settings.json), run "Format Document" in Visual Studio Code (default SHIFT+ALT+F) and also look at how everything else is formatted.
1. Remember to update README.md and/or CHANGELOG.md with relevant info.
1. When adding or changing images, run `.\build\Update-Documentation.ps1` so IMAGES.md is updated as well as the stats in README.md.
1. When you have changes to `.\build\sitecore-packages.json` you can:
1. Run `.\build\contributing\Test-SitecorePackagesJson.ps1` to verify the urls are working.
1. Run `.\build\contributing\Sort-SitecorePackagesJson.ps1` to sort the packages by name.
## Submitting pull requests
When submitting a pull request to the docker-images repo, we ask that you squash your commits before we merge. Some applications that interact with git repositories will provide a user interface for squashing. Refer to your application's documentation for more information. If you're familiar with using the command line, you can do the following:
1. Make sure your branch is up to date with the master branch.
1. Run `git rebase -i master`.
1. You should see a list of commits, each commit starting with the word "pick".
1. Make sure the first commit says **"pick"** and change the rest from "pick" to **"squash"**. This will squash each commit into the previous commit, which will continue until every commit is squashed into the first commit.
1. Save and close the editor. It will then give you the opportunity to change the commit message; edit it if needed, then save and close again.
1. Finally force push the squashed commit: `git push --force-with-lease origin`.
Squashing commits can be a tricky process but once you figure it out, it's really helpful and keeps our repo concise and clean. |
Re: wip/libreoffice or misc/openoffice?
On Wed, Nov 30, 2011 at 03:51:55PM -0000, David Lord wrote:
> Some sites require confirmation of accepting licence before
> download starts, some others have been parked and there is
Some packages (I think openjdk) handle it beautifully with a message
asking you to download from a given URL.
If the download MUST be interactive, the above approach should be adopted by pkgsrc.
As for non-interactive downloads, why keep sites that don't allow non-interactive downloads in the mirror list anyway? It achieves nothing.
Mayuresh. |
Muhamad Fikri Nasution
Summary
On this great opportunity, I would like to apply for a position at your company. I am 20 years old, a fresh graduate of the Diploma 3 Economics programme at the Islamic University of Indonesia, with a GPA of 3.56. I have skills in English and in operating computers. Responsibility, honesty, and discipline define my personality.
I would greatly appreciate an opportunity to convince you that my services would be an asset to your company. I assure you that a high level of efficiency would be applied to any assignment given to me. I am looking forward to hearing from you in the near future. |
The extraordinary student mobilisation in Quebec has already sustained the longest and largest student strike in the history of North America, and it has already organised the single biggest act of civil disobedience in Canadian history. It is now rapidly growing into one of the most powerful and inventive anti-austerity campaigns anywhere in the world.
Every situation is different, of course, and Quebec's students draw on a distinctive history of social and political struggle, one rooted in the 1960s quiet revolution. Support for the provincial government that opposes them, moreover, has been undermined in recent years by allegations of corruption and bribery. Nevertheless, those of us fighting against cuts and fees in other parts of the world have much to learn from the way the campaign has been organised. It's time that education activists in the UK, in particular, started to pay the Quebecois the highest compliment: when in doubt, imitate.
The first reason for the students' success lies in the clarity of both their immediate aim and its links to a broad range of closely associated aims. Students of all political persuasions support the current "minimal programme", to block the Liberal government's plan to increase tuition fees by 82% over several years. Most students and their families also oppose the many similar measures introduced by federal and provincial governments in Canada in recent years, which collectively represent an unprecedented neoliberal attack on social welfare (new user fees for healthcare, elimination of public sector services and jobs, factory closures, wanton exploitation of natural resources, an increase in the retirement age, restrictions on trade unions and so on).
A growing number of students now also support the fundamental principle of free universal education, long defended by the more militant student groups (loosely co-ordinated in the remarkable new coalition Classe), and back their calls for the unconditional abolition of tuition fees, to be phased out over several years and compensated by a modest and perfectly feasible bank tax, at a time of record bank profits. "This hardline stance," the Guardian's reporter observed, "has catapulted Classe from being a relatively unknown organisation with 40,000 members to a sprawling phenomenon that now numbers 100,000 and claims to represent 70% of striking students." Growing numbers, too, can see how such a demand might help to compensate for the most obvious socioeconomic development in Canada over the last 30 years: the dramatic growth in income inequality, reinforced by a whole series of measures that have profited the rich and very rich at the expense of everyone else.
In Quebec, student resistance to these measures hasn't simply generated a contingent "chain of equivalences" across otherwise disparate demands: it has helped to create a practical, militant community of interest in the face of systematic neoliberal assault. "It's more than a student strike," a Classe spokesman said in April. "We want it to become a struggle of the people." At first scornfully dismissed in the corporate media, this general effort to make the student movement into a social movement has borne fruit in recent weeks, and it would be hard to describe the general tone of reports from the nightly protest marches that are now taking over much of Montreal in terms other than collective euphoria.
Nothing similar has yet happened in the UK, of course, even though the British variant of the same neoliberal assault – elimination of the EMA, immediate trebling of fees, systematic marketisation of provision – has been far more brutal. But the main reasons for this lie less in some uniquely francophone propensity to defend a particular social heritage than in the three basic elements of any successful popular campaign: strategy, organisation and empowerment.
As many students knew well before they launched their anti-fees campaign last summer, the best way to win this kind of fight is to implement a strategy that no amount of state coercion can overcome – a general, inclusive and "unlimited" boycott of classes. One-day actions and symbolic protest marches may help build momentum, but only "an open-ended general strike gives students maximum leverage to make their demands heard", the Classe's newspaper Ultimatum explains. So far, it has been 108 days and counting, and "on ne lâche pas" (we're not backing down) has become a familiar slogan across the province. So long as enough students are prepared to sustain it, their strike puts them in an almost invincible bargaining position.
Ensuring such preparation is the key to Classe as an organisation. It has provided new ways for students previously represented by more cautious and conventional student associations to align themselves with the more militant Assé, with its tradition of direct action and participatory democracy. Activists spent months preparing the ground for the strike, talking to students one at a time, organising department by department and then faculty by faculty, starting with the more receptive programmes and radiating slowly out to the more sceptical.
At every pertinent level they have created general assemblies, which have invested themselves with the power to deliberate and then make, quickly and collectively, important decisions. Actions are decided by a public show of hands, rather than by an atomising expression of private opinion. The more powerful and effective these assemblies have become, the more active and enthusiastic the level of participation. Delegates from the assemblies then participate in wider congresses and, in the absence of any formal leadership or bureaucracy, the "general will" that has emerged from these congresses is so clear that Classe is now the main organising force in the campaign and able to put firm pressure on the other more compromise-prone student unions.
Week after week, assemblies have decided to continue the strike. In most places, this has also meant a decision to keep taking the steps necessary to ensure its successful continuation, by preventing the minority of dissenting students from breaking it. Drawing on his experience at McGill University, strike veteran Jamie Burnett has some useful advice for the many student activists now considering how best to extend the campaign to other parts of Canada: don't indulge in "soft pickets" that allow classes to take place in spite of a strike mandate, and that thus allow staff to isolate and fail striking students. "Enforcing strikes is difficult to do, at least at first," he says, "but it's a lot less difficult than failing a semester. And people eventually come around, building a culture of solidarity and confrontational politics in the process."
The main result of this process so far has been one of far-reaching collective empowerment. Resolved from the beginning to win over rather than follow the more sceptical sectors of the media and "public opinion", the students have made themselves more powerful than their opponents. "[We] have learned collectively," Classe spokesperson Gabriel Nadeau-Dubois said last week, "that if we mobilise and try to block something, it's possible to do it." From rallies and class boycotts, in April the strike expanded to include more confrontational demonstrations and disruptive nightly marches through the centre of town. Soon afterwards, solidarity protests by groups like Mères en colère et solidaires started up in working-class districts of Montreal.
In a desperate effort to regain the initiative by representing the conflict as a criminal rather than political issue, the panicked provincial government rushed through its draconian Bill 78 to restrict the marches, discourage strike enforcement and consolidate its credentials (in advance of imminent elections) as a law-and-order administration. In the resulting escalation, however, it's the government that has been forced to blink. On 23 May, the day after an historic 300,000 people marched through Montreal in support of the students, police kettled and then arrested more than 700 people. But the mobilisation has become too strong to contain, and after near-universal condemnation of the new law it is already unenforceable. Since 22 May, pro-student demonstrations have multiplied in ways and numbers the police can't control, and drawing on Latin-American (and older charivari) traditions, pot-clanging marches have mushroomed throughout the province of Quebec. On Thursday night tense negotiations with the government again broke off without resolution, and business and tourist sectors are already alarmed by the prospect of a new wave of street protests continuing into Montreal's popular summer festival season.
There is now a very real chance that similar mobilisations may spread. Recent polls suggest that most students across Canada would support a strike against tuition increases, and momentum for more forceful action may be building in Ottawa and across Ontario; in Quebec itself they also show that an initially hesitant public is beginning to swing behind the student demands and against government repression. On 30 May, there were scores of solidarity rallies all over Canada and the world. In London around 150 casserolistas clanged their way from Canada House to the Canadian embassy at Grosvenor Square.
If enough of us are willing to learn a few things from our friends in places like Quebec and Chile, then in the coming years such numbers may change beyond all recognition. After much hesitation the NUS recently resolved that education should be "free at all and any level", and activists are gearing up for a massive TUC demonstration on 20 October. After a couple of memorable springs, it's time to prepare for a momentous autumn.
• Follow Comment is free on Twitter @commentisfree |
---
abstract: 'We present a new method of modelling numerical systems where there are two distinct output solution classes, for example tipping points or bifurcations. Gaussian process emulation is a useful tool in understanding these complex systems and provides estimates of uncertainty, but we aim to include systems where there are discontinuities between the two output solutions. Due to continuity assumptions, we consider current methods of classification to split our input space into two output regions. Classification and logistic regression methods currently rely on drawing from an independent Bernoulli distribution, which neglects any information known in the neighbouring area. We build on this by including correlation between our input points. Gaussian processes are still a vital element, but used in latent space to model the two regions. Using the input values and an associated output class label, the latent variable is estimated using MCMC sampling and a unique likelihood. A threshold (usually at zero) defines the boundary. We apply our method to a motivating example provided by the hormones associated with the reproductive system in mammals, where the two solutions are associated with high and low rates of reproduction.'
address: 'Department of Mathematics, College of Engineering, Mathematics and Physical Sciences, University of Exeter'
author:
- Louise Kimpton
- Peter Challenor
- Daniel Williamson
bibliography:
- 'REF1.bib'
title: Modelling Numerical Systems with Two Distinct Labelled Output Classes
---
Classification, Uncertainty Quantification, labelled outputs, computer model, two solutions
Introduction
============
In many areas of science, complex numerical models are used to represent real life physical systems [@Sacks1989]. In general, a mathematical simulator is used to approximate physical reality and the simulator parameters are estimated to specify those models that best represent the real world. We can reproduce data, make predictions and generally get a better understanding of these complex systems by using such models. For practical applications when making predictions, it is also important to include estimates of uncertainty.
In the majority of cases, the inner workings of the system produce similar output results no matter what small changes are made to the model parameters, and we can represent the relationship between model inputs and outputs by a smooth, continuous function. In certain applications of scientific modelling, we find that this is not the case; different areas of input space create significantly different values or properties of the output. There will be regions in the model output space where the overall trend bears no resemblance to other regions, whether in the shape, range or other properties of the output values. These occurrences can produce discontinuities between regions in the output space; examples include tipping points and bifurcations. These discontinuities can create step functions at the transitions between regions, so it is important not to assume any continuity between the separate solutions. For example, in climate science, the Stommel model has a different solution for when the overturning circulation is turned on or off [@Sciences2005]. In other cases, the output may be in a binary or categorical form, such as computer code for a complex model that fails to run for certain input values. This corresponds to separate binary outcomes of ’runs’ and ’fails to run’.
A motivating example has been supplied by [@Voliotis2018], where the subject is the reproductive system in mammals; in particular, how this is controlled by connections between the brain, the pituitary gland, and the gonads. There are particular neurones in the brain that secrete a specific hormone known as the gonadotrophin-releasing hormone (GnRH). These are vital in regulating gametogenesis and ovulation. Signals are made by the pituitary gland which then stimulate the gonads for this cycle to start. One of the regulators of the GnRH neurone is the neuropeptide kisspeptin, two populations of which are located within areas of the hypothalamus (the arcuate nucleus (ARC) and the preoptic area). Other research suggests that one of these areas (ARC) is the location of the GnRH pulse generator, whose core is a population of neurones (ARC kisspeptin, or KNDy) that secrete two neuropeptides: neurokinin B (NKB) and dynorphin (Dyn). The objective of the model presented is to understand the role of NKB and the firing rate of these neurones in the regulation of GnRH, and subsequently in controlling reproduction. To do this, the model identifies the population of KNDy neurones where the GnRH pulse generator is said to be found. The model consists of a set of coupled ordinary differential equations (ODEs) describing the dynamics of $m$ synaptically connected KNDy neurones. There are several fixed parameters, including the concentration of Dyn, the rates at which Dyn and NKB are lost, and those that describe the characteristic timescale for Dyn and NKB. The variables are the concentration of NKB secreted at the synaptic ends and the firing rate, measured in spikes/min. The population of KNDy neurones is shown to be critical for GnRH pulsatile dynamics, and this can stimulate GnRH secretion. Analysing the output of this model shows that the population can behave as a bistable switch, so that the firing rate is either high or low.
Hence, this gives us a system with two distinct solutions, and it is an example of the type of system that we wish to model. This bistable system is coupled with negative feedback, leading to sustained oscillations that drive the secretion of the GnRH hormones involved in reproduction. Being able to model the system and locate the areas of low and high firing rates means that not only can we aid predictions of the reproduction rate, but we can also gain a better understanding of the specific input parameters that are associated with high rates of reproduction.
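The bistable behaviour described above can be sketched with a deliberately generic one-variable toy model (the equation, gain value and step sizes here are illustrative assumptions, not the coupled KNDy ODEs from [@Voliotis2018]): depending on the initial condition, the state relaxes to either a low or a high stable branch.

```python
import numpy as np

def simulate(x0, gain=2.0, dt=0.01, steps=5000):
    """Euler-integrate dx/dt = -x + gain * tanh(gain * x), a toy system
    with two stable fixed points and an unstable one at zero (gain > 1)."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + gain * np.tanh(gain * x))
    return x

# Initial conditions on either side of the unstable fixed point
# settle onto the high and low branches respectively.
x_high = simulate(0.5)
x_low = simulate(-0.5)
```

In the same way, the firing rate of the KNDy population settles at either a high or a low value, and it is the label of the branch reached, rather than the exact trajectory, that our method sets out to model.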
A common solution to the problem of uncertainty quantification with ’black box’ models is the use of Gaussian process emulation [@Kennedya]. A Gaussian process emulator is a statistical approximation to the simulator and is fast to run so that uncertainty estimates can be calculated. They are particularly useful when evaluation of the simulator is computationally expensive due to the complex nature of the underlying physical system. If we were dealing with simple and cheap to run models, then emulation would be redundant since the simulator could be run many times resulting in easy analysis of the separate output solutions and the system as a whole.
The main aim of this report is to emulate complex systems with multiple solutions, as in the motivating example above. Initially, to simplify the problem, systems with exactly two output solutions are considered, generalising later to $n$ dimensions. We define these to be the output regions for the remainder of the paper. The model output can be either discontinuous or in a binary form, so it seems sensible to avoid current stationary methods of emulating the data as a whole. Gaussian process regression models do not typically cope well with modelling discontinuities or step functions. This is stated by [@Neal1998], where it is said that Gaussian processes are not appropriate priors for models with discontinuities, or where smoothness varies as a function of the inputs. When applying a Gaussian process to a step function, we find that as the height of the step increases, the corresponding emulator becomes increasingly inaccurate over the whole function. It not only overshoots near the discontinuity, but also tends to induce fluctuations in the rest of the input space as it tries to model this abrupt jump while still preserving the smoothness assumptions.
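This failure mode is easy to reproduce. The sketch below (a minimal zero-mean GP posterior mean with a squared-exponential kernel; the lengthscale and noise values are illustrative assumptions) fits a step function and is forced to smooth through the jump, leaving a large error near the discontinuity.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.1):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean of a zero-mean GP conditioned on noisy observations."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf_kernel(x_test, x_train) @ np.linalg.solve(K, y_train)

# A unit step: 0 for x < 0.5 and 1 for x >= 0.5.
x_train = np.linspace(0.0, 1.0, 20)
y_train = (x_train >= 0.5).astype(float)
x_test = np.linspace(0.0, 1.0, 200)
mean = gp_posterior_mean(x_train, y_train, x_test)

# The smooth posterior mean cannot reproduce the jump: halfway between
# the two training points straddling 0.5 it sits near 0.5, while the
# true step has already reached 1.
worst_error = np.max(np.abs(mean - (x_test >= 0.5)))
```

Raising the step height simply scales the posterior mean linearly, so the absolute error near the discontinuity grows with it, which is the inaccuracy described above.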
Some of the literature focuses on non-stationary Gaussian processes that could be applied to this problem. A non-stationary Gaussian process has a covariance structure that varies throughout the input space, where there may be areas of higher variability. This is applicable to models with two output solutions, since the two solutions are assumed to have different output trends and hence distinct underlying covariance structures. Examples include changes to the covariance function [@Schmidt2003], Composite Gaussian Processes [@Ba2012] and Treed Gaussian Processes [@Gramacy2008]. Treed Gaussian processes work by partitioning the input space in order to fit different models to the data independently in each separate region. Specifically, they divide the input space by making binary splits based on the value of a single variable, so that the boundaries between regions are parallel to the coordinate axes. This is an iterative process, such that new partitions are subpartitions of existing partitions. The main problem with treed Gaussian processes is that the partitions are straight lines parallel to the coordinate axes. This is similar to region partitioning using Voronoi tessellation [@Gallier2008], introduced by [@Kim2005], where the input space is partitioned in a similar way, with the same disadvantages due to straight-line partitions. Both of these methods result in a loss of model flexibility and potential errors when the boundaries between output regions are not linear.
We therefore conclude that it would be unrealistic to use information from one region to model the other and that current non-stationary Gaussian process models are not suitable for our exact specification. [@Neal1998] suggests using non-Gaussian models or only including a Gaussian process at the lowest level of the model. Taking this into consideration, we will consider estimation of the boundary between regions and modelling the region outputs separately.
First steps in this direction were made by [@Diggle1998], using a logit transformation to map the domain of a Gaussian process onto the unit interval. With the two regions identified by binary labels (0 for region 1 and 1 for region 2), they consider the probability of the region in which an outcome may lie, so it is appropriate to model probabilities on the unit interval and to treat the data as Bernoulli. The main aim of that paper is to address the assumption of the data being Gaussian, and instead to concentrate on situations where the stochastic variation in the data is known to be non-Gaussian. Hence, it seems appropriate to model the data as Bernoulli trials, where a success is treated as being in the specified output region, with the probability of success following a Gaussian process.
A similar method is mentioned by [@Chang2015] involving ice sheet models and binary data. They propose a novel calibration method for computer models whose output is in the form of binary spatial data. The approach is based on a generalised linear model framework with a latent Gaussian process. It follows the standard logistic regression framework corresponding to the probability for each observation. By assuming the elements in the model output are conditionally independent given the natural parameters, the likelihood function can be found. Construction of the Gaussian process element differs in that the likelihood maximised is now binomial.
The layout of the problem is also very similar to classification in machine learning, as described by [@Seeger2004] and [@Nickisch2008]. In their formulation, the input data points, $\textbf{x}_{i}$, are associated with separate class regions with corresponding class labels, $y_{i} \in \{-1,1\}$. The process, $f(\textbf{x})$, becomes latent in the model, and is transformed using a sigmoid function, $\sigma$, so that the probability of being in one of the classes, $P(y = +1|\textbf{x})$, can be modelled. The class labels are assumed to be independently distributed Bernoulli random variables. A posterior distribution over the latent values is found in terms of both the training and test latent values, $f(\textbf{x})$ and $f(\textbf{x}_{*})$. Note here that $\textbf{x}_{*}$ is a test point where the class membership probability is to be predicted. The predictive class membership probability, $P(y_{*} = 1|\textbf{x}_{*}, \textbf{y}, \textbf{X}, \theta)$, is obtained by averaging out the test set latent variables, $f(\textbf{x}_{*})$. The main disadvantage of the method outlined by [@Nickisch2008] is that part of the posterior distribution is not analytically tractable, because the observation likelihood is no longer Gaussian. The rest of their paper outlines ways to tackle this problem by describing different techniques to numerically approximate the posterior distribution for the predictive class membership. These include the Laplace approximation, expectation propagation and Kullback-Leibler divergence.
A similar approach to numerically approximate the posterior distribution is shown by [@Chan2013] that follows a Bayesian approach to generalised linear modelling (GLM). Different Gaussian process models are obtained by changing the form of the likelihood, which [@Chan2013] limit to the exponential family. The model is composed of a latent Gaussian process, a random component, $P(y|\theta)$, that models the output as an exponential family distribution and a link function that relates the mean of the output distribution with the latent function. A prior is placed on the latent function, adding in a Bayesian element to the model. A binomial distribution (with $n=1$ for GP classification) is used in the exponential family form and so the mean is related to the latent space through the logistic function.
All of these methods produce a posterior distribution for the predictive class membership of being in one of the two regions. When we sample from this, or use it to make predictions, we draw from an independent Bernoulli distribution where the 0/1 outputs correspond to either of the two regions. When classifying data into two specific regions, apart from directly at the boundary, we state that all points in the neighbourhood belong in the same region. In our previous example of a computer model that does not run to completion at certain input values, if we knew one point where the model is certain to crash, it is sensible to assume that other similar input values are also likely to cause a crash. Hence, correlation between neighbouring points is valuable and should be incorporated into our model. When we take draws from a Bernoulli distribution, each draw is independent of every other draw, so the classification for input values with the same probability of success is equivalent, regardless of any information in the neighbouring area.
Take a simple example with one input variable and four known classified data points, where it is known that there is exactly one change in region somewhere between the two centre points. Thus, we have two points known to be in region one, followed by two points known to be in region two. The change in region can happen anywhere between the two central points. The input space between each pair of points in the same region, however, must be classified into the same region as the surrounding points. If we drew randomly between the two points in region one, we would forfeit this known information, and it is thus important to include some correlation over distance in our model. This loss of information would result in random occurrences of points being classified into the wrong region; something that is not intended in the set up of the example.
Given that there are only two output regions, we assume a hard boundary. As we get closer to this boundary, the probability of being classified into the first region approaches 50%, since we are uncertain of where the exact boundary lies. Hence, the draws from the Bernoulli distribution become equally likely to fall on a 0 or a 1, so there will be a section (close to the boundary) where the classification may appear fairly random. Therefore, if we wish to pursue a classification approach, we would require a classifier that includes some correlation to help us obtain a clean-cut boundary between regions and a 'smooth' classification.
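To make this concrete, the following Python sketch (with an entirely illustrative sigmoid probability and an assumed boundary at $x=7$; neither is taken from our examples) shows what independent Bernoulli draws look like near a boundary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictive class probabilities along one input axis:
# a sigmoid centred on an assumed boundary at x = 7.
x = np.linspace(0.0, 20.0, 21)
p_region2 = 1.0 / (1.0 + np.exp(-(x - 7.0)))

# Independent Bernoulli draws: each point is classified on its own,
# so near the boundary (p close to 0.5) the labels can flicker at
# random, even though neighbouring points should share a region.
labels = rng.binomial(1, p_region2)
print(labels)
```

Near $x = 7$, where the probability is close to 0.5, neighbouring labels can disagree at random; this is precisely the behaviour that a model with spatial correlation should prevent.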
An interesting aspect of this method however, is the use of latent modelling with corresponding region class labelling. Each data point has two quantities attached to it; the function output and a class label that corresponds to which output region the point lies in. This may be something to consider when we work with models that have no associated output function or where the output is binary. If we have knowledge of which output region data points are in, we may be able to model only the class labels and ignore any corresponding system outputs.
[@Ranjan2008] propose an alternative to classification and logistic regression by attempting to model the boundary between the two separate output regions, specifically as a contour. They try to estimate the contour of a complex computer code based on an improvement function. A relatively small experimental design is performed and points are chosen sequentially based on the improvement function, weighted towards choosing points on or near the estimate of the contour, or where the predicted variance is high. This process is aided by the use of Gaussian process emulation. Although this method appears to be an improvement in quantifying the uncertainty, it requires an underlying smoothness assumption. The whole output space is modelled by one single Gaussian process, where there is a simplifying assumption that the response surface is smooth in the form of the covariance function. Therefore, this method would be unsuitable for models with discontinuities. It is also likely to become difficult in higher dimensions.
A process known as history matching is used in a method developed by [@Caiado2015]. History matching is an iterative process designed to reduce the input space of the simulator such that input values that are not likely to result in the observed data are discarded [@Andrianakis2015; @Vernon2010]. Here, it is used to sort data into the separate output regions by discarding regions which are unlikely based on an implausibility criterion. Although this has no smoothness assumption, it may still be difficult in higher dimensions.
Overall, it is clear to see that there is a need for a model that can be used for systems with more than one class of output solution.
Gaussian Process Emulation
==========================
A mathematical simulator is often based on the solution of a set of physically justified PDEs and aims to mimic the behaviour of a complex system so that insight can be gained into its functioning. The main disadvantage of many complex simulators is that run times tend to be lengthy and computationally expensive, so it is usually not feasible to run large sets of inputs [@Craig2001]. Emulators are statistical approximations to simulators, and so encapsulate features of the simulator through a complete probability distribution. Consider observations of a simulator, $y$, assumed to be continuous. For a vector of inputs, $\textbf{x} = [x_{1},x_{2}, ..., x_{n}]$, this can be displayed as follows:
$$y(\textbf{x}) = f(\textbf{x}) + \epsilon \hspace{0.1cm},$$
where $f(\textbf{x})$ is the mean value of the output and $\epsilon$ is an error term. Emulators are a non-parametric approach to regression in that they find a distribution over the possible functions, $f(\textbf{x})$, that are consistent with the observed data.
A Gaussian process is a generalisation of a Gaussian distribution over an infinite dimensional space and is fully defined by a mean function, $m(\textbf{x})$, and a covariance function, $v(\textbf{x}_{1},\textbf{x}_{2})$ [@Kennedya]. If a function, $f(\textbf{x})$, is distributed as a Gaussian process with mean function $m(\cdot)$ and covariance function $v(\cdot,\cdot)$, then for any set of inputs $x_{1},...,x_{n}$, the associated finite set of random variables, $f(x_{1}), f(x_{2}), ..., f(x_{n})$, has distribution,
$$\begin{bmatrix}
f(x_{1}) \\
\vdots \\
f(x_{n})
\end{bmatrix}
\sim
\mathcal{N} \left(
\begin{bmatrix}
m(x_{1}) \\
\vdots \\
m(x_{n})
\end{bmatrix},
\begin{bmatrix}
v(x_{1},x_{1}) & \dots & v(x_{1},x_{n}) \\
\vdots & \ddots & \vdots \\
v(x_{n},x_{1}) & \dots & v(x_{n},x_{n})
\end{bmatrix}
\right).$$
All marginal, joint and conditional distributions are Normal [@Seeger2004]. This can then be formally written as:
$$f(\textbf{x}) \sim GP(m(\textbf{x}), v(\textbf{x}_{i},\textbf{x}_{j})) \hspace{0.1cm}.$$
We will restrict ourselves to Gaussian processes with linear prior mean functions, so the prior mean can be specified as $E[f(\textbf{x})|\boldsymbol{\beta}] = \textbf{h}(\textbf{x})^{T}\boldsymbol{\beta}$, where $\textbf{h}(\textbf{x})$ is a vector of basis functions of $\textbf{x}$ and $\boldsymbol{\beta}$ is a vector of unknown coefficients. The covariance $v(\textbf{x}_{i},\textbf{x}_{j})$ can be defined as $\sigma^{2}c(\textbf{x}_{i},\textbf{x}_{j})$, where $c$ is a known correlation function of distance. The stationarity of the Gaussian process means that the covariance function does not change over the input space. A common choice of correlation function is the squared exponential, $c(\textbf{x}_{i},\textbf{x}_{j}) = \text{exp} \left\{ - \frac{| \textbf{x}_{i} - \textbf{x}_{j} | ^{2}}{\delta} \right\}$, where $\delta$ is the correlation length parameter and controls the wiggliness of the process.
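The finite-dimensional definition above can be sketched directly. The following Python fragment (with illustrative values for $\boldsymbol{\beta}$, $\sigma^{2}$ and $\delta$, none of them taken from the text) draws one realisation of a Gaussian process with a linear prior mean and the squared-exponential correlation function:

```python
import numpy as np

def sq_exp_corr(x1, x2, delta):
    """Squared-exponential correlation: exp(-|x_i - x_j|^2 / delta)."""
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return np.exp(-d2 / delta)

rng = np.random.default_rng(1)

# Illustrative hyperparameter values (assumed, not taken from the text).
x = np.linspace(0.0, 20.0, 50)
beta0, beta1 = -3.0, 0.5            # linear mean h(x)^T beta = beta0 + beta1 * x
sigma2, delta = 1.0, 10.0           # variance and correlation length

m = beta0 + beta1 * x
K = sigma2 * sq_exp_corr(x, x, delta)

# One draw from the finite-dimensional marginal: a multivariate normal with
# this mean vector and covariance matrix (jitter added for numerical stability).
f = rng.multivariate_normal(m, K + 1e-8 * np.eye(len(x)))
print(f.shape)
```

Larger values of $\delta$ give smoother draws; as $\delta$ shrinks, the realisation is allowed to bend more quickly between points.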
Model Outline
=============
Initially, we include no knowledge of the function output of the system, simply a label on which region the output belongs to, i.e. whether the input produces an output in region 1 or region 2. This is important since the range of possible applications includes those where the output is simply binary and hence has no associated output value (for example, where the code does not run to completion). Instead of focusing on the function output, we will instead be using a class labelling (similar to that of classification [@Nickisch2008]) that we assign according to whether the input point lies in region 1 or region 2.
In order for the model to be applicable to cases that have either continuous or discontinuous transitions between regions, we aim to model the output regions separately. Therefore, we need to find where the boundary between these regions lies and be able to make predictions of region classification for other input values. This should include quantifying any uncertainty that is present with these estimates. Once the regions have been identified, if the actual system has real output values that we are interested in, then the regions can be modelled with separate Gaussian process emulators.
Due to limited initial information, the choice was made to model the boundary and region classification in latent space. We also need to ensure that correlation between data points is included in the model. Hence, a latent variable modelled as a Gaussian process is used to structure the two output solutions using our assigned class labelling. A latent variable is a quantity that is not observed directly, but rather inferred from the region classifications that are the observations. The variable is hidden behind the physical model and used to aid our prediction of the boundary. The values of the Gaussian process will not be measured themselves; only the values of the system and which region the points lie in.
As with logistic regression and Gaussian process classification, we are interested in adapting their use of class labels, $y$, for each input data point, $\textbf{x}$, associated with the separate class regions. In contrast to [@Nickisch2008] and [@Seeger2004], who use the values -1 and 1 to denote the different regions, we adapt this to account for the minimal knowledge we have about the form of the latent variable. We do not assign numerical values as class labels for the state, but simply let the labels be negative for one region and positive for the other. It is important to distinguish that for each input point we have a class label and a separate function output, if the system is such that an output exists. We do not use the function output explicitly in the modelling of the two regions. A Gaussian process, $\eta(\textbf{x})$, is estimated with the constraint of having the correct sign at all initial known data points. This differs from a standard Gaussian process, where the training data usually consist of the input variables and the corresponding function outputs at a few predesigned points. The latent Gaussian process is no longer trained to pass through the function evaluations at the specific input points; instead, the latent values may take any value provided their signs agree with the class labels at these inputs. Once this is satisfied and the latent Gaussian process has been estimated, a threshold, $\psi$, is taken to split the input space into the separate regions. Due to the set up of the problem, the threshold is taken to be zero.
Metropolis Hastings
-------------------
For the estimation of the parameters of the latent Gaussian process, a method is required to provide estimates that agree with the negative and positive region labels that have been defined. We take a Bayesian approach.
Bayesian inference derives a posterior probability distribution from a prior distribution and a likelihood function,
$$\pi(\theta|x) = \frac{\pi(x|\theta)\pi(\theta)}{\pi(x)} \hspace{0.1cm}.$$
$\pi(\theta|x)$ is the probability density function of the model parameters, $\theta$, given the data, $x$. This is made up of the prior knowledge of the model parameters before knowledge of the data, $\pi(\theta)$, along with the likelihood of the data, $\pi(x|\theta)$. What is of most interest is the denominator since this is where non-tractable problems lie. This quantity can be found by integrating $\pi(x,\theta)$ over all possible parameter values. However, this calculation is only analytically possible for very simple models, hence we turn to approximation methods by drawing samples from the posterior.
Markov chain Monte Carlo (MCMC) methods [@Brooks2012] refer to a collection of algorithms designed to draw approximate random samples from probability distributions that are difficult to sample from directly. Such an algorithm constructs a Markov chain whose equilibrium distribution matches the distribution of interest. See [@Gelman2013] and [@Sivia2006] for more background on Bayesian methods.
One form of Monte Carlo sampling is known as rejection sampling or the acceptance-rejection method. A simple form of rejection sampling when starting with minimal information is Approximate Bayesian Computation (ABC). ABC works by simulating predicted model data and comparing it with the known observations to estimate the posterior distribution for the model parameters [@Turner2012]. Hence, it will produce an estimate for the expected posterior of the latent Gaussian process assuming only the input values and some form of output comparison; we do not need to supply a likelihood. Following [@Turner2012], model parameters, $\theta$, are accepted if $\rho(X,Y) \leq \delta$, where $\rho(\cdot,\cdot)$ is a measure of distance, usually taken to be the Euclidean distance, $\Vert X-Y \Vert$; here $X$ is the data simulated from $\theta$, $Y$ is the true observations and $\delta$ is a small quantity that specifies how close the approximation is to the true posterior distribution [@Wilkinson2008].
We propose a modified version of the ABC MCMC algorithm for our problem. This produces an estimate to the posterior distribution for the latent GP parameters, $\theta = (\boldsymbol{\beta},\sigma,\delta)$. As mentioned previously, each data point will be given a class label of positive or negative. This concept is highly important in this algorithm since it will be used as the basis of the rejection criteria. So, instead of requiring samples to be close in value to the observations, we are forcing samples to have the same sign as the observation for comparison. Hence, it will be generating an estimate of the Gaussian process, $\eta$, that is negative in all input space of region 1 and positive in the input space of region 2.
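A minimal sketch of this sign-matching ABC scheme might look as follows; the priors on $(\boldsymbol{\beta},\sigma^{2},\delta)$ and the 12-point design are illustrative assumptions, not the settings used in our examples:

```python
import numpy as np

def sq_exp_corr(x1, x2, delta):
    """Squared-exponential correlation: exp(-|x_i - x_j|^2 / delta)."""
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return np.exp(-d2 / delta)

rng = np.random.default_rng(2)

# Toy design: six points labelled negative (region 1), six positive (region 2).
x = np.linspace(0.0, 20.0, 12)
signs = np.array([-1] * 6 + [1] * 6)

accepted = []
for _ in range(2000):
    # Candidate parameters drawn from assumed (illustrative) priors.
    beta = rng.normal(0.0, 2.0, size=2)
    sigma2 = rng.gamma(2.0, 1.0)
    delta = rng.gamma(2.0, 5.0)

    m = beta[0] + beta[1] * x
    K = sigma2 * sq_exp_corr(x, x, delta) + 1e-8 * np.eye(len(x))
    eta = rng.multivariate_normal(m, K)

    # Sign-matching rejection: keep the parameters only if the simulated
    # latent process agrees in sign with every region label.
    if np.all(np.sign(eta) == signs):
        accepted.append((beta, sigma2, delta))

print(f"acceptance rate: {len(accepted) / 2000:.3f}")
```

The acceptance rate of such a scheme is typically very low, since a whole simulated latent process must match every label at once.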
Although using this method of ABC sampling proves to be effective, it is not very efficient and could be improved to include more information already known from the initial data. In a rejection sample such as ABC, the rejection rate tends to be very high, especially when working in higher dimensions. In alternative MCMC methods this is not always the case, and so we proceed by considering a version of the Metropolis Hastings algorithm.
The general Metropolis Hastings algorithm works by constructing Markov chains such that its stationary distribution is the distribution of interest. See [@Gilks1996] and [@Chib2017] for more information. Unlike ABC, Metropolis Hastings requires a likelihood. In this case it needs to reflect the knowledge of which points are definitely positive or negative in association with the different regions. Hence, the likelihood becomes:
$$\begin{split}
\mathcal{L}(\theta;x) & = P \left( \eta(x_{1})<0 , \eta(x_{2})<0 , \ldots , \eta(x_{j})<0 , \eta(x_{j+1})>0 , \ldots , \eta(x_{n})>0 \right) \\
& = \int_{-\infty}^{0} \dots \int_{-\infty}^{0} \int_{0}^{\infty} \dots \int_{0}^{\infty} \phi ( \eta(x_{1}), \eta(x_{2}), \dots, \eta(x_{j}), \eta(x_{j+1}), \dots, \\
& \hspace{1cm} \eta(x_{n}) ) \hspace{0.2cm} d\eta(x_{1}) \, d\eta(x_{2}) \dots d\eta(x_{n}) .
\end{split}$$
This gives the joint probability of the first $j$ points falling in negative space and the remaining $n-j$ points falling in positive space. The main difference between sampling from an ordinary Gaussian process and the latent one here is the use of the cumulative distribution function instead of the density function of a Normal distribution. By specifying this to be our likelihood, we are not putting any constraints on the specific values of the generated Gaussian process, just their signs. The process is not observed, and so there is no concern for exactly what value it takes as long as the change in sign is estimated correctly to give us an estimate of the region boundary. It is also important to note the correlation assumption in using this likelihood. Since nearby points are more likely to fall in the same region, the correlation must be accounted for through the use of a single multivariate draw of the distribution.
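This orthant probability has no closed form in general, but it can be approximated by simple Monte Carlo: draw from the multivariate normal implied by the current parameters and count the fraction of draws whose signs all match the region labels. A sketch, with an illustrative mean, covariance and set of labels (not taken from the text):

```python
import numpy as np

def sign_likelihood_mc(m, K, signs, n_samples=20000, rng=None):
    """Monte Carlo estimate of the orthant probability
    P(sign(eta(x_i)) = s_i for all i) for eta ~ N(m, K)."""
    rng = rng or np.random.default_rng(0)
    draws = rng.multivariate_normal(m, K, size=n_samples)
    ok = np.all(np.sign(draws) == signs, axis=1)
    return ok.mean()

# Illustrative values: two negatively labelled points and two positive ones,
# with a mean that already agrees with the labels, so the likelihood is high.
m = np.array([-2.0, -1.0, 1.0, 2.0])
K = 0.25 * np.eye(4)
signs = np.array([-1, -1, 1, 1])
lik = sign_likelihood_mc(m, K, signs)
print(lik)
```

In a Metropolis Hastings step, this estimate would be evaluated at both the current and proposed parameter values and used in the acceptance ratio.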
As well as the use of a likelihood, Metropolis Hastings also differs from ABC in the calculations leading to the final predictive distribution. Where ABC simulates many samples and rejects those that do not fit the criterion, Metropolis Hastings uses the given likelihood to downweight parameters that are less likely to give a sample that fits the comparison with the observations. So, as the iterations progress, the current parameter estimates in the chain move closer to those that are more likely to result in accepted samples. In other words, a certain amount of rejection is done automatically with the use of the likelihood.
At the end of the algorithm, we are left with a chain of values that form an estimate to the posterior distribution of the Gaussian process parameters. We can then find a MAP (Maximum A Posteriori) estimate to give the parameters that maximise this posterior distribution [@Rice2007]. Using this, we can also produce a corresponding posterior distribution for the latent values themselves.
Finding the MAP estimate of the parameters results in drawing many samples from a joint Gaussian distribution. This is inefficient and highly time consuming since, in addition to making the draws, it is necessary to perform a process similar to rejection sampling to eliminate any of the samples that do not follow the negative and positive region pattern. One way in which the efficiency can be increased is to sample each latent value in $\eta(\textbf{x})$ in turn using the normal conditioning equations presented below:
$$\begin{split}
y_{j}|y_{i} & \sim \mathcal{N}(E[y_{j}|y_{i}],\text{var}[y_{j}|y_{i}]) \hspace{0.1cm} , \\
E[y_{j}|y_{i}]& = E[y_{j}] + \text{cov}[y_{j},y_{i}]\text{var}[y_{i}]^{-1}(y_{i} - E[y_{i}]) \hspace{0.1cm} , \\
\text{var}[y_{j}|y_{i}]& = \text{var}[y_{j}] - \text{cov}[y_{j},y_{i}]\text{var}[y_{i}]^{-1}\text{cov}[y_{i},y_{j}] \hspace{0.1cm} ,
\end{split}$$
where $i,j = 1,2,...,n$. Now, to find a single draw of the latent process, we draw $n$ times from a univariate Gaussian distribution instead of drawing a vector of length $n$ from a multivariate distribution. Using the equations above ensures that the correlation between points remains included in the sample, and computational time is greatly reduced. On sampling each latent process point in turn, draws from a univariate Gaussian distribution are made until the value agrees with the sign of the corresponding region. Hence $\eta(x_{1})$ is sampled first and, for a point in region 1, all positive values are rejected until a negative value is drawn; this value is stored as the first latent variable point and we move on to generating a value for the next point. The second, $\eta(x_{2})$, is then drawn conditional on the result of the first sample, $\eta(x_{1})$, and again values are rejected where appropriate. This continues for all data values and ensures only one value is sampled at a time, reducing computation time. Time is also saved because points that do not coincide with the correct region are rejected individually: we never reject a whole sample, or points that already agree with the sign for their region. Instead, we resample each point in turn until a valid value is found.
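The sequential conditional sampler described above can be sketched as follows; the mean vector, covariance matrix and sign labels in the usage example are illustrative only:

```python
import numpy as np

def draw_latent_sequential(m, K, signs, rng=None, max_tries=10000):
    """Draw eta ~ N(m, K) one coordinate at a time, rejecting each
    univariate draw until it matches the required sign. The standard
    Gaussian conditioning equations preserve the correlation."""
    rng = rng or np.random.default_rng(0)
    n = len(m)
    eta = np.empty(n)
    for j in range(n):
        if j == 0:
            mu, var = m[0], K[0, 0]
        else:
            Kji = K[j, :j]                      # cov[y_j, y_i]
            Kinv = np.linalg.inv(K[:j, :j])     # var[y_i]^{-1}
            mu = m[j] + Kji @ Kinv @ (eta[:j] - m[:j])
            var = K[j, j] - Kji @ Kinv @ Kji
        for _ in range(max_tries):
            draw = rng.normal(mu, np.sqrt(max(var, 1e-12)))
            if np.sign(draw) == signs[j]:
                eta[j] = draw                   # keep only sign-valid draws
                break
        else:
            raise RuntimeError("sign constraint never satisfied")
    return eta

# Illustrative mean, covariance and sign labels (assumed values).
m = np.array([-1.0, -0.5, 0.5, 1.0])
idx = np.arange(4)
K = 0.5 * np.exp(-((idx[:, None] - idx[None, :]) ** 2) / 4.0)
signs = np.array([-1, -1, 1, 1])
eta = draw_latent_sequential(m, K, signs)
print(eta)
```

Each rejection here discards only a single univariate draw, never the whole vector, which is the source of the efficiency gain.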
The main computational expense comes from sampling the first point over the boundary in region 2, i.e. the first time the latent process switches from negative to positive. This jump in sign can cause problems in the computation, especially if the conditional variance becomes very small, making it hard for the Gaussian process to make the initial jump. To ease the computation, it is useful to place the points adjacent to the boundary early in the sampling order, so that the sign change is established near the beginning. This is possible since the numbering of the data points is arbitrary, and so the ordering makes no difference to the resulting Gaussian process.
Examples 1
==========
1d Example
----------
To illustrate the concept, a simple toy example of one input variable and one output with two solution regions is presented in Figure \[fig1\]. The inputs are a vector of 12 values ranging between 0 and 20, with the undefined boundary situated in the region $[6,8]$. The points in the range $[0,6]$ are specified as region 1 and have associated class label negative, and the points in the range $[8,20]$ are in output region 2 with class label positive.
The latent Gaussian process is found by applying the Metropolis Hastings approach explained above. We find a posterior distribution for the parameters, $\theta = (\beta,\sigma^{2},\delta)$, by running our algorithm a large number of times to ensure convergence. Samples from the posterior distribution can be taken for the latent process at the input points, $\eta(x_{1}), ..., \eta(x_{n})$. After an estimate for the latent process has been found, it is thresholded at $\eta=0$ to give the value of $x$ at which the boundary between regions lies. This is $x = 7.15$, shown by the green dashed line in Figure \[fig1\]. This is a suitable value for the boundary since we set the boundary interval to be $[6,8]$ in the example. Given the limited information regarding the location of the boundary and the lack of knowledge of the actual system output, we would expect a high level of uncertainty in any results. As shown in Figure \[fig1\], the credible intervals for the estimate are very large and roughly equal to the extreme bounds that we set the example up with, $[6,8]$. It is interesting to note that our estimate for the boundary is not the naive estimate of $x=7$. The estimate has favoured the right boundary, which gives us some confidence that our model is producing a more informative estimate than the centre value between data points. More investigation is needed to test this assumption.
![1 dimensional example with 2 output regions. The posterior mean of the latent Gaussian process (blue) is shown along with the prior mean (red) and boundary estimate (green). Both have 95% credible intervals included. Initial data points are shown in orange with size corresponding to misclassification.[]{data-label="fig1"}](pplot1.png)
Misclassification
=================
The method of model validation used in this example is based on a leave-one-out cross-validation. This usually involves leaving each training point out in turn, fitting a Gaussian process to the remaining points, and then using this to predict the point that was left out [@Seeger2004]. Given that the problem is set up in latent space, it is not possible to strictly follow this layout. Instead, we have looked into methods commonly used in Gaussian process classification to adapt the validation to our problem. As mentioned in [@Seeger2004], it is possible to look into the misclassification of the points during the validation. They provide a binary classification example of sorting images of digits 3 and 5 in the postal service. From a test set of data, they count the number of times a digit is wrongly classified in terms of the process’ standard deviation and length scale parameters.
Adapting this to suit our particular problem, we use a version of leave-one-out cross-validation to find the misclassification rate. A leave-one-out cross-validation is performed on samples from the posterior distribution for the latent process points to predict the sign of each point left out in turn. We use the latent Gaussian process to predict the sign of the removed point only. From these samples, we calculate the proportion of times each point is classified into the wrong region. The output is shown in Figure \[fig1\], where the size of the data points corresponds to the rate of misclassification. As expected, the rate is largest for the two points either side of the boundary. In a 1d example such as this, these points are the most critical since they are the points that restrict the boundary to the precise region of input space. It is also interesting to note that the remaining points have a misclassification rate of almost (but not quite) zero. On closer inspection, we can see that very occasionally the latent process crosses the axis. This is caused by the Gaussian process having a short correlation length parameter, giving the latent process the chance to bend quickly over the $\eta=0$ threshold between known points in the same region. We also identify that the prior mean function placed on the Gaussian process has some influence. The prior mean (shown by the red line in Figure \[fig1\]) forces the posterior points in each region to follow the same pattern. Therefore, this linear effect appears to force the points to stay in the specified sign.
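The misclassification calculation can be sketched as below. This is a simplified stand-in: the "posterior draws" here are unconstrained Gaussian samples rather than output of the full Metropolis Hastings scheme, and all numerical settings are assumed for illustration.

```python
import numpy as np

def loo_misclass_rate(m, K, eta_samples, signs, rng=None):
    """For each design point j, predict eta(x_j) from the other points'
    latent values via the Gaussian conditioning equations, and count how
    often the predicted sign disagrees with the known region label.
    eta_samples has shape (n_samples, n)."""
    rng = rng or np.random.default_rng(0)
    n_samples, n = eta_samples.shape
    rates = np.zeros(n)
    for j in range(n):
        keep = np.arange(n) != j
        Kji = K[j, keep]
        Kinv = np.linalg.inv(K[np.ix_(keep, keep)])
        sd = np.sqrt(max(K[j, j] - Kji @ Kinv @ Kji, 1e-12))
        wrong = 0
        for s in range(n_samples):
            mu = m[j] + Kji @ Kinv @ (eta_samples[s, keep] - m[keep])
            pred = rng.normal(mu, sd)
            wrong += pred * signs[j] < 0   # sign disagrees with the label
        rates[j] = wrong / n_samples
    return rates

# Illustrative stand-in for posterior draws of the latent process.
rng = np.random.default_rng(4)
x = np.linspace(0.0, 20.0, 8)
signs = np.array([-1, -1, -1, -1, 1, 1, 1, 1])
m = 0.5 * (x - 10.0)                 # assumed linear prior mean
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 10.0) + 1e-8 * np.eye(8)
eta_samples = rng.multivariate_normal(m, K, size=200)
rates = loo_misclass_rate(m, K, eta_samples, signs)
print(rates)
```

In a set-up like this, the rates would be expected to be largest for the points adjacent to the sign change, mirroring the behaviour seen in Figure \[fig1\].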
Based on this example, it has become clear that it is important to place suitable priors on the model parameters and the prior mean function. For the example shown in Figure \[fig1\], we found that a linear prior mean function was a particularly accurate choice when generating our Gaussian process.
One interesting aspect of Gaussian processes is their behaviour in the far field of the input space. They converge to the prior mean asymptotically, which would be a problem for, say, a constant prior mean. If a constant mean function is placed on the Gaussian process, then the overall latent process tends towards the prior mean at the edges of our input space. Due to our model layout of negative and positive space, the prior mean would be estimated to be close to the $\eta=0$ axis, and we find that it is very easy for the process to switch signs when the latent variable lies close to zero in the far field, forcing a misclassification in that area.
In a situation such as that in Figure \[fig1\], we have the extra knowledge that there are only two output regions, so the latent variable should not slip back over the $\eta=0$ axis. This prior knowledge is incorporated into the method through the selection of the prior mean function. The linear prior mean placed on the example in Figure \[fig1\] forces the latent Gaussian process away from the $\eta=0$ axis at the edges of the input space, since we are certain that there are only two distinct regions. If, for example, the two regions in the input space were separated by a circle, then it would be sensible to place a quadratic prior on the mean function, ensuring the Gaussian process would not return to the $\eta=0$ axis in the far field around the circle. Although this would appear to be a sensible choice, polynomials of a higher order come with a larger number of estimated parameters. Therefore, we should consider whether the classification at the edges of our input space is useful or not.
The prior mean in this example has a linear form of $ax + b$, where $b$ is the value of the latent process when it crosses the vertical axis. Since it is known that the latent process must cross the $x$-axis at approximately the boundary between regions, we can incorporate this into the prior knowledge of our model. This will then help approximate where the latent process crosses the vertical axis, $b$, more efficiently. A transformation is applied to the input points so that the boundary between regions on the $x$-axis now approximately lies at zero. With this transformation, a tight prior can be placed over the axis intercept, $b$, ensuring the latent process crosses the axis at zero. If we compare the plot in Figure \[fig1\] with that in Figure \[fig2\], we notice a significant difference in the resulting latent process. The prior means for each plot are shown in red. Figure \[fig1\] uses the transformed data; its posterior mean Gaussian process follows its prior by being close to a straight line through the boundary, as expected in a problem such as this with minimal information. Figure \[fig2\] does not include the transformation; its posterior estimate in region 1 levels out as it approaches zero. This is clearly not appropriate since, with no further information about the system, we would expect both sides of the latent process to behave similarly. This shows that transforming the data greatly improves the estimate of the latent process and any predictions that follow.
![Same example as of figure \[fig1\], but where the data are not transformed. The prior mean (red) crosses close to the origin (0,0).[]{data-label="fig2"}](pplot2.png)
When considering prior knowledge, it is also important to choose a suitable form for the correlation length parameter, $\delta$. The correlation length parameter determines how much the Gaussian process is allowed to bend between each of the initial data points [@Seeger2004]. Particularly when considering the 1d example in Figure \[fig1\], we know that there is only one boundary, so the latent Gaussian process is not expected to change sign between data points (apart from at the boundary between regions). If the correlation lengths are allowed to become too small, then there is a chance that the Gaussian process could curve round quickly and fall briefly in the wrong sign, causing a misclassification of regions in some input areas. To ensure this does not happen, inverse gamma priors are placed on the $\delta$'s so that they are forced away from zero. An inverse gamma prior is also placed on the variance, $\sigma^{2}$.
Examples 2
==========
2d Example
----------
The method is now expanded to a simple 2 dimensional version of the 1d example. The general method is very similar to that in 1 dimension, with the exception that the exact boundary is no longer so easily found. In the example in Figure \[fig3\], 20 input points are generated using a Latin hypercube [@Welch1992] over the region $[-1,7]^{2}$. The boundary between regions is defined by the line $x_{1}=3$. In Figure \[fig3\], the yellow points are the initial points in region 1 (input space $x_{1} < 3$) and the purple points are those in region 2 (input space $x_{1} > 3$). The latent GP has been applied to a grid of points over the input space to show where the estimated regions lie.
![2 dimensional example where the two regions are split by the $x_{1}=3$ plane (red). The dark blue region corresponds to a high probability of being classified into region 1, whilst light blue corresponds to a high probability of being classified into region 2. A misclassification rate is also shown based on point size.[]{data-label="fig3"}](pplot3.png)
To show uncertainty within the 2d example, Figure \[fig3\] shows the probability of input points being in region 1 compared to region 2. The dark blue points represent a high probability of being classified into region 1 and the light blue points a high probability of being classified into region 2. As expected, there is high uncertainty in the predicted region around the boundary. This is due to the minimal information known in this area of input space, so an incorrect classification is easy to make. A misclassification rate is calculated for each point in the same way as the validation performed on the 1d example, and it also shows high uncertainty in this boundary region. We can see from this plot that the points near the boundary have a larger rate of misclassification than anywhere else in the input space. Where the uncertainty increases here, there is a much higher chance of the latent Gaussian process flipping sign. It is also interesting to note the general slope of the estimated boundary. If our method were no more effective than the naive approach to the problem, we would expect the approximate boundary to misclassify equally across the boundary. This is not the case, since the upper section curves far more into region 2 than the lower section. We can see that the latent process is in fact responding to the data.
Another 2d Example
------------------
Another example is provided by Santner et al., whose test function is shown in Figure \[fig8\] and has the following form:
$$y(x) =
\begin{cases}
\infty &\quad \text{if} \hspace{0.2cm} x_{1}^{2} + x_{2}^{2} \le c_{1}^{2} \\
\frac{e^{-(a'x + x'Qx)}}{(x_{1}^{2} + x_{2}^{2} - c_{1}^{2})} &\quad \text{if} \hspace{0.2cm} c_{1}^{2} \le x_{1}^{2} + x_{2}^{2} \le c_{2}^{2} \\
- \infty &\quad \text{if} \hspace{0.2cm} x_{1}^{2} + x_{2}^{2} \ge c_{2}^{2} ,\\
\end{cases}$$
where,
$$a = [3,5] \hspace{0.5cm}
Q =
\left(\begin{array}{cc}
2 & 1.5 \\
1.5 & 4 \\
\end{array}\right)
\hspace{0.5cm} c_{1}^{2} = 0.25^{2}, \hspace{0.3cm} c_{2}^{2} = 0.75^{2} \hspace{0.1cm}.$$
The space between the two circles is region one and the remainder is region two, both over the input space $[-1.25,1.25]^2$. There are function values associated with this example that are only feasible within region 1. We choose to neglect these and focus on the classification side of our method, where the regions can be modelled separately after they have been fully classified.
![2 dimensional example with two regions. Region 1 lies within the two circles and region 2 is the remaining input space.[]{data-label="fig8"}](testfunction.png)
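The test function and its region labels can be implemented directly from the definition above. This is an illustrative sketch; the function and variable names are our own.

```python
import numpy as np

a = np.array([3.0, 5.0])
Q = np.array([[2.0, 1.5],
              [1.5, 4.0]])
c1_sq, c2_sq = 0.25**2, 0.75**2

def classify(x):
    """Return 1 for region 1 (between the two circles), 2 otherwise."""
    r_sq = x[0]**2 + x[1]**2
    return 1 if c1_sq <= r_sq <= c2_sq else 2

def y(x):
    """Santner et al. test function, following the cases above."""
    x = np.asarray(x, dtype=float)
    r_sq = x[0]**2 + x[1]**2
    if r_sq <= c1_sq:
        return np.inf
    if r_sq >= c2_sq:
        return -np.inf
    return np.exp(-(a @ x + x @ Q @ x)) / (r_sq - c1_sq)
```

Design points can then be labelled with `classify`, exactly as in the 50-point set-up described next.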
To set out this problem, 50 data points have been selected and given the class label positive (region 1) if they lie between the two rings, and negative (region 2) otherwise. These are shown by the purple and orange points in Figure \[fig7\] along with the hard boundary (red). The classification after applying our method is also shown in the plot, including uncertainty. Here, the light blue points represent high probability of being sorted into region 1 and the dark blue points represent high probability of being sorted into region 2. The largest areas of uncertainty correspond to the areas where our classification method performed the poorest.
Overall, our method estimates the regions well, with only a few larger deviations in the upper left and right sections of the doughnut region. These are likely caused by the lack of information in those areas. Due to the more complicated shape, we chose to fit a constant prior mean function. This has proven successful, since no areas have been misclassified in the far corners of the input space, which might otherwise have occurred. Alternatively, a quartic polynomial could be used for the prior mean function, but this introduces a large number of parameters to be estimated, which is especially problematic in areas of sparse data.
Two input points worth highlighting are those at the bottom of the larger circle; they are classified into different regions but are very close together. In this area, the latent Gaussian process has to change sign quickly and has proven able to do so; it is useful to confirm that our method can cope with cases such as this.
A misclassification rate is also included, where points are more likely to be misclassified in region 1 (between the rings). This is likely due to a higher proportion of points lying in region 2, so that the majority of the latent process is negative, making it more likely for points to be classified into region 2. This is supported by the constant mean function being estimated as $-2.25$, pulling the process to be more negative overall.
![Estimated regions for the 2d example shown in figure \[fig8\]. Initial data points are displayed (orange - region 1 and purple - region 2), with the actual region boundaries shown in red. Uncertainty on the estimate is included where light blue areas correspond to high probability of being classified into region 1 and dark blue areas correspond to high probability of being classified into region 2. A misclassification rate is also shown.[]{data-label="fig7"}](fplot2.png)
Motivating Example
------------------
Returning to the initial motivating example on reproduction rates in mammals, the inputs are NKB concentration and firing rate, where a Latin hypercube has been created over the input space $[0.1,0.2]\times[10,200]$. The choice was made here to transform the data to a $[0,1]$ scale for computational simplicity. The system is bimodal, so for the 20 initial points, we labelled 5 of them as negative in region 1, and 15 of them as positive in region 2. As with the rest of our examples, we aim to predict the region of any point in the input space and model the system as a whole.
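A minimal sketch of the $[0,1]$ rescaling described above, assuming simple min-max scaling over the design ranges (the paper does not state the exact transform used):

```python
import numpy as np

# Design ranges: NKB concentration in [0.1, 0.2], firing rate in [10, 200].
lower = np.array([0.1, 10.0])
upper = np.array([0.2, 200.0])

def to_unit(x):
    """Min-max scale an input point onto [0, 1]^2."""
    return (np.asarray(x, dtype=float) - lower) / (upper - lower)

def from_unit(u):
    """Map a point in [0, 1]^2 back to the original input space."""
    return lower + np.asarray(u, dtype=float) * (upper - lower)
```

Working on the unit square keeps the two inputs on comparable scales, which simplifies choosing kernel lengthscales for the latent GP.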
One of the most important choices in this example was the form of the prior mean of the latent Gaussian process. To make this decision, we consulted the expert in the system as well as examining the initial points. From the initial points (yellow and purple) shown in Figure \[fig5\], we note that there is likely to be only one change in region, meaning that we have just two disjoint regions separated by a fairly linear boundary. This agrees with prior knowledge collected from the expert. Therefore, it would be reasonable to choose either a constant or a linear mean form. A constant mean would be preferable if we were unsure about the number or shape of regions, but would have a higher chance of misclassifying in sparse areas or in the far field of the input space. We therefore chose to proceed with a linear prior mean. The predicted region boundary from a linear prior mean is shown in Figure \[fig5\] along with the actual boundary (red) and uncertainty. In general, our solution classifies correctly in most areas, where, as expected, the area between the regions is the most uncertain. The main issue is that the region of highest uncertainty (where we expect our boundary to lie) does not appear to capture the correct curve of the actual boundary. This is due to a lack of information in the area, but since the correct boundary still lies within our area of uncertainty, we can conclude that our model is doing a reasonable job. Misclassification is also shown through the size of the points, where it is easier to misclassify points near the boundary. The method is the same as described in the previous examples.
![2 dimensional example looking at the effects of hormone release on mammal reproduction. The system has two regions of high and low rates of hormone release where the actual boundary is shown in red. Initial points are displayed (orange - region 1 and purple - region 2), with predicted region classification and uncertainty. Dark blue areas correspond to high probability of being classified into region 1 and light blue areas correspond to high probability of being classified into region 2. A misclassification rate is also shown.[]{data-label="fig5"}](pplot7.png)
Discussion
==========
We have developed a new method for classifying the output of numerical models into one of two classes. Our method is suitable for modelling systems with two output solution classes that are either stationary across the input space or separated by discontinuities. This includes systems where the output may be in a binary or categorical form. A major disadvantage of most common classification methods, such as those of [@Chang2015], [@Seeger2004] and [@Nickisch2008], is the assumption that the class labels associated with input points are independently distributed Bernoulli random variables. This is a concern because any correlation between nearby points is ignored. Neighbouring input points are more likely to result in the same output region, so it is vital that we include this information in our model.
Keeping this in mind, we use aspects of classification from [@Nickisch2008] in the form of class labelling. We use a form of this suited to limited knowledge and define all input points in one region to have the class label negative and all inputs in the other region to have the class label positive. To ensure that correlation between data points is included, a latent variable modelled as a Gaussian process is used to structure the two output solutions using our assigned class labelling. The latent Gaussian process is estimated using a version of the Metropolis-Hastings algorithm. As a form of model validation, we have calculated a version of the misclassification rate as shown by [@Seeger2004]. This is based on leave-one-out cross-validation.
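A minimal sketch of the latent-GP estimation step: the paper estimates the latent process with a version of Metropolis-Hastings, and this toy random-walk sampler is illustrative only, assuming a squared-exponential kernel with fixed hyperparameters and a probit likelihood linking the sign of the latent values to the $\pm 1$ class labels.

```python
import numpy as np
from scipy.stats import norm

def sq_exp_kernel(X, lengthscale=1.0, variance=1.0, jitter=1e-8):
    # Squared-exponential covariance with a small jitter for stability.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2) + jitter * np.eye(len(X))

def log_post(f, K_inv, labels):
    # GP prior (up to a constant) plus probit likelihood, labels in {-1, +1}.
    return -0.5 * f @ K_inv @ f + norm.logcdf(labels * f).sum()

def mh_latent_gp(X, labels, n_iter=2000, step=0.3, seed=0):
    # Random-walk Metropolis-Hastings over the latent values f at the design points.
    rng = np.random.default_rng(seed)
    K_inv = np.linalg.inv(sq_exp_kernel(X))
    f = np.zeros(len(X))
    lp = log_post(f, K_inv, labels)
    samples = []
    for _ in range(n_iter):
        f_new = f + step * rng.standard_normal(len(X))
        lp_new = log_post(f_new, K_inv, labels)
        if np.log(rng.uniform()) < lp_new - lp:  # accept/reject
            f, lp = f_new, lp_new
        samples.append(f.copy())
    return np.array(samples)
```

The posterior samples of the latent values carry the classification uncertainty: the proportion of samples with positive sign at a point estimates its probability of belonging to the positive region.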
We expect this method to be applicable to a wide range of problems in computer science, climate science and biology. Our main motivating example is based on assessing reproduction rates in mammals [@Voliotis2018]. We have successfully modelled this bimodal system, and the model can be used for class prediction at other input points with estimates of uncertainty included.
There are some obvious extensions to the work presented in this paper. One would be to expand the method to cope with situations where there are more than two output solution classes, which would increase the number of applications for which it is suitable. There is also room for research in areas of experimental design, where we could improve the accuracy of our classification and boundary estimation with limited initial data.
Acknowledgements {#acknowledgements .unnumbered}
================
We would like to thank T. Santner et al. and M. Voliotis et al. (2018) for providing us with the given examples. I would also like to thank EPSRC for my studentship, allowing me to carry out this research.
References {#references .unnumbered}
==========
#cython: boundscheck=False
"""
${NAME}
"""
from __future__ import absolute_import, division, print_function

import logging

cimport numpy
import numpy as np

from mcedit2.rendering import renderstates
from mcedit2.rendering.scenegraph.vertex_array import VertexNode
from mcedit2.rendering.layers import Layer
from mcedit2.rendering.vertexarraybuffer import QuadVertexArrayBuffer
cimport mcedit2.rendering.blockmodels as blockmodels

from libc.stdlib cimport malloc, realloc, free
from libc.string cimport memcpy

log = logging.getLogger(__name__)

lPowerLevels = [p/15. * 0.6 + 0.4 for p in range(16)]
lRedstonePowerColors = [
    (p if p > 0.4 else 0.3, max(0.0, p * p * 0.7 - 0.5), max(0.0, p * p * 0.6 - 0.7))
    for p in lPowerLevels
]

cdef unsigned char redstonePowerColors[16 * 4]
for i, (r, g, b) in enumerate(lRedstonePowerColors):
    redstonePowerColors[i*4] = b * 255
    redstonePowerColors[i*4+1] = g * 255
    redstonePowerColors[i*4+2] = r * 255
    redstonePowerColors[i*4+3] = 255

cdef unsigned char * foliageBitsPine = [0x61, 0x99, 0x61, 0xFF]  # BGRA
cdef unsigned char * foliageBitsBirch = [0x55, 0xA7, 0x80, 0xFF]  # BGRA


class BlockModelMesh(object):
    renderstate = renderstates.RenderstateAlphaTest

    def __init__(self, sectionUpdate):
        """
        :param sectionUpdate:
        :type sectionUpdate: mcedit2.rendering.chunkupdate.SectionUpdate
        :return:
        :rtype:
        """
        self.sectionUpdate = sectionUpdate
        self.sceneNode = None
        self.layer = Layer.Blocks

    def createVertexArrays(self):
        DEF quadFloats = 32
        DEF vertexBytes = 32
        cdef numpy.ndarray[numpy.uint16_t, ndim=3] areaBlocks
        cdef numpy.ndarray[numpy.uint8_t, ndim=3] areaBlockLight
        cdef numpy.ndarray[numpy.uint8_t, ndim=3] areaSkyLight
        cdef numpy.ndarray[numpy.uint8_t, ndim=3] areaData
        cdef numpy.ndarray[numpy.uint8_t, ndim=2] areaBiomes
        cdef numpy.ndarray[numpy.uint8_t, ndim=1] renderType
        cdef numpy.ndarray[numpy.uint8_t, ndim=1] opaqueCube
        cdef numpy.ndarray[numpy.float32_t, ndim=1] biomeTemp
        cdef numpy.ndarray[numpy.float32_t, ndim=1] biomeRain
        cdef blockmodels.BlockModels blockModels
        cdef short cy = self.sectionUpdate.cy
        atlas = self.sectionUpdate.chunkUpdate.updateTask.textureAtlas
        blockModels = atlas.blockModels
        if not blockModels.cooked:
            log.warning("createVertexArrays: Block models not cooked, aborting.")
            return
        blocktypes = self.sectionUpdate.blocktypes
        areaBlocks = self.sectionUpdate.areaBlocks
        areaBlockLight = self.sectionUpdate.areaLights("BlockLight")
        areaSkyLight = self.sectionUpdate.areaLights("SkyLight")
        areaData = self.sectionUpdate.areaData
        areaBiomes = self.sectionUpdate.areaBiomes
        renderType = self.sectionUpdate.renderType
        opaqueCube = np.array(blocktypes.opaqueCube)
        biomeTemp = self.sectionUpdate.biomeTemp
        biomeRain = self.sectionUpdate.biomeRain
        #faceQuadVerts = []

        cdef unsigned short waterID = blocktypes["minecraft:water"].ID
        cdef unsigned short waterFlowID = blocktypes["minecraft:flowing_water"].ID
        cdef unsigned short lavaID = blocktypes["minecraft:lava"].ID
        cdef unsigned short lavaFlowID = blocktypes["minecraft:flowing_lava"].ID
        # glass, stained glass are special cased to return False for `shouldSideBeRendered`
        cdef unsigned short glassID = blocktypes["minecraft:glass"].ID
        cdef unsigned short stainedGlassID = blocktypes.get("minecraft:stained_glass", blocktypes["minecraft:glass"]).ID

        # more special cases...
        def getID(internalName):
            bt = blocktypes.get(internalName, None)
            if bt is None:
                return 0
            return bt.ID

        cdef unsigned short anvilID = getID("minecraft:anvil")
        cdef unsigned short carpetID = getID("minecraft:carpet")
        cdef unsigned short dragonEggID = getID("minecraft:dragon_egg")
        cdef unsigned short endPortalID = getID("minecraft:end_portal")
        cdef unsigned short fenceID = getID("minecraft:fence")
        cdef unsigned short fenceGateID = getID("minecraft:fence_gate")
        cdef unsigned short hopperID = getID("minecraft:hopper")
        cdef unsigned short leavesID = getID("minecraft:leaves")
        cdef unsigned short leaves2ID = getID("minecraft:leaves2")
        cdef unsigned short paneID = getID("minecraft:glass_pane")
        cdef unsigned short barsID = getID("minecraft:iron_bars")
        cdef unsigned short pistonHeadID = getID("minecraft:piston_head")
        cdef unsigned short netherPortalID = getID("minecraft:portal")
        cdef unsigned short pComparatorID = getID("minecraft:powered_comparator")
        cdef unsigned short upComparatorID = getID("minecraft:unpowered_comparator")
        cdef unsigned short pRepeaterID = getID("minecraft:powered_repeater")
        cdef unsigned short upRepeaterID = getID("minecraft:unpowered_repeater")
        cdef unsigned short stoneSlabID = getID("minecraft:stone_slab")
        cdef unsigned short stoneSlab2ID = getID("minecraft:stone_slab2")
        cdef unsigned short woodenSlabID = getID("minecraft:wooden_slab")
        cdef unsigned short grassID = getID("minecraft:grass")
        cdef unsigned short snowID = getID("minecraft:snow")
        cdef unsigned short snowLayerID = getID("minecraft:snow_layer")
        cdef unsigned short netherFenceID = getID("minecraft:nether_brick_fence")
        cdef unsigned short stainedGlassPaneID = getID("minecraft:stained_glass_pane")
        cdef unsigned short woodenDoorID = getID("minecraft:wooden_door")
        cdef unsigned short ironDoorID = getID("minecraft:iron_door")
        cdef unsigned short birchDoorID = getID("minecraft:birch_door")
        cdef unsigned short spruceDoorID = getID("minecraft:spruce_door")
        cdef unsigned short jungleDoorID = getID("minecraft:jungle_door")
        cdef unsigned short acaciaDoorID = getID("minecraft:acacia_door")
        cdef unsigned short darkOakDoorID = getID("minecraft:dark_oak_door")
        cdef unsigned short cobbleWallID = getID("minecraft:cobblestone_wall")
        cdef unsigned short tripwireID = getID("minecraft:tripwire")
        cdef unsigned short redstoneWireID = getID("minecraft:redstone_wire")

        cdef list powerSources
        powerSources = [
            getID("minecraft:stone_pressure_plate"),
            getID("minecraft:wooden_pressure_plate"),
            getID("minecraft:light_weighted_pressure_plate"),
            getID("minecraft:heavy_weighted_pressure_plate"),
            getID("minecraft:stone_button"),
            getID("minecraft:wooden_button"),
            getID("minecraft:trapped_chest"),
            getID("minecraft:redstone_block"),
            getID("minecraft:daylight_detector"),
            getID("minecraft:daylight_detector_inverted"),
            getID("minecraft:lever"),
            getID("minecraft:detector_rail"),
            getID("minecraft:tripwire_hook"),
            getID("minecraft:redstone_torch"),
            getID("minecraft:unlit_redstone_torch"),
        ]
        powerSources = [p for p in powerSources if p != 0]

        #cdef char fancyGraphics = self.sectionUpdate.fancyGraphics
        cdef char fancyGraphics = True
        if fancyGraphics:
            opaqueCube[leavesID] = False
            opaqueCube[leaves2ID] = False

        waterTexTuple = self.sectionUpdate.chunkUpdate.textureAtlas.texCoordsByName["assets/minecraft/textures/blocks/water_still.png"]
        cdef float[4] waterTex
        waterTex[0] = waterTexTuple[0]
        waterTex[1] = waterTexTuple[1]
        waterTex[2] = waterTexTuple[2]
        waterTex[3] = waterTexTuple[3]
        lavaTexTuple = self.sectionUpdate.chunkUpdate.textureAtlas.texCoordsByName["assets/minecraft/textures/blocks/lava_still.png"]
        cdef float[4] lavaTex
        lavaTex[0] = lavaTexTuple[0]
        lavaTex[1] = lavaTexTuple[1]
        lavaTex[2] = lavaTexTuple[2]
        lavaTex[3] = lavaTexTuple[3]
        cdef float * fluidTex

        cdef unsigned short y, z, x, ID, meta
        cdef short dx, dy, dz
        cdef unsigned short nx, ny, nz, nID, upID
        cdef unsigned char nMeta
        cdef blockmodels.ModelQuadList quads
        cdef blockmodels.ModelQuad quad
        cdef short rx, ry, rz
        cdef unsigned char bl, sl
        cdef unsigned char tintType
        cdef unsigned char biomeID
        cdef float temperature, rainfall
        cdef unsigned int imageX, imageY
        cdef size_t imageOffset
        cdef size_t buffer_ptr = 0
        cdef size_t buffer_size = 256
        cdef float * vertexBuffer = <float *>malloc(buffer_size * sizeof(float) * quadFloats)
        cdef float * xyzuvstc
        cdef numpy.ndarray vabuffer
        cdef unsigned char * vertexColor
        cdef unsigned short color
        cdef size_t vertex, channel
        cdef unsigned char * tintColor
        cdef char doCull = 0
        cdef char foundActualState
        cdef char redstonePower
        cdef char wallCount
        cdef blockmodels.ModelQuadListObj quadListObj
        if vertexBuffer == NULL:
            return

        for y in range(1, 17):
            ry = y - 1 + (cy << 4)
            for z in range(1, 17):
                rz = z - 1
                for x in range(1, 17):
                    rx = x - 1
                    ID = areaBlocks[y, z, x]
                    if ID == 0:
                        continue
                    meta = areaData[y, z, x]
                    actualState = None
                    redstonePower = 0
                    if renderType[ID] == 3:  # model blocks
                        # if this block has actualStates, get its actualState
                        # using its neighbors and look up that state's models
                        # in blockModels.... ... ...
                        # to get its actual state, we need to get its current state from
                        # its id and meta, parse the state into properties,
                        # change some property values into others according to
                        # actualState logic, then use the new state to look up the model
                        # ... ... ...
                        # all without doing a dict lookup for every block...
                        #
                        # absolutely disgusting
                        def parseProps(ID, meta):
                            state = blocktypes.statesByID.get((ID, meta))
                            if state is None:
                                state = blocktypes.defaultBlockstates.get(ID)
                            if state is None or '[' not in state:
                                return {}
                            state = state.split('[')[1]
                            props = state[:-1].split(",")
                            props = [p.split("=") for p in props]
                            return {k:v for k,v in props}

                        def combineProps(props):
                            props = [k + "=" + v for k, v in props.iteritems()]
                            return tuple(sorted(props))

                        if grassID and (ID == grassID):
                            if (areaBlocks[y+1, z, x] == snowID
                                    or areaBlocks[y+1, z, x] == snowLayerID):
                                actualState = "minecraft:grass", ("snowy=true",)

                        if (fenceID and (ID == fenceID)
                                or netherFenceID and (ID == netherFenceID)):
                            props = []
                            for direction, dx, dz in [
                                ("north", 0, -1),
                                ("south", 0, 1),
                                ("west", -1, 0),
                                ("east", 1, 0),
                            ]:
                                nID = areaBlocks[y, z+dz, x+dx]
                                props.append(direction + "=" + ("true"
                                             if opaqueCube[nID]
                                             or fenceID and nID == fenceID
                                             or fenceGateID and nID == fenceGateID
                                             or netherFenceID and nID == netherFenceID
                                             else "false"))
                            actualState = blocktypes.namesByID[ID], tuple(sorted(props))

                        if (paneID and (ID == paneID)
                                or stainedGlassPaneID and (ID == stainedGlassPaneID)
                                or barsID and (ID == barsID)):
                            props = {}
                            if ID == stainedGlassPaneID:
                                props = parseProps(ID, meta)
                            for direction, dx, dz in [
                                ("north", 0, -1),
                                ("south", 0, 1),
                                ("west", -1, 0),
                                ("east", 1, 0),
                            ]:
                                nID = areaBlocks[y, z+dz, x+dx]
                                props[direction] = ("true"
                                                    if opaqueCube[nID]
                                                    or (paneID and nID == paneID)
                                                    or (barsID and nID == barsID)
                                                    or (stainedGlassPaneID and nID == stainedGlassPaneID)
                                                    or (glassID and nID == glassID)
                                                    or (stainedGlassID and nID == stainedGlassID)
                                                    else "false")
                            actualState = blocktypes.namesByID[ID], combineProps(props)

                        if ((woodenDoorID and ID == woodenDoorID)
                                or (ironDoorID and ID == ironDoorID)
                                or (birchDoorID and ID == birchDoorID)
                                or (spruceDoorID and ID == spruceDoorID)
                                or (jungleDoorID and ID == jungleDoorID)
                                or (acaciaDoorID and ID == acaciaDoorID)
                                or (darkOakDoorID and ID == darkOakDoorID)
                                ):
                            props = parseProps(ID, meta)
                            if len(props):
                                if props['half'] == 'upper':
                                    nID = areaBlocks[y-1, z, x]
                                    if nID == ID:
                                        lowerProps = parseProps(areaBlocks[y-1, z, x], areaData[y-1, z, x])
                                        for p in ['facing', 'hinge', 'open']:
                                            props[p] = lowerProps[p]
                                actualState = blocktypes.namesByID[ID], combineProps(props)

                        if cobbleWallID and ID == cobbleWallID:
                            props = parseProps(ID, meta)
                            props['up'] = "true"
                            wallCount = 0
                            for direction, dx, dz in [
                                ("north", 0, -1),
                                ("south", 0, 1),
                                ("west", -1, 0),
                                ("east", 1, 0),
                            ]:
                                nID = areaBlocks[y, z+dz, x+dx]
                                if nID == ID:
                                    props[direction] = "true"
                                    wallCount += 1
                            if wallCount == 2 and (
                                    (props['north'] == props['south'] == 'true')
                                    or (props['east'] == props['west'] == 'true')
                                    ):
                                if areaBlocks[y+1, z, x] == 0:
                                    props['up'] = 'false'
                            actualState = blocktypes.namesByID[ID], combineProps(props)

                        if tripwireID and ID == tripwireID:
                            props = parseProps(ID, meta)
                            for direction, dx, dz in [
                                ("north", 0, -1),
                                ("south", 0, 1),
                                ("west", -1, 0),
                                ("east", 1, 0),
                            ]:
                                nID = areaBlocks[y, z+dz, x+dx]
                                if nID == ID:
                                    props[direction] = "true"
                            actualState = blocktypes.namesByID[ID], combineProps(props)

                        if redstoneWireID and ID == redstoneWireID:
                            props = parseProps(ID, meta)

                            def isConnectible(nID, nMeta, dx, dz):
                                if (nID == redstoneWireID
                                        or (upComparatorID and nID == upComparatorID)
                                        or (pComparatorID and nID == pComparatorID)
                                        or (nID in powerSources)
                                        ):
                                    return True
                                elif ((pRepeaterID and nID == pRepeaterID)
                                      or (upRepeaterID and nID == upRepeaterID)):
                                    nProps = parseProps(nID, nMeta)
                                    if (dz != 0 and nProps['facing'] in ('north', 'south')
                                            or dx != 0 and nProps['facing'] in ('east', 'west')):
                                        return True

                            if len(props):
                                for direction, dx, dz in [
                                    ("north", 0, -1),
                                    ("south", 0, 1),
                                    ("west", -1, 0),
                                    ("east", 1, 0),
                                ]:
                                    nID = areaBlocks[y, z+dz, x+dx]
                                    nMeta = areaData[y, z+dz, x+dx]
                                    if isConnectible(nID, nMeta, dx, dz):
                                        props[direction] = "side"
                                    elif opaqueCube[nID] > 0:  # xxx isFullOpaqueCube
                                        nID = areaBlocks[y+1, z+dz, x+dx]
                                        nMeta = areaData[y+1, z+dz, x+dx]
                                        if isConnectible(nID, nMeta, dx, dz):
                                            props[direction] = "up"
                                    else:
                                        nID = areaBlocks[y-1, z+dz, x+dx]
                                        nMeta = areaData[y-1, z+dz, x+dx]
                                        if isConnectible(nID, nMeta, dx, dz):
                                            props[direction] = "side"
                                redstonePower = int(props['power'])
                                actualState = "minecraft:redstone_wire", combineProps(props)

                        if actualState is None:
                            quads = blockModels.cookedModelsByID[ID][meta]
                        else:
                            quadListObj = blockModels.cookedModelsForState(actualState)
                            quads = quadListObj.quadList
                        if quads.count == 0:
                            continue
                        if areaBiomes is not None:
                            biomeID = areaBiomes[z, x]
                        else:
                            biomeID = 1

                        for i in range(quads.count):
                            doCull = 0
                            quad = quads.quads[i]
                            if quad.cullface[0]:
                                nx = x + quad.cullface[1]
                                ny = y + quad.cullface[2]
                                nz = z + quad.cullface[3]
                                nID = areaBlocks[ny, nz, nx]
                                if (ID == endPortalID
                                        and quad.cullface[2] == -1):
                                    doCull = 0
                                elif (ID == anvilID
                                      or ID == dragonEggID
                                      or ID == fenceID
                                      or ID == fenceGateID
                                      or ID == hopperID
                                      or ID == pistonHeadID
                                      ):
                                    doCull = 0
                                elif ID == carpetID and quad.cullface[2] == 1:
                                    doCull = 0
                                elif ((ID == glassID or ID == stainedGlassID)
                                      and ((glassID and nID == glassID)
                                           or (stainedGlassID and nID == stainedGlassID))
                                      ):
                                    doCull = 1
                                elif ((ID == paneID
                                       or ID == barsID)
                                      and (ID == nID and meta == areaData[ny, nz, nx])):
                                    doCull = 1
                                elif (ID == netherPortalID and False):  # hairy, do later
                                    doCull = 1
                                elif (ID == stoneSlabID
                                      or ID == stoneSlab2ID
                                      or ID == woodenSlabID):
                                    if ((stoneSlabID and nID == stoneSlabID)
                                            or (stoneSlab2ID and nID == stoneSlab2ID)
                                            or (woodenSlabID and nID == woodenSlabID)):
                                        if (meta & 0x8) == (areaData[ny, nz, nx] & 0x8):
                                            doCull = 1
                                    else:
                                        doCull = opaqueCube[nID]
                                else:
                                    doCull = opaqueCube[nID]
                            if doCull:
                                continue

                            nx = x + quad.quadface[1]
                            ny = y + quad.quadface[2]
                            nz = z + quad.quadface[3]
                            bl = areaBlockLight[ny, nz, nx]  # xxx block.useNeighborLighting
                            sl = areaSkyLight[ny, nz, nx]
                            xyzuvstc = vertexBuffer + buffer_ptr * quadFloats
                            memcpy(xyzuvstc, quad.xyzuvstc, sizeof(float) * quadFloats)

                            temperature = biomeTemp[biomeID]
                            rainfall = biomeRain[biomeID]
                            temperature = min(max(temperature, 0.0), 1.0)
                            rainfall = min(max(rainfall, 0.0), 1.0)
                            rainfall *= temperature

                            if quad.biomeTintType:
                                if quad.biomeTintType == blockmodels.BIOME_GRASS:
                                    imageX = <unsigned int>((1.0 - temperature) * (blockModels.grassImageX - 1))
                                    imageY = <unsigned int>((1.0 - rainfall) * (blockModels.grassImageY - 1))
                                    imageOffset = imageX + blockModels.grassImageX * imageY
                                    tintColor = &blockModels.grassImageBits[imageOffset * 4]
                                if quad.biomeTintType == blockmodels.BIOME_FOLIAGE:
                                    imageX = <unsigned int>((1.0 - temperature) * (blockModels.foliageImageX - 1))
                                    imageY = <unsigned int>((1.0 - rainfall) * (blockModels.foliageImageY - 1))
                                    imageOffset = imageX + blockModels.foliageImageX * imageY
                                    tintColor = &blockModels.foliageImageBits[imageOffset * 4]
                                if quad.biomeTintType == blockmodels.BIOME_FOLIAGE_PINE:
                                    tintColor = foliageBitsPine
                                if quad.biomeTintType == blockmodels.BIOME_FOLIAGE_BIRCH:
                                    tintColor = foliageBitsBirch
                                if quad.biomeTintType == blockmodels.BIOME_REDSTONE:
                                    tintColor = &redstonePowerColors[redstonePower * 4]
                                    # print("REDSTONE TINT", redstonePower, tintColor[0], tintColor[1], tintColor[2])

                                vertexColor = <unsigned char *>xyzuvstc
                                for vertex in range(4):
                                    for channel in range(3):
                                        color = vertexColor[vertexBytes * vertex + vertexBytes - 4 + channel]
                                        # image format is ARGB8, but this is with respect to 4-byte words
                                        # when the words are little endian, the byte ordering becomes BGRA
                                        # what i REALLY SHOULD do is get the pixel as an int and bit shift the bytes out.
                                        color *= tintColor[2-channel]
                                        color >>= 8
                                        vertexColor[vertexBytes * vertex + vertexBytes - 4 + channel] = <unsigned char>color

                            xyzuvstc[0] += rx
                            xyzuvstc[1] += ry
                            xyzuvstc[2] += rz
                            xyzuvstc[5] += sl
                            xyzuvstc[6] += bl
                            xyzuvstc[8] += rx
                            xyzuvstc[9] += ry
                            xyzuvstc[10] += rz
                            xyzuvstc[13] += sl
                            xyzuvstc[14] += bl
                            xyzuvstc[16] += rx
                            xyzuvstc[17] += ry
                            xyzuvstc[18] += rz
                            xyzuvstc[21] += sl
                            xyzuvstc[22] += bl
                            xyzuvstc[24] += rx
                            xyzuvstc[25] += ry
                            xyzuvstc[26] += rz
                            xyzuvstc[29] += sl
                            xyzuvstc[30] += bl
                            buffer_ptr += 1
                            if buffer_ptr >= buffer_size:
                                buffer_size *= 2
                                vertexBuffer = <float *>realloc(vertexBuffer, buffer_size * sizeof(float) * quadFloats)

                    elif renderType[ID] == 1:
                        if ID == waterFlowID or ID == waterID:
                            fluidTex = waterTex
                        elif ID == lavaFlowID or ID == lavaID:
                            fluidTex = lavaTex
                        else:
                            continue
                        if meta > 8:
                            meta = 8  # "falling" water - always full cube
                        # upID = areaBlocks[y+1, z, x]
                        # if upID == waterID or upID == waterFlowID or upID == lavaID or upID == lavaFlowID:
                        #     quads = blockModels.fluidQuads[8]  # block above has fluid - fill this fluid block
                        # else:
                        quads = blockModels.fluidQuads[meta]
                        bl = areaBlockLight[y, z, x]  # xxx block.useNeighborLighting
                        sl = areaSkyLight[y, z, x]
                        for i in range(6):
                            quad = quads.quads[i]
                            nx = x + quad.quadface[1]
                            ny = y + quad.quadface[2]
                            nz = z + quad.quadface[3]
                            nID = areaBlocks[ny, nz, nx]
                            if opaqueCube[nID]:
                                continue
                            if nID == waterID or nID == waterFlowID or nID == lavaID or nID == lavaFlowID:
                                nMeta = areaData[ny, nz, nx]
                                if nMeta > 7 or 7 - (nMeta & 0x7) >= 7 - (meta & 0x7):
                                    continue  # cull face as the neighboring block is fuller
                            xyzuvstc = vertexBuffer + buffer_ptr * quadFloats
                            memcpy(xyzuvstc, quad.xyzuvstc, sizeof(float) * quadFloats)
                            xyzuvstc[0] += rx
                            xyzuvstc[1] += ry
                            xyzuvstc[2] += rz
                            xyzuvstc[3] += fluidTex[0]
                            xyzuvstc[4] += fluidTex[1]
                            xyzuvstc[5] += sl
                            xyzuvstc[6] += bl
                            xyzuvstc[8] += rx
                            xyzuvstc[9] += ry
                            xyzuvstc[10] += rz
                            xyzuvstc[11] += fluidTex[0]
                            xyzuvstc[12] += fluidTex[1]
                            xyzuvstc[13] += sl
                            xyzuvstc[14] += bl
                            xyzuvstc[16] += rx
                            xyzuvstc[17] += ry
                            xyzuvstc[18] += rz
                            xyzuvstc[19] += fluidTex[0]
                            xyzuvstc[20] += fluidTex[1]
                            xyzuvstc[21] += sl
                            xyzuvstc[22] += bl
                            xyzuvstc[24] += rx
                            xyzuvstc[25] += ry
                            xyzuvstc[26] += rz
                            xyzuvstc[27] += fluidTex[0]
                            xyzuvstc[28] += fluidTex[1]
                            xyzuvstc[29] += sl
                            xyzuvstc[30] += bl
                            buffer_ptr += 1
                            if buffer_ptr >= buffer_size:
                                buffer_size *= 2
                                vertexBuffer = <float *>realloc(vertexBuffer, buffer_size * sizeof(float) * quadFloats)

        if buffer_ptr:  # now buffer size
            vertexArray = QuadVertexArrayBuffer(buffer_ptr)
            vabuffer = vertexArray.buffer
            memcpy(vabuffer.data, vertexBuffer, buffer_ptr * sizeof(float) * quadFloats)
            self.sceneNode = VertexNode(vertexArray)
            self.sceneNode.name = "cy=%d" % self.sectionUpdate.cy
        free(vertexBuffer)
Introduction {#sec1-1}
============
Keratoconus is an ectatic noninflammatory bilateral disorder of the cornea that is characterized by abnormalities in the structure and stability of corneal collagen fibers.\[[@ref1][@ref2]\] It commonly presents in the second decade of life with the loss of visual acuity as the cornea develops the characteristic conical shape with thinning and irregular astigmatism.\[[@ref2][@ref3]\] The incidence varies among populations but appears to be more common in the Middle East and Arabian peninsula.\[[@ref4][@ref5][@ref6]\] Earlier onset is associated with more aggressive disease and faster progression.\[[@ref7][@ref8]\]
Corneal collagen cross-linking (CXL) is the only intervention that targets the progressive nature of keratoconus. It appears to strengthen the cornea by the creation of covalent bonds between the collagen fibers and has been associated with a decreased risk of disease progression.\[[@ref9][@ref10][@ref11]\] The safety and efficacy of CXL have been well validated in the adult population. However, studies recently reported in children and adolescents suggested that CXL is safe in the pediatric age group with a rate of complications similar to that found in adults.\[[@ref12][@ref13][@ref14][@ref15][@ref16][@ref17]\]
Keratoconus that manifests in early childhood is commonly associated with vernal keratoconjunctivitis (VKC).\[[@ref18][@ref19][@ref20][@ref21]\] It is possibly related to frequent eye rubbing and chronic corneal exposure to inflammatory mediators and cytokines. The safety and efficacy of CXL have not been validated in pediatric patients with VKC and keratoconus. This study aimed to determine the relative safety and efficacy of CXL in children and adolescents (\<18 years) with keratoconus and VKC.
Methods {#sec1-2}
=======
The Institutional Review Board approved this study, and it adhered to the tenets of the Helsinki Declaration. This was a retrospective case--control analysis of 87 eyes of 58 children and adolescents (\<18 years) that underwent CXL for progressive keratoconus between August 2008 and April 2014 with at least 2-year follow-up. Keratoconus was diagnosed by slit-lamp examination and corneal tomography utilizing the Orbscan II (Bausch and Lomb, Orbtek, Salt Lake City, UT); the diagnosis of progression and the decision to treat were at the discretion of the treating physician. Exclusion criteria included advanced keratoconus at presentation that mandated keratoplasty, history of ophthalmic disease other than keratoconus or VKC (e.g., previous infectious keratitis, ocular trauma, or uveitis), previous ocular surgery, and preoperative corneal thickness of \<400 μ.
The patients were divided into two groups: Group 1 with keratoconus and VKC and Group 2 with keratoconus but no VKC. VKC was diagnosed by the presence of either typical limbal follicles or tarsal cobblestone papillae at any time point; patients were not offered treatment until the VKC was deemed to be under control by the treating physician. The control (non-VKC) group included consecutive patients from the same time period as the VKC group. Data analysis of each group included gender, age at the procedure, and time to last follow-up. The main outcome measures included uncorrected distance visual acuity (UCVA), best spectacle-corrected visual acuity (BSCVA), intraocular pressure (IOP), manifest refraction, steep and flat K readings, and thinnest corneal area on Orbscan tomography; each data point was extracted from the pretreatment, 6-month follow-up, and last follow-up examinations. The rate of adverse events, including acute keratitis, corneal decompensation, delayed epithelial healing, corneal haze at 1 month, corneal haze at 6 months, corneal vascularization, increased IOP \>21 mmHg, and worsening of VKC in the 6 months following intervention, was also recorded and compared between the VKC and non-VKC groups. The UCVA and BSCVA were expressed in logarithm of minimal angle of resolution (LogMAR) ± standard deviation (SD). Progression of ectasia was defined as a steepest keratometry (K max) value change of \>2 diopters and/or decrease in thinnest corneal point of \>30 μ, comparing 6-month posttreatment to last follow-up measurements.
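The progression criterion above can be expressed as a simple check; this sketch interprets the K max "change" as an increase, and the function name is ours:

```python
def progressed(kmax_6mo, kmax_last, thinnest_6mo, thinnest_last):
    # Ectasia progression: Kmax increase > 2 D and/or thinnest-point
    # decrease > 30 microns, comparing the 6-month visit with the last follow-up.
    return (kmax_last - kmax_6mo) > 2.0 or (thinnest_6mo - thinnest_last) > 30.0
```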
Statistical analysis {#sec2-1}
--------------------
Sample size was calculated for a 1:2 case--control study (with VKC patients as cases and non-VKC patients as controls) using a comparison of means test between two independent groups, where the α error was set to 0.05, confidence interval to 95%, anticipated power to 80%, design effect to 0.5, and ß error to 0.20. The software used was G\*Power version 3.0.10 (Franz Faul, University of Kiel, Germany, 2008). The actual power after implementation was 81.2%.
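For readers without G\*Power at hand, the underlying two-independent-means sample-size formula can be sketched in a few lines of Python using only the standard library. The effect size below is purely illustrative, since the anticipated effect size is not reported above; the function name is ours.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80, ratio=2.0):
    # Two-sided test comparing two independent means;
    # ratio = controls per case (n2/n1), as in a 1:2 case-control design.
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    n1 = (1 + 1 / ratio) * ((z_a + z_b) / effect_size) ** 2
    return math.ceil(n1), math.ceil(ratio * n1)

# Illustrative large standardized effect size of 0.8 (not taken from the study)
cases, controls = n_per_group(effect_size=0.8)
```

With these illustrative inputs the formula returns 19 cases and 37 controls; the study's own inputs (including its design-effect adjustment) would differ.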
Data were collected from the health records using a specific data collection sheet; data were then cleaned, managed, and coded using Microsoft Excel 2013^®^ (Microsoft Corporation, Redmond, Washington, USA). The analysis was performed using SPSS version 22 (IBM Inc., Chicago, Illinois, USA).
Demographic variables were analyzed per patient, and ocular data points were analyzed per individual eye. Descriptive analysis was performed, with categorical variables presented as frequencies and percentages and continuous variables as mean (±SD). Inferential analysis was conducted to test the significance of potential associations across the study groups. The Chi-square test (or Fisher exact test whenever indicated) was used to detect associations between different characteristics. The Wilcoxon signed rank test was used to investigate whether there was any significant difference between pre- and postintervention measures. The confidence level was set to 95%, with a corresponding *P* value threshold of 0.05; any *P* \< 0.05 was interpreted as an indicator of statistical significance. A Bonferroni correction of 1.27 did not alter the statistical significance of any result.
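As a concrete sketch of the tests named above, here is how the paired and categorical comparisons might be run with SciPy. The paired values are synthetic stand-ins (seed and variable names are ours); the 2x2 counts correspond to the corneal vascularization row of Table 1 (4/27 VKC vs. 1/60 non-VKC).

```python
import numpy as np
from scipy.stats import wilcoxon, fisher_exact

# Synthetic paired pre/post values standing in for a keratometric
# variable measured in the same 27 eyes before and after treatment.
rng = np.random.default_rng(42)
pre = rng.normal(51.1, 5.9, size=27)
post = pre + rng.normal(-0.6, 1.2, size=27)

stat, p_paired = wilcoxon(pre, post)  # pre- vs. postintervention

# 2x2 table for a categorical finding (present/absent) in the two groups,
# e.g. corneal vascularization: rows VKC / non-VKC.
odds_ratio, p_cat = fisher_exact([[4, 23], [1, 59]])
```

Note that `fisher_exact` is the small-sample fallback for the Chi-square test, just as indicated in the text.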
Description of procedure {#sec2-2}
------------------------
All patients underwent epithelium-off CXL utilizing the standard (Dresden) protocol. The treatment was performed under topical anesthesia (or general anesthesia in patients \<12 years of age) with aseptic technique. The eyelashes and eyelid skin were cleaned with 5% povidone-iodine solution. A sterile wire speculum was used to open the lids, and alcohol 20% was applied to the central cornea within an 8-mm optical zone well for 20 s. The alcohol was removed with a sponge, and the ocular surface was irrigated with balanced saline solution; the corneal epithelium was then manually removed. Ultrasound pachymetry was used to confirm a central corneal thickness of at least 400 μ. The cornea was soaked with riboflavin 0.1% solution (10 mg of riboflavin-5-phosphate in 10-ml dextran solution) applied every 2 min for 30 min. After that, CXL was performed with an ultraviolet (UV) energy dose of 5.4 J/cm^2^ for 30 min; the UV source was confirmed to be 10 cm from the cornea in every patient. During the CXL procedure, riboflavin was applied every 2 min. At the end of the procedure, topical moxifloxacin 0.5% (Vigamox, Alcon, Fort Worth, TX, USA), topical prednisolone acetate 1%, and a bandage contact lens were administered. After surgery, all patients received topical moxifloxacin 0.5% four times daily for 1 week, topical prednisolone acetate 1% beginning four times daily and tapered off over 4--6 weeks, and lubricants (Tears Naturale Free, Alcon) as needed. Patients who had been receiving topical therapy for VKC continued their pretreatment regimen, typically a mast-cell stabilizer and cyclosporine A 1%. Patients were seen postoperatively at day 2--3 and again at day 7. The bandage contact lens was removed once the epithelial defect had completely healed.
Results {#sec1-3}
=======
Eighty-seven eyes of 58 patients met the inclusion criteria. Twenty-seven eyes of 19 patients had keratoconus with VKC (Group 1), and 60 eyes of 39 keratoconus patients had no diagnosis of VKC (Group 2). Four eyes in the VKC group and one in the non-VKC group had corneal vascularization prior to the intervention (*P* = 0.052); the groups were otherwise similar in preoperative characteristics \[[Table 1](#T1){ref-type="table"}\]. All patients in both groups were reported to have normal preoperative retina and optic nerve status. Seventy-one eyes (81.6%) were of male patients and 16 (18.4%) of female patients. In the VKC group, 26 eyes (96.3%) were of male patients and 1 (3.7%) of a female patient (*P* = 0.038). The mean age was 15.8 years (range 9.9--17.9) in the VKC group and 15.6 years (range 8.4--17.8) in the non-VKC group. The mean follow-up was 2.8 years (range 2--7) for Group 1 and 2.9 years (range 2--7) for Group 2. The proportion of patients with bilateral disease was similar between the groups (29.6% of VKC patients compared to 35.6% of non-VKC patients, *P* = 0.587).
######
Baseline characteristics before cross-linking
Variable VKC (*n*=27), *n* (%) Non-VKC (*n*=60), *n* (%) *P*
------------------------- ----------------------- --------------------------- -------
SPK 2 (7.4) 0 0.174
Corneal scar 1 (3.7) 3 (5.0) 0.998
Descemet's break 1 (3.7) 0 0.998
Corneal vascularization 4 (14.8) 1 (1.7) 0.052
Lens changes 0 1 (1.7) 0.988
Retinal abnormalities 0 0 NA
Optic nerve atrophy 0 0 NA
VKC: Vernal keratoconjunctivitis, SPK: Superficial punctate keratopathy, NA: Not available
Vernal keratoconjunctivitis group {#sec2-3}
---------------------------------
[Table 2](#T2){ref-type="table"} details the preoperative, 6-month post-CXL, and last follow-up parameters. There was no significant difference between the baseline and last follow-up of UCVA (*P* = 0.60) and BSCVA (*P* = 0.99). There was no significant difference between keratometry values at baseline and at last follow-up. The thinnest corneal area became thinner after CXL compared with the preoperative value; however, this difference was not statistically significant (*P* = 0.093 at 6 months and *P* = 0.17 at last follow-up).
######
Vision and keratometric variables before and after cross-linking, vernal keratoconjunctivitis group
Variable Preintervention, mean±SD 6 months, mean±SD *P*\* Last follow-up assessment, mean±SD *P*^†^
------------ -------------- --------------- ------------------------------------ -------------- -------
UCVA 0.35±0.22 0.29±0.19 0.308 0.30±0.24 0.605
BCVA 0.17±0.10 0.23±0.17 0.244 0.23±0.14 0.998
IOP 16.08±2.65 16.56±2.20 0.287 15.91±2.64 0.735
K1 45.23±3.97 45.47±5.00 0.494 45.30±4.57 0.782
K2 51.08±5.91 49.63±6.08 0.107 50.43±6.79 0.104
Pachymetry 451.85±41.11 388.25±146.73 0.093 425.60±99.70 0.173
SE 4.12±3.56 5.21±3.99 0.575 4.25±2.71 0.058
Cylinder 3.71±3.29 4.20±1.86 0.423 4.54±1.69 0.181
\**P* value is comparing preintervention to 6 months. ^†^*P* value is comparing preintervention to last follow-up assessment. SD: Standard deviation, IOP: Intraocular pressure, UCVA: Uncorrected distance visual acuity, SE: Spherical equivalent, BCVA: Best-corrected visual acuity
No vernal keratoconjunctivitis group {#sec2-4}
------------------------------------
As reported in [Table 3](#T3){ref-type="table"}, the UCVA at baseline and last follow-up were similar, with no significant difference. There was a statistically significant improvement in BSCVA at 6 months after CXL compared to baseline (*P* = 0.026); however, the improvement was no longer statistically significant at the last follow-up (*P* = 0.15). The other variables were analyzed at baseline, 6 months after CXL, and at the last follow-up, with no statistically significant differences.
######
Vision and keratometric variables before and after cross-linking, nonvernal keratoconjunctivitis group
Variable Preintervention, mean±SD 6 months, mean±SD *P*\* Last follow-up assessment, mean±SD *P*^†^
------------ -------------- -------------- ------------------------------------ --------------- -------
UCVA 0.40±0.32 0.43±0.37 0.337 0.38±0.29 0.338
BCVA 0.24±0.18 0.14±0.15 0.026 0.17±0.16 0.153
IOP 16.44±3.27 16.63±3.03 0.971 17.29±2.69 0.824
K1 46.28±5.75 57.54±72.79 0.657 47.09±6.27 0.997
K2 50.67±6.87 48.21±3.96 0.304 51.9±6.80 0.890
Pachymetry 454.84±57.24 444.74±57.14 0.221 422.18±104.42 0.223
SE 2.70±5.91 3.99±5.16 0.407 6.10±4.19 0.075
Cylinder 2.11±3.32 2.37±3.77 0.723 3.10±2.01 0.528
\**P* value is comparing preintervention to 6 months. ^†^*P* value is comparing preintervention to last follow-up assessment. SD: Standard deviation, IOP: Intraocular pressure, UCVA: Uncorrected distance visual acuity, SE: Spherical equivalent, BCVA: Best-corrected visual acuity
Comparison of efficacy in vernal keratoconjunctivitis and nonvernal keratoconjunctivitis groups {#sec2-5}
-----------------------------------------------------------------------------------------------
[Table 4](#T4){ref-type="table"} compares the differences between the clinical indices at presentation and at the last follow-up in the two groups. There were no statistically significant differences between the two groups. The proportion of eyes developing progression of ectasia was also compared between the two groups. Five of 27 eyes with VKC exhibited progression (18.5%) and 10 of 60 non-VKC eyes exhibited progression (16.7%); there was no significant difference in the proportions (*P* = 0.83).
######
Comparison of change (pretreatment to posttreatment) in mean visual and keratometric variables, vernal keratoconjunctivitis and nonvernal keratoconjunctivitis patients
Variable difference VKC, mean±SD Non-VKC, mean±SD *P*
--------------------- ------------ ------------- -------
UCVA 0.06±0.27 0.05±0.22 0.820
BCVA 0.00±0.16 0.06±0.17 0.750
IOP −0.05±3.43 −0.36±3.75 0.680
SE −1.58±3.47 −3.40±6.76 0.860
Cylinder −1.53±4.48 −1.08±3.71 0.711
K1 −0.10±1.62 −0.03±2.97 0.765
K2 0.68±1.98 0.23±3.7 0.351
Pachymetry 6.29±28.76 −1.14±54.53 0.649
VKC: Vernal keratoconjunctivitis, SD: Standard deviation, IOP: Intraocular pressure, UCVA: Uncorrected distance visual acuity, SE: Spherical equivalent, BCVA: Best-corrected visual acuity
Adverse events {#sec2-6}
--------------
The adverse events following CXL were compared between the groups \[[Table 5](#T5){ref-type="table"}\], with no significant difference for any variable. In all patients, the epithelium healed completely during the 1^st^week after CXL, and no patient developed corneal vascularization. One patient (in the VKC group) developed acute keratitis; it was diagnosed clinically as herpetic epithelial keratitis and responded well to topical ganciclovir gel. There was no corneal decompensation documented in either group, and no treated eye underwent keratoplasty.
######
Adverse events after cross-linking
Variable VKC (*n*=27), *n* (%) Non-VKC (*n*=60), *n* (%) *P*
------------------------------- ----------------------- --------------------------- -------
Acute keratitis (1) 1 (3.7) 0 0.680
Corneal decompensation (2) 0 0 NA
Delayed CED (3) 0 0 NA
Corneal haze month 1 (4) 6 (22.2) 19 (31.7) 0.368
Corneal haze month 6 (5) 0 2 (3.3) 0.852
Corneal vascularizastion (6) 0 0 NA
Increased IOP (7) 0 3 (5.0) 0.584
Worsening VKC in 6 months (8) 2 (7.4) 0 0.174
VKC: Vernal keratoconjunctivitis, IOP: Intraocular pressure, NA: Not available, CED: Corneal epithelial defect
Discussion {#sec1-4}
==========
Management of keratoconus in children is difficult and presents unique challenges compared with adults. If keratoconus progresses to an advanced stage and best spectacle-corrected vision is unsatisfactory, wearing gas-permeable contact lenses carries additional challenges in these patients; this may be especially true when keratoconus is associated with ocular surface disease such as VKC. Furthermore, the rate of developing acute hydrops is inversely proportional to age, and young age is an independent risk factor for requiring keratoplasty in keratoconus.\[[@ref21][@ref22]\] CXL is the only treatment modality that has been demonstrated to decrease the risk of progression and its associated morbidities.\[[@ref23]\] CXL appears to be a safe and effective intervention in children with and without VKC. No patient in our cohort developed acute hydrops or underwent keratoplasty in the cross-linked eye. Both groups had similar rates of corneal haze, which mostly resolved by 6 months after CXL. There were no signs of limbal stem cell deficiency in either group. Corneal vascularization that was present before CXL appeared to regress; this may be attributable to topical steroids, a primary angiodestructive effect of CXL, or inaccuracies in documentation. Only two patients with VKC exhibited exacerbations of VKC in the first 6 months after CXL, whereas no patient in the non-VKC group had any VKC symptoms or signs following CXL, alleviating concerns that the treatment may stimulate severe ocular surface inflammation.
In both groups, CXL was associated with a similar rate of posttreatment progression. Mean UCVA, BSCVA, manifest SE, manifest astigmatism, keratometry values, and thinnest corneal area were stable, with no significant differences between baseline and last follow-up. There was a nonsignificant increase in astigmatism; this could be attributed to the subset of patients with progression. Without a placebo group, it is not possible to state with certainty that CXL reduced the risk of progression, but the natural history of progressive keratoconus is accepted to be continued progression without intervention, especially in the pediatric age group. The rate of progression after CXL in this cohort (17.2%) is somewhat higher than reported in some series of adults but similar to other studies of pediatric patients.\[[@ref14][@ref17][@ref24]\] In this study population, there was a strong male predominance among VKC patients, which has been previously described but is not well understood.
Weaknesses of this study include the retrospective design and the inherent inaccuracies and imprecision of retrospective data collection. Potential sources of bias include differing indications for recommending cross-linking in each group or other differences in postoperative care that were not measured or controlled in this study. The objective corneal parameters derived from automated devices (such as keratometry and pachymetry) are unlikely to vary from what would be gathered in a masked prospective study, but the accuracy of other subjectively documented data points, such as corneal haze and vascularization, may suffer from the chart review method of data collection. For example, the wide range of corneal thickness measurements in the VKC group at 6 months (388.25 ± 146.73 μ) suggests that some of the measurements were not accurate (not surprising in a pediatric population), but we assume that these types of inaccuracies were evenly distributed between the groups, as well as between pre- and posttreatment measurements. Patients with \<2 years of post-CXL data were excluded, which could have introduced selection bias. Finally, the status of the hospital as a tertiary treatment center and the relative prevalence of VKC or other confounding factors in the study population may also skew the results compared to other populations.
Future areas of research may include identification of risk factors for progression after cross-linking in children, such as patient characteristics or variations in operative technique. Other variations of cross-linking technology, such as transepithelial or accelerated treatments, and combinations of CXL with excimer keratectomy or intracorneal ring segments, should be specifically studied in children before widespread adoption in the pediatric keratoconus population. Long-term studies are also required, as there may be a greater risk of treatment failure in children compared to adults.
Conclusions {#sec1-5}
===========
CXL is safe and effective in pediatric patients with and without VKC. At 2-year follow-up, the efficacy and rate of complications after CXL appear to be similar in patients with and without VKC.
Financial support and sponsorship {#sec2-7}
---------------------------------
Nil.
Conflicts of interest {#sec2-8}
---------------------
There are no conflicts of interest.
The authors would like to thank Ahmed Mousa, MSc, PhD, for his contribution to the biostatistical design and analysis of this study.
|
//
// Copyright (c) 2009, Markus Rickert
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
#ifndef RL_MATH_QUATERNIONPOLYNOMIAL_H
#define RL_MATH_QUATERNIONPOLYNOMIAL_H
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>
#include "Function.h"
#include "Polynomial.h"
#include "Quaternion.h"
#include "Rotation.h"
#include "Vector.h"
namespace rl
{
namespace math
{
template<>
class Polynomial<Quaternion> : public Function<Quaternion>
{
public:
Polynomial<Quaternion>() :
Function<Quaternion>(0, 0),
y0(),
c()
{
}
Polynomial<Quaternion>(const ::std::size_t& degree) :
Function<Quaternion>(0, 0),
y0(),
c(degree + 1)
{
}
virtual ~Polynomial<Quaternion>()
{
}
static Polynomial<Quaternion> CubicFirst(const Quaternion& y0, const Quaternion& y1, const Vector3& yd0, const Vector3& yd1, const Real& x1 = 1)
{
using ::std::abs;
using ::std::atan2;
Quaternion dy = y0.conjugate() * y1;
Real norm = dy.vec().norm();
Real dtheta = 2 * atan2(norm, abs(dy.w()));
Vector3 e = dy.vec();
if (norm > 0)
{
e /= norm;
}
Polynomial<Quaternion> f(3);
f.c[0] = Vector3::Zero();
f.c[1] = x1 * yd0;
f.c[2] = x1 * invB(e, dtheta, yd1) - 3 * e * dtheta;
f.c[3] = e * dtheta;
f.x1 = x1;
f.y0 = y0;
return f;
}
static Polynomial<Quaternion> Linear(const Quaternion& y0, const Quaternion& y1, const Real& x1 = 1)
{
using ::std::abs;
using ::std::atan2;
Quaternion dy = y0.conjugate() * y1;
Real norm = dy.vec().norm();
Real dtheta = 2 * atan2(norm, abs(dy.w()));
Vector3 e = dy.vec();
if (norm > 0)
{
e /= norm;
}
Polynomial<Quaternion> f(1);
f.c[0] = Vector3::Zero();
f.c[1] = e * dtheta;
f.x1 = x1;
f.y0 = y0;
return f;
}
Polynomial<Quaternion>* clone() const
{
return new Polynomial<Quaternion>(*this);
}
Vector3& coefficient(const ::std::size_t& i)
{
return this->c[i];
}
const Vector3& coefficient(const ::std::size_t& i) const
{
return this->c[i];
}
::std::size_t degree() const
{
return this->c.size() - 1;
}
Quaternion operator()(const Real& x, const ::std::size_t& derivative = 0) const
{
assert(derivative <= 2 && "Polynomial<Quaternion>: higher derivatives not implemented");
Vector3 u = this->eval(x);
Real theta = u.norm();
if (theta > 0)
{
u /= theta;
}
Quaternion y = this->y0 * AngleAxis(theta, u);
if (0 == derivative)
{
return y;
}
Polynomial<Quaternion> fd = this->derivative();
Vector3 axisd = fd.eval(x);
Real thetad = u.dot(axisd);
Vector3 w = u.cross(axisd) / theta;
Vector3 omega;
if (theta > 0)
{
omega = u * thetad + ::std::sin(theta) * w.cross(u) - (1 - ::std::cos(theta)) * w;
}
else
{
omega = axisd;
}
Vector3 yomega = y._transformVector(omega);
Quaternion yd = y.firstDerivative(yomega);
if (1 == derivative)
{
return yd;
}
Polynomial<Quaternion> fdd = fd.derivative();
Vector3 axisdd = fdd.eval(x);
Real thetadd = w.cross(u).dot(axisd) + u.dot(axisdd);
Vector3 wd = (u.cross(axisdd) - 2 * thetad * w) / theta;
Vector3 omegad;
if (theta > 0)
{
omegad = u * thetadd + ::std::sin(theta) * wd.cross(u) - (1 - ::std::cos(theta)) * wd + thetad * w.cross(u) + omega.cross(u * thetad - w);
}
else
{
omegad = axisdd;
}
Vector3 yomegad = y._transformVector(omegad);
Quaternion ydd = y.secondDerivative(yd, yomega, yomegad);
if (2 == derivative)
{
return ydd;
}
return Quaternion();
}
Quaternion y0;
protected:
::std::vector<Vector3> c;
private:
Polynomial<Quaternion> derivative() const
{
::std::size_t degree = this->degree() > 0 ? this->degree() - 1 : 0;
Polynomial<Quaternion> f(degree);
f.x0 = this->x0;
f.x1 = this->x1;
if (0 == this->degree())
{
f.c[0] = Vector3::Zero();
}
else
{
for (::std::size_t i = 0; i < this->degree(); ++i)
{
f.c[i] = (static_cast<Real>(this->degree() - (i + 1) + 1) * this->c[i] + static_cast<Real>(i + 1) * this->c[i + 1]) / (this->x1 - this->x0);
}
}
return f;
}
Vector3 eval(const Real& x) const
{
assert(x > this->lower() - this->functionBoundary);
assert(x < this->upper() + this->functionBoundary);
Vector3 axis = Vector3::Zero();
for (::std::size_t i = 0; i < this->degree() + 1; ++i)
{
axis += this->c[this->degree() - i] * ::std::pow(x / this->x1, static_cast<int>(this->degree() - i)) * ::std::pow(x / this->x1 - 1, static_cast<int>(i));
}
return axis;
}
static Vector3 invB(const Vector3& e, const Real& dtheta, const Vector3& x)
{
if (dtheta < ::std::numeric_limits<Real>::epsilon())
{
return x;
}
Real cosdtheta = ::std::cos(dtheta);
Real sindtheta = ::std::sin(dtheta);
return e.dot(x) * e + static_cast<Real>(0.5) * (dtheta * sindtheta) / (1 - cosdtheta) * e.cross(x).cross(e) + static_cast<Real>(0.5) * dtheta * e.cross(x);
}
};
}
}
#endif // RL_MATH_QUATERNIONPOLYNOMIAL_H
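The `Linear` factory above is spherical linear interpolation written in the class's axis-angle form. For readers without the Robotics Library at hand, a rough NumPy translation of that construction (quaternions as `[w, x, y, z]` arrays; the helper names are ours, not the library's):

```python
import numpy as np

def q_conj(q):
    # Conjugate of a unit quaternion [w, x, y, z].
    return np.array([q[0], -q[1], -q[2], -q[3]])

def q_mul(a, b):
    # Hamilton product of two quaternions.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def slerp(q0, q1, t):
    # Mirrors Polynomial<Quaternion>::Linear: dq = conj(q0) * q1,
    # dtheta = 2 * atan2(|vec(dq)|, |w(dq)|) picks the short way round,
    # then y(t) = q0 * AngleAxis(t * dtheta, e).
    dq = q_mul(q_conj(q0), q1)
    norm = np.linalg.norm(dq[1:])
    dtheta = 2.0 * np.arctan2(norm, abs(dq[0]))
    e = dq[1:] / norm if norm > 0 else np.zeros(3)
    half = 0.5 * t * dtheta
    return q_mul(q0, np.concatenate(([np.cos(half)], np.sin(half) * e)))
```

Halfway between the identity and a 90° rotation about z, this yields a 45° rotation about z, as expected; the cubic case adds the boundary-velocity terms via `invB`.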
|
In its pitch to donors, Cancer Fund of America touted "direct patient aid'' as one of the many ways it helped tens of thousands of Americans struggling with deadly disease.
But instead of medical treatment or financial help, patients got boxes of sample-size soap, seasonal greeting cards and Little Debbie Snack Cakes.
Meanwhile, the family behind Cancer Fund built a network of "sham charities'' that were designed to enrich officers at the expense of sick women and children, as alleged in a complaint filed Tuesday by the Federal Trade Commission and the attorneys general and secretaries of state of all 50 states.
According to the complaint, donations intended for the sick paid for "extravagant insider benefits,'' including cars, college tuition, gym memberships, concert tickets, a Caribbean cruise and trips to Las Vegas and other touristy locales.
"Some charities send children to Disney World,'' South Carolina Secretary of State Mark Hammond said at a Washington, D.C., news conference announcing the complaint. "These charities sent themselves to Disney World.''
Named in the complaint are Cancer Fund and two affiliated charities, Breast Cancer Society and Children's Cancer Fund of America. Combined, those charities raised $187 million over four years, yet spent almost 90 percent of the contributions on for-profit telemarketers and the "steady lucrative employment'' of Cancer Fund founder James Reynolds Sr., his ex-wife, his son and dozens of members of their extended family.
Details in the 148-page complaint mirror what the Tampa Bay Times and the Center for Investigative Reporting uncovered in a yearlong investigation published in 2013. Their report ranked Cancer Fund of America as No. 2 of America's 50 Worst Charities.
Among the plaintiffs is Florida Attorney General Pam Bondi.
Jessica Rich, head of the FTC's consumer protection bureau, hailed the filing as a "historic moment'' in which the FTC and all 50 states "have joined together to present a united front against charity fraud.''
Officials believe millions of people have given money to the cancer charities, with donations averaging about $20, Rich said.
In response to the complaint, Children's Cancer Fund, headed by Reynolds' ex-wife, Rose Perkins, and Breast Cancer Society, headed by his son, James Reynolds II, have already agreed to shut down.
A proposed order imposes $95 million in judgments against them, but in reality they will pay only about $1 million, to be divided among the states to cover investigative costs and to help cancer patients.
"Unfortunately, the money is almost gone,'' Rich said.
Perkins and Reynolds II also will be barred from operating a nonprofit and from soliciting contributions. Neither could be reached for comment.
Not included in the settlement and still in business are the original organization, the Tennessee-based Cancer Fund of America, and its telemarketing arm, Cancer Support Services of Dearborn, Mich.
The elder Reynolds, who is president of both organizations, did not respond to calls and an email seeking comment.
A former Army medic without a college degree, Reynolds worked his way up to head the Knoxville chapter of the prestigious American Cancer Society before being fired for, among other things, sloppy bookkeeping. He founded the Cancer Fund in the mid-1980s and his family began spinning off new cancer charities, each with a relative or close associate in control.
The charities had something else in common. While recognized by the IRS as nonprofit groups, they have spent the vast majority of donations on for-profit solicitation companies, primarily the telemarketers who generated the donations.
"The corporate defendants operated as personal fiefdoms characterized by rampant nepotism, flagrant conflicts of interest and excessive insider compensation, with none of the financial and governance controls that any bona fide charity would have adopted,'' the complaint says.
Among the allegations:
• At Cancer Fund of America, the elder Reynolds employed at least 12 members of his extended family regardless of where in the country they lived. When his son Michael moved to Montana, the Cancer Fund opened a "chapter'' there to keep him on the payroll. The chapter was not successful and later closed.
• At Children's Cancer Fund, Reynolds' ex-wife, Perkins, hired 11 friends and relatives, including her two daughters and her sister. Between 2008 and 2012, the charity paid those employees more than twice what it provided in financial assistance to young cancer patients. Twice a year, Perkins doled out across-the-board bonuses of up to 10 percent of salary, regardless of employee performance.
• At Breast Cancer Society, Reynolds II promoted his wife, Kristina, to the new, unadvertised, second-in-command position of "operations and public relations manager.'' She hired several of her relatives, including her two sisters and her mother, a caterer who was put to work writing grants. The Arizona-based society also opened a branch in Pennsylvania near the home of its then-board chairman, who hired his wife and mother-in-law to work there.
• All three cancer charities allowed employees to use corporate credit cards for personal expenses and did not require payment until the end of each year, effectively giving them interest-free loans. Credit cards were used to buy food, gas, movie tickets, Jet Ski rentals, video games, meals at Hooter's and purchases at Victoria's Secret — "all ultimately paid for by donors,'' the complaint says.
According to the FTC, telemarketers who raised funds for the charities made pitches that were intended to "tug at donors' heart strings and open their wallets,'' with little concern for accuracy.
The Cancer Fund approved one script saying it never wanted to have to tell a family that it couldn't provide a wig for a child with hair loss because fund-raising goals had fallen short. In fact, the fund did not have a program to provide wigs for children in chemotherapy.
Another script, used by telemarketers for the Children's Cancer Fund, claimed that the organization helps kids with hospice needs, medical supplies and pain medication — all "completely false,'' the complaint says.
And in touting the purported good works of the Breast Cancer Society, telemarketers said contributions supported a "Hope Supply Program" that provided thousands of cancer patients access to local warehouses where they could get baby and women's clothing, toiletries and other items free of charge.
In fact, the complaint says, access was severely limited because there were only three warehouses — in Arizona, Pennsylvania and Arkansas. Between 2009 and 2012, fewer than 500 people ever "shopped'' at the stores.
Contact Susan Taylor Martin at smartin@tampabay.com or (727) 893-8642. |
Q:
for-loops in Python modules
I'm writing a function for an implicit scheme for solving a specific differential equation. The function looks like this:
import numpy as np

def scheme(N,T):
    y = np.zeros(N+1)  # Array for implicit scheme
    h = T/N            # Step length
    for i in range(N):
        y[i+1] = y[i] + h*(1+4*y[i])
    print y
I save the file and later import it the usual way, but when I run the scheme function, y = [0 ... 0] where ... are N-1 zeros. It seems like the values are lost in the scope of the for-loop.
If I instead write the whole function in the interpreter (which in my case is Spyder), everything works as it should.
Why doesn't it work when importing the function from the module?
A:
h = T/N
Is it possible that T and N are both integers and T < N? In that case h = 0 (and y stays all zeros), because in Python 2 this is an integer division (1/2 == 0).
Try to replace this line with
h = 1. * T / N
and see the results.
y[i+1] = y[i] + h*(1+4*y[i])
can be rewritten as
y[i+1] = y[i] + h + 4 * h * y[i]
              ^^^
which means that for y[i] = 0, the new y[i+1] will be h. If the integer division T/N makes h zero, then zero is what you get.
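Putting the fix together, a corrected version of the function might look like this (it also returns the array instead of printing it; as an aside, the update shown is a forward/explicit Euler step for y' = 1 + 4y, despite the "implicit scheme" comment in the question):

```python
import numpy as np

def scheme(N, T):
    y = np.zeros(N + 1)
    h = float(T) / N  # force true division; on Python 3, T / N already is
    for i in range(N):
        y[i + 1] = y[i] + h * (1 + 4 * y[i])
    return y

y = scheme(10, 1)  # now nonzero; closed form: y[n] = ((1 + 4*h)**n - 1) / 4
```

On Python 3 the explicit `float` cast is redundant, because `/` performs true division regardless of operand types.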
|
Auto News and Information
Auto insurance is a scam
Too many people accept mandatory car insurance as a fact of life in the United States. Beginning with Massachusetts in 1927 and appearing in most other states by 1970, compulsory car insurance has been foisted on drivers in the most coercive sense of the word. Exorbitant sums of money are spent on coverage that most people never need. This has sparked claims, persisting to this day, that auto insurance is a scam perpetrated on the public and that lawmakers have been bought by the insurance cartels.
Pay and pay without incident
Let’s say that a named insured pays a $500 annual premium for the “privilege” of mandatory car insurance. That’s $5,000 over 10 years, which is worthwhile if the insured is ever involved in a serious accident that leads to personal injury and/or severe property damage. However, various actuarial data studies indicate that, on average, most drivers go 20 to 30 years without any at-fault accidents that require their automotive insurance to pay out.
Over 30 years, that’s $15,000 down the drain, money that could go toward more worthwhile investments like retirement savings, stock investments and college funds for children. Imagine watching a college education vanish into a black hole. That’s what critics have called mandatory car insurance – a black hole, and a scam.
But wait, auto insurance is necessary risk protection
Supporters of auto insurance claim that it is a financial tool that protects against catastrophe. Much like health insurance, when you aren’t in trouble you don’t need the coverage, but when you are in trouble, you’re thankful you have it. Some of the ways automotive insurance protects the insured, according to virtually any insurer out there:
Safeguarding the investment the insured has made in their car
Paying for accident-related medical bills
Shielding against personal liability for property damage
Protecting assets against seizure
Protecting against other uninsured or under-insured motorists
Protecting against costs associated with theft and vandalism, or natural disaster
No direct, specifically quantifiable impact of going without
It is theoretically possible to drive without automotive insurance and harm nobody, yourself included. Provided you’re never pulled over by a police officer, you’ll never have to provide proof of automotive insurance. Provided you’re never involved in an accident, either, you’ll never need insurance.
But what about the risk involved in allowing people to drive sans automotive insurance? It could be a valid point, but should the theoretical risk that an individual might damage someone’s property or person impose an obligation on everyone? Is there a direct, specific negative impact on others when one driver hits the road without insurance?
No, there isn’t. Yet Americans continue to pay because they are forced to do business with a cartel. If car insurance were optional, insurers couldn’t shake down consumers so easily with high surcharges over speeding tickets. As it is considered mandatory, however, consumers have no leverage. And since mandatory car insurance became the norm over the past 20 to 25 years, premiums have soared.
At liberty to choose
Individuals in a free society should, barring definite harm posed to others, be allowed to choose what is best for them. Basing a major financial obligation on the potential that something may happen should be unthinkable in such a society. Nobody should have been surprised when premiums were not cut in half in the 1970s, when car insurance in the U.S. became mandatory. And like mindless sheeple, we continue to accept being fleeced financially. |
For the first time in years, people who want to work for the federal government will have an easier time applying for a job. OPM Director John Berry announced the sweeping changes after President Barack Obama signed an executive order on Tuesday. |
Q:
How to get form data to controller in Magento 2?
I have created a custom module in which I have created a form like this
<form action="<?php echo $this->getUrl('AdminSample/sampleOne/index');?>" method="post" enctype="multipart/form-data">
<table id="views" width="900px">
<tr>
<td>View 1:</td>
<td><input data-form-part="product_form" type="file" name="fileToUpload1" id="fileToUpload" required ></td>
</tr><tr>
<td>View 2:</td>
<td><input data-form-part="product_form" type="file" name="fileToUpload2" id="fileToUpload" required></td>
</tr><tr>
<td>View 3:</td>
<td><input data-form-part="product_form" type="file" name="fileToUpload3" id="fileToUpload" required></td>
</tr><tr>
<td>View 4:</td>
<td><input data-form-part="product_form" type="file" name="fileToUpload4" id="fileToUpload" required></td>
</tr><tr>
<td>View 5:</td>
<td><input data-form-part="product_form" type="file" name="fileToUpload5" id="fileToUpload" required></td>
</tr><tr>
<td>View 6:</td>
<td><input data-form-part="product_form" type="file" name="fileToUpload6" id="fileToUpload" required></td>
</tr><tr>
<td>View 7:</td>
<td><input data-form-part="product_form" type="file" name="fileToUpload7" id="fileToUpload" required></td>
</tr><tr>
<td>View 8:</td>
<td><input data-form-part="product_form" type="file" name="fileToUpload8" id="fileToUpload" required></td>
</tr><tr>
<td>View 9:</td>
<td><input data-form-part="product_form" type="file" name="fileToUpload9" id="fileToUpload" required></td>
</tr><tr>
<td>View 10:</td>
<td><input data-form-part="product_form" type="file" name="fileToUpload10" id="fileToUpload" required></td>
</tr><tr>
<td>View 11:</td>
<td><input data-form-part="product_form" type="file" name="fileToUpload11" id="fileToUpload" required></td>
</tr><tr>
<td>View 12:</td>
<td><input data-form-part="product_form" type="file" name="fileToUpload12" id="fileToUpload" required></td>
</tr>
</table>
<br><br><br>
<h2>Parts & Colors</h2>
<input type="button" onclick="addTable()" value="Add Part">
<div id="divResults">
</div>
<button type="submit">Save</button>
</form>
and I have created a custom controller, but I'm getting redirected to the admin dashboard. Is there anything wrong in this code?
Please help me get the details of all uploaded images, such as name, tmp_name, size and type.
Please ask if any additional information is required.
This is the custom controller I wrote:
<?php
namespace Tym17\AdminSample\Controller\Adminhtml\SampleOne;
class Index extends \Magento\Backend\App\Action
{
/**
* Index Action*
* @return void
*/
protected $resultRawFactory;
public function __construct(\Magento\Backend\App\Action\Context $context) {
parent::__construct($context);
}
public function execute()
{
echo "one";
print_r($_POST);die;
}
}
A:
It seems that you're missing the form key. Magento validates this CSRF token on every POST request; when it is missing or invalid, the request fails validation and you get redirected (in the admin, typically to the dashboard). Add this inside your form:
<input name="form_key" type="hidden" value="<?php /* @escapeNotVerified */ echo $block->getFormKey(); ?>" />
Also, inside the controller, prefer the request object over the PHP superglobals: $this->getRequest()->getPostValue() for the form fields and $this->getRequest()->getFiles() for the uploads, whose entries should carry the name, type, tmp_name, error and size values you are after.
|
[Retroperitoneal xanthogranuloma: a case report].
A case of retroperitoneal xanthogranuloma is reported. A 51-year-old man was referred to our hospital for the evaluation and treatment of right flank pain and hydronephrosis. Intravenous urography (DIP) and retrograde pyelography revealed the stricture in the middle portion of the right ureter. Ureteroscopy revealed no mucosal lesions. Computed tomography revealed the paraureteric mass lesion. Partial ureterectomy, mass resection and uretero-ureterostomy were performed. Then a double J stent was left in place for 6 weeks. The stricture was due to a yellowish mass adhered to the right side of the ureter. The resected mass measured 1.0 x 2.0 x 1.0 cm. The histopathological diagnosis was xanthogranuloma. The patient is in good health without recurrence 4 months after the surgery. |
Donald Trump eagerly injected himself into the Democratic Party’s email controversy on Monday, saying the revelations that the party apparatus backed Democrat Hillary Clinton over Bernie Sanders proved his charges that the system is rigged.
Trump, kicking off a three-day campaign swing with his vice presidential running mate, Indiana Governor Mike Pence, returned to his freewheeling style after giving a scripted speech on Thursday accepting the Republican presidential nomination.
During an hour-long event in Roanoke, Virginia, Trump labeled Clinton “low-energy,” the same characterization he lobbed at Republican rival Jeb Bush; attacked her running mate, U.S. Senator Tim Kaine of Virginia; and complained about the air conditioning in the hotel ballroom where he spoke.
“I think the ballroom and the people who own this hotel ought to be ashamed of themselves,” Trump said.
Trump took particular delight in making light of Democratic disunity as party loyalists gather in Philadelphia this week to anoint Clinton as their nominee, after a week in which Republicans struggled to unify behind Trump at their convention in Cleveland.
Trump waved away Republican disunity as essentially isolated pockets of resistance and made an apparent reference to U.S. Senator Ted Cruz, who was booed off stage in Cleveland when he did not endorse Trump after losing to him in a bitter primary race.
“We had a couple people who probably destroyed their career, but who knows,” Trump said. “Look what’s going on in Philadelphia. … We had no riots, no nothing. It was unbelievable. I’ll never forget it as long as I live.”
Trump’s strongest words were for Democratic National Committee Chairwoman Debbie Wasserman Schultz, who was forced to resign on Sunday in the fallout over leaked emails showing the committee backed Clinton over democratic socialist Sanders.
The New York businessman said it was proof the “system” is rigged against outsider candidates.
“Debbie was totally loyal to Hillary, and Hillary threw her under the bus,” Trump said, adding, “I don’t want her covering MY back.”
He then launched into a riff about “Hillary Rotten Clinton,” a play on her full name, Hillary Rodham Clinton. |
368 So.2d 1187 (1979)
Louis D. ARNAUD, (Plaintiff-Appellant),
v.
MOLBERT BROTHERS POULTRY & EGG COMPANY, INC., (Defendant-Appellee).
No. 6890.
Court of Appeal of Louisiana, Third Circuit.
March 7, 1979.
John L. Van Norman, III, Lake Charles, for plaintiff-appellant.
Plauche, Smith, Hebert & Nieset, Frank M. Walker, Jr., Raggio, Cappel, Chozen & Berniard by Frederick L. Cappel, Lake Charles, for defendant-appellee.
Before CULPEPPER, FORET and DOUCET, JJ.
DOUCET, Judge.
The issue raised by this appeal is whether or not the filing of a petition for workmen's compensation benefits interrupts the prescriptive period for bringing a tort action arising out of the same accident, when the allegations in the petition are sufficient for bringing the former but not the latter. We hold that prescription is not interrupted in such cases.
This suit was brought by plaintiff to secure indemnification for injuries he allegedly sustained while performing duties arising out of and during the course and scope of his employment by defendant. The original petition, filed July 23, 1973, was captioned as a petition for workmen's compensation *1188 benefits. It contained factual allegations, which although sufficient for a workmen's compensation suit, were inadequate as a basis for pursuing a claim in tort.
On January 14, 1976, a first amending and supplemental petition was filed which included a prayer for tort damages and the factual allegations necessary to support such an action. The district court sustained the peremptory exception of prescription raised by defendant, because the tort claim was asserted more than a year after the date of the accident. Plaintiff appeals this ruling, claiming that the running of prescription was interrupted by the filing of the original petition.
Absent a suspension or interruption, tort actions prescribe in one year. La.Civil Code art. 3536. Plaintiff contends that prescription was interrupted in this case by L.S.A. R.S. 9:5801, which provides:
"The filing of a suit in a court of competent jurisdiction shall interrupt all prescriptions affecting the cause of action therein sued upon, against all defendants, including minors and interdicts."
In order to determine the applicability of this provision to a particular case, an understanding of the meaning of the phrase "cause of action" is necessary.
In Trahan v. Liberty Mutual Insurance Company, 314 So.2d 350 (La.1975), which was relied upon by the district court, our supreme court discussed at length the meaning of this phrase. After reviewing the jurisprudence relevant to this point, the court concluded:
"The causes of action, however, are different, a cause of action being an act by a defendant which gives a plaintiff a right to invoke judicial interference on his behalf."
Plaintiff's original petition did not allege any acts by the defendant which would have entitled him to bring a tort action. There was no assertion that plaintiff's injuries were the result of defendant's breach of any duty owed to him. Therefore, it did not set forth a cause of action in tort.
The difference between the two petitions is not limited to the demands, as suggested by counsel for plaintiff in his brief. The original petition alleged a cause of action for workmen's compensation benefits, based essentially on defendant's employment of plaintiff at the time he was injured. The first amending and supplemental petition alleged an entirely different cause of action, based on defendant's alleged grossly negligent maintenance of its premises which produced a dangerous substance, the presence of which was known to defendant's executives. The factual allegations of the two were significantly different.
Our decision is entirely harmonious with Lemieux v. Cousins, 154 La. 811, 98 So. 255 (1923) in which the court dealt with the converse situation of the institution of a suit in tort, followed by a workmen's compensation action. The original petition in that case contained all of the factual allegations necessary for a workmen's compensation suit. In this case, the original petition did not contain the factual allegations needed to litigate a claim in tort.
For the above and foregoing reasons, the decision of the district court is affirmed. The costs of this appeal are assessed against Plaintiff-Appellant.
AFFIRMED.
|
Turkey shelled Kurdish targets in Syria Sunday despite calls by France and the U.S. to halt such actions. The Turkish military opened a new front against Kurdish forces Saturday after the Kurdish People’s Protection Units (YPG) captured positions in the northern Syrian city of Aleppo.
While the U.S. considers the YPG an ally in the war on the Islamic State group, aka either ISIL or ISIS, Turkey considers it to be closely allied with the outlawed Kurdistan Workers’ Party (PKK). The PKK has carried out an insurgency against the Turkish government for decades.
U.S. Vice President Joe Biden spoke with Turkish Prime Minister Ahmet Davutoğlu by phone Sunday, calling for an end to the artillery strikes on the Kurdish militants. Echoing the appeal made by the U.S., France also demanded “an immediate halt to the bombing, both that of the regime and its allies throughout the country and that of Turkey in the Kurdish zones,” Agence France-Presse reported.
The Syria-based YPG is the armed wing of the Kurdish Democratic Union Party (PYD). The Syrian Civil War has emboldened Kurdish fighters, who now control considerable swaths of war-ravaged Syria. Turkish officials fear the empowered Kurdish forces within Syria could bolster Kurdish forces within Turkey, where they have issued fresh calls for self-rule in recent months.
The YPG managed to seize the Menagh air base and several other key positions in Syria from Islamist rebels in recent days. Turkey has demanded it withdraw.
A tenuous ceasefire between the Turkish government and Kurdish militants in the country unraveled last summer amid spillover from the war in Syria. Since then, Turkey has launched an aggressive airstrike campaign against PKK targets in both Iraq and Turkey’s restive southeastern region, which is predominantly Kurdish. The recent developments in Syria have stirred concern that fighting between the Turkish forces and Kurdish militants could spread.
Human rights groups have raised alarms over the massive humanitarian crisis created by Turkey’s military campaign targeting Kurdish fighters, as civilian casualties have climbed in recent months. |
Blizzard has canceled one of the largest, most ambitious games it ever tried to create. At least, it might have been. No one knew that much about it. And now it's dead.
The quotes from the company about the decision are hard to read. The work and the budget that went into Titan have now been sunk, although it's likely some of the ideas and technology behind the game will aid Blizzard in some way. It's rare that these things are a total loss, although this means that there are likely people who have put the better part of a decade into a game that will never see the light of day.
But this move signals a new direction for the company, and the industry is bound to pay attention.
"I wouldn't say no to ever doing an MMO again," Blizzard co-founder and CEO Mike Morhaime told Polygon. "But I can say that right now, that's not where we want to be spending our time."
Why would anyone?
The changing of the guard
World of Warcraft has long been one of the most popular, and profitable, games in the business, but things are changing. It's not just that it's going to become harder to hang onto players. The fact is that no MMO in quite some time has managed to remain profitable or to keep its subscription fee.
It's a space filled with battered wrecks of games that have been forced to move to a free-to-play model, and it's unlikely that any of these titles will ever come close to World of Warcraft now from a business perspective, much less the numbers the game enjoyed at its height. Taking another run at that hill would require a huge investment, and a very special product. And Blizzard didn't feel Titan would have been that game.
"We took a step back and realized that it had some cool hooks. It definitely had some merit as a big, broad idea, but it didn't come together. It did not distill," Chris Metzen, Blizzard's senior vice president of story and franchise development, told Polygon. "The music did not flow. For all our good intentions and our experience and the pure craftsmanship that we brought together, we had to make that call."
Blizzard didn't feel Titan would have been that game
Being able to make that call at all is one of the most telling examples of Blizzard's power in the video game world. In nearly every other situation, with this much time and money riding on the seven years the game had been in development, the publisher would likely have forced some kind of product to be released.
Blizzard was given the privilege of taking its own dog behind the shed to put it down, which is an odd way for a company to show its muscle. But make no mistake that this is a company flexing its ability to react to the market and only release the games it's completely sure of.
It's unlikely that we'll see another company create the kind of MMO we're used to seeing from companies like Blizzard, and instead the sort of "always-online but please don't call it an MMO" class of games like Titanfall and Destiny are bound to take over. Blizzard is moving into other areas with the upcoming Heroes of the Storm, a clear shot at the monstrous MOBA market, and Hearthstone, a trading card game that has become a massive success.
The puck had disappeared
This is how Blizzard works: They follow the crowd, but the company is talented and clever enough that it has always been able to make the definitive version of the game they're trying to emulate.
"Let's take a game that we all love playing, do what we want to do to make it ours, just like we've done with every single game from the past. Vikings was Lemmings. Rock and Roll Racing, name any of those car games out there. Warcraft came from Dune, so it's the same thing with Heroes of the Storm," Sam Didier, a senior art director at Blizzard told me when discussing Heroes of the Storm.
"It's like, we take a game that we like and then we make our version of it. If we like it, it turns out that people like it as well."
During the course of Titan, it sounds like Blizzard lost that path and, even worse, was no longer skating towards where the puck would be in a few years. The puck had disappeared. Blizzard was chasing its own tail, and may have been inspired by its own success with a previous product rather than anything that ignited the company's imagination.
Killing the game now, and still having two of the most promising titles in active development in Hearthstone and Heroes of the Storm, not to mention the still-popular World of Warcraft, shows the power and might of Blizzard, and it may be a situation where a failed project makes them one of the more admired developers in the business. |
Drugs against leishmaniasis: a synergy of technology and partnerships.
To date, there are no vaccines against any of the major parasitic diseases, and chemotherapy is the main weapon in our arsenal. There is an urgent need for better drugs against Leishmania. With the completion of the human genome sequence and soon that of Leishmania, for the first time we have the opportunity to identify novel chemotherapeutic treatments. This requires the exploitation of a variety of technologies. The major challenge is to take the process from discovery of drug candidates all the way along the arduous path to the marketplace. A crucial component will be the forging of partnerships between the pharmaceutical industry and publicly funded scientists to ensure that the promise of the current revolution in biology lives up to our hopes and expectations. |
Ade B12 Lama Brand
Formulated exclusively for llamas and alpacas, this top-choice Lama Brand ADE B12 paste is used for conformation problems in growing animals as well as in stressful situations. This camelid-specific paste supplies the ideal amounts of vitamins for your animals. |
In the past few months I had fallen victim to dining out ennui, meaning I often ended up at Sichuan restaurants or something within a 2-block radius of my apartment. So one night, following the recommendation of my friend Sandra, Jacob and I took a 20-minute cab ride to Qin Tang Fu, a restaurant specializing in Shaanxi cuisine.
What most intrigued me about this spot, other than the decor of all low kiddie-sized tables and chairs, was the Shaanxi rice wine. It was slightly sweet, almost non-alcoholic, and immensely more drinkable than the stronger rice wines I've tasted. (Obvious disclaimer: I am not a fan of Chinese rice wine.) It reminded me somewhat of horchata, or makgeolli, though Sandra describes it best on her website as "a grown-up's version of soybean milk."
Most people who have experienced street food in Beijing, Shanghai, or Xi'an will recognize roujiamo, a Shaanxi specialty consisting of a pita-like bun filled with fatty pork and shredded vegetables. The version at Qin Tang Fu is much bigger than street versions, and contains no chili sauce. In the front window, however, you can watch the cooks knead the buns and bake them on a coal-heated drum.
Shaanxi food is also known for its liberal use of vinegar and garlic. Chewy hand-pulled noodles and wide flat spinach noodles both come doused with vinegar, garlic, and optional chili sauce. Vegetables are frequently dressed the same way. (Note to anyone contemplating this place for a first date: NO.) The favorite dish of the night was a star anise braised chicken dish that, although much uglier than in the picture menu, was so tender and juicy neither of us cared.
Flash forward to two weeks later, the morning of our departure to San Francisco. We had just dropped off some kitchenware at my cooking school and had time for a quick last Chinese meal before the long flight. We were surrounded by Sichuan restaurants but nothing jumped out. Stacey, who co-owns The Hutong, suggested a Guangxi restaurant across the street.
The rustic charm of the interior once again sucked me in. Not only were there cute wooden chairs and traditional flowery costumes decorating the walls, but the waitstaff changed the music from Snoop Dogg to Chinese folk music while we were eating. Ah, the power of quaintness.
(Guangxi is the province to the west of Guangdong, a.k.a. Canton, and has cuisine that also emphasizes light, non-spicy sauces.)
Our pork-stuffed fried tofu puffs came with a mild chili garlic sauce. There was also a stir-fried pork and snowpea dish that was a less fatty version of twice-cooked pork. And the Cantonese side of me came out when I insisted on a big vat of chicken feet and mushroom soup. (Or maybe it was the side of me that was freezing from the 19 degree Fahrenheit temperatures outside.)
I drank the soup and took a few bites of the chicken feet, thinking both of my childhood love of sugary dim sum claws (fong zhao) and, later, an acquired Western aversion to the stuff. The soup was wonderful and warming, but also made us wonder...would war break out if the rest of the world stopped exporting all its chicken feet to this country?
Yu Chou basil coconut curry from July's Tangra. Our next dinner will be Aug 23 at 61 Local and tix now available at tangrasummeraug2015.eventbrite.com 🍅🍠🍑🌽🍆 pic reframed from @chitra
Icy cold raspberry smoothie, so needed for today ☀️
Love this mini print by @jordangraceowens. Going onto office wall ❤️
Dumpling totes finally back in stock! Fulfilling pre-orders now...nab yours for summer beaching, farmers market-ing, and picnicking before they sell out again ☀️👙🍑🍇🍉
Cold sesame noodles, perfect for summer. Learn to make the perfect version & many other dishes in my online class with #Craftsy. This week use this link for $10 off www.craftsy.com/ext/DianaKuan_5211_D
Visiting the Society of Illustrators. Love this piece done with merlot, ink, & gouache. Must start drawing more with wine. 🍷✒️🎨
Two months ago I traveled to Denver to film an online class series with #Craftsy. It was the perfect project for reaching & teaching techniques to home cooks outside of NY. Today I'm excited to announce Chinese Takeout Favorites just launched! And you can also use this link for an exclusive $20 (50%!) off this week. Enjoy! www.craftsy.com/ext/DianaKuan_5211_H |
Slow channel kinetics in heart muscle.
The cardiac slow inward current (Isi) is mediated by a specific conductance system, the slow channel. It is highly selective for Ca and other bivalent cations such as Sr, whilst Na permeability is extremely small. The kinetics of activation, inactivation and recovery from inactivation are voltage- and temperature-sensitive. In contrast to the Hodgkin-Huxley model, development and removal of inactivation operate with different time constants, at least in the ventricular myocardium of cats. Moreover, both processes exhibit a different pharmacological susceptibility. Thus a second inactivation variable, having smaller rate constants than the inactivation variable f, has to be introduced, which simultaneously suggests the existence of slow inactivation in cardiac slow channels. |
---
layout: post
comments: true
title: "Bài 25: Matrix Factorization Collaborative Filtering"
title2: "25. Matrix Factorization Collaborative Filtering"
date: 2017-05-31 15:22:00
permalink: 2017/05/31/matrixfactorization/
mathjax: true
tags: Recommendation-systems, dimensionality-reduction
category: Recommendation-systems
sc_project: 11358048
sc_security: 5bef7cd2
img: /assets/25_mf/mf1.png
summary: In this post, we will get acquainted with another approach to Collaborative Filtering based on Matrix Factorization (or Matrix Decomposition).
---
**On this page:**
<!-- MarkdownTOC -->
- 1. Introduction
- 2. Building and optimizing the loss function
- 2.1. The loss function
- 2.2. Optimizing the loss function
- 3. Python implementation
- 3.1. `class MF`
- 3.2. Application to MovieLens 100k
- 3.3. Application to MovieLens 1M
- 4. Discussion
- 4.1. Adding biases
- 4.2. Nonnegative Matrix Factorization
- 4.3. Incremental Matrix Factorization
- 4.4. Others
- 6. References
<!-- /MarkdownTOC -->
<a name="-gioi-thieu"></a>
## 1. Introduction
In [Lesson 24](/2017/05/24/collaborativefiltering/), we got acquainted with an approach to Collaborative Filtering based on the behavior of neighboring _users_ or _items_, called Neighborhood-based Collaborative Filtering. In this post, we will get acquainted with another approach to Collaborative Filtering based on _Matrix Factorization_ (or _Matrix Decomposition_).
Recall that in [Content-based Recommendation Systems](/2017/05/17/contentbasedrecommendersys/), each _item_ is described by a vector \\(\mathbf{x}\\) called the _item profile_. In that method, we need to find a coefficient vector \\(\mathbf{w}\\) for each _user_ such that the known _rating_ that the _user_ gave the _item_ is approximated by:
\\[
y \approx \mathbf{xw}
\\]
With this approach, the [_Utility Matrix_](/2017/05/17/contentbasedrecommendersys/#-utility-matrix) \\(\mathbf{Y}\\), assuming it has been fully filled in, is approximated by:
\\[
\mathbf{Y} \approx \left[ \begin{matrix}
\mathbf{x}_1\mathbf{w}_1 & \mathbf{x}_1\mathbf{w}_2 & \dots & \mathbf{x}\_1 \mathbf{w}_N\\\
\mathbf{x}_2\mathbf{w}_1 & \mathbf{x}_2\mathbf{w}_2 & \dots & \mathbf{x}\_2 \mathbf{w}_N\\\
\dots & \dots & \ddots & \dots \\\
\mathbf{x}_M\mathbf{w}_1 & \mathbf{x}_M\mathbf{w}_2 & \dots & \mathbf{x}\_M \mathbf{w}_N\\\
\end{matrix} \right]
= \left[ \begin{matrix}
\mathbf{x}_1 \\\
\mathbf{x}_2 \\\
\dots \\\
\mathbf{x}_M \\\
\end{matrix} \right]
\left[ \begin{matrix}
\mathbf{w}_1 & \mathbf{w}_2 & \dots & \mathbf{w}\_N
\end{matrix} \right] = \mathbf{XW}
\\]
where \\(M, N\\) are the number of _items_ and the number of _users_, respectively.
Note that \\(\mathbf{x}\\) is built from the _item_'s descriptive information, and this construction is independent of the process of finding suitable coefficients for each _user_. Thus, building the _item profiles_ plays a very important role and directly affects the performance of the model. Moreover, building a separate model for each _user_ leads to results that are not really good, since it fails to exploit what similar _users_ have in common.
Now, suppose that we do not need to build the _item profiles_ \\(\mathbf{x}\\) in advance; instead, the feature vector for each _item_ can be trained simultaneously with each _user_'s model (here, a coefficient vector). This means that the optimization variables are both \\(\mathbf{X}\\) and \\(\mathbf{W}\\), where \\(\mathbf{X}\\) is the matrix of all _item profiles_, each **row** corresponding to one _item_, and \\(\mathbf{W}\\) is the matrix of all _user models_, each **column** corresponding to one _user_.
With this approach, we are trying to approximate the _Utility Matrix_ \\(\mathbf{Y} \in \mathbb{R}^{M \times N}\\) by the product of two matrices \\(\mathbf{X}\in \mathbb{R}^{M\times K}\\) and \\(\mathbf{W} \in \mathbb{R}^{K \times N}\\).
Usually, \\(K\\) is chosen to be much smaller than \\(M, N\\). In that case, both matrices \\(\mathbf{X}\\) and \\(\mathbf{W}\\) have rank at most \\(K\\). That is why this method is also called _Low-Rank Matrix Factorization_ (see Figure 1).
<hr>
<div class="imgcap">
<img src ="/assets/25_mf/mf1.png" align = "center" width = "800">
<div class = "thecap" align = "left">Figure 1: Matrix Factorization. The utility matrix \(\mathbf{Y}\) is factored into the product of two low-rank matrices \(\mathbf{X}\) and \(\mathbf{W}\). </div>
</div>
<hr>
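As a quick numerical illustration (a minimal NumPy sketch of my own, not code from this post; the sizes are made up), the two factors together hold far fewer numbers than the full utility matrix, and predicting a single rating is just a length-\\(K\\) dot product:

```python
import numpy as np

M, N, K = 1000, 500, 10          # number of items, users, and latent features
rng = np.random.default_rng(0)

X = rng.standard_normal((M, K))  # item profiles: one row per item
W = rng.standard_normal((K, N))  # user models: one column per user

Y_hat = X @ W                    # approximated utility matrix, shape (M, N)

# Predicting the rating of user n for item m is a length-K dot product:
m, n = 42, 7
assert np.isclose(Y_hat[m, n], X[m] @ W[:, n])

# Both factors together hold K*(M+N) numbers instead of M*N:
print(K * (M + N), "vs", M * N)  # 15000 vs 500000
```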
Có một vài điểm lưu ý ở đây:
* Ý tưởng chính đằng sau Matrix Factorization cho Recommendation Systems là tồn tại các _latent features_ (tính chất ẩn) mô tả sự liên quan giữa các _items_ và _users_. Ví dụ với hệ thống gợi ý các bộ phim, tính chất ẩn có thể là _hình sự_, _chính trị_, _hành động_, _hài_, ...; cũng có thể là một sự kết hợp nào đó của các thể loại này; hoặc cũng có thể là bất cứ điều gì mà chúng ta không thực sự cần đặt tên. Mỗi _item_ sẽ mang tính chất ẩn ở một mức độ nào đó tương ứng với các hệ số trong vector \\(\mathbf{x}\\) của nó, hệ số càng cao tương ứng với việc mang tính chất đó càng cao. Tương tự, mỗi _user_ cũng sẽ có xu hướng thích những tính chất ẩn nào đó và được mô tả bởi các hệ số trong vector \\(\mathbf{w}\\) của nó. Hệ số cao tương ứng với việc _user_ thích các bộ phim có tính chất ẩn đó. Giá trị của biểu thức \\(\mathbf{xw}\\) sẽ cao nếu các thành phần tương ứng của \\(\mathbf{x}\\) và \\(\mathbf{w}\\) đều cao. Điều này nghĩa là _item_ mang các tính chất ẩn mà _user_ thích, vậy thì nên gợi ý _item_ này cho _user_ đó.
* Vậy tại sao Matrix Factorization lại được xếp vào Collaborative Filtering? Câu trả lời đến từ việc đi tối ưu hàm mất mát mà chúng ta sẽ thảo luận ở Mục 2. Về cơ bản, để tìm nghiệm của bài toán tối ưu, ta phải lần lượt đi tìm \\(\mathbf{X}\\) và \\(\mathbf{W}\\) khi thành phần còn lại được cố định. Như vậy, mỗi hàng của \\(\mathbf{X}\\) sẽ phụ thuộc vào toàn bộ các cột của \\(\mathbf{W}\\). Ngược lại, mỗi cột của \\(\mathbf{W}\\) lại phục thuộc vào toàn bộ các hàng của \\(\mathbf{X}\\). Như vậy, có những mỗi quan hệ ràng buộc _chằng chịt_ giữa các thành phần của hai ma trận trên. Tức chúng ta cần sử dụng thông tin của tất cả để suy ra tất cả. Vậy nên phương pháp này cũng được xếp vào Collaborative Filtering.
* Trong các bài toán thực tế, số lượng _items_ \\(M\\) và số lượng _users_ \\(N\\) thường rất lớn. Việc tìm ra các mô hình đơn giản giúp dự đoán _ratings_ cần được thực hiện một cách nhanh nhất có thể. [Neighborhood-based Collaborative Filtering](/2017/05/24/collaborativefiltering/) không yêu cầu việc _learning_ quá nhiều, nhưng trong quá trình dự đoán (_inference_), ta cần đi tìm độ _similarity_ của _user_ đang xét với *toàn bộ* các _users_ còn lại rồi suy ra kết quả. Ngược lại, với Matrix Factorization, việc _learning_ có thể hơi phức tạp một chút vì phải lặp đi lặp lại việc tối ưu một ma trận khi cố định ma trận còn lại, nhưng việc _inference_ đơn giản hơn vì ta chỉ cần lấy tích của hai vector \\(\mathbf{xw}\\), mỗi vector có độ dài \\(K\\) là một số nhỏ hơn nhiều so với \\(M, N\\). Vậy nên quá trình _inference_ không yêu cầu khả năng tính toán cao. Việc này khiến nó phù hợp với các mô hình có tập dữ liệu lớn.
* Thêm nữa, việc lưu trữ hai ma trận \\(\mathbf{X}\\) và \\(\mathbf{W}\\) yêu cầu lượng bộ nhớ nhỏ khi so với việc lưu toàn bộ _Similarity matrix_ trong Neighborhood-based Collaborative Filtering. Cụ thể, ta cần bộ nhớ để chứa \\(K(M+N)\\) phần tử thay vì lưu \\(M^2\\) hoặc \\(N^2\\) của _Similarity matrix_.
Next, let us construct the loss function and see how to optimize it.
<a name="-xay-dung-va-toi-uu-ham-mat-mat"></a>
## 2. Constructing and optimizing the loss function
<a name="-ham-mat-mat"></a>
### 2.1. Loss function
[As in Content-based Recommendation Systems](/2017/05/17/contentbasedrecommendersys/#-xay-dung-ham-mat-mat), the loss function is built upon the filled entries of the Utility Matrix \\(\mathbf{Y}\\); the slight differences are that there is no bias term and that the optimization variables are both \\(\mathbf{X}\\) and \\(\mathbf{W}\\). Adding biases is discussed in Section 4. Constructing the loss function for Matrix Factorization is fairly straightforward:
\\[
\mathcal{L}(\mathbf{X}, \mathbf{W}) = \frac{1}{2s} \sum_{n=1}^N \sum_{m:r_{mn} = 1} (y_{mn} - \mathbf{x}_m\mathbf{w}_n)^2 + \frac{\lambda}{2} (\|\|\mathbf{X}\|\|_F^2 + \|\|\mathbf{W}\|\|_F^2) ~~~~~ (1)
\\]
where \\(r_{mn} = 1\\) if _item_ \\(m\\) has been rated by _user_ \\(n\\), \\(\|\|\bullet\|\|\_F\\) is the [Frobenius norm](/math/#chuan-cua-ma-tran), i.e. the square root of the sum of squares of all entries of the matrix (analogous to the \\(l_2\\) norm of a vector), and \\(s\\) is the total number of known _ratings_. The first term is the mean squared error of the model. The second term is [\\(l_2\\) regularization](/2017/03/04/overfitting/#-\\l\\-regularization), which helps avoid [overfitting](/2017/03/04/overfitting/).
>**Note:** The _ratings_ are usually normalized values, obtained by subtracting from each row of the Utility Matrix the mean of the known values in that row (item-based), or by subtracting from each column the mean of the known values in that column (user-based). In certain cases we do not need to normalize this matrix, but then additional techniques are required to handle the _bias_ present in the _ratings_.
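A minimal sketch of user-based mean normalization on a toy Utility Matrix, with `np.nan` marking unknown ratings (the numbers are made up):

```python
import numpy as np

# rows: items, columns: users; np.nan = unknown rating
Y = np.array([[5.0, 4.0, np.nan],
              [1.0, np.nan, 2.0],
              [np.nan, 2.0, 5.0]])

# user-based: subtract each column's mean over its known ratings
user_means = np.nanmean(Y, axis=0)   # mean rating given by each user
Y_normalized = Y - user_means        # broadcasts across rows

print(user_means)                    # [3.  3.  3.5]
```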
Optimizing \\(\mathbf{X}, \mathbf{W}\\) simultaneously is relatively complex. Instead, the approach used here is to alternately optimize one matrix while fixing the other, until convergence.
<a name="-toi-uu-ham-mat-mat"></a>
### 2.2. Optimizing the loss function
With \\(\mathbf{X}\\) fixed, optimizing \\(\mathbf{W}\\) is exactly the optimization problem from Content-based Recommendation Systems:
\\[
\mathcal{L}(\mathbf{W}) = \frac{1}{2s} \sum_{n=1}^N \sum_{m:r_{mn} = 1} (y_{mn} - \mathbf{x}_m\mathbf{w}_n)^2 + \frac{\lambda}{2} \|\|\mathbf{W}\|\|_F^2 ~~~~~ (2)
\\]
With \\(\mathbf{W}\\) fixed, optimizing \\(\mathbf{X}\\) reduces to optimizing:
\\[
\mathcal{L}(\mathbf{X}) = \frac{1}{2s} \sum_{n=1}^N \sum_{m:r_{mn} = 1} (y_{mn} - \mathbf{x}_m\mathbf{w}_n)^2 + \frac{\lambda}{2} \|\|\mathbf{X}\|\|_F^2 ~~~~~ (3)
\\]
Both problems will be optimized with [Gradient Descent](/2017/01/12/gradientdescent/).
Observe that problem \\((2)\\) can be split into \\(N\\) smaller problems, each of which optimizes one column of \\(\mathbf{W}\\):
\\[
\mathcal{L}(\mathbf{w}\_n) = \frac{1}{2s} \sum_{m:r_{mn} = 1} (y_{mn} - \mathbf{x}_m\mathbf{w}_n)^2 + \frac{\lambda}{2}\|\|\mathbf{w}_n\|\|_2^2 ~~~~ (4)
\\]
Since the expression inside the \\(\sum\\) depends only on the _items_ already rated by the _user_ under consideration, we can simplify it by letting \\(\hat{\mathbf{X}}\_n\\) be the matrix formed by the rows corresponding to those rated _items_, and \\(\hat{\mathbf{y}}\_n\\) the corresponding _ratings_. Then:
\\[
\mathcal{L}(\mathbf{w}\_n) = \frac{1}{2s} \|\|\hat{\mathbf{y}}_n - \hat{\mathbf{X}}_n\mathbf{w}_n\|\|^2 + \frac{\lambda}{2}\|\|\mathbf{w}_n\|\|_2^2 ~~~~~(5)
\\]
and its gradient:
\\[
\frac{\partial \mathcal{L}(\mathbf{w}\_n)}{\partial \mathbf{w}_n} = -\frac{1}{s}\hat{\mathbf{X}}_n^T(\hat{\mathbf{y}}_n - \hat{\mathbf{X}}_n\mathbf{w}_n) + \lambda \mathbf{w}_n ~~~~~ (6)
\\]
**Thus the update rule for each column of \\(\mathbf{W}\\) is:**
\\[
\mathbf{w}_n = \mathbf{w}_n - \eta \left(-\frac{1}{s}\hat{\mathbf{X}}_n^T (\hat{\mathbf{y}}_n - \hat{\mathbf{X}}_n\mathbf{w}_n) + \lambda \mathbf{w}_n\right) ~~~~~(7)
\\]
Similarly, each row of \\(\mathbf{X}\\), i.e. the feature vector for each _item_, is found by optimizing:
\\[
\begin{eqnarray}
\mathcal{L}(\mathbf{x}\_m) &=& \frac{1}{2s}\sum_{n:r_{mn} = 1} (y_{mn} - \mathbf{x}_m\mathbf{w}_n)^2 + \frac{\lambda}{2}\|\|\mathbf{x}_m\|\|_2^2 ~~~~ (8)
\end{eqnarray}
\\]
Let \\(\hat{\mathbf{W}}\_m\\) be the matrix formed by the columns of \\(\mathbf{W}\\) corresponding to the _users_ who have rated that _item_, and \\(\hat{\mathbf{y}}^m\\) the corresponding vector of _ratings_. Then \\((8)\\) becomes:
\\[
\mathcal{L}(\mathbf{x}\_m)
= \frac{1}{2s}\|\|\hat{\mathbf{y}}^m - {\mathbf{x}}_m\hat{\mathbf{W}}_m\|\|_2^2 + \frac{\lambda}{2} \|\|\mathbf{x}_m\|\|_2^2 ~~~~~ (9)
\\]
Similarly, **the update rule for each row of \\(\mathbf{X}\\) is:**
\\[
\mathbf{x}_m = \mathbf{x}_m - \eta\left(-\frac{1}{s}(\hat{\mathbf{y}}^m - \mathbf{x}_m\hat{\mathbf{W}}_m)\hat{\mathbf{W}}_m^T + \lambda \mathbf{x}_m\right) ~~~~~ (10)
\\]
_Readers may also want to see [Derivatives of multivariate functions](/math/#-dao-ham-cua-ham-nhieu-bien)._
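To gain confidence in a gradient such as \\((6)\\), it is good practice to check the analytic formula against a finite-difference approximation of \\((5)\\) on random data. This is only a sanity-check sketch, not part of the model:

```python
import numpy as np

rng = np.random.default_rng(0)
s, K, lam = 7, 3, 0.1
X_hat = rng.standard_normal((s, K))   # rows: items rated by one user
y_hat = rng.standard_normal(s)        # corresponding (normalized) ratings
w = rng.standard_normal(K)

def loss(w):
    # loss (5) for a single user, with s playing the role of the rating count
    return 0.5/s*np.sum((y_hat - X_hat.dot(w))**2) + 0.5*lam*np.sum(w**2)

# analytic gradient, formula (6)
grad = -X_hat.T.dot(y_hat - X_hat.dot(w))/s + lam*w

# central finite differences, one coordinate at a time
eps = 1e-6
num = np.array([(loss(w + eps*e) - loss(w - eps*e))/(2*eps)
                for e in np.eye(K)])
print(np.allclose(grad, num, atol=1e-5))  # True
```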
<a name="-lap-trinh-python"></a>
## 3. Python implementation
Next, let us dive into the implementation.
<a name="-class-mf"></a>
### 3.1. `class MF`
**Initialization and normalization:**
```python
import pandas as pd
import numpy as np
class MF(object):
"""docstring for CF"""
def __init__(self, Y_data, K, lam = 0.1, Xinit = None, Winit = None,
learning_rate = 0.5, max_iter = 1000, print_every = 100, user_based = 1):
self.Y_raw_data = Y_data
self.K = K
# regularization parameter
self.lam = lam
# learning rate for gradient descent
self.learning_rate = learning_rate
# maximum number of iterations
self.max_iter = max_iter
# print results after print_every iterations
self.print_every = print_every
# user-based or item-based
self.user_based = user_based
# number of users, items, and ratings. Remember to add 1 since id starts from 0
self.n_users = int(np.max(Y_data[:, 0])) + 1
self.n_items = int(np.max(Y_data[:, 1])) + 1
self.n_ratings = Y_data.shape[0]
if Xinit is None: # new
self.X = np.random.randn(self.n_items, K)
else: # or from saved data
self.X = Xinit
if Winit is None:
self.W = np.random.randn(K, self.n_users)
        else: # or from saved data
self.W = Winit
# normalized data, update later in normalized_Y function
self.Y_data_n = self.Y_raw_data.copy()
def normalize_Y(self):
if self.user_based:
user_col = 0
item_col = 1
n_objects = self.n_users
# if we want to normalize based on item, just switch first two columns of data
        else: # item-based
user_col = 1
item_col = 0
n_objects = self.n_items
users = self.Y_raw_data[:, user_col]
self.mu = np.zeros((n_objects,))
for n in range(n_objects):
# row indices of rating done by user n
# since indices need to be integers, we need to convert
ids = np.where(users == n)[0].astype(np.int32)
# indices of all ratings associated with user n
item_ids = self.Y_data_n[ids, item_col]
# and the corresponding ratings
ratings = self.Y_data_n[ids, 2]
# take mean
m = np.mean(ratings)
if np.isnan(m):
m = 0 # to avoid empty array and nan value
self.mu[n] = m
# normalize
self.Y_data_n[ids, 2] = ratings - self.mu[n]
```
**Computing the loss value:**
```python
def loss(self):
L = 0
for i in range(self.n_ratings):
# user, item, rating
n, m, rate = int(self.Y_data_n[i, 0]), int(self.Y_data_n[i, 1]), self.Y_data_n[i, 2]
L += 0.5*(rate - self.X[m, :].dot(self.W[:, n]))**2
# take average
L /= self.n_ratings
# regularization, don't ever forget this
        L += 0.5*self.lam*(np.linalg.norm(self.X, 'fro')**2 + np.linalg.norm(self.W, 'fro')**2)
return L
```
**Finding the _items_ rated by one _user_, the _users_ who rated one _item_, and the corresponding _ratings_:**
```python
def get_items_rated_by_user(self, user_id):
"""
get all items which are rated by user user_id, and the corresponding ratings
"""
ids = np.where(self.Y_data_n[:,0] == user_id)[0]
item_ids = self.Y_data_n[ids, 1].astype(np.int32) # indices need to be integers
ratings = self.Y_data_n[ids, 2]
return (item_ids, ratings)
def get_users_who_rate_item(self, item_id):
"""
get all users who rated item item_id and get the corresponding ratings
"""
ids = np.where(self.Y_data_n[:,1] == item_id)[0]
user_ids = self.Y_data_n[ids, 0].astype(np.int32)
ratings = self.Y_data_n[ids, 2]
return (user_ids, ratings)
```
**Updating \\(\mathbf{X}, \mathbf{W}\\):**
```python
def updateX(self):
for m in range(self.n_items):
user_ids, ratings = self.get_users_who_rate_item(m)
Wm = self.W[:, user_ids]
# gradient
grad_xm = -(ratings - self.X[m, :].dot(Wm)).dot(Wm.T)/self.n_ratings + \
self.lam*self.X[m, :]
self.X[m, :] -= self.learning_rate*grad_xm.reshape((self.K,))
def updateW(self):
for n in range(self.n_users):
item_ids, ratings = self.get_items_rated_by_user(n)
Xn = self.X[item_ids, :]
# gradient
grad_wn = -Xn.T.dot(ratings - Xn.dot(self.W[:, n]))/self.n_ratings + \
self.lam*self.W[:, n]
self.W[:, n] -= self.learning_rate*grad_wn.reshape((self.K,))
```
**The main algorithm:**
```python
def fit(self):
self.normalize_Y()
for it in range(self.max_iter):
self.updateX()
self.updateW()
if (it + 1) % self.print_every == 0:
rmse_train = self.evaluate_RMSE(self.Y_raw_data)
                print('iter =', it + 1, ', loss =', self.loss(), ', RMSE train =', rmse_train)
```
**Making predictions:**
```python
def pred(self, u, i):
"""
predict the rating of user u for item i
if you need the un
"""
u = int(u)
i = int(i)
if self.user_based:
bias = self.mu[u]
else:
bias = self.mu[i]
pred = self.X[i, :].dot(self.W[:, u]) + bias
# truncate if results are out of range [0, 5]
if pred < 0:
return 0
if pred > 5:
return 5
return pred
def pred_for_user(self, user_id):
"""
predict ratings one user give all unrated items
"""
ids = np.where(self.Y_data_n[:, 0] == user_id)[0]
items_rated_by_u = self.Y_data_n[ids, 1].tolist()
y_pred = self.X.dot(self.W[:, user_id]) + self.mu[user_id]
predicted_ratings= []
for i in range(self.n_items):
if i not in items_rated_by_u:
predicted_ratings.append((i, y_pred[i]))
return predicted_ratings
```
**Evaluating the results with Root Mean Square Error:**
```python
def evaluate_RMSE(self, rate_test):
n_tests = rate_test.shape[0]
SE = 0 # squared error
for n in range(n_tests):
pred = self.pred(rate_test[n, 0], rate_test[n, 1])
SE += (pred - rate_test[n, 2])**2
RMSE = np.sqrt(SE/n_tests)
return RMSE
```
<a name="-ap-dung-len-movielens-k"></a>
### 3.2. Applying to MovieLens 100k
Let us return to the [MovieLens 100k](/2017/05/17/contentbasedrecommendersys/#-co-so-du-lieu-movielens-k) dataset.
```python
r_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp']
ratings_base = pd.read_csv('ml-100k/ub.base', sep='\t', names=r_cols, encoding='latin-1')
ratings_test = pd.read_csv('ml-100k/ub.test', sep='\t', names=r_cols, encoding='latin-1')
rate_train = ratings_base.to_numpy()
rate_test = ratings_test.to_numpy()
# indices start from 0
rate_train[:, :2] -= 1
rate_test[:, :2] -= 1
```
Results with **user-based normalization:**
```python
rs = MF(rate_train, K = 10, lam = .1, print_every = 10,
learning_rate = 0.75, max_iter = 100, user_based = 1)
rs.fit()
# evaluate on test data
RMSE = rs.evaluate_RMSE(rate_test)
print('\nUser-based MF, RMSE =', RMSE)
```
iter = 10 , loss = 5.67288309116 , RMSE train = 1.20479476967
iter = 20 , loss = 2.64823713338 , RMSE train = 1.03727078113
iter = 30 , loss = 1.34749564429 , RMSE train = 1.02937828335
iter = 40 , loss = 0.754769340402 , RMSE train = 1.0291792473
iter = 50 , loss = 0.48310745143 , RMSE train = 1.0292035212
iter = 60 , loss = 0.358530096403 , RMSE train = 1.02921183102
iter = 70 , loss = 0.30139979707 , RMSE train = 1.02921377947
iter = 80 , loss = 0.27520033847 , RMSE train = 1.02921421055
iter = 90 , loss = 0.263185542009 , RMSE train = 1.02921430477
iter = 100 , loss = 0.257675693217 , RMSE train = 1.02921432529
User-based MF, RMSE = 1.06037991127
We see that the `loss` value decreases and `RMSE train` also decreases as the number of iterations grows. The RMSE is slightly higher than that of Neighborhood-based Collaborative Filtering (~0.99) but still much better than the result of Content-based Recommendation Systems (~1.2).
With **item-based normalization:**
```python
rs = MF(rate_train, K = 10, lam = .1, print_every = 10, learning_rate = 0.75, max_iter = 100, user_based = 0)
rs.fit()
# evaluate on test data
RMSE = rs.evaluate_RMSE(rate_test)
print('\nItem-based MF, RMSE =', RMSE)
```
iter = 10 , loss = 5.62978100103 , RMSE train = 1.18231933756
iter = 20 , loss = 2.61820113008 , RMSE train = 1.00601013825
iter = 30 , loss = 1.32429630221 , RMSE train = 0.996672160644
iter = 40 , loss = 0.734890958031 , RMSE train = 0.99621264651
iter = 50 , loss = 0.464793412146 , RMSE train = 0.996184081495
iter = 60 , loss = 0.340943058213 , RMSE train = 0.996181347407
iter = 70 , loss = 0.284148579208 , RMSE train = 0.996180972472
iter = 80 , loss = 0.258103818785 , RMSE train = 0.996180914097
iter = 90 , loss = 0.246160195903 , RMSE train = 0.996180905172
iter = 100 , loss = 0.240683073898 , RMSE train = 0.996180903957
Item-based MF, RMSE = 1.04980475198
The result is slightly better.
Let us run one more experiment without regularization, i.e. `lam = 0`:
```python
rs = MF(rate_train, K = 2, lam = 0, print_every = 10, learning_rate = 1, max_iter = 100, user_based = 0)
rs.fit()
# evaluate on test data
RMSE = rs.evaluate_RMSE(rate_test)
print('\nItem-based MF, RMSE =', RMSE)
```
If you run the code above, you will see the model quality degrade noticeably (high RMSE).
<a name="-ap-dung-len-movielens-m"></a>
### 3.3. Applying to MovieLens 1M
Next, let us try a larger dataset, [MovieLens 1M](https://grouplens.org/datasets/movielens/1m/), which contains roughly 1 million _ratings_ from 6000 users on 4000 movies. As the dataset grows, _training_ time grows with it. Readers may also try applying a Neighborhood-based Collaborative Filtering model to this dataset to compare results; my prediction is that _training_ will be fast but _inference_ will take very long.
**Loading the data:**
```python
r_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp']
ratings_base = pd.read_csv('ml-1m/ratings.dat', sep='::', engine='python', names=r_cols, encoding='latin-1')
ratings = ratings_base.to_numpy()
# indices in Python start from 0
ratings[:, :2] -= 1
```
**Splitting into training and test sets, using 1/3 of the data for testing:**
```python
from sklearn.model_selection import train_test_split
rate_train, rate_test = train_test_split(ratings, test_size=0.33, random_state=42)
print(rate_train.shape, rate_test.shape)
```
(670140, 4) (330069, 4)
**Applying Matrix Factorization:**
```python
rs = MF(rate_train, K = 2, lam = 0.1, print_every = 2, learning_rate = 2, max_iter = 10, user_based = 0)
rs.fit()
# evaluate on test data
RMSE = rs.evaluate_RMSE(rate_test)
print('\nItem-based MF, RMSE =', RMSE)
```
iter = 2 , loss = 6.80832412832 , RMSE train = 1.12359545594
iter = 4 , loss = 4.35238943299 , RMSE train = 1.00312745587
iter = 6 , loss = 2.85065420416 , RMSE train = 0.978490200028
iter = 8 , loss = 1.90134941041 , RMSE train = 0.974189487594
iter = 10 , loss = 1.29580344305 , RMSE train = 0.973438724579
Item-based MF, RMSE = 0.981631017423
Quite an impressive result after only 10 iterations. The result when applying Neighborhood-based Collaborative Filtering was around 0.92, but its _inference_ time is considerable.
<a name="-thao-luan"></a>
## 4. Discussion
<a name="-khi-co-bias"></a>
### 4.1. Adding biases
One advantage of the Matrix Factorization approach to Collaborative Filtering is its flexibility in accommodating additional constraints, whether they come from the data-processing pipeline or from a particular application.
Suppose we have not normalized the _ratings_ but use their raw values directly in equation \\((1)\\). Normalization can also be integrated directly into the loss function. As mentioned earlier, real _ratings_ carry biases toward certain _users_ and/or _items_: some _users_ are easy or hard to please, and some _items_ get rated higher than others simply because _users_ see that other _users_ have already rated them highly. This bias problem can be handled with variables called _biases_, one per _user_ and per _item_, which can be optimized together with \\(\mathbf{X}\\) and \\(\mathbf{W}\\). The _rating_ of _user_ \\(n\\) on _item_ \\(m\\) is then approximated not only by \\(\mathbf{x}\_m\mathbf{w}\_n\\) but also by the _biases_ of _item_ \\(m\\) and _user_ \\(n\\). It may additionally depend on the mean of all _ratings_:
\\[
y\_{mn} \approx \mathbf{x}\_m \mathbf{w}\_n + b_m + d_n + \mu
\\]
where \\(b_m, d_n, \mu\\) are the bias of _item_ \\(m\\), the bias of _user_ \\(n\\), and the mean of all _ratings_ (a constant), respectively.
The loss function can then be modified to:
\\[
\begin{eqnarray}
\mathcal{L}(\mathbf{X}, \mathbf{W}, \mathbf{b}, \mathbf{d}) &=& \frac{1}{2s} \sum_{n=1}^N \sum_{m:r_{mn} = 1} (\mathbf{x}\_m\mathbf{w}\_n + b_m + d_n +\mu - y\_{mn})^2 + \\\
&& + \frac{\lambda}{2} (\|\|\mathbf{X}\|\|_F^2 + \|\|\mathbf{W}\|\|_F^2 + \|\|\mathbf{b}\|\|_2^2 + \|\|\mathbf{d}\|\|_2^2)
\end{eqnarray}
\\]
Computing the gradient with respect to each variable is not too complicated, so I will not pursue it here. If you are interested, you can consult the [source code I wrote here](https://github.com/tiepvupsu/tiepvupsu.github.io/tree/master/assets/25_mf/python). That link also includes the examples from Section 3 and the related data.
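As a hedged sketch (not the author's exact implementation), one stochastic gradient step for a single observed rating under the biased model \\(y_{mn} \approx \mathbf{x}\_m\mathbf{w}\_n + b_m + d_n + \mu\\) could look like this; the function name and hyperparameters are illustrative:

```python
import numpy as np

def sgd_step_biased(x_m, w_n, b_m, d_n, mu, y_mn, lam=0.1, eta=0.1):
    """One SGD step for the model y ~ x_m.w_n + b_m + d_n + mu.
    Returns updated copies of x_m, w_n, b_m, d_n (mu stays constant)."""
    e = x_m.dot(w_n) + b_m + d_n + mu - y_mn   # prediction error
    x_new = x_m - eta*(e*w_n + lam*x_m)
    w_new = w_n - eta*(e*x_m + lam*w_n)
    b_new = b_m - eta*(e + lam*b_m)
    d_new = d_n - eta*(e + lam*d_n)
    return x_new, w_new, b_new, d_new

# toy check: repeated steps shrink the error on a single rating
x, w, b, d = np.zeros(2), np.zeros(2), 0.0, 0.0
for _ in range(200):
    x, w, b, d = sgd_step_biased(x, w, b, d, mu=3.5, y_mn=5.0)
print(abs(x.dot(w) + b + d + 3.5 - 5.0) < 0.2)  # True
```

Because of the regularization terms, the error does not go all the way to zero; the biases settle at a fixed point balancing fit against shrinkage.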
<a name="-nonnegative-matrix-factorization"></a>
### 4.2. Nonnegative Matrix Factorization
Before normalization, the data are all nonnegative. If the _rating_ range contains negative values, we simply add a suitable constant to the Utility Matrix to make all _ratings_ nonnegative. Another Matrix Factorization method that is widely used and highly effective in Recommendation Systems is then Nonnegative Matrix Factorization, i.e. factorizing the matrix into a product of matrices whose entries are all nonnegative.
Through Matrix Factorization, _users_ and _items_ are linked by _latent features_. The strength of the association between each _user_ or _item_ and each latent feature is measured by the corresponding component of its feature vector, with larger values indicating a stronger relation to that latent feature. Intuitively, the relation of a _user_ or _item_ to a latent feature should be a nonnegative number, with 0 meaning _not related at all_. Moreover, each _user_ and _item_ is typically related to only a few latent features. So the feature vectors for _users_ and _items_ should be nonnegative vectors with many zero entries. Such solutions can be obtained by adding nonnegativity constraints on the components of \\(\mathbf{X}\\) and \\(\mathbf{W}\\).
Readers who want to learn more about Nonnegative Matrix Factorization can consult the references in Section 6.
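For a quick feel of the idea, scikit-learn's `NMF` can factor a small nonnegative matrix. Note that, unlike the loss in \\((1)\\), it treats every entry as observed, so this only illustrates the nonnegativity constraint, not the handling of missing ratings:

```python
import numpy as np
from sklearn.decomposition import NMF

# a small, fully observed, nonnegative "ratings" matrix (items x users)
Y = np.array([[5.0, 4.0, 0.0],
              [4.0, 5.0, 1.0],
              [0.0, 1.0, 5.0],
              [1.0, 0.0, 4.0]])

model = NMF(n_components=2, init='random', random_state=0, max_iter=1000)
X = model.fit_transform(Y)   # item latent factors, all >= 0
W = model.components_        # user latent factors, all >= 0

print(np.all(X >= 0) and np.all(W >= 0))  # True
```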
<a name="-incremental-matrix-factorization"></a>
### 4.3. Incremental Matrix Factorization
As mentioned, the _inference_ time of Recommendation Systems based on Matrix Factorization is very fast, but the _training_ time is quite long for large datasets. In practice, the Utility Matrix changes constantly as new _users_, _items_, and _ratings_ arrive, or as _users_ revise their _ratings_, so the matrices \\(\mathbf{X}\\) and \\(\mathbf{W}\\) must be updated frequently. This means we would have to keep rerunning the rather time-consuming _training_ process.
This is partly addressed by Incremental Matrix Factorization. Interested readers can see [Fast incremental matrix factorization for recommendation with positive-only feedback](https://ai2-s2-pdfs.s3.amazonaws.com/c827/d2267640a7a913250fa5046a16ff078a5ce4.pdf).
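The basic trick, sketched below under my own simplifying assumptions (plain SGD on each incoming rating, no forgetting factor, which is not the full algorithm of the paper above), is to update only the two affected latent vectors when a new rating arrives, instead of retraining from scratch:

```python
import numpy as np

def incremental_update(X, W, m, n, y_mn, lam=0.1, eta=0.05, steps=10):
    """Update only item m's row of X and user n's column of W, in place,
    when a new (normalized) rating y_mn for the pair (m, n) arrives."""
    for _ in range(steps):
        e = X[m, :].dot(W[:, n]) - y_mn        # prediction error
        X[m, :] -= eta*(e*W[:, n] + lam*X[m, :])
        W[:, n] -= eta*(e*X[m, :] + lam*W[:, n])

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 2))   # 4 items, K = 2
W = rng.standard_normal((2, 3))   # 3 users
y_new = X[1, :].dot(W[:, 2]) + 2.0   # a new rating, 2.0 above the prediction
incremental_update(X, W, m=1, n=2, y_mn=y_new)
print(abs(X[1, :].dot(W[:, 2]) - y_new) < 2.0)  # True: error shrank
```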
<a name="-others"></a>
### 4.4. Others
* The Matrix Factorization problem has other solution approaches besides Gradient Descent. Readers can see [Alternating Least Square (ALS)](https://stanford.edu/~rezab/classes/cme323/S15/notes/lec14.pdf) and [Generalized Low Rank Models](https://stanford.edu/~rezab/classes/cme323/S15/notes/lec14.pdf). In the next post, I will write about [Singular Value Decomposition (SVD)](https://en.wikipedia.org/wiki/Singular_value_decomposition), a popular Matrix Factorization method used not only in Recommendation Systems but in many other systems as well. Stay tuned.
* [Source code](https://github.com/tiepvupsu/tiepvupsu.github.io/tree/master/assets/25_mf/python)
<a name="-tai-lieu-tham-khao"></a>
## 6. References
[1] [Recommendation Systems - Stanford InfoLab](http://infolab.stanford.edu/~ullman/mmds/ch9.pdf)
[2] [Collaborative Filtering - Stanford University](https://www.youtube.com/watch?v=h9gpufJFF-0&t=436s)
[3] [Recommendation systems - Machine Learning - Andrew Ng](https://www.youtube.com/watch?v=saXRzxgFN0o&list=PL_npY1DYXHPT-3dorG7Em6d18P4JRFDvH)
[4] Ekstrand, Michael D., John T. Riedl, and Joseph A. Konstan. "[Collaborative filtering recommender systems.](http://herbrete.vvv.enseirb-matmeca.fr/IR/CF_Recsys_Survey.pdf)" Foundations and Trends® in Human–Computer Interaction 4.2 (2011): 81-173.
[5] [Matrix factorization techniques for recommender systems](https://datajobs.com/data-science-repo/Recommender-Systems-%5BNetflix%5D.pdf)
[6] [Matrix Factorization For Recommender Systems](http://joyceho.github.io/cs584_s16/slides/mf-16.pdf)
[7] [Learning from Incomplete Ratings Using Non-negative Matrix Factorization](http://www.siam.org/meetings/sdm06/proceedings/059zhangs2.pdf)
[8] [Fast Incremental Matrix Factorization for Recommendation with Positive-Only Feedback](https://ai2-s2-pdfs.s3.amazonaws.com/c827/d2267640a7a913250fa5046a16ff078a5ce4.pdf)
/*
Unix SMB/CIFS implementation.
Python DNS server wrapper
Copyright (C) 2015 Andrew Bartlett
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <Python.h>
#include "includes.h"
#include <pyldb.h>
#include <pytalloc.h>
#include "dns_server/dnsserver_common.h"
#include "dsdb/samdb/samdb.h"
#include "dsdb/common/util.h"
#include "librpc/gen_ndr/ndr_dnsp.h"
#include "librpc/rpc/pyrpc_util.h"
/* FIXME: These should be in a header file somewhere */
#define PyErr_LDB_OR_RAISE(py_ldb, ldb) \
if (!py_check_dcerpc_type(py_ldb, "ldb", "Ldb")) { \
PyErr_SetString(py_ldb_get_exception(), "Ldb connection object required"); \
return NULL; \
} \
ldb = pyldb_Ldb_AsLdbContext(py_ldb);
#define PyErr_LDB_DN_OR_RAISE(py_ldb_dn, dn) \
if (!py_check_dcerpc_type(py_ldb_dn, "ldb", "Dn")) { \
PyErr_SetString(py_ldb_get_exception(), "ldb Dn object required"); \
return NULL; \
} \
dn = pyldb_Dn_AsDn(py_ldb_dn);
static PyObject *py_ldb_get_exception(void)
{
PyObject *mod = PyImport_ImportModule("ldb");
if (mod == NULL)
return NULL;
return PyObject_GetAttrString(mod, "LdbError");
}
static PyObject *py_dnsp_DnssrvRpcRecord_get_list(struct dnsp_DnssrvRpcRecord *records,
uint16_t num_records)
{
PyObject *py_dns_list;
int i;
py_dns_list = PyList_New(num_records);
if (py_dns_list == NULL) {
return NULL;
}
for (i = 0; i < num_records; i++) {
PyObject *py_dns_record;
py_dns_record = py_return_ndr_struct("samba.dcerpc.dnsp", "DnssrvRpcRecord", records, &records[i]);
PyList_SetItem(py_dns_list, i, py_dns_record);
}
return py_dns_list;
}
static int py_dnsp_DnssrvRpcRecord_get_array(PyObject *value,
TALLOC_CTX *mem_ctx,
struct dnsp_DnssrvRpcRecord **records,
uint16_t *num_records)
{
int i;
struct dnsp_DnssrvRpcRecord *recs;
PY_CHECK_TYPE(&PyList_Type, value, return -1;);
recs = talloc_array(mem_ctx, struct dnsp_DnssrvRpcRecord,
PyList_GET_SIZE(value));
if (recs == NULL) {
PyErr_NoMemory();
return -1;
}
for (i = 0; i < PyList_GET_SIZE(value); i++) {
bool type_correct;
PyObject *item = PyList_GET_ITEM(value, i);
type_correct = py_check_dcerpc_type(item, "samba.dcerpc.dnsp", "DnssrvRpcRecord");
if (type_correct == false) {
return -1;
}
if (talloc_reference(mem_ctx, pytalloc_get_mem_ctx(item)) == NULL) {
PyErr_NoMemory();
return -1;
}
recs[i] = *(struct dnsp_DnssrvRpcRecord *)pytalloc_get_ptr(item);
}
*records = recs;
*num_records = PyList_GET_SIZE(value);
return 0;
}
static PyObject *py_dsdb_dns_lookup(PyObject *self, PyObject *args)
{
struct ldb_context *samdb;
PyObject *py_ldb;
char *dns_name;
TALLOC_CTX *frame;
NTSTATUS status;
WERROR werr;
struct dns_server_zone *zones_list;
struct ldb_dn *dn;
struct dnsp_DnssrvRpcRecord *records;
uint16_t num_records;
if (!PyArg_ParseTuple(args, "Os", &py_ldb, &dns_name)) {
return NULL;
}
PyErr_LDB_OR_RAISE(py_ldb, samdb);
frame = talloc_stackframe();
status = dns_common_zones(samdb, frame, &zones_list);
	if (!NT_STATUS_IS_OK(status)) {
		PyErr_SetNTSTATUS(status);
		talloc_free(frame);
		return NULL;
	}
	werr = dns_common_name2dn(samdb, zones_list, frame, dns_name, &dn);
	if (!W_ERROR_IS_OK(werr)) {
		PyErr_SetWERROR(werr);
		talloc_free(frame);
		return NULL;
	}
werr = dns_common_lookup(samdb,
frame,
dn,
&records,
&num_records,
NULL);
	if (!W_ERROR_IS_OK(werr)) {
		PyErr_SetWERROR(werr);
		talloc_free(frame);
		return NULL;
	}
	{
		PyObject *py_list = py_dnsp_DnssrvRpcRecord_get_list(records, num_records);
		talloc_free(frame);
		return py_list;
	}
}
static PyObject *py_dsdb_dns_extract(PyObject *self, PyObject *args)
{
PyObject *py_dns_el;
TALLOC_CTX *frame;
WERROR werr;
struct ldb_message_element *dns_el;
struct dnsp_DnssrvRpcRecord *records;
uint16_t num_records;
if (!PyArg_ParseTuple(args, "O", &py_dns_el)) {
return NULL;
}
if (!py_check_dcerpc_type(py_dns_el, "ldb", "MessageElement")) {
PyErr_SetString(py_ldb_get_exception(),
"ldb MessageElement object required");
return NULL;
}
dns_el = pyldb_MessageElement_AsMessageElement(py_dns_el);
frame = talloc_stackframe();
werr = dns_common_extract(dns_el,
frame,
&records,
&num_records);
	if (!W_ERROR_IS_OK(werr)) {
		PyErr_SetWERROR(werr);
		talloc_free(frame);
		return NULL;
	}
	{
		PyObject *py_list = py_dnsp_DnssrvRpcRecord_get_list(records, num_records);
		talloc_free(frame);
		return py_list;
	}
}
static PyObject *py_dsdb_dns_replace(PyObject *self, PyObject *args)
{
struct ldb_context *samdb;
PyObject *py_ldb, *py_dns_records;
char *dns_name;
TALLOC_CTX *frame;
NTSTATUS status;
WERROR werr;
int ret;
struct dns_server_zone *zones_list;
struct ldb_dn *dn;
struct dnsp_DnssrvRpcRecord *records;
uint16_t num_records;
/*
* TODO: This is a shocking abuse, but matches what the
* internal DNS server does, it should be pushed into
* dns_common_replace()
*/
static const int serial = 110;
if (!PyArg_ParseTuple(args, "OsO", &py_ldb, &dns_name, &py_dns_records)) {
return NULL;
}
PyErr_LDB_OR_RAISE(py_ldb, samdb);
frame = talloc_stackframe();
status = dns_common_zones(samdb, frame, &zones_list);
	if (!NT_STATUS_IS_OK(status)) {
		PyErr_SetNTSTATUS(status);
		talloc_free(frame);
		return NULL;
	}
	werr = dns_common_name2dn(samdb, zones_list, frame, dns_name, &dn);
	if (!W_ERROR_IS_OK(werr)) {
		PyErr_SetWERROR(werr);
		talloc_free(frame);
		return NULL;
	}
ret = py_dnsp_DnssrvRpcRecord_get_array(py_dns_records,
frame,
&records, &num_records);
	if (ret != 0) {
		talloc_free(frame);
		return NULL;
	}
werr = dns_common_replace(samdb,
frame,
dn,
false, /* Not adding a record */
serial,
records,
num_records);
	if (!W_ERROR_IS_OK(werr)) {
		PyErr_SetWERROR(werr);
		talloc_free(frame);
		return NULL;
	}
	talloc_free(frame);
	Py_RETURN_NONE;
}
static PyObject *py_dsdb_dns_replace_by_dn(PyObject *self, PyObject *args)
{
struct ldb_context *samdb;
PyObject *py_ldb, *py_dn, *py_dns_records;
TALLOC_CTX *frame;
WERROR werr;
int ret;
struct ldb_dn *dn;
struct dnsp_DnssrvRpcRecord *records;
uint16_t num_records;
/*
* TODO: This is a shocking abuse, but matches what the
* internal DNS server does, it should be pushed into
* dns_common_replace()
*/
static const int serial = 110;
if (!PyArg_ParseTuple(args, "OOO", &py_ldb, &py_dn, &py_dns_records)) {
return NULL;
}
PyErr_LDB_OR_RAISE(py_ldb, samdb);
PyErr_LDB_DN_OR_RAISE(py_dn, dn);
frame = talloc_stackframe();
ret = py_dnsp_DnssrvRpcRecord_get_array(py_dns_records,
frame,
&records, &num_records);
	if (ret != 0) {
		talloc_free(frame);
		return NULL;
	}
werr = dns_common_replace(samdb,
frame,
dn,
false, /* Not adding a record */
serial,
records,
num_records);
	if (!W_ERROR_IS_OK(werr)) {
		PyErr_SetWERROR(werr);
		talloc_free(frame);
		return NULL;
	}
	talloc_free(frame);
	Py_RETURN_NONE;
}
static PyMethodDef py_dsdb_dns_methods[] = {
{ "lookup", (PyCFunction)py_dsdb_dns_lookup,
METH_VARARGS, "Get the DNS database entries for a DNS name"},
{ "replace", (PyCFunction)py_dsdb_dns_replace,
METH_VARARGS, "Replace the DNS database entries for a DNS name"},
{ "replace_by_dn", (PyCFunction)py_dsdb_dns_replace_by_dn,
METH_VARARGS, "Replace the DNS database entries for a LDB DN"},
{ "extract", (PyCFunction)py_dsdb_dns_extract,
METH_VARARGS, "Return the DNS database entry as a python structure from an Ldb.MessageElement of type dnsRecord"},
{ NULL }
};
void initdsdb_dns(void);
void initdsdb_dns(void)
{
PyObject *m;
m = Py_InitModule3("dsdb_dns", py_dsdb_dns_methods,
"Python bindings for the DNS objects in the directory service databases.");
if (m == NULL)
return;
}
package com.github.badoualy.telegram.tl.api.request;
import com.github.badoualy.telegram.tl.TLContext;
import com.github.badoualy.telegram.tl.api.TLAbsMessageEntity;
import com.github.badoualy.telegram.tl.api.TLAbsReplyMarkup;
import com.github.badoualy.telegram.tl.api.TLInputBotInlineMessageID;
import com.github.badoualy.telegram.tl.core.TLBool;
import com.github.badoualy.telegram.tl.core.TLMethod;
import com.github.badoualy.telegram.tl.core.TLObject;
import com.github.badoualy.telegram.tl.core.TLVector;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import static com.github.badoualy.telegram.tl.StreamUtils.readInt;
import static com.github.badoualy.telegram.tl.StreamUtils.readTLObject;
import static com.github.badoualy.telegram.tl.StreamUtils.readTLString;
import static com.github.badoualy.telegram.tl.StreamUtils.readTLVector;
import static com.github.badoualy.telegram.tl.StreamUtils.writeInt;
import static com.github.badoualy.telegram.tl.StreamUtils.writeString;
import static com.github.badoualy.telegram.tl.StreamUtils.writeTLObject;
import static com.github.badoualy.telegram.tl.StreamUtils.writeTLVector;
import static com.github.badoualy.telegram.tl.TLObjectUtils.SIZE_CONSTRUCTOR_ID;
import static com.github.badoualy.telegram.tl.TLObjectUtils.SIZE_INT32;
import static com.github.badoualy.telegram.tl.TLObjectUtils.computeTLStringSerializedSize;
/**
* @author Yannick Badoual yann.badoual@gmail.com
* @see <a href="http://github.com/badoualy/kotlogram">http://github.com/badoualy/kotlogram</a>
*/
public class TLRequestMessagesEditInlineBotMessage extends TLMethod<TLBool> {
public static final int CONSTRUCTOR_ID = 0x130c2c85;
protected int flags;
protected boolean noWebpage;
protected TLInputBotInlineMessageID id;
protected String message;
protected TLAbsReplyMarkup replyMarkup;
protected TLVector<TLAbsMessageEntity> entities;
private final String _constructor = "messages.editInlineBotMessage#130c2c85";
public TLRequestMessagesEditInlineBotMessage() {
}
public TLRequestMessagesEditInlineBotMessage(boolean noWebpage, TLInputBotInlineMessageID id, String message, TLAbsReplyMarkup replyMarkup, TLVector<TLAbsMessageEntity> entities) {
this.noWebpage = noWebpage;
this.id = id;
this.message = message;
this.replyMarkup = replyMarkup;
this.entities = entities;
}
@Override
@SuppressWarnings({"unchecked", "SimplifiableConditionalExpression"})
public TLBool deserializeResponse(InputStream stream, TLContext context) throws IOException {
final TLObject response = readTLObject(stream, context);
if (response == null) {
throw new IOException("Unable to parse response");
}
if (!(response instanceof TLBool)) {
throw new IOException(
"Incorrect response type, expected " + TLBool.class.getCanonicalName() + ", found " + response
.getClass().getCanonicalName());
}
return (TLBool) response;
}
private void computeFlags() {
flags = 0;
flags = noWebpage ? (flags | 2) : (flags & ~2);
flags = message != null ? (flags | 2048) : (flags & ~2048);
flags = replyMarkup != null ? (flags | 4) : (flags & ~4);
flags = entities != null ? (flags | 8) : (flags & ~8);
}
@Override
public void serializeBody(OutputStream stream) throws IOException {
computeFlags();
writeInt(flags, stream);
writeTLObject(id, stream);
if ((flags & 2048) != 0) {
if (message == null) throwNullFieldException("message", flags);
writeString(message, stream);
}
if ((flags & 4) != 0) {
if (replyMarkup == null) throwNullFieldException("replyMarkup", flags);
writeTLObject(replyMarkup, stream);
}
if ((flags & 8) != 0) {
if (entities == null) throwNullFieldException("entities", flags);
writeTLVector(entities, stream);
}
}
@Override
@SuppressWarnings({"unchecked", "SimplifiableConditionalExpression"})
public void deserializeBody(InputStream stream, TLContext context) throws IOException {
flags = readInt(stream);
noWebpage = (flags & 2) != 0;
id = readTLObject(stream, context, TLInputBotInlineMessageID.class, TLInputBotInlineMessageID.CONSTRUCTOR_ID);
message = (flags & 2048) != 0 ? readTLString(stream) : null;
replyMarkup = (flags & 4) != 0 ? readTLObject(stream, context, TLAbsReplyMarkup.class, -1) : null;
entities = (flags & 8) != 0 ? readTLVector(stream, context) : null;
}
@Override
public int computeSerializedSize() {
computeFlags();
int size = SIZE_CONSTRUCTOR_ID;
size += SIZE_INT32;
size += id.computeSerializedSize();
if ((flags & 2048) != 0) {
if (message == null) throwNullFieldException("message", flags);
size += computeTLStringSerializedSize(message);
}
if ((flags & 4) != 0) {
if (replyMarkup == null) throwNullFieldException("replyMarkup", flags);
size += replyMarkup.computeSerializedSize();
}
if ((flags & 8) != 0) {
if (entities == null) throwNullFieldException("entities", flags);
size += entities.computeSerializedSize();
}
return size;
}
@Override
public String toString() {
return _constructor;
}
@Override
public int getConstructorId() {
return CONSTRUCTOR_ID;
}
public boolean getNoWebpage() {
return noWebpage;
}
public void setNoWebpage(boolean noWebpage) {
this.noWebpage = noWebpage;
}
public TLInputBotInlineMessageID getId() {
return id;
}
public void setId(TLInputBotInlineMessageID id) {
this.id = id;
}
public String getMessage() {
return message;
}
public void setMessage(String message) {
this.message = message;
}
public TLAbsReplyMarkup getReplyMarkup() {
return replyMarkup;
}
public void setReplyMarkup(TLAbsReplyMarkup replyMarkup) {
this.replyMarkup = replyMarkup;
}
public TLVector<TLAbsMessageEntity> getEntities() {
return entities;
}
public void setEntities(TLVector<TLAbsMessageEntity> entities) {
this.entities = entities;
}
}
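The `computeFlags`/`serializeBody` pair above is an instance of TL's conditional-field scheme: each optional field owns one flag bit, the bit is set only when the field is non-null, and the body is written and read only for fields whose bit is set. A minimal, self-contained sketch of the same pattern (the `FlagsDemo` class and its signature are illustrative, not part of kotlogram):

```java
// Standalone sketch of the TL conditional-field ("flags") pattern used by the
// generated class above. Bit positions mirror computeFlags(): bit 1 (2) =
// noWebpage, bit 11 (2048) = message, bit 2 (4) = replyMarkup, bit 3 (8) = entities.
public class FlagsDemo {
    public static int computeFlags(boolean noWebpage, String message,
                                   Object replyMarkup, Object entities) {
        int flags = 0;
        if (noWebpage)           flags |= 1 << 1;   // 2
        if (message != null)     flags |= 1 << 11;  // 2048
        if (replyMarkup != null) flags |= 1 << 2;   // 4
        if (entities != null)    flags |= 1 << 3;   // 8
        return flags;
    }

    public static void main(String[] args) {
        // noWebpage set and a message present; markup and entities absent:
        int flags = computeFlags(true, "edited text", null, null);
        System.out.println(flags);               // prints 2050 (2 | 2048)
        // A reader checks the same bits before reading each optional field:
        System.out.println((flags & 2048) != 0); // prints true  (message present)
        System.out.println((flags & 4) != 0);    // prints false (no replyMarkup)
    }
}
```

Because both sides derive field presence from the same bits, a writer and reader built this way stay in sync without any length prefixes for the optional fields.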
|
Prolonged mechanical support of the left ventricle.
An interdisciplinary group has developed a left ventricular assist pump system composed of a modified sac type pump, a pneumatic power unit, and a synchronizer. The pump fills from the left ventricle and discharges into the aorta. The system was employed for left ventricular assistance in a series of 12 normal calves, with an average pumping period of 70 +/- 8 days. The system was then evaluated in a series of calves in whom profound left ventricular failure had been produced. These studies indicate that the assist pump is effective in supporting the circulation and completely unloading the left ventricle. The assist system has now been employed in four patients who could not be weaned from cardiopulmonary bypass following cardiac valve replacement. The assist pump supported the circulation in three instances. In one patient, the assist pump was employed for 8 days until left ventricular function had improved sufficiently to permit pump removal; the patient was subsequently discharged from the hospital. |
/*
* GRUB -- GRand Unified Bootloader
* Copyright (C) 2002,2005,2006,2007 Free Software Foundation, Inc.
*
* GRUB is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* GRUB is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with GRUB. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef GRUB_EFI_CONSOLE_HEADER
#define GRUB_EFI_CONSOLE_HEADER 1
#include <grub/types.h>
#include <grub/symbol.h>
/* Initialize the console system. */
void grub_console_init (void);
/* Finish the console system. */
void grub_console_fini (void);
#endif /* ! GRUB_EFI_CONSOLE_HEADER */
|
For someone who has never been officially employed by the New England Patriots, Alex Guerrero has become a big figure in the Patriots’ dynasty.
Guerrero, famous for being Tom Brady’s personal trainer, appears to be at least somewhat involved in what looks like friction between the Patriots and tight end Rob Gronkowski.
Karen Guregian of the Boston Herald wrote that coach Bill Belichick “chastised” Gronkowski in front of the other Patriots players last season for being a client of TB12, the sports therapy center that Brady uses and that was co-founded by Guerrero. Guregian speculated that this was perhaps Belichick’s way of keeping players from leaving the team’s training staff for Guerrero, and that it might be the source of friction between the Patriots and Gronkowski, who still hasn’t officially said he won’t retire (though there’s no clear sign he’ll step away this offseason). NBC Sports Boston’s Tom Curran said earlier this offseason that Gronkowski felt “singled out” over the Guerrero issue. Mike Giardi of NBC Sports Boston said Brady and Gronkowski were “miserable” last season and that a lot of it was over the Guerrero issue.
The Guerrero-Belichick dynamic isn’t new. Last season, Guerrero had his sideline access revoked and was told he couldn’t treat players other than Brady at the team’s headquarters anymore. The memorable and explosive ESPN story about the tension between Brady, Belichick and Patriots owner Robert Kraft in early January spent a lot of time on Guerrero. In the story, his presence was seen as being divisive among players. They didn’t know whether to use Guerrero or the team’s medical and training staff.
And Gronkowski is in that web too.
Gronkowski indicated after the Super Bowl that he might retire. Since then, speculation has gone in many directions, mostly because Gronkowski hasn’t really said anything and the Patriots never say anything substantial. Gronkowski has taken a lot of punishment in his career, and that probably weighs on him. Perhaps he wants a pay raise, and that would fix everything. We’ve seen offseason drama get fixed very easily with money. But it seems there are some issues between the team and its star tight end. None of the drama mattered much last season, as the Patriots came a couple of minutes away from winning another Super Bowl. Belichick calls out players all the time; that has been well established. But the Guerrero issue seems to be looming over the Patriots longer than most people could have assumed.
Perhaps this is just another offseason story and it won’t ultimately matter to how long Gronkowski plays or how many games the Patriots win. But if Gronkowski ends up retiring still within his prime, whether that’s this offseason or a couple years from now, there will be interesting questions about what drove him away.
Patriots tight end Rob Gronkowski started an offseason of speculation by saying after the Super Bowl he’d consider retirement. (AP) |
The present invention relates to the combination of a cap and flashlight holding construction. More particularly, the invention relates to the combination of a baseball type cap and a pair of flashlights secured to the sides of the cap by flashlight attachment means.
There are many tasks which are performed at night or in a dark environment which require the use of a flashlight. Examples are numerous, but those that often arise are the tasks associated with hunting or fishing at night, exercising at night, working on an automobile engine or the like, and performing household or other repairs in a dark space. It has been a problem in the past that individuals performing these or similar tasks have had difficulty in controlling a flashlight while attempting to perform the task. It has also been a problem in the past that many of the aforementioned tasks, especially those involving repairs, require accurate placement of the beam of light. In the situation where the individual is attempting to hold the beam of light while conducting repairs such accuracy has been difficult to maintain.
It has long been the practice to mount a light source to various types of headgear. As an example, it is well known in the prior art for miners to rigidly mount carbide lamps on their hard hats. More recently, battery powered lamps have replaced the carbide lamps, but it has continued to be the practice to rigidly attach the various types of lamps to the hard hat. These various types of coal miner lamps enable the coal miner to freely use both of his hands while working. This type of mounting is an effective means of securing a light source to a hard hat. As one would expect, however, the hard hat, the mounting structure, and the lamp itself are heavy and cumbersome to use as well as being expensive. An additional problem associated with a heavy light mounted on a hard hat is that typically the hard hat/light combination is ill fitting and does not allow for minor adjustments in the direction of the beam of light to be made.
The prior art also contains numerous examples of light sources being attached to headbands of various sorts. U.S. Pat. No. 4,462,064 issued to Schweitzer discloses an elastic headband with an adhesively bonded tubular clip orientated laterally and transversely on the headband by a wedge whereby friction retains the flashlight in the clip. However, Schweitzer's invention does not embody means for retaining more than one flashlight thereon.
U.S. Pat. No. 4,718,126 issued to Slay discloses an apparatus for holding a flashlight comprising a longitudinally aligned first and a second strap, and a section of elastic material affixed to the second strap and arranged so as to define an expandable flashlight receiving hole suitable for receiving a flashlight.
Although a headband provides a suitable means of supporting a light source, a headband does not readily allow for minor adjustments in the fit and position of the headband to be made as is possible with a typical baseball cap. A headband also lacks the forward extending bill of a baseball cap, as well as the front strap present on many baseball caps, both of which provide support and serve to secure the light source, and, in particular, the light emitting forward end of the light source when the wearer is in motion. Because of these various limitations, the typical headband is only capable of supporting a single light source and the size of the light source, and thus the intensity and brightness of the light emitted, is limited by the minimal supporting apparatus provided by a headband.
In order to overcome the various problems associated with the prior art, it is desirable to provide a lightweight and inexpensive means of attaching a light to a baseball-style cap. A cap having a pair of flashlights attached to the sides provides tremendous utility and versatility for an individual attempting to work in an area with a limited supply of light, such as under a sink or in a similarly confined area. As another example, night fishermen commonly wear a baseball or similar cap formed entirely of cloth or with a cloth bill or brim and a plastic mesh crown. A baseball cap having the attachment means claimed herein is capable of supporting a larger-diameter and thus brighter flashlight than the various support structures of the prior art. The use of a baseball cap as a support mechanism as disclosed and claimed herein also allows for a much greater level of comfort for the wearer than the various support mechanisms of the prior art.
Jack F. Shaw
John F. "Jack" Shaw (June 1, 1938 – January 9, 2009) was a Western Michigan University track and cross-country coach whose tenure spanned 32 years. Shaw took over the head coaching reins from George Dales in 1970; he retired from the position in June 2002. Shaw was born in Kane, Pennsylvania; he died at the age of 70, in Kalamazoo, Michigan, on January 9, 2009.
Mentor to 56 All-Americans at WMU
As Men's Head Coach at Western Michigan University, Shaw was named Mid-American Conference Coach of the Year six times in outdoor track, five times in cross country, and once for indoor track. A four-time recipient of the Central Collegiate Conference Coach of the Year award, he was also named National Collegiate Athletic Association District IV Coach of the Year in 1995. Shaw was enshrined in the University's Athletic Hall of Fame in 1997, after leading the WMU Bronco men's track and field team to six Mid-American Conference outdoor championships - including back-to-back titles in 1995 and 1996. WMU also finished as the MAC runner-up nine times with Shaw at the helm.
During Shaw's tenure, the University's indoor track and field team finished in the top-three (at the MAC Invitational) on six occasions. Western Michigan University also claimed five MAC cross-country titles during Shaw's career; including consecutive championships in 1976 and 1977. With Shaw's guidance, the Broncos won two Central Collegiate Conference outdoor track titles, in 1995 and 1996; one indoor CCC title in 1993, and three CCC cross country crowns in 1970, 1996 and 1999. Nationally, Jack Shaw's squads placed fifth in 1970 and 12th in 1989 at the NCAA Division I Cross Country Championships.
Shaw produced more NCAA All-Americans than any other coach in Western Michigan University history, with a total of fifty-six. Twenty-nine of Shaw's outdoor track and field athletes earned All-America Team recognition; as did twenty-one indoor track athletes and six cross country runners.
Early career and education
Prior to arriving at Western Michigan, Shaw had served as an assistant coach for the University of Pittsburgh, for Marshall University, and Ohio University; Jack had arrived on the collegiate scene after guiding Warren High School of Pennsylvania to a state cross-country championship in 1966.
Jack Shaw earned his master's degree from Western Michigan University, and his bachelor's degree (in Geology) from Muskingum College of Ohio. As an undergraduate, Shaw was captain of the Muskingum track and field team; in 1960, Jack established a varsity record in the 120-yard hurdles. Shaw also served in the United States Army from 1962 until 1968; it was during his military service that Jack developed and nurtured a desire to coach.
The 1971 marriage of Jack and Karen Olson-Shaw brought two sons, Scott and Timothy. Shaw's legacy is also carried on by WMU; the annual home outdoor track meet — The Jack Shaw Classic — takes place every spring at Western's Kanley Track Stadium.
Shaw's NCAA All-Americans
Outdoor Track and Field
2002 Dale Cowper, 13th, hammer throw, 208-0
1998 Phil McMullen, 5th, decathlon, 7,613
1997 Phil McMullen, 2nd, decathlon, 7,731
1996 Burger Lambrechts, 8th, shot put, 61-0.5
1995 Jeff Brandenburg, 8th, shot put, 59-11.75
1995 Brian Keane, 11th, javelin, 221-6
1993 Brian Keane, 4th, javelin, 235-1
1993 Nate Langlois, 10th, 200 meters, 20.79
1992 Vinton Bennett, 3rd, high jump, 7-4.25
1992 Brian Keane, 7th, javelin, 224-4
1991 Vinton Bennett, 3rd, high jump, 7-5.25
1990 Jesse McGuire, 7th, 10,000 meters, 29:05.09
1985 Alex Washington, 6th, 110 meter Hurdles, 13.74 (w)
1983 Alex Washington, 10th, 110 meter Hurdles, 13.7 (ht), 14.01*
1982 Jack Mclntosh, 2nd, 800 meters, 1:48.1
1981 Chuck Greene, 8th, javelin, 249-1
1980 Jack Mclntosh, 7th, 800 meters, 1:49.86
1979 Jack Mclntosh, 2nd, 800 meters, 1:46.76
1978 Ron Parisi, 6th, javelin, 248-4
1977 Tom Duits, 5th, 1500 meters, 3:41.3
1971 Jeromee Liebenberg, 3rd, 3000 meter steeplechase, 8:37.0
1971 John Bennett, 6th, six-mile, 27:54.3
Indoor Track and Field
2001 Dale Cowper, 12th, weight throw, 66-11.5
1996 Burger Lambrechts, 3rd, shot put, 61-9.5
1995 Jeff Brandenburg, 7th, shot put, 58-9.75
1994 Jeff Brandenburg, 7th, shot put, 60-4
1993 Nate Langlois, 7th, 200 meters, 21.38
1988 Robert Louis, 5th, 200 meters, 21.34
1988 Jamie Hence, 5th, 55 meter Hurdles, 7.38
1985 Tom Broekema, Robert Louis, Brad Mora, Eric Teutsch - 5th, distance medley relay, 9:42.53
1985 Alex Washington, 8th, 55 meter Hurdles, 7.32
1982 Mike Fowler, Gordon Mclntosh, Kurt Liechty, Jack Mclntosh - 6th, two-mile relay, 7:30.60
1981 Mike Ericksen, 7th, 600 meters, 1:12.11
1981 Dave Beauchamp, Gordon Mclntosh, Dana Houston, Kurt Liechty - 3rd, two-mile relay, 7:34.15
1979 Dave Beauchamp, Mike Karasiewicz, Mike Thompson, Jack Mclntosh - 2nd, two-mile relay, 7:31.9
1978 Tom Duits, 5th, mile, 4:13.45
1976 Mike Schomer, 2nd, weight throw, 62-5.75
1972 Gary Harris, 2nd, two-mile, 8:37.4
Cross-Country
1989 Jesse McGuire, 14th, 30:09.34
1989 Brad Kirk, 33rd, 30:36
1988 Jesse McGuire, 26th, 30:01
1976 Tom Duits, 35th, 29:32.98
1970 Jeromee Liebenberg, 14th, 28:46
1970 Gary Harris, 18th, 28:50
Category:1938 births
Category:2009 deaths
Category:College track and field coaches in the United States |
One of the most significant developments in semiconductor technology in recent years has been the increased use and importance of compound semiconductors, particularly the group III-V compounds composed of elements III and V of the periodic table such as gallium arsenide and indium phosphide. Such materials are used, for example, for making lasers, light emitting diodes, microwave oscillators and light detectors. Also promising are the group II-VI compounds such as cadmium telluride which may be used for making light detectors and other devices.
Most commercial use of compound semiconductors requires the growth of large single-crystal ingots from which wafers can be cut for the subsequent fabrication of useful devices. One of the more promising methods for such crystal growth is the vertical gradient freeze (VGF) method, particularly the VGF method described in the U.S. patent of W. A. Gault, U.S. Pat. No. 4,404,172, granted Sept. 13, 1983, and the paper, "The Novel Application of the Vertical Gradient Freeze Method to the Growth of High Quality III-V Crystals," by W. A. Gault et al., Journal of Crystal Growth, Vol. 74, pp. 491-506, 1986, both of which are hereby incorporated herein by reference. According to this method, raw semiconductor material is placed in a vertically extending crucible, typically of pyrolytic boron nitride, which includes a small crystal seed well portion at its bottom end snugly containing a monocrystalline seed crystal. Initially, the raw material and a portion of the seed crystal are melted. An encapsulant material such as boric oxide can be included to aid in containing volatile vapors within the melt. The temperature of the system is then reduced in such a manner that freezing proceeds vertically upwardly from the seed crystal, with the crystal structure of the grown ingot corresponding to that of the seed crystal.
It is known that the high thermal conductivity of the pyrolytic boron nitride (PBN) crucible relative to the semiconductor melt creates a concave solid-liquid interface near the crucible wall during crystal growth. It is believed that this shape, in combination with chemical interaction at the crucible wall and other parameters such as the stability of the growth rate, can be a cause of a crystallographic dislocation defect known as "twinning" when one attempts to grow the crystal, in particular, an indium phosphide crystal, in the <100> crystallographic orientation. Growth in the <100> orientation is desired because wafers can thereafter be cut perpendicularly to the ingot axis to obtain the appropriate crystallographic surface for device processing.
A common solution to the problem of twinning is to orient the seed crystal in a PBN crucible such that the semiconductor crystal grows in the <111> crystallographic direction (or, more specifically, the <111>.sub.B direction), which has been recognized to avoid or to reduce the incidence of twinning. When this is done, however, the wafer must be cut at an angle with respect to the ingot central axis of about 35.3 degrees so that the upper surfaces of such wafers lie in a crystallographic plane that is appropriate for device fabrication. Since the ingot is cylindrical, the slices or wafers cut at this angle are elliptical. It is difficult to use elliptically shaped semiconductor wafers efficiently and a significant wastage of usable semiconductor wafer area inherently accompanies their use.
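The quoted cut angle of about 35.3 degrees can be checked directly from standard cubic crystallography; the short derivation below is added as an illustration and is not part of the patent text:

```latex
\cos\theta = \frac{[111]\cdot[100]}{\lVert[111]\rVert\,\lVert[100]\rVert}
           = \frac{1}{\sqrt{3}}
\quad\Longrightarrow\quad
\theta \approx 54.7^\circ
```

The {100} cutting plane, whose normal is [100], therefore lies at roughly 90° − 54.7° ≈ 35.3° to the ingot's central <111> growth axis, matching the angle given above.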
Workers have alternatively tried to overcome the twinning problem by using quartz crucibles which have a much lower thermal conductivity and a different surface chemistry than that of the PBN crucible. However, quartz crucibles raise a new set of problems. During ingot cooling, there is a strong adherence between the glass crucible and the ingot by way of an intermediate layer of boric oxide, the material used as an encapsulant. The cooling results in a differential thermal contraction of the crucible, the boric oxide, and the ingot, which creates stresses that tend to fracture the ingot, rendering it in most cases useless.
Accordingly, there has been a long-felt need for a method for reducing the number of defects in grown compound semiconductor ingots. There has also been a need for such a method that is consistent with the growth of ingots in the <100> crystallographic direction. |
The Clackamas County Soil and Water Conservation District is pleased to publish a Request for Qualifications (RFQ) for architectural design services. The District is seeking a firm that can develop a conceptual design for a permanent headquarters office and meeting facility. In response to changing conditions and the growth of the District’s programs, the District has moved from place to place over the last few decades. […]
The Clackamas County Soil and Water Conservation District prohibits discrimination against its customers, employees, and applicants for employment on the basis of race, color, national origin, age, disability, sex, gender identity, religion, reprisal, and where applicable, political beliefs, marital status, familial or parental status, sexual orientation, or all or part of an individual’s income is derived from any public assistance program, or protected genetic information in employment or in any program or activity conducted or funded by the District. The District is an Equal Opportunity Employer. |
Bill Gross puts U.S. on notice about debt binge
If the bond vigilantes are ready to ride again, there should be little doubt who will be leading the charge.
Bond guru Bill Gross at Pimco in Newport Beach this week has ramped up his warnings to the Obama administration and the Federal Reserve about the perils of unfettered government borrowing.
In an interview in Time magazine on Tuesday, Gross suggested that Pimco, which manages nearly $1 trillion in mostly fixed-income assets, now feels more comfortable owning German government debt than U.S. Treasury debt:
"There are a number of reasons to have doubts about Treasuries, not just because of America's sovereign risk but also from the standpoint of an over-owned currency [the dollar]. . . . At Pimco we would probably try and substitute for our Treasuries with sovereign bonds of potentially higher quality. Germany looks interesting to us. Germany has problems, but it's in a much better budget situation than the U.S. because of a constitutional amendment three months ago that forces a balanced budget in four years."
Gross, 65, continued with that theme in his January commentary on Pimco’s website, published Wednesday. “The fact is that investors, much like national citizens, need to be vigilant and there has been a decided lack of vigilance in recent years from both camps in the U.S.,” he wrote. “The shifting of private investment dollars to more fiscally responsible government bond markets may make for a very real outcome in 2010 and beyond.”
Historically, “vigilance” on the part of bond investors has meant driving up interest rates to levels that, at least in theory, should force governments to rein in their borrowing and spending.
Pimco in November sharply cut back on holdings of government securities in the firm’s flagship bond fund, Pimco Total Return. It was a smart move: The yield on the 10-year Treasury note has surged from 3.2% at the end of November to 3.8% now, devaluing outstanding Treasury issues.
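The size of the loss Pimco sidestepped can be sketched with the standard modified-duration approximation; the duration figure below is an assumed, round illustrative value for a 10-year note, not taken from the article:

```latex
\frac{\Delta P}{P} \;\approx\; -D_{\text{mod}}\,\Delta y
\;\approx\; -(8)\,(0.038 - 0.032)
\;\approx\; -4.8\%
```

That is, the 0.6-percentage-point rise in the 10-year yield since the end of November implies roughly a 5% price decline on outstanding notes with a modified duration near 8.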
The next big test for the Treasury market comes on Friday, when the government will report on December employment trends. A better-than-expected report could send bond yields higher if investors figure the economy is continuing to recover, putting upward pressure on all interest rates. |
IN THE COURT OF APPEALS OF THE STATE OF WASHINGTON
DONALD BURDICK; SUSAN
BYINGTON; LISA CARFAGNO; PETER DIVISION ONE
and JANICE ELLIOT, and their marital
community; BERNARD E. GOLDBERG; No. 73459-8-
PAUL E. GOLSTEIN; TOM and LaVOE
MULGREW, and their marital
community; SUSAN ROSEN; MARTIN
SILVERMAN; SHARON SILVERMAN; UNPUBLISHED OPINION
and BARRY and ROBIN STUCK, and
their marital community,
Appellants,
v.
ROSENTHAL COLLINS GROUP, LLC,
an Illinois limited liability corporation,
Respondent. FILED: May 31, 2016
Dwyer, J. —This appeal arises from a trial court order granting summary
judgment dismissal of securities and negligence claims brought against
Rosenthal Collins Group, LLC (RCG) for its alleged role in a Ponzi scheme fraud
perpetrated by Enrique Villalba.1 Because RCG was not involved in the sale of
the securities herein at issue and owed no special duty to the investors in
Villalba's scheme, we affirm.
1 Relief is also sought from a protective order obtained by RCG prohibiting discovery of
certain information related to the account involved in the fraud. We conclude that this order was
proper.
No. 73459-8-1/2
A. Villalba's Ponzi Scheme
This case begins with the collapse of a Ponzi scheme perpetrated by
Villalba, through his company Money Market Alternatives, LLC (MMA), from late
1996 until September 2009.
Villalba held himself out to investors as an "investment manager" who
managed his clients' assets in accordance with their individual investment
objectives and by utilizing his trading strategy, which he referred to as the
"Money Market Plus Method." In reality, Villalba stole the money that he was
supposedly managing. After receiving investors' funds into his bank accounts,
Villalba used the funds to, among other things, pay himself huge management
fees, fund his lavish lifestyle and other business ventures, and make over $3
million in Ponzi-type payments to other investors. Villalba concealed his theft
from his clients with lies and false account statements reflecting steady gains in
their accounts.2 Based upon these fake statements and believing Villalba was
earning impressive returns, investors sent more and more money to Villalba for
him to manage on their behalf.
The 26 victims of the fraud, who include the appellants herein, lost more
than $30 million.
B. Appellants Invest With Villalba
Appellants (the investors) hired Villalba to manage their money and
deposited funds with him at different times between 1996 and 2009.
2 There is no dispute that RCG played no role in creating (and had no knowledge of)
these fake account statements.
The investors had different relationships with Villalba and different
understandings of how he would manage their money. Bernard Goldberg, for
example, met Villalba years before Villalba opened an account at RCG.
Goldberg and Villalba formed a general partnership in 1996, through which
Goldberg effectively hired Villalba to manage certain assets in return for a share
of the trading profits. Given his close, longstanding relationship with Villalba,
Goldberg was able to convince many of his friends to hire Villalba as their
investment advisor, including (directly or indirectly) all of the other investors in
this case.
After being introduced to Villalba, the other investors each entered into
Investment Management Agreements (IMA) with Villalba. The IMAs detailed
Villalba's role as "investment manager" of individually managed accounts and
expressly provided the investor with the right to manage his or her own account
and change the investment strategy to conform with his or her investment
objectives. The IMAs also gave each investor the right to choose or change the
brokerage firm handling the investor's individual account.
The IMAs made no mention of RCG3 and, by and large, the investors had
no knowledge of the brokerage firms that Villalba was using. The investors
typically wired money to Villalba by sending money directly to one of his bank
accounts. Villalba then transferred money from MMA's bank accounts to futures
accounts in MMA's name, including one at RCG, to trade futures. None of the
investors sent any money to RCG. Indeed, the investors admitted that they had
3 RCG also had no knowledge of the IMAs.
no interaction whatsoever with RCG, that they never had a written agreement
that mentioned RCG, and that RCG played no role in their decision to invest or in
the sale of securities.
C. Villalba's Futures Trading
In June 1998, 18 months after the first of the investors invested with
Villalba, RCG agreed to open a nondiscretionary commodity futures trading
account for MMA. RCG is a Futures Commission Merchant (FCM) registered
with the Commodities Futures Trading Commission (CFTC) and the National
Futures Association (NFA) to conduct trading of futures contracts. As a
"nondiscretionary" customer, MMA retained complete control over its futures
account and had full responsibility and liability for all trading decisions.
RCG reviewed an offering circular that Villalba prepared to help him solicit
$100 million from investors for the MMA account.4 According to the investment
plan described in the circular, funds from Villalba's customers would be pooled to
invest in treasury bills or money market funds "within a vehicle similar to a mutual
fund." Villalba would also occasionally5 purchase S&P 500 futures contracts
based on his purported expertise in predicting certain market trends. Those
transactions would supposedly add 2 percent to 5 percent additional value for his
customers. The investment would have "minimal" risk, it asserted, because the
futures transactions would be made with "little or no leverage" and stop orders
would be used to limit losses.
4 None of the investors ever saw, received, or signed any subscription agreement or
offering circular relating to their investment.
5 The timing was variously described as "a few days per month," "on average a week per
month," and "approximately [1]0% of the year."
The circular claimed that the fund was not subject to state or federal
regulation. RCG recognized, however, that because it would contain pooled
investments, the fund would constitute a commodity pool.6 That made Villalba,
or his company, a commodity pool operator. Neither were registered as
commodity pool operators as required by the CFTC.
A form was provided to Villalba with the new account documents
identifying two potential exceptions to the registration requirement. RCG's file
shows that Villalba selected an exemption that was only applicable if he neither
received any direct or indirect compensation for managing the anticipated $100
million pool, nor advertised for participants. The circular stated, however, that he
expected to receive management fees from the proceeds and that MMA would
be "offering these securities to the public."
RCG's compliance procedures mandate that a new account should not be
opened if illegal activity is suspected. After RCG's review of the offering circular
and the other information provided by Villalba, it opened the MMA account for
trading.
Villalba never followed his purported investment plan. Instead of keeping
the investors' money in treasury bills with occasional transactions in S&P 500
futures contracts, Villalba traded futures with RCG almost daily. Also, the trades
were highly leveraged and risky. The promised "stop orders" to limit losses were
not used. Single day losses of more than $100,000 were not uncommon and, in
March 2008 alone, the MMA account lost more than $9 million.
6 A "commodity pool" or "pool" is "any investment trust, syndicate or similar form of
enterprise operated for the purpose of trading commodity interests." 17 C.F.R. § 4.10(d)(1).
Essentially, it is the futures industry equivalent of a mutual fund.
Villalba's scheme began to unravel in 2009, after he suffered significant
trading losses, making it difficult for him to pay investors as they requested their
money back. Villalba closed his RCG account in June 2009. Around that time,
Villalba opened a new futures account at a different firm. In early September
2009, Villalba started ignoring his clients' phone calls and e-mails, arousing their
suspicions. By September 2010, after an investigation by the Securities and
Exchange Commission and the Federal Bureau of Investigation, Villalba pleaded
guilty to felony wire fraud and was ordered to pay over $30 million in restitution
and sentenced to almost nine years in federal prison.
D. The CFTC investigates RCG
Shortly after Villalba was convicted, the CFTC investigated RCG's role in
Villalba's fraud. In April 2012, RCG entered into a consent order with the CFTC
related to its handling of the MMA account. The CFTC found that RCG ignored
many "red flags" appearing in the account records and that it should have acted
in light of "the lack of regard for trading losses, commissions, and fees in the
MMA account." As part of the settlement offer underlying the order, RCG did not
admit or deny these findings.7
7 As the trial court below recognized, such consent judgments are not admissible
evidence of the allegations stated therein. See In re Platinum and Palladium Commodities
Litigation, 828 F. Supp. 2d 588 (S.D.N.Y. 2011) (striking references to a CFTC order from civil
complaint); Carpenters Health & Welfare Fund v. The Coca-Cola Co., 2008 WL 9358563, *3
(N.D. Ga. Apr. 23, 2008) (a consent judgment "falls squarely into the class of evidence deemed
inadmissible pursuant to Rule 408"). This is so because of the "high public policy value of
encouraging entities ... to settle their disputes with ... governmental agencies," and the "chilling
effect" that "would likely" result from admitting the consent judgment as evidence of wrongdoing
by private litigants. Coca-Cola, 2008 WL 9358563, at *3; see also In re Blech Sec. Litig., 2003
WL 1610775 (S.D.N.Y. Mar. 26, 2003); N.J. Turnpike Auth. v. PPG Indus., Inc., 16 F. Supp. 2d
460, 474 (D.N.J. 1998).
E. Procedural history
The investors filed a motion for summary judgment, seeking a ruling that
their transactions with Villalba were securities under multiple state securities
acts, including the Securities Act of Washington, chapter 21.20 RCW, and
the Ohio Securities Act, chapter 1707 Ohio Rev. Code Ann. The trial court
granted that motion, except as to the investments made by Goldberg.8
RCG filed two summary judgment motions. The first sought a ruling that
claims for some transactions were barred under the Ohio and California statutes
of repose. The investors conceded the claims under California's securities act,
but contested the applicability of the Ohio provision. That motion was not
decided because the trial court granted RCG's second motion for summary
judgment in an order that: (1) ruled that the investors could bring claims under
the Ohio securities act, (2) dismissed the investors' securities claims, holding that
RCG could not be secondarily liable for Villalba's violations of the securities acts,
and (3) dismissed the investors' claims for negligent supervision of the account
and violation of the Washington Consumer Protection Act. The trial court did not
rule on RCG's claim that the state securities acts were preempted by the
Commodities Exchange Act.
Prior to the filing of the summary judgment motions, RCG moved the trial
court for a protective order from the investors' discovery inquiries concerning
RCG's suspicious activity monitoring and investigation practices, particularly
regarding the MMA account, under the federal Bank Secrecy Act (BSA), 31
8 The court ruled that his investments were not securities.
U.S.C. § 5318(g). The court entered that order on March 9, 2015. The investors
then moved the court to modify the protective order. On April 23, 2015, the trial
court modified the protective order to exclude from its scope any information that
was already publicly available or in the investors' possession. The investors also
appeal from that modified order.
II
The investors contend that the trial court erred by granting summary
judgment dismissal of their state securities claims. This is so, they assert,
because RCG is secondarily liable to the investors under the Washington
securities act for its role in Villalba's fraud. We disagree.
Our review is de novo. Lokan & Assocs., Inc. v. Am. Beef Processing,
LLC, 177 Wn. App. 490, 495, 311 P.3d 1285 (2013). When reviewing an order
granting summary judgment, we engage in the same inquiry as the trial court,
viewing the facts and all reasonable inferences therefrom in the light most
favorable to the nonmoving party. Brown v. Brown, 157 Wn. App. 803, 812, 239
P.3d 602 (2010). "Summary judgment is appropriate if the pleadings, affidavits,
depositions, answers to interrogatories, and admissions on file show that there is
no genuine issue of material fact and that the moving party is entitled to judgment
as a matter of law." Keithly v. Sanders, 170 Wn. App. 683, 686, 285 P.3d 225
(2012) (citing CR 56(c)).
The investors claim that RCG is liable under RCW 21.20.430, subsections
(1) and (3).
RCW 21.20.430(1), which pertains to seller liability, provides, in pertinent
part:
Any person, who offers or sells a security in violation of any
provisions of RCW 21.20.010, 21.20.140(1) or (2), or 21.20.180
through 21.20.230,[9] is liable to the person buying the security from
him or her.
"'[L]iability may be imposed [under this provision] on a person in addition
to the immediate seller if the person's participation was a substantial contributive
factor in the violation.'" Haberman v. Wash. Pub. Power Supply Svs., 109 Wn.2d
107, 130, 744 P.2d 1032, 750 P.2d 254 (1987) (emphasis added) (quoting
Uniform Securities Act, § 605 cmt., 7B U.L.A. 81 (Supp. 1987)).
RCW 21.20.430(3), which pertains to participant liability, provides, in
pertinent part:
[E]very broker-dealer . . . who materially aids in the transaction is
also liable jointly and severally with and to the same extent as the
seller or buyer, unless such person sustains the burden of proof
that he or she did not know, and in the exercise of reasonable care
could not have known, of the existence of the facts by reason of
which the liability is alleged to exist.[10]
(Emphasis added.)
Thus, to establish their claims under this provision, the investors were
required to show (1) that they purchased "securities," (2) that Villalba violated the
securities laws when he sold those securities to the investors, and (3) that RCG's
9 Application of this subsection is triggered by Villalba's violation of RCW 21.20.010
(securities sales involving fraud or deceit) and RCW 21.20.140 (sales of unregistered securities).
10 Liability under subsection (1) generally stems from being a seller/buyer, whereas
liability under subsection (3) generally stems from a party's formal relationship to a seller/buyer.
However, as our Supreme Court recognized in Haberman, 109 Wn.2d at 133, by expanding seller
liability to cover parties who were not actually sellers/buyers, but who substantially contributed to
the sales transaction, it created significant overlap between the parties liable under each of the
subsections.
involvement with the scheme was sufficient for secondary liability under either
the "substantial[] contribut[ion]" standard or the "material[] aid" standard.
Although the parties focus on the third component of the investors' claims,
we begin by briefly addressing the first two components, which help identify the
securities transaction to which RCG must have substantially contributed or given
material aid. As to the first component, "a security [is defined] as (1) an
investment of money (2) in a common enterprise and (3) the efforts of the
promoter or a third party must have been fundamentally significant ones that
affected the investment's success or failure." Ito Int'l Corp. v. Prescott, Inc., 83
Wn. App. 282, 291, 921 P.2d 566 (1996) (citing Cellular Eng'g, Ltd. v. O'Neill, 118
Wn.2d 16, 26-31, 820 P.2d 941 (1991)). The trial court granted the investors'
motion for summary judgment, ruling that the investors (except Goldberg)
purchased securities when they provided money to Villalba's MMA program. No
appeal was taken from that decision. Regarding the violation question, it is
uncontested that Villalba violated the Washington securities act by selling
unregistered securities and defrauding the investors.
As to the contribution standard, in Hines v. Data Line Systems, Inc., 114
Wn.2d 127, 149, 787 P.2d 8 (1990), the controlling case on this subject, our
Supreme Court held that service providers, such as RCG, are not a "substantial
contributive factor" in a securities offering (i.e., not a "seller"), absent some level
of "active participation" in the sales transaction itself. Thus, even though the law
firm in Hines had advised the issuer of the security, the court held that it was not
a "seller" because it had no "personal contact with any of the investors [and was
not] in any way involved in the solicitation process." Hines, 114 Wn.2d at 149.
We have consistently interpreted Hines to mean that a service provider is
not a "seller" under the law unless it "take[s].. . part in the actual sales process
by acting as the 'catalyst' between the [seller] and the [purchaser]." Brin v.
Stutzman, 89 Wn. App. 809, 830, 951 P.2d 291 (1998). Indeed, "'but for'
causation alone does not satisfy proximate causation" of the securities sales
transaction. Brin, 89 Wn. App. at 830 (citing Haberman, 109 Wn.2d at 131);
accord Viewpoint-North Stafford LLC v. CB Richard Ellis, Inc., 175 Wn. App. 189,
197, 303 P.3d 1096 (2013) (referring purchasers to an investment company was
not a "substantial contributive factor" in the sale); Shinn v. Thrust IV, Inc., 56 Wn.
App. 827, 851, 786 P.2d 285 (1990) (same).
No Washington appellate court has opined in any significant way on the
"materially aids" standard. However, other courts interpreting identical provisions
have required the material aid to be given in the course of the sales transaction.11
See, e.g., Benton v. Merrill Lvnch & Co., 524 F.3d 866, 871 (8th Cir. 2008) ("It is
not enough for the investors to allege [financial institution] was [investment
manager's broker-dealer; they must also allege [financial institution] materially
aided in the sale of the promissory notes." (emphasis added)); Katz v. Sunset
11 There does not appear to be similar consistency with regard to the quality of actions
that might constitute "material[] aid[]." Compare In re Nat'l Century Fin. Enters., Inc., 846 F.
Supp. 2d 828, 890 (S.D. Ohio 2012) ("Establishing that the act of assistance was material can be
satisfied by showing, among other things, the act influenced or induced the decision to purchase."
(citing analogous statutes in several states)) with Nicholas v. Saul Stone & Co. LLC, 1998 WL
34111036, *19 (D.N.J. June 30, 1998), aff'd, 224 F.3d 179 (3d Cir. 2000) ("To establish liability on
the part of a broker-dealer for 'materially aid[ing]' in the sale of a security, the plaintiff must
demonstrate that the broker-dealer's involvement in the sale is 'considerable, significant or
substantial.'" (alteration in original) (quoting Schor v. Hope, 1992 WL 22189, at *6 (E.D. Pa. Feb.
4, 1992))).
Fin. Servs., Inc., 650 F. Supp. 2d 962, 969 (D. Neb. 2009) ("The . . . [c]omplaint
is devoid of allegations that [broker-dealer] took any action that could be
construed as aiding [investment manager's sale of promissory notes to
Plaintiffs." (emphasis added)); Nicholas v. Saul Stone & Co. LLC, 1998 WL
34111036, *19 (D.N.J. June 30, 1998), aff'd, 224 F.3d 179 (3d Cir. 2000)
(analogous provision "requires that the offender must . . . 'materially aid' in the
sale of th[e] securities" (emphasis added)).
Thus, under either subsection, the substantial contribution must be made,
or the material aid given, in the course of the sales transaction. This insight
forecloses both of the investors' claims. RCG did not participate at all in
Villalba's sale of interests in MMA to the investors. The investors admit that RCG
did not factor into their decision to invest with Villalba. RCG did not issue,
promote, or solicit the sale of alleged securities and, in fact, had absolutely no
contact whatsoever with the investors. The securities sales were completed well
before Villalba would send any money to an account at RCG to trade futures.
Thus, RCG's role in the sale of the relevant securities was insufficient as a matter
of law.
Because RCG had no involvement whatsoever with Villalba's sale of
securities, the trial court's order granting summary judgment dismissal of the
investors' Washington securities act claims was proper.
Ill
The investors also brought claims pursuant to the Ohio securities act.
RCG contends that these duplicative claims are barred by Washington's well-
established conflict of laws principles. This is so, it asserts, because claims may
be brought pursuant to only one state's laws and, in this case, Washington law
applies.
In general,
[w]hen parties dispute choice of law, there must be an actual
conflict between the laws or interests of Washington and the laws
or interests of another state before Washington courts will engage
in a conflict of laws analysis. Burnside v. Simpson Paper Co., 123
Wn.2d 93, 100-01, 864 P.2d 937 (1994). When the result of the
issues is different under the law of the two states, there is a "real"
conflict. Pacific Gamble Robinson Co. v. Lapp, 95 Wn.2d 341, 344-
45, 622 P.2d 850 (1980). The situation where laws or interests of
concerned states do not conflict is known as a "false" conflict.
Burnside, 123 Wn.2d at 101. If a false conflict exists, the
presumptive local law is applied. Rice v. Dow Chem. Co., 124
Wn.2d 205, 210, 875 P.2d 1213 (1994).
Seizer v. Sessions, 132 Wn.2d 642, 648-49, 940 P.2d 261 (1997) (emphasis
added); accord Woodward v. Taylor, 184 Wn.2d 911, 918, 366 P.3d 432 (2016)
("If there is no actual conflict, the local law of the forum applies and the court
does not reach the most significant relationship test."); Rice, 124 Wn.2d at 210
("To engage in a choice of law determination, there must first be an actual
conflict between the laws or interests of Washington and the laws or interests of
another state. Burnside[, 123 Wn.2d at 100-01]. Where there is no conflict
between the laws or interests of two states, the presumptive local law is applied.
Burnside, at 101.").
The investors acknowledge that there is no actual conflict between the
Washington and Ohio securities laws.12 Yet, they assert that the result of the
12 Indeed, the statutes share the same interest of protecting investors.
lack of conflict is that both laws apply. This, however, is not an option in the
standard framework.13
In effect, the investors are arguing for the adoption of the so-called "Blue
Sky exception." See Danielle Beth Rosenthal, Navigating the Stormy Skies: Blue
Sky Statutes & Conflict of Laws, 2:1 Stan. J. Complex Lit. 96 (2014). Under the
Blue Sky exception, state securities laws, also known as Blue Sky laws, are
treated as "additive rather than exclusive." Mass. Mut. Life Ins. Co. v.
Countrywide Fin. Corp., 2012 WL 1322884, *2 (C.D. Cal. Apr. 16, 2012). In
other words, just as a litigant can bring claims under both state law and federal
law, under the Blue Sky exception, so too can a litigant bring claims under
multiple states' securities laws. Simms Inv. Co. v. E.F. Hutton & Co., 699 F.
Supp. 543, 545 (M.D.N.C. 1988) ("[T]he securities laws of two or more states
may be applicable to a single transaction without presenting a conflict of laws
question."); Lintz v. Carey Manor Ltd., 613 F. Supp. 543, 551 (W.D. Va. 1985)
("Just as the same act can violate both federal and state law simultaneously, or a
state statute as well as state common law, so too can it violate several Blue Sky
laws simultaneously."). The Blue Sky exception appears to be the strong
majority rule. See Countrywide, 2012 WL 1322884, at *2 (referring to the
"growing weight of authority" applying the exception). However, no Washington
appellate court has directly addressed whether claims may be brought under
multiple states' securities laws.
13 RCG's contentions are similarly muddled. It asserts both that there is an actual conflict
between the securities laws of Washington and Ohio and that the outcome is the same under both
statutes (namely, that RCG is not secondarily liable for Villalba's fraud). Because an actual
conflict of laws requires that "the result of an issue is different under the laws of the interested
states," Woodward, 184 Wn.2d at 918, these positions are internally inconsistent.
The Washington case closest to the point is FutureSelect Portfolio Mgmt.,
Inc. v. Tremont Grp. Holdings, Inc., 180 Wn.2d 954, 331 P.3d 29 (2014). In that
case, a Washington purchaser asserted claims under the Washington securities
act against a New York seller. FutureSelect, 180 Wn.2d at 959. The New York
seller moved to dismiss, arguing that New York securities laws, which do not
recognize a private cause of action, controlled the plaintiff's claim. FutureSelect,
180 Wn.2d at 959. Given the actual conflict, the court engaged in a full-scale
conflict of law analysis, weighing the contacts with each state and each state's
interest in the dispute. FutureSelect, 180 Wn.2d at 967. The court ultimately
concluded that "Washington has a more compelling interest in protecting its
investors from fraud and misrepresentation than [the seller's state] does in
regulating sellers of securities that may have perpetrated [a] fraud or
misrepresentation in another state." FutureSelect, 180 Wn.2d at 970.
RCG contends that, by engaging in a full conflict of law analysis, the
FutureSelect court implicitly rejected the Blue Sky exception. Adopting the
investors' position, it asserts, would render unnecessary the conflict analysis
engaged in by the FutureSelect court. The investors contend, by contrast, that
FutureSelect is inapposite. A conflict analysis was required therein, they assert,
only because the New York securities law was offered to defeat the Washington
law claim, rather than to supplement it.
In truth, the FutureSelect opinion permits of both parties' readings. Thus,
there is no determinative Washington law on this issue.
As we demonstrate below, the result in this case would be the same
regardless of whether we decide this issue. Because it is unnecessary to the
case's resolution, our pronouncement—were we to make one—would be mere
dicta. For this reason, we decline to further address the question of the
applicability of the Blue Sky exception in Washington.
IV
The investors further contend that RCG is also liable under the Ohio
securities act. This is so, they assert, because it "participated or aided" Villalba
in making the sale. We disagree.
The Ohio securities act extends secondary liability for securities violations
to those who "participated in" the illegal sale or "aided the seller in any way."
[E]very sale or contract for sale made in violation of [the securities
law] is voidable at the election of the purchaser. The person making
such sale or contract for sale, and every person [who] has
participated in or aided the seller in any way in making such sale or
contract for sale, are jointly and severally liable to the purchaser, . .
. unless the court determines that the violation did not materially
affect the protection contemplated by the violated provision.
Ohio Rev. Code Ann. § 1707.43(A) (emphasis added).
The crux of secondary liability under section 1707.43 of the Ohio
securities act is participation or aid by the defendant in "making [the] sale." Ohio
Rev. Code Ann. § 1707.43(A). Although section 1707.43 extends liability to non-
sellers, the act "do[es] not impose liability on anyone who aided the seller 'in any
way.' Rather, [it] impose[s] liability on anyone who aided the seller in any way in
making an unlawful sale or contract for sale." In re Nat'l Century Fin. Enters.,
Inc. Inv. Litig., 2006 WL 2849784, *10 (S.D. Ohio Oct. 3, 2006).
The recent Ohio appellate court decision in Wells Fargo v. Smith, 2013
WL 938069 (Ohio Ct. App. Mar. 11, 2013), makes clear the importance of the
sales transaction. Therein, the court analyzed and synthesized all of the Ohio
cases applying Section 1707.43(A). Wells Fargo, 2013 WL 938069, at *5-6. The
court found that Ohio courts consider "several factors in deciding whether a
person or entity shall be responsible for the sale of illegal securities under [Ohio
Rev. Code Ann.] 1707.43(A)," all of which are directly connected to "making such
sale", including: (i) "relaying information, such as the proposed terms of the sale,
from the sellers to the investors," (ii) "arranging or attending meetings between
the investors and the sellers," (iii) "collecting money for investments," (iv)
"distributing promissory notes and other documents to the investors from the
sellers," (v) "distributing . . . payments to the investors," and (vi) "actively
marketing the security or preparing documents to attract investors." Wells Fargo,
2013 WL 938069, at *5.
As was explained above, in the context of the discussion of liability under
the Washington securities act, the investors did not proffer any evidence that
RCG "participated or aided" Villalba in "making [the] sale" of securities to them.
Thus, even if the Ohio securities act were applicable to this case, summary
judgment dismissal was properly granted on the investors' section 1707.43(A)
claims.
Because each of the investors' securities claims fails, as explained above,
determination of the conflict of law issue is unnecessary to the resolution of this
case and any explanation offered in response to that issue would constitute only
dicta.
V
The investors also contend that RCG is liable to them in tort for its role in
Villalba's fraud. This is so, they assert, because RCG's negligent supervision of
the MMA account facilitated Villalba's fraud. We disagree.
A negligence action may proceed only if the plaintiffs can establish that (1)
a duty of care was owed to them by the defendant; (2) there was a breach of that
duty; (3) that breach was the cause of their harm; and (4) they suffered injury as
a result. Keller v. City of Spokane, 146 Wn.2d 237, 242, 44 P.3d 845 (2002).
The only element at issue herein is the existence of a duty of care.
Our Supreme Court has repeatedly made clear that "there is no duty to
prevent a third party from intentionally harming another unless a 'special
relationship exists between the defendant and either the third party or the
foreseeable victim.'" Niece v. Elmview Grp. Home, 131 Wn.2d 39, 43, 929 P.2d
420 (1997) (internal quotation marks omitted) (quoting Hutchins v. 1001 Fourth
Ave. Assocs., 116 Wn.2d 217, 227, 802 P.2d 1360 (1991)); accord Folsom v.
Burger King, 135 Wn.2d 658, 674-75, 958 P.2d 301 (1998) (absent a special
relationship "no legal duty to come to the aid of a stranger exists"); Restatement
(Second) of Torts § 315.
Consistent with this principle, Washington follows the rule that financial
institutions do not owe a duty of care to protect non-customers from fraud. See,
e.g., Zabka v. Bank of Am. Corp., 131 Wn. App. 167, 173, 127 P.3d 722 (2005)
(bank owed no duty to defrauded investors absent a direct relationship). Zabka
illustrates the strength of this rule. Therein, investors sued Bank of America (BA)
in tort for its alleged role in a fraud perpetrated by one of the bank's customers
using an account at the bank. We held that the investors' negligence claims
were properly dismissed for failure to state a claim because the bank owed no
duty to the investors, with whom it had no relationship. This was our holding
despite evidence to support a finding that the bank had failed to meet certain
procedural and monitoring requirements with respect to the account. As we
stated:
There is evidence that BA failed to follow standard
procedures and monitor transactions according to its own internal
standards. BA's failures may have facilitated the theft of the
Zabkas' money, but BA did not have a duty to prevent their loss.
The trial court correctly dismissed the negligence claims on a CR
12(b)(6) motion.
Zabka, 131 Wn. App. at 173.
Our approach is in accordance with that taken across the country. Indeed,
every court to address the precise issue presented herein has held that FCMs
owe no duty to protect non-customers from a customer's fraud. See, e.g.,
Spitzer Mgmt., Inc. v. Interactive Brokers, LLC, 2013 WL 6827945, *4 (N.D. Ohio
Dec. 20, 2013) (FCM did not owe any duty of care to non-customer plaintiffs who
lost money in a Ponzi scheme); In re Agape Litig., 681 F. Supp. 2d 352, 357-58,
360 (E.D.N.Y. 2010) (same); Nicholas, 1998 WL 34111036, at *22 (same);
Kolbeck v. LIT Am., Inc., 923 F. Supp. 557, 571-72 (S.D.N.Y. 1996), aff'd, 152
F.3d 918 (2d Cir. 1998) (same); see also Frederick v. Smith, 7 A.3d 780, 783-84
(N.J. Super. 2010) ("[A] brokerage firm is under no obligation to be a fraud
watchdog for non-customers."); Bottom v. Bailey, 767 S.E.2d 883, 896-87 (N.C.
App. 2014) (a broker has no legal duty to "supervise" or "monitor" the
investments of its customers to protect its customers' clients from fraud); accord
Unity House, Inc. v. N. Pac. Inv., Inc., 918 F. Supp. 1384, 1392-93 (D. Haw.
1996) (treating as well-established under Washington law that a brokerage firm
has no duty to its own customers—much less non-customers—to prevent
unsuitable trading in a nondiscretionary account).
Herein, the evidence established that the investors were not customers of
RCG and never did business with RCG. The investors admitted that they had no
contact with anyone at RCG before the scheme collapsed and never sent any
money or documentation to RCG. In short, the investors had no relationship with
RCG, let alone a "special relationship" pursuant to which RCG might have owed
them a duty.
Despite their lack of direct connection to RCG, the investors contend that
RCG owed a duty—to them—to police the activity and trading in the MMA
account. The investors' argument in this regard relies on Garrison v. Sagepoint
Fin., Inc., 185 Wn. App. 461, 345 P.3d 792, review denied, 183 Wn.2d 1009
(2015). Therein, we held that AIG Financial Advisors Inc., a securities broker-
dealer, could be responsible for negligently supervising the transactions of an
employee who was also acting as an independent investment advisor. Garrison,
185 Wn. App. at 484-85; accord McGraw v. Wachovia Sec., LLC, 756 F. Supp.
2d 1053, 1075 (N.D. Iowa 2010) (case upon which Garrison significantly relied).
This case does not involve the particular factual scenario addressed in
Garrison. The investors were Villalba's customers, to be sure, but Villalba was not
RCG's employee or registered agent. Rather, Villalba was RCG's customer or,
more precisely, he was the manager of RCG's customer. Thus, the investors'
reliance on Garrison is misplaced.
Because RCG owed the investors no special duty to supervise Villalba,
the trial court's order granting summary judgment dismissal of the investors'
negligence claim was proper.
VI
The investors also challenge the trial court's protective order, asserting
that it improperly prevented them from obtaining relevant information from RCG
in the discovery process. Because the information was privileged pursuant to the
BSA, we disagree.
The investors served RCG discovery requests for information regarding
the opening of the MMA account, what RCG did to monitor the account, and any
actions it took with respect to the account. While these requests were pending,
RCG filed a motion seeking a protective order prohibiting the investors from
"conducting discovery relating to RCG's internal investigations and monitoring of
suspicious activity," including: (1) RCG's inquiries and monitoring of Villalba and
the MMA account specifically; (2) RCG's practices and methods of investigation
and monitoring generally; or (3) the identities of RCG employees charged with
suspicious activity monitoring and investigations.
The motion contended that this discovery was prohibited under the BSA.
The BSA requires that banks and other financial institutions report certain types
of suspicious activity to the federal government in a suspicious activity report
(SAR). 31 U.S.C. § 5318(g)(1). The act affords a privilege to the federal
government, allowing it to keep these reports confidential, and prohibits
disclosure by others of the actual SARs, or other information indicating that an
SAR was filed.
The requested order was granted but, pursuant to the investors' motion for
reconsideration, the trial court modified the order so that it would not apply to
"materials which are already publically available from prior litigation on the MMA
account against RCG." The investors contend that the modified order was also
erroneous.
We review de novo issues interpreting the privilege provided by the BSA.
Norton v. U.S. Bank, 179 Wn. App. 450, 324 P.3d 693, review denied, 180
Wn.2d 1023 (2014).
The trial court's protective order mirrored the order that we affirmed in
Norton, a case substantially similar to this one, except that it involved a bank,
rather than an FCM.14 Therein, this court held that a financial institution "may not
be ordered to describe or disclose its internal investigations, either generally or
those specifically related" to a Ponzi scheme. Norton, 179 Wn. App. at 461-62.
As FCMs are expressly included in the BSA's definition of covered "financial
institutions," 31 U.S.C. §§ 5312(c)(1)(A), 5318(g), the BSA's protections apply
equally to RCG as to the bank in Norton.15
14 The protective order affirmed in Norton applied to information related to the bank's
monitoring practices and internal investigations "generally or those specifically related" to the
activity in question. 179 Wn. App. at 462. By comparison, the order at issue herein protected
information related to RCG's "practices and methods of investigation and monitoring generally"
and "inquiries and monitoring of Villalba and the MMA account specifically."
The trial court's order, which was compelled by our decision in Norton,
was proper.
Affirmed.
We concur:
15 We are unmoved by the investors' contention that the outcome should be different in
this case than in Norton based on differences in the regulations applicable to FCMs versus
banks. Even were we to accept the investors' assertion that FCMs in general are exempted by
regulation from some SAR reporting requirements as a member of the NFA, RCG was
nevertheless required to make these reports. See NFA Interpretive Notice 9045, "NFA
Compliance Rule 2-9; FCM and IB Anti-Money Laundering Program."
---
abstract: 'I review the Higgs sector of the $U(1)_{B-L}$ extension of the minimal supersymmetric standard model (MSSM). I will show that the gauge kinetic mixing plays a crucial role in the Higgs phenomenology. Two light bosons are present, an MSSM-like one and a $B-L$-like one, that mix at one loop solely due to the gauge mixing. After briefly looking at constraints from flavour observables, new decay channels involving right-handed (s)neutrinos are presented. Finally, I will review how model features pertaining to the gauge extension affect the model phenomenology, concerning the existence of R-Parity-conserving minima at loop level and the Higgs-to-diphoton coupling.'
author:
- Lorenzo Basso
bibliography:
- 'BL.bib'
title: 'The Higgs sector of the minimal SUSY $B-L$ model'
---
Introduction
============
The recently discovered Higgs boson is considered the last missing piece of the standard model (SM) of particle physics. Nonetheless, several firm observations unequivocally call for its extension: among others, the presence of dark matter, the neutrino masses and mixing pattern, the stability of the SM vacuum, and the hierarchy problem. Supersymmetry (SUSY) has long been considered the most appealing framework in which to extend the SM. Its minimal realisations (the MSSM and its constrained versions [^1]) however start to feel considerable pressure to accommodate the recent findings, especially the measured Higgs mass of $125$ GeV. While not in open contrast with the MSSM, the degree of fine tuning required to achieve it is increasingly felt to be unnatural. In order to alleviate this tension, non-minimal SUSY realisations can be considered. One can extend the MSSM either by including extra singlets ([*e.g.*]{}the NMSSM [@Ellwanger:2009dp]) or by extending its gauge group. Concerning the latter, one of the simplest possibilities is to add an additional Abelian gauge group. I will focus here on the presence of a $U(1)_{B-L}$ group which can be a result of an $E_8 \times
E_8$ heterotic string theory (and hence M-theory) [@Buchmuller:2006ik; @Ambroso:2009sc; @Ambroso:2010pe]. This model, the minimal $R$-parity-conserving [[$B-L$]{}]{}supersymmetric standard model ([BLSSM]{}in short), was proposed in [@Khalil:2007dr; @FileviezPerez:2010ek] and neutrino masses are obtained via a type I seesaw mechanism. Furthermore, it could help to understand the origin of $R$-parity and its possible spontaneous violation in supersymmetric models [@Khalil:2007dr; @Barger:2008wn; @FileviezPerez:2010ek] as well as the mechanism of leptogenesis [@Pelto:2010vq; @Babu:2009pi].
It was early pointed out that the presence of two Abelian gauge groups in this model gives rise to kinetic mixing terms of the form $$\label{eq:offfieldstrength}
- \chi_{ab} \hat{F}^{a, \mu \nu} \hat{F}^b_{\mu \nu}, \quad a \neq b$$ that are allowed by gauge and Lorentz invariance [@Holdom:1985ag], as $\hat{F}^{a, \mu \nu}$ and $\hat{F}^{b, \mu \nu}$ are gauge-invariant quantities by themselves, see [*e.g.*]{}[@Babu:1997st]. Even if these terms are absent at tree level at a particular scale, they will in general be generated by RGE effects [@delAguila:1988jz; @delAguila:1987st]. These terms can have a sizable effect on the mass spectrum of this model, as studied in detail in [Ref.]{} [@O'Leary:2011yq], and on the dark matter, where several scenarios would not work if it were neglected, as thoroughly investigated in [Ref.]{} [@Basso:2012gz]. In this work, I will review the properties of the Higgs sector of the model. Two light states exist, an MSSM-like boson and a $B-L$-like boson. After reviewing the model, I will show that a large portion of parameter space exists where the SM-like Higgs boson has a mass compatible with its measured value, both in a “normal” ($M_{H_2} > M_{H_1}=125$ GeV) and in an “inverted” hierarchy ($M_{H_1} < M_{H_2}=125$ GeV), also in agreement with bounds from low energy observables and dark matter relic abundance. The phenomenological properties of the two lightest Higgs bosons will be systematically investigated, where once again the gauge mixing is shown to be fundamental. The presence of extra D-terms arising from the new $U(1)_{B-L}$ sector, as compared to models based on the SM gauge symmetry, has a large impact on the model phenomenology. They affect both the vacuum structure of the model and the Higgs sector, in particular enhancing the Higgs-to-diphoton coupling. Both these issues will be reviewed here, although the latter is disfavoured by recent data [@Khachatryan:2014jba], to show model features beyond the MSSM.
The model {#sec:model}
=========
For a detailed discussion of the masses of all particles as well as of the corresponding one-loop corrections we refer to [@O'Leary:2011yq]. Attention will be paid to the main aspects of the $U(1)$ kinetic mixing since it has important consequences for the scalar sector. For the numerical investigations that will be shown, we used the [[SPheno]{}]{}version [@Porod:2003um; @Porod:2011nf] created with [[SARAH]{}]{}[@Staub:2008uz; @Staub:2009bi; @Staub:2010jh; @Staub:2012pb; @Staub:2013tta] for the [BLSSM]{}. For the standardised model definitions, see [Ref.]{} [@Basso:2012ew], while for a review of the model implementation in [[SARAH]{}]{}, see [Ref.]{} [@Staub:2015kfa]. This spectrum calculator performs a two-loop RGE evaluation and calculates the mass spectrum at one loop. In addition, it calculates the decay widths and branching ratios (BRs) of all SUSY and Higgs particles as well as low-energy observables like $(g-2)_\mu$. We will discuss the most constrained scenario with a universal scalar mass $m_0$, a universal gaugino mass $M_{1/2}$ and trilinear soft-breaking couplings proportional to the superpotential coupling ($T_i = A_0 Y_i$) at the GUT scale. Other input parameters are $\tan\beta$, $\tan\beta'$, $M_{Z'}$, $Y_x$, and $Y_\nu$. They will be defined in the following section. The numerical study presented here has been performed by randomly scanning over the independent input parameters described above via the [[SSP]{}]{}toolbox [@Staub:2011dp], while low energy observables such as BR($\mu\to e\gamma$) and BR($\mu\to 3e$) have been evaluated with the [[FlavourKit]{}]{}package [@Porod:2014xia]. Furthermore, during the scans all points have been checked with [[HiggsBounds]{}]{}-4.1.1 [@Bechtle:2008jh; @Bechtle:2011sb; @Bechtle:2013gu; @Bechtle:2013wla], both in the “normal” hierarchy and in the “inverted” hierarchy case.
Particle content and superpotential
-----------------------------------
The model consists of three generations of matter particles including right-handed neutrinos which can, for example, be embedded in $SO(10)$ 16-plets. Moreover, below the GUT scale the usual MSSM Higgs doublets are present as well as two fields $\eta$ and $\bar{\eta}$ responsible for the breaking of the [[$U(1)_{B-L}$]{}]{}. The $\eta$ field is also responsible for generating a Majorana mass term for the right-handed neutrinos and thus we interpret its [[$B-L$]{}]{}charge as its lepton number. The same holds for $\bar{\eta}$, and we call these fields bileptons since they carry twice the lepton number of (anti-)neutrinos. The quantum numbers of the chiral superfields with respect to $U(1)_Y
\times SU(2)_L \times SU(3)_C \times {{\ensuremath{U(1)_{B-L}}}\xspace}$ are summarised in Table \[tab:cSF\].
Superfield Spin 0 Spin $\frac{1}{2}$ Generations $G_{SM}\otimes\, {{\ensuremath{U(1)_{B-L}}}\xspace}$
-------------------- ----------------- ---------------------- ------------- ----------------------------------------------------------- --
$\hat{Q}$ $\tilde{Q}$ $Q$ 3 $(\frac{1}{6},{\bf 2},{\bf 3},\frac{1}{6}) $
$\hat{d}^c$ $\tilde{d}^c$ $d^c$ 3 $(\frac{1}{3},{\bf 1},{\bf \overline{3}},-\frac{1}{6}) $
$\hat{u}^c$ $\tilde{u}^c$ $u^c$ 3 $(-\frac{2}{3},{\bf 1},{\bf \overline{3}},-\frac{1}{6}) $
$\hat{L}$ $\tilde{L}$ $L$ 3 $(-\frac{1}{2},{\bf 2},{\bf 1},-\frac{1}{2}) $
$\hat{e}^c$ $\tilde{e}^c$ $e^c$ 3 $(1,{\bf 1},{\bf 1},\frac{1}{2}) $
$\hat{\nu}^c$ $\tilde{\nu}^c$ $\nu^c$ 3 $(0,{\bf 1},{\bf 1},\frac{1}{2}) $
$\hat{H}_d$ $H_d$ $\tilde{H}_d$ 1 $(-\frac{1}{2},{\bf 2},{\bf 1},0) $
$\hat{H}_u$ $H_u$ $\tilde{H}_u$ 1 $(\frac{1}{2},{\bf 2},{\bf 1},0) $
$\hat{\eta}$ $\eta$ $\tilde{\eta}$ 1 $(0,{\bf 1},{\bf 1},-1) $
$\hat{\bar{\eta}}$ $\bar{\eta}$ $\tilde{\bar{\eta}}$ 1 $(0,{\bf 1},{\bf 1},1) $
: Chiral superfields and their quantum numbers under $G_{SM}\otimes\, {{\ensuremath{U(1)_{B-L}}}\xspace}$, where $G_{SM} = $ $(U(1)_Y\otimes\, SU(2)_L\otimes\, SU(3)_C)$ .[]{data-label="tab:cSF"}
The superpotential is given by $$\begin{aligned}
\nonumber
W & = & \, Y^{ij}_u\,\hat{u}^c_i\,\hat{Q}_j\,\hat{H}_u\,
- Y_d^{ij} \,\hat{d}^c_i\,\hat{Q}_j\,\hat{H}_d\,
- Y^{ij}_e \,\hat{e}^c_i\,\hat{L}_j\,\hat{H}_d\, \\ \nonumber
& & +\mu\,\hat{H}_u\,\hat{H}_d\,
+Y^{ij}_{\nu}\,\hat{\nu}^c_i\,\hat{L}_j\,\hat{H}_u\,
- \mu' \, \hat{\eta}\,\hat{\bar{\eta}}\,
+Y^{ij}_x\,\hat{\nu}^c_i\,\hat{\eta}\,\hat{\nu}^c_j\, \\
\label{eq:superpot}
& &\end{aligned}$$ and we have the additional soft SUSY-breaking terms: $$\begin{aligned}
\nonumber \mathscr{L}_{SB} &= & \mathscr{L}_{MSSM}
- \lambda_{\tilde{B}} \lambda_{\tilde{B}'} {M}_{B B'}
- \frac{1}{2} \lambda_{\tilde{B}'} \lambda_{\tilde{B}'} {M}_{B'}\\ \nonumber
&& - m_{\eta}^2 |\eta|^2 - m_{\bar{\eta}}^2 |\bar{\eta}|^2
- {m_{\nu^c,ij}^{2}} (\tilde{\nu}_i^c)^* \tilde{\nu}_j^c \\
&& - \eta \bar{\eta} B_{\mu'} + T^{ij}_{\nu} H_u \tilde{\nu}_i^c \tilde{L}_j
+ T^{ij}_{x} \eta \tilde{\nu}_i^c \tilde{\nu}_j^c \end{aligned}$$ where $i,j$ are generation indices. Without loss of generality one can take $B_\mu$ and $B_{\mu'}$ to be real. The extended gauge group breaks to $SU(3)_C \otimes U(1)_{em}$ as the Higgs fields and bileptons receive vacuum expectation values ([*vev*s]{}): $$\begin{aligned}
H_d^0 = & \, \frac{1}{\sqrt{2}} \left(\sigma_{d} + v_d + i \phi_{d} \right),
\hspace{1cm}
H_u^0 = \, \frac{1}{\sqrt{2}} \left(\sigma_{u} + v_u + i \phi_{u} \right)\\
\eta
= & \, \frac{1}{\sqrt{2}} \left(\sigma_\eta + v_{\eta} + i \phi_{\eta} \right),
\hspace{1cm}
\bar{\eta}
= \, \frac{1}{\sqrt{2}} \left(\sigma_{\bar{\eta}} + v_{\bar{\eta}}
+ i \phi_{\bar{\eta}} \right)\end{aligned}$$ We define $\tan\beta' = v_{\eta}/v_{\bar{\eta}}$ in analogy to the ratio of the MSSM [*vev*s]{}($\tan\beta = v_{u}/v_{d}$).
Gauge kinetic mixing {#subsec:kineticmixing}
--------------------
As already mentioned in the introduction, the presence of two Abelian gauge groups in combination with the given particle content gives rise to a new effect absent in any model with just one Abelian gauge group: gauge kinetic mixing. This can be seen most easily by inspecting the anomalous dimension matrix, which for our model at one loop reads $$\label{eq:gammaMatrix}
\gamma = \frac{1}{16 \pi^2}
\left( \begin{array}{cc} \frac{33}{5} & 6 \sqrt{\frac{2}{5}} \\
6 \sqrt{\frac{2}{5}} & 9 \end{array} \right) ,$$ with typical GUT normalisation of the two Abelian gauge groups, [*i.e*.]{}$\sqrt{{3/5}}$ for $U(1)_{Y}$ and $\sqrt{{3/2}}$ for [[$U(1)_{B-L}$]{}]{} [@FileviezPerez:2010ek]. Therefore, even if at the GUT scale the $U(1)$ kinetic mixing terms are zero, they are induced via RGE evaluation at lower scales. It turns out that it is more convenient to work with non-canonical covariant derivatives rather than with off-diagonal field-strength tensors as in [eq.]{} (\[eq:offfieldstrength\]). However, both approaches are equivalent [@Fonseca:2011vn]. Therefore, in the following, we consider covariant derivatives of the form $\displaystyle D_\mu = \partial_\mu - i Q_{\phi}^{T} G A $ where $Q_{\phi}$ is a vector containing the charges of the field $\phi$ with respect to the two Abelian gauge groups, $G$ is the gauge coupling matrix $$G = \left( \begin{array}{cc} g_{YY} & g_{YB} \\
g_{BY} & g_{BB} \end{array} \right)$$ and $A$ contains the gauge bosons $A = ( A^Y_\mu, A^B_\mu )^T$.
As long as the two Abelian gauge groups are unbroken, we still have the freedom to perform a change of basis by means of a suitable rotation. A convenient choice is the basis where $g_{B Y}=0$, since in this case only the Higgs doublets contribute to the gauge boson mass matrix of the $SU(2)_L \otimes U(1)_Y$ sector, while the impact of $\eta$ and $\bar{\eta}$ is only in the off-diagonal elements. Therefore we choose the following basis at the electroweak scale [@Chankowski:2006jk]: $$\begin{aligned}
\label{eq:gYYp}
g'_{Y Y}
= & \frac{g_{YY} g_{B B} - g_{Y B} g_{B Y}}{\sqrt{g_{B B}^2 + g_{B Y}^2}}
= g_1 \\
g'_{B B} = & \sqrt{g_{B B}^2 + g_{B Y}^2} = {{\ensuremath{g_{BL}^{}}}\xspace} \\
\label{eq:gtilde}
g'_{Y B}
= & \frac{g_{Y B} g_{B B} + g_{B Y} g_{YY}}{\sqrt{g_{B B}^2 + g_{B Y}^2}}
= {{\ensuremath{\bar{g}}}\xspace}\\
g'_{B Y} = & 0
\label{eq:gBYp}\end{aligned}$$
When unification at some large scale ($\sim 2 \cdot 10^{16}$ GeV) is imposed, [*i.e*.]{}, $g_1^{GUT}=g_2^{GUT}={{\ensuremath{g_{BL}^{}}}\xspace}^{GUT}$ and $g^{\prime\, (GUT)}_{Y B}=g^{\prime\, (GUT)}_{B Y} = 0$, at the SUSY scale we get [@O'Leary:2011yq] $$\begin{aligned}
\label{eq:gBLsusy}
{{\ensuremath{g_{BL}^{}}}\xspace} &=& 0.548\, ,\\ \label{eq:gtildesusy}
\bar{g} &\simeq& -0.147\, .\end{aligned}$$
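As a quick cross-check of the basis change in eqs. (\[eq:gYYp\])–(\[eq:gBYp\]), the following Python sketch (with arbitrary, non-physical illustrative couplings) verifies that the rotated coupling matrix is triangular ($g'_{BY}=0$) and that the gauge-invariant combination $G G^T$ is unchanged, as it must be for an orthogonal change of basis:

```python
import math

def rotate_to_triangular(g_yy, g_yb, g_by, g_bb):
    """Rotate the 2x2 Abelian coupling matrix G to the basis with
    g'_BY = 0, following eqs. (gYYp)-(gBYp) of the text."""
    n = math.hypot(g_bb, g_by)                 # sqrt(g_BB^2 + g_BY^2)
    g1 = (g_yy * g_bb - g_yb * g_by) / n       # g'_YY
    gbl = n                                    # g'_BB
    gbar = (g_yb * g_bb + g_by * g_yy) / n     # g'_YB
    return g1, gbl, gbar, 0.0                  # g'_BY = 0 by construction

# Illustrative couplings with non-zero off-diagonal mixing (not RGE output)
g_yy, g_yb, g_by, g_bb = 0.46, -0.10, -0.06, 0.55
g1, gbl, gbar, zero = rotate_to_triangular(g_yy, g_yb, g_by, g_bb)

# The change of basis is a right-multiplication by an orthogonal matrix,
# G' = G R, so G G^T (the combination entering physics) is unchanged.
def ggt(a, b, c, d):
    """Independent entries of [[a, b], [c, d]] times its transpose."""
    return (a*a + b*b, a*c + b*d, c*c + d*d)

assert all(abs(x - y) < 1e-12
           for x, y in zip(ggt(g_yy, g_yb, g_by, g_bb),
                           ggt(g1, gbar, zero, gbl)))
```

Note that with these made-up inputs the rotated off-diagonal coupling comes out negative, in qualitative agreement with the sign of $\bar{g}$ quoted above; the actual values in eqs. (\[eq:gBLsusy\])–(\[eq:gtildesusy\]) of course require the full RGE running.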
Tadpole equations {#subsec:tadpoles}
-----------------
The minimisation of the scalar potential is described here via the so-called tadpole method. We can solve the tree-level tadpole equations arising from the minimum conditions of the vacuum with respect to $\mu, B_\mu, \mu'$ and $B_{\mu'}$. Using $v_x^2=v_{\eta}^{2} + v_{\bar{\eta}}^{2}$ and $v^2=v_{d}^{2}+ v_{u}^{2}$ we obtain
$$\begin{aligned}
\label{eq:tadmu}
|\mu|^2 = & \frac{1}{8} \Big(\Big(2 {{\ensuremath{\bar{g}}}\xspace}{{\ensuremath{g_{BL}^{}}}\xspace} v_x^{2}
\cos(2 {\beta'})
-4 m_{H_d}^2 + 4 m_{H_u}^2 \Big)\sec(2 \beta)
-4 \Big(m_{H_d}^2 + m_{H_u}^2\Big)
- \Big(g_{1}^{2} + {{{\ensuremath{\bar{g}}}\xspace}}^{2} + g_{2}^{2}\Big)v^{2} \Big)\\ \label{eq:tadBmu}
B_\mu =&-\frac{1}{8} \Big(-2 {{\ensuremath{\bar{g}}}\xspace}{{\ensuremath{g_{BL}^{}}}\xspace} v_x^{2}
\cos(2 {\beta'})
+ 4 m_{H_d}^2 -4 m_{H_u}^2
+ \Big(g_{1}^{2} + {{{\ensuremath{\bar{g}}}\xspace}}^{2} + g_{2}^{2}\Big)v^{2} \cos(2 \beta)
\Big)\tan(2 \beta ) \\
|\mu'|^2 =& \frac{1}{4} \Big(-2 \Big( {{\ensuremath{g_{BL}^{2}}}\xspace} v_x^{2}
+ m_{\eta}^2 + m_{\bar{\eta}}^2\Big) + \Big(2 m_{\eta}^2 - 2 m_{\bar{\eta}}^2
+ {{\ensuremath{\bar{g}}}\xspace}{{\ensuremath{g_{BL}^{}}}\xspace} v^{2} \cos(2 \beta) \Big)
\sec(2 {\beta'}) \Big) \\
\label{eq:tadBmuP}
B_{\mu'} =& \frac{1}{4} \Big(-2 {{\ensuremath{g_{BL}^{2}}}\xspace} v_x^{2} \cos(2 {\beta'})
+ 2 m_{\eta}^2 -2 m_{\bar{\eta}}^2
+ {{\ensuremath{\bar{g}}}\xspace}{{\ensuremath{g_{BL}^{}}}\xspace} v^{2} \cos(2 \beta)
\Big) \tan(2 {\beta'} )\end{aligned}$$
Since ${{\ensuremath{M_{Z^{\prime}}}}\xspace}\simeq {{\ensuremath{g_{BL}^{}}}\xspace} v_x$, we find an approximate relation between ${{\ensuremath{M_{Z^{\prime}}}}\xspace}$ and $\mu'$: $$\begin{aligned}
\nonumber
{{\ensuremath{M_{Z^{\prime}}}}\xspace}^2 &\simeq &
- 2 |\mu'|^2\\ \nonumber
&& + \frac{4 (m_{\bar{\eta}}^2 - m_{\eta}^2 \tan^2 \beta')
- v^2 {{\ensuremath{\bar{g}}}\xspace}{{\ensuremath{g_{BL}^{}}}\xspace} \cos(2\beta) (1+\tan^2\beta') }{2 (\tan^2 \beta'
- 1) }\\
&& \label{eq:tadpole_MZp}\end{aligned}$$ For the numerical results, the one-loop corrected equations are used, which lead to a shift of the solutions in eqs. (\[eq:tadmu\])–(\[eq:tadBmuP\]).
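The approximate relation above is just the tadpole equation for $|\mu'|^2$ solved for $M_{Z^{\prime}}^2$, and the two can be checked against each other numerically. A minimal sketch, with purely illustrative tree-level inputs (TeV units, not a fit to data):

```python
import math

# Illustrative tree-level inputs (TeV units; not taken from any fit)
gbar, gbl = -0.147, 0.548      # SUSY-scale couplings quoted in the text
v, vx = 0.246, 7.0             # electroweak and B-L breaking vevs
tb, tbp = 10.0, 1.1            # tan(beta), tan(beta')
m_eta2, m_etabar2 = 1.0, 3.0   # soft bilepton masses squared (TeV^2)

beta, betap = math.atan(tb), math.atan(tbp)
MZp2 = gbl**2 * vx**2          # M_Z'^2 ~ g_BL^2 v_x^2 at tree level

# |mu'|^2 from the third tadpole equation (the one for mu')
mup2 = 0.25 * (-2.0 * (MZp2 + m_eta2 + m_etabar2)
               + (2.0*m_eta2 - 2.0*m_etabar2
                  + gbar*gbl*v**2*math.cos(2.0*beta)) / math.cos(2.0*betap))
assert mup2 > 0.0              # positive |mu'|^2 for this parameter choice

# M_Z'^2 reconstructed by inverting the same tadpole equation,
# i.e. the approximate M_Z'-mu' relation
MZp2_rec = (-2.0 * mup2
            + (4.0*(m_etabar2 - m_eta2*tbp**2)
               - v**2*gbar*gbl*math.cos(2.0*beta)*(1.0 + tbp**2))
              / (2.0*(tbp**2 - 1.0)))
assert abs(MZp2 - MZp2_rec) < 1e-9
```

The check is purely algebraic: it confirms that the two tree-level expressions are consistent term by term, independently of whether the chosen inputs are phenomenologically viable.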
The scalar sector {#sec:ScalarsHiggsSector}
-----------------
In this model, $2$ MSSM complex doublets and $2$ bilepton complex singlets are present, yielding $4$ [*[CP]{}*]{}-even, $2$ [*[CP]{}*]{}-odd, and $2$ charged physical scalars.
Concerning the [*[CP]{}*]{}-even scalars, the MSSM and bilepton sectors are almost decoupled, mixing exclusively due to the gauge kinetic mixing. To a first approximation, the mass matrix is block-diagonal, with mass eigenstates that mimic the MSSM case. In practice, it turns out that only two Higgs bosons are light (hereafter called $H_1$ and $H_2$, one per sector), while the other two are very heavy (above the TeV scale). The lightest scalars are well-defined states, being either almost exclusively doublet-like or bilepton-like. It is worth stressing that their mixing is small (see [Fig.]{} \[fig:h2mixing\]) and solely due to the gauge kinetic mixing (see also [Ref.]{} [@Abdallah:2014fra]).
Concerning the physical pseudoscalars $A^0$ and $A^0_\eta$, their masses are given by $$m^2_{A^0} = \frac{2 B_\mu}{\sin2\beta} \thickspace, \hspace{1cm}
m^2_{A^0_\eta} = \frac{2 B_{\mu'}}{\sin2\beta'} \thickspace.$$ For completeness we note that the mass of the charged Higgs boson reads, as in the MSSM, $$m^2_{H^+} = B_\mu \left( \tan\beta+\cot\beta\right) + m^2_W\, .$$
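Since $\tan\beta+\cot\beta = 2/\sin 2\beta$, the expressions above imply the familiar MSSM-like relation $m^2_{H^+} = m^2_{A^0} + m^2_W$. A minimal numerical check, with illustrative values of $B_\mu$ and $\tan\beta$:

```python
import math

# Illustrative inputs: B_mu in GeV^2, tan(beta); m_W in GeV
B_mu, tb = 8.0e5, 10.0
m_W = 80.4

m_A2 = 2.0 * B_mu / math.sin(2.0 * math.atan(tb))   # CP-odd doublet Higgs
m_Hp2 = B_mu * (tb + 1.0/tb) + m_W**2               # charged Higgs

# tan(beta) + cot(beta) = 2/sin(2 beta), hence m_H+^2 = m_A^2 + m_W^2
assert abs(m_Hp2 - (m_A2 + m_W**2)) < 1e-6 * m_A2
```

This makes explicit why, once the large first terms of the $B_\mu$ and $B_{\mu'}$ tadpole solutions push $m_{A^0}$ into the TeV range, the charged Higgs is driven there as well.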
In this model, the [*[CP]{}*]{}-odd and charged Higgses are typically very heavy. In eq. (\[eq:tadBmu\]) we see that compared to the MSSM, there is a non-negligible contribution from the gauge kinetic mixing. LHC searches limit $\tan{\beta '} < 1.5$ and $v_x \gtrsim 7$ TeV, since [@Aad:2014cka; @Khachatryan:2014fba] $$\label{eq:Zplimit}
M_{Z'} \gtrsim 3.5~\mbox{TeV}$$ at $95\%$ C.L. Notice that a recent reanalysis of LEP precision data also constrains $v_x \gtrsim 7$ TeV at $99\%$ C.L. [@Cacciapaglia:2006pk]. A consequence of this strong constraint in the [BLSSM]{}is that the first terms in eqs. (\[eq:tadBmu\])–(\[eq:tadBmuP\]) can be large, pushing the [*[CP]{}*]{}-odd and charged Higgs masses into the TeV range.
The very large bound on the $Z'$ mass is in contrast with the non-SUSY version of the model, where the gauge couplings are free parameters and can be much smaller, hence yielding lower mass bounds. The latter need to be evaluated as a function of both gauge couplings [@Basso:2012ux].
Next, we describe the sneutrino sector, that shows two distinct features compared to the MSSM. Firstly, it gets enlarged by the superpartners of the right-handed neutrinos. Secondly, even more drastically, a splitting between the real and imaginary parts of each sneutrino occurs resulting in twelve states: six scalar sneutrinos and six pseudoscalar ones [@Hirsch:1997vz; @Grossman:1997is]. The origin of this splitting is the $Y^{ij}_x\,\hat{\nu}^c_i\,\hat{\eta}\,\hat{\nu}^c_j$ term in the superpotential, eq. (\[eq:superpot\]), which is a $\Delta L=2$ operator after the breaking of $U(1)_{B-L}$.
In the case of complex trilinear couplings or $\mu$-terms, a mixing between the scalar and pseudoscalar particles occurs, resulting in 12 mixed states and consequently in a $12\times 12$ mass matrix.
To gain some feeling for the behaviour of the sneutrino masses we can consider a simplified setup: neglecting kinetic mixing as well as left-right mixing, the masses of the [R-sneutrinos]{}at the SUSY scale can be expressed as $$\begin{aligned}
{{\ensuremath{m^{2}_{{\tilde{\nu}}^{S}}}}\xspace}\simeq & \,\, m_{\nu^c}^2
+ {{\ensuremath{M_{Z^{\prime}}^{2}}}\xspace}\left( \frac{1}{4} \cos(2 \beta')
+ \frac{2 Y_x^2}{{{\ensuremath{g_{BL}^{2}}}\xspace}} \sin^2\beta' \right)\nonumber \\ \label{eq:mSnuA}
& \, \,
+ {{\ensuremath{M_{Z^{\prime}}}}\xspace}\frac{\sqrt{2} Y_x}{{{\ensuremath{g_{BL}^{}}}\xspace}}
\left(A_x \sin\beta'-\mu' \cos\beta' \right)\, ,\\
{{\ensuremath{m^{2}_{{\tilde{\nu}}^{P}}}}\xspace}\simeq & \,\, m_{\nu^c}^2
+ {{\ensuremath{M_{Z^{\prime}}^{2}}}\xspace}\left( \frac{1}{4} \cos(2 \beta')
+ \frac{2 Y_x^2}{{{\ensuremath{g_{BL}^{2}}}\xspace}} \sin^2\beta' \right)\nonumber \\
& \, \,
- {{\ensuremath{M_{Z^{\prime}}}}\xspace}\frac{\sqrt{2} Y_x}{{{\ensuremath{g_{BL}^{}}}\xspace}}
\left(A_x \sin\beta'-\mu' \cos\beta' \right)\, .
\label{eq:mSnuB}\end{aligned}$$ In addition, we treat the parameters $A_x$, $m_{\nu^c}^2$, ${{\ensuremath{M_{Z^{\prime}}}}\xspace}$, $\mu'$, $Y_x$ and $\tan\beta'$ as independent. The different effects on the sneutrino masses can easily be understood by inspecting [eqs.]{} (\[eq:mSnuA\]) and (\[eq:mSnuB\]). The first two terms always give a positive contribution, whereas the third one can be potentially large and differs in sign between the scalar and pseudoscalar states, therefore inducing a large mass splitting between them. Further, this contribution can be either positive or negative depending on the sign of $A_x \sin\beta'-\mu' \cos\beta'$. For example, choosing $Y_x$ and $\mu'$ positive, one finds that the [*[CP]{}*]{}-even ([*[CP]{}*]{}-odd) sneutrino is the lightest one for $A_x < 0$ ($A_x > 0$). This is pictorially shown in [Fig.]{} \[fig:sneumasses\], as a function of the GUT-scale input parameter $A_0$, for a choice of the other parameters. One notices that the [*[CP]{}*]{}-even ([*[CP]{}*]{}-odd) sneutrino is the lightest one when the $125$ GeV Higgs boson is predominantly $H_1$ ($H_2$). It is worth pointing out here that, as will be described in the following section, when $M_{H_1}=125$ GeV, the next-to-lightest Higgs boson can decay into pairs of [*[CP]{}*]{}-even sneutrinos, but not into the analogous channel with [*[CP]{}*]{}-odd sneutrinos. Since $H_2$ is predominantly a bilepton field, this decay, when open, saturates its BRs, see [Fig.]{} \[fig:h2BR\]. Regarding the decay into [*[CP]{}*]{}-odd sneutrinos, this channel is accessible ([*i.e*.]{}$\widetilde{\nu}^P$ is light enough) only in the region where $H_2$ is the SM-like Higgs boson, [*i.e*.]{}mainly coming from the doublets. In this case, however, the decay channel is suppressed by the small scalar mixing and is not overwhelming (unlike for $H_1$, which is now mainly composed of the bileptons).
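The sign pattern discussed above can be made explicit with a short numerical sketch of eqs. (\[eq:mSnuA\]) and (\[eq:mSnuB\]); all parameter values below are illustrative and chosen only to exhibit the splitting:

```python
import math

def rsneutrino_masses2(m_nuc2, MZp, Yx, gbl, tbp, Ax, mup):
    """Approximate scalar/pseudoscalar R-sneutrino squared masses,
    eqs. (mSnuA)-(mSnuB): kinetic and left-right mixing neglected."""
    bp = math.atan(tbp)
    common = m_nuc2 + MZp**2 * (0.25 * math.cos(2.0 * bp)
                                + 2.0 * Yx**2 / gbl**2 * math.sin(bp)**2)
    split = MZp * math.sqrt(2.0) * Yx / gbl * (Ax * math.sin(bp)
                                               - mup * math.cos(bp))
    return common + split, common - split   # (CP-even, CP-odd)

# Illustrative inputs (GeV); Y_x and mu' positive, as in the text
m_nuc2, MZp, Yx, gbl, tbp, mup = 2.0e6, 3500.0, 0.2, 0.548, 1.1, 1000.0

mS2, mP2 = rsneutrino_masses2(m_nuc2, MZp, Yx, gbl, tbp, Ax=-500.0, mup=mup)
assert mS2 < mP2     # CP-even sneutrino lighter for A_x < 0

mS2, mP2 = rsneutrino_masses2(m_nuc2, MZp, Yx, gbl, tbp, Ax=2000.0, mup=mup)
assert mS2 > mP2     # CP-odd sneutrino lighter for sufficiently large A_x > 0
```

The splitting term scales with $M_{Z'} Y_x/g_{BL}$, so for a multi-TeV $Z'$ even moderate $A_x$ or $\mu'$ produces a splitting of the same order as the common mass term.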
![Masses of [*[CP]{}*]{}-even ($\widetilde{\nu}^S$, cyan) and [*[CP]{}*]{}-odd ($\widetilde{\nu}^P$, red) [R-sneutrinos]{}as a function of $A_0$. For comparison, also the masses of the lightest ($H_1$, black) and next-to-lightest ($H_2$, blue) Higgs bosons are shown. Green points mark configurations where $M_{H_1}=125$ GeV. []{data-label="fig:sneumasses"}](figures/BLSSM_sneu_masses.eps){width="0.9\linewidth"}
Depending on the parameters, either type of sneutrino can become very light. If it is the LSP, it can be a suitable dark matter candidate [@Basso:2012gz] and provide extra, fully invisible decay channels for the Higgs bosons, thereby increasing their invisible widths. In the case of the decay into the [*[CP]{}*]{}-odd sneutrino, since this can happen mainly for the SM-like Higgs boson, one should account for the constraints on the properties of the latter [@Khachatryan:2014jba]. Finally, the [R-sneutrinos]{}could also become tachyonic or develop dangerous $R$-parity-violating [*vev*s]{}. While the first possibility is taken into account in our numerical evaluation by [[SPheno]{}]{}, and such points are excluded from our scans, the second case will be reviewed in the following subsection.
The last sector important for the considerations that follow is that of the charged sleptons. See [Ref.]{} [@Basso:2012tr] for further details. New SUSY-breaking D-term contributions to the sfermion squared masses appear, which can be parametrised as a function of the $Z'$ mass and of $\tan\beta'$ as $$\label{B-L-Dterms}
\frac{Q^{B-L}}{2} \frac{M_{Z'}^2 (\tan^2\beta' -1)}{1 + \tan^2 \beta'}.$$ Their impact is larger for the sleptons than for the squarks by a factor of $3$ due to the different [[$B-L$]{}]{}charges ($Q^{B-L}$). It is possible to vary the stau mass by $\pm\mathcal{O}(100)$ GeV with respect to the MSSM case while keeping the impact on the squarks under control. Having different sfermion masses in the [BLSSM]{}as compared to the MSSM has a net impact on the Higgs phenomenology, in particular in enhancing the $h\gamma\gamma$ coupling while keeping the SM-like Higgs coupling to gluons unaltered. As described at the end of this review, the new D-terms coming from the [[$B-L$]{}]{}sector can further reduce the stau mass entering the $h\gamma\gamma$ effective interaction (while ensuring a pole mass of $\sim 250$ GeV, compatible with exclusions) [^2] allowing this mechanism to work also in the constrained version of the model. This mechanism has also been recently reanalysed in [Ref.]{} [@Hammad:2015eca] in the very same model.
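A quick numerical illustration of the D-term of eq. (\[B-L-Dterms\]), read here (an assumption of this sketch) as a shift of the sfermion squared masses, with the $B-L$ charges taken from Table \[tab:cSF\]; the slepton/squark factor of $3$ follows directly from the charges:

```python
# B-L charges from Table tab:cSF: |Q| = 1/2 for (s)leptons, 1/6 for (s)quarks
def bl_dterm_shift(Q_BL, MZp2, tbp):
    """B-L D-term contribution to a sfermion squared mass:
    (Q_BL/2) * M_Z'^2 * (tan^2 beta' - 1) / (1 + tan^2 beta')."""
    return 0.5 * Q_BL * MZp2 * (tbp**2 - 1.0) / (1.0 + tbp**2)

MZp2, tbp = 3500.0**2, 1.1                      # illustrative: M_Z' = 3.5 TeV
d_slepton = bl_dterm_shift(-0.5, MZp2, tbp)     # negative: lowers the stau mass
d_squark = bl_dterm_shift(1.0 / 6.0, MZp2, tbp)

# The slepton/squark ratio is fixed by the charges alone
assert abs(d_slepton / d_squark + 3.0) < 1e-12
assert d_slepton < 0.0 < d_squark               # for tan(beta') > 1
```

With these illustrative numbers the slepton shift is of order $(500~\mbox{GeV})^2$ in magnitude, consistent with the $\pm\mathcal{O}(100)$ GeV variation of the stau mass quoted above.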
The issue of R-Parity conservation
----------------------------------
We have encountered so far several neutral scalar fields, besides the Higgs bosons, which could develop a [*vev*]{}. While [*vev*s]{}of fields charged under QCD and electromagnetism are forbidden because the latter are good symmetries, [R-sneutrino]{}[*vev*s]{}, which are not problematic by themselves, would unavoidably break R-Parity. The issue of conserving R-Parity is of fundamental importance, since this is a built-in symmetry in our model, where [[$B-L$]{}]{}is gauged. We will therefore restrict ourselves to parameter configurations where the global minimum is R-Parity conserving.
When all neutral scalar fields are allowed to get a [*vev*]{}, it is not trivial, even at the tree level, to find the deepest global minimum and to determine whether it is of a “good” type, here defined as having the correct broken symmetries and being R-Parity conserving. One possible way to study this issue is to start from a simplified set of input parameters yielding a correct tree-level global minimum when only the Higgs fields get a [*vev*]{}, and then look for the true global minimum when all other neutral fields (mainly [R-sneutrinos]{}) acquire a [*vev*]{}, both at tree level and at loop level. See [Ref.]{} [@CamargoMolina:2012hv] for further details.
At the tree level there seem to exist regions where the [BLSSM]{}has a stable, R-Parity-conserving global minimum with the correct broken and unbroken gauge groups. For this to happen one needs the [R-sneutrino]{}Yukawa coupling $Y_x$ to be not too large, and the trilinear parameter $A_0$ to be not too large compared to the soft scalar mass $m_0$, since, intuitively, large $Y_x$ and $A_0$ can lead to large negative contributions to the potential energy for large values of $v_x$, as well as reduce the effective [R-sneutrino]{}masses, as described above and clear from [Fig.]{} \[fig:sneumasses\]. It turns out that when loop corrections are taken into account, a few points scattered over such regions of parameter space exist where R-Parity is no longer preserved, or where $SU(2)_L$ or $U(1)_{B-L}$ are unbroken. This is apparently due to a very finely-tuned breaking of $SU(2)_L$ and $U(1)_{B-L}$ which often does not survive loop corrections. The reason for this is that besides the known large contributions of third generation (s)fermions, the additional new particles of the [[$B-L$]{}]{}sector also play an important role. As previously for the charged slepton sector, new SUSY-breaking D-term contributions to the masses appear, see eq. (\[B-L-Dterms\]). Since, as shown in eq. (\[eq:Zplimit\]), the experimental bounds require $M_{Z'}$ to be in the multi-TeV range, these contributions can be much larger than in the MSSM sector, resulting in the observed importance of the corresponding loop contributions. Furthermore, these contributions are also responsible for the restoration of $U(1)_{B-L}$ at the one-loop level.
Ultimately, no overall safe regions of parameter space can be found where the correct vacuum structure is guaranteed. At the same time, although naive trends for where bad points appear can be spotted, each point has nonetheless to be checked case by case due to the highly non-trivial scalar potential, and neighbouring configurations may still hold a valid global minimum. We will not check the validity of our scans from the vacuum point of view in the following, being confident that if any point is ruled out, a neighbouring, allowed one yielding a very similar phenomenology can be found.
A quick look to flavour observables
===================================
Before moving to the Higgs phenomenology, we briefly show the impact on the [BLSSM]{}of the constraints arising from low energy observables. For a review of the observables as well as of their impact on general SUSY models encompassing a seesaw mechanism, see Refs. [@Abada:2014kba; @Vicente:2015cka].
We consider here only the two most constraining ones, BR($\mu\to e\gamma$) and BR($\mu\to 3e$). The present exclusions are BR($\mu\to e\gamma$) $<5.7\cdot 10^{-13}$ [@Adam:2013mnn] and BR($\mu\to 3e$) $< 1\cdot 10^{-12}$ [@Bellgardt:1987du]. In [Fig.]{} \[fig:flavour\] we plot these branching ratios as a function of the mass of the lightest (in black) and next-to-lightest (in red) SM-like neutrino, which display a pattern for evading the bounds. In particular, the neutrinos are required to be rather light, below $0.5$ eV, while the model, according to the scans performed here, seems to prefer configurations with neutrinos heavier than $0.01$ eV; hence the preferred region lies in between. Lighter mass values are nonetheless also allowed.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![(Upper plot) BR($\mu \to e \gamma$) and (lower plot) BR($\mu \to 3e$) as a function of the light neutrino masses in GeV (black: $\nu_1$, red: $\nu_2$). The blue horizontal lines represent the actual experimental limits, from Refs. [@Adam:2013mnn] and [@Bellgardt:1987du], respectively. The parameters have been chosen as $m_0\in [0.4,2]$ TeV, $M_{1/2} \in [1.0,2.0]$ TeV, $\tan\beta\in [5,40]$, $A_0 \in [-4.0,4.0]$ TeV, $\tan\beta ' \in [1.05,1.15]$, $M_{Z'} \in [2.5,3.5]$ TeV, $Y_x \in {\bf 1} \cdot [0.002,0.4]$, $Y_\nu \in {\bf 1} \cdot [0.05,5]\times 10^{-6}$.[]{data-label="fig:flavour"}](figures/BLSSM_BR_meg.eps "fig:"){width="0.76\linewidth"}
![(Upper plot) BR($\mu \to e \gamma$) and (lower plot) BR($\mu \to 3e$) as a function of the light neutrino masses in GeV (black: $\nu_1$, red: $\nu_2$). The blue horizontal lines represent the actual experimental limits, from Refs. [@Adam:2013mnn] and [@Bellgardt:1987du], respectively. The parameters have been chosen as $m_0\in [0.4,2]$ TeV, $M_{1/2} \in [1.0,2.0]$ TeV, $\tan\beta\in [5,40]$, $A_0 \in [-4.0,4.0]$ TeV, $\tan\beta ' \in [1.05,1.15]$, $M_{Z'} \in [2.5,3.5]$ TeV, $Y_x \in {\bf 1} \cdot [0.002,0.4]$, $Y_\nu \in {\bf 1} \cdot [0.05,5]\times 10^{-6}$.[]{data-label="fig:flavour"}](figures/BLSSM_BR_mu3e.eps "fig:"){width="0.76\linewidth"}
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
For convenience, the impact of satisfying the above bounds will be shown only in the inverted hierarchy case, due to the smaller density of configurations therein. In the normal hierarchy case, points violating the bounds are instead automatically dropped.
Regarding the long-standing $(g-2)_\mu$ discrepancy, in the setup investigated here the charginos, the charged Higgs bosons and the $Z'$ boson are too heavy, while the neutralino and the sneutrino are too weakly coupled, to give a significant enhancement over the SM prediction.
Higgs phenomenology
===================
We review here the phenomenology of the Higgs sector, presenting a first survey of its main features. First, results obtained when the normal hierarchy is imposed are presented. Then, we show that the inverted hierarchy is also possible over a large portion of the parameter space. Without aiming at completeness, the results are presented as the starting point for a more thorough investigation. Finally, we describe how model features pertaining to the extended gauge sector impinge on the Higgs phenomenology, and in particular how the Higgs-to-diphoton branching ratio can easily be enhanced in this model, despite the experimental data now converging towards a more SM-like behaviour than in the recent past.
Normal hierarchy
----------------
In this subsection we discuss the normal hierarchy case, with the lightest Higgs boson being the SM-like one ([*i.e*.]{}, predominantly from the doublets), and a heavier Higgs boson predominantly from the bilepton fields (those carrying $B-L$ number and responsible for its spontaneous breaking). Their mixing is going to be small and solely due to the kinetic mixing.
![Branching ratios for $H_2$ with $M_{H_2} > M_{H_1} = 125$ GeV. The [*[CP]{}*]{}-even sneutrino channel (brown) is superimposed. \[fig:h2BR\]](figures/BLSSM_BR_H2.eps){width="50.00000%"}
In [Fig.]{} \[fig:h2BR\] we first inspect the heavy Higgs boson branching ratios. Besides the standard decay modes, the decay into a pair of SM Higgs bosons exists, as well as two new channels characteristic of this model, involving right-handed (s)neutrinos:
1. $H_2 \to H_1 H_1$. Its BR can be up to $40\%$ before the top quark threshold, and around $30\%$ afterwards;
2. $H_2 \to \nu_h \nu_h$. A similar decay channel exists for the $Z'$ boson. The BRs are $\mathcal{O}(10)\%$, up to $20\%$, depending on the heavy Higgs and neutrino masses;
3. $H_2 \to \widetilde{\nu}^S\widetilde{\nu}^S$, where $\widetilde{\nu}^S$ is the [*[CP]{}*]{}-even sneutrino and the LSP, hence providing fully invisible decays of the heavy Higgs. If kinematically open, this channel saturates the Higgs BRs. Notice that only points with very light [*[CP]{}*]{}-even sneutrinos are shown, which are possible only for very large and negative $A_0$ (see [Fig.]{} \[fig:sneumasses\]).
![Mixing between Higgs boson mass eigenstates (blue-orange: $H_1$, cyan-red: $H_2$) and scalar doublet fields, as a function of $M_{H_2}$. $ZH[i,j]$ is the scalar mixing matrix. Orange/red points are the subset corresponding to BR$(H_2\to \widetilde{\nu}^S\widetilde{\nu}^S) > 90\%$ .\[fig:h2mixing\]](figures/BLSSM_H2_mixing.eps){width="50.00000%"}
While the first two channels exist also in the non-SUSY version of the model [^3] (see, [*e.g.*]{}, [@Basso:2010yz]), the last one, involving the [*[CP]{}*]{}-even sneutrino, is truly new and rather intriguing. This is because the sneutrino is light and it can be a viable LSP candidate if its mass is lower than that of $H_2$, as is the case here [@Basso:2012gz]. This however implies that the heavy Higgs is predominantly bilepton-like, with a light Higgs very much SM-like. This can be seen in [Fig.]{} \[fig:h2mixing\], where the points with large BR($H_2 \to \widetilde{\nu}^S\widetilde{\nu}^S$) (in red) have the lowest mixing between $H_2$ and the SM scalar doublet fields, of the order of $0.1\%$. It immediately follows that this channel will have a very small cross section at the LHC, when considering SM-like Higgs production mechanisms. This is true for all heavy Higgs masses $M_{H_2} > 140$ GeV. The $125$ GeV Higgs is well SM-like, with a tiny reduction of its couplings to the SM particle content. On the other hand, the heavy Higgs is feebly mixed with the doublets, suppressing its interactions with the SM particles, and hence its production cross section. This can be seen in [Fig.]{} \[fig:h2xs\] (top frame). Considering only the gluon fusion production mechanism, and multiplying it by the relevant BR, we get the cross sections for the choice of channels displayed therein. The most constraining channels, $H\to WW \to \ell\nu jj$ and $H\to WW \to 2\ell 2\nu$, are also compared to the exclusions at the LHC for $\sqrt{s}=8$ TeV from Refs. [@CMS-PAS-HIG-13-027] and [@Chatrchyan:2013yoa], respectively. The $H\to ZZ$ channels are well below current exclusions, which are hence not shown.
![Cross sections at $\sqrt{s} = 8$ TeV for (upper plot) the SM-like channels (lower plot) the new channels, as a function of the heavy Higgs mass. The solid lines above are the exclusion curves from [@CMS-PAS-HIG-13-027; @Chatrchyan:2013yoa]. \[fig:h2xs\]](figures/BLSSM_H2_xsSM.eps "fig:"){width="50.00000%"}\
![Cross sections at $\sqrt{s} = 8$ TeV for (upper plot) the SM-like channels (lower plot) the new channels, as a function of the heavy Higgs mass. The solid lines above are the exclusion curves from [@CMS-PAS-HIG-13-027; @Chatrchyan:2013yoa]. \[fig:h2xs\]](figures/BLSSM_H2_xsNEW.eps "fig:"){width="50.00000%"}
We see that all [^4] the displayed configurations are allowed by the current searches (the exclusions are shown as solid curves of the same colour as the depicted channel). This is because of the suppression of the heavy Higgs boson cross sections due to the small scalar mixing.
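This mixing-squared suppression can be sketched with a back-of-the-envelope estimate. The sketch below is illustrative only: it assumes the heavy Higgs inherits the SM gluon-fusion cross section at its mass rescaled by the squared doublet admixture, takes the quoted $\sim 0.1\%$ as that squared admixture, and uses a ballpark $7$ pb for the SM reference cross section at $200$ GeV and $8$ TeV (an assumed number, not from the scan):

```python
# Illustrative sketch: a weakly mixed heavy Higgs has, approximately, the SM
# gluon-fusion cross section at the same mass rescaled by the squared doublet
# admixture |ZH[2, doublet]|^2. All numbers below are assumptions.
sigma_sm_ggf_200gev_8tev_pb = 7.0  # ballpark SM ggF cross section, 200 GeV Higgs, 8 TeV
doublet_admixture_sq = 1e-3        # the ~0.1% quoted for the red points, taken as squared mixing

sigma_h2_pb = doublet_admixture_sq * sigma_sm_ggf_200gev_8tev_pb
print(f"sigma(gg -> H2) ~ {sigma_h2_pb * 1e3:.1f} fb")  # prints: sigma(gg -> H2) ~ 7.0 fb
```

A few fb is indeed far below the pb-level exclusions shown in the figure.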
In the lower plot the cross sections for the new channels are displayed. Those pertaining to model configurations for which the heavy Higgs boson decays to the [*[CP]{}*]{}-even sneutrino (LSP), yielding a fully invisible decay mode, are displayed in red. Contrary to all the other cases, the production of the heavy Higgs for this channel is via vector boson fusion, as searched for at the LHC [@CMS-PAS-HIG-14-038]. Typical cross sections range between $0.1$ fb and $1$ fb. The $H_2\to H_1H_1$ channel is shown in blue and can yield cross sections of $1\div 10$ fb for $250 < M_{H_2} < 400$ GeV. Last is the $H_2 \to \nu_h \nu_h$ channel. It can be sizable only for very light $H_2$ masses: $\sim 10\div 100$ fb for $140 < M_{H_2} < 160$ GeV, although the further decay chain of the heavy neutrinos has to be accounted for. The latter can give spectacular multi-leptonic final states of the heavy Higgs boson ($4\ell 2\nu$ and $3\ell 2j \nu$) or high jet multiplicity ones ($2\ell 4j$), via $\nu_h \to \ell^\mp W^\pm$ and $\nu_h \to \nu Z$ in a $2:1$ ratio (modulo threshold effects). Further, these decays are typically seesaw-suppressed and can therefore give rise to displaced vertices [@Basso:2008iv].
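The composition of these final states follows from simple branching-fraction bookkeeping. The sketch below is illustrative: it keeps only the $\nu_h \to \ell W$ chains responsible for the listed final states, takes the stated $2:1$ decay ratio at face value, and uses round SM $W$ branching fractions (an assumption here, as thresholds are ignored):

```python
# Bookkeeping sketch for the H2 -> nu_h nu_h decay chains described in the
# text; the 2:1 nu_h decay ratio and the final states are from the text, the
# W branching fractions are round SM numbers (assumed).
br_nuh_lw = 2.0 / 3.0  # BR(nu_h -> l W), from the stated 2:1 ratio vs nu_h -> nu Z
br_w_lep = 0.33        # BR(W -> l nu), summed over lepton flavours (approximate)
br_w_had = 0.67        # BR(W -> jj) (approximate)

both_lw = br_nuh_lw ** 2                        # both heavy neutrinos take the l W chain
f_4l2nu = both_lw * br_w_lep ** 2               # 4l 2nu: both W decay leptonically
f_3l2jnu = both_lw * 2 * br_w_lep * br_w_had    # 3l 2j nu: one leptonic, one hadronic W
f_2l4j = both_lw * br_w_had ** 2                # 2l 4j: both W decay hadronically

print(f"4l2nu: {f_4l2nu:.3f}  3l2jnu: {f_3l2jnu:.3f}  2l4j: {f_2l4j:.3f}")
```

The three fractions sum to $(2/3)^2 = 4/9$, the probability that both heavy neutrinos pick the charged-lepton chain; the remaining $5/9$ involves at least one $\nu_h \to \nu Z$ decay.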
Inverted hierarchy
------------------
In this subsection we discuss the inverted hierarchy case, where $H_2$ is the SM-like boson and a lighter Higgs boson exists.
![Branching ratios for the $125$ GeV Higgs boson ($H_2$). The decay into heavy neutrinos is displayed with diamonds. All others with circles. Gray points are excluded by the low energy observables and by [[HiggsBounds]{}]{}. The decay into [*CP*]{}-odd sneutrinos is not shown. \[fig:invH2BR\]](figures/BLSSM_BRH2_invlog.eps){width="50.00000%"}
We start once again by presenting the BRs for the next-to-lightest Higgs boson in [Fig.]{} \[fig:invH2BR\]. This time, however, this is the SM-like boson, hence predominantly from the doublets. It has the same new channels as the heavy Higgs in the normal hierarchy, the only difference being the [*[CP]{}*]{}-odd [R-sneutrino]{} instead of the [*[CP]{}*]{}-even one. This is simply because the inverted hierarchy can happen only for large positive $A_0$ values, where only the [*[CP]{}*]{}-odd [R-sneutrino]{} can be light, see [Fig.]{} \[fig:sneumasses\]. The configurations not allowed by the low energy observables or by [[HiggsBounds]{}]{} are displayed as gray points. We see that $H_2$ may have sizable decays into pairs of the lighter Higgs bosons, yielding $4b$-jet final states. This decay is still allowed with rates up to a few percent. Further, rare decays into pairs of heavy neutrinos are also present, with BRs below the permil level. This channel can give rise to rare multi-lepton/jet decays for the SM-like Higgs boson, which are searched for at the LHC, even in combination with searches for displaced vertices [@Basso:inpreparation]. The last available channel is the decay into pairs of [*[CP]{}*]{}-odd [R-sneutrinos]{}. Being the LSP, the latter will increase the invisible decay width and hence yield a larger-than-expected width for the SM-like boson. Its rate is obviously constrained, and a precise evaluation of the allowed range is needed. This however goes beyond the scope of the present review and we postpone it to a future publication.
Regarding the lightest Higgs boson ($H_1$), it will obviously decay predominantly into pairs of $b$-jets. Notice that, due to its large bilepton fraction, it can also decay into pairs of very light RH neutrinos, at sizable rates depending on the neutrino masses. As in the previous figure, the non-allowed configurations are displayed as gray points. We see that the pattern of decays is not affected by the inclusion of the constraints, in the sense that this channel stays viable. Once again, the latter will yield multi-lepton/jet final states, which will be very soft, and hence very challenging for the LHC. However, also in this case displaced vertices may appear.
![Same as in [Fig.]{} \[fig:invH2BR\] for the lightest Higgs boson ($H_1$).\[fig:invH1BR\]](figures/BLSSM_BR_lowH1.eps){width="50.00000%"}
As in the previous section, we show in [Fig.]{} \[fig:Hmixinv\] the mixing between the Higgs mass eigenstates and the doublet fields as a function of the light Higgs mass, to show that $H_2$ is here rather SM-like. Once more, the gray points displayed here are excluded by the low energy observables and by [[HiggsBounds]{}]{}.
![Mixing between scalar mass eigenstates (black: $H_1$, red: $H_2$) and scalar doublet fields, as a function of $M_{H_1}$. $ZH[i,j]$ is the scalar mixing matrix. Gray points are excluded by the low energy observables and by [[HiggsBounds]{}]{}. \[fig:Hmixinv\]](figures/BLSSM_mixing_inv.eps){width="50.00000%"}
Finally, the production cross sections for the lightest Higgs boson can be evaluated. In [Fig.]{} \[fig:h1xsinv\] we compare the direct production (for the main SM production mechanisms, gluon fusion and vector boson fusion) with pair production via $H_2$ decays, considering gluon fusion only: $gg \to H_2 \to H_1 H_1$. When the latter channel is kinematically open, [*i.e.*]{} $2M_{H_1} < 125$ GeV, lightest Higgs boson pair production has cross sections of up to $1$ pb at the LHC at $\sqrt{s}=8$ TeV, and it can give rare $4b$, $2b2V$ or $4V$ ($V=W,\,Z$) decays of the SM-like Higgs boson. A thorough analysis of the phenomenology of the Higgs sector in the [BLSSM]{} for the upcoming LHC run 2, based on the first investigations shown here, will be performed soon.
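For orientation, the upper end of this cross-section range can be turned into a rough event count. The integrated luminosity and BR($H_1 \to b\bar{b}$) below are purely illustrative assumptions, not outputs of the scan:

```python
# Illustrative event-yield estimate for gg -> H2 -> H1 H1 -> 4b at 8 TeV.
sigma_pb = 1.0   # upper end of the cross-section range quoted in the text
lumi_ifb = 20.0  # assumed integrated luminosity, typical of an LHC run 1 dataset
br_h1_bb = 0.85  # assumed BR(H1 -> bb) for a light, bilepton-like H1

n_4b = sigma_pb * 1e3 * lumi_ifb * br_h1_bb ** 2  # 1 pb = 1e3 fb
print(f"~{n_4b:.0f} 4b events before acceptance and b-tagging")
```

Even with efficiencies folded in, yields of this size suggest the $4b$ channel is worth a dedicated study.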
![Cross sections at $\sqrt{s} = 8$ TeV for different production mechanisms. Gluon-fusion (in red) and vector-boson-fusion (in green) mechanisms are displayed only for $M_{H_1}>50$ GeV for simplicity. Gray points are excluded by the low energy observables and by [[HiggsBounds]{}]{}. \[fig:h1xsinv\] ](figures/BLSSM_XSH1_inv.eps){width="50.00000%"}
Enhancement of the diphoton rate
--------------------------------
A feature of gauge-extended models is that new SUSY-breaking D-terms arise, which give further contributions to the sparticle masses. In the case of the model under consideration, we showed when discussing eq. (\[B-L-Dterms\]) that these terms can be large, and that they bring larger corrections to sleptons than to squarks. We already discussed how the vacuum structure of the [BLSSM]{} is affected by this. Here, we discuss their impact on the Higgs phenomenology, focusing on the Higgs-to-diphoton decay as an illustrative case, despite it being disfavoured by the most recent data [@Khachatryan:2014jba]. See [Ref.]{} [@Basso:2012tr] for further details.
To start our discussion let us briefly review the partial decay width of the Higgs boson $h$ into two photons within the MSSM and its singlet extensions. This can be written as (see, e.g., [@Djouadi:2005gi]) $$\begin{aligned}
\label{eq:decaywidth}
&\Gamma_{h \rightarrow\gamma\gamma}
= \frac{G_\mu\alpha^2 m_{h}^3}
{128\sqrt{2}\pi^3} \bigg| \sum_f N_c Q_f^2 g_{h ff} A_{1/2}^{h}
(\tau_f) + g_{h WW} A_1^{h} (\tau_W) \nonumber \\
& \hspace*{0.2cm} + \frac{m_W^2 g_{h H^+ H^-} }{2c_W^2
m_{H^\pm}^2} A_0^h(\tau_{H^\pm}) + \sum_{\chi_i^\pm} \frac{2 m_W}{ m_{\chi_i^\pm}} g_{h \chi_i^+
\chi_i^-} A_{1/2}^{h} (\tau_{\chi_i^\pm}) \nonumber \\
& \hspace*{0.2cm}
+\sum_{\tilde e_i} \frac{ g_{h \tilde e_i \tilde e_i} }{
m_{\tilde{e}_i}^2} \, A_0^{h} (\tau_{ {\tilde e}_i}) +\sum_{\tilde q_i} \frac{ g_{h \tilde q_i \tilde q_i} }{
m_{\tilde{q}_i}^2} \, 3 Q_{\tilde q_i}^2 A_0^{h} (\tau_{ {\tilde q}_i}) \bigg|^2\,,\end{aligned}$$ corresponding to the contributions from charged SM fermions, $W$ bosons, charged Higgs, charginos, charged sleptons and squarks, respectively. The amplitudes $A_i$ at lowest order for the spin–1, spin–$\frac{1}{2}$ and spin–0 particle contributions can be found for instance in [Ref.]{} [@Djouadi:2005gi]. $g_{hXX}$ denotes the coupling between the Higgs boson and the particle in the loop and $Q_X$ is its electric charge. In the SM, the largest contribution is given by the $W$-loop, while the top-loop leads to a small reduction of the decay rate. In the MSSM, it is possible to get large contributions from sleptons and squarks, although it is difficult to realise such a scenario in a constrained model with universal sfermion masses [@Carena:2011aa; @Ellwanger:2011aa; @Benbrik:2012rm]. In singlet or triplet extensions of the MSSM, also the chargino and the charged Higgs can enhance the loop significantly [@SchmidtHoberg:2012yy; @Delgado:2012sm]. However, this is only possible for large singlet couplings, which lead to a cut-off well below the GUT scale. In contrast, it is possible to enhance the diphoton ratio in the [BLSSM]{} due to light staus even in the case of universal boundary conditions at the GUT scale. We show this by calculating explicitly the contributions of the stau: $$\begin{aligned}
& A(\tilde{\tau}) = \frac{1}{3} \frac{\partial \text{det} m_{\tilde{\tau}}^2}{\partial \log v} \\
\simeq& -\frac{2}{3} \frac{2 m_{\tau}^2 (A_\tau - \mu \tan\beta)^2}{(m_E^2 + D_R)(m_L^2 + D_L) + m_{\tau}^2 \mu \tan\beta(2 A_\tau - \mu \tan\beta)}\, .\end{aligned}$$ Here, $D_L$ and $D_R$ represent the D-term contributions of the left- and right-handed stau and we have neglected sub-leading contributions. Given that $2 A_\tau < \mu \tan\beta$, for fixed values of the other parameters, $D_R$ and $D_L$ can be used to enhance the $\gamma\gamma$ rate by suppressing the denominator.
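The interplay of the terms in [eq.]{} (\[eq:decaywidth\]) can be made concrete numerically. The sketch below uses the standard lowest-order loop functions of [@Djouadi:2005gi]; the stau prefactor is a made-up illustrative value (a light stau with large mixing), not a scan point, chosen only to show that a stau term entering with the same sign as the $W$ loop enhances the amplitude:

```python
import math

def f(tau):
    """Loop integral for tau = m_h^2 / (4 m^2) <= 1 (particle above threshold)."""
    assert tau <= 1.0
    return math.asin(math.sqrt(tau)) ** 2

def A_1(tau):     # spin-1 (W boson) amplitude
    return -(2 * tau**2 + 3 * tau + 3 * (2 * tau - 1) * f(tau)) / tau**2

def A_half(tau):  # spin-1/2 (fermion) amplitude; -> 4/3 as tau -> 0
    return 2 * (tau + (tau - 1) * f(tau)) / tau**2

def A_0(tau):     # spin-0 (sfermion) amplitude; -> 1/3 as tau -> 0
    return -(tau - f(tau)) / tau**2

m_h, m_w, m_t = 125.0, 80.4, 173.0
tau = lambda m: m_h**2 / (4 * m**2)

# SM part: W loop plus top loop with colour factor N_c Q_t^2 = 4/3.
amp_sm = A_1(tau(m_w)) + (4.0 / 3.0) * A_half(tau(m_t))

# Assumed stau term: the prefactor -1.4 (g m_W^2 / m_stau^2 with large mixing)
# is illustrative only; 250 GeV is a stau mass of the order of the benchmarks.
amp_stau = -1.4 * A_0(tau(250.0))

ratio = abs(amp_sm + amp_stau) ** 2 / abs(amp_sm) ** 2
print(f"Gamma(h -> gamma gamma) enhancement: {ratio:.2f}")  # prints: ... 1.15
```

The $W$ loop dominates ($A_1 \approx -8.3$ against $+1.8$ from the top), so a stau contribution of the same sign as the $W$ term directly enhances $\Gamma_{h\to\gamma\gamma}$; the D-terms do exactly this by suppressing the denominator of $A(\tilde\tau)$ above.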
We now turn to a fully numerical analysis to demonstrate that the mechanism enhancing the Higgs-to-diphoton rate is a genuine feature of the extended gauge sector. It results from reducing the stau mass at the Higgs mass scale via the extra D-terms, as shown when discussing [eq.]{} (\[B-L-Dterms\]). We recall that this mechanism leaves the stop mass, and hence, as we will show, the effective Higgs-to-gluon coupling, nearly unchanged.
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![(Top plot) The mass of the SM-like Higgs \[bottom(blue line)\], of the stau \[middle(black) line, where the dashed line represents a reference unchanged value\] and of the lightest neutralino \[top(red) line\]; (middle plot) the diphoton branching ratio; (bottom plot) the neutralino relic density as a function of $\tan\beta'$. The other parameters have been chosen as $m_0=673$ GeV, $M_{1/2} = 2220$ GeV, $\tan\beta=42.2$, $A_0 = -1842.6$, $M_{Z'} = 2550$ GeV, $Y_x = {\bf 1} \cdot 0.42$[]{data-label="fig:varTBp"}](figures/tanbp_Mi.eps "fig:"){width="0.765\linewidth"}
![(Top plot) The mass of the SM-like Higgs \[bottom(blue line)\], of the stau \[middle(black) line, where the dashed line represents a reference unchanged value\] and of the lightest neutralino \[top(red) line\]; (middle plot) the diphoton branching ratio; (bottom plot) the neutralino relic density as a function of $\tan\beta'$. The other parameters have been chosen as $m_0=673$ GeV, $M_{1/2} = 2220$ GeV, $\tan\beta=42.2$, $A_0 = -1842.6$, $M_{Z'} = 2550$ GeV, $Y_x = {\bf 1} \cdot 0.42$[]{data-label="fig:varTBp"}](figures/tanbp_BR.eps "fig:"){width="0.76\linewidth"}
![(Top plot) The mass of the SM-like Higgs \[bottom(blue line)\], of the stau \[middle(black) line, where the dashed line represents a reference unchanged value\] and of the lightest neutralino \[top(red) line\]; (middle plot) the diphoton branching ratio; (bottom plot) the neutralino relic density as a function of $\tan\beta'$. The other parameters have been chosen as $m_0=673$ GeV, $M_{1/2} = 2220$ GeV, $\tan\beta=42.2$, $A_0 = -1842.6$, $M_{Z'} = 2550$ GeV, $Y_x = {\bf 1} \cdot 0.42$[]{data-label="fig:varTBp"}](figures/tanbp_DM.eps "fig:"){width="0.757\linewidth"}
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
In Table \[tab:benchmark\] we have collected two possible scenarios that provide a SM-like Higgs particle in the mass range preferred by LHC results while displaying an enhanced diphoton rate. In the first point, the lightest [*[CP]{}*]{}-even scalar eigenstate is the SM-like Higgs boson, while the light bilepton is roughly twice as heavy. In Fig. \[fig:varTBp\] we show that all the features arise from the extended gauge sector: it is sufficient to change only $\tan\beta'$ to obtain an enhanced diphoton signal $R^1_{\gamma \gamma}\equiv \frac{\left[ \sigma (gg\to h_1) \cdot BR(h_1\to \gamma\gamma)\right]_{B-L}}{\left[ \sigma (gg\to h_1) \cdot BR(h_1\to \gamma\gamma)\right]_{SM}}$ and the correct dark matter relic density, while keeping the mass of the SM-like Higgs nearly unchanged. The dark matter candidate in this scenario is the lightest neutralino, which is mostly a bileptino (the superpartner of the bileptons). The correct abundance for $\tan\beta' \simeq 1.156$ is obtained thanks to co-annihilation with the light stau. In the second point, the SM-like Higgs is accompanied by a light scalar around $98$ GeV which couples weakly to the SM gauge bosons, compatible with the LEP excess [@Barate:2003sz; @Belanger:2012tt; @Drees:2012fb]. In this case, the LSP is a [*[CP]{}*]{}-odd sneutrino which annihilates very efficiently due to the large $Y_x$, usually resulting in a small relic density. To obtain an abundance large enough to explain the dark matter relic density, the mass of the sneutrino has to be tuned below $m_W$ [@Basso:2012gz]. This can be achieved by slightly increasing $\tan{\beta '}$ and by tuning the Majorana Yukawa couplings $Y_x$, which for the given point also tends to increase the SM-like Higgs mass. It is worth mentioning that a neutralino LSP with the correct relic density in the stau co-annihilation region can also be found in this scenario.
Notice that both points yield rates consistent with the LHC measurements in the $WW^*/ZZ^*$ channels (being $c_{hZZ}\sim 1$), as well as an effective Higgs-to-gluon coupling close to 1.
---------------------------- --------- -----------
$m_{h_1}$ \[GeV\] 125.2 98.2
$m_{h_2}$ \[GeV\] 186.9 123.0
$m_{\tilde{\tau}}$ \[GeV\] 267.0 237.3
doublet fr. \[%\] 99.5 8.7
bilepton fr. \[%\] 0.5 91.3
$c_{h_1 g g}$ 0.992 0.087
$c_{h_1 Z Z}$ 1.001 0.085
$c_{h_2 g g}$ 0.005 0.911
$c_{h_2 Z Z}$ 0.005 0.921
$\Gamma(h_1) $ \[MeV\] 4.13 0.22
$R^1_{ \gamma \gamma}$ 1.57 0.085
$R^1_{b \overline{b}}$ 1.03 0.089
$R^1_{WW^*}$ 0.98 0.05
$\Gamma(h_2) $ \[MeV\] 4.8 3.58
$R^2_{ \gamma \gamma}$ 0.005 1.79
$R^2_{b \overline{b}}$ 0.006 0.95
$R^2_{WW^*}$ 0.01 0.88
LSP mass \[GeV\] $253.9$ $82.9$
$\Omega h^2 $ $0.10$ $10^{-2}$
---------------------------- --------- -----------
: The input parameter used: Point I: $m_0 = 673$ GeV , $M_{1/2} = 2220$ GeV, $A_0 = -1842$ GeV, $\tan\beta=42.2$, $\tan\beta'=1.1556$, $M_{Z'} = 2550$ GeV, $Y_x = {\bf 1} \cdot 0.42$ (neutralino LSP). Point II: $m_0 = 742$ GeV , $M_{1/2} = 1572$ GeV, $A_0 = 3277$ GeV, $\tan\beta=37.8$, $\tan\beta'=1.140$, $M_{Z'} = 2365$ GeV, $Y_x=\text{diag}(0.40,0.40,0.13)$ ([*[CP]{}*]{}-odd sneutrino LSP). $c_{SVV}$ denotes the coupling squared of the Higgs fields to vector bosons normalised to the SM values. []{data-label="tab:benchmark"}
Conclusions
===========
In this review I have described the $U(1)_{B-L}$ extension of the MSSM, focusing in particular on the scalar sector, which has been described in detail. The fundamental role that gauge kinetic mixing plays in this sector has been underlined.
The comparison to the most constraining low energy observables showed that a preferred region for the light neutrino masses exists in which these bounds are evaded. Then, I presented a first systematic investigation of the phenomenology of the Higgs sector of this model, showing that both the normal and the inverted hierarchy of the two lightest Higgs bosons are naturally possible in a large portion of the parameter space. Particular attention has been devoted to analysing the new decay channels involving both the [*[CP]{}*]{}-even and [*[CP]{}*]{}-odd [R-sneutrinos]{}, which are a peculiarity of the [BLSSM]{}. Based on these first findings, a thorough analysis of the Higgs sector in the [BLSSM]{} at the upcoming LHC run 2 will soon be prepared. The fit of the SM-like Higgs boson to the LHC data will also be performed with [[HiggsSignals]{}]{} [@Bechtle:2013xfa].
Finally, I described how in the [BLSSM]{} model (and in general in gauge-extended MSSM models) the Higgs-to-diphoton decay can easily be enhanced. Despite being disfavoured by the most recent data, this feature is a consequence of the potentially large new SUSY-breaking D-terms arising from the [[$B-L$]{}]{} sector. At the same time, these terms also affect the vacuum structure of the model, where naive R-Parity conserving configurations at the tree level could develop deeper R-Parity violating global minima, or partially restore the $SU(2)_L\times U(1)_{B-L}$ symmetry at one loop. It is however still possible to find R-Parity conserving global minima over the whole parameter space, which can either accommodate an enhancement of the Higgs-to-diphoton decay or fit the most recent Higgs data.
Acknowledgments {#acknowledgments .unnumbered}
===============
I would like to thank S. Moretti and C. H. Shepherd-Themistocleous for helpful discussions in the early stages of this work. I am also really grateful to all my collaborators, and in particular to Florian Staub. I further acknowledge support from the Theorie-LHC France initiative of the CNRS/IN2P3 and from the French ANR 12 JS05 002 01 BATS@LHC.
[^1]: For a review, see Ref. [@Ellis:2012nv].
[^2]: With pole mass we denote the one-loop corrected mass at $Q=M_{SUSY}=\sqrt{m_{\tilde{t}_1}m_{\tilde{t}_2}}$, while in the loop leading to the effective $h\gamma\gamma$ coupling the running $\overline{\text{DR}}$ tree-level mass at $Q=m_h$ enters, $h$ being the SM-like Higgs boson, [*i.e.*]{} $m_h=125$ GeV.
[^3]: However, in the non-SUSY [[$B-L$]{}]{}model the Higgs mixing angle is a free parameter, directly impacting on these branching ratios.
[^4]: Starting from $M_{H_2} > 130$ GeV.
In 1922, St. Luke's Protestant Episcopal Church, which in 1892 had relocated
to 435 West 141st Street from its Hudson Street location in Greenwich Village,
established the St. Luke's Episcopal Mission for Negroes. A chapel seating 300
was created in an old row house at 28 Edgecombe Avenue, near 136th Street. By
the 1920s, African-Americans were moving into Harlem, and the mission was probably
an effort to keep African-Americans segregated from the regular parish. The church
is now home to New Hope Seventh-day Adventist Church.
The Factory Specification (Aug. 18, 1930) shows that Hillgreen, Lane & Company, of Alliance, Ohio, would build a two-manual organ for St. Luke's Mission for a consideration of $4,000. All of the pipes were enclosed in one expression box, and the console was detached.
The 2009 status of this organ is unknown.
Great Organ (Manual I) – 61 notes, enclosed
    16       Sub Flute [unit]       97
     4       Flute                   —
     8       Diapason               73
     2 2/3   Dolce Quint            SW
     8       Melodia                 —
     2       Flautino                —
     8       Dulciana               SW
Swell Organ (Manual II) – 61 notes, enclosed
    16       Contra Dolce [unit]    97
     2 2/3   Quint Flute             —
     8       Salicional             73
     2       Super Dolce             —
     8       Vox Celeste [TC]       61
             Dolce Mixture III *    derived
     8       Dolce                   —
     8       Schalmei [Synth.]       —
     8       Flute                  GT
     8       Cornopean              73
     4       Octave Flute           GT
     4       Dolcette                —
    * 8' Salicional + 2-2/3' Quint Flute
Pedal Organ – 32 notes
    16       Bourdon                 —
     8       Octave Dolce            —
    16       Dolce Bass              —
     4       Octave Flute            —
     8       Flute                   —
Couplers
Great to Pedal 8'
Great 16', 4', Unison Cancel
Swell to Pedal 8', 4'
Swell 16', 4', Unison Cancel
Swell to Great 16', 8', 4'
Coupler Cancel
Great to Swell 8'
Combinations (adjustable at console, visibly operating registers)
Swell Organ: Pistons 1-2-3-4-5
Great Organ: Pistons 1-2-3-4
Pedal Organ: Pistons 1-2-3
Entire Organ: Pistons 1-2-3-4-5
Accessories
Tremolo
Wind Indicator by light
Crescendo Indicator by light
Pedal Movements
Great to Pedal Reversible
Swell expression
Crescendo and Full Organ
Sources: The American Organist (1922). Courtesy Jonathan Bowen.
Gray, Christopher. "Streetscapes/141st Street and Convent Avenue; 1892 Church for a Congregation That Moved Uptown," The New York Times (Oct. 20, 2002).
Trupiano, Larry. Factory Specifications of Hillgreen, Lane & Company Organ (1930).
(b) j (c) 2/3
c
Let l = 1.8 - 7.8. Let h = l + 6. Let i = 9 + -8.9. What is the closest to h in 4, i, 0.2?
i
Let f = -40 + 361/9. Let p = 88.89 - 88. Let u = -0.11 - p. Which is the nearest to u? (a) -3 (b) f (c) 1/5
b
Let y = 198.3 - 195.8. What is the nearest to -0.1 in -0.4, 4, y?
-0.4
Let t = -21.9 + 22.1. Which is the closest to 1? (a) -21 (b) t (c) 0.4
c
Let g = 1.25 + 0.75. Which is the closest to g? (a) -1 (b) 0.2 (c) -0.07
b
Let l = -28.014 - -0.014. Let q = 27 + l. What is the closest to q in -1/6, 0.3, -5/2?
-1/6
Let h = -38 - -41. What is the closest to 2 in -0.3, h, 2/9?
h
Let q(u) = -3*u**2 - 16*u + 69. Let b be q(-8). What is the nearest to 4/5 in 2/7, 2, b, 0.2?
2/7
Let g = -0.97 + -0.03. Which is the closest to 6? (a) 0.2 (b) -0.1 (c) g
a
Let u be ((-153)/(-986))/(1/4). What is the nearest to -2/5 in 3, 0.1, u?
0.1
Let x = 15.95 + 0.05. Let z = 17 - x. Let u(i) = 6*i + 1. Let o be u(-1). Which is the closest to z? (a) 1 (b) 2/5 (c) o
a
Let h = 249/1430 + 1/130. Let w = 0.14 + -0.24. Let r = 0 + 1. Which is the closest to w? (a) r (b) -5 (c) h
c
Suppose -4*z + z = 4*j - 143, -5*z + 5 = 0. Let v = 104/3 - j. Which is the nearest to v? (a) -4 (b) 1/11 (c) 1/8
b
Suppose -2*d - 2*p = -3*p - 10, 2*p + 16 = 3*d. Let q = -2.8 + 0.4. Let m = -2 - q. Which is the nearest to m? (a) -1/2 (b) -5/3 (c) d
a
Let y = 2/111 + 146/111. Suppose 3*v - 3*i - 15 = 0, 5*v + 4*i + 5 = 3*i. Let c(u) = 2*u + 4. Let o be c(v). What is the nearest to 1/3 in y, -3, o?
y
Let r = 12681/4 - 3211. Let k = r - -1459/36. Which is the nearest to -0.1? (a) -2 (b) -0.2 (c) k
b
Let w = -0.06 + 1.06. Let p = 297 + -296.5. What is the closest to 0 in -2, p, w?
p
Let z = -0.2 + 0.1. Let x = 27.2 - 30. Let m = 2.5 + x. Which is the closest to 0.1? (a) m (b) -0.2 (c) z
c
Let y = 39 - 43.6. Let z = 3.6 + y. Which is the nearest to -0.1? (a) -0.8 (b) 0.3 (c) z
b
Let i = -1.25 + -0.73. Let j = -0.08 - i. Let x = 2.1 + j. What is the closest to 0 in x, 2/5, -5?
2/5
Let q = -2.7 - 1.3. Suppose -5*x = 3*m - 0 + 15, -3*x = -3*m + 9. Suppose -15 = -m*n - 3*n. What is the closest to 2 in q, -2/7, n?
-2/7
Let a be 3 + -1 + 3 - 1. Suppose 8 = -a*i, -x - 2*i - 4 = -3*x. What is the nearest to 0.1 in 2/9, x, -1/3?
x
Let o = -93 + 94. Which is the closest to o? (a) -16 (b) -2/11 (c) 0.03
c
Let u = -67.9 + 68. Which is the closest to u? (a) 2/7 (b) 7 (c) 0.1
c
Let u = 712 - 711. Let p = -0.1 - -0.1. What is the nearest to u in -2/7, 3, p?
p
Let v = 0.89 - 0.69. Let t(r) = -r**3 - 6*r**2 - 4. Let j be t(-6). Which is the nearest to j? (a) 4 (b) v (c) 0
c
Let a = 4922/15 + -328. What is the nearest to -1 in -1/3, -0.4, a?
-0.4
Let y = -106 + 107.6. Which is the nearest to 5? (a) -1 (b) -0.1 (c) y
c
Let y be (-2)/(-7) - (-380)/(-7). Let u = y - -158/3. Let x be (-1)/(-20)*-2*4. Which is the closest to x? (a) 3/4 (b) -0.4 (c) u
b
Let d = 1.06 - -81.94. Let c = d + -83.1. What is the nearest to 0 in 3/2, c, 2/13?
c
Let f = -3.6 + 0.6. Let v = -18.2 - -18.2. What is the nearest to v in f, 0, -2/7?
0
Let n = 0.0027 + -0.2027. Let j = 1 + -1. Which is the closest to -1? (a) n (b) -4 (c) j
a
Let d = -15.2 + 18. Which is the closest to d? (a) 1/3 (b) 1/2 (c) 5/4
c
Let p be 9/(-6)*(-4)/3. Suppose 2*j - u - 80 = -j, -5*j - 3*u = -124. Suppose -5*c + g = j, -2*c - 14 = -0*g - 4*g. What is the nearest to p in -1, c, -4?
-1
Let f be -3 + (11/(-4))/(-1). Let h = -539 - -539. Suppose -12 = -3*k - 3. What is the nearest to h in f, k, -2?
f
Let m be 12/(-4)*6/(-9). Suppose -8 = -5*l + m. Suppose -l*u - 3*n = n + 10, u = 3*n + 10. Which is the nearest to 2/3? (a) u (b) -5 (c) 4/5
c
Let u = -50 + 347/7. Let m be ((-4)/(-27))/((-4)/6). Let p be 2/3*(0 - 9)/(-27). Which is the nearest to u? (a) m (b) -1 (c) p
a
Let m(q) = -2*q - 1. Let h = 80 - 78. Let x be m(h). Which is the closest to -0.1? (a) x (b) -2 (c) 1
c
Let l = 61.45 - 62. Let i = -2.55 - l. Which is the closest to -0.1? (a) 0.15 (b) i (c) 5
a
Let m = 158.5 + -121. Let n = m - 38. What is the closest to 0.1 in -10, -4/7, n?
n
Let n(j) = 7*j + 5. Let f be n(15). Suppose -4*w - 2*g - f = 0, 2*w + w - 5*g + 50 = 0. What is the nearest to 1 in w, 1/2, 2/9?
1/2
Let f = 0.2 - 0.1. Let x = -0.743 + -0.257. Let o = -1.995 - 0.005. Which is the nearest to x? (a) o (b) -3/2 (c) f
b
Let t = -22 - -18. Let j = t - -6. What is the closest to j in 2/9, 1, 0.1?
1
Let w = 5.39 - 4.39. Which is the nearest to -1/6? (a) -0.4 (b) 2/5 (c) w
a
Let h = -678 + 594. Which is the nearest to 2? (a) -3/8 (b) h (c) -0.5
a
Suppose 15*p - 18*p + 39 = 0. Suppose p*f - 60 = f. What is the nearest to -1 in f, -3, -3/7?
-3/7
Let d be 4/(-14) - 22595/175. Let z = 129 + d. Which is the nearest to -0.3? (a) z (b) 1 (c) 0.4
a
Let w be 70/14*-29*2/(-10). Suppose -w*m + 30*m + 4 = 0. Suppose 3*i + 1 = 2*o + 18, i - 3*o = 15. Which is the closest to 1/4? (a) i (b) -2/3 (c) m
b
Let s be (((-216)/(-15))/(-12))/2. Let h = -132 + 658/5. Let u = -1.6 + -0.4. What is the nearest to 0 in h, u, s?
h
Let x(l) = -l**3 + 5*l**2 + 6*l - 4. Let c be x(6). Let r = -3.0386 - -0.0386. Let g = -28/9 + 214/63. What is the closest to g in c, r, 1/2?
1/2
Let v be (-18)/(-27)*306/4. Suppose -t - 2*t - v = 0. Let j be -2 + -2 + (-66)/t. Which is the closest to -1? (a) -5 (b) 5/6 (c) j
c
Suppose 34 = -9*v + 61. What is the closest to -2/3 in v, -5, 5?
v
Suppose 2*h - 54 = -7*h. Let s be (-20)/24*h/(-40). Which is the nearest to 0.2? (a) -3 (b) -4 (c) s
c
Let o(l) = -3*l - 17. Suppose 0 = -4*k - 2*z - 14, -3*k = 2*z + 14 - 5. Let y be o(k). What is the closest to 0 in y, -4/7, -1/3?
-1/3
Let t = -193 + 1929/10. What is the closest to -1 in 5, t, -5?
t
Let j(m) = -2*m**2 + 2*m - 4. Let y be j(6). Let a = 69 + y. What is the nearest to 0 in a, 0.2, 2?
0.2
Let v = 9.6 - -15.2. Let h = 25 - v. Let y = 137/5 + -27. Which is the closest to y? (a) 0.1 (b) 2/5 (c) h
b
Let l = 37 - 37.31. Let w = -0.41 - l. Which is the nearest to w? (a) 0 (b) 5 (c) 1
a
Suppose 0*s - s = 2. Let u(p) = 6*p**2 - 2. Let t be u(-1). Let d be 352/(-693) - -3*t/54. Which is the closest to 0? (a) -0.5 (b) s (c) d
c
Let a = -7 + 4. Let o be -1*(-8)/(-10) + 0. What is the nearest to -1/4 in a, o, -1?
o
Let d = 0.054 - -31.946. Let j = 31 - d. What is the nearest to j in 5, 3, 0.4?
0.4
Let g = -214 + 214.1. Let y be -4*(-2 - 3/(-2)). Suppose 10 = -5*s - 2*t - 3*t, -2*s + 2*t = 0. Which is the closest to s? (a) y (b) -1/4 (c) g
b
Let z be -1*((-38)/(-8) - (-21 - -26)). Which is the closest to -2? (a) -2/37 (b) 1/8 (c) z
a
Let q = 534 - 534. What is the nearest to q in 3/2, 1, 0.1, 16?
0.1
Let x = -53.9 - -54. Which is the closest to x? (a) -0.3 (b) 10 (c) -4
a
Let y(h) = 529*h + 1. Let v be y(0). Let p = -0.09 - -0.79. Which is the closest to v? (a) p (b) -3/7 (c) -2/3
a
Suppose -2*h = -f + 3, f + 5*h + 3 = 2*f. Let z = -76 + 85. Suppose f*x + z = -0. Which is the closest to -1/4? (a) 2/7 (b) x (c) 2/15
c
Let y = -53 - -58. Let f = 3 + -2. Let l = -1.8 - 0.2. Which is the closest to 0.1? (a) l (b) f (c) y
b
Let p(b) = 2*b - 4. Let o be p(3). Let k = 37/25 - -116/175. What is the closest to 0 in 3/4, k, o?
3/4
Let w be ((-4)/3)/(2/(-3)). Let i = -42 - -64.5. Let j = i - 22. What is the nearest to 1/3 in j, w, 3?
j
Let n = 10 + 0. Let l be (20/(-30))/(n/6). Which is the nearest to l? (a) -5 (b) 4 (c) 3
c
Let g be (54/(-24))/(-9) - 42/8. Let x be (7/35)/(1/(g/(-3))). What is the closest to -2 in -0.3, x, -5?
-0.3
Let b = 6 - 3. Let v = -121.6 + 140. Let p = v + -18. What is the closest to -0.1 in b, p, 5?
p
Let a = 0.042 - 0.842. What is the nearest to 0.2 in 5/4, 2, a?
a
Let j(f) = f**3 - 6*f**2 - 13*f + 37. Let d be j(7). Which is the closest to 4? (a) 0.03 (b) 1 (c) d
b
Suppose -3*s - 10 = 149*z - 147*z, 0 = -2*s - 5*z - 25. Let v = -2.5 - -2. Which is the nearest to -2? (a) s (b) v (c) 3/4
b
Suppose 2*a - 5*a - 15 = 0. Let q = -241.9 - -237.9. What is the closest to 0.2 in 0.3, a, q?
0.3
Let p = 7.48 - 7.18. What is the closest to 2/5 in p, -2/3, -4, 3?
p
Let d = 0.91 + -0.27. Let h = 1.64 - d. Which is the closest to 0? (a) 1/2 (b) -4 (c) h
a
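Every question in this block reduces to picking the candidate that minimizes the absolute distance |x - target|. A minimal Java sketch of that selection (the candidate lists below mirror two of the problems above):

```java
import java.util.List;

public class Nearest {
    // Return the candidate with the smallest absolute distance to target.
    static double nearest(double target, List<Double> candidates) {
        double best = candidates.get(0);
        for (double c : candidates) {
            if (Math.abs(c - target) < Math.abs(best - target)) {
                best = c;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // "Which is the nearest to 2/3? (a) u=1 (b) -5 (c) 4/5" -> 4/5
        System.out.println(nearest(2.0 / 3, List.of(1.0, -5.0, 4.0 / 5))); // 0.8
        // "What is the nearest to -1 in f=5, -3, -3/7?" -> -3/7
        System.out.println(nearest(-1.0, List.of(5.0, -3.0, -3.0 / 7)));
    }
}
```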
Let h = 3.7 + -6. Let n = -2.6 - h. Let m = -6 + 4. What is the nearest to |
Madame Gagné
Madame Gagné was a photographer who worked between 1886 and 1891 in Montreal, Quebec, Canada. She and her husband, Édouard C. Gagné (also a photographer), had a total of three studios over time. At least one of her prints can be found at Montreal's McCord Museum.
Madame Gagné reportedly had a rapport with the new Chinese immigrants to Montreal, and often made portraits of them and their families. Since most photographers of the time catered to more well-to-do clients, this was an unusual custom.
Her photography studio was located at 211 Saint Laurent Boulevard, which is in the heart of today's Old Montreal.
References
External links
Portrait by Madame Gagne
Another portrait
— A brief discussion of Mme. Gagné’s photo in the McCord Museum
Category:Canadian women photographers
Category:Artists from Montreal
Category:19th-century Canadian photographers |
/*
* Copyright (c) 2000, 2003, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation. Oracle designates this
* particular file as subject to the "Classpath" exception as provided
* by Oracle in the LICENSE file that accompanied this code.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
package com.sun.corba.se.impl.encoding;
import com.sun.corba.se.spi.ior.iiop.GIOPVersion;
public class CDRInputStream_1_1 extends CDRInputStream_1_0
{
// See notes in CDROutputStream
protected int fragmentOffset = 0;
public GIOPVersion getGIOPVersion() {
return GIOPVersion.V1_1;
}
// Template method
public CDRInputStreamBase dup() {
CDRInputStreamBase result = super.dup();
((CDRInputStream_1_1)result).fragmentOffset = this.fragmentOffset;
return result;
}
protected int get_offset() {
return bbwi.position() + fragmentOffset;
}
protected void alignAndCheck(int align, int n) {
checkBlockLength(align, n);
// WARNING: Must compute real alignment after calling
// checkBlockLength since it may move the position
int alignment = computeAlignment(bbwi.position(), align);
if (bbwi.position() + n + alignment > bbwi.buflen) {
// Some other ORBs may have found a way to send 1.1
// fragments which put alignment bytes at the end
// of a fragment
if (bbwi.position() + alignment == bbwi.buflen)
{
bbwi.position(bbwi.position() + alignment);
}
grow(align, n);
// We must recalculate the alignment after a possible
// fragmentation since the new bbwi.position() (after the header)
// may require a different alignment.
alignment = computeAlignment(bbwi.position(), align);
}
bbwi.position(bbwi.position() + alignment);
}
//
// This can be overridden....
//
protected void grow(int align, int n) {
bbwi.needed = n;
// Save the size of the current buffer for
// possible fragmentOffset calculation
int oldSize = bbwi.position();
bbwi = bufferManagerRead.underflow(bbwi);
if (bbwi.fragmented) {
// By this point we should be guaranteed to have
// a new fragment whose header has already been
// unmarshalled. bbwi.position() should point to the
// end of the header.
fragmentOffset += (oldSize - bbwi.position());
markAndResetHandler.fragmentationOccured(bbwi);
// Clear the flag
bbwi.fragmented = false;
}
}
// Mark/reset ---------------------------------------
private class FragmentableStreamMemento extends StreamMemento
{
private int fragmentOffset_;
public FragmentableStreamMemento()
{
super();
fragmentOffset_ = fragmentOffset;
}
}
public java.lang.Object createStreamMemento() {
return new FragmentableStreamMemento();
}
public void restoreInternalState(java.lang.Object streamMemento)
{
super.restoreInternalState(streamMemento);
fragmentOffset
= ((FragmentableStreamMemento)streamMemento).fragmentOffset_;
}
// --------------------------------------------------
public char read_wchar() {
// In GIOP 1.1, interoperability with wchar is limited
// to 2 byte fixed width encodings. CORBA formal 99-10-07 15.3.1.6.
// WARNING: For UTF-16, this means that there can be no
// byte order marker, so it must default to big endian!
alignAndCheck(2, 2);
// Because of the alignAndCheck, we should be guaranteed
// 2 bytes of real data.
char[] result = getConvertedChars(2, getWCharConverter());
// Did the provided bytes convert to more than one
// character? This may come up as more unicode values are
// assigned, and a single 16 bit Java char isn't enough.
// Better to use strings for i18n purposes.
if (getWCharConverter().getNumChars() > 1)
throw wrapper.btcResultMoreThanOneChar() ;
return result[0];
}
public String read_wstring() {
// In GIOP 1.1, interoperability with wchar is limited
// to 2 byte fixed width encodings. CORBA formal 99-10-07 15.3.1.6.
int len = read_long();
// Workaround for ORBs which send string lengths of
// zero to mean empty string.
//
// IMPORTANT: Do not replace 'new String("")' with "", it may result
// in a Serialization bug (See serialization.zerolengthstring) and
// bug id: 4728756 for details
if (len == 0)
return new String("");
checkForNegativeLength(len);
// Don't include the two byte null for the
// following computations. Remember that since we're limited
// to a 2 byte fixed width code set, the "length" was the
// number of such 2 byte code points plus a 2 byte null.
len = len - 1;
char[] result = getConvertedChars(len * 2, getWCharConverter());
// Skip over the 2 byte null
read_short();
return new String(result, 0, getWCharConverter().getNumChars());
}
}
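The alignAndCheck and grow logic above leans on computeAlignment, which is inherited from CDRInputStream_1_0 and not shown here. CDR alignment conventionally means padding the current index up to the next multiple of the primitive's alignment (1, 2, 4, or 8 bytes); the following stand-alone sketch is an assumption about what that inherited method computes, not the actual OpenJDK implementation:

```java
public class CdrAlignment {
    // Number of padding bytes needed to advance index to the next
    // multiple of align (align is a power of two in CDR: 1, 2, 4, or 8).
    static int computeAlignment(int index, int align) {
        if (align <= 1) {
            return 0;
        }
        int remainder = index % align;
        return remainder == 0 ? 0 : align - remainder;
    }

    public static void main(String[] args) {
        System.out.println(computeAlignment(13, 4)); // 3: pad 13 up to 16
        System.out.println(computeAlignment(16, 4)); // 0: already aligned
    }
}
```

For example, a 4-byte value starting at index 13 needs 3 padding bytes, which is why alignAndCheck advances bbwi.position() by the computed alignment before the read.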
|
“My mom was the one who brought it up, actually,” said center Matt Armstrong, a four-year starter who is set to complete his degree in criminal justice. “She said, ‘Oh, now you’re not going to be able to walk.’ I just got a big smile on my face.”
Missing commencement harks back to a tradition for mid-year graduates on the football team. The Lakers, 11-2, are in the playoffs for the first time since 2010, following 10 consecutive postseason appearances that usually passed over the fall ceremonies.
Green, who leads the team in receptions with 45, had planned to attend commencement.
“I wanted to, but with football and everything, that trumps commencement, no question,” he said. “Once we won, I knew I wouldn’t be doing it. My mom was a little upset at first, but she understands. She’s a big football fan anyway.” |
TEXAS COURT OF APPEALS, THIRD DISTRICT, AT AUSTIN
NO. 03-12-00047-CR
NO. 03-12-00048-CR
Johnny Ray Embers, Jr., Appellant
v.
The State of Texas, Appellee
FROM THE DISTRICT COURT OF BELL COUNTY, 264TH JUDICIAL DISTRICT
NOS. 62717 & 68012, HONORABLE MARTHA J. TRUDO, JUDGE PRESIDING
M E M O R A N D U M O P I N I O N
In 2007, appellant Johnny Ray Embers, Jr. was charged with sexual assault of a child.
He pled guilty under a plea bargain and was placed on deferred adjudication for ten years. In May 2011,
he was charged with failure to register as a sex offender. Shortly after that indictment issued, the State
filed a motion to adjudicate Embers's sexual-assault charge, alleging he had violated the terms of his
community supervision by, among other things, possessing marihuana, failing to register as a sex
offender, missing probation appointments, failing to pay court costs, fees, or restitution, and failing to
serve a required jail term with work release. Without entering into a plea bargain, Embers pled true to
the motion to adjudicate and pled guilty to failure to register. The trial court adjudicated his guilt for
sexual assault and found him guilty of failing to register and gave him concurrent sentences of eighteen
years for sexual assault and ten years for failure to register. Embers's appointed attorney has filed a
brief concluding that the appeal is frivolous and without merit.
Counsel's brief meets the requirements of Anders v. California, 386 U.S. 738, 743-44
(1967), by presenting a professional evaluation of the record and demonstrating that there are no
arguable grounds to be advanced. See Penson v. Ohio, 488 U.S. 75, 80 (1988); Anders, 386 U.S. at 743-44; High v. State, 573 S.W.2d 807, 811-13 (Tex. Crim. App. 1978); Currie v. State, 516 S.W.2d 684,
684 (Tex. Crim. App. 1974); Gainous v. State, 436 S.W.2d 137, 138 (Tex. Crim. App. 1969). Embers's
attorney sent Embers a copy of the brief and advised him that he had the right to examine the record and
file a pro se brief. See Anders, 386 U.S. at 744; Jackson v. State, 485 S.W.2d 553, 553 (Tex. Crim.
App. 1972). No pro se brief has been filed.
Having reviewed the record and the procedures that were observed, we agree with
counsel that the appeal is frivolous and without merit. We grant counsel's motion to withdraw and
affirm the judgment of conviction. (1)
__________________________________________
David Puryear, Justice
Before Justices Puryear, Pemberton and Henson
Affirmed
Filed: August 15, 2012
Do Not Publish
1. No substitute counsel will be appointed. Should Embers wish to seek further review by
the court of criminal appeals, he must either retain an attorney to file a petition for discretionary
review or file a pro se petition for discretionary review. See generally Tex. R. App. P. 68-79
(governing proceedings in the Texas Court of Criminal Appeals). A petition for discretionary review
must be filed within thirty days from the date of either this opinion or the date this Court overrules
the last timely motion for rehearing filed. See Tex. R. App. P. 68.2. The petition must be filed with
this Court, after which it will be forwarded to the court of criminal appeals along with the rest of the
filings in the cause. See Tex. R. App. P. 68.3, 68.7. Any petition for discretionary review should
comply with rules 68.4 and 68.5 of the rules of appellate procedure. See Tex. R. App. P. 68.4, 68.5.
|
import React from "react";
import {Box, BoxProps} from "@chakra-ui/core";
interface Props extends Omit<BoxProps, "size"> {
size?: number;
}
const GridIcon: React.FC<Props> = ({size = 24, ...props}) => {
return (
<Box {...props}>
<svg
className="feather feather-grid"
fill="none"
height={size}
stroke="currentColor"
strokeLinecap="round"
strokeLinejoin="round"
strokeWidth="2"
viewBox="0 0 24 24"
width={size}
xmlns="http://www.w3.org/2000/svg"
>
<rect height="7" width="7" x="3" y="3" />
<rect height="7" width="7" x="14" y="3" />
<rect height="7" width="7" x="14" y="14" />
<rect height="7" width="7" x="3" y="14" />
</svg>
</Box>
);
};
export default GridIcon;
|
Gilberto Arias
Gilberto Arias (born 1964) was the Panamanian Ambassador to the United Kingdom from July 2009 to November 2011.
Education
Arias attended the University of Virginia from 1982 to 1986, graduating with a Bachelor of Arts degree in philosophy and economics. In 1986, following his graduation, he attended the University of Cambridge, graduating with a degree in law in 1988 and an LLM in 1989.
Career
Arias became an associate lawyer at his father's firm of Arias Fabrega & Fabrega in 1990 and worked in the firm for ten years until 2000. He was subsequently an executive at the newspaper publishing group Editora Panamá America S.A., Panama, until 2009. In 2008 he became Director of Capital Bank, Panama. In 2009 he was appointed as Panama's ambassador to the United Kingdom of Great Britain and Northern Ireland, where he served until November 2011. He currently works as a consultant on climate change and sustainable development between Panama, Latin America, and the United Kingdom.
References
Category:Living people
Category:Panamanian lawyers
Category:Place of birth missing (living people)
Category:University of Virginia alumni
Category:Alumni of the University of Cambridge
Category:1964 births |
Jiangnan Daying
Jiangnan Daying (or the Jiangnan Battalion; first battalion: 1853–1856; second battalion: 1857–1860) was an army group assembled by the Qing dynasty. The army group consisted mostly of Green Standard Army troops, and its goal was to quell the Taiping Rebellion in the Jiangnan region. The army group twice encircled Nanjing, the capital of the Taiping Heavenly Kingdom, but was defeated by the Taiping forces on both occasions.
First Jiangnan DaYing
Time
1853–1856: when the armies of the Taiping Rebellion occupied Nanjing, Xiang Rong, in command of a 10,000-strong Green Standard Army, tracked them there within ten days and stationed his forces outside the Nanjing wall.
Headquarters
The headquarters of the Jiangnan DaYing were located in Ming Xiaoling Mausoleum.
Leaders
Imperial Commissioner: First Class Senior General Xiang Rong (向榮)
Military commander: Second Class Senior General Her Chyun
Lieutenant General: Zhang GuoLiang
Taiping Generals
Shi Dakai, Yang Xiuqing, Qin Rigang (秦日綱), Li Xiucheng
Strength
The 80,000 soldiers in the regular Army faced 460,000+ in the Taiping Rebellion militia force.
Outcome
On June 1, 1856, an army under Jeer Hungar (吉爾杭阿), the Governor of Jiangsu and Mayor of Nanjing, tried to stop the Taiping forces but was defeated, and his 7,800 soldiers were all killed.
A heated battle followed from June 16 to June 20, but the Qing army of 80,000 was defeated, and the surviving 36,000 followers of Xiang Rong retreated north. On August 9, Xiang Rong committed suicide in Danyang, but this campaign had nonetheless checked the Taiping forces' march north.
Second Jiangnan DaYing
Time
1858—1860
Headquarters
Leaders
Imperial Commissioner: First Class Senior General Her Chyun
Viceroy of Liangjiang: He Guiqing (何桂清) (escaped to Shanghai and was later executed by the Qing)
Military commander: Second Class Senior General Zhang GuoLiang (KIA early May 1860)
Governor of Zhejiang province 1st Luo Zundian (羅遵殿) (died early March 1860, forced suicide)
Governor of Zhejiang province 2nd Wang Youling (王有齡) (died early October 1861, forced suicide)
Governor of Jiangsu province Xu Youren (徐有壬) (KIA December 21, 1860)
Lieutenant General: Zhang YuLiang (張玉良) (KIA October 1861)
Lieutenant General: Zhou Tengso (周天受) (KIA middle of May 1860)
Lieutenant General: Wang Jung (王浚) (KIA late April 1860)
Victory of Taiping Generals
Li Xiucheng, Lai Wenguang, Tong Zonghai (童容海), Chen Yucheng, Yang Fuqing (楊輔清), Li Shixian, Liu Qeuling (劉瑲琳)
Strength
The regular Army had only 180,000 soldiers while the Taiping Rebellion militia force had at least 560,000 soldiers.
Outcome
Taiping Rebellion forces occupied Jiangsu in 1860. The next year, they occupied Zhejiang. The Jiangnan DaYing was destroyed. The Second Opium War took place and the Xianfeng Emperor died in 1861. The Xiang Army and Huai Army combined to become the Green Standard Army in 1862 and for the third time they surrounded and attacked Nanjing, successfully ending the civil war in July 1864.
Commentary
The Jiangnan DaYing had trouble making payroll for its forces, and these forces were insufficient to fight off the British and French forces in northern China.
The leaders intrigued against each other: Xiang Rong (向榮) and Her Chyun feuded in the first Jiangnan DaYing, and the He Guiqing and Zeng Guofan factions disputed the internal officials' system in the second, which allowed the Taiping Rebellion to gain momentum.
Her Chyun, though able to draw on his generals' work, belittled the Taiping Rebellion; his contempt, He Guiqing's cowardice, and Zeng Guofan's selfishness were the three reasons for their loss.
See also
Battle of Nanking
Second rout of the Jiangnan Army Group
Second Opium War
References
Draft History of Qing
Category:Military history of the Qing dynasty
Category:Military units and formations of the Qing Dynasty
Category:19th-century conflicts
Category:History of Nanjing
Category:Green Standard Army |
Shortly after being re-elected for his second term, President Obama told the nation that "our top priority has to be jobs and growth." As I've been arguing for some time now, to get serious about that, we must focus on the cities and local factors that actually fuel economic growth. Rana Foroohar put it succinctly back in September in TIME: post-election is the time to move away from the current "simplistic conversation about tax cuts versus spending" and focus on the different growth strategies and experiments that are working in many of our cities.
Productivity at the national level has stalled since the Great Recession and even before. Productivity growth was 1.9 percent in the third quarter and just 1.5 percent for the past year. A number of leading economists, led by George Mason University's Tyler Cowen, argue that the United States has in fact entered into a period of prolonged stagnation, having exhausted its capacity for innovation and productivity improvement.
A very different picture emerges when we consider the United States not just as a single national economy but as a collection of city and metro economies. Some have dramatic productivity growth, while others are stagnating.
To get at this, I turn to a simple metric — the Metro Productivity Index — developed by José Lobo of Arizona State University. It is a ratio that compares the level of economic output per person for metros to the gross domestic product (GDP) per person for the nation as a whole. It covers the period 2001 to 2010 and is based on data from the Department of Commerce's Bureau of Economic Analysis (BEA).
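The index itself is simple arithmetic: output per person in the metro divided by output per person nationally, with values above 1.0 meaning the metro outproduces the national average. A minimal sketch (the GDP and population figures below are illustrative placeholders, not BEA data):

```java
public class MetroProductivityIndex {
    // Ratio of metro output per person to national output per person;
    // a value above 1.0 means the metro outproduces the national average.
    static double index(double metroGdp, double metroPopulation,
                        double nationalGdp, double nationalPopulation) {
        return (metroGdp / metroPopulation) / (nationalGdp / nationalPopulation);
    }

    public static void main(String[] args) {
        // Illustrative numbers only: a metro of 1.9 million people producing
        // $165 billion, against a $15 trillion national economy of 310 million.
        double ratio = index(165e9, 1.9e6, 15e12, 310e6);
        System.out.printf("%.2f%n", ratio); // prints 1.79 for these inputs
    }
}
```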
Map courtesy of MPI's Zara Matheson
The map above by Zara Matheson of the Martin Prosperity Institute charts the Metro Productivity Index for all U.S. metros. The table below shows the 20 highest-ranking large metros (with a population of more than one million).
There is considerable geographic variation in the Metro Productivity Index. The most productive metros have a ratio of almost two to one; the least productive have a ratio of less than 0.4. Just six metros have productivity rates 50 percent or more above the national average, and 26 metros are 25 to 50 percent above it. This urban productivity advantage is concentrated in a relatively small number of metros: 85 metros have productivity rates above the national rate, while a whopping 281 fall below it.
Rank | Metro Area | Metro Productivity Ratio
1 | San Jose-Sunnyvale-Santa Clara, CA | 1.82
2 | San Francisco-Oakland-Fremont, CA | 1.62
3 | Washington-Arlington-Alexandria, DC-VA-MD-WV | 1.56
4 | Boston-Cambridge-Quincy, MA-NH | 1.42
5 | Houston-Sugar Land-Baytown, TX | 1.42
6 | Seattle-Tacoma-Bellevue, WA | 1.41
7 | Hartford-West Hartford-East Hartford, CT | 1.40
8 | New York-Northern New Jersey-Long Island, NY-NJ-PA | 1.38
9 | Denver-Aurora-Broomfield, CO | 1.35
10 | Minneapolis-St. Paul-Bloomington, MN-WI | 1.31
11 | Dallas-Fort Worth-Arlington, TX | 1.30
12 | New Orleans-Metairie-Kenner, LA | 1.26
13 | Indianapolis-Carmel, IN | 1.26
14 | Los Angeles-Long Beach-Santa Ana, CA | 1.21
15 | Chicago-Joliet-Naperville, IL-IN-WI | 1.20
16 | Philadelphia-Camden-Wilmington, PA-NJ-DE-MD | 1.20
17 | Atlanta-Sandy Springs-Marietta, GA | 1.19
18 | San Diego-Carlsbad-San Marcos, CA | 1.18
19 | Portland-Vancouver-Hillsboro, OR-WA | 1.16
20 | Milwaukee-Waukesha-West Allis, WI | 1.15
The top ranked metro is San Jose (Silicon Valley) with a ratio of 1.82. San Francisco is second with 1.62. The metros of the Bos-Wash corridor do well: Greater D.C. is third, Greater Boston fourth, Hartford seventh, and Greater New York eighth. Houston and Seattle are fifth and sixth. Denver and Minneapolis round out the top 10.
A number of smaller metros also perform well on this metric. Bridgeport, Connecticut (1.94); Casper, Wyoming (1.79); Sioux Falls, South Dakota (1.5); Midland, Texas (1.57); and Anchorage, Alaska (1.48) all outpace national productivity by a considerable margin.
The metros with the highest levels of consistent productivity growth over the past decade are those with high-tech, knowledge-based economies or strong energy economies.
On the flip side, the metros with the lowest ratios, less than 0.5, all come from three states: Arizona, Texas, and Florida. They include McAllen and Brownsville, Texas; Lake Havasu and Prescott, Arizona; and Punta Gorda and Palm Coast, Florida.
Productivity growth is the backbone of a healthy economy. There's much to gain from understanding the uneven geography of productivity and the kinds of metros that drive it. Brookings economist Alice Rivlin has long argued that the state and local level is the appropriate one for implementing policies for innovation, productivity improvement, and economic development. The diversity of our cities and metro areas is a veritable petri dish for discovering the key factors that can drive future U.S. growth.
As I wrote on this site last month, "It's time to recognize that the U.S. economy is not only made up of industries which grow and decline at different rates, but hundreds of metro regions that do so as well. There is a great deal national economic policy makers can gain from studying the factors that underpin the metros with more consistent and resilient growth."
If we want to restore growth and generate good jobs, America needs to move quickly away from stalled national economic strategies and toward the cities and metros that are showing us how to do it.
About the Author
Richard Florida is Co-founder and Editor at Large of CityLab.com and Senior Editor at The Atlantic. He is director of the Martin Prosperity Institute at the University of Toronto and Global Research Professor at NYU.
|
Q:
Casting an ImmutableList to List
Although my research was going in circles, it looks like Google Guava's ImmutableList implements List. So I can cast it up to a List, right? I know it will throw an error if one of the List's modification methods are used, but is that all that can go wrong?
public List<Integer> getList() {
return ImmutableList.of(1,2,3);
}
A:
If it implements List, there's no need to cast it to List. You can assign it to a List variable or pass it to a method that expects a List without casting.
And, yes, calling any of the methods that modify a List would throw an exception, but that would happen regardless of whether or not you cast it to List.
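To make that concrete: the widening happens implicitly, and a mutator call fails at runtime with UnsupportedOperationException whatever the static type. The sketch below uses the JDK's own immutable List.of so it runs without Guava on the classpath; Guava's ImmutableList.of(1, 2, 3) behaves the same way here.

```java
import java.util.List;

public class ImmutableListDemo {
    // Widening to List happens implicitly; no cast is required.
    // With Guava this body would be: return ImmutableList.of(1, 2, 3);
    static List<Integer> getList() {
        return List.of(1, 2, 3);
    }

    public static void main(String[] args) {
        List<Integer> list = getList();
        System.out.println(list.get(1)); // reads work normally: prints 2
        try {
            list.add(4); // any mutator throws, cast or no cast
        } catch (UnsupportedOperationException e) {
            System.out.println("modification rejected");
        }
    }
}
```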
|
It was a gamble, but polling suggested it might be working. His opponent, Lt. Gov. Ralph Northam, saw his lead over Gillespie erode over the past few weeks. Trump weighed in for Gillespie in a series of tweets and with automated Election Day phone calls encouraging turnout.
On Tuesday night, the bottom fell out. Largely on the strength of an unexpected surge in turnout, Northam won easily. Expected to prevail by a bit over three points, he’ll end up with a victory of at least twice that size. After a close loss in a Senate race in 2014, Gillespie lost again, this time by much more.
Trump wasted no time in distancing himself from Gillespie, enjoying the spaciousness of his now-280-character tweets.
This was not a wise tweet.
We’ve noted before that Trump has an insurance premium against any calls for his impeachment. His popularity with Republicans has slipped since the beginning of his presidency, but he’s still very popular with them, particularly more conservative members of his party. (Per Gallup, more than 9 in 10 conservative Republicans approve of Trump.) Because Republican Party primaries see an overrepresentation of conservatives, that meant that Republicans eager to win reelection to Congress were less likely to turn on the president.
What happened after those primaries, though, was anyone’s guess. Tuesday night offered some sense of what that might be.
Trump’s tweet distancing himself from Gillespie sugarcoats the election in a way that may make Trump feel better but probably isn’t fooling anyone on Capitol Hill. His claim that the GOP won four of four federal races misses a few important points. The first is that those races were in Republican-held districts. The second is that the Democrats saw big gains in most of those races relative to past elections. The third is that the figure is actually four of five; Trump likes to ignore a race in California won by the Democrats.
But it also does something very dangerous for Trump right now. It shows, yet again, that he isn’t loyal to his political partners.
We’ve seen this before. When Trump backed the House effort to repeal and replace Obamacare (having no plan of his own), he responded to its passage by declaring the bill to be “mean” — as though he hadn’t previously claimed it was nearly without flaws. (It was health care, not immigration, that was the big issue in Virginia, according to exit polls. Northam won among voters concerned about health care by a more than 3-to-1 margin.) Even before Election Day in Alabama earlier this year, Trump began to distance himself from his preferred candidate in the Republican Senate primary, Luther Strange, hinting that he had perhaps made a mistake — a shift that was certainly informed by polls showing a likely Strange loss. When that happened, Trump deleted some of his tweets of endorsement.
Strange’s campaign, unlike Gillespie’s, didn’t embrace Trumpist politics such as the threat of the gang MS-13. He tried to win as a more typical establishment conservative, to no avail. Gillespie tried to more directly embrace Trump politics — and lost badly. And then saw Trump turn on him.
Think of the message that Trump has sent to Republicans. Stand with him on policy and have him bad-mouth what you passed. Embrace his endorsement and see a loss followed by Trump playing down his support. Embrace his endorsement and his politics, and see a loss followed by actual criticism. These are all one-offs — but politics generally suffers from a small sample size from which to draw conclusions, and no one spends more time trying to draw conclusions than politicians.
What’s the upside? There is no race in which one can say that Trump helped the Republican win. In Georgia’s 6th District, for example, the Republican prevailed — but by about the same margin that the Republican candidates had enjoyed in the primary.
It’s not only Trump who was proved to be disloyal Tuesday night. Stephen K. Bannon told The Washington Post this week: “[I]t was the Trump-Stewart talking points that got Gillespie close and even maybe to victory. It was embracing Trump’s agenda as personified by Corey’s platform. This was not a competitive race four weeks ago. You could have stuck a fork in Gillespie.”
Four weeks ago, Gillespie trailed by about six points in the RealClearPolitics average. He lost by more than that. And after he lost, Bannon’s Breitbart, the site he manages, declared in a main headline that Gillespie was a “Republican swamp thing” who, it was implied, deserved to lose.
Bannon’s track record in electoral politics? He helped Trump lose the popular vote and win the electoral college in 2016. He embraced Luther Strange’s opponent after Strange was trailing. And now he watched the “Trump-Stewart talking points” lead nowhere.
On its home page, Breitbart also championed Trump’s argument that Gillespie should have embraced him more robustly. That’s a flawed theory. Trump is very unpopular in Virginia, and Northam won among those who disapprove of Trump by a 7-to-1 margin, according to preliminary exit polls. A third of voters said their vote in the race was meant to send a message of opposition to Trump — twice as many as said it was a message of support.
What’s more, Trump made his feelings clear. Those who wanted to vote for Trump’s candidate knew who that candidate was. As in Alabama, voters went in another direction.
It’s critical to remember that Democrats were supposed to win this race, albeit not necessarily by as wide a margin as they did. Democrats hold the governor’s mansion and Hillary Clinton won by five points last year. Trump could have congratulated Gillespie on a hard-fought race and noted the uphill battle. Instead, he decided to try to spin the loss to his advantage.
It’s unlikely that many Republicans worried about next November will be convinced by Trump’s argument. Instead, they’re likely to take another lesson: Trump can’t deliver a victory for you when you’re trailing, and neither can Trumpism. (In fact, there’s every reason to think that Trump was the liability that his poll numbers would suggest, with Gillespie doing fine in western Virginia but getting beaten badly in more-Democratic Northern Virginia.) Nor will Trump stand with you should things go south.
If, next summer, the question of Trump’s fate as president is raised, how might Republicans in center-right districts be expected to evaluate that decision? |
# Transfer-Feature
**Summary**:
This CLI Feature is for transferring security tokens to another account.
**How it works**
1. This feature command takes three inputs:
   i. Your Token Symbol
   ii. The ETH address you want to transfer tokens to
   iii. The amount of tokens you want to transfer
2. Tokens are then transferred once the command is run.
**How to Use this CLI Feature (Instructions):**
To start, run the following command while inputting your own specific information in the < > brackets:
```text
$ node CLI/polymath-cli transfer <tokenSymbol> <transferTo> <transferAmount>
```
**Example**
```text
$ node CLI/polymath-cli transfer LFV 0x23f95b881149018e3240a6c98d4ec3a111adc5df 10
```
```text
Token deployed at address 0xE447e88c37017550a9f85511cDaAEbC9529e845b.
---- Transaction executed: transfer - Gas limit provided: 67735 ----
Your transaction is being processed. Please wait...
TxHash: 0xbe78a11f1c7f4609f859036067202c94e71137082fa37983771bd4381f31e325
Congratulations! The transaction was successfully completed.
Gas used: 56446 - Gas spent: 0.0028223 Ether
Review it on Etherscan.
TxHash: 0xbe78a11f1c7f4609f859036067202c94e71137082fa37983771bd4381f31e325
Account 0x23f95b881149018E3240A6c98d4Ec3A111aDc5DF
transferred 10 tokens to account 0x23f95b881149018E3240A6c98d4Ec3A111aDc5DF
```
**Above, 10 tokens were transferred to a selected address from another address (for simplicity, the two addresses in the example are the same).**
**Troubleshooting / FAQs**
* If there is an error while processing the transfer transaction, the most probable cause is that one of the involved accounts is not on the whitelist or is under a lockup period.
|
The Rector of the Maritime Academy of Nigeria (MAN), Oron, Joshua Okpo, says the school will admit only 980 cadets for its new academic session, in order to maintain a manageable student population.
Okpo said that the decision was taken to enable the institution to maintain its standards both within and outside the country.
His words: “For example, out of about 7,000 who applied to be part of the institution in the next academic session, only 980 may be considered for admission.”
Okpo explained that the resolve was intended to boost quality training and to apply a policy thrust that makes things work. |
Q:
Forwarding to external resource using an Ingress controller
I have been using the GCLB Ingress Controller to forward outside traffic to my in-cluster services, and this has been working great so far.
But is there a way that, based on a route/path match, traffic could be forwarded to a resource outside the cluster? From the documentation, I can't seem to find anything, and I don't think it can be achieved using the GCLB Ingress Controller; but I haven't yet tried the NGINX Ingress Controller.
Is this a behavior that can be achieved using any of these 2 controllers? I would prefer using the native gcloud one, the GCLB but the other one works too.
A:
Hope this can help you: Kubernetes external services. A Service of type ExternalName (or a Service without a selector plus manual Endpoints) gives an in-cluster name for an outside resource, and an Ingress path can then point at that Service.
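With the NGINX Ingress Controller the usual pattern looks roughly like the sketch below (names and hosts are placeholders, not a tested manifest). The GCLB controller, by contrast, expects backends with in-cluster endpoints, so this generally won't work there:

```yaml
# Sketch only: names and hosts below are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: external-backend
spec:
  type: ExternalName
  externalName: api.example.com   # the outside-of-cluster resource
  ports:
  - port: 443
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mixed-routing
  annotations:
    # NGINX-specific: send the external host as the Host header upstream
    nginx.ingress.kubernetes.io/upstream-vhost: api.example.com
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /external
        pathType: Prefix
        backend:
          service:
            name: external-backend
            port:
              number: 443
```

Requests to `/external` are then proxied by the ingress to `api.example.com`, while your other paths keep routing to in-cluster Services.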
|
Sofitel Al Khobar The Corniche hotel
An exclusive hotel for high-flying cosmopolitans and lovers of the good life
The magnificent 5-star Sofitel Al Khobar the Corniche hotel is a triumph of contemporary architecture. The spectacular lobby features dazzling panoramic elevators overlooking the Arabian Gulf and the beautiful Corniche seaside promenade. Luxury is the keyword for this hotel located in the heart of the vibrant city of Al-Khobar with its thriving business district and convention centre. Free WIFI, amazingly modern décor and superb amenities: all are yours when you stay at the Sofitel Al Khobar.
Our rooms and accommodations
The 198 bedrooms and 31 suites have been created with your wellbeing in mind. The hotel's designer, Mr. Mark Young, blended contemporary style with the most up-to-date technology to create a serene cocoon with an impressive view over the Gulf.
LUXURY BALCONY ROOM, 1 Queen Bed, Balcony City View

FAMILY SUITE, Club Millésime Access, 1 Queen Bed and 2 Single Beds, Balcony City View
Up to 4 guests. From 79 m². Wireless internet in your room, high-speed internet, bathrobe, slippers, pillow menu, and more.

SUPERIOR BALCONY ROOM, 1 Queen Bed, Balcony City View
Up to 3 guests. From 39 m². Wireless internet in your room, high-speed internet, bathrobe, slippers, pillow menu, and more.
Our restaurants and bars
CAFÉ CHIC (Cuisine: International)
Lunch: 12:30-15:30 daily. Dinner: 19:00-23:00 daily.
Spectacular show cooking is at the heart of this chic restaurant in Al Khobar, where French gastronomy blends with Asian flavors. Choose buffet or à la carte, dining out on the terrace or inside overlooking Al Khobar's bright lights and the Arabian Gulf.
CHOCOLATE LOUNGE (Cuisine: Café)
Open 08:00-00:00 daily.
At the vibrant Chocolate Lounge you may indulge in a unique chocolate-tasting experience... or simply sink back in velvet armchairs and chat with friends or colleagues over an excellent latte, French pastry or a slice of tiramisu.
See all our restaurants
MANZANITA
Open daily, midday and evening.
This relaxed café-bar is ideal for a quick mid-afternoon snack, or a vitamin-packed shake after a spa treatment or workout. Juices, teas, and pastries tempt, so sit out on the terrace and drink in the city views. |
One-Eyed Cat: Heathenry / Slavic Paganism
Exploring the wider Eurasian influences on central and northern European religion, including Norse, Slavic, Celtic, Baltic, Siberian, Mediterranean and ancient Indo-European beliefs and applying them to contemporary practice.
A Visit from the Yule King
It's the season of mistletoe and holly, when bells are ring-jing-jing-a-ling and the year-round Northern outdoor signs that say, "Beware of Falling Ice" finally have meaning. The night is hushed in a way it only gets when there is a blanket of snow, on the eve before a holiday, when everything is closed. Snuggled in a hotel room in upstate New York, red and blue-foil snowflakes covering presents gleam out of the corner of my eye, while real ones slowly fall, dancing over the parking lot.
It's almost midnight. Drowsy with hot cider, lying on my husband's chest and listening to his heartbeat, there's nowhere else I'd rather be…
I feel, rather than hear, the Yule King's call at first: a pull like I'm standing in a river, and then his voice flows across my mind.
"Come, Sister…. Come."
He calls out to me tenderly, but with insistence, close but yet so far, a cold-hushed voice across the snows. But I am warm and happy in my love's arms; I do not seek wisdom tonight, nor adventure. I do not want to make this journey now. I tell the Yule King this, as gently as I can. And to my surprise… he tells me to come to him later—
* * *
Radiant like the returning sun, his armor gleams warmly against the dazzling brilliance of a snowy, moonlit hillside. No mortal ever wore such armor, so elegantly curved and shaped to his body, intricately fitted with enamel ornamenting a restive horse. Long hair spills over his shoulders, darker than the seasoned ash pole in his hand, the gray winter-bare trunk of Yggdrasil rising at his back. A holly crown rings his black hair, gleaming with stars. He holds Gungnir, his sacred spear, and a sword hangs naked in his other hand, but nothing in him speaks of violence— his merry eyes, the tenderness of his smile, radiates joy.
"Bless me," he asks.
I hesitate, at first, not knowing by which of His many names to name him. To bless a God is not something to be done lightly, and I am mortal.
So I reach up and put my hands on his broad shoulders and say the things that come to mind, thinking of how Skadhi once blessed me, and pray they are enough. Poetry does not flow from my tongue; the words do not come unbidden, as if remembered by my soul; they are carefully shaped, all that I can think that he would need:
'I bless you, Ingvi.
I bless you as King and Husband and Lover.
I bless you to lead Gods and wights and men
I bless you to succeed'
'I bless that your sword grant both life and death;
I bless you to be wise and just.'
'I bless you to be brave and loving. And to be loved in return.
I bless you with strength, courage and mercy,
And I bless you with my love, Ingvi:
Lord of Gods,
and wights,
and men…'
He hands me his sword to bless, and I kiss it on the flat, near the hilt, where the runes are deeply carved. I lean up to kiss him (for Freyr is very tall), throwing an arm around his neck, and kiss his armored chest— a kiss farewell, good luck before he leaves, but I know he will return.
He smiles and asks me to bless him one last time, hefting Gungnir and gazing up at its bronze tip, blazing like the rising sun.
I do, calling him Odin, for Odin he is as well, God of ecstasy and leader of hosts.
And then he bids me farewell, thanking me with a deep, satisfied sigh.
"Enjoy the morrow," he says (for it's still night, and the eve of the holiday, and he has been, this whole time, so elegantly formal in his half of this ancient ritual, and patient with me).
And in a shimmer of sunlight on a stream, of day on snow, of firelight winking on gold, he's gone.
* * *
I'm working on an illustrated book of journeys and encounters with Norse Gods and Goddesses during spae/seidhr.
This happened one year while I was traveling, when the state of New York was all covered in snow, when the shutting down of the outside world for the winter holidays (the hotel was nearly empty) and the quiet rhythm of the weather lent itself to trance. I've been meaning to paint what I saw for a long while, and finally finished this during Yule.
Rarely do we consider that Odin has not always been the Old Man… and might not always be. Linguistically and by symbolism, he is related to Lugh/Llew Llaw Gyffes, a Celtic God of bards, skilled crafts, and the shining heat of the sun, father of Cú Chulainn-- and a young warrior king bearing quite a similar spear.
I invite you to do your own research here, including the many names by which Odin is known in the Eddas and his frequent romantic and other exploits far more appropriate to a younger man than a grizzled, one-eyed, iron-haired warrior. The work of Norse and comparative mythology scholars such as Hilda Ellis Davidson and Jaan Puhvel helps.
Background on the symbolism of this painting can be found on my website. Cross-posted to staffandcup.com
Shirl Sazynski was trained by the Gods and has been practicing the Norse shamanic art of seidhr for over a decade. A wife of Odin, oracle, icon painter and author, her work has appeared in popular and pagan media outlets for the last fifteen years, including Witches and Pagans, Sacred Hoop, Idunna, Eternal Haunted Summer, Oak Leaves and books from Bibliotheca Alexandrina.
She teaches workshops on Norse spirituality and seidhr, and works as a professional shaman and oracle in Albuquerque, New Mexico, consulting the Gods at staffandcup.com. |
[Expression of IL-2 receptor in peripheral blood lymphocytes and bronchoalveolar lavage fluid (BAL) in patients with sarcoidosis].
Interleukin 2 (IL-2) and its receptor (IL-2R) play an important role in the lymphocytic alveolitis of sarcoidosis. The aim of this study was a comparative analysis of the expression of the IL-2 receptor subunits, the alpha chain (IL-2R alpha) and the beta chain (IL-2R beta), in peripheral blood and bronchoalveolar lavage (BAL) fluid of patients with sarcoidosis. We examined 18 patients with sarcoidosis (8 women and 10 men, aged 23 to 47 years) and 5 healthy controls (2 women and 3 men, aged 25 to 39 years). IL-2R expression was evaluated by immunophenotyping on an ORTHO Cytoron Absolute flow cytometer. Compared with the control group, the BAL fluid of sarcoidosis patients contained a statistically higher number of lymphocytes expressing IL-2R alpha and IL-2R beta (p < 0.002), while the percentage of these lymphocytes was similar in both groups. In the peripheral blood of sarcoidosis patients we noted a higher percentage of lymphocytes expressing IL-2R alpha and IL-2R beta than in controls (the difference was more evident for IL-2R beta), but without statistical significance. Comparing BAL fluid with peripheral blood within the patient group, we found a lower percentage of lymphocytes expressing IL-2R beta in BAL fluid (p < 0.02) and no significant differences in the percentage and number of IL-2R alpha+ lymphocytes. Our data on IL-2 and IL-2R suggest activation of T lymphocytes in the lung in sarcoidosis, though they reflect only a small part of the immunological reactions that take place in the disease. Future studies should include evaluation of other activation markers, including IL-2R gamma. |
You will be amazed at what a parrot-obsessed man did to himself
Bristol: A man from Bristol has undergone surgery to remove both ears in order to look more like his pet parrots. Ted Richards, who has already had his face and eyeballs tattooed to resemble his birds, had his ears removed in a six-hour operation.
He said, “I’ve done it because I want to look like my parrots as much as possible. I think it looks really great. I am so happy it’s unreal. I can’t stop looking in the mirror.”
A really mad obsession:
The 56-year-old is obsessed with his feathered pets Ellie, Teaka, Timneh, Jake and Bubi, and has had his face tattooed with colorful feathers in tribute. He also sports 110 tattoos, 50 piercings and a split tongue. He has even given his severed ears to a friend who “will appreciate them” and is now planning to find a surgeon prepared to turn his nose into a beak. |
openSUSE – Novell Open Audiohttp://www.novell.com/feeds/openaudio
Connecting Novell users with what's going on inside and around the Novell universe.Fri, 30 Mar 2012 15:34:51 +0000en-UShourly1https://wordpress.org/?v=4.6.12006-2007 openaudio@novell.com (Erin Quill)openaudio@novell.com (Erin Quill)1440http://www.novell.com/feeds/openaudio/openaudio/wp-content/uploads/OpenAudio-144x144.pngNovell Open Audiohttp://www.novell.com/feeds/openaudio
144144Connecting Novell users with what's going on inside and around the Novell universe.Novell, Virtualization, Identity, Management, Software, DesktopErin QuillErin Quillopenaudio@novell.comnonoopenSUSE 11.4http://www.novell.com/feeds/openaudio/?p=661
http://www.novell.com/feeds/openaudio/?p=661#respondWed, 18 May 2011 21:47:57 +0000http://www.novell.com/feeds/openaudio/?p=661http://www.novell.com/feeds/openaudio/?feed=rss2&p=66100:38:23The Open Audio team talks with Jos Poortvliet, the openSUSE community Manager, about the latest release of openSUSE, 11.4.The Open Audio team talks with Jos Poortvliet, the openSUSE community Manager, about the latest release of openSUSE, 11.4.Linux, openSUSE, Subjects, SUSEErin QuillnonoopenSUSE 11.1http://www.novell.com/feeds/openaudio/?p=218
http://www.novell.com/feeds/openaudio/?p=218#respondFri, 13 Feb 2009 21:43:10 +0000http://www.novell.com/feeds/openaudio/?p=218http://www.novell.com/feeds/openaudio/?feed=rss2&p=21800:24:50Erin Quill chats with Joe “Zonker” Brockmeier and Martin Lasarche about the updates and new features in openSUSE 11.1.Erin Quill chats with Joe “Zonker” Brockmeier and Martin Lasarche about the updates and new features in openSUSE 11.1.Linux, openSUSE, Subjects, SUSEErin QuillnonoopenSUSE 11.0 Release with Zonker and Martin Lasarschhttp://www.novell.com/feeds/openaudio/?p=203
http://www.novell.com/feeds/openaudio/?p=203#commentsThu, 19 Jun 2008 21:37:15 +0000http://www.novell.com/feeds/openaudio/?p=203http://www.novell.com/feeds/openaudio/?feed=rss2&p=20380:00:01Erin Quill interviews Joe ‘Zonker’ Brockmeier and Martin Lasarsch about the release of openSUSE 11.0. They discuss KDE 4, a quicker installer and package manager, and live CDs.Erin Quill interviews Joe ‘Zonker’ Brockmeier and Martin Lasarsch about the release of openSUSE 11.0. They discuss KDE 4, a quicker installer and package manager, and live CDs.Linux, openSUSE, SUSEErin QuillnonoMerging the openSUSE Forumshttp://www.novell.com/feeds/openaudio/?p=202
http://www.novell.com/feeds/openaudio/?p=202#respondWed, 11 Jun 2008 18:36:39 +0000http://www.novell.com/feeds/openaudio/?p=202http://www.novell.com/feeds/openaudio/?feed=rss2&p=20200:25:11This edition of open audio is hosted by Joe ‘Zonker’ Brockmeier, openSUSE Community Manager. Zonker talks to some of the team that brought together the merged openSUSE Forums, Wolfgang Koller, Keith Kastorff, Kim Groneman, and Rupert Hor[...]This edition of open audio is hosted by Joe ‘Zonker’ Brockmeier, openSUSE Community Manager. Zonker talks to some of the team that brought together the merged openSUSE Forums, Wolfgang Koller, Keith Kastorff, Kim Groneman, and Rupert Horstkötter.openSUSE, SUSEErin QuillnonoopenSUSE Community Leader Joe “Zonker†Brockmeier Joins the Open Audio Crewhttp://www.novell.com/feeds/openaudio/?p=198
http://www.novell.com/feeds/openaudio/?p=198#respondThu, 17 Apr 2008 21:36:39 +0000http://www.novell.com/feeds/openaudio/?p=198http://www.novell.com/feeds/openaudio/?feed=rss2&p=19800:11:34Dave and Erin get a chance to sit down and meet Zonker, Novell’s new openSUSE community leader, during BrainShare 2008.
Time: 11:34
MP3 Size: 4 MBDave and Erin get a chance to sit down and meet Zonker, Novell’s new openSUSE community leader, during BrainShare 2008.
Time: 11:34
MP3 Size: 4 MBopenSUSE, Segments, SUSEErin QuillnonoFixing Security Problems in Linuxhttp://www.novell.com/feeds/openaudio/?p=187
http://www.novell.com/feeds/openaudio/?p=187#commentsThu, 06 Dec 2007 23:21:20 +0000http://www.novell.com/feeds/openaudio/?p=187http://www.novell.com/feeds/openaudio/?feed=rss2&p=18710:21:22How does openSUSE handle the inevitable vulnerabilities in software, and how do we make sure we have the most secure Linux System? To finish out our openSUSE interviews, Erin sits down with Marcus, one of our security experts at the Nuremberg office[...]How does openSUSE handle the inevitable vulnerabilities in software, and how do we make sure we have the most secure Linux System? To finish out our openSUSE interviews, Erin sits down with Marcus, one of our security experts at the Nuremberg offices, to learn more about the processes that go into creating security patches.
Time: 21:22
MP3 Size: 7.4 MB
Segment Times
Security : 1:27 – 19:45
Links for this Episode:
OpenSuse.org
NOA Backstage:
* Help us guide the future of Novell Open Audio by leaving Feedback on this site or by emailing us at openaudio@novell.comDesktop, Linux, NetIQ, openSUSE, Segments, Server, SUSEErin QuillnonoAutoBuild Updatehttp://www.novell.com/feeds/openaudio/?p=186
http://www.novell.com/feeds/openaudio/?p=186#respondWed, 28 Nov 2007 22:29:49 +0000http://www.novell.com/feeds/openaudio/?p=186http://www.novell.com/feeds/openaudio/?feed=rss2&p=18600:25:22In August of 2006 we introduced you to the Autobuild project. Over the last year the project has seen quite a bit of growth. Erin was recently in the Nuremberg offices and was able to sit down with the brains behind the Autobuild project. We disc[...]In August of 2006 we introduced you to the Autobuild project. Over the last year the project has seen quite a bit of growth. Erin was recently in the Nuremberg offices and was able to sit down with the brains behind the Autobuild project. We discuss a bunch of updates and a little about how you can have your projects hosted on the service.
Time: 25:26
MP3 Size: 8.7 MB
Segment Times
AutoBuild : 3:55 – 23:57
Links for this Episode:
OpenSuse.org
Build Service
SLE10: AutoBuild and Quality Assurance (previous episode)
NOA Backstage:
* Help us guide the future of Novell Open Audio by leaving Feedback on this site or by emailing us at openaudio@novell.comDesktop, Linux, openSUSE, Server, SUSEErin QuillnonoCreating custom distributions based on openSUSE 10.3 and telephony with Asteriskhttp://www.novell.com/feeds/openaudio/?p=185
http://www.novell.com/feeds/openaudio/?p=185#commentsFri, 09 Nov 2007 21:14:51 +0000http://www.novell.com/feeds/openaudio/?p=185http://www.novell.com/feeds/openaudio/?feed=rss2&p=18510:00:01How do you create a custom distribution based on openSUSE 10.3 for your organization? Check out Kiwi—openSUSE’s complete operating system imaging solution. We also review telephony services on openSUSE and the Asterisk server.
Time: 42:1[...]How do you create a custom distribution based on openSUSE 10.3 for your organization? Check out Kiwi—openSUSE’s complete operating system imaging solution. We also review telephony services on openSUSE and the Asterisk server.
Time: 42:16
MP3 Size: 10.6 MB
Segment Times
Kiwi : 1:40 – 16:53
Asterick : 18:54 – 29:25
Links for this Episode:
Kiwi on openSuse.org
Live USB Stick
Asterisk.org
Asterisk 1 Click install on openSuse Build Service
NOA Backstage:
* Help us guide the future of Novell Open Audio by leaving Feedback on this site or by emailing us at openaudio@novell.comLinux, openSUSE, SUSEErin QuillnonoopenSUSE 10.3 Yast Improvements and the New 1-Click Installhttp://www.novell.com/feeds/openaudio/?p=184
http://www.novell.com/feeds/openaudio/?p=184#commentsThu, 01 Nov 2007 20:05:17 +0000http://www.novell.com/feeds/openaudio/?p=184http://www.novell.com/feeds/openaudio/?feed=rss2&p=18440:17:40This time on Novell Open Audio we take a look at the improvements made to YaST in openSUSE 10.3, as well as the magical 1-click install.Time: 17:40
MP3 Size: 6.92 MB
Segment Times
Yast Improvments : 1:18 – 7:40
1 Click Install : 9:18 – [...]This time on Novell Open Audio we take a look at the improvements made to YaST in openSUSE 10.3, as well as the magical 1-click install.Time: 17:40
MP3 Size: 6.92 MB
Segment Times
Yast Improvments : 1:18 – 7:40
1 Click Install : 9:18 – 15:43
Links for this Episode:
OpenSuse
Yast
1 Click Install
NOA Backstage:
Help us guide the future of Novell Open Audio by leaving Feedback on this site or by emailing us at openaudio@novell.com
Linux, openSUSE, SUSEErin QuillnonoopenSUSE 10.3 released!http://www.novell.com/feeds/openaudio/?p=180
http://www.novell.com/feeds/openaudio/?p=180#commentsThu, 04 Oct 2007 22:41:23 +0000http://www.novell.com/feeds/openaudio/?p=180http://www.novell.com/feeds/openaudio/?feed=rss2&p=18050:00:01We get a chance to sit down with Martin Lasarsch (evangelist for openSUSE) and get an overview of what is in openSUSE 10.3. We also get a chance to sit down and talk to no fewer than 10 people who help bring the openSUSE release to you. They give us[...]We get a chance to sit down with Martin Lasarsch (evangelist for openSUSE) and get an overview of what is in openSUSE 10.3. We also get a chance to sit down and talk to no fewer than 10 people who help bring the openSUSE release to you. They give us an idea of what goes into a release of openSUSE and give us some insight into things like how the numbering of our releases work.
Time: 60:54
MP3 Size: 20.9
Links for this Episode:
openSUSE
Moosy blog
Emusic
NOA Backstage:
Help us guide the future of Novell Open Audio by leaving Feedback on this site or by emailing us at openaudio@novell.com
Linux, openSUSE, Segments, SUSEErin QuillnonoLinuxWorld Update and Enabling Learning with Open Source and Novellhttp://www.novell.com/feeds/openaudio/?p=173
http://www.novell.com/feeds/openaudio/?p=173#commentsWed, 22 Aug 2007 20:14:36 +0000http://www.novell.com/feeds/openaudio/?p=173http://www.novell.com/feeds/openaudio/?feed=rss2&p=17340:00:01Today the guys have invited Guy Lunardi into the studio to give an update on the announcements Novell made at the recent LinuxWorld conference. Then we chat with Norm O’Neil from ESI, who tells us about the Indiana Access Program and explains [...]Today the guys have invited Guy Lunardi into the studio to give an update on the announcements Novell made at the recent LinuxWorld conference. Then we chat with Norm O’Neil from ESI, who tells us about the Indiana Access Program and explains how they rolled out SUSE Linux Enterprise to 20,000 students.
Time: 47:12
MP3 Size: 16.2 MB
Segment Times
Linux World Update: 1:50 – 18:48
Enableing Learining with Open Source: 19:50 – 45:03
Links for this Episode:
Linux World
Novell Press room
Imaging SLED
NUGI ‑www.NUGI.org
ESI Tech Advisors
Indiana Access Program
NOA Backstage:
Help us guide the future of Novell Open Audio by leaving Feedback on this site or by emailing us at openaudio@novell.com
Desktop, Linux, NetIQ, openSUSE, SUSEErin QuillnonoHack Week in review and a Preview of KDE 4.0http://www.novell.com/feeds/openaudio/?p=170
http://www.novell.com/feeds/openaudio/?p=170#commentsFri, 27 Jul 2007 22:58:43 +0000http://www.novell.com/feeds/openaudio/?p=170http://www.novell.com/feeds/openaudio/?feed=rss2&p=170100:00:01Join us in a comical review of openSUSE’s Hack Week. Erin, Randy, Dave, and even Mike contribute their two cents’ worth on the developments. Then Erin brings in a guest – Doc Hodges – to help interview Will Stephenson on wh[...]Join us in a comical review of openSUSE’s Hack Week. Erin, Randy, Dave, and even Mike contribute their two cents’ worth on the developments. Then Erin brings in a guest – Doc Hodges – to help interview Will Stephenson on what is happening with KDE 4.0.
Time: 1.18:36
MP3 Size: 27.1 MB
Segment Times
Hack Week in review: 1:50 – 38:44
KDE Preview: 39:50 – 1.17:50
Links for this Episode:
Main Hackweek page – ideas.opensuse.org
Scroll to the bottom and select a filter. I.E. select Done under Status to see projects that were completed
KDE
NOA Backstage:
Help us guide the future of Novell Open Audio by leaving Feedback on this site or by emailing us at openaudio@novell.com
Desktop, Linux, openSUSE, SUSEErin QuillnonoUsing Open Source at Novell, Daylight Saving Timehttp://www.novell.com/feeds/openaudio/?p=134
http://www.novell.com/feeds/openaudio/?p=134#commentsThu, 08 Feb 2007 21:13:06 +0000http://www.novell.com/feeds/openaudio/?p=134http://www.novell.com/feeds/openaudio/?feed=rss2&p=13420:00:01Overview
Ted and Erin interview the guys in Novell IS&T to find out how and why they use open source software for running Novell’s business. Then comes News from Support, where we delve into what the upcoming change in Daylight Saving Tim[...]Overview
Ted and Erin interview the guys in Novell IS&T to find out how and why they use open source software for running Novell’s business. Then comes News from Support, where we delve into what the upcoming change in Daylight Saving Time may mean to your organization.
Time: 30:38
Size: 21.0 MB
Segment Times
Using Open Source at Novell: 3:12-16:13
News from Support: 17:23-29:34
Links for this Episode:
Daylight Saving Time Technical Information Document
BrainShare 2007
NOA Backstage:
Erin was in training, so Caitlin cohosts today.
Quothe Randy Goddard, “Use ‘zdump -v $timezone | grep 2007’ where ‘$timezone’ is a valid timezone found in /usr/share/zoneinfo/. For example, ‘zdump -v US/Mountain | grep 2007’ or ‘zdump -v Asia/Singapore | grep 2007’.”
A post-apocalyptic Buddy Holly story as told by a B-movie Kurasowa. That’s the NOA Backstage riddle for the day.
AppArmor, Desktop, Linux, NetIQ, openSUSE, Segments, Server, SubjectsErin QuillnonoThe Linux Foundation with Jim Zemlinhttp://www.novell.com/feeds/openaudio/?p=127
http://www.novell.com/feeds/openaudio/?p=127#commentsMon, 22 Jan 2007 13:02:21 +0000http://www.novell.com/feeds/openaudio/?p=127http://www.novell.com/feeds/openaudio/?feed=rss2&p=12730:00:01 A merger between the Open Source Development Lab and the Free Standards Group yields a new organization to advance open source software and Linux: The Linux Foundation. Executive Director for the Linux Foundation Jim Zemlin reveals how the new org[...] A merger between the Open Source Development Lab and the Free Standards Group yields a new organization to advance open source software and Linux: The Linux Foundation. Executive Director for the Linux Foundation Jim Zemlin reveals how the new organization plans to carry forward the missions and goals of its two predecessor organizations. Then we briefly talk with Novell’s Markus Rex about what this merger might mean to Novell and SUSE Linux.
Time: 35:55
Size: 24.6 MB
Segment Times
Interview with Jim Zemlin: 1:10-24:10
Interview with Markus Rex (Novell): 25:10-34:20
Links for this Episode
Linux Foundation
Funny Little Men Behind the Curtain
Yes, you raise a valid point, dear listener: This edition really does have all the trappings of being a full NOA episode. Which would make it the first episode of the 2007 season. However, we’re still not quite ready to kick into full gear and start releasing weekly shows again.Desktop, Linux, NetIQ, openSUSE, Server, Subjects, SUSEErin QuillnonoTed and Erin Headed for Socal Linux Expohttp://www.novell.com/feeds/openaudio/?p=124
http://www.novell.com/feeds/openaudio/?p=124#respondFri, 12 Jan 2007 21:43:48 +0000http://www.novell.com/feeds/openaudio/?p=124http://www.novell.com/feeds/openaudio/?feed=rss2&p=12400:00:01Overview
Erin and Ted tell about what they will present at Socal Linux Expo in February: how to set up virtualized systems on a Linux host using Xen.
Time: 3:08
Size: 2.2 MB
Notes for this Episode:
Southern California Linux Expo website
Our policy [...]Overview
Erin and Ted tell about what they will present at Socal Linux Expo in February: how to set up virtualized systems on a Linux host using Xen.
Time: 3:08
Size: 2.2 MB
Notes for this Episode:
Southern California Linux Expo website
Our policy at Novell Open Audio World Headquarters is generally “No commercials, dangit!” But when it comes to promoting a known-cool community Linux event we’re happy to oblige.
Linux, openSUSE, Server, Subjects, SUSEErin QuillnonoopenSUSE 10.2 Overview with Martin Lasarschhttp://www.novell.com/feeds/openaudio/?p=114
http://www.novell.com/feeds/openaudio/?p=114#commentsThu, 14 Dec 2006 00:11:06 +0000http://www.novell.com/feeds/openaudio/?p=114http://www.novell.com/feeds/openaudio/?feed=rss2&p=11490:00:01Overview
In this edition, we get an update from openSUSE developer and community advocate Martin Lasarsch about what’s new in openSUSE 10.2.
Time: 31:40
Size: 21.7 MB
Segment Times
openSUSE: 3:08-29:47
Links for this Episode:
openSUSE Wik[...]Overview
In this edition, we get an update from openSUSE developer and community advocate Martin Lasarsch about what’s new in openSUSE 10.2.
Time: 31:40
Size: 21.7 MB
Segment Times
openSUSE: 3:08-29:47
Links for this Episode:
openSUSE Wiki
Download openSUSE 10.2
Funny Little Men Behind the Curtain:
With the news that ATI may soon release Xorg 7.2-compliant drivers for FireGL cards, Ted is now watching the ATI Linux driver RSS feed like a hawk. Nothing yet…
Linux, openSUSE, Subjects, SUSEErin QuillnonoSpecial Report: CEO Ron Hovsepian on Novell and Microsofthttp://www.novell.com/feeds/openaudio/?p=100
http://www.novell.com/feeds/openaudio/?p=100#commentsWed, 08 Nov 2006 23:51:52 +0000http://www.novell.com/feeds/openaudio/?p=100http://www.novell.com/feeds/openaudio/?feed=rss2&p=10010:00:01Overview
After announcing the establishment of an agreement between Novell and Microsoft, Novell CEO Ron Hovsepian and Novell General Counsel Joe LaSala respond to some questions from Novell’s customers and the free software community.
Time: 1[...]Overview
After announcing the establishment of an agreement between Novell and Microsoft, Novell CEO Ron Hovsepian and Novell General Counsel Joe LaSala respond to some questions from Novell’s customers and the free software community.
Time: 17:55
Size: 12.5 MB
Links for this Episode:
Novell and Microsoft – Additional details
Press Release: Details of Agreement with Microsoft (Nov 7)
FAQ: Novell Answers Community Questions (Nov 7)
Linux, openSUSE, Subjects, SUSEErin QuillnonoSLE10: Technical Learning Optionshttp://www.novell.com/feeds/openaudio/?p=86
http://www.novell.com/feeds/openaudio/?p=86#commentsFri, 03 Nov 2006 00:19:40 +0000http://www.novell.com/feeds/openaudio/?p=86http://www.novell.com/feeds/openaudio/?feed=rss2&p=8610:00:01Overview
After announcing the establishment of an agreement between Novell and Microsoft, Novell CEO Ron Hovsepian and Novell General Counsel Joe LaSala respond to some questions from Novell’s customers and the free software community.
Segment[...]Overview
After announcing the establishment of an agreement between Novell and Microsoft, Novell CEO Ron Hovsepian and Novell General Counsel Joe LaSala respond to some questions from Novell’s customers and the free software community.
Segment Times
News: SUSE Linux 10.1 Reloaded: 1:26
openSUSE wiki as Training: 2:37-5:05
Cool Solutions Interview: 6:08-14:30
Novell Training Interview: 15:14-34:14
Links for this Episode:
openSUSE wiki
Example: Setting up VMware on SUSE Linux
Cool Solutions
Tom “Stomfi” Russell’s Linux basics articles
Cool Solutions wiki: How-to for installing MediaWiki on SLES9
Novell Training Services
CBT (discounted 30%)
Free Training Options
Training locator tool
NOA Odeo Claim (odeo/7659cdeaca0e3658)

Going Upstream: SUSE Labs and GCC (GNU Compiler Collection)
Categories: Desktop, Linux, openSUSE, Server, Subjects, SUSE
Host: Erin Quill
Date: Wed, 01 Nov 2006
Link: http://www.novell.com/feeds/openaudio/?p=93
Overview
Michael Matz (SUSE team lead for the GNU Tool Chain) and Richard Guenther (GCC software engineer) explain their work with SUSE Labs. We learn how Novell helps fund advancements of GNU/Linux infrastructure outside of the standard SUSE Linux product development cycle. Also, we finally get a proper introduction to Erin Quill.
Segment Times
Intro to Erin Quill: 2:29-14:35
Interview on gcc: 14:40-23:31
Links for this Episode:
GNU Toolchain
GNU Compiler Collection
Reporting GCC Bugs
GCC home page
Free Software Foundation
SLE10: Going Mobile (a.k.a. “the Nuremberg Beer Garden Edition”)
Categories: Linux, openSUSE, SUSE
Host: Erin Quill
Date: Mon, 09 Oct 2006
Link: http://www.novell.com/feeds/openaudio/?p=79

SLE10: Linux Clients in Active Directory, News from Support
Host: Erin Quill
Date: Tue, 12 Sep 2006
Link: http://www.novell.com/feeds/openaudio/?p=72
Overview
Samba hacker Lars Mueller explains new capabilities that he and Guenther Deschner have been working on, allowing SUSE Linux Enterprise Desktop 10 to integrate into an Active Directory environment. From joining the Active Directory domain to initial login and Kerberos provisioning, this stuff is too cool. And Dave Mair and Randy Goddard are back for News from Support, so cue the bagpipes!
Resources and Links for this Show
en.opensuse.org/Samba
Samba.org
Lars Müller’s home page
Günther Deschner’s home page
News from Support
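For listeners who want to try the kind of Active Directory integration Lars describes, the essentials come down to a few smb.conf settings plus a domain join. A minimal sketch follows; the realm and domain names are placeholders for illustration, not values from the episode, and the exact steps on SLED 10 may differ (YaST automates much of this):

```shell
# /etc/samba/smb.conf — minimal settings for an AD member machine.
# EXAMPLE.COM / EXAMPLE are placeholder names; substitute your own domain.
#
# [global]
#     security = ads
#     realm = EXAMPLE.COM
#     workgroup = EXAMPLE

# Join the machine to the Active Directory domain
# (prompts for the domain administrator's password):
net ads join -U Administrator

# Verify Kerberos works by requesting a ticket for a domain user,
# then listing the ticket cache:
kinit someuser@EXAMPLE.COM
klist
```

These commands need a reachable domain controller and correct DNS/time configuration, so they are shown here only to illustrate the workflow discussed in the interview.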
SLE10: AutoBuild and Quality Assurance
Categories: Desktop, Linux, openSUSE, Subjects, SUSE
Host: Erin Quill
Date: Wed, 23 Aug 2006
Link: http://www.novell.com/feeds/openaudio/?p=68
Overview
How do SUSE engineers create an enterprise operating system? We learn how AutoBuild works from lead engineer Michael Schroeder and AutoBuild team leader Rudi Oertel. Then the leads of two SUSE quality assurance teams, Chris Hueller and Ollie Ries, explain how they standardize and automate SUSE Linux Enterprise testing processes.
Resources and Links for this Show
SUSE Linux Enterprise home
SAMBA with Jeremy Allison, Mad Penguin’s Adam Doxtater
Categories: Desktop, Linux, NetIQ, openSUSE, Server, Subjects, SUSE
Host: Erin Quill
Date: Fri, 02 Jun 2006
Link: http://www.novell.com/feeds/openaudio/?p=50
Overview
The legendary Jeremy Allison graces Novell Open Audio's studio to tell Erin and Ted about the SAMBA project and why he decided to join Novell. Adam Doxtater from madpenguin.org tells us why he is one of SUSE Linux's newest converts.

SUSE Linux 10.1 Unleashed
Categories: Linux, openSUSE, Subjects, SUSE
Host: Erin Quill
Date: Thu, 11 May 2006
Link: http://www.novell.com/feeds/openaudio/?p=46
Overview
Martin Lasarsch tells us about all the cool stuff in SUSE Linux 10.1.