Utility of sparse concentration sampling for citalopram in elderly clinical trial subjects.
The objective of this study was to evaluate whether the disposition of the selective serotonin reuptake inhibitor, citalopram, could be robustly captured using 1 to 2 concentration samples per subject in 106 patients participating in 2 clinical trials. Nonlinear mixed-effects modeling was used to evaluate the pharmacokinetic parameters describing citalopram's disposition. Both a previously established 2-compartment model and a de novo 1-compartment pharmacokinetic model were used. Covariates assessed were concomitant medications, race, sex, age (22-93 years), and weight. Covariates affecting disposition were assessed separately and then combined in a stepwise manner. Pharmacokinetic characteristics of citalopram were well captured using this sparse sampling design. Two covariates (age and weight) had a significant effect on the clearance and volume of distribution in both the 1- and 2-compartment pharmacokinetic models. Clearance decreased 0.23 L/h for every year of age and increased 0.14 L/h per kilogram body weight. It was concluded that hyper-sparse sampling designs are adequate to support population pharmacokinetic analysis in clinically treated populations. This is particularly valuable for populations such as the elderly, who are not typically available for pharmacokinetic studies.
Q:
Why does EC2 instance not start correctly after resizing root EBS volume?
I was using the instructions at https://matt.berther.io/2015/02/03/how-to-resize-aws-ec2-ebs-volumes/ and http://atodorov.org/blog/2014/02/07/aws-tip-shrinking-ebs-root-volume-size/ to move to an EBS volume with less disk space. In both cases, when I attach the shrunken EBS volume (as /dev/xvda or /dev/sda1; neither works) to an EC2 instance and start it, it stops on its own with the message
State transition reason
Client.InstanceInitiatedShutdown: Instance initiated shutdown
Some more tinkering and I found that the new volume did not have a BIOS boot partition. So I used gdisk to make one and copied the MBR from the original volume (the one that works and from which I can start instances) to the new volume. Now the instance does not terminate, but I am not able to ssh into the newly launched instance.
What might be the reason behind this? How can I get more information (from logs, the AWS Console, etc.) on why this is happening?
A:
To shrink a GPT-partitioned boot EBS volume below the 8 GB that standard images seem to use, you can do the following (a slight variation of the dd method from https://matt.berther.io/2015/02/03/how-to-resize-aws-ec2-ebs-volumes/):
The source disk is /dev/xvdf, the target is /dev/xvdg.
Shrink the source partition:
$ sudo e2fsck -f /dev/xvdf1
$ sudo resize2fs -M /dev/xvdf1
Will print something like
resize2fs 1.42.12 (29-Aug-2014)
Resizing the filesystem on /dev/xvdf1 to 257491 (4k) blocks.
The filesystem on /dev/xvdf1 is now 257491 (4k) blocks long.
I converted this to MB, i.e. 257491 * 4 / 1024 ~= 1006 MB
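The same conversion can be done with shell arithmetic (assuming the 4 KiB block size shown in the resize2fs output above; note that integer division truncates to 1005, which rounds to the ~1006 MB quoted):

```shell
# Convert the block count reported by `resize2fs -M` into MiB.
BLOCKS=257491                # 4k blocks, from the resize2fs output
MB=$(( BLOCKS * 4 / 1024 ))  # integer division truncates: 1005
echo "${MB} MiB"
```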
Copy the above size plus a bit more from device to device (!), not just partition to partition, because that range includes both the partition table and the data in the boot partition:
$ sudo dd if=/dev/xvdf of=/dev/xvdg bs=1M count=1100
now use gdisk to fix the GPT partition on the new disk
$ sudo gdisk /dev/xvdg
You'll be greeted with roughly
GPT fdisk (gdisk) version 0.8.10
Warning! Disk size is smaller than the main header indicates! Loading
secondary header from the last sector of the disk! You should use 'v' to
verify disk integrity, and perhaps options on the experts' menu to repair
the disk.
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.
Warning! One or more CRCs don't match. You should repair the disk!
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: damaged
****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
Command (? for help):
The following is the keyboard input within gdisk. To fix the problems, the data partition present in the copied partition table needs to be resized to fit on the new disk. This means it needs to be recreated smaller and its properties need to be set to match the old partition definition.
I didn't test whether relocating the backup table to the actual end of the disk is strictly required, but I did it anyway:
go to extra expert options: x
relocate backup data structures to the end of the disk: e
back to main menu: m
Now fix the partition size.
print and note some properties of partition 1 (and other non-boot partitions if they exist):
i
1
Will show something like
Partition GUID code: 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (Linux filesystem)
Partition unique GUID: DBA66894-D218-4D7E-A33E-A9EC9BF045DB
First sector: 4096 (at 2.0 MiB)
Last sector: 16777182 (at 8.0 GiB)
Partition size: 16773087 sectors (8.0 GiB)
Attribute flags: 0000000000000000
Partition name: 'Linux'
now delete
d
1
and recreate the partition
n
1
Enter the required parameters. All the defaults worked for me here (= press Enter); when in doubt, refer to the partition information from above
First sector = 4096
Last sector = whatever is the actual end of the new disk - take the default here
type = 8300 (Linux)
The new partition's default name did not match the old one, so change it to the original one:
c
1
Linux (see Partition name from above)
The next thing to change is the partition's unique GUID:
x
c
1
DBA66894-D218-4D7E-A33E-A9EC9BF045DB (see Partition unique GUID, not the partition guid code above that)
That should be it. Back to main menu & print state
m
i
1
Will now print
Partition GUID code: 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (Linux filesystem)
Partition unique GUID: DBA66894-D218-4D7E-A33E-A9EC9BF045DB
First sector: 4096 (at 2.0 MiB)
Last sector: 8388574 (at 4.0 GiB)
Partition size: 8384479 sectors (4.0 GiB)
Attribute flags: 0000000000000000
Partition name: 'Linux'
The only change should be the Partition size.
write to disk and exit
w
y
Grow the filesystem to fill the entire (smaller) disk; the first step shrunk it down to the minimal size it can fit in:
$ sudo resize2fs -p /dev/xvdg1
We're done. Detach volume & snapshot it.
Optional step: choosing the proper Kernel ID for the AMI.
If you are dealing with a paravirtual (PV) image and encounter the following mount error in the instance logs
Kernel panic - not syncing: VFS: Unable to mount root
when your instance doesn't pass its startup checks, you will probably need to perform this additional step.
The solution to this error is to choose the proper Kernel ID for your PV image during image creation from your snapshot.
The full list of Kernel IDs (AKIs) can be obtained here.
Be sure to choose the proper AKI for your image; AKIs are specific to region and architecture!
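The whole shrink procedure can be sketched as a dry-run script (device names and the headroom of 100 MiB are assumptions; it only prints the commands so you can review them before touching any real volume):

```shell
# Dry-run sketch of the shrink procedure above. Adjust SRC/TGT/BLOCKS
# to your setup; nothing here writes to a disk.
SRC=/dev/xvdf                # source (large) volume
TGT=/dev/xvdg                # target (smaller) volume
BLOCKS=257491                # 4k block count reported by `resize2fs -M`
MB=$(( BLOCKS * 4 / 1024 ))  # minimal filesystem size in MiB
COPY_MB=$(( MB + 100 ))      # headroom for partition table + boot data

echo "sudo e2fsck -f ${SRC}1"
echo "sudo resize2fs -M ${SRC}1"
echo "sudo dd if=$SRC of=$TGT bs=1M count=$COPY_MB"
echo "sudo gdisk $TGT    # fix the GPT and recreate the partition as shown above"
echo "sudo resize2fs -p ${TGT}1"
```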
/*
COPYRIGHT STATUS:
Dec 1st 2001, Fermi National Accelerator Laboratory (FNAL) documents and
software are sponsored by the U.S. Department of Energy under Contract No.
DE-AC02-76CH03000. Therefore, the U.S. Government retains a world-wide
non-exclusive, royalty-free license to publish or reproduce these documents
and software for U.S. Government purposes. All documents and software
available from this server are protected under the U.S. and Foreign
Copyright Laws, and FNAL reserves all rights.
Distribution of the software available from this server is free of
charge subject to the user following the terms of the Fermitools
Software Legal Information.
Redistribution and/or modification of the software shall be accompanied
by the Fermitools Software Legal Information (including the copyright
notice).
The user is asked to feed back problems, benefits, and/or suggestions
about the software to the Fermilab Software Providers.
Neither the name of Fermilab, the URA, nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
DISCLAIMER OF LIABILITY (BSD):
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL FERMILAB,
OR THE URA, OR THE U.S. DEPARTMENT of ENERGY, OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Liabilities of the Government:
This software is provided by URA, independent from its Prime Contract
with the U.S. Department of Energy. URA is acting independently from
the Government and in its own private capacity and is not acting on
behalf of the U.S. Government, nor as its contractor nor its agent.
Correspondingly, it is understood and agreed that the U.S. Government
has no connection to this software and in no manner whatsoever shall
be liable for nor assume any responsibility or obligation for any claim,
cost, or damages arising out of or resulting from the use of the software
available from this server.
Export Control:
All documents and software available from this server are subject to U.S.
export control laws. Anyone downloading information from this server is
obligated to secure any necessary Government licenses before exporting
documents or software obtained from this server.
*/
package org.dcache.ftp.proxy;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.io.PrintWriter;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.ClosedSelectorException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import static org.dcache.util.ByteUnit.KiB;
import static org.dcache.util.Strings.indentLines;
/**
* The ActiveAdapter relays data by accepting TCP connections and establishing
* a corresponding TCP connection to the specified endpoint. The adapter opens
* a server socket that listens for incoming connections. When a connection is
* established, a corresponding connection to the target endpoint is established.
* <p>
* Once established, any data received from either connection is relayed to the
* corresponding connection. This data is not parsed in any way and can contain
* arbitrary information.
* <p>
* When a remote party (on either side of the adapter) half-closes their
* connection (i.e., the adapter sees the connection's input is now closed), it
* will half-close the corresponding channel's output. Therefore, when both
* remote parties close their connection, both connections are completely closed.
*/
public class ActiveAdapter implements Runnable, ProxyAdapter
{
private static final Logger _log =
LoggerFactory.getLogger(ActiveAdapter.class);
/* After the transfer is completed we only expect the key for the
* server socket to be left.
*/
private static final int EXPECTED_KEY_SET_SIZE_WHEN_DONE = 1;
private ServerSocketChannel _ssc; // The ServerSocketChannel we will
// listen on...
private String _tgtHost; // The remote host to connect
private int _tgtPort; // The remote port to connect
private String _laddr; // Local IP address
private int _maxBlockSize = KiB.toBytes(32); // Size of the buffers for transfers
private int _expectedStreams = 1; // The number of streams expected
private Selector _selector;
private final LinkedList<SocketChannel> _pending = new LinkedList<>();
private String _error;
private Thread _t; // A thread driving the adapter
private boolean _closeForced;
private int _streamsCreated;
private final List<Tunnel> _tunnels = new ArrayList<>();
public ActiveAdapter(InetAddress internalAddress, String host, int port)
throws IOException
{
_tgtHost = host;
_tgtPort = port;
_ssc = ServerSocketChannel.open();
_ssc.configureBlocking(false);
_ssc.bind(new InetSocketAddress(internalAddress, 0));
_laddr = InetAddress.getLocalHost().getHostAddress(); // Find the
// address as a
// string
_t = new Thread(this);
// Create a new Selector for selecting
_selector = Selector.open();
}
@Override
public synchronized void close()
{
_closeForced = true;
if (_selector != null) {
_selector.wakeup();
}
if (!_t.isAlive()) {
/* Take care of the case where the adapter was never started.
*/
closeNow();
}
}
private synchronized void closeNow()
{
if (_ssc != null) {
try {
say("Closing " + _ssc.socket());
_ssc.close();
} catch (IOException e) {
esay("Failed to close server socket: " + e.getMessage());
}
_ssc = null;
}
if (_selector != null) {
for (SelectionKey key: _selector.keys()) {
if (key.isValid() && key.attachment() instanceof Tunnel) {
((Tunnel) key.attachment()).close();
}
}
try {
_selector.close();
} catch (IOException e) {
esay("Failed to close selector: " + e.getMessage());
}
_selector = null;
}
}
/**
* Returns whether the transfer is still in progress.
*/
private synchronized boolean isTransferInProgress()
throws IOException
{
if (_closeForced) {
return false;
}
if (_streamsCreated < _expectedStreams) {
return true;
}
/* We call selectNow to make sure that cancelled keys have
* been removed from the key set.
*/
_selector.selectNow();
return _selector.keys().size() > EXPECTED_KEY_SET_SIZE_WHEN_DONE;
}
/*
* (non-Javadoc)
*
* @see diskCacheV111.util.ProxyAdapter#getError()
*/
@Override
public String getError() {
return _error;
}
@Override
public InetSocketAddress getInternalAddress() {
return (InetSocketAddress) _ssc.socket().getLocalSocketAddress();
}
/*
* (non-Javadoc)
*
* @see diskCacheV111.util.ProxyAdapter#setMaxBlockSize(int)
*/
@Override
public void setMaxBlockSize(int size) {
_maxBlockSize = size;
}
protected void say(String s) {
_log.info("ActiveAdapter: {}", s);
}
protected void esay(String s) {
_log.error("ActiveAdapter: {}", s);
}
protected void esay(Throwable t) {
_log.error(t.getMessage(), t);
}
/*
* (non-Javadoc)
*
* @see diskCacheV111.util.ProxyAdapter#hasError()
*/
@Override
public boolean hasError() {
return _error != null;
}
/*
* (non-Javadoc)
*
* @see diskCacheV111.util.ProxyAdapter#isAlive()
*/
@Override
public boolean isAlive() {
return _t.isAlive();
}
/*
* (non-Javadoc)
*
* @see diskCacheV111.util.ProxyAdapter#join(long)
*/
@Override
public void join(long millis) throws InterruptedException {
_t.join(millis);
}
/*
* (non-Javadoc)
*
* @see diskCacheV111.util.ProxyAdapter#start()
*/
@Override
public void start() {
_t.start();
}
public String getLocalHost() {
return _laddr;
}
/**
*
*/
private class Tunnel {
//
private final SocketChannel _scs;
private final SocketChannel _sct;
// A pre-allocated buffer for data
private final ByteBuffer _sbuffer = ByteBuffer.allocate(_maxBlockSize);
private final ByteBuffer _tbuffer = ByteBuffer.allocate(_maxBlockSize);
/*
*
*/
Tunnel(SocketChannel scs, SocketChannel sct) {
_scs = scs;
_sct = sct;
}
/*
*
*/
public void register(Selector selector) throws ClosedChannelException {
//
if (_sct.isConnectionPending()) {
// Register the target channel with selector, listening for
// OP_CONNECT events
_sct.register(selector, SelectionKey.OP_CONNECT, this);
} else if (_sct.isConnected()) {
// Register the source channel with the selector, for reading
_scs.register(selector, SelectionKey.OP_READ, this);
// System.err.printf("Register %s%n", _scs);
// Register the target channel with selector, listening for
// OP_READ events
_sct.register(selector, SelectionKey.OP_READ, this);
}
// System.err.printf("Register %s%n", _sct);
}
/*
*
*/
public void close()
{
if (_selector != null) {
SelectionKey key;
key = _scs.keyFor(_selector);
if (key != null) {
key.cancel();
}
key = _sct.keyFor(_selector);
if (key != null) {
key.cancel();
}
}
try {
say("Closing " + _scs.socket());
_scs.close();
} catch (IOException ie) {
esay("Error closing channel " + _scs + ": " + ie);
}
try {
say("Closing " + _sct.socket());
_sct.close();
} catch (IOException ie) {
esay("Error closing channel " + _sct + ": " + ie);
}
}
/*
*
*/
public ByteBuffer getBuffer(SocketChannel sc) {
if (sc == _scs) {
return _sbuffer;
} else if (sc == _sct) {
return _tbuffer;
} else {
return null;
}
}
/*
*
*/
public SocketChannel getMate(SocketChannel sc) {
if (sc == _scs) {
return _sct;
} else if (sc == _sct) {
return _scs;
} else {
return null;
}
}
/*
*
*/
private void processInput(SocketChannel scs) throws IOException
{
SocketChannel sct = getMate(scs);
ByteBuffer b = getBuffer(scs);
b.clear();
int r = scs.read(b);
if (r < 0) {
scs.shutdownInput(); // Mark as shutdown
say("EOF on channel " + scs + ", shutting down output of " + sct);
sct.socket().shutdownOutput();
if (scs.socket().isOutputShutdown()) {
close();
}
} else if (r > 0) {
b.flip();
processOutput(sct);
} else {
SelectionKey key = scs.keyFor(_selector);
key.interestOps(key.interestOps() | SelectionKey.OP_READ);
}
}
/*
*
*/
private void processOutput(SocketChannel sct) throws IOException
{
SocketChannel scs = getMate(sct);
ByteBuffer b = getBuffer(scs);
sct.write(b);
if (b.hasRemaining()) {
// Register the output channel for OP_WRITE
SelectionKey key = sct.keyFor(_selector);
key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
// System.err.printf("has remaining: set OP_WRITE%n");
} else {
// Register the input channel for OP_READ
SelectionKey key = scs.keyFor(_selector);
key.interestOps(key.interestOps() | SelectionKey.OP_READ);
// System.err.printf("no remaining: set OP_READ%n");
}
}
public String toString() {
return _scs.socket().toString() + "<->" + _sct.socket().toString();
}
} // class Tunnel
/*
*
*/
@Override
public void run()
{
try {
// Create a new Selector for selecting
// _selector = Selector.open();
// Register the ServerSocketChannel, so we can listen for incoming
// connections
_ssc.register(_selector, SelectionKey.OP_ACCEPT);
say("Listening on port " + _ssc.socket().getLocalPort());
// Now process the events
while (isTransferInProgress()) {
// Watch for either an incoming connection, or incoming data on
// an existing connection
int num = _selector.select(5000);
// System.err.printf("select returned %d%n", num);
// if (num == 0) continue; // Just in case...
// Get the keys corresponding to the activity that has been
// detected, and process them one by one
Iterator<SelectionKey> selectedKeys = _selector.selectedKeys()
.iterator();
while (selectedKeys.hasNext()) {
// Get a key representing one of bits of I/O activity
SelectionKey key = selectedKeys.next();
selectedKeys.remove();
if (!key.isValid()) {
continue;
}
try {
processSelectionKey(key);
} catch (IOException e) {
// Handle error with channel and unregister
key.cancel();
esay("key processing error: " + e.getMessage());
}
}
// Process pending accepted sockets and add them to the selector
processPending();
}
} catch (ClosedSelectorException e) {
// Adapter was forcefully closed; not an error
} catch (IOException ie) {
esay(ie);
} finally {
closeNow();
}
}
/*
*
*/
private void accept(SelectionKey key) throws IOException {
ServerSocketChannel ssc = (ServerSocketChannel) key.channel();
SocketChannel sc = ssc.accept();
say("New connection: " + sc.socket());
addPending(sc);
}
/*
*
*/
private void finishConnection(SelectionKey key) throws IOException {
SocketChannel sc = (SocketChannel) key.channel();
Tunnel tnl = (Tunnel) key.attachment();
boolean success = sc.finishConnect();
if (success) {
say("New connection: " + sc.socket());
tnl.register(_selector);
} else {
// An error occurred; handle it
esay("Connection error: " + sc.socket());
tnl.close();
}
}
/*
*
*/
public void processSelectionKey(SelectionKey key) throws IOException {
// System.err.printf("key.readyOps()=%x%n", key.readyOps());
if (key.isValid() && key.isAcceptable()) {
// System.out.println("ACCEPT");
// It's an incoming connection, accept it
this.accept(key);
}
if (key.isValid() && key.isConnectable()) {
// System.out.println("CONNECT");
// Get channel with connection request
this.finishConnection(key);
}
if (key.isValid() && key.isReadable()) {
// Obtain the interest of the key
// int readyOps = key.readyOps();
// System.out.println("READ:"+readyOps);
// Disable the interest for the operation that is ready.
// This prevents the same event from being raised multiple times.
// key.interestOps(key.interestOps() & ~readyOps);
key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
// It's incoming data on a connection, process it
this.read(key);
}
if (key.isValid() && key.isWritable()) {
// Obtain the interest of the key
// int readyOps = key.readyOps();
// System.out.println("WRITE:"+readyOps);
// Disable the interest for the operation that is ready.
// This prevents the same event from being raised multiple times.
// key.interestOps(key.interestOps() & ~readyOps);
key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
// It's incoming data on a connection, process it
this.write(key);
}
}
/*
*
*/
private void read(SelectionKey key)
{
Tunnel tnl = null;
try {
// There is incoming data on a connection, process it
tnl = (Tunnel) key.attachment();
//
tnl.processInput((SocketChannel) key.channel());
} catch (IOException ie) {
esay("Communication error: " + ie.getMessage());
// On exception, remove this channel from the selector
tnl.close();
}
}
/*
*
*/
private void write(SelectionKey key)
{
Tunnel tnl = null;
try {
// There is outgoing data on a connection, process it
tnl = (Tunnel) key.attachment();
//
tnl.processOutput((SocketChannel) key.channel());
} catch (IOException ie) {
esay("Communication error: " + ie.getMessage());
// On exception, remove this channel from the selector
tnl.close();
}
}
/*
*
*/
void addPending(SocketChannel s) {
//
synchronized (_pending) {
// System.err.printf("addPending: add: %s%n", s);
_pending.add(s);
_pending.notify();
}
}
/*
* Process any targets in the pending list
*/
private void processPending() throws IOException {
synchronized (_pending) {
// System.err.printf("ProcessPending: pending.size=%d%n",
// _pending.size());
while (_pending.size() > 0) {
SocketChannel scs = _pending.removeFirst();
// Make it non-blocking, so we can use a selector on it.
scs.configureBlocking(false);
// System.err.printf("ProcessPending: got %s%n", scs);
try {
_streamsCreated++;
// Prepare the socket channel for the target
SocketChannel sct = createSocketChannel(_tgtHost, _tgtPort);
Tunnel tnl = new Tunnel(scs, sct);
_tunnels.add(tnl);
tnl.register(_selector);
} catch (IOException ie) {
// Something went wrong..........
esay(ie);
}
}
}
}
/*
* Creates a non-blocking socket channel for the specified host name and
* port and calls the connect() on the new channel before it is returned.
*/
public static SocketChannel createSocketChannel(String host, int port) throws IOException {
// Create a non-blocking socket channel
SocketChannel sc = SocketChannel.open();
sc.configureBlocking(false);
// Send a connection request to the server; this method is non-blocking
sc.connect(new InetSocketAddress(host, port));
return sc;
}
@Override
public String toString()
{
return "active -> " + _tgtHost + ":" + _tgtPort + "; " + _streamsCreated + " streams created";
}
@Override
public void getInfo(PrintWriter pw)
{
pw.println("Active adapter:");
pw.println(" Listening for pool on: " + _ssc.socket().getLocalSocketAddress());
pw.println(" Connecting to: " + _tgtHost + ":" + _tgtPort);
pw.println(" Streams: " + _streamsCreated);
pw.println(" Proxy status:");
ProxyPrinter proxy = new ProxyPrinter();
_tunnels.forEach(t -> proxy.client(t._sct.socket()).pool(t._scs.socket()).add());
pw.println(indentLines(" ", proxy.toString()));
}
}
Estradiol and progestins differentially modulate leukocyte infiltration after vascular injury.
Inflammation plays an important role in the response to endoluminal vascular injury. Estrogen (17beta-estradiol, E2) inhibits neointima formation in animal models, and the progestin medroxyprogesterone acetate (MPA) blocks this effect. This study tested the hypothesis that E2 inhibits the migration of inflammatory cells, particularly granulocytes, into the rat carotid arteries after acute endoluminal injury and that MPA blocks this effect. Ovariectomized rats were randomly divided into subgroups and treated with E2, MPA, E2+MPA, or vehicle and subjected to balloon injury of the right carotid artery. After 1, 3, or 7 days, rats were euthanized, and carotid arteries (injured and control) were analyzed for inflammatory cells by flow cytometry. At 1 day, granulocytes (HIS48+ and CD45+), monocyte/macrophages (Mar1+ and CD45+), and T lymphocytes (CD3+ and CD45+) were increased 26-fold, 12-fold, and 3-fold, respectively, in injured compared with contralateral control arteries of vehicle-treated rats. Granulocytes and monocyte/macrophages decreased markedly by 3 days. E2 reduced the granulocyte and monocyte/macrophage populations of injured vessels by approximately 50% and increased T lymphocytes. MPA had no independent effect on inflammatory cells but completely blocked the effect of E2. Immunohistochemical examination verified these findings and localized inflammatory cells to the adventitial and periadventitial domains of injured vessels. E2 may limit the neointimal response to endoluminal vascular injury, at least in part, by limiting leukocyte entry from adventitial/periadventitial tissues into injured vessels early in the injury response.
Friday, May 27, 2011
Here's an update to an interesting chart that shows the three major components of the Personal Consumption Deflator. The main attraction here is the huge divergence between the level of durable goods prices, which have fallen by about 25% since 1995, and the ongoing rise in the prices of services and nondurable goods, which have risen about 50%.
As I noted last month, there is a reason why durable goods prices began to decline (for the first time ever) in 1995: that was the year that China first started pegging its currency to the dollar (thus stabilizing and eventually strengthening it), which in turn set the foundation for China's strong export-led growth in the years to follow. Cheap Chinese imported goods have helped keep U.S. inflation low, while at the same time boosting U.S. standards of living. The services component of the deflator is a good proxy for wages, so the chart is telling us that an hour's worth of work today buys the typical worker a whole lot more in the way of durable goods than it did 15 years ago (actually about twice as much).
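A quick back-of-the-envelope check of the "about twice as much" claim (the index values below are hypothetical, with 1995 = 100; only the stated percentage changes come from the chart):

```python
# Services (a proxy for wages) rose ~50% since 1995; durable goods
# prices fell ~25%. How many durable goods does an hour of work buy
# now, relative to 1995?
services_1995, durables_1995 = 100.0, 100.0
services_now = services_1995 * 1.50   # +50%
durables_now = durables_1995 * 0.75   # -25%

ratio = (services_now / durables_now) / (services_1995 / durables_1995)
print(ratio)  # → 2.0, i.e. about twice as much
```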
{
"name": "opentelemetry-base",
"version": "0.11.0",
"description": "OpenTelemetry is a distributed tracing and stats collection framework.",
"main": "build/src/index.js",
"types": "build/src/index.d.ts",
"scripts": {
"bench": "node benchmark",
"clean": "lerna run clean",
"postinstall": "npm run bootstrap",
"precompile": "tsc --version",
"version:update": "lerna run version:update",
"compile": "lerna run compile",
"test": "lerna run test",
"test:browser": "lerna run test:browser",
"test:backcompat": "lerna run test:backcompat",
"bootstrap": "lerna bootstrap",
"bump": "lerna publish",
"codecov": "lerna run codecov",
"codecov:browser": "lerna run codecov:browser",
"changelog": "lerna-changelog",
"predocs-test": "npm run docs",
"docs-test": "lerna run docs-test",
"docs": "lerna run docs",
"docs-deploy": "gh-pages --dist packages/opentelemetry-api/docs/out",
"lint": "lerna run lint",
"lint:fix": "lerna run lint:fix",
"lint:examples": "eslint ./examples/**/*.js",
"lint:examples:fix": "eslint ./examples/**/*.js --fix",
"lint:markdown": "./node_modules/.bin/markdownlint $(git ls-files '*.md') -i ./CHANGELOG.md",
"lint:markdown:fix": "./node_modules/.bin/markdownlint $(git ls-files '*.md') -i ./CHANGELOG.md --fix"
},
"repository": "open-telemetry/opentelemetry-js",
"keywords": [
"opentelemetry",
"nodejs",
"profiling",
"metrics",
"stats"
],
"author": "OpenTelemetry Authors",
"license": "Apache-2.0",
"devDependencies": {
"@commitlint/cli": "11.0.0",
"@commitlint/config-conventional": "11.0.0",
"@typescript-eslint/eslint-plugin": "4.1.1",
"@typescript-eslint/parser": "4.1.1",
"beautify-benchmark": "0.2.4",
"benchmark": "2.1.4",
"eslint": "7.6.0",
"eslint-config-airbnb-base": "14.2.0",
"eslint-plugin-header": "3.0.0",
"eslint-plugin-import": "2.22.0",
"gh-pages": "3.1.0",
"gts": "2.0.2",
"husky": "4.2.5",
"lerna": "3.22.1",
"lerna-changelog": "1.0.1",
"markdownlint-cli": "0.23.2",
"typescript": "3.9.7"
},
"husky": {
"hooks": {
"commit-msg": "commitlint -E HUSKY_GIT_PARAMS"
}
}
}
Osagyefo Dr. Kwame Nkrumah Profile:
Detailed Biography
Father: Kofi Ngonloma of the Asona Clan
Mother: Elizabeth Nyanibah of the Anona Clan
Wife: Helena Ritz Fathia
Childhood Mentor: Dr. Kwegyir Aggrey (Assistant Vice Principal and the first African member of staff at the then Prince of Wales’ College at Achimota)
Education & Career Pattern: Nkrumah was first named Francis Nwia-Kofi (the latter name after a prominent family personality), but later changed his name to Kwame Nkrumah in 1945 in the UK. He was born on a Saturday.
Attended Elementary School at Half Assini, where his father worked as a goldsmith. A German Roman Catholic priest by the name of George Fischer significantly influenced his elementary school education
1930: Obtained Teacher's Certificate from the Prince of Wales’ College at Achimota (Formerly Government Training College, Accra)
1943: M.Sc. in Education, M.A. in Philosophy, and completed course work and the preliminary examination for a Ph.D. degree at the University of Pennsylvania, USA
1939 - 1945: Combined studies with part-time lectureship in Negro History. (During this period, he helped to found the African Studies Association and the African Students Association of America and Canada.)
1945 (May): Arrived in London with the aim of studying Law and completing a thesis for a Doctorate, but met George Padmore. The two, as Co-Political Secretaries, helped to organize the Sixth Pan-African Congress in Manchester, England. After the Congress, Nkrumah continued work for the de-colonization of Africa and became Vice-President of the West African Students Union. He was also leader of "The Circle", a secret organization dedicated to the unity and independence of West Africa in its struggle to create and maintain a Union of African Socialist Republics
1947: Wrote his first book, "Towards Colonial Freedom"
1947(December): Returned to Gold Coast and became General Secretary of United Gold Coast Convention (UGCC)
1948: Detained with Executive Members of UGCC known later as the "Big Six" following disturbances in the colony.
1948 (September): Established the "Accra Evening News", which appeared on the news-stands the same day that he was dismissed as General Secretary of the UGCC.
1949 (June): Formed Convention Peoples Party (CPP) with the Committee on Youth Organization (CYO).
1951 (February): Won the Accra Central seat while in prison, with 22,780 of the 23,122 ballots cast. He was released from prison later the same month to form a new Government.
1956: Won the elections leading to independence.
1957 (6 March): Declared Ghana's Independence
1958 (April): Convened a Conference of the existing independent African States (Ghana, Egypt, Sudan, Libya, Tunisia, Ethiopia, Morocco and Liberia). In December, he held the All-African Peoples Conference in Accra, the first Pan-African conference held on African soil. He took the first step towards African unification by signing an agreement with Sekou Toure to unite Ghana and Guinea.
1958: Married Helena Ritz Fathia, an Egyptian Copt and a relative of President Gamal Abdel Nasser of Egypt. Had three children with her - Gokeh, Sarmiah Yarba, and Sekou Ritz
1962 (August): Target of an assassination attempt at Kulungugu in the Northern Region of Ghana.
1963 (May): Nkrumah organized a conference of the 32 independent African States in Addis Ababa. The Organization of African Unity (OAU) was formed at this conference with the purpose of working for the Unity, Freedom and Prosperity of the people of Africa.
1964: Established Ghana as a One Party State with himself as Life President.
1965: Nkrumah published his book "Neo-Colonialism". In this book he showed how foreign companies and governments were enriching themselves at the expense of the African people. The book drew harsh protest from the US government, which consequently withdrew its economic aid of $35m previously earmarked for Ghana.
1966 (February 24th): Overthrown in a Military Coup d'etat while on a trip to Hanoi, North Vietnam. He left for Conakry, Guinea on being told of the overthrow. He lived in Conakry as Co-President of Guinea.
1972 (April 27th): Died of natural causes in Romania.
1972 (7 July): Buried in Ghana.
The Osagyefo, Dr. Kwame Nkrumah authored over 20 books and publications (see the list of his publications). He is a leading authority on the political theory and practice of Pan-Africanism. Dr. Kwame Nkrumah selflessly dedicated his life to showing how future sons and daughters of Africa should prepare themselves and strive to unify Africa and harness its wealth for the benefit of all descendants of the continent.
Today, the African continent is beset with poverty and misery even as it is endowed with an abundance of natural, climatic, strategic and human wealth.
When he studied in the United States he joined Phi Beta Sigma Fraternity, Inc., and was a member of the West Africa Chapter, Beta Upsilon Sigma, upon his return to Ghana.
Q:
SDL2 Can't create window since it couldn't find matching GLX visual
I have a problem. I am currently running an Ubuntu terminal on Windows 10, and I have Xming installed as my X server (I use Xming for qemu, etc.). I am trying to run this SDL2 program. I have this for main.cpp:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <SDL2/SDL.h>
#include <GL/gl.h>

int main(int argc, char *argv[])
{
    int final_status = 1;
    SDL_Window *window;
    SDL_GLContext openGL_context;

    if (SDL_Init(SDL_INIT_VIDEO)) {
        fprintf(stderr, "Unable to initialize SDL: %s\n",
                SDL_GetError());
        return 1;
    }
    window = SDL_CreateWindow("My Demo", SDL_WINDOWPOS_CENTERED,
                              SDL_WINDOWPOS_CENTERED, 640, 480,
                              SDL_WINDOW_OPENGL);
    if (!window) {
        fprintf(stderr, "Can't create window: %s\n", SDL_GetError());
        goto finished;
    }
    openGL_context = SDL_GL_CreateContext(window);
    if (!openGL_context) {
        fprintf(stderr, "Can't create openGL context: %s\n",
                SDL_GetError());
        goto close_window;
    }

    /* drawing code removed */

    final_status = 0;
    SDL_GL_DeleteContext(openGL_context);
close_window:
    SDL_DestroyWindow(window);
finished:
    SDL_Quit();
    fprintf(stdout, "done\n");
    fflush(stdout);
    return final_status;
}
And then when I run g++ main.cpp -lSDL2 and execute the program, I get this output:
Can't create window: Couldn't find matching GLX visual
done
I have tried to search for how to solve this GLX problem but can't seem to find a solution. Help would be greatly appreciated!
A:
Ensure that GLX is installed correctly by running glxinfo. At the bottom, you'll find the list of supported visuals. Here's mine:
1 GLX Visuals
visual x bf lv rg d st colorbuffer sr ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a F gb bf th cl r g b a ns b eat
----------------------------------------------------------------------------
0x022 24 tc 0 24 0 r y . 8 8 8 0 . . 0 16 0 0 0 0 0 0 0 None
Try running the following before running the SDL2 program:
export SDL_VIDEO_X11_VISUALID=
This causes SDL to go down a different code path to find the visual. You can also try hardcoding the visual to the visual id from glxinfo:
export SDL_VIDEO_X11_VISUALID=0x022
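If you'd rather not copy the id by hand, the first visual id can be pulled out of glxinfo's table with awk. This is just a convenience sketch; the sample variable below stands in for a live glxinfo call on your system:

```shell
# Grab the first visual id (a hex token at the start of a table row) from
# glxinfo-style output. On a live system, use: sample=$(glxinfo)
sample='visual  x   bf lv rg d st  colorbuffer sr ax dp st accumbuffer  ms  cav
  id  dep cl sp sz l  ci b ro  r  g  b  a F gb bf th cl  r  g  b  a ns b eat
----------------------------------------------------------------------------
0x022 24 tc  0  24  0 r  y .   8  8  8  0 .  .  0 16  0  0  0  0  0  0 0 None'

visual_id=$(printf '%s\n' "$sample" | awk '/^0x/ { print $1; exit }')
echo "$visual_id"    # -> 0x022
export SDL_VIDEO_X11_VISUALID="$visual_id"
```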
1. Field of the Invention
The present invention relates to a feeding device for use in feeding animals, preferably young pigs, and comprising a basis designed as a feeding plate, a frame fastened to the basis and supporting a tube, which is connected at its upper end to a feed reservoir and the lower part of which is arranged at a short distance over the feeding plate and in combination with the latter delimits an annular feed discharge opening having a height corresponding to the distance of the tube over the feeding plate, said tube being optionally mounted rigidly or movably on the frame.
The feeding device is particularly intended for use in feeding weaned young pigs having a weight of between 5 and 13 kg.
2. The Prior Art
Various feeding devices are known for feeding animals, particularly pigs. They have proven to be valuable in practice and have seen great commercial success. The known feeding device is disclosed in U.S. Pat. No. 5,447,119.
The known feeding device is based on the use of an elastically flexible feeding tube. Due to its elastic flexibility, the animals are able to apply their snouts to cause a strong movement in the transverse direction of the tube so that feed flows out of the opening of the tube and comes to lie on the feeding plate. The elastic yieldingness brings the tube back into its normal starting position when the animal does not activate the tube. The tube is suspended so that in itself it is able to perform a slight pendular movement. These slight pendular movements will be very limited since, otherwise, there would be a risk of free flow of feed and consequently a risk of feed waste.
The known feeding device has led to very great success in feeding pigs. It has turned out, however, that it is difficult to obtain quite satisfactory results with the feeding device for little pigs having a weight from 5 to 13 kg. These little pigs do not possess sufficient strength in their snouts to give the available flexible tubes a sufficiently yielding bend to allow a discharge of feed.
Although the tube may have a rather great length and consequently a fairly good chance of an elastically yielding bend, a feeding device having a shorter feeding tube will often be used for feeding young pigs, the reservoir being formed in the shape of a funnel-shaped container. In such a feeding device the length of the tube will be relatively short. Therefore, it will be difficult to obtain a sufficient flexibility for the feeding device to be suited for feeding young pigs.
Consequently, there has been a desire to raise the feeding tube over the feeding plate so that the height of the feed discharge opening is sufficient for the feed to flow onto the feeding plate by itself. However, this results in waste, since the feed flows beyond the lateral edge of the feeding plate and down into the water trough, from where the piglets are unable to eat it.
The present invention may also be used in combination with other types of feeding devices in which the feeding tube is a rigid tube which is either retained or which just has a possibility of slight pendular swings. In this situation the present invention will function as an alternative to other types of dosing arrangements of different mechanical structures. Such mechanical dosing arrangements are generally complicated and may frequently cause operation stoppages. Furthermore, the mechanical dosing arrangements will require a strong actuation. The strength in the snouts of young pigs has consequently turned out to be insufficient for such mechanical arrangements.
Thus, it is the object of the present invention to establish a new, technically simple feeding device which may be used in combination with existing feeding devices and, thereby, extend their field of use into the efficient feeding of small animals, preferably young pigs.
Q:
Gamecenter Leaderboard Done Button with Swift using SpriteKit not working
I am currently working on my first iOS Game using Swift. Unfortunately I am having more problems implementing Gamecenter than with anything else so far.
After figuring out how to make the leaderboard pop up and save high scores, I bumped into my next problem:
How can I make the "Done"-button work? If I press it nothing happens.
First of all my code:
GameViewController.swift:
import GameKit
....
override func viewWillLayoutSubviews() {
    let skView = self.view as SKView
    skView.ignoresSiblingOrder = true
    let scene = GameScene.sceneWithSize(skView.bounds.size)
    scene.scaleMode = .AspectFill
    skView.presentScene(scene)
    authenticateLocalPlayer()
}

func authenticateLocalPlayer() {
    var localPlayer = GKLocalPlayer()
    println(localPlayer)
    localPlayer.authenticateHandler = {(viewController, error) -> Void in
        if ((viewController) != nil) {
            self.presentViewController(viewController, animated: true, completion: nil)
        } else {
            println((GKLocalPlayer.localPlayer().authenticated))
        }
    }
}
GameScene.swift:
import GameKit
....
override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
    var touch: UITouch = touches.anyObject() as UITouch
    var location: CGPoint = touch.locationInNode(self)
    if gameCenterRect.contains(location) {
        var vc = self.view?.window?.rootViewController
        var gc = GKGameCenterViewController()
        vc?.presentViewController(gc, animated: true, completion: nil)
    }
}

func saveHighscore(score: Int) {
    NSUserDefaults.standardUserDefaults().setObject(score, forKey: "kHighscore")
    if GKLocalPlayer.localPlayer().authenticated {
        var scoreReporter = GKScore(leaderboardIdentifier: "LEADERBOARD_ID")
        scoreReporter.value = Int64(self.highscore)
        var scoreArray: [GKScore] = [scoreReporter]
        //println("report score \(scoreReporter)")
        GKScore.reportScores(scoreArray, {(error: NSError!) -> Void in
            if error != nil {
                println("error")
                //NSLog(error.localizedDescription)
            }
        })
    }
}
Where in my code do I have to add something to get back from the leaderboard?
A:
You have to add these lines to your game scene:
class GameScene: SKScene, SKPhysicsContactDelegate, UIGestureRecognizerDelegate, GKGameCenterControllerDelegate {

    func gameCenterViewControllerDidFinish(gameCenterViewController: GKGameCenterViewController!)
    {
        gameCenterViewController.dismissViewControllerAnimated(true, completion: nil)
    }
And fix this:
if gameCenterRect.contains(location) {
    var vc = self.view?.window?.rootViewController
    var gc = GKGameCenterViewController()
    gc.gameCenterDelegate = self
    vc?.presentViewController(gc, animated: true, completion: nil)
}
Pharmacare: Universal or tailored provincial top-up?
In February of this year, a commission led by former Ontario Health Minister Eric Hoskins was formed to study the cost of prescription drugs and the possibility of creating a national pharmacare strategy. Proponents of this system argue that the federal government has a larger role to play than just helping the provinces reduce costs by ensuring adequate and equitable drug coverage for every Canadian.
On the opposite side of the issue, opponents of a national pharmacare strategy are asking a number of questions that until now have remained unanswered. How many Canadians are insured, uninsured and under-insured for their prescription drugs? How will access to prescription drugs be affected by national pharmacare? And how much cost will be shifted from private insurance plans onto taxpayers under pharmacare? The fear is that pharmacare under federal jurisdiction might become a monopoly.
Currently, the Canadian system is the only one that has universal healthcare but not drug coverage across the board. According to the Wellesley Institute, Canadians pay some of the highest prices for prescription drugs across OECD countries. They also state that one in 10 Canadians doesn’t take a prescription medication as directed because of cost. What that means is that Canadians have access to free medical care but when it comes to getting essential medications to treat their medical conditions, they often have to pay out of pocket.
According to former Health Minister Jane Philpott, “there is a ton of savings down the road, if we make sure those chronic conditions are well treated so that further conditions don’t deteriorate to the point that people require expensive hospital care.”
Research has found that 25% of uninsured and 10% of insured Canadians can't pay for their prescription medications. For instance, a cancer patient suffering from a neuroendocrine tumour of the pancreas, metastasized to the liver and bones, had a treatment that cost upwards of $8,000 a month. Her insurance covered the medication but advised her that her premiums would go up steeply.
In Canada, there are 46 federal, provincial and territorial drug programs, each with its own administration and bureaucracy. Unfortunately, the fate of many patients is ultimately determined by where they live. From Manitoba west to British Columbia, all medically necessary prescriptions for oral cancer drugs are covered. But if they live in Ontario or eastward, patients must pay for oral cancer medications, while the cost of the same treatment delivered via IV in hospital is covered. Even worse, some patients decide to decline oral treatments altogether rather than bankrupt their families. And if you have health insurance, you rely on it to pick up the tab, which usually takes a few weeks, during which time you are likely very anxious.
Advances in medical technology and pharmacology have made many cancer treatments available in outpatient form – about 60% of therapies are take-home options, typically pills or self-injectables. They offer freedom from lengthy stays in hospital and less cost to the health care system. But unless you have private insurance from your employer or public insurance through your province, you’re on the hook for the bill. The average price for a month of oral cancer medication is about $6,000 – and treatment can last for a year or longer.
According to the Wellesley Institute, there are some common misconceptions about prescription drug coverage in Canada. One is that workers are covered by private insurance. For high income earners, 94% receive private insurance from their employers, but only 17% of low income workers do so. Likewise, women, young people and part-time workers are much less likely to be covered than other Canadians.
In 2016, a panel of Canadians convened and concluded that Canada is ready for a national, publicly funded pharmacare plan to go along with its universal health-care system. The panel recognized that the current patchwork of public and private drug plans leaves about 20% of Canadians with little or no coverage for prescription medications. It also found that Canadians should have access to prescription drugs free of charge in a federally funded system and that the government should act to ensure that it is equitable and cost-effective.
The panel added that there should be a new national formulary of prescription medicines covering the full range of individual patient treatment needs, including those for rare diseases. They acknowledged the difficult task ahead, and so they proposed that the government first implement a short list of commonly prescribed drugs – such as those for treating high blood pressure, cardiovascular disease, diabetes and asthma – and later develop a more comprehensive formulary. The panel also projected that there are huge savings to be realized by implementing such a plan, which could help offset the cost of the program. Estimates indicate that the savings would range between 4 and 10 billion dollars a year, since Ottawa could harness savings on the wholesale price of drugs that the provinces can't achieve individually.
Brett J. Skinner, the CEO of the Canadian Health Policy Institute, and his colleagues argue that "there are at least three reasons why Canadians should be skeptical about pharmacare." First, they say, pharmacare will shift $13.2 billion in private prescription-drug-related costs onto taxpayers, and as a federally funded program it would cost the federal purse $25.5 billion that is currently paid by the provincial and private sectors. In addition, employment losses and other indirect economic costs would add another $4.1 billion in the first year. The second point is that to avoid these costs, a national pharmacare strategy would have to cut coverage and reduce patient access to medicines. For instance, of 39 new drugs approved by Health Canada in 2012, 92% were covered by at least one private drug plan compared to only 28% covered by at least one public plan. For the new drugs that were covered, private drug plans took 143 days on average to approve coverage compared to 316 days for public drug plans. This suggests that a private system is much more efficient than a public one and that a pharmacare monopoly could force 24 million Canadians to accept the inferior coverage provided by public plans. Thirdly, they argue that the intrusion of the federal government into provincial jurisdiction is not necessary because the provinces already provide universal drug insurance coverage for catastrophic expenses.
It seems that both arguments in favour and against a federally funded pharmacare system are based on sound evidence, so the real question here is what do Canadians think they should do?
Sources:
"Panel calls on Ottawa to provide universal pharmacare plan for Canadians," Sheryl Ubelacker, The Canadian Press, December 6, 2016.
"Higher costs and inferior coverage. Why would Canadians even want pharmacare?," Brett J. Skinner, PhD, CEO of the Canadian Health Policy Institute, March 29, 2016.
Tickets available for UHMC’s Noble Chef fundraiser
October 17, 2013
KAHULUI - Maui Culinary Academy at the University of Hawaii Maui College, in partnership with the Fairmont Kea Lani Maui and Young's Market Company of Hawaii, will present the 17th annual Noble Chef benefit on Saturday, Oct. 26.
The event begins with a VIP reception at 5 p.m., followed by a full reception at 6 p.m. and elegant gourmet dinner with entertainment and auction at 7 p.m.
Tickets are $250 for preferred seating and $185 for general seating. Table sponsorships begin at $3,000.
This year’s celebrity “noble chefs” include Wes Holder of Pulehu, an Italian Grill.
Noble Chef, the academy's largest annual fundraiser, features a mentorship program that pairs Maui celebrity chefs with MCA students.
This year's theme is "The World on a Plate," represented by a celebrity chef-designed reception menu spanning the culinary "hot spots" of the world, including Japan, Vietnam, Morocco, Spain, Great Britain, India, Greece, New Zealand and Mexico.
After the reception, guests will sit down for a multi-course gourmet French dinner and dessert prepared by MCA's faculty chefs. They include Jake Belmonte, Tom Lelli, Craig Omori, Jeff Scheer, Theresa Shurilla and Christina Pafford.
Wine and spirits throughout the evening will be provided by Young's Market Company of Hawaii, Ocean Vodka and Maui Brewing Company.
In addition to preparing the evening's meals together, the celebrity chefs and the students they mentor spend several days beforehand in a learning exchange, providing invaluable hands-on experience for MCA students.
Introduction {#s1}
============
A phylogenetic tree of a group of species (taxa) describes the evolutionary relationship among the species. The study of phylogeny not only helps to identify the historical relationships among a group of organisms, but also supports some other biological research such as drug and vaccine design, protein structure prediction, multiple sequence alignment and so on [@pone.0104008-Linder1]. The ultimate goal of this research community is to infer the *Tree of Life*, the phylogeny of all living organisms on earth, provided that it exists.
Phylogenetic tree reconstruction by analyzing the molecular sequences of different species can be regarded as the *sequence-based* reconstruction of the phylogeny. Sequence-based phylogenetic methods are basically of three types [@pone.0104008-Linder1]: (a) distance-based methods, such as Neighbor Joining (NJ) [@pone.0104008-Saitou1], which has very fast practical performance; (b) heuristics for either Maximum-Likelihood (ML) [@pone.0104008-Felsenstein1] or Maximum-Parsimony (MP) [@pone.0104008-Fitch1], which are two NP-hard optimization problems; and (c) the Bayesian Markov Chain Monte Carlo (MCMC) method, which, instead of a single tree, produces a probability distribution of the trees or aspects of the evolutionary history. Sequence-based methods are generally highly accurate. However, these methods are computationally intensive. As a result, they can only be applied to small to moderately sized datasets if we want results with an acceptable level of accuracy within a moderate amount of time. For larger datasets (a few hundred taxa (species)), these methods may need several weeks or months to provide results with an acceptable level of accuracy [@pone.0104008-Linder1]. As the amount of molecular data accumulates exponentially with the continuous advancement of sequencing technologies, scientists are facing new computational challenges in analyzing this enormous amount of data. Therefore, we are forced to rely on *supertree* methods, where smaller trees on overlapping groups of species are combined together to get a single larger tree. Supertree-based tree construction is a two-phase method: in the first phase, many small trees on overlapping subsets of taxa are constructed using a sequence-based method; and in the second phase the small trees are summarized into a complete tree over the full set of taxa.
Supertree methods are considered to be the likely solutions towards assembling the *Tree of Life*. Hence, these methods have drawn potential research interest in recent years. Supertree methods have two major motivations: firstly, they give us the opportunity to achieve increased scalability and, secondly, they are better suited to combining the phylogenetic analyses of different types of data (e.g., molecular, morphological and gene-order data) or species groups. The careful design of supertree methods may allow us to work on very large (several hundred taxa) datasets more accurately and easily. The most widely used supertree method is called Matrix Representation with Parsimony (MRP) [@pone.0104008-Baum1], [@pone.0104008-Regan1]. MRP encodes all the small trees into a matrix using the characters 1, 0 and ?. Then it uses Maximum-Parsimony (MP) [@pone.0104008-Fitch1] to get a tree from the data matrix. MRP is considered to be the most reliable supertree method to date. But since it uses an NP-hard problem to analyze the data matrix, it is not efficient for large datasets.
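To make the encoding concrete, here is a minimal sketch of MRP's matrix representation (the helper `mrp_column` and the toy source trees are our own illustration, not taken from the cited papers): each internal edge of a source tree defines a bipartition of that tree's taxa, recorded as one matrix column with 1 for taxa on one side of the edge, 0 for the other side, and ? for taxa absent from that tree.

```python
def mrp_column(in_clade, tree_taxa, all_taxa):
    """One matrix column for one bipartition (edge) of one source tree:
    1 = taxon on the clade side, 0 = other side, ? = absent from the tree."""
    return {t: '?' if t not in tree_taxa
               else '1' if t in in_clade
               else '0'
            for t in all_taxa}

# Two toy source trees on overlapping taxa, one internal edge each:
# tree 1 on {A,B,C,D} groups {A,B}; tree 2 on {C,D,E} groups {D,E}.
all_taxa = ['A', 'B', 'C', 'D', 'E']
cols = [mrp_column({'A', 'B'}, {'A', 'B', 'C', 'D'}, all_taxa),
        mrp_column({'D', 'E'}, {'C', 'D', 'E'}, all_taxa)]

for t in all_taxa:
    print(t, ''.join(col[t] for col in cols))
# A 1?   B 1?   C 00   D 01   E ?1
```

MP is then run on the rows of this matrix as if they were sequence characters.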
*Quartet amalgamation* methods are supertree methods in which each of the small trees to be combined is a quartet, i.e., an unrooted tree having four taxa. The quartet is the most basic piece of unrooted phylogenetic information. Quartet-based phylogenetic inference has drawn significant attention from the research community, and numerous quartet-based methods have been developed over the last two decades. In this paper, we present a novel and highly accurate quartet amalgamation technique. We conduct an extensive experimental study that demonstrates the superiority of our algorithm over QMC [@pone.0104008-Snir1]--[@pone.0104008-Snir3], which is known as the best quartet amalgamation method to date.
With the increasing abundance of molecular data, constructing species trees from multilocus data has become the focus of attention. But combining data on multiple loci is not a trivial task due to gene tree discordance [@pone.0104008-Maddison1]--[@pone.0104008-Pamilo1]. The task is even more complicated with the striking recognition that the most probable rooted gene tree topology (under a coalescent model [@pone.0104008-Pamilo1]--[@pone.0104008-Harding1]) need not match the species tree topology [@pone.0104008-Degnan2], [@pone.0104008-Degnan3]. Such trees are termed *Anomalous gene trees* (AGTs). AGTs occur because not all tree topologies are equiprobable under the coalescent model [@pone.0104008-Harding1], [@pone.0104008-Brown1], [@pone.0104008-Steel1]. In fact, rooted AGTs exist for any species tree with five or more taxa. It has also been shown that rooted AGTs cannot occur with a three-taxon species tree or a symmetric four-taxon species tree [@pone.0104008-Degnan2]. AGTs have also been studied for unrooted gene trees, and it has been observed that for a species tree with four taxa, the most probable rooted gene tree topologies have the same unrooted topology as the species tree [@pone.0104008-Degnan4]. This observation indicates that the most frequently occurring unrooted quartet is a consistent estimate of the unrooted species tree [@pone.0104008-Degnan4]. Thus, quartet-based phylogeny can offer a sensible and statistically consistent approach to combining multilocus data, despite gene tree incongruence and AGTs [@pone.0104008-Larget1]. Hence a highly accurate quartet amalgamation approach will help in designing species tree estimation methods that are not susceptible to gene tree discordance and AGTs. Notably, as has already been mentioned above, the other important advantage of quartet-based methods is that efficiently designed inference algorithms can be scalable to very large datasets (several hundreds or thousands of taxa).
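The observation above suggests a simple estimator for each four-taxon subset: across the gene trees, take the most frequently observed unrooted quartet topology. A minimal sketch (the encoding of a topology as a pair of pairs is our own illustration):

```python
from collections import Counter

def topo(a, b, c, d):
    """Encode the unrooted quartet topology ab|cd as a hashable value."""
    return frozenset({frozenset({a, b}), frozenset({c, d})})

# Unrooted quartet topologies on {A, B, C, D} induced by five gene trees;
# two of the gene trees are discordant with the others.
gene_quartets = [
    topo('A', 'B', 'C', 'D'),  # AB|CD
    topo('A', 'B', 'C', 'D'),  # AB|CD
    topo('A', 'C', 'B', 'D'),  # AC|BD
    topo('A', 'B', 'C', 'D'),  # AB|CD
    topo('A', 'D', 'B', 'C'),  # AD|BC
]

dominant, count = Counter(gene_quartets).most_common(1)[0]
assert dominant == topo('A', 'B', 'C', 'D')  # AB|CD is the dominant topology
print(count)  # -> 3, i.e. AB|CD appears in 3 of the 5 gene trees
```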
Previous Works {#s1a}
--------------
Quartet-based phylogenetic tree reconstruction has been receiving extensive attention in the literature for more than two decades. Different approaches have been proposed and improved from time to time. Among these, the most prominent approaches are quartet puzzling (QP), quartet joining (QJ) and quartet max-cut (QMC).
Quartet puzzling (QP) [@pone.0104008-Strimmer1] infers the phylogeny of *n* sequences using a weighting mechanism. First, it computes the maximum-likelihood values for the three topologies on every 4 taxa and uses these values to compute the corresponding probabilities. Using these probabilities as weights, the puzzling step constructs a collection of trees over the *n* taxa. Finally it returns a consensus tree over the *n* taxa. TREE-PUZZLE [@pone.0104008-Schmidt1] is a widely used program package that implements QP. In 1997, Strimmer et al. [@pone.0104008-Strimmer2] extended the original QP algorithm by proposing three different weighting schemes, namely, continuous, binary and discrete. Later, in 2001, Ranwez and Gascuel [@pone.0104008-Ranwez1] proposed weight optimization (WO), an algorithm which is also based on weighted 4-trees inferred using the maximum likelihood approach. WO uses the continuous weighting scheme defined in [@pone.0104008-Strimmer2], and it searches for a tree on the *n* taxa such that the sum of the weights of the 4-trees induced by this tree is maximal [@pone.0104008-Ranwez1]. Unlike QP, WO constructs a single tree over the *n* taxa; hence no consensus step is required. Though the speed and accuracy of WO are better than those of QP, its accuracy is lower than that of methods based on evolutionary distances or maximum likelihood. Quartet joining (QJ) [@pone.0104008-Xin1] was introduced in 2007 to overcome the limitations of QP and WO in outperforming the distance-based methods. QJ provides a theoretical guarantee to generate the accurate tree if a complete set of consistent quartets is present. On average QJ outperforms QP and its performance is very close to that of NJ [@pone.0104008-Saitou1], but QJ outperforms NJ on quartet sets with a low quartet consistency rate [@pone.0104008-Xin1].
In 2008, Snir et al. [@pone.0104008-Snir1] proposed a new quartet-based method, *short quartet puzzling* (SQP). The experimental studies in [@pone.0104008-Snir1] show that SQP provides more accurate trees than QP, NJ and MP. It differs from the previous techniques in that it does not require all three topologies of the quartets on every 4 taxa. It is able to construct the output tree from a subset of all possible quartets as input. This is a two-phase technique: the first phase uses a randomized technique for selecting input quartets from all possible 4-trees (estimated using ML), and the second phase uses the Quartet Max Cut (QMC) [@pone.0104008-Snir1], [@pone.0104008-Snir2] technique for combining quartets into a single tree. The experimental study conducted by Swenson et al. [@pone.0104008-Swenson1] concludes that QMC performs better than the other supertree methods and MRP for smaller (100-taxon and 500-taxon) and high scaffold (i.e., high scaffold density) datasets. But MRP outperforms QMC and other supertree methods on larger and low scaffold (i.e., low scaffold density) datasets [@pone.0104008-Swenson1]. Subsequently, Snir and Rao presented a fast and scalable implementation of QMC [@pone.0104008-Snir3], where they reported the improvement of QMC over MRP in terms of accuracy and running time. Although MRP is the most widely used supertree method in practice, the studies of [@pone.0104008-Snir3], [@pone.0104008-Swenson1] suggest that QMC is so far the best quartet-based supertree method.
In this paper, we present a new quartet-based phylogeny reconstruction algorithm, *Quartet FM* (QFM), which uses a bipartition technique inspired by the famous Fiduccia and Mattheyses (FM) algorithm for bipartitioning a hypergraph while minimizing the cut size [@pone.0104008-Fiduccia1]. As will be reported later, QFM is highly accurate and scalable to large datasets (up to several hundreds of taxa). We demonstrate the accuracy of QFM by analyzing its performance on both simulated and biological datasets. We have compared our method on simulated datasets with Quartet MaxCut (QMC) [@pone.0104008-Snir1]--[@pone.0104008-Snir3], and show the superiority of our method over QMC in terms of the accuracy of the estimated trees. To show the potential of our method, we have also analyzed a real biological dataset containing species from four genera of birds (*Amytornis*, *Stipiturus*, *Malurus* and *Clytomias*). We present a qualitative analysis of our results on the real dataset based on the results of some rigorous previous studies on the same dataset.
Problem Definition {#s1b}
------------------
We address the problem of *Maximum Quartet Consistency* (MQC), which is a natural optimization problem. This problem takes a quartet set *Q* as input and finds a phylogenetic tree *T* such that the *maximum* number of quartets in *Q* become "consistent" with *T* (i.e., *T* "satisfies" the maximum number of quartets). Now we formally define the problem.
Problem 1 Maximum Quartet Consistency {#s1c}
-------------------------------------
***Input:*** *A multiset of quartets* *Q* *on a taxa set* *S*.
***Output:*** *A phylogenetic tree* *T* *on* *S* *such that* *T* *satisfies the maximum number of quartets of* *Q*.
The Maximum Quartet Consistency (MQC) problem is an NP-hard optimization problem [@pone.0104008-Steel2]. Both exact and heuristic approaches are available for the MQC problem in the literature [@pone.0104008-Morgado1]. The running time of an exact algorithm grows exponentially with the increase in the number of taxa, since the number of possible trees grows more than exponentially with the number of taxa [@pone.0104008-Hodkinson1]. So for larger datasets we have to resort to heuristic solutions. The focus of this work is on heuristic solutions for the MQC problem, as we aim to build phylogenetic trees for several hundreds of taxa.
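To illustrate why exact MQC is hopeless beyond tiny inputs, here is a brute-force sketch for five taxa only (the representation is our own: every binary unrooted tree on five taxa is determined by its two non-trivial splits, i.e., two disjoint 2-subsets of the taxa):

```python
from itertools import combinations

TAXA = 'ABCDE'

def all_trees():
    """Every binary unrooted tree on the five taxa, given as its pair of
    non-trivial splits (each split named by its 2-element side)."""
    for p in combinations(TAXA, 2):
        for q in combinations(TAXA, 2):
            if p < q and set(p).isdisjoint(q):
                yield (frozenset(p), frozenset(q))

def satisfies(tree, quartet):
    """Quartet ((a, b), (c, d)), meaning ab|cd, is consistent with the
    tree iff some split's 2-element side equals {a, b} or {c, d}."""
    (a, b), (c, d) = quartet
    return any(side == {a, b} or side == {c, d} for side in tree)

# An input multiset of quartets; no single tree can satisfy all four.
quartets = [(('A', 'B'), ('C', 'D')), (('A', 'B'), ('C', 'E')),
            (('A', 'C'), ('B', 'D')), (('D', 'E'), ('A', 'B'))]

best = max(all_trees(), key=lambda t: sum(satisfies(t, q) for q in quartets))
print(sum(satisfies(best, q) for q in quartets))  # -> 3, the optimum here
```

There are already 15 such trees for five taxa, and the number of binary unrooted trees grows as (2n-5)!! in general, which is why heuristics are unavoidable for hundreds of taxa.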
Results {#s2}
=======
We have conducted an extensive experimental study on both simulated and biological datasets. We have evaluated the accuracy of the trees estimated by QFM and compared the results to those of QMC [@pone.0104008-Snir3]. QMC is the most accurate quartet amalgamation method developed to date, and was shown to be more accurate than MRP [@pone.0104008-Snir3]. We report the RF (Robinson-Foulds) [@pone.0104008-Robinson1] rates of the estimated trees. The RF rate is the most widely used error metric; it is the ratio of the sum of the numbers of false positive and false negative edges to a factor 2(*n* - 3), where *n* is the number of taxa [@pone.0104008-Linder1]. The false positive (FP) and false negative (FN) edges are, respectively, the edges which are absent in the true tree but present in the estimated tree, and the edges which are present in the true tree but absent in the estimated tree.
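Concretely, with each tree represented by its set of non-trivial bipartitions, the RF rate can be computed as follows (a minimal sketch; the representation and function name are our own):

```python
def rf_rate(true_biparts, est_biparts, n):
    """RF rate: (false positives + false negatives) / (2 * (n - 3)),
    where each bipartition is a frozenset naming one side's taxa."""
    fp = len(est_biparts - true_biparts)   # in the estimate, not the true tree
    fn = len(true_biparts - est_biparts)   # in the true tree, not the estimate
    return (fp + fn) / (2 * (n - 3))

# 5 taxa -> a binary unrooted tree has n - 3 = 2 internal edges.
true_t = {frozenset('AB'), frozenset('DE')}
est_t = {frozenset('AB'), frozenset('CE')}
print(rf_rate(true_t, est_t, 5))  # -> 0.5 (one FP + one FN out of 4)
```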
Simulated Datasets {#s2a}
------------------
To investigate the performance of our method under various model conditions, we have generated quartet sets, sampled uniformly at random from model trees, by varying the number of taxa (n), the number of quartets (q) and the percentage of consistent quartets (p) with respect to the model tree (a consistency level of p% means that (100 − p)% of the quartets are flipped to disagree with the model tree). We have generated model species trees with 25, 50, 100, 200, 300, 400, and 500 taxa. To generate the model trees and the input quartet sets, we have used the tool developed and used in [@pone.0104008-Snir3]. The tool takes as input the number of taxa (n), the number of quartets (q) and the consistency level (p), and returns the quartet sets accordingly. For n = 25, 50 and 100, we have generated n^1.5, n^2 and n^2.8 quartets. We have not generated more quartets because n^2.8 quartets have been empirically shown to be enough to construct very accurate phylogenetic trees [@pone.0104008-Snir3]. Although n^1.5 is a small number, we have chosen this size to test the performance of both methods on a comparatively smaller number of quartets as well. For the 200-, 300-, 400- and 500-taxon model trees, we have generated datasets with n^1.5 and n^2 quartets. For each size (n, q), we have varied the percentage of consistent quartets (p) across four levels. Thus in total we have generated 68 model conditions. To test the statistical robustness, we have generated multiple replicates of data for each of these model conditions. For each model condition, we report the average RF rate over the replicates of data. We also report the standard error, given by σ/√N, where σ is the standard deviation and N is the number of datapoints (the number of replicates in our experiments). The standard errors are reported in Table S1 and Table S2 in [File S1](#pone.0104008.s001){ref-type="supplementary-material"}. We have used the Wilcoxon signed-rank test with α = 0.05 to test the statistical significance of the differences between QFM and QMC. The results of the Wilcoxon test (*p*-values) are reported in Table S3 in [File S1](#pone.0104008.s001){ref-type="supplementary-material"}.
Analyses on the Simulated Datasets {#s2b}
----------------------------------
We now present the results on the simulated datasets mentioned above. In each case, we have compared the average RF rates of the trees estimated by QFM and QMC. The results are summarized in [Table 1](#pone-0104008-t001){ref-type="table"}. [Figure 1](#pone-0104008-g001){ref-type="fig"} shows the bar charts comparing the values presented in [Table 1](#pone-0104008-t001){ref-type="table"}. The results in [Table 1](#pone-0104008-t001){ref-type="table"} are presented in batches for different values of n as follows. For n = 25, 50 and 100, we have three rows, one each for n^1.5, n^2 and n^2.8 quartets. For n = 200, 300, 400 and 500, we have two rows, one each for n^1.5 and n^2 quartets. The topmost row of each batch of [Table 1](#pone-0104008-t001){ref-type="table"} shows the results when q = n^1.5 (from left to right, the column pairs correspond to increasing consistency levels). For this (q = n^1.5) case, both QMC and QFM have performed poorly, which implies that n^1.5 quartets are quite insufficient for accurate phylogeny reconstruction. This can be attributed to the fact that n^1.5 is a very small number compared to the total number of possible quartets. However, as the consistency level (p) increases, QFM starts to produce better trees than QMC, and very often the improvements of QFM over QMC are statistically significant (see Table S3 in [File S1](#pone.0104008.s001){ref-type="supplementary-material"}). This is very promising in the sense that QFM can construct more accurate trees than QMC even with a very small number of quartets. The second row of each batch of [Table 1](#pone-0104008-t001){ref-type="table"} shows the results with n^2 quartets. With n^2 quartets, both QFM and QMC begin to produce better trees than with n^1.5 quartets. However, a quadratic number of quartets is still not sufficient for reconstructing an accurate tree (which confirms the observation of [@pone.0104008-Snir3]). But as before, QFM is statistically significantly better than QMC in most of the cases.
The bottom-most row of the first three batches in [Table 1](#pone-0104008-t001){ref-type="table"} shows the results with n^2.8 quartets. In this case, both QFM and QMC reconstruct highly accurate species trees (error rates are close to zero) even at the lowest consistency level.
{#pone-0104008-g001}
10.1371/journal.pone.0104008.t001
###### Comparison of QFM and QMC under various model conditions.
{#pone-0104008-t001-1}
Average RF rate

| n | q | QFM | QMC | QFM | QMC | QFM | QMC | QFM | QMC |
|-----|--------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| 25 | 125 | 0.882 | 0.881 | **0.739** | **0.772** | 0.577 | 0.577 | **0.458** | **0.468** |
| 25 | 625 | **0.272** | **0.308** | **0.155** | **0.178** | **0.073** | **0.101** | **0.051** | **0.062** |
| 25 | 8208 | 0 | **0.002** | 0 | **0.002** | 0 | 0 | 0 | 0 |
| 50 | 354 | 0.973 | 0.964 | **0.890** | **0.904** | 0.757 | 0.756 | **0.696** | **0.709** |
| 50 | 2500 | **0.400** | **0.426** | **0.289** | **0.344** | **0.171** | **0.184** | **0.161** | **0.174** |
| 50 | 57164 | **0.007** | **0.011** | 0 | 0 | 0 | 0 | 0 | 0 |
| 100 | 1000 | **0.991** | **0.993** | **0.921** | **0.937** | **0.862** | **0.866** | **0.806** | **0.822** |
| 100 | 10000 | **0.551** | **0.597** | **0.433** | **0.454** | **0.350** | **0.365** | **0.293** | **0.308** |
| 100 | 398108 | **0.009** | **0.010** | **0.003** | **0.004** | 0.001 | 0.001 | **0** | **0.001** |
| 200 | 2829 | 0.997 | 0.994 | **0.955** | **0.963** | **0.909** | **0.934** | **0.887** | **0.901** |
| 200 | 40000 | **0.695** | **0.720** | **0.585** | **0.608** | **0.488** | **0.514** | **0.450** | **0.471** |
| 300 | 5197 | 0.996 | 0.996 | **0.965** | **0.972** | **0.921** | **0.949** | **0.907** | **0.930** |
| 300 | 90000 | **0.752** | **0.766** | **0.655** | **0.676** | **0.561** | **0.583** | **0.526** | **0.535** |
| 400 | 8000 | **0.993** | **0.996** | **0.963** | **0.977** | **0.926** | **0.952** | **0.923** | **0.941** |
| 400 | 160000 | **0.786** | **0.804** | **0.707** | **0.731** | **0.624** | **0.634** | **0.590** | **0.601** |
| 500 | 11181 | 0.994 | 0.993 | **0.967** | **0.978** | **0.938** | **0.962** | **0.926** | **0.950** |
| 500 | 250000 | **0.813** | **0.832** | **0.736** | **0.762** | **0.663** | **0.684** | **0.616** | **0.636** |
Average RF rates of QFM and QMC over the replicates of data under various model conditions. We varied the number of taxa (n), the number of quartets (q), and the percentage of consistent quartets (p); the (QFM, QMC) column pairs are ordered by increasing consistency level from left to right. Results are shown in bold face where QFM is better than QMC.
From these results, it is clear that QFM either matches the accuracy of QMC or (in most cases) produces better trees than QMC. QFM outperforms QMC in 55 cases out of the 68 model conditions shown in [Table 1](#pone-0104008-t001){ref-type="table"}, and in many of these cases the differences are statistically significant (see Table S3 in [File S1](#pone.0104008.s001){ref-type="supplementary-material"}). QMC is better than QFM in only 5 cases, but the differences between the two methods are not statistically significant. For the remaining 8 cases, both QFM and QMC have equal error rates (these are mostly the datasets with n^2.8 quartets, where both of them have been able to reconstruct the true trees).
We have also evaluated QFM and QMC on the noise-free model conditions, meaning that all the quartets are accurate (p = 100%). [Table 2](#pone-0104008-t002){ref-type="table"} demonstrates the results under the parameter combinations (n, q) with p = 100%. Of the 17 model conditions analyzed, QFM has been found to be better than QMC in 10 cases, and several of the improvements are statistically significant (see Table S3 in [File S1](#pone.0104008.s001){ref-type="supplementary-material"}). QMC is better than QFM in two cases, but the differences are not statistically significant. In 5 cases QFM and QMC have identical accuracy.
10.1371/journal.pone.0104008.t002
###### Comparison of QFM and QMC under the noise-free model conditions.
{#pone-0104008-t002-2}
Average RF rate

| n | q | QFM | QMC |
|-----|--------|-----------|-----------|
| 25 | 125 | **0.444** | **0.515** |
| 25 | 625 | 0.056 | 0.052 |
| 25 | 8208 | 0 | 0 |
| 50 | 354 | **0.661** | **0.666** |
| 50 | 2500 | 0.140 | 0.140 |
| 50 | 57164 | 0 | 0 |
| 100 | 1000 | **0.777** | **0.797** |
| 100 | 10000 | **0.269** | **0.274** |
| 100 | 398108 | 0 | 0 |
| 200 | 2829 | **0.848** | **0.881** |
| 200 | 40000 | 0.424 | 0.424 |
| 300 | 5197 | **0.887** | **0.907** |
| 300 | 90000 | 0.506 | 0.499 |
| 400 | 8000 | **0.897** | **0.930** |
| 400 | 160000 | **0.554** | **0.555** |
| 500 | 11181 | **0.903** | **0.937** |
| 500 | 250000 | **0.590** | **0.606** |
Average RF rates of QFM and QMC over the replicates of data under the noise-free model conditions (p = 100%). We varied the number of taxa (n) and the number of quartets (q). Results are shown in bold face where QFM is better than QMC.
Computational Issues {#s2c}
--------------------
We have evaluated the running time and memory usage of QFM and QMC. On smaller datasets, both QFM and QMC run in a few seconds; for example, on the 25-taxon datasets, QFM finished within seconds (depending on the number of quartets), as did QMC. Both methods are very fast on the datasets with up to 100 taxa, even with n^2.8 quartets: QFM took a few minutes while QMC completed in a few seconds. However, QFM is much slower than QMC on the larger datasets. For example, QFM ran for hours on the largest datasets of our experiment, with 500 taxa and 250,000 quartets, while QMC took only one minute. We believe that this difference is due to the naive implementation of our algorithm. QMC has been implemented very efficiently, and it scales well on larger datasets. We are currently working on improving our implementation using advanced data structures. We are also parallelizing our divide and conquer based approach.
We have also measured the memory usage of these methods. Both QFM and QMC are memory efficient and use only a few megabytes of memory, even on the largest datasets with 500 taxa and 250,000 quartets.
Analyses on the Avian Biological Dataset (Australo-Papuan Fairy-wrens) {#s2d}
----------------------------------------------------------------------
We have further evaluated the performance of QFM on a real avian biological dataset consisting of 25 birds. Since avian phylogeny is considered hard to reconstruct, we have chosen this dataset as a good representative of real datasets. This dataset consists of 18 gene trees on 25 species representing 4 genera of birds (*Amytornis*, *Stipiturus*, *Malurus* and *Clytomias*) from the Australo-Papuan avian family Maluridae, obtained from TreeBASE [@pone.0104008-Sanderson1]. This dataset was originally used to study the efficacy of species tree methods at the family level in birds, using the Australo-Papuan Fairy-wrens (Passeriformes: Maluridae) clade [@pone.0104008-Lee1]. Due to the presence of a substantial amount of incomplete lineage sorting (ILS) [@pone.0104008-Lee1], analyzing this family of birds is quite challenging.
We have decomposed every gene tree into its induced quartets, which are called *embedded quartets* [@pone.0104008-Snir3], [@pone.0104008-Zhaxybayeva1]. Then we have taken the union of all these quartets (multiple copies of a quartet have been retained). In this way we get 227,700 quartets. We have used these quartets to estimate a species tree using our method (QFM). We also ran QMC on this dataset. Both QFM and QMC returned the same tree, which is shown in [Figure 2](#pone-0104008-g002){ref-type="fig"}.
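The decomposition of a gene tree into its embedded quartets can be sketched as follows, assuming the tree is given as a set of canonical splits (one side of each bipartition): for each 4-taxon subset, any split that places exactly two of the four taxa on one side determines the induced quartet. With 18 binary gene trees on 25 taxa this yields 18 × C(25, 4) = 227,700 quartets. The representation and function name below are our own.

```python
from itertools import combinations

def embedded_quartets(splits, taxa):
    """All quartets induced by a tree given as a set of splits (frozensets)."""
    quartets = []
    for four in combinations(sorted(taxa), 4):
        for side in splits:
            inside = frozenset(t for t in four if t in side)
            if len(inside) == 2:
                # the split separates these two taxa from the other two
                outside = frozenset(set(four) - inside)
                quartets.append(frozenset({inside, outside}))
                break
    return quartets

# Caterpillar tree on a-e, encoded by its two internal splits.
splits = {frozenset({"a", "b"}), frozenset({"a", "b", "c"})}
qs = embedded_quartets(splits, {"a", "b", "c", "d", "e"})
print(len(qs))  # all C(5,4) = 5 four-taxon subsets are resolved
```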
![The 25 species avian phylogeny, representing 4 genera of birds from Maluridae family, estimated by QFM using the 227,700 embedded quartets in 18 gene trees.\
The evolutionary relationships maintained by this tree are supported by the findings of the previous studies [@pone.0104008-Lee1], [@pone.0104008-Christidis1], [@pone.0104008-Christidis2], [@pone.0104008-Donnellan1].](pone.0104008.g002){#pone-0104008-g002}
Since we do not know the true trees for biological datasets, we have compared the result obtained from QFM with biological beliefs and other rigorous analyses. The tree returned by QFM (which is identical to the tree estimated by QMC) is quite interesting and is consistent with previous findings, as discussed below.
• QFM has been able to correctly identify the clusters associated with the four genera of birds. Also, it has placed the group of *Amytornis* birds as the sister to the rest of the family, and the group of *Stipiturus* birds as the sister to *Malurus* and *Clytomias* birds. These evolutionary relationships maintained by QFM are supported by the findings of the previous studies [@pone.0104008-Lee1], [@pone.0104008-Christidis1], [@pone.0104008-Christidis2].
• *Amytornis*: Using allozyme analysis, Christidis [@pone.0104008-Christidis2] showed that *A. barbatus* is the earliest diverged lineage in the *Amytornis* genus. The same result was obtained by a DNA sequencing study [@pone.0104008-Christidis3], and the sequence-based analysis of Lee et al. [@pone.0104008-Lee1] also confirmed it. Our analyses with QFM have found the same pattern. Lee et al. [@pone.0104008-Lee1] also showed that *A. housei* should be placed within the *textilis* complex, which is confirmed by our QFM tree.
• *Stipiturus*: Evolutionary relationships within the *Stipiturus* genus have been well studied [@pone.0104008-Lee1], [@pone.0104008-Christidis1], [@pone.0104008-Donnellan1]. Our study is consistent with the previous findings: *S. mallee* and *S. ruficeps* are closer to each other than they are to *S. malachurus*.
• *Clytomyias* and *Malurus*: *C. insignis* was placed with the *Stipiturus* species by [@pone.0104008-Christidis1]. However, in a more recent extensive multi-locus study, Lee et al. [@pone.0104008-Lee1] argued that *C. insignis* is closer to *M. grayi*. Our study has also confirmed this. Our study has further confirmed their [@pone.0104008-Lee1] finding that *M. alboscapulatus* is closer to *M. melanocephalus* than to *M. leucopterus*.
Lee *et al.* [@pone.0104008-Lee1] showed that ILS is likely a general feature of the genetic history of these avian species. Since quartets are not prone to the anomaly zone [@pone.0104008-Degnan2], [@pone.0104008-Degnan4], quartet based analyses are of high importance for resolving avian history. Interestingly, both QMC and QFM resolved the evolutionary history of these birds identically. Therefore, we believe that this tree should be considered a reasonable hypothesis about the evolutionary history of this family of birds.
Discussion {#s3}
==========
In this work we have presented a novel and highly accurate quartet amalgamation technique, which we refer to as QFM. We have demonstrated the superiority of our method over QMC, which is known to be the best quartet amalgamation method to date.
QFM is a promising new divide and conquer supertree method with an algorithmic appeal. We have conducted an extensive experimental study comparing QFM against QMC under different model conditions by varying different parameters. For almost all model conditions considered, QFM performs at least as well as, and in most cases better than, QMC. In line with the experimental results shown in [@pone.0104008-Snir3], we have found that quadratic sampling of quartets is not sufficient for accurate supertree construction. However, with n^2.8 quartets, both QFM and QMC can reconstruct very accurate trees, indicating that it is possible to reconstruct an accurate supertree from a large number of quartets, even with a high amount of noise in the input data. QFM has also been tested on a real biological dataset and has been shown to perform quite well. The tree estimated by QFM has maintained the important evolutionary relationships despite the presence of incomplete lineage sorting. This is particularly interesting because it suggests that quartet-based techniques can be used to develop species tree estimation methods (from multi-locus data) that are less susceptible to gene tree incongruence due to ILS.
Species tree estimation is frequently based on phylogenomic approaches that use multiple genes from throughout the genome. However, combining data on multiple genes is not a trivial task. Genes evolve through biological processes that include deep coalescence (also known as incomplete lineage sorting (ILS)), duplication and loss, horizontal gene transfer, etc. As a result, the individual gene histories can differ from each other [@pone.0104008-Maddison1]. Species tree estimation in the presence of ILS is a challenging task. Moreover, anomalous gene trees (AGTs) make this task even more complicated [@pone.0104008-Degnan2], [@pone.0104008-Degnan3]. It has been proven that AGTs cannot occur in quartets, and thus the most probable quartets induced by the true gene trees represent the true species trees for the corresponding four species [@pone.0104008-Degnan2], [@pone.0104008-Degnan4]. Therefore, quartets can be used to design statistically consistent methods (methods that have the statistical guarantee to construct the true species tree given a sufficiently large number of true gene trees) for constructing the species tree from gene trees (which evolve with ILS) as follows. First, we compute the quartets induced by the gene trees. For every four species, there are three possible quartets. Given a sufficiently large number of true gene trees, the most probable quartets (the most frequently occurring quartets) on every four species represent the true species trees for those four species. Thus combining the most probable quartets to get a single, coherent species tree is a statistically consistent approach for species tree estimation. In this context, we can formalize the maximum weighted quartet satisfiability problem as follows.
• ***Input:*** A set Q of weighted quartets.
• ***Output:*** The species tree T such that T maximizes the sum of the weights of the satisfied quartets in Q.
We can define the weight of a quartet q as the proportion of the gene trees that induce q. We can also incorporate branch lengths in defining the weights. One major advantage of QFM is that it can readily be adapted to take a set of weighted quartets as input without any change to its algorithmic constructs. Therefore, we think QFM is an important contribution to phylogenomic analyses, in particular for estimating species trees from a set of gene trees that can be discordant with each other due to ILS.
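For instance, the quartet weights proposed above (the proportion of gene trees inducing each quartet) could be computed as in the following sketch; the encoding of a quartet as a frozenset of taxon pairs and the function name are our own.

```python
from collections import Counter

def quartet_weights(per_gene_quartets):
    """Weight of each quartet = fraction of gene trees that induce it.

    per_gene_quartets: one collection of (hashable) quartets per gene tree;
    each quartet is counted at most once per gene tree."""
    counts = Counter(q for qs in per_gene_quartets for q in set(qs))
    n_genes = len(per_gene_quartets)
    return {q: c / n_genes for q, c in counts.items()}

ab_cd = frozenset({frozenset({"a", "b"}), frozenset({"c", "d"})})
ac_bd = frozenset({frozenset({"a", "c"}), frozenset({"b", "d"})})
# Four gene trees: three induce ab|cd, one induces ac|bd.
w = quartet_weights([[ab_cd], [ab_cd], [ac_bd], [ab_cd]])
print(w[ab_cd], w[ac_bd])  # 0.75 0.25
```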
Another advantage of QFM lies in its flexibility in choosing the *partition score* function (see the "Partition Score" section). QFM can be customized with different scoring functions (e.g., the difference or the ratio of the numbers of satisfied and violated quartets) without any change to its algorithmic constructs. We have observed that QFM may not give the same result for different scoring functions on the same dataset. So for different datasets, we may obtain better results by adopting different suitable scoring functions. Thus QFM provides the flexibility to change the scoring function as needed. In future we shall try to make our algorithm self-adaptable to the appropriate scoring function by analyzing different characteristics of the input datasets. Notably, as discussed above, one shortcoming of the current implementation of QFM is that it is not as fast as QMC.
Materials and Methods {#s4}
=====================
In this section we present our heuristic algorithm, namely, the **Q**uartet **FM** (**QFM**) algorithm. Our algorithm employs a *quartet based supertree reconstruction* technique that involves a bipartition method inspired by the Fiduccia-Mattheyses (FM) bipartition technique [@pone.0104008-Fiduccia1].
Basics {#s4a}
------
A quartet ab|cd is *consistent* with a tree T if in T there is an edge (or, in general, a path) separating a and b from c and d. For any four taxa, only one quartet (out of the 3 possible quartets) can be consistent with a tree T. In [Figure 3](#pone-0104008-g003){ref-type="fig"}, among the three quartets shown, only one is consistent with the tree, as there exists an edge in the tree separating its first pair of taxa from the second pair. The other two quartets are inconsistent, as no such edge exists in the tree.
{#pone-0104008-g003}
A bipartition of an unrooted tree is formed by taking any edge in the tree, and writing down the two sets of taxa that would be formed by deleting that edge. Let T be a tree over the taxa set S. If we take an internal edge e of T and delete e, then we get two subtrees, T1 and T2. Let S1 and S2 be the sets of taxa of T1 and T2, respectively. We denote such a bipartition by (S1, S2). Thus an internal edge in T corresponds to a bipartition of S.
A quartet ab|cd is *satisfied* with respect to a bipartition (S1, S2) if taxa a and b reside in one part and taxa c and d reside in the other. A *satisfied* quartet is *consistent* with the bipartition. The quartet ab|cd is said to be *violated* with respect to a bipartition when taxa a and c (or a and d) reside in one part and taxa b and d (or b and c) reside in the other part. On the other hand, ab|cd is said to be *deferred* with respect to a bipartition if any three of its four taxa reside in one part and the fourth one resides in the other.
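These cases can be checked directly from how a quartet's taxa fall across the two parts. The sketch below encodes the quartet ab|cd as a pair of pairs, and labels a quartet whose four taxa all lie in one part as "contained" (our own label; such quartets are simply passed down the recursion in the divide step).

```python
def quartet_status(quartet, part1):
    """Classify quartet ab|cd against a bipartition (part1, part2).

    quartet: ((a, b), (c, d)); membership in part2 is implied by
    non-membership in part1."""
    (a, b), (c, d) = quartet
    in1_ab = (a in part1) + (b in part1)
    in1_cd = (c in part1) + (d in part1)
    total = in1_ab + in1_cd
    if total in (0, 4):
        return "contained"   # all four taxa in a single part
    if total in (1, 3):
        return "deferred"    # three taxa in one part, one in the other
    # exactly two taxa in part1: the cut either respects ab|cd or splits it
    return "satisfied" if in1_ab in (0, 2) else "violated"

part1 = {"a", "b", "e"}
print(quartet_status((("a", "b"), ("c", "d")), part1))  # satisfied
print(quartet_status((("a", "c"), ("b", "d")), part1))  # violated
print(quartet_status((("a", "b"), ("e", "c")), part1))  # deferred
```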
A tree T over a taxa set S is said to be a *star* if T has only one internal node and there is an edge from the internal node to each taxon in S. We shall refer to such a tree as a *depth one tree*.
Divide and conquer approach {#s4b}
---------------------------
We follow a divide and conquer approach similar to QMC [@pone.0104008-Snir1]--[@pone.0104008-Snir3]. Let Q be a set of quartets over a set of taxa S. We aim to construct a tree on S satisfying the largest possible number of input quartets. The divide and conquer approach recursively bipartitions the taxa set, where each bipartition corresponds to an internal edge in the tree under construction. QMC uses a heuristic bipartition technique based on finding a maximum cut (MaxCut) in a graph over the taxa set, where the edges represent the input quartets [@pone.0104008-Snir3]. On the other hand, our algorithm uses a heuristic bipartition algorithm inspired by the famous Fiduccia and Mattheyses (FM) [@pone.0104008-Fiduccia1] bipartition algorithm.
### Divide {#s4b1}
At each recursive step, we partition the taxa set S into two sets S1 and S2. We shall describe the bipartitioning algorithm in the "Method of Bipartition" section. After the algorithm partitions the taxa set, it augments both parts (S1 and S2) with a unique dummy (artificial) taxon. This taxon will play a role while returning from the recursion. After the addition of the dummy taxa to the sets S1 and S2, we subdivide the quartet set Q into two sets, Q1 and Q2. The quartet set Qi (i = 1, 2) takes those quartets ab|cd from Q such that all four taxa a, b, c and d, or any three thereof, belong to Si. In other words, quartets satisfied or violated with respect to the partition are not included in either Q1 or Q2. Moreover, in every deferred quartet, where three taxa are in the same part, the fourth taxon is renamed by the name of the dummy taxon of that part, and the quartet continues to the next step. Thus we get two pairs: (S1, Q1) and (S2, Q2). We then recurse on each pair (Si, Qi) if Qi is non-empty and |Si| > 3. If either Qi is empty or |Si| ≤ 3, we return a *depth one tree* over the taxa set Si.
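Under the same pair-of-pairs encoding, the routing of quartets into Q1 and Q2 with dummy-taxon renaming might look like the following sketch (the helper and dummy names are ours; satisfied and violated quartets are simply dropped at this level):

```python
def route_quartets(quartets, part1, part2, dummy1, dummy2):
    """Build the quartet sets for the two recursive calls.

    A quartet with all four taxa in one part goes to that part unchanged;
    a deferred quartet (three taxa in one part) has its lone outside taxon
    renamed to the dummy taxon of the majority part."""
    q1, q2 = [], []
    for (a, b), (c, d) in quartets:
        taxa = (a, b, c, d)
        k = sum(t in part1 for t in taxa)
        if k == 4:
            q1.append(((a, b), (c, d)))
        elif k == 0:
            q2.append(((a, b), (c, d)))
        elif k == 3:
            r = [t if t in part1 else dummy1 for t in taxa]
            q1.append(((r[0], r[1]), (r[2], r[3])))
        elif k == 1:
            r = [t if t in part2 else dummy2 for t in taxa]
            q2.append(((r[0], r[1]), (r[2], r[3])))
        # k == 2: satisfied or violated at this level; not passed down
    return q1, q2

part1, part2 = {"a", "b", "c"}, {"d", "e", "f"}
q1, q2 = route_quartets([(("a", "b"), ("c", "d")),   # deferred, goes to q1
                         (("a", "b"), ("e", "f")),   # satisfied, dropped
                         (("d", "e"), ("f", "a"))],  # deferred, goes to q2
                        part1, part2, "X1", "X2")
print(q1, q2)  # [(('a', 'b'), ('c', 'X1'))] [(('d', 'e'), ('f', 'X2'))]
```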
### Conquer {#s4b2}
On returning from the recursion, at each step we have two trees, T1 (corresponding to (S1, Q1)) and T2 (corresponding to (S2, Q2)). These two trees are rerooted at their dummy taxa. Then the dummy taxon is removed from each tree and the two roots are joined by an internal edge.
[Figure 4](#pone-0104008-g004){ref-type="fig"} describes the high level divide and conquer algorithm on a small example. The input quartet set Q over the taxa set S is first partitioned into two pairs (S1, Q1) and (S2, Q2) by using the bipartition technique described in the "Method of Bipartition" section, with a dummy taxon added to each part. The quartets satisfied by the bipartition are not considered in the next level, while each deferred quartet has its lone outside taxon renamed to the dummy taxon of the part containing its other three taxa. The recursion continues until the quartet sets become empty, at which point a *depth one tree* is returned for each remaining taxa set. In [Figure 4](#pone-0104008-g004){ref-type="fig"}, the upper half shows the *divide* steps; the *depth one trees* are returned when no more recursion is required. The lower half of [Figure 4](#pone-0104008-g004){ref-type="fig"} shows how the trees are returned and merged as the recursion unfolds (conquer step): the dummy taxon of each level is removed and the branches at which the dummy taxa were attached are joined. Thus we get the final merged tree (shown at the bottom of [Figure 4](#pone-0104008-g004){ref-type="fig"}), which satisfies the maximum number of input quartets found by the heuristic.
{#pone-0104008-g004}
Method of Bipartition {#s4c}
---------------------
The most crucial part of our algorithm is the bipartition (divide step) technique. Here we differ from QMC [@pone.0104008-Snir1]--[@pone.0104008-Snir3] and adopt a new bipartition technique inspired by the famous Fiduccia and Mattheyses (FM) algorithm for bipartitioning a hypergraph while minimizing the cut size [@pone.0104008-Fiduccia1]. In divide and conquer based phylogenetic tree construction, the bipartition of the taxa set corresponds to an internal edge of the tree under construction. An internal edge, in turn, determines which quartets are satisfied or violated with respect to the bipartition. So we adopt a bipartition technique different from that used in QMC, with the objective of obtaining better results.
Our bipartition algorithm takes a pair (S, Q) of a taxa set and a quartet set as input. It partitions S into two sets, S1 and S2, with the objective that (S1, S2) satisfies the maximum number of quartets from Q. The algorithm starts with an initial partition and iteratively searches for a better one, using a heuristic search to find the best partition. Before we describe the steps of the algorithm, we describe its algorithmic components.
### Partition Score {#s4c1}
We assess the quality of a partition by assigning a *partition score*. We use a scoring function score(S1, S2) such that a higher score indicates a better partition. This function checks each quartet q in Q against the partition and determines whether q is *satisfied*, *violated* or *deferred*. We define the score function in terms of the numbers of satisfied and violated quartets. Let s and v denote the numbers of satisfied and violated quartets, respectively. Then two natural ways of defining the score function are: 1) taking the difference between the numbers of satisfied and violated quartets (s − v), and 2) taking the ratio of the numbers of satisfied and violated quartets (s/v). In this paper, we have used (s − v) as the score function. We can also use other, more complicated score functions defined in terms of the numbers of satisfied, violated and deferred quartets (where d denotes the number of deferred quartets). In our preliminary experimental study, we have explored different score functions and observed that (s − v) gives better performance in most of the cases. Notably, although in some cases other functions achieve better results than (s − v) (results are not shown in this paper), none of them is consistently better.
### Gain Measure {#s4c2}
Let B = (S1, S2) be a partition of the set of taxa S. Let t be a taxon, and without loss of generality assume that t ∈ S1. Let B′ be the partition obtained after moving taxon t from S1 to S2; that is, B′ = (S1 − {t}, S2 ∪ {t}). Then we define the *gain* of the transfer of taxon t with respect to B, denoted Gain(t), as score(B′) − score(B).
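With the (s − v) partition score described above, the gain of a single move can be computed naively by rescoring before and after the move, as in this sketch (function names ours; an efficient FM-style implementation would instead update gains incrementally):

```python
def score(quartets, part1):
    """Partition score s - v over quartets encoded as ((a, b), (c, d)) = ab|cd."""
    s = v = 0
    for (a, b), (c, d) in quartets:
        in1_ab = (a in part1) + (b in part1)
        in1_cd = (c in part1) + (d in part1)
        if in1_ab + in1_cd == 2:           # 2|2 placement across the cut
            if in1_ab in (0, 2):
                s += 1                      # pairs ab and cd kept together
            else:
                v += 1                      # the cut splits both pairs
    return s - v

def gain(quartets, part1, taxon):
    """Gain of moving `taxon` to the other part: score(after) - score(before)."""
    before = score(quartets, part1)
    after_part1 = part1 - {taxon} if taxon in part1 else part1 | {taxon}
    return score(quartets, after_part1) - before

quartets = [(("a", "b"), ("c", "d"))]
print(gain(quartets, {"a", "c"}, "c"))  # moving c removes a violation: gain 1
print(gain(quartets, {"a", "b"}, "a"))  # moving a breaks a satisfaction: gain -1
```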
### Singleton Bipartition {#s4c3}
A bipartition (S1, S2) of S is *singleton* if |S1| = 1 or |S2| = 1. In our bipartition algorithm, we keep a check for singleton bipartitions: we do not allow the algorithm to return a singleton bipartition, to avoid the risk of an infinite loop.
### Algorithm {#s4c4}
Now we describe the bipartition algorithm, which we call the MFM (Modified FM) bipartition algorithm. Let (S, Q) be the input to the bipartition algorithm, where S is a set of taxa and Q is a set of quartets over S. We start with an initial bipartition of S. The initial bipartitioning is done in four steps.
• Step 1: We count the frequency of each distinct quartet in .
• Step 2: We then sort by the frequency count of the quartets in a decreasing order.
• Step 3: Suppose that after sorting, Q = (q1, q2, …, qm), where the quartets appear in decreasing order of frequency. Now we consider the quartets one by one in the sorted order. Initially both S1 and S2 are empty.
Let q = ab|cd be a quartet in Q. If none of the four taxa belongs to either S1 or S2, then we insert a and b into S1 and c and d into S2. Otherwise, if any of the taxa already exists in S1 or S2, we take the following actions to insert each taxon that does not yet exist in S1 or S2. We maintain an insertion order: we consider a, b, c and d, respectively.
-- To insert a, we look for the part containing b (if b exists in any part) and insert a into that part. But if b does not exist in either part, then we look for the part containing c or d (at least one of these two must exist in S1 or S2) and insert a into the other part.
-- To insert b, we look for the part containing a and insert b into that part.
-- To insert c, we look for the part containing d (if d exists in any part) and insert c into that part. But if d does not exist in either part, then we look for the part containing a or b and insert c into the other part.
-- To insert d, we look for the part containing c and insert d into that part.
• Step 4: When we insert a taxon into any part, we remove it from the pool of unassigned taxa. After considering every quartet and inserting taxa accordingly, if some taxa remain unassigned, we insert the remaining taxa into either part randomly.
Having obtained the initial bipartition, we search for a better partition iteratively. In each iteration, we perform a series of transfers of taxa from one part to the other so as to maximize the number of satisfied quartets. At the beginning of an iteration, we set the status of all the taxa as *free*. Then, for each free taxon t, we calculate Gain(t), and find the taxon with the maximum gain. There can be more than one taxon with the maximum gain, in which case we need to break the tie; we will discuss this issue later. Next we transfer that taxon and set its status as *locked* in the new part, indicating that it will not be considered for transfer again in the current iteration. This transfer creates the first intermediate bipartition. The algorithm then finds the next free taxon with the maximum gain with respect to this intermediate bipartition, and transfers and locks that taxon to create another intermediate bipartition. In this way we transfer all the free taxa one by one. As an example, consider the same quartet set Q and taxa set S as used in [Figure 4](#pone-0104008-g004){ref-type="fig"}. Following the steps of the initial bipartitioning, we get an initial bipartition of S. [Figure 5](#pone-0104008-g005){ref-type="fig"} shows the first iteration of the bipartition algorithm for this example.
{#pone-0104008-g005}
Suppose that the taxa are locked in the order t1, t2, …, tk; that is, t1 has been locked first, then t2, and so on. Let g1, g2, …, gk be the gains of the corresponding transfers. Now we define the cumulative gain up to the i-th transfer as G_i = g1 + g2 + … + gi. The maximum cumulative gain is defined as G_max = max_{1 ≤ i ≤ k} G_i.
In each iteration, the algorithm records the ordering (t1, t2, …, tk) of the transfers in a log table along with the cumulative gains (see [Table 3](#pone-0104008-t003){ref-type="table"} for an example). Let tm be the taxon in the log table at which G_max is attained. This means that we obtain the maximum cumulative gain after moving the m-th taxon (with respect to the order stored in the log table). We then roll back the transfers of the taxa tm+1, …, tk that were moved after tm. The resultant partition after these rollbacks becomes the initial partition for the next iteration. In this way, the algorithm continues as long as the maximum cumulative gain is greater than zero, and finally returns the resultant bipartition. [Table 3](#pone-0104008-t003){ref-type="table"} lists the order of locking, the corresponding gains and the cumulative gains with respect to the iteration illustrated in [Figure 5](#pone-0104008-g005){ref-type="fig"}. If the maximum cumulative gain is attained after moving more than one taxon, we break the tie arbitrarily by considering the taxon at which the maximum cumulative gain is attained for the first time, and roll back all the subsequent moves. Similarly, [Table 4](#pone-0104008-t004){ref-type="table"} lists the order of locking, the corresponding gains and the cumulative gains with respect to the iteration that follows the one illustrated in [Figure 5](#pone-0104008-g005){ref-type="fig"}. From [Table 4](#pone-0104008-t004){ref-type="table"} we see that the maximum cumulative gain is not greater than zero, so all the moves are rolled back and the initial partition of that iteration is returned as the final resultant partition.
10.1371/journal.pone.0104008.t003
###### Gain Summary.
{#pone-0104008-t003-3}
The log table corresponding to the iteration shown in [Figure 5](#pone-0104008-g005){ref-type="fig"}. Here represents the step number. The input partition to step is (, ). The second column shows the taxon that has the maximum gain at the corresponding step, and the third column shows the corresponding maximum gain. The fourth column shows the cumulative gain of the gains listed in the third column. We observe that the cumulative gain reaches its maximum () after moving taxon in step . So all the subsequent moves of taxa are rolled back. The resultant partition of this iteration is (, ) = , which is the initial partition for the iteration following the one in [Figure 5](#pone-0104008-g005){ref-type="fig"}.
10.1371/journal.pone.0104008.t004
###### Gain Summary.
{#pone-0104008-t004-4}
The log table corresponding to the iteration that follows the one shown in [Figure 5](#pone-0104008-g005){ref-type="fig"}. Here represents the step number. The input partition to step is (, ). The second column shows the taxon that has the maximum gain at the corresponding step, and the third column shows the corresponding maximum gain. We observe that the cumulative gain reaches its maximum () at step . So we roll back all the subsequent moves, including the move at step , and return the initial partition of this iteration as the resultant bipartition of the bipartition algorithm. No further iteration is needed, as the maximum cumulative gain of the current iteration is not greater than zero.
As we have mentioned earlier, we do not allow any transfer of taxa that results in a singleton bipartition. Therefore, we need to add some additional conditions. Also, there could be more than one free taxon with the maximum gain, in which case we need to decide which one to transfer. We consider the following cases to address these issues. Let be the set of free taxa with the maximum gain.
• Case 1: and at least one corresponding bipartition is not a singleton. That means, there exists such that the transfer of does not result in a singleton bipartition. Let be the set of taxa that can be safely transferred without resulting in a singleton bipartition. Note that . If , we transfer the taxon . Otherwise, we have more than one taxon in . In that case, we pick the taxon for which the corresponding bipartition (after transferring ) satisfies the maximum number of quartets (note that every taxon in has the same gain, but the corresponding bipartitions do not necessarily satisfy the same number of quartets). In the case of a tie, we choose one taxon at random.
• Case 2: and the transfer of each results in a singleton bipartition. In this case, we consider the set of taxa with the second highest maximum gain. Let be the set of free taxa with the second highest maximum gain. We then recursively check 'Case 1' and 'Case 2' on . If we cannot find a taxon that can be transferred without resulting in a singleton bipartition, we make the status of all the free taxa *locked* and set their gain to zero.
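The locking-and-rollback loop described above follows the classic Fiduccia–Mattheyses scheme. As a minimal sketch only (not the authors' implementation: the names `fm_pass` and `gain_of` are hypothetical, and the quartet-based gain function, quartet tie-breaking, and the singleton-bipartition guard from the cases above are simplified away), one pass might look like:

```python
def fm_pass(partition, gain_of):
    """One refinement pass: repeatedly move the highest-gain free taxon,
    lock it, and log the cumulative gain; then keep only the prefix of
    moves whose cumulative gain is maximal (earliest maximum wins ties)."""
    part = dict(partition)              # taxon -> side (0 or 1)
    free = set(part)                    # taxa not yet locked in this pass
    log = []                            # order in which taxa were moved
    cum = best = best_k = 0
    while free:
        t = max(free, key=lambda x: gain_of(x, part))
        cum += gain_of(t, part)
        part[t] = 1 - part[t]           # tentatively move t across the cut
        free.discard(t)                 # lock t for the rest of the pass
        log.append(t)
        if cum > best:                  # strict '>' keeps the earliest maximum
            best, best_k = cum, len(log)
    for t in log[best_k:]:              # roll back moves after the best prefix
        part[t] = 1 - part[t]
    return part, best
```

A driver would repeat `fm_pass` on the returned partition for as long as `best > 0`, matching the stopping rule in the text.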
At each divide step we have a pair as input. The bipartition algorithm returns a bipartition of the taxa set . We then divide into and and obtain and pairs. and will be further bipartitioned in subsequent divide steps. The pseudo-code of the bipartition method MFM is given in Table S4 in [File S1](#pone.0104008.s001){ref-type="supplementary-material"}. Moreover, the running-time analysis of Algorithm MFM is described in .
Supporting Information {#s5}
======================
######
**Supplementary material.** Additional tables, the pseudocode, and the time-complexity analysis of the MFM bipartition algorithm are presented.
(PDF)
We thank Dr. Sagi Snir for appreciating our work, providing constructive suggestions and helping us with the QMC code and data.
[^1]: **Competing Interests:**The authors have declared that no competing interests exist.
[^2]: Conceived and designed the experiments: RR MSB MSR. Performed the experiments: RR MSB. Analyzed the data: RR MSB MSR. Contributed reagents/materials/analysis tools: RR MSB. Wrote the paper: RR MSR MSB.
|
Originally Posted by Ajfar
^ $400 for driving 82? What was the speed limit? If he gave you a $400.00 ticket I'm pretty sure he will show up. But it doesn't hurt to try. Play the I-don't-have-a-job card. Be like, I was on my way to an interview and I was really stressed out, I'm sorry, I messed up, but I can't afford to pay $400. The judge might feel bad and reduce it. |
A lot went wrong for the Utah Jazz in their disheartening loss to the Memphis Grizzlies on Friday. Many of their issues in that game have been prevalent all season.
It’s hard to be too dissatisfied with the Utah Jazz’s start considering that they sit at 8-4, good for sole possession of fourth place in the West. Given Utah’s tendency to start out slow, their challenging schedule and all the new faces they are incorporating, I’m actually pleasantly surprised to see them at a win percentage of .667, because I figured they’d be closer to around .500 at this point.
What’s been even more pleasant has been their performances against tough opponents such as the Philadelphia 76ers and Milwaukee Bucks, both of which the Jazz defeated. Not to mention a win over a tough, albeit Kawhi-less LA Clippers team and a scrappy victory over a better-than-predicted Phoenix Suns team.
So between some big wins and a better start than projected, there’s more to be positive about than dismayed about when it comes to the Utah Jazz. Nevertheless, they’ve certainly shown some glaring issues that, while they may very well get fixed at some point down the road, remain both debilitating and highly frustrating. No one expects the Jazz to be perfect and go undefeated, but it feels like if they cleaned up a few things, they could be at an even better spot than they are currently.
Perhaps those woes were no better encapsulated than in their recent loss to the Memphis Grizzlies. Sure, there were a lot of emotional factors at play here with it being Mike Conley‘s return to the team he’d played the first 12 years of his career for, but it was still very much a game the Jazz should have won.
That in and of itself has been a recurring frustration for the Jazz – an inability to take care of business against inferior teams in winnable games. But if we break this contest down even further, we see glaring examples of some of the most pressing concerns facing the Salt Lake City squad.
In a lot of ways, the loss to Memphis was a perfect microcosm of what has been going wrong with the Jazz this season. Sure, there’s been a lot of good in the early-going of the 2019-20 campaign, but the issues have been notable and glaring. So let’s dive in to each one. |
Q:
ASP.NET Core: Is it possible to use HttpClient to fetch a file and return directly?
I have an internal API that fetches and returns a file result. However, this API does not have any concept of authentication/role/permission checks and cannot be modified to do so.
I would like to create a web API endpoint on an existing ASP.NET Core 2 Web API that does a permission check, makes a call to this internal API, and returns the file result back to a web client.
Would it be possible to get the wrapper API endpoint to just pass along whatever it fetches as a file result, without having to reconstruct the response (e.g., specify file name, content type, etc.)? The files could be images, PDFs, or documents. I would prefer that this wrapper API only do the permission check and make a call to the internal API endpoint using some sort of fileId, and not need to know about the content length or type.
A:
Per @chris-pratt's recommendation, it turned out reconstructing the result wasn't as complicated as I originally anticipated. I ended up implementing it this way, in case someone needs to do something similar:
... some validation logic outside the scope of the question...
using (HttpClient client = new HttpClient())
{
    // Fetch the file from the internal API, then relay its bytes and
    // content type back to the caller unchanged.
    var file = await client.GetAsync($"{someURL}/{id}");
    file.EnsureSuccessStatusCode();
    return new FileContentResult(
        await file.Content.ReadAsByteArrayAsync(),
        file.Content.Headers.ContentType.MediaType);
}
(For a long-lived service, prefer a single shared HttpClient or IHttpClientFactory over creating a new client per request.)
|
Mesino-antimesino oscillations
The phenomenology of supersymmetric theories with low scale supersymmetry breaking and a squark as the lightest standard model superpartner is investigated. Such squarks hadronize with light quarks, forming sbaryons and mesinos before decaying. Production of these supersymmetric bound states at a high energy collider can lead to displaced jets with large negative impact parameters. Neutral mesino-antimesino oscillations are not forbidden by any symmetry and can occur at observable rates. Stop mesino-antimesino oscillations would give a sensitive probe of up-type sflavor violation, and can provide a discovery channel for supersymmetry through events with a same-sign top-top topology. |
#ifndef SIMECK_H
#define SIMECK_H

#include "../include/macros.h"

#ifdef __cplusplus
extern "C" {
#endif

/* Forward declarations for the SIMECK lightweight block cipher routines. */
void simeck(void *, void *);
void simeckx(void *, void *);

#ifdef __cplusplus
}
#endif

#endif /* SIMECK_H */
|
Is politics an art or science? It’s a vexed question that, I reckon, Plato must have struggled with in his time. While I am not an Athenian philosopher, I would wager that politics is a cross betwe... |
844 F.2d 1303
UNITED STATES of America, Plaintiff-Appellee,v.Dennis P. MARX a/k/a Dennis Martin, Big "D," Dennis Burtell,Defendant.Appeal of Mary Ann MARX.
No. 87-1715.
United States Court of Appeals,Seventh Circuit.
Argued Dec. 7, 1987.Decided April 6, 1988.As Amended April 12, 1988.
William H. Theis, Chicago, Ill., for defendant.
Melvin K. Washington, Asst. U.S. Atty., Patricia A. Gorence, U.S. Atty., Milwaukee, Wis., for plaintiff-appellee.
Before CUMMINGS, CUDAHY and MANION, Circuit Judges.
CUMMINGS, Circuit Judge.
1
Petitioner Mary Ann Marx seeks to recover her asserted interest in property forfeited by her husband Dennis Marx to the federal government. Dennis pled guilty to various counts of an indictment arising out of a drug conspiracy, including a count of engaging in a continuing criminal enterprise in violation of 21 U.S.C. Sec. 848, was sentenced to a prison term of twenty-five years and, under 21 U.S.C. Sec. 853(a), forfeited certain property, including sixty-seven shares of Accurate Brass and Aluminum Foundry, Inc. ("Accurate"), to the United States. In addition to providing for the criminal forfeiture of certain property, Section 853 also provides that any person, other than the defendant, asserting a legal interest in the property that has been ordered forfeited may petition the court for a hearing to adjudicate the validity of her alleged interest in the property. 21 U.S.C. Sec. 853(n)(2). Pursuant to this Section, Mary Ann petitioned the court and asserted her interests in the Accurate shares under three separate legal theories: (1) marital property, (2) a constructive trust, and (3) a resulting trust. Following a two-day hearing, the district court found that Mary Ann did not participate in or have knowledge of her husband's illegal activities, but it nevertheless concluded that she had not established her interests by a preponderance of the evidence, see 21 U.S.C. Sec. 853(n)(6)(A), and denied the petition. We reverse and remand.
2
* The forfeited property in dispute is sixty-seven shares of stock in Accurate that represent 100 per cent ownership of the corporation and at the time of the forfeiture were titled in the name of Dennis Marx. Accurate was incorporated in 1964 and initially issued fifty shares, twenty-five titled in the name of Dennis Marx and twenty-five in the name of Mary Ann's father, Steven Andracek. Although the Marx shares were titled in Dennis' name alone, both Dennis and Mary Ann testified1 that they regarded the business as belonging to both of them. As Dennis stated, "she had half of the headaches and half the problems, so I assumed if there was any benefit, she had half the benefit." Asked why, if they believed the shares belonged to both of them, the shares were titled in only Dennis' name, Mary Ann and Dennis each testified that it was a convention of the times, that in 1963 lawyers drew up legal documents in the man's name.
3
Each of the original fifty shares cost $100. The $2,500 consideration for the shares titled in Dennis' name was paid out of the proceeds of the sale of pieces of heavy equipment owned by M & M Excavating. M & M Excavating was a small business run between 1959 and 1963 by the Marxes and a war "buddy" of Dennis. During that period Mary Ann worked both as a bookkeeper for M & M and as a registered nurse. She worked for M & M without compensation and poured part of her nursing earnings into the struggling M & M. These earnings from Mary Ann's nursing contributed to the purchase price of the M & M equipment later sold to acquire the twenty-five shares in Accurate.
4
In 1967, Accurate issued ten additional shares, again titled to Dennis alone. The consideration for these ten shares came from the proceeds of the sale of a home that was titled jointly in both Dennis' and Mary Ann's names. Mortgage payments on the home had been made in part with Mary Ann's outside earnings.
5
In 1967 or 1968, Mary Ann's father gave his wife, Mary Andracek, his twenty-five shares of Accurate stock. In 1970, Mary Andracek entered into a written agreement with Dennis to sell him the twenty-five shares at the original price of $100 per share. Under the terms of the agreement, title to the twenty-five shares transferred to Dennis immediately but was subject to return if he failed to make payments of $200 per month on the $2,500 purchase price. Dennis ultimately paid only $500, the price of only five of the shares, again selling equipment of M & M Excavating to finance the purchase.
6
Although Dennis still owed her $2,000 for the twenty remaining shares, Mrs. Andracek told him that he need not pay the remainder because she was giving the twenty shares to Mary Ann. She testified that she told Dennis to hold the shares for Mary Ann, and that he stated that although the stock remained in his name, he understood the shares were a gift to Mary Ann from her mother.
7
The preceding narrative accounts for only sixty of the sixty-seven outstanding shares of Accurate. The record does not reflect and neither the parties nor the district court mention how and with what assets Dennis acquired the last seven shares.
II
8
Mary Ann contends that she has a presumptive one-half interest in all property acquired during her marriage to Dennis, regardless of whether she or her husband holds title. As a result of this presumption she claims a legal interest superior to that of the government in thirty-three and one-half of the sixty-seven shares forfeited. Under the applicable Wisconsin law, the district court correctly rejected this claim.
9
Wisconsin's new Marital Property Act, Wis.Stat. ch. 766 (1986), expresses "the intent of the [Wisconsin] legislature that marital property is a form of community property," Wis.Stat. Sec. 766.001(2), and states that all property is presumed to be marital property in which each spouse has a present undivided one-half interest. Wis.Stat. Sec. 766.31(2) and (3). Mary Ann, however, cannot rely on this statute to support her contentions. The federal criminal forfeiture statute requires the court to determine Mary Ann's interest "at the time of the commission of the acts which gave rise to the forfeiture of the property." 21 U.S.C. Sec. 853(n)(6)(A). All of Dennis Marx's drug activities occurred before the effective date of the new Marital Property Act, January 1, 1986. Therefore Mary Ann's interest must be determined under the prior statute, Wis.Stats. Sec. 767.255,2 and the cases interpreting that section.
10
Relying on Section 767.255, the district court dismissed Mary Ann's argument that she has a presumptive one-half interest in marital property during the marriage. At a pre-trial hearing the district court held that, unlike the new Section 766.31 that grants a present property interest during the marriage, Section 767.255 provided a property interest only upon a judgment of annulment, divorce or legal separation as to property titled in the husband's name. Under Section 767.255, a wife has no property rights until a court creates those rights in a divorce proceeding. Because Mary Ann and Dennis were married at the time of the acts giving rise to the forfeiture, and are still, Mary Ann has no interest in the shares titled to Dennis.
11
Mary Ann urges a different interpretation of Section 767.255. She argues that Wisconsin adheres to a partnership theory of marriage that grants the non-titled spouse more than an inchoate right in marital property that comes into existence only upon divorce. The cases she cites in support of her interpretation do recognize marriage as a partnership and marital property as partnership property, but they do so in the context of the division of property in a divorce proceeding. Krueger v. Wisconsin Department of Revenue, 124 Wis.2d 453, 369 N.W.2d 691 (1985) (transfers of property at divorce are not taxable on the ground that the spouses were equitable co-owners of the property); Lacey v. Lacey, 45 Wis.2d 378, 173 N.W.2d 142 (1970) (the division of property upon divorce rests upon the concept of marriage as a shared enterprise or joint undertaking--a partnership); see also Savings and Profit Sharing Fund of Sears Employees v. Gago, 717 F.2d 1038, 1044 (7th Cir.1983) (recognizing that the effect of Section 767.255 is that upon divorce all non-exempted property is presumptively subject to equal distribution). She does not cite, and we have not found, any cases effecting a division of marital property outside the context of a divorce proceeding.
12
In the absence of supporting case law outside the context of divorce, Mary Ann urges us to consider the fact that with her husband in prison for at least the next fifteen years3 she needs her share of marital property at least as much as a woman who is undergoing a divorce and asks us to look to general partnership law. Although her argument is compelling, it does not state the law of Wisconsin at the time of Dennis' illegal acts. If Mary Ann's interpretation of the prior law is correct, then the new Act is merely a codification of the preexisting law with no substantive effect. However, this is not the case. The new Act sharply differentiates between marriages occurring before and after the determination date of the new Act and property acquired before and after the determination date, emphasizing the significance of the change in the law. Wis.Stats. Sec. 766.31(6), (8) and (9). The express intent and language of the new Act undermine the interpretation Mary Ann urges. We affirm the decision of the district court rejecting her assertion of an interest in thirty-three and one-half shares on the basis of a marital property theory.
III
13
Mary Ann also claims that her mother, Mary Andracek, gave Dennis twenty shares of Accurate stock to hold for Mary Ann. The district court mislabeled this a resulting trust argument.4 Mary Ann labels it a constructive trust.5 We hold that if the testimony of Mary Andracek and Dennis is credited, the gift by Mary Andracek to her daughter created an express trust with Dennis as trustee; then, if Dennis' illegal activity breached that trust, the express trust was converted to a constructive trust. In either case, at all times Mary Ann had an interest in the shares.
14
An express trust arises as a result of the manifestation of an intention to impose a duty. If the requisite intent is present, no particular form of words or conduct is necessary. In re Estate of Horkan, 273 Wis. 442, 78 N.W.2d 767 (1956). It is immaterial whether the person manifesting the intent calls the relationship a trust or whether she even knows that the relationship she intends to create is called a trust. In addition, if the evidence concerning the creation of the trust is clear and convincing, a written instrument is not necessary. Swazee v. Lee, 259 Wis. 136, 47 N.W.2d 733 (1951).
15
In this case, the uncontradicted testimony of two witnesses supports the creation of an oral express trust. Mary Andracek and Dennis both testified that Mrs. Andracek told Dennis that he need not pay her the $2,000 he owed her on the twenty shares of Accurate because she was giving the shares to Mary Ann. Although the stock remained titled in Dennis' name, both he and Mrs. Andracek understood that the shares were a gift to Mary Ann. Where the owner of property transfers it to another with a direction to transfer it or to hold or deal with it for the benefit of a third person, this may be a sufficient manifestation of intent to create a trust. 1 Scott, The Law of Trusts Secs. 23-24 (3d ed. 1967 & Supp.1985).
16
The government attempts to cast doubt on Mrs. Andracek's testimony by suggesting that it may have been "inspired" by her situation as a widow living in Mary Ann's home (Br. at 26). However, Judge Curran's order does not state that he discredited Mrs. Andracek's or Dennis' testimony that such a gift had been made to Mary Ann in 1974. Absent any finding of lack of credibility, the gift testimony remains unrebutted. This testimony provides clear and convincing evidence of the creation of an express trust of twenty shares of Accurate stock to benefit Mary Ann.
17
Through his illegal activity Dennis may have breached this express trust. But when the trustee of an express trust commits a breach of that trust, a constructive trust may arise by operation of law as a remedial device. 5 Scott on Trusts Sec. 461. Following this principle, the Supreme Court of Wisconsin upheld the imposition of a constructive trust in the context of a transfer of property from one family member to a second family member who promised, then breached the promise, to hold the property for the benefit of yet a third family member. Masino v. Sechrest, 268 Wis. 101, 66 N.W.2d 740 (1954).
18
In Masino, a mother deeded property to her daughter and son-in-law with the understanding that they would hold it for the benefit of the daughter's brothers and sisters. After her mother's death, the daughter and her husband repudiated the express trust under which they received the deed and claimed the property as their own. The Court held that the daughter's brothers and sisters "have an equitable claim on the trust res which they may realize through equity's provision for imposing a constructive trust based on unjust enrichment." Masino, 268 Wis. at 111, 66 N.W.2d at 744 (emphasis in original). See also Nehls v. Meyer, 7 Wis.2d 37, 95 N.W.2d 780 (1959).
19
Under Wisconsin law, "the imposition of a constructive trust requires that the record in [the] case establish (1) unjust enrichment on the part of the defendant,6 and (2) abuse of a confidential relationship or some other form of unconscionable conduct." First National Bank of Appleton v. Nennig, 92 Wis.2d 518, 285 N.W.2d 614, 625 (1979). See also Watts v. Watts, 137 Wis.2d 506, 533-534, 405 N.W.2d 303 (1987); Gorski v. Gorski, 82 Wis.2d 248, 254-255, 262 N.W.2d 120 (1978). The record in this case establishes both elements. First, Dennis clearly was unjustly enriched by treating Accurate as his alone and subsequently using it in connection with his illicit drug activity. Second, an express trust establishes a confidential relationship, 6 Scott on Trusts Sec. 2.5, and the trustee abuses the confidential relationship by misusing trust assets. See e.g., Masino, 66 N.W.2d at 742-743.
20
The district court refused to accord relief for four reasons: (1) Accurate's corporate records at no time listed Mary Ann as a shareholder and its income tax returns list Dennis as owning one hundred percent of the shares; (2) Mary Ann's brother spoke to their mother in 1982 about her shares in Accurate and she did not indicate to him that she had given the shares to Mary Ann; (3) neither the wills of Mary Ann and Dennis nor a trust agreement executed by Dennis refer to Mary Ann's interest; and (4) Mary Ann and other family members executed a warranty agreement on September 17, 1986, stating that Dennis was the sole owner of the sixty-seven shares of Accurate. We are not persuaded.
21
First, the fact that the corporate records do not show Mary Ann as owning any share is the reason why she is attempting to establish ownership through a constructive trust. A constructive trust is an appropriate equitable remedy when title is held in the name of another party. Thus the fact that title is held in Dennis' name precipitates, not defeats, Mary Ann's constructive trust argument. Second, the fact that in 1982 Mrs. Andracek failed to mention to her son that she had given the shares to his sister as a gift in 1974 does not defeat the gift or contradict the testimony establishing the gift. Third, the lack of any reference to Accurate stock in the wills and trust agreement and the residuary clause in Dennis' will that gives everything to Mary Ann do not hurt Mary Ann's claim; the documents are completely neutral. Finally, despite its seemingly unambiguous language, the warranty deed was only furnished to enable the government to sell all the Accurate assets, while holding the proceeds until Mary Ann's anticipated claim was finally adjudicated. It is not an impeaching declaration by Mary Ann that she has no interest in the shares.
22
In view of the uncontradicted testimony of two witnesses concerning Mrs. Andracek's gift of her twenty shares to Mary Ann and the evidence of unjust enrichment and abuse of a confidential relationship, we reverse and remand. On remand, if Judge Curran discredits the testimony establishing the gift he should make an express finding to that effect. Otherwise, a constructive trust should be imposed to establish Mary Ann's interest in twenty shares of Accurate.
IV
23
Mary Ann also argues that she owns twenty additional shares of Accurate through operation of a resulting trust. Mary Ann asserts a one-half interest in the forty shares of Accurate for which she provided the consideration: the original twenty-five shares purchased in Dennis' name in 1964 with the proceeds of the sale of M & M equipment, the ten shares purchased in Dennis' name in 1967 with the proceeds from the sale of their jointly titled home, and the five shares purchased from Mary Andracek in 1970 with the proceeds of the sale of M & M equipment. The district court's order denied imposition of a resulting trust as to the five shares purchased from Mary Andracek and failed to address the imposition of a resulting trust over the remaining thirty-five shares. We reverse and remand.
24
Although neither the parties nor the district court erred by relying on the statute, to avoid contributing to any confusion, we acknowledge that purchase money resulting trusts have been abolished by statute in Wisconsin. Wis.Stats. Sec. 701.04. However, that statute did not become effective until July 1, 1971, and the acts upon which Mary Ann relies to support the creation of resulting trusts all antedate the statute. Although, as seen, applying the law at the time the actions occurred deprives Mary Ann of a remedy under the new Marital Property Act, it provides her a remedy under a resulting trust theory.
25
A resulting trust, unlike a constructive trust, seeks to carry out a donative intent rather than to thwart an unjust scheme. American National Bank, 832 F.2d at 1035. The general rule is that "[w]here a transfer of property is made to one person and the purchase price is paid by another, a resulting trust arises in favor of the person by whom the purchase price is paid." Restatement (Second) of Trusts Sec. 440 (1959). See Hanus v. Jankowski, 256 Wis. 187, 40 N.W.2d 573, 574 (1949) (citing the Restatement ). The reasoning behind the imposition of a resulting trust in favor of the payor is that persons usually don't give up something for nothing. This presumption has less force in dealings within a family where legal and moral duties of support, love and affection, rather than economic gain, motivate transactions. See American National Bank, 832 F.2d at 1036. The common law recognizes this and reverses the presumption in cases involving certain close relationships.
26
The rebuttable presumption of a gift, rather than a resulting trust, arises when the transferee is, by virtue of the relationship, a natural object of the transferor's bounty. Restatement Sec. 442. The common law considered the wife, but not her husband, the natural object of the bounty. The exception, quoted with approval by the Supreme Court of Wisconsin, is that a conveyance " 'from a husband, parent, or other person, where title is taken in the name of the wife, child or other natural object of the purchaser's bounty, generally does not raise, and, on the contrary, rebuts, a resulting trust, and raises a presumption of a gratuitous settlement on the wife, child, or other object of the bounty.' " Hanus, 40 N.W.2d at 573 (quoting 54 Am.Jur. par. 205).
27
Because the conveyance in this case was by a wife, Mary Ann, to her husband, Dennis, the exception does not apply and a resulting trust is presumed. However, if the situation were reversed and Dennis had provided funds to purchase the Accurate shares that were titled in Mary Ann's name, a gift would be presumed. As we recently stated concerning the same exception in Illinois, "[i]n an era when laws that differentiate between men and women on grounds that stereotype women as the weaker sex are constitutionally suspect, ... this exception ... must be reckoned to be highly dubious." American National Bank, 832 F.2d at 1036. A non-discriminatory rule that would adapt the original intent of the equitable tool to modern society would presume a gift when either spouse is the payor of property taken in the name of the other spouse. See, e.g., Mims v. Mims, 305 N.C. 41, 286 S.E.2d 779 (1982); Butler v. Butler, 464 Pa. 522, 347 A.2d 477 (1975); see also Bogert & Bogert, The Law of Trusts and Trustees Secs. 459-460 (rev. 2d ed. 1977); 5 Scott on Trusts Sec. 442. However, because the government failed to challenge the presumption in favor of Mary Ann on this ground and Wisconsin no longer allows the imposition of a resulting trust, thus making the point moot for future cases, the constitutionality of the distinction has not been argued and need not be determined here. We hold that a rebuttable presumption may exist imposing a resulting trust in favor of Mary Ann.7
28
In opposition to a resulting trust, the government argues that "the contention could equally be made" that Mary Ann and Dennis divided the proceeds from the sale of the M & M excavating equipment and of their home equally between them and that Mary Ann used her share in one way while Dennis used part of his share to acquire Accurate stock (Br. at 24). Nothing in the record supports this version of the facts.
29
The present order of the district court does not show why Mary Ann is not entitled to twelve and one-half of the original twenty-five shares, five of the ten shares purchased in 1967 and two and one-half of the five shares purchased from Mary Andracek in 1970. We remand to allow Mary Ann to establish that she provided the consideration for these shares and thus support the imposition of a resulting trust over some or all of the shares.
30
The judgment of the district court is vacated and the cause remanded for further proceedings consistent with this opinion. Costs to be borne by the respective parties.
1
Because Dennis is now serving his prison sentence for the crimes that led to the forfeiture, his deposition, taken via tele-conference on March 11, 1987, was introduced at the hearing
2
Section 767.255 provided in relevant part: "Upon every judgment of annulment, divorce or legal separation, or in rendering a judgment in an action under Section 767.02(1)(h) [actions affecting the family], the court shall divide the property of the parties and divest and transfer the title of any such property accordingly."
3
Dennis was sentenced to a prison term of twenty-five years. The sentence was imposed under 21 U.S.C. Sec. 848, and under the terms of that Section he cannot be paroled and must serve a minimum of fifteen years in prison.
4
For distinctions between the theories of resulting and constructive trusts, see In the Matter of Iowa Railroad Co. v. Moritz, 840 F.2d 535, 544-545 (7th Cir.1988) (Illinois law), and American National Bank and Trust Co. of Rockford, Ill. v. United States, 832 F.2d 1032, 1035 (7th Cir.1987) (Illinois law).
5
She did not include this theory in her petition to set aside the forfeiture, but did advance a constructive trust argument in her Second Memorandum of Law filed in the district court. Her trial brief and proposed findings of fact and conclusions of law also relied upon a constructive trust argument. As the government did not object to her raising the argument in the district court, it is too late to assert that she should be barred from presenting it here.
6
Although the nominal defendant here is the United States, Mary Ann is trying to establish her right to the property prior to the forfeiture; therefore the defendant as to the imposition of a constructive trust is Dennis.
7
Even if a non-discriminatory rule applied, the presumption that Mary Ann made a gift to Dennis could be rebutted by the evidence that neither Mary Ann nor Dennis intended the payments to be a gift and both intended that they own the shares jointly.
/**
* Copyright (c) 2013-2020, jcabi.com
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met: 1) Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer. 2) Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials provided
* with the distribution. 3) Neither the name of the jcabi.com nor
* the names of its contributors may be used to endorse or promote
* products derived from this software without specific prior written
* permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT
* NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
* FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
* INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
* SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/**
* Wires.
*
* @author Alexander Sinygain (sinyagin.alexander@gmail.com)
* @version $Id: c521fff60803731b4bb8b1ae1f0db7055104d047 $
*/
package com.jcabi.github.wire;
In a typical day, many people come into contact with a massive number of electronically controlled devices. Such devices range from automobiles and appliances to home and office equipment, telephones, and televisions, to name but a few. Many of these devices must be moved from time to time; many are even portable. These devices provide a vast and diverse assortment of services for the people who come into contact with them. However, they suffer from a common problem related to user input and output (I/O).
User I/O refers to the components and processes used to communicate user-supplied data to an electronic device and to annunciate data from an electronic device so that the data may be perceived by a user. Although electronic devices provide a vast and diverse assortment of services, they tend to have redundant I/O. In other words, many such devices have displays, speakers and the like at which data may be annunciated, and have buttons, switches, keypads and other controls at which user-supplied data may be communicated to the devices. Because manufacturers strive to keep costs low and sizes small, user I/O capabilities often suffer. As a result, many electronic devices encountered in everyday life, and particularly many portable devices, are cumbersome and tedious to use: communicating data from a user to the devices is difficult, and provisions are unavailable for clearly annunciating data for the user's benefit.
In theory, this user I/O problem could be ameliorated by better integrating electronic devices to ease data communications therebetween. For example, a portable telephone could receive a facsimile (fax), but typically has no capability to print the fax and typically has no capability to communicate with a printer which may be able to print the fax. Likewise, a pager may receive a call-back phone number, but typical pagers have no capability to transfer the call-back number to a telephone from which the call-back can be made. User involvement is required to address these and many other data transfer issues. While many conventional data communication or computer network architectures are known, the conventional architectures are unsuitable for the task of integrating a plurality of electronic devices which collectively provide a vast and diverse assortment of services.
Conventional computer networks require excessively complicated setup or activation procedures. Such setup and activation procedures make the jobs of forming a connection to a new network node and making changes in connectibility permission cumbersome at best. Setup and activation procedures are instituted, at least in part, to maintain control of security and to define network addresses. Typically, a system administration level of security clearance is required before access is granted to network tables that define the network addresses. Thus, in conventional networks, many network users lack sufficient security clearance to activate and obtain addresses of network nodes with which they may wish to connect on their own.
Once setup is performed, either directly by a user or by a system administrator, connections are formed when an initiating node presents the network with the address of a network node to which a connection is desired. The setup or activation requirements of conventional networks force nodes to know or obtain a priori knowledge of node addresses with which they wish to connect prior to making the connection. Excessive user attention is involved in making the connection through setup procedures and during the instant of connection to obtain addresses. This level of user involvement leads to an impractical network implementation between the everyday electronic devices with which people come into contact.
Further, conventional computer networks tend to be infrastructure intensive. The infrastructure includes wiring, servers, base stations, hubs and other devices which are dedicated to network use but have no substantial non-network use to the computers they interconnect. The use of extensive network components is undesirable for a network implementation between everyday electronic devices because an immense expense would be involved to support such an infrastructure and because it impedes portability and movability of nodes.
The use of wiring to interconnect network nodes is a particularly offensive impediment to the use of conventional networks because wiring between diverse nodes is not suitable when some of the nodes are portable. Wireless communication links could theoretically solve the wiring problem, and conventional wireless data communication networks are known. However, the conventional wireless networks do little more than replace wire lines with wireless communication links. An excessive amount of infrastructure and excessive user involvement in setup procedures are still required.
There is a great deal of information that one typically carries on one's person. These data are encoded onto physical artifacts that are then tucked inside a wallet or a purse or simply carried in a pocket. For an artifact to be useful, one must physically carry it around in anticipation of its use. Eventually, the wallet or purse gets bulky from carrying everything that one might anticipate using over the course of a week or a month. When an artifact is used, it must be physically removed from the wallet and then returned upon completion of the transaction, if appropriate. Repeatedly removing and replacing an artifact both causes wear on the artifact and subjects it to loss and theft. By digitizing all three categories of artifacts and selectively moving them over a wireless link, these problems are solved.
Individuals routinely carry three categories of things in their wallet:
1) financial instruments that can be used to obtain goods or services,
2) items used as physical or logical "pass keys", and
3) lists of data.
The first category, "financial instruments", usually includes paper cash and coins, credit cards, debit cards, cash cards, gift certificates, and discount coupons.
The second category contains artifacts that give you physical or logical access to some privilege. Cards containing personal information such as a social security number, health insurance number, and car insurance identification are often found in an individual's wallet for this purpose. Such contents may also include video club memberships, frequent eater cards such as those given out by restaurants, frequent flyer cards associated with the airline industry, warehouse store membership cards, telephone company calling cards, public library cards, and so on. Legal identification such as a drivers license or passport also fall into this category. Tickets such as those purchased for the theatre, football game or lottery reside in this category. This category may also harbor physical pass keys such as a door key or magnetic pass keys encoded onto a credit card format like those given out by a hotel.
The third category of artifacts that people typically carry with them in a wallet consists of simple lists of data. Such lists may include medical emergency information such as medications, blood type, previous surgeries, name of doctor, next of kin, and so on. Telephone numbers, shopping lists, maps, your spouse's clothing sizes and color preferences, your children's birthdays, and calendar & schedule information are also included in this category. Pictures of your family can be treated as belonging to this category, as can purchase receipts and other records of transactions.
Note that these three categories of data are not necessarily mutually exclusive. Take for example the number on one's telephone calling card. This could easily fall into all three categories. First, it is a financial instrument because it allows access to toll services. Second, it is a logical pass key to a telephone's toll services. And third, it is a data item because ultimately it is just a number. This example implies that the data that an individual carries with them needs to be structured, in other words, meta-data are needed in order to enhance the information's use.
Current devices have yet another problem with respect to a subclass of the third type of data, "lists of data". Quite often there exist data purely about an individual that the individual cannot access; in some cases the individual can read the data but cannot change it, and in other cases the individual cannot even read it. Two examples of this sort of data are credit histories and medical histories. In both cases the data refer to a specific individual, yet that individual cannot have write access to the data. This is for good reason: while the data are about the individual, the individual is not the caretaker of the data. This type of data is called "restricted data" in the following discussions. Currently, if an individual wants to share such data with a third party (e.g., when establishing care with a new physician or applying for a loan), they must refer the third party back to the caretaker of the data. Several personal transaction scenarios require a mechanism that allows the individual to carry such information with them and share it without the need for a caretaker.
Currently, the closest technology that accommodates the previously discussed needs is the SmartCard. A SmartCard is a credit-card-sized database that is able to store and exchange information. Yet the SmartCard is inadequate in the following ways: 1) A SmartCard has no user interface; typically, it is inserted into another device that allows the user to enter and retrieve data from the database. 2) A SmartCard must be physically docked with another device to transfer information, because it has no user interface of its own and no other communication link. 3) SmartCards tend to store a very narrow range of information. For example, a SmartCard might hold money, or at least its electronic equivalent; storing pictures would require another SmartCard. This is primarily because of the method used to access the information, and because the SmartCard provides no mechanisms for structuring the data. 4) The user of a SmartCard must take overt action to use its capabilities; at a minimum, the user must pull it out of the wallet and run it through a reader. What is needed instead is a device that can be configured to perform transactions in specific situations automatically, with no overt action from the user once programmed. 5) When used for identification purposes, a SmartCard carrying legal identification must be physically read, and only when the user takes the overt action to present it. What is needed is a device capable of beaconing a wireless digital identification at periodic intervals or when overtly "pinged" or interrogated by another unit, so that the user can be identified with no overt action.
Another device which attempts to address some of the previously discussed needs and issues is the Personal Digital Assistant, otherwise known as a PDA. Advantages and disadvantages of the PDA include: 1) The PDA has some user input capability and some user output capability directly on the unit. 2) The PDA can store fairly large amounts of unstructured data. 3) There is still a need to structure data beyond what the typical PDA allows; the typical PDA supports only business cards, notes and scheduling information. 4) The typical PDA uses some sort of physical coupling, perhaps through a docking station, to transfer information from one unit to another; in addition to a physical coupling capability, some PDAs also employ an IrDA wireless link for this purpose. 5) Current PDAs do not support "restricted data".
What is needed is a device/method for overcoming these deficiencies of the prior art, in a hand-holdable and readily reconfigurable fashion.
#include <shogun/lib/SGMatrix.h>
#include <shogun/lib/config.h>

/** Tests whether a SGMatrix is a permutation matrix. */
bool is_permutation_matrix(shogun::SGMatrix<float64_t> m);
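For reference, a minimal sketch of how such a check could be implemented. It deliberately uses `std::vector` instead of `shogun::SGMatrix` so the logic is self-contained; the helper name `is_permutation_matrix_sketch` and the `1e-9` tolerance are illustrative assumptions, not shogun API.

```cpp
#include <vector>
#include <cmath>
#include <cstddef>

// Sketch only: the real function operates on shogun::SGMatrix<float64_t>.
// A matrix is a permutation matrix iff it is square, every entry is
// (numerically) 0 or 1, and each row and each column contains exactly one 1.
bool is_permutation_matrix_sketch(const std::vector<std::vector<double>>& m)
{
    const std::size_t n = m.size();
    if (n == 0)
        return false;  // convention: treat an empty matrix as invalid
    std::vector<int> col_ones(n, 0);
    for (std::size_t i = 0; i < n; ++i)
    {
        if (m[i].size() != n)  // must be square
            return false;
        int row_ones = 0;
        for (std::size_t j = 0; j < n; ++j)
        {
            const double v = m[i][j];
            if (std::fabs(v) < 1e-9)  // entry is 0: nothing to count
                continue;
            if (std::fabs(v - 1.0) > 1e-9)  // entry is neither 0 nor 1
                return false;
            ++row_ones;
            ++col_ones[j];
        }
        if (row_ones != 1)  // exactly one 1 per row
            return false;
    }
    for (std::size_t j = 0; j < n; ++j)
        if (col_ones[j] != 1)  // exactly one 1 per column
            return false;
    return true;
}
```

The tolerance is needed because the entries are floating-point; an exact `== 1.0` comparison would reject matrices produced by numerical routines.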
Infection with *Mycobacterium tuberculosis* causes enormous worldwide morbidity and mortality; there were more cases of tuberculosis in 2007 (the last year for which data are available) than at any prior point in world history[@R1]. Among the factors that contribute to the continued growth of tuberculosis as a global health problem are the efficiency of human-to-human transmission by the aerosol route, the ability of the causal agent *M. tuberculosis* to persist and to progress despite development of host immune responses, and the absence of a vaccine with reliable efficacy in preventing transmission of the infection. Moreover, while attempts to control tuberculosis through improved identification and treatment of infectious cases have been successful in some settings, similar approaches in other contexts have resulted in increasing rates of resistance to available anti-tuberculosis drugs[@R2]. Therefore, new approaches to controlling tuberculosis are essential and would greatly benefit from an improved understanding of the biology of the bacteria and their interactions with their human hosts. In particular, understanding the factors that drive the evolution of *M. tuberculosis* and allow it to evade host defences may suggest unique opportunities to develop novel strategies against tuberculosis.
Human tuberculosis is caused by *Mycobacterium tuberculosis* and *Mycobacterium africanum*, which are members of the *M. tuberculosis* complex (MTBC). In addition to these human-adapted pathogens, MTBC includes various animal-adapted forms, such as *Mycobacterium bovis, Mycobacterium microti*, and *Mycobacterium pinnipedii*[@R3]. To characterize the extent and nature of the forces acting to diversify MTBC, we and others have applied several approaches to phylogenetic analysis of multiple clinical isolates from geographically diverse sources. Using single nucleotide polymorphisms (SNPs)^[@R3]-[@R6]^ or large sequence polymorphisms (LSPs)^[@R7]-[@R9]^ as genetic markers resulted in congruent groupings of human-adapted MTBC into six major lineages and consistent geographical associations for each of these lineages[@R10]. In addition, these studies found strong evidence for a clonal population structure of MTBC, without evidence of ongoing horizontal gene transfer. Analysis of SNPs in a total of 7 megabases of DNA sequence from 89 genes in 108 isolates of MTBC provided strong evidence that MTBC originated in Africa, and underwent population expansion and diversification following ancient human migrations out of Africa, followed by global spread and return to Africa of three particularly successful MTBC lineages through recent waves of travel, trade, and conquest[@R3]. Taken together, these studies have revealed that MTBC has undergone genetic diversification that corresponds to patterns of human migration, suggesting that distinct lineages have co-evolved with distinct human populations[@R7]. Moreover, they indicate that further understanding of the mechanisms and consequences of the interactions between MTBC and its human host can be obtained through comparative genomic analyses.
Host-pathogen co-evolution is characterised by reciprocal adaptive changes in interacting species[@R11]. Host immune pressure and associated parasite immune evasion are key features of this process often referred to as an 'evolutionary arms-race'[@R12]-[@R13]. Studies in human pathogenic viruses, bacteria, and protozoa have revealed that genes encoding antigens tend to be highly variable as a consequence of diversifying selection to evade host immunity[@R14]-[@R17]. However, whether similar evolutionary mechanisms operate in MTBC, and whether the bacteria undergo antigenic variation in response to host immune pressure, is unknown.
Immunity to tuberculosis in humans, nonhuman primates, and mice depends on T lymphocytes[@R18]. Among human T lymphocyte subsets, CD4^+^ T cells are clearly essential for protective immunity to MTBC, as demonstrated by the observation that the incidence of active tuberculosis in people infected with HIV is inversely proportional to the number of circulating CD4^+^ T cells[@R19]. In addition to CD4^+^ T cell responses, humans infected with MTBC develop antigen-specific CD8^+^ T cell responses[@R20], and MTBC antigen-specific human CD8^+^ T cells lyse infected cells and contribute to killing of intracellular MTBC[@R21]. Therefore, there is strong evidence that the adaptive immune system, represented by CD4^+^ and CD8^+^ T cells, is an important mechanism for host recognition and control of MTBC. Recognition of foreign antigens by T lymphocytes depends on binding of short peptide fragments (termed epitopes), derived by proteolysis of foreign proteins, to MHC (major histocompatibility complex; in humans termed HLA (human leukocyte antigen)) proteins on the surface of macrophages and dendritic cells; CD4^+^ T cells recognize peptide epitopes bound to MHC/HLA class II; CD8^+^ T cells recognize peptide epitopes bound to MHC/HLA class I.
To obtain a better understanding of the effects of human T cell recognition on the diversity of MTBC, and to test the hypothesis that MTBC uses antigenic variation as one mechanism of evading elimination by human immune responses, we determined the genome sequences of 21 phylogeographically diverse strains of MTBC and used those genome sequences to analyze the diversity of 491 experimentally verified human T cell epitopes. This analysis produced the unexpected finding that the known human T cell epitopes are highly conserved relative to the rest of the MTBC genome. These results provide evidence that the relationship between MTBC and its human hosts may differ from that of a classical evolutionary arms-race, and suggest that development of new approaches to control of tuberculosis must take into account the possibility that certain human immune responses may actually benefit MTBC.
RESULTS {#S1}
=======
A Genome-wide Phylogeny of Human-adapted MTBC {#S2}
---------------------------------------------
A total of 22 mycobacterial strains were included in this work. To study the sequence diversity of T cell antigens in MTBC, we used Illumina next-generation DNA sequencing to generate nearly complete genome sequences from 20 strains representative of the six main human MTBC lineages, and one strain of *Mycobacterium canettii*, which is the closest known outgroup of MTBC[@R3],[@R22] ([Table 1](#T1){ref-type="table"}). In addition, we used the published genome sequence of the H37Rv laboratory strain of *M. tuberculosis* as a common reference[@R23]. For each of the 21 strains newly sequenced, a mean of 6.8 million sequence reads with a mean length of 51 base pairs were generated and mapped to the H37Rv reference genome. On average, the reads covered 98.9% of the 4.4 Mb reference genome ([Table 1](#T1){ref-type="table"}). The regions not covered primarily included members of the highly GC-rich and repetitive PE/PPE gene families[@R24]. A total of 32,745 SNPs were identified, corresponding to an average of 1 SNP call for every 3 kb of sequence generated. We used a total of 9,037 unique SNPs (i.e. SNPs that occurred in one or several strains) to derive a genome-wide phylogeny of 22 strains ([Fig. 1](#F1){ref-type="fig"}**,** [Supplementary Fig. 1](#SD2){ref-type="supplementary-material"}). Six main lineages could be distinguished with high statistical support. These lineages were completely congruent with the strain groupings previously defined based on genomic deletion analysis and multilocus sequencing[@R3],[@R7],[@R10]. The perfect congruence between these different phylogenetic markers further corroborates the highly clonal population structure of MTBC and lack of ongoing horizontal gene transfer in this organism[@R25]. Because of the comprehensive nature of genome-scale data, a higher degree of phylogenetic resolution could be achieved compared to all previous studies.
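For readers less familiar with SNP-based phylogenetics: distance-based tree building starts from pairwise differences between strains. The sketch below computes a Hamming-distance matrix from per-strain allele strings over the variable positions; it is an illustrative helper with hypothetical inputs, not the pipeline actually used in the study, which mapped reads to H37Rv and applied phylogenetic software to the full SNP alignment.

```cpp
#include <string>
#include <vector>
#include <cstddef>

// Pairwise SNP (Hamming) distances between strains, each strain encoded as
// its allele at every variable position (all strings assumed equal length).
// Such a matrix is the input to distance-based tree-building methods.
std::vector<std::vector<int>>
snp_distance_matrix(const std::vector<std::string>& strains)
{
    const std::size_t n = strains.size();
    std::vector<std::vector<int>> d(n, std::vector<int>(n, 0));
    for (std::size_t a = 0; a < n; ++a)
        for (std::size_t b = a + 1; b < n; ++b)
        {
            int diff = 0;  // count positions where the alleles differ
            for (std::size_t k = 0; k < strains[a].size(); ++k)
                if (strains[a][k] != strains[b][k])
                    ++diff;
            d[a][b] = d[b][a] = diff;  // distances are symmetric
        }
    return d;
}
```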
In this new phylogeny the brown and green lineages (also known as *Mycobacterium africanum*) are the most basal groups when compared to the *M. canettii* outgroup. *M. africanum* is highly restricted to West Africa for reasons that remain unclear[@R8]. However, the fact that the two *M. africanum* lineages represent the most ancestral forms of human MTBC reinforces the notion that human MTBC originated in Africa[@R3],[@R7].
Evolutionary Conservation Across Gene Categories {#S3}
------------------------------------------------
We used these genome sequence data and the phylogeny derived from them to compare the genetic diversity in antigens and other experimentally determined gene classes. For comparisons across different gene categories, we divided our dataset into three gene sets, including 'essential genes', 'non-essential genes', and 'antigens' ([Supplementary Fig. 2](#SD2){ref-type="supplementary-material"}**,** [Supplementary Tables 1 and 2](#SD2){ref-type="supplementary-material"}). Antigens were defined based on the presence of 491 experimentally confirmed human T cell epitopes ([Supplementary Table 3](#SD2){ref-type="supplementary-material"}), which were compiled through the Immune Epitope Database (IEDB) initiative[@R26]. The 'essential' gene category was defined based on genome-wide analyses of transposon insertion mutants that were defective for the ability to grow on Middlebrook 7H11 agar, or in the spleens of intravenously-infected mice, published previously[@R27]-[@R28]. We excluded from this analysis genes belonging to the PE/PPE gene family[@R24] and those related to mobile elements as they are difficult to study using current next-generation DNA sequencing technologies (total genes excluded: 273/3,990 (6.8%) genes annotated in the H37Rv reference genome; [Supplementary Table 4](#SD2){ref-type="supplementary-material"}).
Based on evolutionary theory and findings in other bacteria[@R29], one would expect that in contrast to non-essential genes, the essential genes in MTBC will be under stronger purifying selection and thus more evolutionarily conserved. In support of this notion, we observed that on average essential genes harboured less nucleotide diversity than non-essential genes ([Fig. 2](#F2){ref-type="fig"}; Mann-Whitney U test p\<0.002). We then compared the rates of synonymous and non-synonymous SNPs in the essential and non-essential gene categories. The synonymous and non-synonymous changes were derived by comparison to the most likely recent common ancestor of MTBC, which we inferred based on our new genome-wide phylogeny ([Fig. 1](#F1){ref-type="fig"}**,** [Supplementary Fig. 1](#SD2){ref-type="supplementary-material"}). Because MTBC harbours little sequence diversity, it was necessary to analyze the distribution of synonymous and non-synonymous SNPs based on gene concatenates rather than individual genes. The two measures of distribution we used were based on the number of non-redundant SNPs across all 21 MTBC strains (dN/dS based on Measure A in [Table 2](#T2){ref-type="table"} and [Fig. 3](#F3){ref-type="fig"}), and on the individual pairwise comparisons between each strain and the inferred most likely recent common ancestor (dN/dS based on Measure B in [Table 2](#T2){ref-type="table"}). From these analyses, we found that the dN/dS measures for essential genes were significantly lower than for non-essential genes (Measure A in [Fig. 3](#F3){ref-type="fig"}; Measure B in [Table 2](#T2){ref-type="table"}, Mann-Whitney U test p\<0.0001). Taken together, these data show that in MTBC essential genes are more evolutionarily conserved than non-essential genes.
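For clarity, the dN/dS statistic used throughout is the rate of non-synonymous change per non-synonymous site divided by the rate of synonymous change per synonymous site; values below 1 indicate purifying selection, values above 1 diversifying selection. A minimal sketch of the ratio, with all counts in the example purely hypothetical (not the paper's data):

```cpp
#include <stdexcept>

// dN/dS from substitution counts pooled over a gene concatenate
// (the spirit of "Measure A" in the text).
double dn_ds(double nonsyn_changes, double nonsyn_sites,
             double syn_changes, double syn_sites)
{
    if (nonsyn_sites <= 0.0 || syn_sites <= 0.0 || syn_changes <= 0.0)
        throw std::invalid_argument("site and synonymous-change counts must be positive");
    const double dn = nonsyn_changes / nonsyn_sites;  // non-synonymous rate
    const double ds = syn_changes / syn_sites;        // synonymous rate
    return dn / ds;
}
```

For example, 30 non-synonymous changes over 3,000 non-synonymous sites against 20 synonymous changes over 1,000 synonymous sites gives dN/dS = 0.01 / 0.02 = 0.5 (hypothetical numbers).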
Because MTBC interacts with humans through antigen-specific CD4^+^ or CD8^+^ T-cells, we would expect T cell antigens to be among the most diverse genes in the genome. Particularly when invoking a co-evolutionary arms-race and associated immune evasion, we would anticipate these antigens to be under diversifying selection and to be more variable than other genes in order to escape T cell recognition. However, when we analyzed the nucleotide diversity in 78 experimentally confirmed human T cell antigens ([Supplementary Table 2](#SD2){ref-type="supplementary-material"}), we found that they were on average not more diverse than essential genes ([Fig. 2](#F2){ref-type="fig"}, Mann-Whitney U test p=0.12). Moreover, we found that the dN/dS measures in these antigens also resembled those of essential genes (Measure A in [Fig. 3](#F3){ref-type="fig"}; Measure B in [Table 2](#T2){ref-type="table"}, Mann-Whitney U test p=0.77). Thus, human T cell antigens in MTBC do not appear to be under diversifying selection. Instead, purifying selection appears to be the driving selection pressure in these genes.
T Cell Epitopes are Hyperconserved {#S4}
----------------------------------
T cell antigens consist of epitope regions that interact with human T cells, and non-epitope regions which are not targets of T cell recognition. Hence, we decided to study these regions separately. To this end, we generated a separate concatenate of the epitope regions and another concatenate of all corresponding non-epitope regions. Because little data is currently available in the IEDB with respect to whether these 491 epitopes are recognized by CD4^+^ or CD8^+^ T cells, we analyzed them as one class. If immune escape were driving antigen evolution to evade T cell recognition in MTBC, we would expect non-synonymous changes to accumulate in epitope regions, leading to a high dN/dS. Contrary to this expectation, however, the overall dN/dS of the epitope regions was 0.53, which was still similar to essential genes and lower than non-essential genes ([Table 2](#T2){ref-type="table"}**,** [Fig. 3](#F3){ref-type="fig"}). Moreover, when we analyzed the distribution of amino acid replacements in individual epitopes we found that the large majority (95%) of the 491 epitopes showed no amino acid change ([Fig. 4](#F4){ref-type="fig"}). Only five epitopes, contained in *esxH, pstS1*, and Rv1986, harboured more than one variable position ([Supplementary Table 5](#SD2){ref-type="supplementary-material"}). The higher number of amino acid substitutions in these five epitopes may reflect ongoing immune evasion, but further investigation is needed to determine whether the observed changes are due to immune pressure, other selection pressure(s), or mere random genetic drift[@R3]. Because these five epitopes were clear outliers compared to the large majority of T cell epitopes analyzed here, we repeated our dN/dS analysis after excluding the three antigens harbouring the five outlier epitopes. Our analysis revealed that the epitope regions had the lowest dN/dS of all gene categories ([Table 2](#T2){ref-type="table"}, [Fig. 3](#F3){ref-type="fig"}).
Furthermore, when we compared the proportion of non-redundant non-synonymous changes in epitope and non-epitope regions, we found that epitopes were less likely than non-epitopes to harbour changes at non-synonymous sites (Measure A in [Table 2](#T2){ref-type="table"}, χ^2^, p\<0.05), whereas no difference was observed at synonymous sites ([Table 2](#T2){ref-type="table"}, χ^2^, p=0.89).
To further corroborate our finding of hyperconservation of human T cell epitopes in MTBC, we repeated our analysis using a data set from a previous study in which 89 individual genes were sequenced in 99 human-adapted strains representative of the six major global lineages of MTBC[@R3]. Sixteen of these 89 genes belonged to the T cell antigens analyzed here, including two of the three outlier antigens *esxH* and *pstS1*^[@R3]^. Analysis of this additional dataset of 16 antigens in 99 MTBC strains revealed an overall dN/dS for the epitope regions of 0.74. However, after excluding the two outlier antigens, the dN/dS dropped to 0.46, which was again lower than the genome-based dN/dS values for essential and non-essential genes ([Fig. 3](#F3){ref-type="fig"}).
Taken together, our findings strongly suggest that a large proportion of the MTBC genome known to interact with human T cells is highly conserved and under as strong, or perhaps even stronger, purifying selection than essential genes.
DISCUSSION {#S5}
==========
In this study of 22 MTBC genomes, we demonstrate that, as expected, essential genes are more conserved than non-essential genes. These results are in agreement with a previous study which analyzed a single genome[@R30]. Surprisingly, however, we found that the large majority of the currently known T cell antigens are as conserved as essential genes. Furthermore, the epitope regions of these antigen genes are the most highly conserved regions we studied. This observation, that the regions of the genome that interact with the human adaptive immune system appear to be under even stronger purifying selection than essential genes, is inconsistent with a classical model of an evolutionary arms-race.
It is possible that the known human T cell epitopes that we found to be hyperconserved represent a select subset of all of the human T cell epitopes encoded in the genome, and that certain approaches to epitope identification have favoured discovery of hyperconserved epitopes in MTBC. For example, since most, if not all of the epitope discovery efforts to date have utilized proteins and/or peptide sequences of strains from one lineage (lineage 4) and T cells from humans that are likely to have been infected by strains of other lineages, the assays used may have been especially suited to identification of hyperconserved and/or cross-reactive epitopes. While further investigation using alternative approaches to epitope discovery may reveal that variable epitopes that exhibit evidence of positive selection exist in the MTBC, it is likely that the large number of epitopes that we examined will remain a significant subset of the total, and that future vaccine development efforts will need to account for the possibility that immune recognition of certain epitopes may actually provide a net benefit to the bacteria.
Lack of antigenic variation and immune evasion has been reported for a number of other human pathogens, including RNA viruses such as measles, mumps, rubella, and influenza type C[@R31]. Theoretical studies have suggested that the absence of immune escape variants in these viruses might be due to structural constraints in viral proteins or negative mutational effects leading to reduced infectivity or transmission[@R31]. While we cannot exclude the possibility that structural and functional constraints that are independent of T cell recognition contribute to hyperconservation of the regions encoding MTBC peptides recognized by human T cells, one important characteristic of the aforementioned viral pathogens is that they spread among young and immunologically naive hosts, which might eliminate the need for immune evasion[@R31]. Moreover, infection by these viruses usually results in acute disease, followed by elimination of the infection through adaptive immunity, and acquisition of lifelong immunity against re-infection. This further indicates that these viruses are specialized pathogens of immunologically naive hosts. By contrast, MTBC causes chronic and often lifelong infections, and adaptive immunity is usually unable to completely clear the infection[@R18]. Furthermore, tuberculosis patients are prone to re-infection[@R32], and mixed infections are also increasingly recognized[@R33]. These observations suggest that the biological basis for the lack of antigenic variation in MTBC reported here differs from what has been proposed for antigenically homogeneous RNA viruses[@R31]. In addition, we determined that the fraction of hyperconserved T cell epitopes of the MTBC that are derived from essential genes is indistinguishable from the frequency of essential genes in the MTBC genome as a whole (18% versus 21%, respectively; χ^2^ = 0.28, p = 0.59), indicating that our results were not skewed by over-representation of T cell epitopes in essential genes.
Moreover, the T cell epitopes that we analyzed are present in genes from diverse gene ontologies, and the representation of five main gene categories (defined based on the NCBI Categories of Orthologous Groups (COG)) was no different in the T cell antigens when compared to the genome overall (χ^2^ with 4 degrees of freedom = 5.8, p = 0.21; [Supplementary Table 6](#SD2){ref-type="supplementary-material"}). Hence the only identifiable common property of these regions is their recognition by human T lymphocytes. These findings suggest that T lymphocyte recognition is an important factor in hyperconservation of these sequences, and that other structural or functional constraints are unlikely to fully account for the lack of sequence variation in these domains.
Our data suggest that T cell epitopes in MTBC are under strong selection pressure to be maintained, perhaps because the immune response they elicit in humans, which is essential for survival of an individual host, might partially work towards the pathogen's benefit. One potential mechanism of benefit to MTBC from human T cell recognition is that human T cell responses are essential for MTBC to establish latent infection. This notion is supported by the fact that CD4^+^ T cell-deficient HIV-positive individuals progress rapidly to active disease after infection, rather than sustaining prolonged periods of latent tuberculosis[@R34]. Latent infection mediated by host T cell responses, with subsequent reactivation to active disease often occurring decades after initial infection, is a key characteristic of human tuberculosis, and might have evolved as a way for MTBC to transmit to later generations of susceptible hosts[@R35]. In addition, there is evidence that T cell responses may contribute directly to human-to-human transmission of MTBC. In particular, cavitary tuberculosis, which generates secondary cases more efficiently than other disease forms[@R36], rarely occurs in CD4^+^ T cell-deficient HIV-positive individuals, and the frequency of cavitary lung lesions in HIV-infected patients with tuberculosis is directly correlated with the number of peripheral CD4^+^ T cells[@R37]. While the mechanisms of lung cavitation in tuberculosis are poorly understood, these observations suggest that CD4^+^ T cells directly or indirectly mediate tissue damage in tuberculosis, and together with our finding of epitope hyperconservation indicate that certain T cell responses may be detrimental to the host and beneficial to the pathogen. Hence our findings suggest that MTBC takes advantage of host adaptive immunity to increase its likelihood of spread, and that the benefits of enhanced transmission exceed the costs of within-host cellular immune responses to these epitopes.
In this manner, MTBC may resemble HIV, for which there is evidence that virulence has evolved, not to maximize replication of the virus within individual hosts, but to maximize the likelihood of its transmission[@R38]. Whether T cell responses to other epitopes, or whether specific T cell subsets (e.g. Th17 versus Th1) that benefit the host and not the bacteria can be identified will require additional studies in humans.
One limitation of this study was the exclusion of PE/PPE genes because of technical reasons. Some of these genes are known to vary and to be cell-surface exposed, which has led to the hypothesis that they might be involved in antigenic variation[@R24]. However, no direct evidence for this has yet been presented. Future work will need to clarify the function and evolution of PE/PPE genes. By contrast, all the T cell antigens included in this study have been experimentally confirmed[@R26]. Furthermore, some of them are being targeted by new tuberculosis diagnostics and vaccines[@R39]. Our findings thus have important implications for the development of these new tools. On the one hand, the fact that MTBC harbours little sequence diversity in T cell antigens will facilitate the development of diagnostics that are universally applicable across geographical regions where MTBC strains differ[@R8]. On the other hand, the possibility that the immune responses induced by vaccine antigens might partially benefit the pathogen suggests that current efforts in vaccine research should be broadened. Most disturbing is the suggestion that vaccine-induced immunity against these conserved epitopes may perversely increase transmission. In this respect, it is interesting to note that the currently available tuberculosis vaccine Bacille-Calmette-Guerin (BCG), which is a live vaccine based on an attenuated form of *M. bovis*, offers no protection against pulmonary tuberculosis in adults[@R40]. More importantly, some clinical trials of BCG have even reported an increased risk of tuberculosis in vaccinees compared to unvaccinated individuals[@R41]. Thus, in contrast to standard reverse vaccinology, in which the least variable antigens of a genome are targeted[@R42], research into new tuberculosis vaccines should explore more variable regions of the MTBC genome.
While most of the T cell epitopes analyzed here were highly conserved, five epitopes in three antigens harboured a larger number of amino acid changes. The fact that the dN/dS measure dropped sharply after excluding these outlier antigens from the analysis further supports the notion that they are indeed outliers compared to the other antigens. One of these outlier antigens, *esxH* (Rv0288, also known as TB10.4), is a member of a gene family known to encode a Type VII secretion system[@R43]. Importantly, this antigen is being considered as a new vaccine antigen against tuberculosis[@R39]. Thus even though most of the other vaccine antigens analyzed here are conserved, our finding that this particular vaccine antigen harbours a comparatively high number of amino acid substitutions across a panel of global MTBC isolates suggests that strain diversity should be considered during further development of the new vaccine candidates containing *esxH*[@R8].
We detected significant differences in dN/dS between essential, non-essential, and antigenic genes. However, the individual dN/dS values remain high when compared to most other bacteria[@R44]. Such a high dN/dS was reported previously for MTBC, and has been linked to reduced selective constraint against slightly deleterious mutations[@R3]. It was proposed that the serial transmission bottlenecks associated with patient-to-patient transmission of MTBC could lead to an increase in random genetic drift relative to the forces of natural selection. Our new data show that even though the strength of purifying selection in MTBC might be reduced overall compared to other bacteria, it is clearly still acting on, and capable of differentiating between, gene categories.
In summary, we show that T cell epitopes of MTBC are highly conserved, and do not reflect any ongoing evolutionary arms-race or immune-evasion. Instead, the patterns observed might be indicative of a distinct evolutionary strategy of immune-subversion developed by this highly successful pathogen. Other intracellular bacteria such as *Salmonella enterica* serovar Typhi exhibit a similar lack of antigenic variation[@R45], suggesting comparable mechanisms might exist in other pathogens with a similar lifestyle.
Supplementary Material {#SM}
======================
We thank Fernando Gonzalez-Candelas, Sonia Borrell, and Douglas Young for comments on the manuscript. This project has been funded in whole or in part with Federal funds from the National Institute of Allergy and Infectious Disease, National Institutes of Health, Department of Health and Human Services, under Contract No. HHSN266200400001C. J.C. is a Howard Hughes Medical Institute Research Training Fellow. J.D.E. was supported by NIH grants AI046097 and AI051242, and S.G. by the Medical Research Council, UK, the Royal Society, the Swiss National Science Foundation, and NIH grants HHSN266200700022C and AI034238.
METHODS
Methods and any associated references are available in the online version of the paper at <http://www.nature.com/naturegenetics/>.
**Accession codes.** The sequencing reads have been submitted to the NCBI Sequence Read Archive (SRA) with accession codes SRX002001-SRX002005, SRX002429, SRX003589, SRX003590, SRX005394, SRX007715, SRX007716, SRX007718-SRX007726, and SRX012272. Sequence and SNP data are also available at the Tuberculosis Database (TBDB).
COMPETING INTEREST STATEMENT
The authors declare no competing financial interests.
![Neighbour-joining phylogeny based on 9,037 variable common nucleotide positions across 21 human *M. tuberculosis* complex genome sequences. The tree is rooted with *M. canettii*, the closest known outgroup. Node support following 1,000 bootstrap replications is indicated. Branches are coloured according to the six main phylogeographic lineages of MTBC defined previously[@R3],[@R7]-[@R8]. Highly congruent topologies were obtained by Maximum likelihood and Bayesian inference ([Supplementary Fig. 1](#SD2){ref-type="supplementary-material"}).](ukmss-29888-f0001){#F1}
{#F2}
{#F3}
{#F4}
######
Strains used in this study, sequencing coverage, and number of raw and filtered SNPs after comparison to the H37Rv reference genome
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Strain Lineage[a](#TFN1){ref-type="table-fn"} Origin Average mapped\ Number\ Percent genome\ Raw SNPs Filtered SNPs
sequencing depth of reads coverage[b](#TFN2){ref-type="table-fn"}
-------------- ---------------------------------------- ----------------- ------------------ ----------- ----------------------------------------- ---------- ---------------
MTB_95_0545 Lineage 1 Laos 77.37 7,621,946 99.75 3,478 2,017
MTB_K21 Lineage 1 Zimbabwe 77.99 7,112,888 99.29 2,853 2,151
MTB_K67 Lineage 1 Comoro Islands 78.29 7,097,284 98.95 2,943 2,070
MTB_K93 Lineage 1 Tanzania 65.52 6,017,391 99.22 2,949 2,041
MTB_T17 Lineage 1 The Philippines 72.59 7,130,412 99.36 3,788 1,988
MTB_T92 Lineage 1 The Philippines 46.01 5,068,053 98.85 4,080 1,994
MTB_00_1695 Lineage 2 Japan 77.92 7,394,236 99.02 2,875 1,351
MTB_98_1833 Lineage 2 China 64.49 6,395,114 99.1 2,962 1,361
MTB_M4100A Lineage 2 South Korea 40.47 4,022,290 98.94 3,316 1,354
MTB_T67 Lineage 2 China 78.77 7,616,603 98.73 2,820 1,343
MTB_T85 Lineage 2 China 61.65 6,159,284 99.04 3,046 1,377
MTB_91_0079 Lineage 3 Ethiopia 74.03 7,228,038 99.14 2,920 1,363
MTB_K49 Lineage 3 Tanzania 75.52 6,845,266 99.25 2,195 1,416
H37Rv Lineage 4 USA Reference
MTB_4783_04 Lineage 4 Sierra-Leone 78.12 7,466,814 98.78 1,559 741
MTB_GM_1503 Lineage 4 The Gambia 82.26 7,891,933 99.08 2,283 782
MTB_K37 Lineage 4 Uganda 59.86 5,480,451 98.85 2,496 822
MAF_11821_03 Lineage 5 Sierra-Leone 78.22 7,491,737 99.02 3,741 2,102
MAF_5444_04 Lineage 5 Ghana 79.75 7,578,690 98.92 3,686 2,079
MAF_4141_04 Lineage 6 Sierra-Leone 72.62 7,027,143 98.61 3,886 2,180
MAF_GM_0981 Lineage 6 The Gambia 76.39 7,350,873 99 4,451 2,213
MTB_K116 *M. canettii* Somalia 93.01 6,544,254 96.32 19,008 14,730
Total MTBC 62,327 32,745
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
**Notes:**
Defined as in [@R8].
Compared to the reference genome H37Rv.
######
Distribution of synonymous and non-synonymous SNPs in gene concatenates
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Gene category Length of\ Measure A[a](#TFN3){ref-type="table-fn"} Measure B[b](#TFN4){ref-type="table-fn"}
Concatenate\
(base pairs)
-------------------------------------------------- -------------- ------------------------------------------ ------------------------------------------ ------ ----------------------------------- -----------
**Essential** 907,584 1,124.83 755.17 1.49 0.53 0.45-0.67
**Non-essential** 2,674,329 4,392.51 2,338.49 1.88 0.65 0.56-0.78
**Antigens** 81,660 126.5 87.5 1.45 0.57 0.17-1.15
**Epitopes** 12,234 19 12 1.58 na[d](#TFN6){ref-type="table-fn"} na
**Epitopes** [c](#TFN5){ref-type="table-fn"} 11,088 9 12 0.75 na na
**Non-epitopes** [c](#TFN5){ref-type="table-fn"} 68,556 106.5 75.5 1.41 na na
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
**Notes:**
The number of non-redundant synonymous and non-synonymous SNPs after mapping the changes onto the phylogeny shown in [Figure 1](#F1){ref-type="fig"}. An overall dN/dS was calculated based on these SNPs and is shown in [Figure 3](#F3){ref-type="fig"} (Measure A; see Materials and Methods).
Calculated using Measure B. The median dN/dS was calculated from the 21 strain specific dN/dS values. This measure of dN/dS could only be calculated for the essential, non-essential and antigen categories because in the epitope and non-epitope concatenates some strains had zero values for synonymous or non-synonymous changes.
After exclusion of the three outlier antigens *esxH, pstS1*, and Rv1986 (see main text).
not applicable.
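As a reader's sketch (not the authors' actual pipeline), the fourth column of the table above is simply the raw ratio of non-synonymous to synonymous SNP counts; a full dN/dS additionally normalizes each count by the number of non-synonymous and synonymous sites in the concatenate, for which the site counts below are hypothetical placeholders:

```python
def ns_s_ratio(nonsyn_snps, syn_snps):
    """Raw non-synonymous/synonymous SNP count ratio (fourth column of Table 2)."""
    return nonsyn_snps / syn_snps

def dn_ds(nonsyn_snps, syn_snps, nonsyn_sites, syn_sites):
    """dN/dS: per-site rate of non-synonymous change over per-site rate of
    synonymous change. Site counts depend on the codon composition of the
    concatenate and are hypothetical inputs here."""
    return (nonsyn_snps / nonsyn_sites) / (syn_snps / syn_sites)

# Epitope concatenate, all antigens: 19 non-synonymous vs 12 synonymous SNPs
print(round(ns_s_ratio(19, 12), 2))  # 1.58, matching Table 2
# After excluding the three outlier antigens: 9 vs 12
print(round(ns_s_ratio(9, 12), 2))   # 0.75, matching Table 2
```

The same function reproduces the essential (1,124.83/755.17 = 1.49) and non-essential (4,392.51/2,338.49 = 1.88) rows.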
[^1]: AUTHOR CONTRIBUTION STATEMENTS
I.C., J.D.E. and S.G. designed the study; P.M.S., S.N., K.K. and S.G. contributed sources of *M. tuberculosis* DNA and demographic information; I.C., J.C. and J.G. performed DNA sequencing and bioinformatics; I.C., P.M.S., J.D.E. and S.G. wrote the manuscript with comments from all authors.
|
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
import os
import unittest
from datetime import datetime
from azure_devtools.scenario_tests import AllowLargeResponse
from .utilities.helper import DevopsScenarioTest, disable_telemetry, set_authentication, get_test_org_from_env_variable
DEVOPS_CLI_TEST_ORGANIZATION = get_test_org_from_env_variable() or 'https://dev.azure.com/azuredevopsclitest'
class AdminBannerTests(DevopsScenarioTest):
    @AllowLargeResponse(size_kb=3072)
    @disable_telemetry
    @set_authentication
    def test_admin_banner_addUpdateShowListRemove(self):
        self.cmd('az devops configure --defaults organization=' + DEVOPS_CLI_TEST_ORGANIZATION)
        admin_banner_message = 'Sample banner message'
        admin_banner_type = 'warning'
        admin_banner_updated_message = 'Sample updated banner message'
        admin_banner_updated_type = 'error'
        admin_banner_id = self.create_random_name(prefix='banner-id-', length=15)
        admin_banner_expiration_date = datetime.today().strftime('%Y-%m-%d')
        try:
            # add a banner to the project
            add_admin_banner_command = ('az devops admin banner add --id ' + admin_banner_id + ' --message "' + admin_banner_message + '" --type ' + admin_banner_type +
                                        ' --expiration ' + admin_banner_expiration_date +
                                        ' --output json --detect false --debug')
            add_admin_banner_output = self.cmd(add_admin_banner_command).get_output_in_json()
            assert len(add_admin_banner_output) > 0
            assert add_admin_banner_output[admin_banner_id]["level"] == admin_banner_type
            assert add_admin_banner_output[admin_banner_id]["message"] == admin_banner_message
            from azext_devops.dev.common.arguments import convert_date_string_to_iso8601
            iso_date = convert_date_string_to_iso8601(admin_banner_expiration_date)
            assert add_admin_banner_output[admin_banner_id]["expirationDate"] == iso_date
            # Test was failing without a sleep here, though the create was successful when queried a few seconds later.
            self.sleep_in_live_run(5)
            # update banner
            update_admin_banner_command = ('az devops admin banner update --id ' + admin_banner_id + ' --message "' + admin_banner_updated_message +
                                           '" --expiration ' + '""' +
                                           ' --type ' + admin_banner_updated_type + ' --output json --detect false')
            update_admin_banner_output = self.cmd(update_admin_banner_command).get_output_in_json()
            assert len(update_admin_banner_output[admin_banner_id]) > 0
            assert update_admin_banner_output[admin_banner_id]["level"] == admin_banner_updated_type
            assert update_admin_banner_output[admin_banner_id]["message"] == admin_banner_updated_message
            assert update_admin_banner_output[admin_banner_id]["expirationDate"] == ''
            # Test was failing without a sleep here, though the update was successful when queried a few seconds later.
            self.sleep_in_live_run(5)
            # list banner command
            list_admin_banner_command = 'az devops admin banner list --output json --detect false'
            list_admin_banner_output = self.cmd(list_admin_banner_command).get_output_in_json()
            assert len(list_admin_banner_output[admin_banner_id]) > 0
            assert list_admin_banner_output[admin_banner_id]["level"] == admin_banner_updated_type
            assert list_admin_banner_output[admin_banner_id]["message"] == admin_banner_updated_message
            # show banner command
            show_admin_banner_command = 'az devops admin banner show --id ' + admin_banner_id + ' --output json --detect false'
            show_admin_banner_output = self.cmd(show_admin_banner_command).get_output_in_json()
            assert len(show_admin_banner_output[admin_banner_id]) > 0
            assert show_admin_banner_output[admin_banner_id]["level"] == admin_banner_updated_type
            assert show_admin_banner_output[admin_banner_id]["message"] == admin_banner_updated_message
        finally:
            # TestCleanup - remove admin banner
            remove_admin_banner_command = 'az devops admin banner remove --id ' + admin_banner_id + ' --output json --detect false'
            self.cmd(remove_admin_banner_command)
            # Verify remove
            # Test was failing without a sleep here, though the remove was successful.
            self.sleep_in_live_run(5)
            list_admin_banner_command = 'az devops admin banner list --output json --detect false'
            list_admin_banner_output = self.cmd(list_admin_banner_command).get_output_in_json()
            assert admin_banner_id not in list(list_admin_banner_output.keys())
|
.set noat # allow manual use of $at
.set noreorder # don't insert nops after branches
glabel func_00400090
/* 000090 00400090 03E00008 */ jr $ra
/* 000094 00400094 24820001 */ addiu $v0, $a0, 1
/* 000098 00400098 03E00008 */ jr $ra
/* 00009C 0040009C 00000000 */ nop
/* 0000A0 004000A0 03E00008 */ jr $ra
/* 0000A4 004000A4 00000000 */ nop
glabel test
/* 0000A8 004000A8 27BDFFD8 */ addiu $sp, $sp, -0x28
/* 0000AC 004000AC AFBF0014 */ sw $ra, 0x14($sp)
/* 0000B0 004000B0 AFA40028 */ sw $a0, 0x28($sp)
/* 0000B4 004000B4 AFA5002C */ sw $a1, 0x2c($sp)
/* 0000B8 004000B8 AFA60030 */ sw $a2, 0x30($sp)
/* 0000BC 004000BC AFA70034 */ sw $a3, 0x34($sp)
/* 0000C0 004000C0 8FAE0028 */ lw $t6, 0x28($sp)
/* 0000C4 004000C4 8FAF002C */ lw $t7, 0x2c($sp)
/* 0000C8 004000C8 01CFC021 */ addu $t8, $t6, $t7
/* 0000CC 004000CC AFB80024 */ sw $t8, 0x24($sp)
/* 0000D0 004000D0 8FB9002C */ lw $t9, 0x2c($sp)
/* 0000D4 004000D4 8FA80030 */ lw $t0, 0x30($sp)
/* 0000D8 004000D8 03284821 */ addu $t1, $t9, $t0
/* 0000DC 004000DC AFA90020 */ sw $t1, 0x20($sp)
/* 0000E0 004000E0 AFA0001C */ sw $zero, 0x1c($sp)
/* 0000E4 004000E4 8FAA0024 */ lw $t2, 0x24($sp)
/* 0000E8 004000E8 1540000D */ bnez $t2, .L00400120
/* 0000EC 004000EC 00000000 */ nop
/* 0000F0 004000F0 8FAB0020 */ lw $t3, 0x20($sp)
/* 0000F4 004000F4 1560000A */ bnez $t3, .L00400120
/* 0000F8 004000F8 00000000 */ nop
/* 0000FC 004000FC 0C100024 */ jal func_00400090
/* 000100 00400100 01602025 */ move $a0, $t3
/* 000104 00400104 AFA20020 */ sw $v0, 0x20($sp)
/* 000108 00400108 8FAC0020 */ lw $t4, 0x20($sp)
/* 00010C 0040010C 15800004 */ bnez $t4, .L00400120
/* 000110 00400110 00000000 */ nop
/* 000114 00400114 8FAD0034 */ lw $t5, 0x34($sp)
/* 000118 00400118 11A00003 */ beqz $t5, .L00400128
/* 00011C 0040011C 00000000 */ nop
.L00400120:
/* 000120 00400120 240E0001 */ addiu $t6, $zero, 1
/* 000124 00400124 AFAE001C */ sw $t6, 0x1c($sp)
.L00400128:
/* 000128 00400128 10000003 */ b .L00400138
/* 00012C 0040012C 8FA2001C */ lw $v0, 0x1c($sp)
/* 000130 00400130 10000001 */ b .L00400138
/* 000134 00400134 00000000 */ nop
.L00400138:
/* 000138 00400138 8FBF0014 */ lw $ra, 0x14($sp)
/* 00013C 0040013C 27BD0028 */ addiu $sp, $sp, 0x28
/* 000140 00400140 03E00008 */ jr $ra
/* 000144 00400144 00000000 */ nop
/* 000148 00400148 00000000 */ nop
/* 00014C 0040014C 00000000 */ nop
|
Q:
jQuery get and displaying html results
I have the following jQuery get:
$.get("rssread.cfm?number=10", function (d) {
    $('#feedContent').append($(d).html());
});
The get works fine and what it returns is some html of unordered lists and their items and anchor tags. No big deal. I know that the html is there and properly formed because I can do
$(d).find("ul li a").each(function (i) {
    alert("text: " + $(this).text());
    alert("href: " + $(this).attr('href'));
});
and see the data in the tags (by the alert). However, I really would like to view all of the HTML in $(d) so that I can do some testing, but for the life of me I cannot figure out how to put the contents of $(d) into the div or a span tag!!
The div tag is simply:
<div id="feedContent">
</div>
And this is the part that doesn't work (as seen at the very top of my post) that I expected would: the div comes up empty every time even though I know that $(d) contains the html.
$('#feedContent').append($(d).html());
Thanks for any help in advance!
A:
Try simply doing this, if you're sure html is returned:
$('#feedContent').append( d );
When you pass the raw response string, jQuery parses the whole thing and inserts every node. By contrast, $(d).html() returns only the inner HTML of the first element in the matched set, so the wrapping <ul> (and anything else at the top level, such as leading text nodes) is dropped; depending on what the response starts with, you can end up inserting nothing at all. If you want to replace the div's contents rather than append, $('#feedContent').html( d ); works too.
|
The efficacy of a short education program and a short physiotherapy program for treating low back pain in primary care: a cluster randomized trial.
Cluster randomized clinical trial. To assess the efficacy of a short education program and short physiotherapy program for treating low back pain (LBP) in primary care. There is sparse evidence on the effectiveness of education and physiotherapy programs that are short enough to be feasible in primary care. Sixty-nine primary care physicians were randomly assigned to 3 groups and recruited 348 patients consulting for LBP; 265 (79.8%) were chronic. All patients received usual care, were given a booklet, and received a consistent 15-minute group talk on health education, which focused on healthy nutrition habits in the control group, and on active management for LBP in the "education" and "education + physiotherapy" groups. Additionally, in the "education + physiotherapy" group, patients were given a second booklet, a 15-minute group talk on postural hygiene, and 4 one-hour physiotherapy sessions of exercise and stretching, which they were encouraged to keep practicing at home. The main outcome measure was improvement of LBP-related disability at 6 months. Patients' assessment and data analyses were blinded. During the 6-month follow-up period, improvement in the "control" group was negligible. Additional improvement in the "education" and "education + physiotherapy" groups was found for disability (2.0 and 2.2 Roland Morris Questionnaire points, respectively), LBP (1.8 and 2.10 Visual Analogue Scale points), referred pain (1.3 and 1.6 Visual Analogue Scale points), catastrophizing (1.6 and 1.8 Coping Strategies Questionnaire points), physical quality of life (2.9 and 2.9 SF-12 points), and mental quality of life (3.7 and 5.1 SF-12 points). The addition of a short education program on active management to usual care in primary care leads to small but consistent improvements in disability, pain, and quality of life.
The addition of a short physiotherapy program, composed of education on postural hygiene and exercise intended to be continued at home, increases those improvements, although the magnitude of that increase is clinically irrelevant. |
Some vehicles are equipped with an automatic stop/start controller which automatically stops and starts the vehicle engine, without operation of an ignition key, in order to improve fuel economy. Some of these vehicles are provided with an air conditioner, and the coolant of the engine is employed as a heat source for heating the vehicle, while the driving force of the engine is employed to drive an air conditioner compressor for cooling the vehicle interior. However, stopping of the engine by the automatic stop/start controller results in degradation of the heating or cooling performance.
One traditional automatic stop/start controller to address these problems includes an engine for drive, a motor, an air conditioner which employs a coolant or driving force of the engine, an engine controller to start/stop the engine according to the driving state, and an air conditioner controller to start/stop the air conditioner. A target temperature of output air that is sent to the driver's compartment of the vehicle is determined, and the engine is operated to permit air conditioning when this target temperature is at such a temperature that the driver's compartment needs air conditioning (see JP No. 3323097).
Another conventional automatic stop/start controller includes an engine, a motor, and an air conditioner to control the temperature by a refrigerating cycle created by a compressor and an evaporator in the vehicle. When the vehicle is stopped, if it is determined that the temperature of the air is below a predetermined temperature after evaporation, then the engine is required to start to maintain the cooling performance (see JP No. 3305974).
Further, another conventional automatic stop/start controller includes an air conditioner in a vehicle in which an engine is automatically stopped or started based on the driving state. The engine is prevented from automatic stopping so as to maintain air conditioning performance when a blower fan of the air conditioner is activated and the operational switch for the air conditioning is activated and the temperature of the outer air is below a predetermined temperature (see JP Laid-Open No. 2001-341515).
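The three conventional controllers described above share a common shape: an air-conditioning-related condition can veto (or reverse) the automatic engine stop. A minimal sketch of that decision pattern follows; the signal names, threshold values, and comparison directions are illustrative assumptions, not taken from the cited patents.

```python
def allow_auto_stop(vehicle_stopped, blower_on, ac_switch_on,
                    outside_temp_c, post_evap_temp_c,
                    min_outside_c=5.0, max_evap_c=8.0):
    """Return True if the engine may be stopped automatically.

    Mirrors the pattern of the cited controllers: the stop is vetoed when
    heating would be lost (blower and A/C switch on while outside air is
    cold, as in the JP 2001-341515 description) or when cooling performance
    can no longer be maintained (post-evaporator air temperature out of
    range, as in the JP 3305974 description). All thresholds are
    hypothetical placeholders.
    """
    if not vehicle_stopped:
        return False  # only stop the engine when the vehicle is stopped
    if blower_on and ac_switch_on and outside_temp_c < min_outside_c:
        return False  # keep the engine running to preserve heating
    if post_evap_temp_c > max_evap_c:
        return False  # keep the engine running to preserve cooling
    return True
```

A real controller would of course fold in many more inputs (battery state, brake pressure, driver demand); the point here is only the veto structure.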
The conventional air conditioner on the vehicle having an automatic stop/start controller is a so-called automatic air conditioner system in that the system automatically controls the air in the driver's compartment to be at a set temperature. This automatic air conditioner cannot be applied to a so-called manual type air conditioner that manually controls the quantity of air, an air mix damper, and an outlet.
Although some of the conventional vehicle air conditioners having an automatic stop/start controller may be applied to the manual-type air conditioner, the heating performance cannot be maintained, since it is designed to keep the cooling performance by stopping the engine when the air temperature after the evaporation is below a predetermined temperature, thereby saving fuel. |
Why a New Car Purchase is Elusive for Many Americans
Over the years, the status of cars in the United States has been a combination of bragging rights and function. This explains why demand for them remains high despite the presence of other modes of transportation. The problem is that many Americans these days may not be able to afford them.
The Challenge of Getting New Vehicles
A study illustrates the issue with buying a new car and highlights why used-car dealers, such as jtautogroup.com, have never been more important.
According to the research, many Americans may not be able to meet the 20/4/10 rule when it comes to getting a car loan: a 20% down payment, a loan term of no more than four years, and car expenses capped at 10% of income.
Those who decide to buy a vehicle today may need to extend their loans by two more years and spend more than 10% of their wages on the car, even after making a 20% down payment.
A report seems to corroborate this data. According to it, the average loan term is already 68 months, which is almost six years. Moreover, for the first time, the average amount financed has exceeded $30,000. As expected, the monthly payment has also increased, to $503, the first time it has crossed the $500 mark.
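To make those figures concrete: the reported $503 monthly payment is roughly what the standard amortization formula gives for a $30,000 loan over 68 months, assuming an interest rate in the ballpark of 5% APR. The rate is my assumption for illustration; the article does not state one.

```python
def monthly_payment(principal, annual_rate, months):
    """Standard amortized loan payment: P * r / (1 - (1 + r)**-n),
    where r is the monthly rate and n the number of payments."""
    r = annual_rate / 12.0
    if r == 0:
        return principal / months  # interest-free edge case
    return principal * r / (1.0 - (1.0 + r) ** -months)

# $30,000 financed over 68 months at an assumed 5% APR
payment = monthly_payment(30_000, 0.05, 68)
print(f"${payment:,.2f}")  # about $507, close to the reported $503
```

Stretching the same loan to the four years the 20/4/10 rule prescribes raises the payment substantially, which is exactly why buyers extend their terms instead.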
The Reasons
There are different reasons Americans have a hard time paying for a new car. One of the strongest explanations is the disparity between the cost of cars and the incomes.
According to Kelley Blue Book, the average cost of a compact car is around $20,444. Meanwhile, the Bankrate.com report said 11 out of the 25 large metropolitan areas studied have households with an average income of less than $20,000.
This doesn’t mean used cars are getting cheaper either. However, within the last three decades, the cost of buying a secondhand vehicle has increased by only 25%. That’s 10 percentage points lower than that of a new car within the same period. Furthermore, the average service life of vehicles is already 11.5 years old. This means a used car can still have enough mileage or service value for its owners. |
Background
==========
Ovarian cancer ranks fifth when considering cancer mortality in women \[[@B1]\]. Unfortunately clinical or pathologic variables that can reliably predict recurrence in FIGO (Fédération Internationale de Gynécologie Obstétrique) stage I patients or resistance to platin-based chemotherapy in advanced stage disease (FIGO stage III or IV) are not available. The prognosis might be more optimally predicted based on gene expression analysis, since microarrays can capture tumour properties that might not be reflected in the commonly used clinical or histopathological variables at diagnosis.
Previously, we performed a pilot study consisting of microarray analysis on three groups of patients: seven stage I without recurrence, seven platin-sensitive advanced stage and six platin-resistant advanced stage ovarian tumours \[[@B2]\]. We investigated whether gene expression analysis can be used to distinguish between stage I and advanced stage ovarian tumours, and between platin-sensitive and platin-resistant ovarian tumours. The results showed that a considerable number of genes were differentially expressed between the different tumour classes. This was confirmed by principal component analysis (PCA) where the distinction between the three tumour classes was visualised. A least squares support vector machine (LS-SVM) analysis showed that the estimated classification performance was 100% for the distinction between stage I and advanced stage disease, and 76.92% for the distinction between platin-sensitive and platin-resistant disease when using a leave-one-out approach. These results indicated that gene expression analysis could be appropriate to predict prognosis of ovarian tumours. However, since leave-one-out cross validation can overestimate the performance of a model, an independent evaluation is needed to have an unbiased estimate of the generalization capacity.
In the current study, we describe results of an independent evaluation of models for predicting disease stage and response to platin-based chemotherapy built on the data of the pilot. Our goal was to evaluate whether an independent study could confirm the applicability of microarrays for the clinical management of ovarian cancer. This independent evaluation was carried out on a set of 49 new tumour samples which were subjected to the same experimental protocol. This data set was used as a test set to estimate the performance when predicting the difference between stage I and advanced stage disease, and between platin-sensitive and platin-resistant disease using models trained on the pilot data set. After presenting the results, we discuss the generalization performance on this independent data set and compare with models based on previously published gene sets.
Methods
=======
Tumour characteristics
----------------------
Tissue collection and analysis were approved by the local ethical committee. After obtaining informed consent, tumour biopsies were sampled and immediately frozen in liquid nitrogen during primary surgery and were taken from three groups of patients: 4 from patients with stage I disease, 30 from patients with platin-sensitive advanced stage disease and 15 from patients with platin-resistant advanced stage disease \[[@B3]\]. In this study, as in the pilot study, we will refer to these three groups as I, A~s~ and A~r~, respectively. The patient and tumour characteristics are shown in table [1](#T1){ref-type="table"}.
######
Tumour characteristics. Clinical information of the tumour samples in the independent data set
Class Ar (n = 15) Class As (n = 30) Class I (n = 4)
------------------------------------------------------ ------------------- ------------------- -----------------
Mean Age (range), years 61.8 61.3 49
*Histologic type*
Serous 14 29 1
Endometrioid 1 \- 2
Mucinous \- \- 1
Mixed carcinoma \- 1 \-
*FIGO stage*
I \- \- 4
III 9 28 \-
IV 6 2 \-
*Differentiation grade*
Grade 1 \- 1 1
Grade 2 5 7 2
Grade 3 10 22 1
*Operation*
Primary surgery 6 22 4
Interval surgery after three courses of chemotherapy 3 8 \-
Diagnostic biopsy, no surgery 6 \- \-
*Residual tumour load after surgery*
0 cm 8 24 4
0--1 cm \- 1 \-
1--2 cm \- 4 \-
\> 2 cm 7 1 \-
*Time to progression after first-line chemotherapy*
\< 6 months 15 \- \-
6--12 months \- \- 1
\> 12 months \- 22 2
No recurrence \- 8 1
*Current status*
No evidence of disease \- 8 1
Alive with evidence of disease 1 8 1
Died of disease 14 14 2
*Median follow-up, months* 16 35 18
Microarray procedures
---------------------
Microarray procedures were similar to our pilot study \[[@B2]\]. Briefly, each tumour in the independent data set was hybridized twice (dye-swap) against the same common reference pool from the pilot study on an array containing 21,372 probes enriched for genes related to ovarian cancer. From each patient, mRNA was amplified and labelled with Cy3 and Cy5, according to Puskas and collaborators \[[@B4]\]. All protocols can be downloaded from ArrayExpress \[[@B5]\]. Microarray data and information recommended by the MIAME (Minimum Information About a Microarray Experiment) guidelines can be found on the ArrayExpress website \[[@B6]\] (accession number E-MEXP-995 for the independent data set and E-MEXP-979 for the pilot data).
Microarray data analysis
------------------------
The gene expression data were analysed using MATLAB 7 (R2006b). Pre-processing was done as in our pilot study. Briefly, each microarray in the independent data set was analysed separately in the following order: the intensities were background-corrected, log-transformed and finally normalised using the intensity-dependent Lowess fit procedure. The mean of the replicate, normalised log ratios was used as a measure of expression. After pre-processing, PCA and LS-SVMs were used to analyse the data: PCA for visualisation and LS-SVMs for building classification models. A p-value is considered statistically significant if smaller than 0.05. All statistical tests were two-sided unless mentioned otherwise. Exact binomial confidence intervals were calculated using SAS 9.1.3 statistical software.
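The normalisation step above can be illustrated outside MATLAB. The sketch below, in Python, is an assumption-laden stand-in: bin-wise median centering replaces the intensity-dependent Lowess fit actually used, and all names are hypothetical.

```python
import numpy as np

def normalize_log_ratios(red, green, n_bins=10):
    """Log-transform two-channel intensities and remove intensity-dependent
    bias by centering log-ratios within intensity bins (a crude stand-in
    for the intensity-dependent Lowess fit described in the text)."""
    M = np.log2(red) - np.log2(green)          # log ratio per probe
    A = 0.5 * (np.log2(red) + np.log2(green))  # mean log intensity
    edges = np.quantile(A, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(A, edges[1:-1]), 0, n_bins - 1)
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            M[mask] -= np.median(M[mask])      # center each bin at zero
    return M
```

After this step, a probe with no intensity-dependent dye bias has a log ratio near zero, which is the property the Lowess procedure is meant to enforce.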
PCA
---
The procedure followed during the PCA analysis is shown in Figure [1](#F1){ref-type="fig"}, which schematically depicts the steps involving the pilot and the independent data set. First, the genes were ranked according to their differential expression between the three classes (Kruskal-Wallis test) using the pilot data, and the top 3000 genes were selected. Then, PCA was performed on the reduced pilot data set and the three largest principal components were retained (i.e., the directions associated with the largest eigenvalues). Finally, the gene expression values corresponding to these 3000 genes were selected in the independent data set, and this reduced independent data set was projected into the space defined by the three largest principal components of the pilot data.
{#F1}
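The ranking-and-projection procedure can be made concrete. The original analysis was done in MATLAB; the Python sketch below is only an illustration of the same steps, and all function and variable names are hypothetical.

```python
import numpy as np
from scipy.stats import kruskal

def project_independent(pilot_X, pilot_y, indep_X, n_genes=3000, n_pc=3):
    """Rank genes by Kruskal-Wallis p-value across the pilot classes, keep
    the top n_genes, fit PCA on the reduced pilot data, and project the
    independent samples onto the first n_pc principal components."""
    classes = np.unique(pilot_y)
    pvals = np.array([
        kruskal(*[pilot_X[pilot_y == c, g] for c in classes]).pvalue
        for g in range(pilot_X.shape[1])
    ])
    top = np.argsort(pvals)[:n_genes]          # most differential genes first
    Xp = pilot_X[:, top]
    mean = Xp.mean(axis=0)
    # principal directions = right singular vectors of the centered matrix
    _, _, Vt = np.linalg.svd(Xp - mean, full_matrices=False)
    return (indep_X[:, top] - mean) @ Vt[:n_pc].T
```

Note that the gene selection, centering and component directions all come from the pilot data alone; the independent samples are only projected, never refit.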
LS-SVMs
-------
Next, we used the pilot data set to build one LS-SVM to predict disease stage and another to predict the response to platin-based chemotherapy (MATLAB scripts were downloaded from LS-SVMlab version 1.5 \[[@B7],[@B8]\]). In the pilot study, an RBF kernel did not improve results; therefore, a linear kernel was used in all subsequent analyses. Figure [2](#F2){ref-type="fig"} shows the different steps in this analysis, which are the same for both two-class classification problems. First, the genes were ranked according to their differential expression between the two classes using only the pilot study data (Wilcoxon rank sum test), and the top 3000 genes in this ranking were selected. Next, the corresponding gene expression values were selected in the independent data set. Subsequently, an LS-SVM with linear kernel was trained on the reduced pilot data and applied to predict the class of the samples in the independent data set. This yields an estimate of the generalization performance of a model built only on the pilot study data for both classification problems.
{#F2}
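The study used LS-SVMlab in MATLAB; purely for illustration, a linear LS-SVM can be trained by solving its characteristic linear system directly. The sketch below uses the regression-form LS-SVM on ±1 labels (one standard Suykens-style formulation, not necessarily the exact LS-SVMlab variant used here); names and the regularisation value are assumptions.

```python
import numpy as np

def train_lssvm_linear(X, y, gamma=1.0):
    """Solve the LS-SVM linear system for a linear kernel:
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y], with y in {-1, +1}."""
    n = len(y)
    K = X @ X.T                              # linear kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y.astype(float))))
    return sol[1:], sol[0]                   # alpha, b

def predict_lssvm(alpha, b, X_train, X_new):
    """Decision rule: sign(sum_i alpha_i <x_i, x> + b)."""
    return np.sign(X_new @ X_train.T @ alpha + b)
```

Unlike a classic SVM, the LS-SVM replaces the inequality constraints with equality constraints, so training reduces to one dense linear solve rather than a quadratic program.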
Comparison with other profiles
------------------------------
To assess the performance of models based on our data we compared them with the performance of models based on published gene sets that predict a broad range of outcomes in ovarian cancer. It is difficult to directly apply the published models on our data since multiple different microarray platforms (e.g. one-channel Affymetrix microarrays (U95Av2, HumanGeneFl, U133A) or two-channel custom arrays (cDNA)) have been used to derive these gene sets. Therefore we adopted the strategy visualized in Figure [3](#F3){ref-type="fig"}. First, the gene set is extracted from the literature and, if not already done, the genes were translated to HUGO (Human Genome Organization) gene symbols. Then, we extracted, in both the pilot and independent data set, the genes corresponding to the HUGO gene set from the literature. Subsequently, our model building strategy proceeds as previously described (see Figure [3](#F3){ref-type="fig"}). We used gene sets related to the response on platin-based chemotherapy \[[@B9]-[@B11]\], gene sets related to survival in epithelial ovarian cancer (EOC) \[[@B12]\] or in advanced stage serous EOC \[[@B13],[@B14]\], gene sets discriminating between the major histological types (serous, mucinous, clear cell and endometrioid) \[[@B15],[@B16]\], gene sets distinguishing between normal ovarian tissue and disease \[[@B17],[@B18]\], gene sets discriminating between low malignant potential or borderline disease and invasive disease \[[@B19]\], gene sets differentiating between ovarian cancer tissue and metastatic tissue \[[@B20]\] and a gene set predicting the presence of disease at second look surgery \[[@B21]\]. These gene sets were constructed based on Affymetrix microarrays (HuGeneFl, U95 set, U95Av2, U133A), different cDNA microarrays or HPLC (High Performance Liquid Chromatography) followed by ESI-TOF (Electrospray Ionization Time of Flight) mass spectrometry.
{#F3}
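The symbol-matching step of this strategy is simple but worth making concrete. The Python sketch below is hypothetical (illustrative names, toy gene symbols) and only shows the bookkeeping: mapping a published HUGO list onto array probes and tracking symbols with no match.

```python
def select_published_genes(published_symbols, platform_symbols):
    """Map a published HUGO gene list onto an array platform by symbol.
    Returns the probe (column) indices that match, plus unmapped symbols."""
    index = {}
    for i, sym in enumerate(platform_symbols):
        index.setdefault(sym, []).append(i)   # one symbol may have many probes
    cols = [i for sym in published_symbols for i in index.get(sym, [])]
    missing = [sym for sym in published_symbols if sym not in index]
    return cols, missing
```

Tracking the `missing` list matters in practice: genes absent from the target platform silently shrink a published signature and can degrade its performance.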
Results
=======
In this study we describe the results of the evaluation of models developed based on the data from our previously published pilot study \[[@B2]\] using PCA analysis or LS-SVMs on independently gathered microarray data. Note that all stage I patients in the pilot study had ovarian tumours without recurrence while in the current study population the four patients with stage I disease consist of 3 stage I tumours with recurrence and 1 stage I tumour without recurrence. Figure [4](#F4){ref-type="fig"} shows the results of the PCA analysis. This figure visualises the projection of the patients from the independent data set belonging to the stage I, platin-sensitive and platin-resistant group onto the three principal component directions calculated based on the pilot study data. For all three groups, the data are scattered around the origin which indicates that the principal components computed based on the pilot data were not able to reproduce the three classes in the independent data set. Additionally, we did not observe a clear distinction between the stage I patients with and without recurrence (see Figure [4](#F4){ref-type="fig"}, top panel).
{#F4}
Secondly, we used LS-SVMs to assess whether a supervised classification model can discriminate between stage I and advanced stage disease, and between platin-sensitive and platin-resistant disease. This resulted in a classification accuracy of 97.96% (CI 19%--99%) for the distinction between stage I and advanced stage disease, which corresponds to one stage I tumour out of four that was classified as an advanced stage tumour. Next, a classification accuracy of 51.11% was obtained for the distinction between platin-sensitive and platin-resistant disease. This corresponds to five platin-resistant and eighteen platin-sensitive tumours that were misclassified, corresponding to a sensitivity of 67% (CI 38%--88%) and a specificity of 40% (CI 23%--59%) when considering a platin-resistant patient as a positive.
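The intervals above were computed in SAS; the same exact (Clopper-Pearson) binomial interval can be reproduced via the beta distribution. The sketch below is illustrative (hypothetical function name), not the original SAS code.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided confidence interval for
    k successes observed in n trials."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi
```

As a sanity check, 48 correct out of 49 gives a lower bound of roughly 89%, so reported intervals for near-perfect accuracies on small samples should sit high, not span nearly the whole unit interval.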
Table [2](#T2){ref-type="table"} shows the accuracy on the independent data set for predicting stage and platin sensitivity of the models based on the pilot data and previously published gene sets. Most gene sets are able to predict ovarian cancer stage reliably (ranging from 87.8%--97.96%). Four profiles were less successful: Lancaster disease vs. normal (79.6%), Roberts platin sensitivity vs. platin resistance (75.5%) and both Lancaster ovarian cancer tissue vs. metastatic tissue models (71.4% and 57.14%). When focusing on the prediction of platin sensitivity, six of the published gene sets predicted the majority class on the independent data set, resulting in 66.6% (30/45) classification accuracy. However, such a classifier has very little practical use since it predicts the same class for all independent data set samples. Finally, the Lancaster metastasis model consisting of 27 genes performed best with an accuracy of 60%, corresponding to a sensitivity of 86% and specificity of 47% when considering a platin-resistant patient as a positive (P-value 0.12, one-sided binomial test).
######
Comparison with published gene sets. Accuracy of all published gene set models on the independent data set both when predicting stage and platin resistance ranked by stage accuracy. Gene sets have been named after the first author of the publication followed by a description of its relationship to patient outcome. References have been used when the same first author had multiple publications
Gene set first author Description Stage accuracy (%) Platin accuracy (%)
----------------------- ------------------------------------------------------------------------------------- -------------------- ---------------------
Ouellet *low malignant potential/borderline disease vs. invasive disease: tumour tissue* 97.96 55.56
Hibbs *Disease vs. normal or other tissues* 95.92 66.67\*
Spentzos *Residual disease vs. complete response at second look surgery* 93.88 37.78
Lu *Disease vs. normal* 93.88 48.89
Helleman *Platin sensitivity vs. platin resistance: differential expression* 93.88 44.44
Ouellet *low malignant potential/borderline disease vs. invasive disease: primary cultures* 91.84 53.33
Zhu *Clear cell vs. serous histology* 91.84 66.67\*
Lancaster \[14\] *Short-term vs. long-term survival* 91.84 44.44
Helleman *Platin sensitivity vs. platin resistance: 16-gene predictive model* 91.84 46.67
Berchuck *Short-term vs. long-term survival* 91.84 55.56
Schwartz *Clear cell vs. other histological types* 91.84 46.67
Hartmann *Early vs. late relapse after platin based chemotherapy* 91.84 66.67\*
Spentzos *Short-term vs. long-term survival* 87.76 66.67\*
Lancaster \[14\] *Disease vs. normal* 79.59 66.67\*
Roberts *Platin sensitivity vs. platin resistance* 75.51 42.22
Lancaster \[20\] *Ovarian cancer tissue vs. metastatic tissue: 27-gene predictive model* 71.43 60.00\#
Lancaster \[20\] *Ovarian cancer tissue vs. metastatic tissue: differential expression* 57.14 66.67\*
\*models predicting only the majority class (platin sensitive patients) on the independent data set
\#best platin model
Discussion
==========
Recently, several studies have investigated the use of microarrays to predict several clinically relevant outcomes of ovarian cancer \[[@B9],[@B10],[@B12],[@B13],[@B15],[@B21]\]. However, the gene sets identified and the models developed in these studies have not been properly evaluated on independently gathered data. Microarray technology is notorious for its low signal-to-noise ratio, suffering from many potential experimental sources of error (e.g. dye effect, print-tip effect, array effect) on top of the biological variation inherent to the samples. Moreover, due to the huge number of genes (e.g. \~25,000) compared to the low number of samples (\~50), overfitting models is a real danger. This occurs when models fit the training data too well and are not capable of predicting new samples. Overfitting can only be detected using proper cross-validation techniques or independent test set analysis. Only a true independent test set -- not used for determining pre-processing parameters, selection of differentially expressed genes, model building or model selection -- can be used to estimate the true performance of models \[[@B22]\]. For example, we noticed a case of inappropriate use of a test set where this data set was used to select the best model \[[@B10],[@B22]\]. This implies that the model will perform well on this particular test set but, due to the high-dimensional nature of microarray data, this performance might be impossible to reproduce on truly independent data. Moreover, a recently published review of microarray studies focusing on cancer-related outcomes showed that the most common flaw in classification studies is a biased estimation of the accuracy (present in 12 of 28 studies published in 2004 \[[@B23]\]). This illustrates that inappropriate evaluation of classifiers based on microarray data is a common problem when building models to predict cancer outcomes.
Although more data should be gathered on stage I patients, the results presented in this paper indicate that predicting the response to platin-based chemotherapy is not straightforward and is more subtle than predicting advanced stage disease. Furthermore, since most published studies lack a proper independent evaluation, their results should be interpreted cautiously. We advocated the use of microarrays based on the results from our pilot study, but warned against overestimating the generalization performance, as those results were based on a cross-validation technique instead of an independent data set. Additionally, since the pilot study performance for predicting the response to platin-based chemotherapy was not statistically significant, we sought confirmation on an independent test set. Therefore, we carried out a new study to estimate the performance of the models on independently gathered microarray data in an unbiased way. The present results, both the PCA analysis and the performance of the LS-SVM models, show that the independent evaluation is disappointing. Only the LS-SVM stage model performed well and was able to distinguish early stage and advanced stage disease in the independent data set. The PCA analysis, however, demonstrated that the independent data did not cluster to the corresponding classes from the pilot study. Additionally, the LS-SVM platin model did not perform better than a random predictor. Therefore, we argue that a gene expression study should be validated on independently gathered data before the results can be considered for clinical use. Independently gathered data can be influenced by subtle changes in sample preparation, sample analysis and sample hybridization, which can deteriorate model performance. Even the techniques used by the same lab might undergo subtle changes over time, causing a drop in model performance when the model is applied to new patient samples. It is unclear whether published models are robust against these influences.
Additionally, ovarian cancer represents an immense variation in histological structure and biological behaviour which complicates microarray based modelling. A large number of samples is required to correctly represent the complete microscopic spectrum. It is not unlikely that an independent data set contains a different mix of tumour samples with slightly different histological characteristics compared to the pilot study, complicating independent evaluation. Moreover, the quality of the samples has a major effect on the ability to detect true differential expression and subsequent model building. However in most cases, including ours, only a limited number of samples with sufficient follow-up is available which limits our ability to obtain a similar distribution of histopathology in the pilot and independent data set, and also forces us to use archival samples instead of new ones.
The comparison of the LS-SVM stage and LS-SVM platin models with published gene sets confirmed that predicting disease stage is easier than predicting response to platin-based chemotherapy. For predicting disease stage, many previously developed gene sets are able to distinguish both classes, indicating that many genes change when a tumour progresses from early to advanced stage disease. Predicting the response to platin-based chemotherapy is more challenging. None of the previously developed gene set models related to the response to platin-based chemotherapy is able to predict this outcome significantly better than chance. This indicates that these gene sets do not generalize to our independently gathered data set. Only the 27-gene model by Lancaster and colleagues \[[@B20]\], which distinguishes between primary ovarian cancer and metastatic tissue, is able to predict the response to platin-based chemotherapy to some degree. This gene set contains 12 genes which have previously been shown to be involved in oncogenesis and 10 genes which have been implicated in the p53 pathways. The performance of this gene set on our independent data set provides some evidence that genes distinguishing between primary and metastatic tissue also play a role in resistance to therapy.
Conclusion
==========
Our results show that an independent evaluation of models based on gene expression data is necessary to validate models before considering subsequent steps to make microarray analysis clinically available. Previously published studies should be critically reviewed, in light of the current results, to assess if the reported model performance is not overestimated by inappropriate use of a test set and, if this is not the case, to consider if an independent study would confirm the reported model performance. Finally, prospective validation in multi-centre trials is necessary before microarray technology can move to clinical practice.
Competing interests
===================
The author(s) declare that they have no competing interests.
Authors\' contributions
=======================
FDS, OG, NP, FA, BDM, DT and IV conceived the study and provided clinical and mathematical background. TVG looked up patient records in the database and tissue samples in the tumour bank, performed sample annotation and gathered follow-up of patients. KE performed pre-processing of the data sets. OG, FDS and NP performed the PCA and LS-SVM analysis, and the comparison with published gene set analysis. All authors contributed to the manuscript and approved it.
Pre-publication history
=======================
The pre-publication history for this paper can be accessed here:
<http://www.biomedcentral.com/1471-2407/8/18/prepub>
Acknowledgements
================
OG is supported by the Institute for the Promotion of Innovation through Science and Technology in Flanders (IWT-Vlaanderen). NP is a Henri Benedictus Fellow of the King Baudouin Foundation and the Belgian American Educational Foundation (B.A.E.F.). KE is supported by CoE EF/05/007 SymBioSys. BDM is supported by the Research Council KUL: GOA AMBioRICS, CoE EF/05/007 SymBioSys, IDO (Genetic networks), Flemish Government: FWO: PhD/postdoc grants, projects G.0407.02 (support vector machines), G.0413.03 (inference in bioi), G.0388.03 (microarrays for clinical use), G.0499.04 (Statistics), G.0302.07 (SVM/Kernel); Belgian Federal Science Policy Office: IUAP P5/22 (\'Dynamical Systems and Control: Computation, Identification and Modelling\', 2002--2006); FP6-NoE Biopattern; FP6-IP e-Tumours, FP6-MC-EST Bioptrain.
|
News
When Conrad Lifsey and his partner, Derek Loftin, who own a luxury RV rental company in Palm Springs, bought a house in 2015 in a property development called Sol, what sealed the deal for them wasn’t inside, but up on the roof. “No other property in Palm Springs had a rooftop deck and it really set things apart,” said Lifsey, 52. “...
Within the density of downtown Washington, it’s rare to find a freestanding office building with unobstructed views. But the new structure at the intersection of 17th and M Streets and Rhode Island Avenue NW in the District’s Golden Triangle neighborhood is the exception to the city’s infill rule, offering impressive vistas from all sides. Visible from its windows and rooftop terrace are the Washington Monument, Jefferson Memorial, St. Matthew’s Cathedral...
1200 Seventeenth received its LEED Platinum certification. The building includes a number of sustainable design features, including water-saving fixtures that reduce water use by 35 percent, low-emitting materials and ample ventilation that increase air quality, and a state-of-the-art, dedicated outside air system (DOAS) with VAV controls.
...
Moves to New LEED® Platinum Targeted Trophy-Class Building in Heart of Downtown
WASHINGTON, DC— Effective today, Pillsbury has moved its Washington, DC office from 2300 N Street, NW in the West End to a newly constructed building at 1200 Seventeenth Street, NW in the Golden Triangle district.
Pillsbury occupies 101,000 of the 168,000 leasable square feet, consisting of the majority of the first floor as well as the top six floors...
Akridge came to the proverbial party with one partner and left with another. Its prime catch, marquee law firm Pillsbury Winthrop Shaw Pittman LLP, almost got away — twice. And the project, a nearly 170,000-square-foot building by M and 17th streets NW, temporarily became one of D.C.’s largest outdoor swimming pools in May when a water main ruptured and flooded the site that was, at the time, being excavated.
Nevertheless, Akridge... |
Peoplewhoeat’s Weblog
https://peoplewhoeat.wordpress.com
We cook, we eat, we live, we party (with food)
Fridge Fun (20 Mar 2012)
https://peoplewhoeat.wordpress.com/2012/03/20/fridge-fun/
I was inspired by these fantastic photos of people’s refrigerators, and so I decided to take a picture of my fridge to share with you. You will also note that, in addition to having lots of condiments, I am nowhere near the photographer that Mark Menjivar is. Can’t win ’em all…
Sweet N’ Sour Cream (01 Jul 2011)
https://peoplewhoeat.wordpress.com/2011/07/01/sweet-n-sour-cream/
A lovely assortment of fresh berries with Sweet N' Sour Cream for dessert
This has always been a summer treat in my house–I personally just like to dip fresh fruit (particularly raspberries) in it, but we’ve also used it as an alternative to whipped cream to top fruity summer baked goods. I believe there is some fancy french name for this, but I don’t know what that may be…
All you do for this is mix some brown sugar into sour cream (this is one place where I really notice a difference between low-fat and full-fat sour cream). For dip for a dessert-sized portion of mixed berries, I usually mix two heaping spoonfuls of sour cream with a spoonful of brown sugar. Mix in the brown sugar and let it set a short while until it’s all dissolved.
Stuffed Dates (05 Mar 2011)
https://peoplewhoeat.wordpress.com/2011/03/05/stuffed-dates/
Got this from Rachel Wilkerson’s excellent rundown of Thanksgiving recipes. But I think these puppies are awesome enough to deserve their very own blog post. Also, unlike Rachel, I don’t think the cheese is optional. I’ve made them three times now, and they’ve always been a hit!
Dates, the big ones, the best quality you can afford (it’s actually easier to buy the ones with the pit still in)
Bacon
Gorgonzola cheese
Pit the dates, and replace the pits with a similarly sized and shaped bit of cheese.
Cut the slices of bacon in half, and wrap a half-piece of bacon around each date.
Bake at 400* for 20 minutes, turning halfway.
They’re best slightly warm but not hot (about half an hour after coming out of the oven).
Ta-da! A filling, impressive, and easy appetizer! I made these the other day, chilled them, then reheated them at a party and they were still delicious.
Enjoy!
Emma
(No picture, because while they are DELICIOUS, they are not photogenic)
Lasagna tart (26 Feb 2011)
https://peoplewhoeat.wordpress.com/2011/02/26/lasagna-tart/
I had a vegetarian friend over and wanted to make something substantial, so I went for a lasagna tart from 101cookbooks. I couldn’t find ricotta at the store so I used cottage cheese, which I like in lasagna anyway. I used 250g, about 1 cup. I also layered parmesan cheese (by eye and taste) over the cottage cheese in each layer.
I did find that the wholemeal olive oil pastry was incredibly difficult to roll out unless quite a lot of water was used. It was very flakey and kept falling apart and was rolled thicker than I usually roll butter-based pastry. I may have to practice a few more times. I didn’t find that the taste of the lemon zest came through either so feel free to skip that. There was also a lot of leftover pastry even though it was quite thick, so I lined a few new mini tartlet tins I wanted to try out. Rolling it out thinly for the small tins was much easier; if you have small tins and don’t mind the tedium, use them instead. I baked them for 20 mins.
Group GORP (21 Feb 2011)
https://peoplewhoeat.wordpress.com/2011/02/21/group-gorp/
Got some sort of group excursion that will require snacks? This is what my friends and I did in high school for trips. Have everyone bring one or two items to contribute to a snack mix. Throw everything into a really big bowl or bag, give it a shake/stir and then divvy it up into individual snack baggies. Generally it’s a good idea to have the leader/adult/most boring friend bring the base–some kind of cereal or pretzel, usually. My favorite contributions were always Swedish Fish candies and honey bbq Fritos. Other good additions are mixed nuts, dried fruit, chocolate, shredded coconut, Goldfish, flavored or yogurt pretzels, cereals, etc. Pretty much anything sturdy enough to handle some jostling will make a tasty addition to a sweet and savory mix. Though this can be a healthy snack, that really depends on your friends…
Have fun!
Emma
Lobiani (Georgian Bean Bread) (20 Feb 2011)
https://peoplewhoeat.wordpress.com/2011/02/20/lobiani-georgian-bean-bread/
I made this recipe from Darra Goldstein’s excellent The Georgian Feast. I brought it to a Super Bowl party, and it was a big hit–a savory finger food that’s quite filling but also a little different. The texture wasn’t the same as the lobiani that I bought on the streets of Tbilisi, but it was still very good. My friends suggested, though, that they would prefer a different spice blend; I’m not sure what would be good though. Any suggestions?
Makes 2 incredibly large breads (fortunately you can freeze it after you cut it)
Cream the butter. Beat in the eggs and sour cream. Mix in flour to make a soft dough.
On a well-floured board, roll the dough to a 15 x 18 rectangle. The dough will be INCREDIBLY sticky at this stage, but it will calm down fast, so just do your best the first few times. Sprinkle the rectangle with 1/4 teaspoon of baking soda. Fold the dough into quarters and reroll, repeating the baking soda, fold, re-roll procedure until the baking soda is used up. Place the dough into a floured bowl, cover it and leave to rise for 6-8 hours indoors or 2-3 hours in the sun.
For filling: Boil the kidney beans for about one hour. Drain, and then mash. Dice the onions and sautee them in the oil until soft. Stir the onions (and oil) into the kidney beans, and add the spices. Divide in half, and set aside.
Before you start assembling the bread, preheat the oven to 350°F. When the dough has risen, divide it into two parts. Roll each out in a large circle, keeping the inside of the circle thicker than the outside. Place the filling in the center, and bring the dough up around it, forming a sort of ball. Flatten this out to a large disc. Brush the top of the bread with beaten egg yolk, and bake for 40-45 minutes until browned.
Enjoy!
Emma
Blueberry Banana Bread
https://peoplewhoeat.wordpress.com/2011/02/20/blueberry-banana-bread/
Sun, 20 Feb 2011
I made this recipe yesterday and found it DELICIOUS, but I thought the directions and ordering of ingredients were a bit confusing. I haven’t made any substantive changes to the recipe, I’ve just rewritten it a bit for clarity.
Directions:
In a large bowl, cream together the egg and sugars. Add wet ingredients and mix, then dry.
Add the mashed bananas and mix.
Fold in blueberries, coconut, and pecans if using.
Pour into greased loaf or muffin pans.
Bake at 350°F for about an hour for a loaf, and about 20 minutes for muffins.
Cool slightly, loosen, and turn onto a baking rack to cool.
When cool, wrap in plastic wrap and keep refrigerated.
Enjoy!
-Emma
Sick Tea
https://peoplewhoeat.wordpress.com/2011/02/12/sick-tea/
Sat, 12 Feb 2011
This is what I always have when I have a cold (as I fear might be soon). The tea itself is actually strictly optional; it’s the lemon, honey and ginger that will feel wonderful on a sore throat, and will supposedly help your immune system.
In a mug, mix together approximately:
1 Tablespoon lemon juice
1 Tablespoon honey
1/2 inch fresh ginger, roughly chopped or about a teaspoon of ginger from a jar. Don’t use dried!
(1 tea bag of your choice, optional)
Hot Water
Mix it all together, and feel better!
Greener Pastures
https://peoplewhoeat.wordpress.com/2011/01/30/greener-pastures/
Sun, 30 Jan 2011
Our dear Juliet, who brought you Stuffed Peppers, Profiteroles, and Banana Chocolate Chip Cookies, has started her own cooking, baking and crafting blog. If you’d like to see what she’s up to, you can visit her here: http://jooolzzz.wordpress.com/
-Peoplewhoeat
Blue Cheese and Apple Omelet
https://peoplewhoeat.wordpress.com/2011/01/29/blue-cheese-and-apple-omelet/
Sat, 29 Jan 2011
I’ve been playing around with sweet and savory quite a lot lately: savory fruits, sweet veggies and the like. This recipe in particular was inspired by my host mom in Tbilisi. I don’t know her actual recipe, so I’ve been messing around with this for a while to come up with something similar. I don’t think this is all that close to hers, but it’s really good!
(In other, totally unrelated news, my 90s plates from Goodwill were the ones they had in My So-Called Life)
2 eggs
a splash of milk
cream cheese (optional)
Blue Cheese that will melt (I like Cambazola or Blue Brie..especially the Saga brand)
Cube the apple and slice the garlic into chunks slightly smaller than the pieces of apple. Sauté the apple and garlic with the black pepper and cinnamon.
While the apple and garlic are sautéing, beat together the eggs and milk (and if you choose to use cream cheese, it will make the egg part creamier). I learned that one of the tricks to making a good omelet is to make sure your mixture is thoroughly and smoothly beaten together.
Remove the apple mixture from the pan and put it aside. Add a bit more butter to the pan, and turn the heat up to medium high. When the butter is melted, pour in the eggs and swirl the pan to coat evenly. As you go, lift the edges so the uncooked egg can run under and cook.
When the omelet is mostly firm, place the cheese on top of one half. Add the apple mixture to that half. When the cheese is melted, slide the omelet out of the pan and onto a plate.
In my opinion, this works equally well as a nice breakfast, or a quick dinner. |
05/07/2010
A doggy post...
This is *not* a beauty related post… so stop reading now if that’s what you’re looking for!
Had a couple of friends who were wondering about the reference to dogs in my previous post, Who let the dogs out.
Well, the short answer is that Dayna made a set of mineral e/s specially for one of the Voxers and named them after her dogs. So Scully, Jack, Popo and Gigi are all dogs. Gettit?
Speaking of dogs… my mafia wars clan on facebook is having a ‘family feud’. So my team, the Monkey Junkies, is up against the Toto’s Terrors… the sheer amount of trash talking via pictures on my clan’s public FB page is amusing… here’s a link if you wanna have a look.
Here’s one in particular that totally amused me.
and finally to wrap up this doggy post… here’s a cute smiling dog! He makes me wanna smile right back at him. |
The rat ortholog of the presumptive flounder antifreeze enhancer-binding protein is a helicase domain-containing protein.
The expression of winter flounder liver-type antifreeze protein (wflAFP) genes is tissue-specific and under seasonal and hormonal regulation. The only intron of the major wflAFP gene was demonstrated to be a liver-specific enhancer in both mammalian cell lines and flounder hepatocytes. Element B, the core enhancer sequence, was shown to interact specifically with a liver-enriched transcription factor, CCAAT/enhancer-binding protein alpha (C/EBPalpha), as well as a presumptive antifreeze enhancer-binding protein (AEP). In this study, the identity of the rat AEP ortholog was revealed via its DNA-protein interaction with element B. It is a helicase-domain-containing protein, 988 amino acids in length, and is homologous to mouse Smubp-2, hamster Rip-1 and human Smubp-2. The specific binding between element B and AEP was confirmed by South-Western analysis and gel retardation assays. Residues in element B important to this interaction were identified by methylation interference assays. Mutation on one of the residues disrupted the binding between element B and AEP and its enhancer activity was significantly reduced, suggesting that AEP is essential for the transactivation of the wflAFP gene intron. The rat AEP is ubiquitously expressed in various tissues, and the flounder homolog is present as shown by genomic Southern analysis. The potential role of AEP in regulating the flounder AFP gene expression is discussed. |
Lal Bakhsh
Lal Bakhsh (born 4 November 1943) is a former Pakistani cyclist. He competed in the team pursuit event at the 1964 Summer Olympics.
References
Category:1943 births
Category:Living people
Category:Pakistani male cyclists
Category:Olympic cyclists of Pakistan
Category:Cyclists at the 1964 Summer Olympics
Category:Place of birth missing (living people) |
Fred Plaut
Frederick "Fred" Plaut (1907-1985) was a recording engineer and amateur photographer. He was employed by Columbia Records in the US during the 1940s, 1950s, and 1960s, eventually becoming the label's chief engineer. Plaut engineered the sessions for many of Columbia's famous albums, including the original cast recordings of South Pacific, My Fair Lady, and West Side Story, the jazz LPs Kind of Blue and Sketches of Spain by Miles Davis, Time Out by Dave Brubeck, and Mingus Ah Um and Mingus Dynasty by Charles Mingus.
Early life
Frederick ("Fred") Plaut was born in Munich, Germany, on May 12, 1907. He graduated from the Technical University of Munich with a degree in electrical engineering. From 1933 to 1940, Plaut lived in Paris, where he founded and operated his own recording studio. At the same time, he worked as a consulting engineer for Polydor Records, where he designed and built a complete recording installation.
In Paris, Fred Plaut met his future wife, Rose Kanter, a Polish-American soprano pursuing vocal studies in France. They were married on September 24, 1938. She performed in France, England, Belgium, the Netherlands, and Italy, returning to the United States in June 1940, just as Paris was falling into Nazi hands. She continued her singing career in the United States under the name Rose Dercourt, making her American debut at Town Hall in April 1944. She was a close friend of Francis Poulenc, who dedicated some of his songs to her and maintained a steady correspondence with her until his death in 1963.
Engineer for Columbia Records
Fred Plaut came to the United States in January 1940 and in April of that same year began his career as a recording engineer with Columbia Records. He recorded the majority of the Columbia Masterworks series and many sessions of the Philadelphia, Cleveland, Chicago, Minneapolis, Louisville, and New York orchestras. He recorded almost all of the cast albums of Broadway shows, operas, and dramatic plays for Columbia and other labels. Plaut also recorded many chamber music and solo performances, as well as popular and jazz sessions. His work took place in the Columbia recording studios and on location at such events as the Newport Jazz Festival and the Marlboro Festival. He received five Grammy Awards and six nominations for engineering.
While still with Columbia Records, Plaut gave several extension courses in The Art of Recording for the Manhattan School of Music. After his retirement from Columbia in 1972, Plaut joined the staff of the Yale School of Music as consultant and Senior Recording Engineer and in 1977 began teaching classes in the Art of Recording. In 1975, Plaut taught Music in Modern Media at Columbia University.
Photography
Plaut's second career was as a photographer. His work as a recording engineer for Columbia Records allowed him many opportunities to photograph recording artists in the studios and on location while they were relaxing, performing, or listening to playback of recording sessions. The result is thousands of candid portraits of the great conductors, orchestras, soloists, chamber players, popular and jazz musicians, actors, and writers. Some of these artists include Samuel Barber, Leonard Bernstein, the Budapest String Quartet, Pablo Casals, Aaron Copland, Zino Francescatti, Glenn Gould, Mieczyslaw Horszowski, the Juilliard String Quartet, Dimitri Mitropoulos, Eugene Ormandy, Richard Rodgers, Alexander Schneider, Rudolf Serkin, Isaac Stern, Igor Stravinsky, George Szell, Joseph Szigeti, Edgard Varèse, and Bruno Walter.
Fred and Rose Plaut were in the mainstream of New York City's musical life. They frequently attended or hosted dinner parties. The Plaut penthouse received such guests as Ned Rorem, Virgil Thomson, Henri Sauguet, Carlos Surinach, Igor Stravinsky, George Balanchine, Edgard Varèse, and Vittorio Rieti. These social occasions also allowed Plaut to take many candid photographs.
Fred Plaut also took many candid photographs on his frequent vacations with Rose to such countries as France, Italy, Spain, India, Mexico, and Israel. These photos included not only typical tourist pictures, but also the famous personalities they encountered. The travel photos include candid portraits of Francis Poulenc, Pierre Bernac, Alberto Moravia, Pablo Picasso, Eugene Berman, and Janet Flanner.
Plaut also had opportunities to take posed portraits of artists for publicity purposes. His photos have been used for brochures, flyers, posters, and concert programs by the Juilliard String Quartet, the Kalichstein-Laredo-Robinson Trio, the Albaneri Trio, Bethany Beardslee, Lehman Engel, and Ettore Stratta.
Plaut's photographs have been exhibited at several museums, including seven exhibits at the Museum of Modern Art in New York City, and have appeared in numerous major American and foreign magazines. Many book illustrations, book covers, and some eighty record album covers are to his credit. In a departure from his music imagery, his charming photograph of a stout Frenchman in slippers playing draughts on a street bench against a young girl was chosen by Edward Steichen, curator of The Family of Man, the record-breaking 1955 MoMA show that toured the world and was seen by 9 million visitors. A selection of Plaut photographs was published as "The Unguarded Moment: A Photographic Interpretation" (Englewood Cliffs, N.J.: Prentice-Hall, c1964). This contains over one hundred of his finest portraits, as well as short biographical sketches of the subjects.
References
External links
The Frederick and Rose Plaut Papers at Irving S. Gilmore Music Library, Yale University
Category:American audio engineers
Category:1907 births
Category:1987 deaths
Category:German record producers
Category:Technical University of Munich alumni
Category:People from Munich
Category:German emigrants to the United States
Category:20th-century American engineers |
Tag Archives: HTML
Ever make changes to a client’s website – whether to the stylesheet or the jQuery powering the site – then send a link and ask them to refresh? It happens a fair amount for clients that love to see fresh changes. And it happens here in the Focus97 kingdom as well. I make changes to…
The short answer leans closer and closer to “bad” as the years move on. But I’ll explain…
From the perspective of a photographer and graphic designer, there are a number of things to be said about HTML vs. Flash.
The Pros: dynamic aesthetics are under full control by the programmer (with HTML, you’re limited to static pages, for the most part). The Cons: Google (or Yahoo, MSN, Ask.com, etc.) doesn’t recognize anything inside Flash, it simply recognizes the Flash object itself. With HTML pages, Google will recognize all the jpegs and text, and index it that way. |
Monoclonal antibodies against ricin: effects on toxin function.
Monoclonal antibodies against ricin toxin were produced and some of their properties investigated. Antibodies 196 C12 and 197 C7, raised against A-chain, reacted with a CNBr fragment probably comprising amino acids 254 to 262. Antibodies 193 A9, 196 A3, and 191 B7 recognized a 6-7 kDa CNBr peptide. A second set of antibodies was raised against whole inactivated ricin. Most of them bound in a solid-phase radioimmunobinding assay only to ricin, and a few had low activity against purified A-chain. Different effects were noted on toxin action in cultured leukemic cells. If cells were preincubated with ricin followed by antibodies, MAbs 207 E5 and 216 B3 had a strong enhancing effect on toxin action. If antibodies and toxin were mixed and then added to sensitive cells, antibody 207 E5 gave strong protection while 216 B3 maintained its enhancing activity. The effect of antibody 216 B3 was further investigated by quantitative cloning experiments, which showed a fivefold enhancement of toxin activity after preincubation with this antibody. Binding of fluoresceinated ricin to leukemic target cells was inhibited by a preincubation with antibody 207 E5, while antibody 216 B3 had no effect.
Q:
Blink LEDs while switches are pressed in Tiva C
I'm trying to make the PF0 and PF4 LEDs blink while the switches are pressed, but it simply does not turn on any LEDs.
I was told I needed to use two ports. I don't see why, since this can be done with only one port (in this case, port D), but it was suggested that I use port K as well (?).
The board is a Tiva C TM4C1294NCPDT.
#include <stdint.h>
#include <stdbool.h>
#include "inc/hw_memmap.h"
#include "driverlib/debug.h"
#include "driverlib/gpio.h"
#include "driverlib/sysctl.h"
#include <inc/tm4c1294ncpdt.h>
uint32_t SW1,SW2; //
int main(void) {
while(1){
SYSCTL_RCGCGPIO_R=0X1100; // Enable port D
GPIO_PORTD_DIR_R=0X03; //enable the GPIO pin PN0,
GPIO_PORTD_DEN_R=0X03;
GPIO_PORTK_AHB_DIR_R=0;
GPIO_PORTK_AHB_DEN_R=0X03;
GPIO_PORTK_AHB_PUR_R=0X01;
SW1 = GPIO_PORTD_DATA_R&0x10; // read PF4 into SW1
SW2 = GPIO_PORTD_DATA_R&0x01; // read PF0 into SW2
if (!SW1 && !SW2) { // both pressed
GPIO_PORTD_DATA_R = 0x04;
} else if (!SW1) { // SW1 pressed
GPIO_PORTD_DATA_R = 0x02;
} else if (!SW2) { // SW2 pressed
GPIO_PORTD_DATA_R = 0x08;
} else { // neither
GPIO_PORTD_DATA_R = 0x00;
}
}
}
A:
You have only enabled D0 and D1, but appear to be using D0, D1, D2, D3 and D4.
You have set D0 and D1 as output, but appear to be using D1, D2, D3 as outputs.
You have set D0 as an output, but attempt to read it as an input.
The configuration of PORTK is entirely irrelevant if you are not using it.
The RCGCGPIO enables the clock for PORTN and PORTJ which you are not using at all.
I am not familiar with the part and have only briefly read the data sheet, but the PORTD clock, direction and digital enable configuration should be as follows if the input/output code is itself correct.
SYSCTL_RCGCGPIO_R = 0x0008; // Enable port D clock
GPIO_PORTD_DIR_R = 0x0E; // D4, D0 input, D1 to D3 output.
GPIO_PORTD_DEN_R = 0x1F; // Enable D0 to D4
These initialisation settings need to be done only once - before the loop, not inside it.
int main(void)
{
SYSCTL_RCGCGPIO_R = 0x0008; // Enable port D
GPIO_PORTD_DIR_R = 0x0E; // D4, D0 input, D1 to D3 output.
GPIO_PORTD_DEN_R = 0x1F; // Enable D0 to D4
for(;;)
{
uint32_t SW1 = GPIO_PORTD_DATA_R & 0x10; // read PD4 into SW1
uint32_t SW2 = GPIO_PORTD_DATA_R & 0x01; // read PD0 into SW2
if (!SW1 && !SW2) // both pressed
{
GPIO_PORTD_DATA_R = 0x04;
}
else if (!SW1) // SW1 pressed
{
GPIO_PORTD_DATA_R = 0x02;
}
else if (!SW2) // SW2 pressed
{
GPIO_PORTD_DATA_R = 0x08;
}
else // neither
{
GPIO_PORTD_DATA_R = 0x00;
}
}
}
After thought:
The comments in the presumably copy-and-pasted code suggest the board might be an EK-TM4C1294XL. In that case the LEDs are called D1, D2, D3, D4 (D for diode, not _PORTD), but are on GPIOs PN1, PN0, PF4 and PF0 respectively, and the switches are on PJ0 and PJ1.
In that case perhaps the following will be more successful:
int main(void)
{
SYSCTL_RCGCGPIO_R |= (1<<5 | 1<<8 | 1<<12); // Enable port F, J and N clocks
GPIO_PORTN_DIR_R |= 0x03; // PN1 = LED0, PN0 = LED1 (Outputs)
GPIO_PORTN_DEN_R |= 0x03; // Enable PN0 and PN1
GPIO_PORTF_DIR_R |= 0x11; // PF4 = LED3, PF0 = LED4 (Outputs)
GPIO_PORTF_DEN_R |= 0x11; // Enable PF0 and PF4
GPIO_PORTJ_DIR_R &= ~0x03; // PJ0 = SW1, PJ1 = SW2 (Inputs)
GPIO_PORTJ_DEN_R |= 0x03; // Enable PJ0 and PJ1 (digital enable must be set for inputs too)
GPIO_PORTJ_PUR_R |= 0x03; // Enable pull-ups; the on-board switches pull to ground
for(;;)
{
uint32_t SW1 = GPIO_PORTJ_DATA_R & 0x01; // read PJ0 into SW1
uint32_t SW2 = GPIO_PORTJ_DATA_R & 0x02; // read PJ1 into SW2
if (!SW1 && !SW2) // both pressed
{
GPIO_PORTF_DATA_R = 0x01; // LED4
}
else if (!SW1) // SW1 pressed
{
GPIO_PORTF_DATA_R = 0x10; // LED3
}
else if (!SW2) // SW2 pressed
{
GPIO_PORTN_DATA_R = 0x01; // LED2
}
else // neither
{
GPIO_PORTN_DATA_R = 0x02; // LED1
}
}
}
This remains broken because the code only ever switches LEDs on, and it does not respect other hardware that may be connected to other pins on ports F and N; you need to add code to read-modify-write the respective pins: for each LED you set, you need to clear the other three. I'll leave that to you - it goes beyond the original question.
|
{
"name": "@aws-solutions-constructs/aws-cloudfront-s3",
"version": "1.64.1",
"description": "CDK Constructs for AWS Cloudfront to AWS S3 integration.",
"main": "lib/index.js",
"types": "lib/index.d.ts",
"repository": {
"type": "git",
"url": "https://github.com/awslabs/aws-solutions-constructs.git",
"directory": "source/patterns/@aws-solutions-constructs/aws-cloudfront-s3"
},
"author": {
"name": "Amazon Web Services",
"url": "https://aws.amazon.com",
"organization": true
},
"license": "Apache-2.0",
"scripts": {
"build": "tsc -b .",
"lint": "eslint -c ../eslintrc.yml --ext=.js,.ts . && tslint --project .",
"lint-fix": "eslint -c ../eslintrc.yml --ext=.js,.ts --fix .",
"test": "jest --coverage",
"clean": "tsc -b --clean",
"watch": "tsc -b -w",
"integ": "cdk-integ",
"integ-no-clean": "cdk-integ --no-clean",
"integ-assert": "cdk-integ-assert",
"jsii": "jsii",
"jsii-pacmak": "jsii-pacmak",
"build+lint+test": "npm run jsii && npm run lint && npm test && npm run integ-assert",
"snapshot-update": "npm run jsii && npm test -- -u && npm run integ-assert"
},
"jsii": {
"outdir": "dist",
"targets": {
"java": {
"package": "software.amazon.awsconstructs.services.cloudfronts3",
"maven": {
"groupId": "software.amazon.awsconstructs",
"artifactId": "cloudfronts3"
}
},
"dotnet": {
"namespace": "Amazon.Constructs.AWS.CloudfrontS3",
"packageId": "Amazon.Constructs.AWS.CloudfrontS3",
"signAssembly": true,
"iconUrl": "https://raw.githubusercontent.com/aws/aws-cdk/master/logo/default-256-dark.png"
},
"python": {
"distName": "aws-solutions-constructs.aws-cloudfront-s3",
"module": "aws_solutions_constructs.aws_cloudfront_s3"
}
}
},
"dependencies": {
"@aws-cdk/core": "~1.64.1",
"@aws-cdk/aws-cloudfront": "~1.64.1",
"@aws-cdk/aws-s3": "~1.64.1",
"@aws-cdk/aws-lambda": "~1.64.1",
"@aws-solutions-constructs/core": "~1.64.1",
"constructs": "^3.0.4"
},
"devDependencies": {
"@aws-cdk/assert": "~1.64.1",
"@types/jest": "^24.0.23",
"@types/node": "^10.3.0"
},
"jest": {
"moduleFileExtensions": [
"js"
]
},
"peerDependencies": {
"@aws-cdk/core": "~1.64.1",
"@aws-cdk/aws-cloudfront": "~1.64.1",
"@aws-cdk/aws-s3": "~1.64.1",
"@aws-solutions-constructs/core": "~1.64.1",
"constructs": "^3.0.4",
"@aws-cdk/aws-lambda": "~1.64.1"
}
}
|
Risk stratification by guidelines compared with risk assessment by risk equations applied to a MONICA sample.
The World Health Organization/International Society of Hypertension (WHO/ISH) Hypertension Guidelines from 1999 propose a risk stratification scheme for estimating absolute risk for cardiovascular disease (CVD). Risk equations estimated by statistical methods are another way of predicting cardiovascular risk. We studied the differences between these two approaches when applied to the same set of individuals with high blood pressure. The two northernmost counties in Sweden (NSW) constitute one of the centres in the WHO MONICA (monitoring trends and determinants in cardiovascular disease) Project. Three population surveys have been carried out in 1986, 1990 and 1994, and were used to estimate a risk equation for predicting the 10-year risk of fatal/non-fatal stroke and myocardial infarction. Another MONICA sample from 1999, a total of 5997 subjects, was classified according to the recent WHO/ISH risk stratification scheme. A risk assessment was also performed, by using the risk equations from the NSW MONICA sample and Framingham risk equations. The agreement between the two methods was good when the values obtained from the risk equation were averaged for each risk group obtained from the risk classification by guidelines. However, if the predicted risk for each individual was considered, the agreement was poor for the medium and high-risk groups. Although the average risk for all individuals is the same, many subjects have a higher risk or a lower risk than predicted by guidelines. Risk classification by the 1999 WHO/ISH Hypertension Guidelines is not accurate and detailed enough for medium- and high-risk patients, which could be of clinical importance in the medium risk group. |
Q:
Doing a secondary sort by year in a SQL query
So I have a table that with ID values and dates that looks something like this:
ID Date
0001 1/1/2012
0002 1/2/2010
0002 1/2/2011
0001 1/1/2011
0001 1/1/2010
0002 1/2/2012
Basically, the ID values are unique to that year only - they reset the subsequent year.
I want to be able to sort by ID values and by dates, but I want to do the sorting so that the values are ordered by year. Just a regular sort of ID with a secondary date sort yields this:
ID Date
0001 1/1/2010
0001 1/1/2011
0001 1/1/2012
0002 1/2/2010
0002 1/2/2011
0002 1/2/2012
But I would like a query that generates a table that looks like this:
ID Date
0001 1/1/2010
0002 1/2/2010
0001 1/1/2011
0002 1/2/2011
0001 1/1/2012
0002 1/2/2012
Is this possible?
A:
How about this:
order by year(date), id
|
BEIJING — It was only after her partner’s death that He Meili realized the full meaning of marriage.
As a lesbian couple in China, He and Li Qin kept their ties largely unspoken, sometimes introducing themselves as cousins. This rarely bothered He until Li succumbed to complications from lupus in 2016, and Li’s parents demanded that He hand over the deed for their apartment and other property documents under Li’s name.
He, a 51-year-old nonprofit worker in southern China’s Guangzhou city, has joined LGBT activists and supporters in an appeal to lawmakers to allow same-sex marriage, using a state-sanctioned channel to skirt recent government moves to suppress collective action.
“I realized if LGBT people don’t have the right to marry, we have no legal protections,” she said. “Others will also experience what I did — and be left with nothing.”
Under Chinese President Xi Jinping, space for civil society and advocacy has shrunk. Human rights activists and their lawyers have been detained, while internet censorship has increased.
LGBT activists have turned to a novel tactic: submitting statements to the National People’s Congress, China’s legislature, which is soliciting opinions from the public on a draft of the “Marriage and Family” portion of the Civil Code through Friday.
“A lot of people told me that this is the first time they’ve participated in the legal process,” said Peng Yanzi, director of LGBT Rights Advocacy China, one of several groups running the campaign.
The Marriage and Family section is among six draft regulations for which the legislature began seeking comments at the end of October. As of Thursday afternoon, the website showed that more than 200,000 suggestions had been submitted either online or by mail, the greatest number of any of the outstanding drafts. It was not clear what proportion of the suggestions pertained to same-sex marriage.
In social media posts, campaign participants held up their Express Mail Service envelopes along with rainbow Pride flags. In their suggestions, they shared stories of coming out, the challenge of gaining family members’ acceptance and running into legal roadblocks when trying to share their lives with someone of the same sex.
A teacher wrote about experiencing discrimination at his workplace; others wrote about not being allowed to make medical decisions for their ailing partners.
“This is not just a symbolic gesture,” Peng said. “It really has an impact on our everyday lives.”
Peng’s organization has outlined a desired revision to the language in the Civil Code, changing the terms throughout from “husband and wife” to “spouses” and from “men and women” to “the two parties.” Rather than adding specific language about same-sex marriage, the revisions seek to eliminate gendered terms from the legislation.
While activists and experts acknowledge that legalizing same-sex marriage is still a far-off reality in China, they said appeals through the official channel will push the government to take the demand more seriously.
“There’s a near-zero chance the suggested changes will be accepted and implemented, but this campaign makes China’s LGBT community’s demands for equality harder to ignore,” said Darius Longarino, a senior fellow at Yale Law School’s Paul Tsai China Center who has worked on legal reform programs promoting LGBT rights in China.
“Calls for gay marriage often get dismissed as being too marginal and unimportant to get onto the political agenda, or as being inconsistent with Chinese traditional culture,” Longarino said.
Few legal protections are available for same-sex couples in China. One party can apply to be the other’s legal guardian, but those accompanying rights are just a fraction of those enjoyed by married couples, Longarino said. He gave the example of a lesbian woman who bears a child in China, with no way for her partner to become a second legally recognized parent of that baby.
At a briefing in August, a spokesman for the National People’s Congress Standing Committee’s Legislative Affairs Commission suggested that same-sex marriage does not suit Chinese society.
“China’s current marriage system is built on the basis of a man and a woman becoming husband and wife,” said Zang Tiewei, director of the commission’s research department, when asked whether same-sex marriage will be legalized.
“This regulation is in line with China’s national conditions and historical and cultural traditions,” Zang said. “As far as I know, at the moment most countries in the world don’t recognize the legality of same-sex marriage.”
LGBT advocates have garnered growing support from the Chinese public, using social media to raise awareness even as they face frequent censorship. They won a victory over the censors in April 2018, when one of the country’s top social networking sites backtracked on a plan to restrict content related to LGBT issues. Users flooded Weibo with hashtags such as “#I’mGayNotaPervert” after the Twitter-like platform said “pornographic, violent or gay” subject matter would be reviewed.
But misconceptions and discrimination persist. A 2015 survey by the Beijing LGBT Center found that 35% of mental health professionals in a sample group of nearly 1,000 believed that being gay is a mental illness. Around the same percentage supported the use of conversion therapy. When Bohemian Rhapsody, the hit biopic about Queen lead singer Freddie Mercury, came to China, viewers were treated to a version without any references to Mercury’s sexuality or his struggle with AIDS.
Hua Zile, the chief editor of an LGBT-focused Weibo account with 1.69 million followers, said he hasn’t publicized the same-sex marriage campaign on his microblog because he worries about the dispiriting effect it will have on the LGBT community when it inevitably fails.
“We can’t reach the sky in a single leap,” Hua said. “We should try to make progress step-by-step, or else we’ll constantly be disappointed.”
After He’s partner passed away, it pained her to think about how they kept their status in the shadows.
Through their 12-year relationship, it was He who accompanied Li on doctor’s visits. She stayed with her at the hospital when lupus made her nauseous and delirious with fever, and she helped her reach their fourth-floor walk-up after her legs grew weak.
In He’s mind, they were married. But in reality, many people didn’t even know they were dating.
Friends told He that she could file a lawsuit to recover some of her and Li’s shared property. She hired a lawyer to start the process, which required painstaking documentation of their relationship and signed statements from their neighbors and friends attesting to their long-term bond.
“It was like tearing open a wound over and over again,” He said. “I had to keep coming out about my sexuality. If we were married, all of this would be understood.”
In the end, He gave up on the lawsuit. It was too exhausting, she said, to have to prove their love to everyone. |
NADPH-dependent drug redox cycling and lipid peroxidation in microsomes from human term placenta.
1. NADPH-dependent iron and drug redox cycling, as well as lipid peroxidation process were investigated in microsomes isolated from human term placenta. 2. Paraquat and menadione were found to undergo redox cycling, catalyzed by NADPH:cytochrome P-450 reductase in placental microsomes. 3. The drug redox cycling was able to initiate microsomal lipid peroxidation in the presence of micromolar concentrations of iron and ethylenediaminetetraacetate (EDTA). 4. Superoxide was essential for the microsomal lipid peroxidation in the presence of iron and EDTA. 5. Drastic peroxidative conditions involving superoxide and prolonged incubation in the presence of iron were found to destroy flavin nucleotides, inhibit NADPH:cytochrome P-450 reductase and inhibit propagation step of lipid peroxidation. 6. Reactive oxo-complex formed between iron and superoxide is proposed as an ultimate species for the initiation of lipid peroxidation in microsomes from human term placenta as well as for the destruction of flavin nucleotides and inhibition of NADPH:cytochrome P-450 reductase as well as for impairment of promotion of lipid peroxidation under drastic peroxidative conditions. |
Australian Research Council (ARC)
The Final Report must justify why any publications from a Project have not been deposited in appropriate repositories within 12 months of publication. The Final Report must outline how data arising from the Project has been made publicly accessible where appropriate. (Section 13.3.2)
National Health and Medical Research Council (NHMRC)
The revised Funding Agreement of the National Health and Medical Research Council states:
If required by an NHMRC policy about the dissemination of research findings, the Administering Institution must deposit any publication resulting from a Research Activity, and its related data, in an appropriate subject and/or open access repository (such as the Australian Consortium for Social and Political Research Inc. archive or databases listed under the National Centre for Biotechnology Information) in accordance with the timeframe and other requirements set out in that policy. (Paragraph 12.9)
As of 1 July 2012 the NHMRC requires any publication arising from an NHMRC supported research project be deposited into an open access institutional repository within a twelve month period from the date of publication.
In a February 2012 editorial article about open access, the CEO of the NHMRC stated that: |
Transfusions of leukocyte-rich erythrocyte concentrates: a successful treatment in selected cases of habitual abortion.
Forty-nine women who suffered recurrent spontaneous abortion of unknown etiology were studied for cellular reactivity and blocking antibody in one-way mixed lymphocyte culture before and after their receipt of three transfusions of leukocyte-rich erythrocyte concentrates from third-party donors. Those 38 of the 49 women who had no blocking antibody all developed significant blocking activity after the transfusion series. Twenty-five of them have become pregnant since, and only one aborted again. The blocking activity demonstrable in 11 of 49 women was increased after the transfusions. Subsequently five of them became pregnant, and all aborted again. We later found that these five women, who considered themselves to be in perfect health, all had serologic signs of autoimmune disease. We advise against transfusion treatment of women who habitually abort without preceding immunologic investigation, because a population of habitual aborters may contain women with yet undiagnosed autoimmune disease, who would be worse off after blood transfusion. We conclude from our results that a selected population of habitual aborters, that is, those without blocking antibody, benefits from transfusion treatment. |
Updated: May 28, 2019 07:34 IST
In a ‘thank you’ letter, Samajwadi Party (SP) patriarch Mulayam Singh Yadav has credited workers of his party, as well as those from the Bahujan Samaj Party (BSP) for his victory from Mainpuri Lok Sabha constituency. The three-time former chief minister won the seat for the fifth time, albeit with his lowest victory margin.
While Singh won the seat by 360,000 votes in 2014, his victory margin in 2019 was 94,000 votes.
According to political observers, it was the alliance that helped Singh. In 2014, the BSP candidate pitted against him had polled 140,000 votes.
Initially, Singh had not been keen on the alliance struck by his son and party president Akhilesh Yadav. Eventually, Singh was fielded as the gathbandhan candidate for Mainpuri, and even attended a joint rally with BSP supremo Mayawati.
In a letter addressed to Mainpuri voters, as well as BSP and SP workers, Singh wrote: “I am thankful to the people of Mainpuri, the place that had been my ‘Karmbhoomi’, for letting me represent it for the fifth time. I am thankful to all from the bottom of my heart. I will do everything possible to take Mainpuri’s development to newer heights.”
“I express my gratitude to all the workers of SP and BSP for working this hard to ensure my victory,” the letter stated.
Singh wrote the letter on his official Member of Parliament letterhead. In the pre-poll alliance, of which the Rashtriya Lok Dal was also a part, SP contested 37 seats, the BSP fought on 38 seats, while the RLD contested three seats. However, the BSP won 10 seats, while the SP bagged only five. Three members of Yadav’s family — Dimple Yadav (Kannauj), Dharmendra Yadav (Badaun) and Akshaya Yadav (Firozabad) — lost the election, and the RLD drew a blank.
The state has 80 seats, of which the BJP won 62 seats. Its ally, Apna Dal, won two seats. |
Federal relations event advises campus community on how to advocate for UCLA
Nearly 100 students, staff, faculty and alumni gathered at the UCLA Faculty Center to learn how they can best advocate for funding in light of President Trump’s budget proposal. (photos by Jonathan Van Dyke)
A little guidance and planning can go a long way toward being an effective advocate for UCLA in light of what’s happening in Washington, D.C., those attending a special briefing by UCLA Government and Community Relations learned recently.
The briefing was held because the campus community has shown increased interest in responding to the proposed budget by the Trump administration. Nearly 100 UCLA students, faculty and alumni attended the May 30 event, including Dean Jayathi Murthy of the UCLA Henry Samueli School of Engineering and Applied Science and Laura Gómez, the interim dean of the Division of Social Sciences in the UCLA College.
“We help manage relationships with elected officials and federal agencies on behalf of the campus, but we can’t do this alone,” said Francisco Carrillo, executive director of federal relations at UCLA, part of UCLA Government and Community Relations. “We’ll only be successful with the involvement of all of you in this room and the larger Bruin community.”
Marjorie Duske
The newly proposed budget for fiscal year 2018, which must still be deliberated by Congress, includes some severe cuts to non-defense spending — money that supports student aid, loan programs and research grants — in order to pay for the president’s proposed increase in defense spending. Trump’s budget proposes to cut the Department of Education’s budget by $9 billion, the National Institutes of Health by $7 billion and the National Science Foundation by $819 million. And it seeks to eliminate both the National Endowment for the Humanities and the National Endowment for the Arts as well as several other programs important to UCLA.
Federal relations staff members said they are eager to partner with faculty and staff who have come forward with their concerns. “We don’t want to prevent engagements, but we can help support them, and we can provide helpful policy updates and timely information, including what discussions have already taken place and what congressional members’ priorities are,” Carrillo said.
Panel members discussing effective advocacy included Marjorie Duske, director of science and technology with the University of California Federal Governmental Relations office in Washington, D.C., and David Pomerantz, former Democratic staff director on the U.S. House Committee on Appropriations.
They emphasized the need for varied and continued communications with elected officials. “They need to hear what their constituents believe and think,” Pomerantz said. “The more you engage, the more you are likely to have an influence on the system.” The House subcommittee on the budget will take public comments, for example.
David Pomerantz
The advocacy fight will be long and complex, Pomerantz predicted, but that is why it will help to have the federal relations team working in concert with faculty, administrators and students, he said. House members are expected to mark up their appropriations bills this month, and the Senate in July.
If the Trump proposal is passed by Congress, Pomerantz warned, “This would be the lowest non-defense budget — as a percentage of the GDP — since records began in the 1960s.” Funding needs for research and education, among others not in the defense budget, will be ignored, he said. “Universities are uniquely engaged in so many activities that fall under [the non-defense budget]. It’s not going to be adopted as a single piece, but all of the elements are out there, and you need to make sure [elected officials] know that it is not going to work.”
At the UC level, Duske said, the outpouring of concern has been heartening and might have even contributed to the successful passing of an omnibus bill for 2017 that was more positive for research funding. “The long game, when it comes to federal advocacy, is always important,” she noted.
“Even though there is so much uncertainty, we don’t stop or wait around,” she said. “We have a mission. We need to have the conversations with people who don’t necessarily agree with us. That can be scary, or hard, but there are [Congressional] members with their doors open. It’s about building the relationship in an appropriate way, and challenging information that might not be accurate.”
Francisco Carrillo
Audience members asked the panel how to best present facts about the UC, whether faculty face time in Washington, D.C., was an effective approach and how to reach elected officials who do not share the UC’s values or are from districts out of state.
The panel implored audience members not to be discouraged if they meet with staff members rather than a member of Congress. These staffers hold influence with their elected officials and could become long-term allies as they move up the political ladder, Duske noted.
As for advocacy and presenting UCLA in the best light, Carrillo advised, “You want to do your homework. We have to work with members’ offices to really support them and their priorities, as well as ours. And most of the time, those [relationships] are mutually beneficial.”
He also said that UCLA and UC are already looking to leverage relationships with other higher education institutions outside of California’s borders, along with nonresident students and alumni who are important constituents of key committee chairs from other states.
Visit the Federal Advocacy Toolkit website to learn more about federal issues impacting UCLA and sign up to be a UCLA advocate so the federal relations team can help you prepare and deliver your messages to policymakers in Washington, D.C.
Visit Facebook and Flickr for all photos from the event and reception. |
Q:
Accessing state value from StyleSheet.create in ReactNative
I'm new to React Native and I have the following components:
<View style={{flex: 1}}>
<View style={styles.container}>
<Animated.Image style= {styles.photo}
source={{uri: this.props.user.picture.large}}/>
</View>
<View style={{backgroundColor:'whitesmoke', flex: 1}}></View>
</View>
In the styles, I have the following:
const styles = StyleSheet.create({
container: {
flex: 0.5,
backgroundColor: '#4dbce9',
alignItems: 'center',
},
photo: {
height: 150,
width: 150,
borderRadius: 75,
borderWidth: 3,
borderColor: 'white',
transform: [ // `transform` is an ordered array
{scale: this.state.profileImgBounceVal},
]
}
});
I get an error when I access this.state.profileImgBounceVal since, as I know, it's outside the component. Is there any workaround for this other than including the styles inline in the Animated.Image tag?
A:
You could use StyleSheet.flatten() to create a reusable style object in your component:
var animatedImageStyle = StyleSheet.flatten([
styles.photo,
{
transform: [{scale:this.state.profileImgBounceVal}]
}
])
<View style={{flex: 1}}>
<View style={styles.container}>
<Animated.Image style={animatedImageStyle}
source={{uri: this.props.user.picture.large}}/>
</View>
<View style={{backgroundColor:'whitesmoke', flex: 1}}></View>
</View>
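Outside a React Native environment, the merging that StyleSheet.flatten performs can be modelled with Object.assign: it combines an array of style objects left-to-right into one plain object, with later entries winning. The sketch below is a simplified model for illustration, not the actual React Native implementation:

```javascript
// Simplified model of StyleSheet.flatten: merge an array of style
// objects left-to-right into a single plain object (later entries win).
// This is an illustrative sketch, not the real React Native API.
function flatten(styleArray) {
  return Object.assign({}, ...styleArray);
}

// Static style, as it would come from StyleSheet.create:
const photo = {
  height: 150,
  width: 150,
  borderRadius: 75,
};

// Merging the static style with a dynamic transform, as the answer does
// inside the component where this.state is available:
const merged = flatten([photo, { transform: [{ scale: 1.2 }] }]);

console.log(merged.borderRadius); // 75
console.log(merged.transform[0].scale); // 1.2
```

In practice you can also skip flatten entirely and pass an array to the style prop, since React Native merges style arrays the same way: `style={[styles.photo, {transform: [{scale: this.state.profileImgBounceVal}]}]}`.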
|
// RUN: llvm-mc -filetype=obj -triple x86_64-pc-linux-gnu %s -o - | elf-dump --dump-section-data | FileCheck %s
f:
.cfi_startproc
nop
.cfi_restore %rbp
nop
.cfi_endproc
// CHECK: # Section 4
// CHECK-NEXT: (('sh_name', 0x00000011) # '.eh_frame'
// CHECK-NEXT: ('sh_type', 0x00000001)
// CHECK-NEXT: ('sh_flags', 0x0000000000000002)
// CHECK-NEXT: ('sh_addr', 0x0000000000000000)
// CHECK-NEXT: ('sh_offset', 0x0000000000000048)
// CHECK-NEXT: ('sh_size', 0x0000000000000030)
// CHECK-NEXT: ('sh_link', 0x00000000)
// CHECK-NEXT: ('sh_info', 0x00000000)
// CHECK-NEXT: ('sh_addralign', 0x0000000000000008)
// CHECK-NEXT: ('sh_entsize', 0x0000000000000000)
// CHECK-NEXT: ('_section_data', '14000000 00000000 017a5200 01781001 1b0c0708 90010000 14000000 1c000000 00000000 02000000 0041c600 00000000')
// CHECK-NEXT: ),
// CHECK-NEXT: # Section 5
// CHECK-NEXT: (('sh_name', 0x0000000c) # '.rela.eh_frame'
// CHECK-NEXT: ('sh_type', 0x00000004)
// CHECK-NEXT: ('sh_flags', 0x0000000000000000)
// CHECK-NEXT: ('sh_addr', 0x0000000000000000)
// CHECK-NEXT: ('sh_offset', 0x0000000000000390)
// CHECK-NEXT: ('sh_size', 0x0000000000000018)
// CHECK-NEXT: ('sh_link', 0x00000007)
// CHECK-NEXT: ('sh_info', 0x00000004)
// CHECK-NEXT: ('sh_addralign', 0x0000000000000008)
// CHECK-NEXT: ('sh_entsize', 0x0000000000000018)
// CHECK-NEXT: ('_relocations', [
// CHECK-NEXT: # Relocation 0
// CHECK-NEXT: (('r_offset', 0x0000000000000020)
// CHECK-NEXT: ('r_sym', 0x00000002)
// CHECK-NEXT: ('r_type', 0x00000002)
// CHECK-NEXT: ('r_addend', 0x0000000000000000)
// CHECK-NEXT: ),
// CHECK-NEXT: ])
// CHECK-NEXT: ),
|
Quitting smoking is difficult enough without also having to worry about the weight one seems destined to gain. Katahn (The T-Factor Diet) says at least two-thirds of the people who quit smoking will gain between 10 and 12 pounds. Part of the problem: cigarette smoking really is an aid to weight management, because it reduces one's desire to eat by directly affecting body chemistry. So it's important, he says, to increase one's metabolic rate to compensate for nicotine's metabolic effects. He advises that whenever the urge to smoke occurs after quitting, it's important to get up and move around. Brief physical activity not only reduces the urge to smoke, but can provide the lift that otherwise might have been obtained from nicotine. And yet, increased physical activity may not be enough. Katahn recommends reducing fat and carbohydrate intake without making any radical changes in one's regular diet, because he believes undertaking a traditional weight-reduction diet at such a time can actually lead to weight gain. To identify fats and carbohydrates, he provides an extensive calorie/fat/carbohydrate/gram counter and reduced-calorie menus. He also advocates deep relaxation and meditation techniques that will help curb those ``I've just got to have a cigarette'' moments. (Nov.) |
How to find Painting Adelaide professionals?
So you have decided to go ahead with your Painting Adelaide, and you've begun your search for a professional house painter to do the work? The most important thing you need to look at here is how to choose a professional who is worth it. You may get references from your family members, friends, colleagues, etc.
Begin your search
There are a number of ways to start your search for a professional house painter.
1. Internet
The internet is the most common place for people to look for services as well as products. Most people use search engines like Yahoo or Google to find a plumber, house painter, handyman or electrician to offer an estimate.
But search engines do not tell the full story, and they do not distinguish between trustworthy, reputable and professional Painters Adelaide and people who just try to make some quick money and leave the project halfway, which would cost you a bomb. But there is no need to give up hope.
There are a number of online resources which will help you narrow down the field and weed out the shysters. Some of these include Google Places, Yelp, Angie's List and Kudzu. Bear in mind that not every reputable house painter is listed on these websites, just as not every dishonest painter will be. But these websites are a very good barometer for judging the ones that are listed.
They will tell you the type of work you may expect from these professionals. Most reputable painters encourage their clients to post their experience online so that other potential customers feel comfortable using their services.
2. Getting references from your neighbours or friends
This is often the most reliable way of selecting a painter. Referrals are also a cost-efficient way for the painter to bring in new business, so it is in the painter's best interest to offer good-quality work at reasonable prices, to stand behind that work, and to build a track record of doing so.
3. Better Business Bureau
The Better Business Bureau is also a valuable resource for determining whether a painter will live up to your expectations. House painters who belong to this organisation must agree to resolve customer queries and complaints, carry the right insurance, and conduct business in a professional manner.
Apart from making this commitment to the bureau, every business receives a letter grade based on its complaints, the time it has been in business and the size of the company.
Look for a company that has a good rating, no pending complaints and a long history in the business. Bear in mind that even good companies get complaints, sometimes from customers who have unrealistic expectations or who constantly file issues and complaints to grab attention.
Conclusion
Looking for Painting Adelaide professionals can be a difficult task, but by following a few simple steps you can make it much easier. |
Forgive us, then, if we don’t get too excited by breathless media reports tonight that Romney’s son Tagg wanted to “take a swing” at the president during Tuesday night’s debate. Asked by North Carolina radio host Bill Lumaye what it felt like to hear Obama call his father a liar, Tagg Romney said it made him want to jump out of his seat, “rush down to the debate stage and take a swing at him.” (Audio is available at the link above; it’s well worth a listen to hear just how “serious” this threat was.)
Media figures on the Left, though, suddenly have no stomach for the sort of violent rhetoric we thought had died with Sarah Palin’s candidacy and her talk of “targeting” seats. Someone, fetch the smelling salts!
The New Republic’s editor is aghast at such barbarism from new money like Tagg.
Classy bunch, these sons of candidates: one jokes about Kenya, another about rushing the stage to punch the president. #argumentforestatetax
Just how much traction will Tagg’s remark get in the media? Is the Obama campaign, along with its lapdog surrogates, willing to give up the golden goose that is “binders” for a day and put their money on this faux outrage instead? Conservatives say, “Please do.”
98% of American men upon hearing that Tagg Romney is protective of his father -> “Um… okay? Who isn’t?” #tcot |
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en-US">
<head profile="http://gmpg.org/xfn/11">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Admin 0 – Boundary Lines | Natural Earth</title>
<link rel="shortcut icon" href="favicon.ico" type="image/x-icon">
<link rel="alternate" type="application/rss+xml" title="Natural Earth RSS Feed" href="http://www.naturalearthdata.com/feed/" />
<link rel="pingback" href="http://www.naturalearthdata.com/xmlrpc.php" />
<script type="text/javascript" src="http://www.naturalearthdata.com/wp-content/themes/NEV/includes/js/suckerfish.js"></script>
<!--[if lt IE 7]>
<script src="http://ie7-js.googlecode.com/svn/version/2.0(beta3)/IE7.js" type="text/javascript"></script>
<script defer="defer" type="text/javascript" src="http://www.naturalearthdata.com/wp-content/themes/NEV/includes/js/pngfix.js"></script>
<![endif]-->
<link rel="stylesheet" href="http://www.naturalearthdata.com/wp-content/themes/NEV/style.css" type="text/css" media="screen" />
<!-- All in One SEO Pack 2.3.2.3 by Michael Torbert of Semper Fi Web Designob_start_detected [-1,-1] -->
<meta name="description" content="Country boundaries on land and offshore. About Land boundaries and Pacific grouping boxes. Marine boundaries and marine indicator lines" />
<link rel="canonical" href="http://www.naturalearthdata.com/downloads/110m-cultural-vectors/110m-admin-0-boundary-lines/" />
<!-- /all in one seo pack -->
<link rel="alternate" type="application/rss+xml" title="Natural Earth » Admin 0 – Boundary Lines Comments Feed" href="http://www.naturalearthdata.com/downloads/110m-cultural-vectors/110m-admin-0-boundary-lines/feed/" />
<script type="text/javascript">
window._wpemojiSettings = {"baseUrl":"https:\/\/s.w.org\/images\/core\/emoji\/72x72\/","ext":".png","source":{"concatemoji":"http:\/\/www.naturalearthdata.com\/wp-includes\/js\/wp-emoji-release.min.js?ver=4.4.11"}};
!function(a,b,c){function d(a){var c,d,e,f=b.createElement("canvas"),g=f.getContext&&f.getContext("2d"),h=String.fromCharCode;return g&&g.fillText?(g.textBaseline="top",g.font="600 32px Arial","flag"===a?(g.fillText(h(55356,56806,55356,56826),0,0),f.toDataURL().length>3e3):"diversity"===a?(g.fillText(h(55356,57221),0,0),c=g.getImageData(16,16,1,1).data,g.fillText(h(55356,57221,55356,57343),0,0),c=g.getImageData(16,16,1,1).data,e=c[0]+","+c[1]+","+c[2]+","+c[3],d!==e):("simple"===a?g.fillText(h(55357,56835),0,0):g.fillText(h(55356,57135),0,0),0!==g.getImageData(16,16,1,1).data[0])):!1}function e(a){var c=b.createElement("script");c.src=a,c.type="text/javascript",b.getElementsByTagName("head")[0].appendChild(c)}var f,g;c.supports={simple:d("simple"),flag:d("flag"),unicode8:d("unicode8"),diversity:d("diversity")},c.DOMReady=!1,c.readyCallback=function(){c.DOMReady=!0},c.supports.simple&&c.supports.flag&&c.supports.unicode8&&c.supports.diversity||(g=function(){c.readyCallback()},b.addEventListener?(b.addEventListener("DOMContentLoaded",g,!1),a.addEventListener("load",g,!1)):(a.attachEvent("onload",g),b.attachEvent("onreadystatechange",function(){"complete"===b.readyState&&c.readyCallback()})),f=c.source||{},f.concatemoji?e(f.concatemoji):f.wpemoji&&f.twemoji&&(e(f.twemoji),e(f.wpemoji)))}(window,document,window._wpemojiSettings);
</script>
<style type="text/css">
img.wp-smiley,
img.emoji {
display: inline !important;
border: none !important;
box-shadow: none !important;
height: 1em !important;
width: 1em !important;
margin: 0 .07em !important;
vertical-align: -0.1em !important;
background: none !important;
padding: 0 !important;
}
</style>
<link rel='stylesheet' id='bbp-child-bbpress-css' href='http://www.naturalearthdata.com/wp-content/themes/NEV/css/bbpress.css?ver=2.5.8-5815' type='text/css' media='screen' />
<!-- This site uses the Google Analytics by Yoast plugin v5.4.6 - Universal enabled - https://yoast.com/wordpress/plugins/google-analytics/ -->
<script type="text/javascript">
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','__gaTracker');
__gaTracker('create', 'UA-10168306-1', 'auto');
__gaTracker('set', 'forceSSL', true);
__gaTracker('send','pageview');
</script>
<!-- / Google Analytics by Yoast -->
<link rel='https://api.w.org/' href='http://www.naturalearthdata.com/wp-json/' />
<link rel="EditURI" type="application/rsd+xml" title="RSD" href="http://www.naturalearthdata.com/xmlrpc.php?rsd" />
<link rel="wlwmanifest" type="application/wlwmanifest+xml" href="http://www.naturalearthdata.com/wp-includes/wlwmanifest.xml" />
<link rel='prev' title='Admin 1 – States, Provinces' href='http://www.naturalearthdata.com/downloads/110m-cultural-vectors/110m-admin-1-states-provinces/' />
<link rel='next' title='Admin 0 – Details' href='http://www.naturalearthdata.com/downloads/110m-cultural-vectors/110m-admin-0-details/' />
<meta name="generator" content="WordPress 4.4.11" />
<link rel='shortlink' href='http://www.naturalearthdata.com/?p=1547' />
<link rel="alternate" type="application/json+oembed" href="http://www.naturalearthdata.com/wp-json/oembed/1.0/embed?url=http%3A%2F%2Fwww.naturalearthdata.com%2Fdownloads%2F110m-cultural-vectors%2F110m-admin-0-boundary-lines%2F" />
<link rel="alternate" type="text/xml+oembed" href="http://www.naturalearthdata.com/wp-json/oembed/1.0/embed?url=http%3A%2F%2Fwww.naturalearthdata.com%2Fdownloads%2F110m-cultural-vectors%2F110m-admin-0-boundary-lines%2F&format=xml" />
<script type="text/javascript">
/* <![CDATA[ */
var ajaxurl = 'http://www.naturalearthdata.com/wp-admin/admin-ajax.php';
/* ]]> */
</script>
<!-- begin gallery scripts -->
<link rel="stylesheet" href="http://www.naturalearthdata.com/wp-content/plugins/featured-content-gallery/css/jd.gallery.css.php" type="text/css" media="screen" charset="utf-8"/>
<link rel="stylesheet" href="http://www.naturalearthdata.com/wp-content/plugins/featured-content-gallery/css/jd.gallery.css" type="text/css" media="screen" charset="utf-8"/>
<script type="text/javascript" src="http://www.naturalearthdata.com/wp-content/plugins/featured-content-gallery/scripts/mootools.v1.11.js"></script>
<script type="text/javascript" src="http://www.naturalearthdata.com/wp-content/plugins/featured-content-gallery/scripts/jd.gallery.js.php"></script>
<script type="text/javascript" src="http://www.naturalearthdata.com/wp-content/plugins/featured-content-gallery/scripts/jd.gallery.transitions.js"></script>
<!-- end gallery scripts -->
<script type="text/javascript">
window._se_plugin_version = '8.1.4';
</script>
<link href="http://www.naturalearthdata.com/wp-content/themes/NEV/css/default.css" rel="stylesheet" type="text/css" />
<style type="text/css">.recentcomments a{display:inline !important;padding:0 !important;margin:0 !important;}</style>
<style type="text/css">.broken_link, a.broken_link {
text-decoration: line-through;
}</style><!--[if lte IE 7]>
<link rel="stylesheet" type="text/css" href="http://www.naturalearthdata.com/wp-content/themes/NEV/ie.css" />
<![endif]-->
<script src="http://www.naturalearthdata.com/wp-content/themes/NEV/js/jquery-1.2.6.min.js" type="text/javascript" charset="utf-8"></script>
<script>
jQuery.noConflict();
</script>
<script type="text/javascript" charset="utf-8">
$(function(){
var tabContainers = $('div#maintabdiv > div');
tabContainers.hide().filter('#comments').show();
$('div#maintabdiv ul#tabnav a').click(function () {
tabContainers.hide();
tabContainers.filter(this.hash).show();
$('div#maintabdiv ul#tabnav a').removeClass('current');
$(this).addClass('current');
return false;
}).filter('#comments').click();
});
</script>
<script type="text/javascript" language="javascript" src="http://www.naturalearthdata.com/dataTables/media/js/jquery.dataTables.js"></script>
<script type="text/javascript" charset="utf-8">
$(document).ready(function() {
$('#ne_table').dataTable();
} );
</script>
</head>
<body>
<div id="page">
<div id="header">
<div id="headerimg">
<h1><a href="http://www.naturalearthdata.com/"><img src="http://www.naturalearthdata.com/wp-content/themes/NEV/images/nev_logo.png" alt="Natural Earth" title="Natural Earth" /></a></h1>
<div class="description">Free vector and raster map data at 1:10m, 1:50m, and 1:110m scales</div>
<div class="header_search"><form method="get" id="searchform" action="http://www.naturalearthdata.com/">
<label class="hidden" for="s">Search for:</label>
<div><input type="text" value="" name="s" id="s" />
<input type="submit" id="searchsubmit" value="Search" />
</div>
</form>
</div>
<!--<div class="translate_panel" style="align:top; margin-left:650px; top:50px;">
<div id="google_translate_element" style="float:left;"></div>
<script>
function googleTranslateElementInit() {
new google.translate.TranslateElement({
pageLanguage: 'en'
}, 'google_translate_element');
}
</script>
<script src="http://translate.google.com/translate_a/element.js?cb=googleTranslateElementInit"></script>
</div>-->
</div>
</div>
<div id="pagemenu" style="align:bottom;">
<ul id="page-list" class="clearfix"><li class="page_item page-item-4"><a href="http://www.naturalearthdata.com/">Home</a></li>
<li class="page_item page-item-10"><a href="http://www.naturalearthdata.com/features/">Features</a></li>
<li class="page_item page-item-12 page_item_has_children"><a href="http://www.naturalearthdata.com/downloads/">Downloads</a></li>
<li class="page_item page-item-6 current_page_parent"><a href="http://www.naturalearthdata.com/blog/">Blog</a></li>
<li class="page_item page-item-14"><a href="http://www.naturalearthdata.com/forums">Forums</a></li>
<li class="page_item page-item-366"><a href="http://www.naturalearthdata.com/corrections">Corrections</a></li>
<li class="page_item page-item-16 page_item_has_children"><a href="http://www.naturalearthdata.com/about/">About</a></li>
</ul>
</div>
<hr /> <div id="main">
<div id="content" class="narrowcolumn">
« <a href="http://www.naturalearthdata.com/downloads/110m-cultural-vectors/">1:110m Cultural Vectors</a>
« <a href="http://www.naturalearthdata.com/downloads/">Downloads</a>
<div class="post" id="post-1547">
<h2>Admin 0 – Boundary Lines</h2>
<div class="entry">
<div class="downloadPromoBlock">
<div style="float: left; width: 170px;"><img class="alignleft size-thumbnail wp-image-92" title="home_image_3" src="http://www.naturalearthdata.com/wp-content/uploads/2009/09/boundaries_thumb.png" alt="home_image_3" width="150" height="97" /></div>
<div style="float: left; width: 410px;"><p><em>Country boundaries on land and offshore.</em></p>
<div class="download-link-div">
<a href="http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/cultural/ne_110m_admin_0_boundary_lines_land.zip" class="download-link" rel="nofollow" title="Downloaded 114946 times (Shapefile, geoDB, or TIFF format)" onclick="if (window.urchinTracker) urchinTracker ('http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/cultural/ne_110m_admin_0_boundary_lines_land.zip'); __gaTracker('send', 'event', 'download', 'http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/cultural/ne_110m_admin_0_boundary_lines_land.zip');">Download country boundaries</a> <span class="download-link-span">(43.68 KB) version 4.0.0</span>
</div>
<div class="download-link-div">
<a href="http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/cultural/ne_110m_admin_0_pacific_groupings.zip" class="download-link" rel="nofollow" title="Downloaded 2656 times (Shapefile, geoDB, or TIFF format)" onclick="if (window.urchinTracker) urchinTracker ('http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/cultural/ne_110m_admin_0_pacific_groupings.zip'); __gaTracker('send', 'event', 'download', 'http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/cultural/ne_110m_admin_0_pacific_groupings.zip');">Download Pacific grouping lines</a> <span class="download-link-span">(17.59 KB) version 4.0.0</span>
</div>
<p><span id="more-1547"></span></p></div>
</div>
<div>
<img class="alignnone size-full wp-image-1901" title="pacific_banner" src="http://www.naturalearthdata.com/wp-content/uploads/2009/09/pacific_banner.png" alt="pacific_banner" width="580" height="150" srcset="http://www.naturalearthdata.com/wp-content/uploads/2009/09/pacific_banner-300x77.png 300w, http://www.naturalearthdata.com/wp-content/uploads/2009/09/pacific_banner.png 580w" sizes="(max-width: 580px) 100vw, 580px" /></p>
<p><strong>About</strong></p>
<p>Land boundaries and Pacific grouping boxes. Marine boundaries and marine indicator lines are deliberately omitted for the 110m scale.</p>
<p><strong>Issues</strong></p>
<p>None known.</p>
<p><strong>Version History</strong></p>
<ul>
<li>
<a href="http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/cultural/ne_110m_admin_0_boundary_lines_land.zip" onclick="__gaTracker('send', 'event', 'download', 'http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/cultural/ne_110m_admin_0_boundary_lines_land.zip');" rel="nofollow" title="Download version 4.0.0 of ne_110m_admin_0_boundary_lines_land.zip">4.0.0</a>
</li>
<li>
<a rel="nofollow" title="Download version 2.0.0 of ne_110m_admin_0_boundary_lines_land.zip" href="http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/cultural/ne_110m_admin_0_boundary_lines_land.zip?version=2.0.0">2.0.0</a>
</li>
<li>
1.4.0
</li>
<li>
1.1.0
</li>
<li>
1.0.0
</li>
</ul>
<p><a href="https://github.com/nvkelso/natural-earth-vector/blob/master/CHANGELOG" onclick="__gaTracker('send', 'event', 'outbound-article', 'https://github.com/nvkelso/natural-earth-vector/blob/master/CHANGELOG', 'The master changelog is available on Github »');">The master changelog is available on Github »</a>
</div>
</div>
</div>
</div>
<div id="sidebar">
<ul><li id='text-5' class='widget widget_text'><h2 class="widgettitle">Stay up to Date</h2>
<div class="textwidget"> Know when a new version of Natural Earth is released by subscribing to our <a href="http://www.naturalearthdata.com/updates/" class="up-to-date-link" >announcement list</a>.</div>
</li></ul><ul><li id='text-2' class='widget widget_text'><h2 class="widgettitle">Find a Problem?</h2>
<div class="textwidget"><div>
<div style="float:left; width:65px;"><a href="/corrections/index.php?a=add"><img class="alignleft" title="New Ticket" src="http://www.naturalearthdata.com/corrections/img/newticket.png" alt="" width="60" height="60" /></a></div><div class="textwidget" style="float:left;width:120px; font-size:1.2em; font-size-adjust:none; font-style:normal;
font-variant:normal; font-weight:normal; line-height:normal;">Submit suggestions and bug reports via our <a href="/corrections/index.php?a=add">correction system</a> and track the progress of your edits.</div>
</div></div>
</li></ul><ul><li id='text-3' class='widget widget_text'><h2 class="widgettitle">Join Our Community</h2>
<div class="textwidget"><div>
<div style="float:left; width:65px;"><a href="/forums/"><img src="http://www.naturalearthdata.com/wp-content/uploads/2009/08/green_globe_chat_bubble_562e.png" alt="forums" title="Chat in the forum!" width="50" height="50" /></a></div><div class="textwidget" style="float:left;width:120px; font-size:1.2em; font-size-adjust:none; font-style:normal;
font-variant:normal; font-weight:normal; line-height:normal;">Talk back and discuss Natural Earth in the <a href="/forums/">Forums</a>.</div>
</div></div>
</li></ul><ul><li id='text-4' class='widget widget_text'><h2 class="widgettitle">Thank You</h2>
<div class="textwidget">Our data downloads are generously hosted by Florida State University.</div>
</li></ul><ul><li id='bbp_topics_widget-3' class='widget widget_display_topics'><h2 class="widgettitle">Recent Forum Topics</h2>
<ul>
<li>
<a class="bbp-forum-title" href="http://www.naturalearthdata.com/forums/topic/natural-earth-in-wagner-vii/">Natural Earth in Wagner VII</a>
</li>
<li>
<a class="bbp-forum-title" href="http://www.naturalearthdata.com/forums/topic/downloads-are-404ing/">Downloads are 404ing</a>
</li>
<li>
<a class="bbp-forum-title" href="http://www.naturalearthdata.com/forums/topic/disputed-territories-type-field/">Disputed Territories: "type" field</a>
</li>
<li>
<a class="bbp-forum-title" href="http://www.naturalearthdata.com/forums/topic/iso-code-confusion/">ISO code confusion</a>
</li>
<li>
<a class="bbp-forum-title" href="http://www.naturalearthdata.com/forums/topic/bad-adm1name-encoding-in-version-3-0-0-and-missing-diacritics-in-name/">Bad ADM1NAME, encoding in version 3.0.0 and missing diacritics in NAME</a>
</li>
<li>
<a class="bbp-forum-title" href="http://www.naturalearthdata.com/forums/topic/u-s-county-shape-file-2/">U.S. County Shape File</a>
</li>
<li>
<a class="bbp-forum-title" href="http://www.naturalearthdata.com/forums/topic/projection-proportion-compatibility/">Projection / Proportion / Compatibility?</a>
</li>
<li>
<a class="bbp-forum-title" href="http://www.naturalearthdata.com/forums/topic/download-urls-double-slash/">Download URLs – double slash</a>
</li>
<li>
<a class="bbp-forum-title" href="http://www.naturalearthdata.com/forums/topic/map-soft-writer-me/">map soft – writer: me</a>
</li>
<li>
<a class="bbp-forum-title" href="http://www.naturalearthdata.com/forums/topic/unicode-encoding-issue-ne_10m_lakes-dbf/">Unicode encoding issue – ne_10m_lakes.dbf</a>
</li>
</ul>
</li></ul><ul><li id='bbpresswptweaks_login_links_widget-3' class='widget bbpresswptweaks_login_links_widget'><h2 class="widgettitle">Forum Login</h2>
<div class="bbp-template-notice">
<a href="http://www.naturalearthdata.com/wp-login.php?redirect_to=/downloads/110m-cultural-vectors/110m-admin-0-boundary-lines/" rel="nofollow">Log in</a>
- or -
<a href="http://www.naturalearthdata.com/wp-login.php?action=register" rel="nofollow">Register</a>
</div></li></ul> </div>
</div>
<hr />
<div id="footer">
<div id="footerarea">
<div id="footerlogos">
<p>Supported by:</p>
<div class="footer-ad-box">
<a href="http://www.nacis.org" target="_blank"><img src="http://www.naturalearthdata.com/wp-content/themes/NEV/images/nacis.png" alt="NACIS" /></a>
</div>
<div class="footer-ad-box">
<a href="http://www.cartotalk.com" target="_blank"><img src="http://www.naturalearthdata.com/wp-content/themes/NEV/images/cartotalk_ad.png" alt="Cartotalk" /></a>
</div>
<div class="footer-ad-box">
<a href="http://www.mapgiving.org" target="_blank"><img src="http://www.naturalearthdata.com/wp-content/themes/NEV/images/mapgiving.png" alt="Mapgiving" /></a>
</div>
<div class="footer-ad-box">
<a href="http://www.geography.wisc.edu/cartography/" target="_blank"><img src="http://www.naturalearthdata.com/wp-content/themes/NEV/images/wisconsin.png" alt="University of Wisconsin Madison - Cartography Dept." /></a>
</div>
<div class="footer-ad-box">
<a href="http://www.shadedrelief.com" target="_blank"><img src="http://www.naturalearthdata.com/wp-content/themes/NEV/images/shaded_relief.png" alt="Shaded Relief" /></a>
</div>
<div class="footer-ad-box">
<a href="http://www.xnrproductions.com " target="_blank"><img src="http://www.naturalearthdata.com/wp-content/themes/NEV/images/xnr.png" alt="XNR Productions" /></a>
</div>
<p style="clear:both;"></p>
<div class="footer-ad-box">
<a href="http://www.freac.fsu.edu" target="_blank"><img src="http://www.naturalearthdata.com/wp-content/themes/NEV/images/fsu.png" alt="Florida State University - FREAC" /></a>
</div>
<div class="footer-ad-box">
<a href="http://www.springercartographics.com" target="_blank"><img src="http://www.naturalearthdata.com/wp-content/themes/NEV/images/scllc.png" alt="Springer Cartographics LLC" /></a>
</div>
<div class="footer-ad-box">
<a href="http://www.washingtonpost.com" target="_blank"><img src="http://www.naturalearthdata.com/wp-content/themes/NEV/images/wpost.png" alt="Washington Post" /></a>
</div>
<div class="footer-ad-box">
<a href="http://www.redgeographics.com" target="_blank"><img src="http://www.naturalearthdata.com/wp-content/themes/NEV/images/redgeo.png" alt="Red Geographics" /></a>
</div>
<div class="footer-ad-box">
<a href="http://kelsocartography.com/blog " target="_blank"><img src="http://www.naturalearthdata.com/wp-content/themes/NEV/images/kelso.png" alt="Kelso Cartography" /></a>
</div>
<p style="clear:both;"></p>
<div class="footer-ad-box">
<a href="http://www.avenza.com" target="_blank"><img src="http://www.naturalearthdata.com/wp-content/themes/NEV/images/avenza.png" alt="Avenza Systems Inc." /></a>
</div>
<div class="footer-ad-box">
<a href="http://www.stamen.com" target="_blank"><img src="http://www.naturalearthdata.com/wp-content/themes/NEV/images/stamen_ne_logo.png" alt="Stamen Design" /></a>
</div>
</div>
<p style="clear:both;"></p>
<span id="footerleft">
© 2009 - 2017. Natural Earth. All rights reserved.
</span>
<span id="footerright">
<!-- Please help promote WordPress and simpleX. Do not remove -->
<div>Powered by <a href="http://wordpress.org/">WordPress</a></div>
<div><a href="http://www.naturalearthdata.com/wp-admin">Staff Login »</a></div>
</span>
</div>
</div>
<script type='text/javascript' src='http://www.naturalearthdata.com/wp-includes/js/wp-embed.min.js?ver=4.4.11'></script>
</body>
</html>
|
Salicylate inhibits LDL oxidation initiated by superoxide/nitric oxide radicals.
Simultaneously produced superoxide/nitric oxide radicals (O2*-/NO*) can form peroxynitrite (OONO-), which has been found to cause atherogenic, i.e. oxidative, modification of LDL. Aromatic hydroxylation and nitration of the aspirin metabolite salicylate by OONO- has been reported. We therefore tested whether salicylate may be able to protect LDL from oxidation by O2*-/NO* by scavenging the reactive decomposition products of OONO-. When LDL was exposed to simultaneously produced O2*-/NO* using the sydnonimine SIN-1, salicylate exerted an inhibitory effect on LDL oxidation as measured by TBARS and lipid hydroperoxide formation and by the alteration in electrophoretic mobility of LDL. The cytotoxic effect of SIN-1 pre-oxidised LDL on endothelial cells was also diminished when salicylate was present during SIN-1 treatment of LDL. Spectrophotometric analysis revealed that salicylate was converted to dihydroxybenzoic acid (DHBA) derivatives in the presence of SIN-1. 2,3- and 2,5-DHBA were even more effective at protecting LDL from oxidation by O2*-/NO*. Because O2*-/NO* can occur in vivo, the results may indicate that salicylate could act as an efficacious inhibitor of O2*-/NO*-initiated atherogenic LDL modification, thus further supporting the rationale of aspirin medication regarding cardiovascular diseases. |
The execution steps in most research, development, and engineering experiments generally involve manual operations carried out on unconnected technology platforms. The scientist or engineer works in what are essentially isolated technology islands, with manual operations providing the only bridges. To illustrate, when there is a Standard Operating Practice (SOP) Guide for the experimental work, it is often an electronic document, for example in Microsoft Word. The experimental plan (Step 1) within the SOP Guide has to be transferred to the target device (instrument, instrument platform, or component module) for execution (Step 2) by manually re-keying the experiment into the device's instrument control program (ICP)—the device's controlling application software. In a few cases the statistical analysis of results (Step 3a) can be done within the ICP, but it is most often done within a separate statistical analysis software package or spreadsheet program such as Microsoft Excel. This also requires manually transferring the results data from the ICP to the analysis software package. Reporting of results (Step 3b) is usually carried out in Microsoft Word, and therefore requires the manual transfer of all results tables and graphs from the separate statistical analysis software package. The manual operations within the general execution sequence steps are presented below. The isolated technology islands are illustrated in FIGS. 1 and 2.
FIG. 1 illustrates the manual tools and operations involved in carrying out a research and development experiment. In this work a statistical experiment design protocol is first generated, via step 12. This protocol is developed manually and off-line using non-validated tools such as Microsoft Word. The protocol then must be approved, once again manually and off-line, via step 14. When required, sample amounts are then calculated using non-validated tools such as Microsoft Excel, via step 16. Thereafter the samples are prepared, via step 18 and the experiment is run on a target device, via step 20, for example, a high-performance liquid chromatograph (HPLC). Running the experiment requires manually re-constructing the statistical design within the target device's ICP. When this software does not exist, or does not allow for full instrument control, the experiment must be carried out in a fully manual mode by manually adjusting instrument settings between experiment runs.
FIG. 2 illustrates the manual tools and operations involved in analyzing the data and reporting the results of the research and development experiment, via step 22. The analysis and reporting of data is accomplished by first statistically analyzing and interpreting the experiment data, off-line, using non-validated tools such as Microsoft Excel. Next, it is determined whether or not there is a need for more experiments, possibly using off-line generic Design of Experiments (DOE) software, via step 24. Then, data are entered and a report is written, via step 26. Finally, the report is archived, via step 28. As is seen from the above, the research, development, and engineering experimentation process involves a series of activities that are currently conducted in separate “technology islands” that require manual data exchanges among the tools that are used for each activity. However, until now, no overarching automation technology exists that brings together all the individual activities under a single integrated-technology platform that is adapted to multiple devices and data systems.
Method development activities encompass the planning and experimental work involved in developing and optimizing an analytical method for its intended use. These activities are often captured in company Standard Operating Procedure (SOP) documents that may incorporate Food and Drug Administration (FDA) and International Conference on Harmonization (ICH) requirements and guidances. Method development SOP documents include a description of all aspects of the method development work for each experiment type (e.g. early phase analytical column screening, late phase method optimization, method robustness) within a framework of three general execution sequence steps: (1) experimental plan, (2) instrumental procedures, and (3) analysis and reporting of results. The individual elements within these three general steps are presented below.

Step 1: Generate Experimental Plan
- Select experiment type
- Select target instrument
- Define study variables:
  - analyte concentrations
  - instrument parameters
  - environmental parameters
- Specify number of levels per variable
- Specify number of preparation replicates per sample
- Specify number of injections per preparation replicate
- Integrate standards
- Include system suitability injections
- Define Acceptance Criteria

Step 2: Construct Instrumental Procedures
- Define required transformations of the experiment plan into the native file or data formats of the instrument's controlling ICP software (construction of Sample Sets and Method Sets or Sequence and Method files)
- Specify number of injections (rows)
- Specify type of each injection (e.g., sample, standard)

Step 3: Analyze Data and Report Results
- Specify analysis calculations and report content and format
- Carry out numerical analyses
- Compare analysis results to acceptance criteria (FDA & ICH requirements)
- Specify graphs and plots that should accompany the analysis
- Construct graphs and plots
- Compile final report
The execution steps in analytical method development generally involve manual operations carried out on unconnected technology platforms. To illustrate, an SOP Guide for the development of an HPLC analytical method is often an electronic document in Microsoft Word. The experimental plan (Step 1) within the SOP Guide has to be transferred to the HPLC instrument for execution (Step 2) by manually re-keying the experiment into the instrument platform's ICP—in the case of an HPLC this is typically referred to as a chromatography data system (CDS). In a few cases the statistical analysis of results (Step 3) can be performed within the CDS, but it is most often carried out within a separate statistical analysis software package or spreadsheet program such as Microsoft Excel. This also requires manually transferring the results data from the CDS to the analysis software package. Reporting of results (Step 3) is usually carried out in Microsoft Word, and therefore requires the manual transfer of all results tables and graphs from the separate statistical analysis software package. The manual operations within the three general execution sequence steps are presented below.

Step 1—Experimental Plan
- Development plan developed in Microsoft Word.
- Experimental design protocol developed in off-line DOE software.

Step 2—Instrumental Procedures
- Manually build the Sequences or Sample Sets and instrument methods in the CDS.
- Raw peak (x, y) data reduction calculations performed by the CDS (e.g. peak area, resolution, retention time, concentration).

Step 3a—Statistical Analysis
- Calculated results manually transferred from the CDS to Microsoft Excel.
- Statistical analysis usually carried out manually in Microsoft Excel.
- Some graphs generated manually in Microsoft Excel, some obtained from the CDS.

Step 3b—Reporting of Results
- Reports manually constructed from template documents in Microsoft Word.
- Graphs and plots manually integrated into report document.
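Step 1, the experimental plan, is the portion most amenable to programmatic generation. As a minimal sketch, assuming hypothetical factor names and levels (none are taken from the SOP description), a full-factorial plan can be enumerated with the standard library alone:

```python
import itertools

# Hypothetical HPLC study factors and their levels; the names and
# values are illustrative assumptions, not from the SOP text.
factors = {
    "column": ["C18", "phenyl", "C8"],
    "ph": [2.5, 4.5, 6.8],
    "flow_rate_ml_min": [0.8, 1.2],
}

def full_factorial(factors):
    """Return one dict of factor settings per experiment trial,
    covering every combination of factor levels."""
    names = list(factors)
    return [dict(zip(names, combo))
            for combo in itertools.product(*factors.values())]

plan = full_factorial(factors)
print(len(plan))   # 3 * 3 * 2 = 18 trials
print(plan[0])     # {'column': 'C18', 'ph': 2.5, 'flow_rate_ml_min': 0.8}
```

Replicates, standards, and system suitability injections would then be appended as additional rows before such a plan is handed off to the instrument's ICP.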
It is realized that prior art systems in the area do not address the overarching problem of removing the manually intensive steps required to bridge the separate technology islands. Similarly, it is also realized from the prior art that inherent data loss is known to occur in the sampling of experimental results, impacting quantitative effect estimations and thereby degrading, and typically rendering inaccurate, the statistical confidences drawn from experimental results. However, the prior art is not instructive in overcoming these problems to improve the accuracy or analyzability of experimental results and sampling, nor does the prior art teach how to overcome these deficiencies so as to develop a readily obtainable solution that compensates for inherent data loss, provides an identifiable metric for separate experimental undertakings, or provides information about resulting effects where experimental samples contain inherent data losses.
For instance, trial runs of research and development (R&D) experiments are often carried out by making changes to one or more controllable parameters of a process or system (as used herein, such parameters may include, but are not limited to, study factors, instrument settings, controllable parameters of instrumentation, a set of discrete process events, or other experimentation factors), with the remaining factors held constant (as used herein, the controlled portion of an experiment, experimental run, or trial), and then measuring test samples obtained from in-process sampling or from the process output. Typically, an objective of a researcher in these undertakings is to identify and quantify the effects of the parameter changes on the identified important process output quality attributes or performance characteristics being measured. The quantified effects can then be used to define the parameter settings that will give the desired process output results.
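To make "identify and quantify the effects of the parameter changes" concrete, here is a minimal sketch using an invented two-level, two-factor dataset (the factor names, coding, and response values are illustrative assumptions): the main effect of a parameter is the mean response at its high level minus the mean response at its low level.

```python
# Invented trials: each pairs coded settings (-1 = low, +1 = high)
# with a measured response value.
trials = [
    ({"temp": -1, "ph": -1}, 10.0),
    ({"temp": +1, "ph": -1}, 14.0),
    ({"temp": -1, "ph": +1}, 11.0),
    ({"temp": +1, "ph": +1}, 15.0),
]

def main_effect(trials, factor):
    """Mean response at the factor's high level minus at its low level."""
    high = [y for settings, y in trials if settings[factor] == +1]
    low = [y for settings, y in trials if settings[factor] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

print(main_effect(trials, "temp"))  # 4.0
print(main_effect(trials, "ph"))    # 1.0
```

In this toy dataset, changing temperature moves the response four times as much as changing pH, so the temperature setting would dominate the choice of final operating conditions.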
FIG. 3 illustrates a generalized flow diagram of a process in a predetermined process flow direction (305) consisting of four discrete elements (300): base material input (310), key reactant input (320), heating (330), and chemical reaction (340). For the avoidance of doubt, FIG. 3 and its related embodiments are foundational to the present invention herein. The flow diagram 300 also contains an in-process measurement step at 335 and a process endpoint measurement step at 350. In this generalized process 300, the base material element may have one or more controllable parameters, such as material feed rate, or may comprise two or more blended components, for example a base material formulation. In addition, the measurements of the quality attributes or performance characteristics of interest may actually be taken within the process stream, as would be done by an in-process measurement system, or on the process output material.
The process 300 of FIG. 3 can similarly be analogized to a chemical separation process performed by instrumentation such as an HPLC. FIG. 4A is demonstrative of such an adaptation of the general process flow diagram 300 of FIG. 3 to that of an HPLC. In FIG. 4A, the flow diagram 400 comprises three primary HPLC process elements: solvent delivery (410), sample injection (420), and a separation chamber (430).
In FIG. 4A, method development experiments may be performed on controllable parameters within the HPLC to identify the parameter settings that are optimum for the separation of a given mixture of compounds. In such experiments, one critical performance characteristic being measured, for example, may be the degree of separation of the mixture into isolated pure individual compounds, as is further defined by the legend at 440. However, and more particularly in typical practical applications such as those within the pharmaceutical industry, the active pharmaceutical ingredient (API) and one or more degradants and/or impurities in a drug product often represents a normal mixture of compounds for which an HPLC method must be developed. As is known from practical applications under traditional methods, accurately measuring the amount of API in a test sample (or actual sample) with an HPLC would require that the instrument first separate the API from the degradants and/or impurities.
As used herein, the term “impurities” is defined to include, but not be limited to, components of the drug product formulation, which may also be termed excipients, or contaminants that come from various points or stages in the process or even from the product packaging of an affected product or sample. For example, an impurity may be a plastic compound from a product container that contaminates the surface of a drug tablet. By further example, a test sample may be a dissolved tablet (i.e., the solid dosage form of the drug product) that contains the API and impurities. As used herein, the term “degradants” is defined as breakdown products of a sample API or impurity, i.e., molecules which result from the decomposition of the API or impurity. As an additional example, a test sample may be a dissolved tablet (i.e., the solid dosage form of the drug product) that is subjected to stress conditions intended to force the degradation of the API and impurity compounds in the sample.
Therefore, a critical HPLC method development experiment objective in a traditional practice application may include identifying the instrument operating conditions that separate the API from any degradants and impurities in a test mixture to the degree required (i.e., accuracy level) to accurately measure the API amount. Further, in separation method development experiments, for example, some of the HPLC parameter settings used in the experiment trials can result in the inability to accurately measure a critical performance characteristic, such as compound separation. These issues are known to be a significant challenge for researchers and commercial entities alike.
The consequence of these limitations, realized by many in the field, is inherent data loss in one or more experiment trials, which can in turn result in the inability to quantitatively analyze the experiment results and draw any meaningful conclusions.
FIG. 4B depicts an instrument hardware framework 450 associated with an HPLC instrument system. The HPLC framework 450 comprises several process elements with controllable parameters that can be experimentally addressed. The process elements include: solvent formulation and solvent pH adjusted using a controllable valve module for solvent selection (CVM—Solvent Switching) (451), the solvent flow rate (Pump Module) (452), the type of separation column adjusted using a controllable valve module for column switching (CVM—Column Switching) (453), a sampler (454) and a detector (455).
For FIG. 4B, a typical experiment (i.e., method development experiment) may be comprised of conducting one or more trials where a trial consists of operating the HPLC instrument at one or more predetermined settings of the study parameters, injecting a small amount of the sample mixture into the solvent stream and measuring critical performance characteristics such as the degree of separation of the individual sample compounds at the endpoint of the process 455.
By exemplar, objectives of experimentation under the framework of FIG. 4B in view of the process set forth in FIG. 4A, may include attempting to separate out one or more APIs from impurities or degradants. In such experiments, for example, the controllable parameters of the CVM modules (451 and 453) and the Pump Module (452) may be selected for experimental study. In such experiments, CVM solvent switching parameters may be adjusted between experiment trials to deliver a solvent mix at a different pH and the results captured. In such experiments, CVM column switching parameters may also be adjusted so as to employ a different column, for example, in each experimental trial undertaken. Similarly, in such experiments, pump module parameters may be adjusted between trials to both change the rate at which the solvent formulation is changed (i.e., proportion of organic solvent increased) during a trial run and to deliver the solvent formulation at a different flow rate. However, as will become further evident, in these types of experiments, despite the objectives of experimentation including attempts to separate out one or more APIs from impurities and/or degradants by selecting predetermined controllable parameters for experimental study, the results can be inaccurate.
FIG. 4C depicts a graphical chromatogram representation 460 of experimental results data obtained from a particular trial run in one of the experimental runs under assessment herein (e.g. trial run 11), wherein the “raw” results depicted in the figure are in the form of “absorbance peaks.” A peak typically occurs when a compound absorbs light transmitted through the solvent stream and is detected as the compound passes the detector at a given time X, wherein the baseline condition represents zero absorbance of the light.
As used herein, an “absorbance peak” or “peak” generally means a vertical spike (Y axis deviation) along a horizontal line in the graph from baseline conditions (where Y=zero) occurring at a given X axis time interval. As also used herein, a compound's “retention time” is defined as the time from injection to detection, and, in the chromatogram, this time is the X-axis value corresponding to the peak's maximum Y value.
In FIG. 4C, poorly separated peaks are apparent at 461 and 462. Interpretatively, each peak in FIG. 4C corresponds to at least one compound (i.e., the API or an impurity). It should also be readily recognized that the area under a given peak is proportional to the amount of absorbed light, which is in turn proportional to the amount of the corresponding compound in the solvent stream passing the detector at the time indicated on the X axis in the chromatogram.
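The retention-time and peak-area definitions above can be computed directly from a raw (x, y) trace. Here is a minimal sketch using an invented single triangular peak (all numbers are illustrative, not experimental data):

```python
# Invented single-peak trace: time (minutes) vs. absorbance.
times      = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
absorbance = [0.0, 0.0, 0.5, 1.0, 0.5, 0.0, 0.0]

def retention_time(times, ys):
    """X value at the maximum Y, per the definition of retention time."""
    return times[max(range(len(ys)), key=ys.__getitem__)]

def peak_area(times, ys):
    """Trapezoidal integration of the trace; the area is proportional
    to the amount of the corresponding compound."""
    return sum((ys[i] + ys[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

print(retention_time(times, absorbance))  # 3.0
print(peak_area(times, absorbance))       # 2.0
```

In practice the CDS performs this data reduction; the integration must first split the trace into per-peak regions, which is exactly what fails when peaks overlap as at 461 and 462.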
However, problematically, translating the measured area of a given peak into an amount of the corresponding compound is typically accurate only where the peak in a chromatogram is the result of only one compound. As a result, accurately measuring the amount of an individual compound in a sample using traditional approaches is difficult and often impossible when two or more compounds pass through the detector at the same time due to lack of separation (i.e., 461 and 462). Unfortunately, the occurrence of two or more compounds passing through the detector at the same time due to lack of separation is quite a common event in many method development experiment instances.
To attempt to compensate for this limitation, often a primary goal of many HPLC method development experiments is to identify the instrument settings that result in a chromatogram with the following critical characteristics: (1) an observable peak being present for each compound in the sample, (2) situations where each peak is separated from all other peaks (i.e., no overlap) to a degree at least minimally necessary to accurately quantify the amount of the corresponding compound in the sample, and (3) the separation of a critical peak, often an API, from its nearest peak. The degree of separation between a given pair of adjacent compound peaks in a chromatogram is defined herein as the “peak resolution.”
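The text does not give a formula for peak resolution; a common choice (assumed here) is the USP-style definition Rs = 2(t2 - t1)/(w1 + w2), where t is retention time, w is baseline peak width, and Rs >= 1.5 is conventionally taken as baseline separation:

```python
def resolution(t1, w1, t2, w2):
    """USP-style resolution for an adjacent peak pair:
    Rs = 2 * (t2 - t1) / (w1 + w2),
    with times and widths in the same units (e.g. minutes)."""
    return 2.0 * (t2 - t1) / (w1 + w2)

# Well separated pair (invented values):
print(resolution(10.0, 0.8, 12.0, 1.2))          # 2.0
# Overlapping pair, below the 1.5 threshold:
print(resolution(10.0, 1.5, 10.8, 1.7) >= 1.5)   # False
```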
In a traditional approach to HPLC method development, the effect of instrument setting changes on the resolution of sample compounds is therefore typically relied on as being one of the most important experiment results. As a result, it is traditionally believed and practiced to carry out the following steps: a. change one or more instrument settings, inject a sample, and obtain a resulting chromatogram; b. associate each peak in the chromatogram with one of the sample compounds; c. compute the peak resolution results for all adjacent peak (compound) pairs; d. determine if the compounds are sufficiently separated, as represented by the adjacent peak pair resolution data, to accurately determine the amount of each compound in the sample to the required level of precision; and e. repeat Steps (a)-(d) above if the compounds are not sufficiently resolved.
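The iterative steps (a)-(e) above can be sketched as a loop that stops once every adjacent peak pair meets a resolution threshold. Everything below is a hypothetical stand-in (including the toy "longer gradient spreads the peaks further" rule), not an actual instrument or CDS interface:

```python
def resolutions(peaks):
    """Rs = 2*(t2 - t1)/(w1 + w2) for each adjacent (time, width) pair
    (USP-style formula, assumed here)."""
    return [2.0 * (t2 - t1) / (w1 + w2)
            for (t1, w1), (t2, w2) in zip(peaks, peaks[1:])]

def develop_method(settings, run_trial, next_settings,
                   required_rs=1.5, max_trials=20):
    for _ in range(max_trials):
        peaks = run_trial(settings)              # steps (a) and (b)
        rs = resolutions(peaks)                  # step (c)
        if all(r >= required_rs for r in rs):    # step (d)
            return settings, rs
        settings = next_settings(settings, rs)   # step (e)
    raise RuntimeError("trial budget exhausted without separation")

# Toy stand-ins: a longer gradient moves the second peak out further.
run = lambda s: [(10.0, 1.0), (10.0 + 0.2 * s["gradient_min"], 1.0)]
step = lambda s, rs: {"gradient_min": s["gradient_min"] + 5}

best, rs = develop_method({"gradient_min": 5}, run, step)
print(best, rs)  # {'gradient_min': 10} [2.0]
```

Step (b), peak-to-compound assignment, is simply folded into run_trial here; it is the step that is hardest to automate in practice.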
The correct assignment of the sample compounds to the chromatogram peaks in Step (b) above is thus critical to accurately interpreting experiment trial results in accordance with traditional practice, which includes current numerical analysis approaches and the like. Since current analysis and interpretation approaches target the interactions of each compound with the HPLC system elements, interactions that result from the compound's specific chemical and structural nature, determining specifically and precisely which compound each resolution result is associated with in a given chromatogram is effectively the only way to track the effects of instrument changes on the separation of that compound.
A further complication especially common to early HPLC method development experiments that involve analytical column and pH screening has been that it may not be readily determinable as to how many compounds are in an experimental sample, and therefore how many peaks an experimenter is to expect in a chromatogram obtained from sample analysis by HPLC. This particular complication is further illustrated by comparing FIG. 4C with FIG. 4D.
FIG. 4D is a chromatogram 470 obtained from the same sample as FIG. 4C, analyzed under different trial settings of the HPLC instrument. The chromatogram of FIG. 4D shows twelve well separated peaks visible along the X axis time interval of 10 to 34 minutes (see, for example, representative peaks at 471 and 472), whereas an uncertain or undefinable number of peaks exist in this same interval in FIG. 4C (see, for example, representative points at 461 and 462).
However, additional complications can result even where the number and identity of all compounds in a test sample are known, as such knowledge does not necessarily simplify the work of correctly associating each peak with a sample compound in each trial chromatogram, since instrument changes between trials can affect both peak shape (i.e., broad-flat versus narrow-spiked) and the column transit time of the corresponding compound (i.e., peak retention time).
For example, for a particular experimental trial, a peak arising in a resulting chromatogram corresponding to a given compound may occur at 15 minutes and appear narrow and spiked. In a second trial with different instrument settings, the peak corresponding to the same compound may occur at 12 minutes and may appear as being broad and flat. Contradistinctively, a third trial's settings may cause a second peak to also occur at the 12 minute location in the chromatogram resulting in a combined peak that differs greatly in shape and area from the others. By further example, in FIG. 4C at 461, overlapping peaks corresponding to incompletely separated compounds can be seen, and again at 462, while peaks with the same or very similar shape and area in FIG. 4D occur at approximately 22, 23, and 24 minutes (473, 474, and 475 respectively). |
$NetBSD: patch-Makefile.PL,v 1.1 2017/06/07 14:42:23 ryoon Exp $
* Fix build with Perl 5.26.0
--- Makefile.PL.orig 2014-09-26 10:47:10.000000000 +0000
+++ Makefile.PL
@@ -1,5 +1,7 @@
use strict;
use warnings;
+use FindBin;
+use lib $FindBin::Bin;
use inc::Module::Install 1.06;
name 'Class-Accessor-Grouped';
|
Q:
z-index does not work
There are two elements
<div class="menu_1">
321321
</div>
<div class="menu_2">
11
</div>
CSS
.menu_1{
background-color:#333333;
height:300px;
z-index: 1;
}
.menu_2{
background-color:#cccccc;
height:400px;
z-index: 0;
}
But for some reason the first block does not overlap the second one; they just stack one after the other. Why?
A:
It is not supposed to overlap anything unless a position style is set. Set, for example, position: absolute.
Also set z-index higher than 1, since that is the default level.
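As an illustration, one possible fix (a sketch of my own, assuming the first block is meant to sit on top; z-index only affects positioned elements):

```css
.menu_1 {
    background-color: #333333;
    height: 300px;
    position: absolute; /* z-index has no effect on statically positioned elements */
    z-index: 2;
}
.menu_2 {
    background-color: #cccccc;
    height: 400px;
    z-index: 0;
}
```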
|
Signs have now been put up at the Applegreen located on the M1 just outside Lisburn, advertising a Greggs shop
Bakery chain Greggs looks set to open another Northern Ireland store at a new Applegreen service station.
Signs have now been put up at the Applegreen located on the M1 just outside Lisburn, advertising a Greggs shop.
Greggs opened its first Northern Ireland location at an Applegreen last year.
In November, it then revealed the location of its first Northern Ireland stand-alone store, located on the Boucher Road in south Belfast.
Last year the Belfast Telegraph revealed the bakery giant is eyeing as many as 50 locations across Northern Ireland, with around 10 in Belfast alone. Greggs has 1,670 stores located across Britain.
The UK's largest bakery chain will be rolling out its popular fare across Northern Ireland this year. That includes favourites already flying off the shelves at Applegreen, such as the steak bake, and the sausage and bean melt.
Belfast Telegraph |
/**
* Copyright (c) 2008-2020 Bird Dog Games, Inc.
*
* This file is part of Ardor3D.
*
* Ardor3D is free software: you can redistribute it and/or modify it
* under the terms of its license which may be found in the accompanying
* LICENSE file or at <https://git.io/fjRmv>.
*/
package com.ardor3d.input.logical;
import java.util.function.Predicate;
import com.ardor3d.annotation.Immutable;
import com.ardor3d.input.InputState;
import com.ardor3d.input.mouse.ButtonState;
import com.ardor3d.input.mouse.MouseButton;
import com.ardor3d.input.mouse.MouseState;
import com.ardor3d.math.MathUtils;
/**
 * A condition that is true once a given mouse button has been held down for at least a specified
 * amount of time without the cursor drifting too far from where the press began.
*/
@Immutable
public final class MouseButtonLongPressedCondition implements Predicate<TwoInputStates> {
private final MouseButton _button;
private final long _triggerTimeMS;
private final double _maxDrift;
private boolean _armed;
private int _armedX, _armedY;
private long _armedAt;
/**
* Construct a new MouseButtonLongPressedCondition.
*
* @param button
* the button that should be pressed to trigger this condition
* @param triggerTimeMS
* how long, in ms, a button needs to be down before triggering the condition.
* @param maxDrift
* how far the mouse / cursor can move from the initial long press location before
* invalidating the long press.
* @throws IllegalArgumentException
* if the supplied button argument is null
*/
public MouseButtonLongPressedCondition(final MouseButton button, final long triggerTimeMS, final double maxDrift) {
if (button == null) {
throw new IllegalArgumentException("button was null");
}
_button = button;
_triggerTimeMS = triggerTimeMS;
_maxDrift = maxDrift;
}
@Override
public boolean test(final TwoInputStates states) {
final InputState currentState = states.getCurrent();
final InputState previousState = states.getPrevious();
// we need non-null states
if (currentState == null || previousState == null) {
_armed = false;
return false;
}
final MouseState mouseState = currentState.getMouseState();
final long now = System.currentTimeMillis();
// if we were not armed before...
if (!_armed) {
// only arm if we are pressing the button anew
if (mouseState.getButtonsPressedSince(previousState.getMouseState()).contains(_button)) {
_armed = true;
_armedX = mouseState.getX();
_armedY = mouseState.getY();
_armedAt = now;
}
return false;
}
if (mouseState.getButtonsPressedSince(previousState.getMouseState()).size() > 0) {
_armed = false;
return false;
}
final float dx = _armedX - mouseState.getX();
final float dy = _armedY - mouseState.getY();
// we must be armed, so check if we should still be armed... Button should be down and we
// should not have drifted too far from our initial pressed location.
if (mouseState.getButtonState(_button) != ButtonState.DOWN || MathUtils.sqrt(dx * dx + dy * dy) > _maxDrift) {
_armed = false;
return false;
}
// armed, and not drifting. Return if enough time has passed.
if (now - _armedAt > _triggerTimeMS) {
_armed = false;
return true;
}
return false;
}
}
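To see the drift test in isolation, here is a small self-contained sketch (my own illustration, not part of Ardor3D; plain java.lang.Math stands in for MathUtils):

```java
public class DriftCheckDemo {
    /**
     * Mirrors the drift test in MouseButtonLongPressedCondition: the long
     * press stays armed only while the cursor remains within maxDrift
     * pixels (Euclidean distance) of where the button first went down.
     */
    public static boolean withinDrift(final int armedX, final int armedY,
                                      final int x, final int y, final double maxDrift) {
        final float dx = armedX - x;
        final float dy = armedY - y;
        return Math.sqrt(dx * dx + dy * dy) <= maxDrift;
    }

    public static void main(final String[] args) {
        // 3-4-5 triangle: 5 px of drift is within a 10 px allowance
        System.out.println(withinDrift(100, 100, 103, 104, 10.0));
        // 20 px of horizontal drift exceeds the allowance
        System.out.println(withinDrift(100, 100, 120, 100, 10.0));
    }
}
```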
|
LEARN/CREATE (PT)
NOTE: When the Idaho Legislature is in session, programming on the Learn/Create and World channels may be pre-empted for live coverage from the House and Senate floors.
4:30 pm
Cyberchase"A Battle of Equals"
Hacker pollutes cyberspace with dangerous cyberstatic by tampering with four satellites designed to keep everything free from cyberstatic cling. But is his plan only to trash cyberspace - or does he have something even more devious up his cloak? The kids learn the answer as they use balance scales and equations to restore the satellites and save Motherboard. The Big Idea: You can use an equation - a statement that two different expressions are equal to each other - to find unknown values that make the equation true. D
Cook's Country from America's Test Kitchen"Autumn Desserts"
Bridget reveals the secrets to the ultimate Apple Dumplings. Then, Adam reveals his top picks for cake carriers. Next, Chris learns about the best apple varieties for baking at Saratoga Orchards. D
Pati's Mexican Table"Easy Comfort Food"
Simple, easy, home-style cuisine that you'd find in just about any Mexican home, recreated for the American kitchen. This meal was Pati's favorite "everyday" meal growing up in Mexico, and one she regularly makes for her own family today. She's proud to share the steps so that you can enjoy it, too! Dressed-up Chicken Milanesa; Chayote Squash and Pickled Onion Salad; Chunky Chipotle Mashed Potatoes. D
7:00 pm
Ask This Old House"Special New York Episode"
In a special episode, the guys visit New York. Plumbing and heating expert Richard Trethewey tours a scale model of the entire city in Queens. Host Kevin O'Connor works with a local electrician to add task lighting to a dark kitchen in Brooklyn. D
Grand View"Redwood National Park"
In this episode, Baumann's journey takes his audience to some of the tallest trees on Earth. These old growth redwood stands are some of the last remaining timber that was saved from the lumber industry. These magnificent trees live to be 2000 years old and grow to be over 300 feet tall; Baumann's journey takes him to the base of these mythical giants. D
Cook's Country from America's Test Kitchen"Autumn Desserts"
Bridget reveals the secrets to the ultimate Apple Dumplings. Then, Adam reveals his top picks for cake carriers. Next, Chris learns about the best apple varieties for baking at Saratoga Orchards. D
9:30 pm
Mexico -- One Plate at a Time with Rick Bayless"Salsas That Cook"
In their Chicago backyard, Rick and his daughter, Lanie, gather the last of the season's tomatoes to make a big batch of Salsa Mexicana, the fresh tomato salsa sometimes known as Pico de Gallo. And that's the starting point for a fast-paced salsa dance that goes way beyond tomatoes. In Mexico, salsas can be bright and fresh, dark and earthy, red or green, raw or roasted - and they're more of a condiment for food than a dip for chips. D
Destinos: An Introduction to Spanish"Así Fue: IV"
This episode is the fourth segment of the four-part review of the entire series and features Raquel talking about her experiences since arriving in Mexico. G |
The Village of New Gourna
by Lara Iskander
Famous architect and artist, Hassan Fathy was born in Alexandria in 1899 to an Egyptian father and a Turkish mother. He studied at Cairo University and later became a professor at the Faculty of Fine Arts and was head of the Architectural School.
He was an architect who devoted himself to housing the poor in developing nations. He aimed to create affordable and livable spaces suitable to the surrounding environment, thus improving the economy and the standard of living in rural areas.
Nevertheless, Hassan Fathy was not very successful at convincing the state of his ideas. His work was considered ahead of its time: it was not always welcomed by government bureaucrats, nor was it to the taste of poor Egyptian peasants, who longed for the "luxury" of concrete city buildings. Fathy's buildings were distressingly inexpensive, and this was seen as a drawback.
Hassan Fathy was strongly against Western techniques and materials like reinforced concrete and steel which he found inappropriate for Egypt's climate and the craftsmen's limited skills.
"Matchbox houses" were too hot in the summer and too cold in winter. He encouraged ancient design methods and materials. He saw a more appropriate method of building in the Vernacular Architecture of the Nubians (region of southern Egypt), which influenced his ideas greatly.
Nubian craftsmen were masters at constructing domed and vaulted roofs of mud brick which they also used for the walls. The structures were cheap, cool in the summer and the walls were heat-retaining in winter.
While implementing the Nubian building techniques, he aimed to train Egyptian craftsmen to build their houses using mud brick or Adobe, which was ideally suitable to the local conditions of Upper Egypt and at a fraction of the cost.
New Gourna Project is one of his best known housing projects. This is due to the international popularity of his book, "Architecture for the poor" published originally in French, 20 years after the beginning of the project, in which he explained his vision for the village. This book details his thoughts, processes, dealings with the politics involved, and his theories behind the forms.
The idea was launched by the Egyptian Department of Antiquities in 1946 to build a new town near Luxor to relocate the inhabitants of the Gourna village, also called "Sheikh Abd el-Qurna".
The old Gourna village is built over Pharaonic tombs, many of which had not yet been discovered. The residents were famous for being able to bring up suspiciously authentic Egyptian monuments from their cellars. The Antiquities Department was having trouble controlling the tomb-robbing occurring in the nearby areas of the Valley of the Kings, Queens, and Nobles. And so, the perfect solution seemed to be to move the seven thousand locals whose economy depended on tomb looting. This became Fathy's perfect opportunity.
The new location is about five miles downhill towards the river, not far from the old village. Hassan Fathy saw this as a challenge, as he says in his book. Housing 7,000 people who were unwilling to leave their homes on a 50-acre site was not going to be an easy task.
His designs depended on natural ventilation, orientation and local materials, traditional construction methods and energy-conservation techniques. He carried out detailed studies of temperature and wind patterns.
Hassan Fathy did not believe that the locals should be housed in similar homes. Each had different needs, tastes and comforts apart from the number living in the house.
Fathy worked with the villagers to tailor his designs to their needs with creativity and variety keeping the same spirit to the entire village.
Fathy included an open air theatre, a school, a "Suq" (market) and a Mosque, famous for the unusual shape of its minaret. He also built himself a house in the same spirit of the village, using the same materials.
The "Gourna Village experiment" was not just an architectural experiment. To Hassan Fathy it was more like the development of a town on a cultural, social level following the regional traditions. Relating to the people and knowing their needs while asking them to participate in the construction of their town was a major part of the project.
The village was never completed. The locals did start moving into their new homes, but eventually they did not settle down.
The reluctance of the people to cooperate in the design and building of the village was mistakenly understood as a sure sign of the inappropriateness of the project. In reality, the people resented the change and took every opportunity possible to sabotage their new village in order to stay where they were and to continue their own secret ancient trading.
It did not take long for the failure of the village to become apparent.
All that remains of New Gourna today is the mosque, the market, a couple of houses, and Hassan Fathy's own house. Even the school was demolished and rebuilt in modern materials. As for the rest of the houses, most of them were rebuilt in a more "suitable" way according to the people's taste. In 1967, he attempted a similar project, the village of Bariz in Kharga, but it proved no more successful than its predecessor because of funding problems. His numerous honors include the International Gold Medal Award from the International Union of Architects and the Aga Khan Award he received in 1980.
Hassan Fathy's ideas are now more in vogue. Though not exactly his aim, his style of structures has become very popular among the upper classes, who tend to use the traditional vaults and domes for roofing. Many tourist resorts, such as El Gouna near Hurghada, have also followed this theme. It was partially built by one of his students and is obviously a great success.
The remains of these villages are worth a visit. His buildings have proven to be very efficient, comfortable, spacious, always based on natural resources, and well suited to hot climates. Fathy designed several projects abroad in places such as New Mexico and India. His files, drawings, and notes are all kept in the AUC rare collections, where they are exhibited.
Resource:
Original research by Lara Iskander
Last Updated: June 9th, 2011
|
Various aspects of providing containers having membrane-type closures, and of induction heat sealing membrane-type closures to containers, are disclosed in prior art U.S. patents of which the following are representative: U.S. Pat. No. 2,937,481 issued May 24, 1960 to Jack Palmer; U.S. Pat. No. 3,460,310 issued Aug. 12, 1969 to Edmund Philip Adcock et al.; U.S. Pat. No. 3,501,045 issued Mar. 17, 1970 to Richard W. Asmus et al.; U.S. Pat. No. 3,734,044 issued May 22, 1973 to Richard W. Asmus et al.; U.S. Pat. No. 3,767,076 issued Oct. 23, 1973 to Leo J. Kennedy; U.S. Pat. No. 3,805,993 issued Apr. 23, 1974 to William H. Enzie et al.; and U.S. Pat. No. 3,808,074 issued Apr. 30, 1974 to John Graham Smith et al. However, the prior art does not disclose solutions to all of the problems associated with providing containers having membrane-type closures in the manner or to the degree of the present invention. |
Q:
Sentinel-1A and Landsat-8 resolution difference
Sentinel-1A level-1 high-resolution GRD data have 20 x 22 m resolution (range x azimuth) with 5 x 1 looks, while Landsat-8 data have 30 m resolution. My question is: how can I compare the resolution differences between them? In other words, which of them has the higher resolution? When I digitize a lake from the two sensors, the lake size from Landsat-8 is always greater than from Sentinel-1A. Note that it is the same lake; the images were taken on the same acquisition date or with little time difference. What could be the reasons for the differences in lake size? Is it because of the resolution difference? If yes, how can I interpret the results?
A:
This is a bit of an apples and oranges comparison. The Sentinel-1A sensor is an active radar system carrying a C-band synthetic aperture radar array. Landsat 8, by contrast, is a passive spectral system with 16-bit radiometric resolution across 0.43 - 2.29 micrometers (excluding the 100 m thermal IR bands). The characteristics of the sensors dictate the feature resolution of the objects being detected.
Because the Landsat 8 sensor carries a blue edge band (0.43 - 0.45 um), it is likely able to discriminate water/wetland features more effectively than the backscatter signal from Sentinel-1A. This would account for the differences in feature size, regardless of the on-the-ground cell resolution differences. If you want to use SAR data for water/wetland classification, it would be better to use L-band data from ALOS PALSAR. This would provide much better discrimination than Sentinel-1A C-band data.
I would have to add here that you really need to consider a classification algorithm. It is impossible to visually elucidate the information contained in both of these data. There could very well be information in the data that is providing feature discrimination but is not readily apparent in a visual assessment. The difference in the results could be associated with user bias rather than information content. This is particularly relevant in a multivariate context, where very relevant information may be contained across multiple bands that are not being used in the RGB composite comprising the image backdrop used in heads-up digitizing. You are effectively throwing away data. In the radar data there is likely a pixel gradient that is not apparent in your visual interpretation. The image can also be seriously compromised by the type of stretch used for display.
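As a rough back-of-the-envelope addition (my own sketch, not part of the answer above): comparing nominal pixel footprints alone, the high-resolution GRD product actually covers less ground per pixel than Landsat 8's 30 m bands, which underlines that the lake-size differences come from sensor physics and interpretation rather than cell size:

```python
def pixel_area_m2(width_m, height_m):
    """Nominal ground area covered by a single pixel, in square metres."""
    return width_m * height_m

# Sentinel-1A IW high-resolution GRD: ~20 m (range) x 22 m (azimuth)
s1_grd = pixel_area_m2(20, 22)   # 440 m^2 per pixel
# Landsat 8 OLI multispectral bands: 30 m x 30 m
l8_oli = pixel_area_m2(30, 30)   # 900 m^2 per pixel
```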
|
Symmetry breaking in interspecific Drosophila hybrids is not due to developmental noise.
Hybrids from crosses of different species have been reported to display decreased developmental stability when compared to their pure species, which is conventionally attributed to a breakdown of coadapted gene complexes. Drosophila subobscura and its close relative D. madeirensis were hybridized in the laboratory to test the hypothesis that genuine fluctuating asymmetry, measured as the within-individual variance between right and left wings that results from random perturbations in development, would significantly increase after interspecific hybridization. When sires of D. subobscura were mated to heterospecific females following a hybrid half-sib breeding design, F1 hybrid females showed a large bilateral asymmetry with a substantial proportion of individuals having an asymmetric index larger than 5% of total wing size. Such an anomaly, however, cannot be plainly explained by an increase of developmental instability in hybrids but is the result of some aberrant developmental processes. Our findings suggest that interspecific hybrids are as able as their parents to buffer developmental noise, notwithstanding the fact that their proper bilateral development can be harshly compromised. Together with the low correspondence between the co-variation structures of the interindividual genetic components and the within-individual ones from a Procrustes analysis, our data also suggest that the underlying processes that control (genetic) canalization and developmental stability do not share a common mechanism. We argue that the conventional account of decreased developmental stability in interspecific hybrids needs to be reappraised. |
When I first acquired a Google API key, I quickly noticed that many (if not all) keys start with the same 4+ characters: "AIza". These keys can give access to any API services the owner of the key has activated. While they are not incredibly secret (I have seen many used in Javascript for the Maps API), and it's easy to replace them, they still aren't things the owners would likely want to be harvested. When the new Github search was introduced and the related private files started to be found, I decided to see if any developers had - intentionally or accidentally - left them in their code.
When we first visit the search page and look for "AIza", we are presented with the following:
Nice - over 14,000 results. While we can immediately start thinking about ways to automate the process of harvesting these, we quickly run into an issue: Github only allows us to search 99 pages, each with 10 results. This only allows us to get less than 1,000 results.
While I haven't yet found a way to obtain all the results (comment below if you know of a way - I'd love to hear it!), my current solution to this is as follows:
Github allows us to sort by both "Best Match" and "Last Indexed". I've found that for a large number of results, they produce different output.
We can search through the overall results (all 99 pages) using both of these sorting methods. Then, we can also search using individual languages (notice them on the left side of the image). By searching these, we get a maximum of 99 pages for each language, and we can use both sorting methods again.
With this being the case, we can now surpass our limit of under 1,000 results. While we still can't always get every result, let's automate the harvesting process we have so far and see what we find.
Automating the Search
For automating this search, we'll employ a few basic web scraping techniques, and the following fantastic Python modules:
Requests for downloading page content
BeautifulSoup4 for HTML parsing (This post is using the default BeautifulSoup HTML parser. If you have lxml installed (recommended), BeautifulSoup will use it by default)
For the sake of this post, I'll step through the methodology used to harvest information like this. First, we can use requests to get the source of a webpage as follows:
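The code block that followed here did not survive extraction; below is a plausible reconstruction. The search parameter names ('q', 'type', 'p', 'l') mirror Github's search UI of the time and are assumptions:

```python
import requests

def build_search_url(query, page=1, language=None):
    """Build a Github code-search URL. The parameter names are
    assumptions based on Github's search UI."""
    params = {"q": query, "type": "Code", "p": page}
    if language:
        params["l"] = language
    return requests.Request("GET", "https://github.com/search", params=params).prepare().url

def fetch_page(url):
    """Fetch a page and return its raw HTML source."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.text
```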
Now that we have the raw source, we can create a BeautifulSoup object with it:
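The snippet here was likewise lost; creating the soup object is a one-liner (the stand-in HTML below is illustrative):

```python
from bs4 import BeautifulSoup

# stand-in for the fetched search-results source
html = '<html><body><div class="line">AIza...</div></body></html>'
# html.parser is the default parser; pass "lxml" instead if you have it installed
soup = BeautifulSoup(html, "html.parser")
```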
Now that we have an easily parseable object, what should we look for? Let's try to find a unique attribute about the results that would allow us to quickly search for it and find the text we're looking for. I usually do this by right-clicking the text in question, clicking "Inspect Element" (this is Chrome), and then seeing what kind of HTML element it is embedded in. Doing this, we see the following:
We can see here that the result text is within a 'div' with a class of 'line', so it's a safe bet to try and extract all elements that match this criteria.
In general, creating a BeautifulSoup object gives us the ability to quickly parse out a list of specific HTML elements we want using the syntax soup.find_all(element_tag, { attribute : value }). However, since searching for an element by class is such a common need, BeautifulSoup makes it easier for us by letting us search with the syntax soup.find_all(element_tag, class_name). Let's extract these values:
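Restoring the missing snippet along those lines (the markup is a reconstruction of Github's result list at the time, so the structure is an assumption):

```python
from bs4 import BeautifulSoup

html = """
<div class="code-list">
  <div class="line">var key = "AIzaXXXX";</div>
  <div class="line">more code here</div>
</div>
"""
soup = BeautifulSoup(html, "html.parser")
# shorthand for soup.find_all("div", {"class": "line"})
results = soup.find_all("div", "line")
```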
Great, making progress! The next thing we want to do is extract just the raw text from these results. BeautifulSoup makes it really easy for us to do this by allowing us to call result.text for each result in our list. Let's give that a shot.
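Again reconstructed, with an illustrative soup:

```python
from bs4 import BeautifulSoup

html = '<div class="line">key = "AIzaXXXX"</div><div class="line">noise</div>'
soup = BeautifulSoup(html, "html.parser")
results = soup.find_all("div", "line")
# .text strips the tags and leaves just the visible text of each result
texts = [result.text for result in results]
```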
The last thing we need to do is create a regex to find the matches. Since we know each expression begins with "AIza" and can see that they are each 39 characters long, the following expression should serve our purposes nicely:
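The original expression is gone; here is a sketch matching the stated shape (the literal prefix "AIza" followed by 35 URL-safe characters, 39 in total; the sample key is fabricated for illustration):

```python
import re

# 4-character prefix plus 35 characters drawn from the Base64-URL alphabet
KEY_RE = re.compile(r"AIza[0-9A-Za-z\-_]{35}")

sample = 'var maps_key = "AIzaSyA1234567890abcdefghijklmnopqrstuv";'
keys = KEY_RE.findall(sample)
```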
Just like that, we have our keys. The last thing we need to do to make our script a bit more efficient is to get the number of pages for each language. We could do this two ways:
Navigating to each language page and getting the maximum number of pages
From our starting page, pull the number of results for each language and divide by 10
Since it will result in less requests if we go the second route, let's pursue that option. From our page, we can see the following HTML structure of the side panel:
So it looks like we want both the URL to use, and the text from the "span" tag with the "count" class. We can easily retrieve both of these with BeautifulSoup:
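Since the snippet is missing, here is one way it might look; the sidebar markup below is reconstructed from the description, so the class names and URLs are assumptions:

```python
import math
from bs4 import BeautifulSoup

html = """
<ul class="filter-list">
  <li><a href="/search?l=JavaScript&amp;q=AIza">JavaScript <span class="count">4,023</span></a></li>
  <li><a href="/search?l=Java&amp;q=AIza">Java <span class="count">137</span></a></li>
</ul>
"""
soup = BeautifulSoup(html, "html.parser")
pages_by_language = {}
for link in soup.find_all("a"):
    count = int(link.find("span", "count").text.replace(",", ""))
    # 10 results per page, and Github caps any search at 99 pages
    pages_by_language[link["href"]] = min(99, math.ceil(count / 10))
```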
Now that we have the general idea as well as the data we need, here's the entire script, which checks for duplicates, runs through all iterations listed in the beginning of this post, and writes the results to 'google_keys.txt'.
Since it looks like (at the time of this writing) I'm running into issues having Github return results for each specific language (no matter the search query), here is a small subset of the results found (approx. 1,000 out of 4,086 enumerated keys) when developing the tool: http://pastebin.com/XEe1WuvG. Of course, some authors have intentionally added obfuscation to their keys (such as replacing characters with 'XXXX', etc.), however this should be a reminder to always sanitize data before publishing it to a public repository! |
Ceramic amplifier. The realization of this project is related to the 600th anniversary of Polish-Turkish diplomacy. Partner: Potters' Association in Menemen. A natural sound amplifier for smartphones. The amplifier was designed so that it can be made on a potter's wheel, a technique the craftsmen of Menemen have used for centuries. The product is designed to be a home item and an alternative to electric speakers. Earthenware is a very good material for acoustics, so the sound coming out of the speaker remains of decent quality. |
FlatOut 4 Total Insanity PC Download Free InstallShield
FlatOut 4 Total Insanity for PC Download
The fourth full-fledged installment of the spectacular FlatOut racing series, launched in 2004 by the Finnish studio Bugbear Entertainment. After the cool reception of the third part of the cycle, the task of creating its continuation was entrusted to the studio Kylotonn, known among other things for the fifth and sixth installments of the popular WRC rally series. The game is based, to a large extent, on comments and suggestions from the player community, and represents to some extent a return to the roots of the brand. As is traditional for the series, we take part in brutal racing competitions in which victory can be won not only through fast driving but also by ruthlessly wrecking rival cars. FlatOut 4: Total Insanity also brings back the ability to destroy elements of the surroundings and stage spectacular crashes, the elements that have been the showcase of the series since its inception.
Download FlatOut 4 Total Insanity for PC free InstallShield
FlatOut 4 Total Insanity Download
The FlatOut series focuses on the struggles of racing drivers who take part in violent competitions. Gameplay in FlatOut 4: Total Insanity consists of fighting opponents, who should be rammed and demolished, all in order to reach the finish line in first place. The developers have prepared twenty-seven vehicles in three different classes. We can sit behind the wheel of race cars, monster trucks, buggies, and more. Each handles a little differently and has different properties, but all of them are perfect for efficiently demolishing opponents. The racing takes place on twenty tracks on which many destructible elements have been placed.
You can play in ten modes. In addition to the classic career, FlatOut 4: Total Insanity includes Arena (a mode in which only demolishing opponents counts), time challenges, and individual races. The iconic Stunt series also returns with twelve minigames in which we perform breakneck, spectacular stunts by catapulting the driver from the car and maneuvering him in flight. In this fun variant we can play alone or with friends on one machine. All this is complemented by online races for eight players.
Download and play FlatOut 4 Total Insanity
FlatOut 4: Total Insanity for PC Windows and other devices is another installment of the well-known series of racing games launched in 2004 by the Bugbear Entertainment studio. This title, however, does not come from the creator of the first few parts but from the Kylotonn team, supported by the publisher Strategy First. To fully satisfy fans of the brand, the publisher and developer decided to invite the players themselves to cooperate, passing their tips to the producers in various ways.
|
Identification of methicillin-resistant isolates of Staphylococcus aureus and coagulase-negative staphylococci responsible for bloodstream infections with the Phoenix system.
We evaluated the reliability of the new Phoenix system (Becton Dickinson Microbiology Systems, Sparks, Md.) in species-level identification and detection of oxacillin (methicillin) resistance among 493 staphylococcal isolates (Staphylococcus aureus, n = 223; coagulase-negative staphylococci, CoNS, n = 270) recovered from patients with bacteremia. Identification results were concordant with those of the ID 32 STAPH system (bioMérieux, Marcy l'Etoile, France) for 100% of S. aureus (223/223) and 97.4% (263/270) of CoNS isolates. For S. aureus isolates, Phoenix oxacillin-susceptibility results fully concurred with those of mecA polymerase chain reaction (PCR) (reference method): 96 mecA-positive isolates identified as resistant, 127 mecA-negative strains as susceptible. Two of the 210 mecA-positive CoNS isolates were misclassified as susceptible by the Phoenix (sensitivity 99%, positive predictive value 97.6%). Five of 60 mecA-negative CoNS isolates were classified as resistant by the Phoenix (specificity 91.7%; negative predictive value 96.5%). The Phoenix system can provide accurate and reliable identification of methicillin-resistant staphylococci responsible for bloodstream infections. |
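For readers who want to check the arithmetic, the reported figures follow directly from the counts in the abstract. The sketch below is my own illustration, not part of the study:

```python
def diagnostics(tp, fn, tn, fp):
    """Standard 2x2 diagnostic metrics, as fractions: sensitivity,
    specificity, positive and negative predictive value."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# CoNS isolates: 210 mecA-positive (2 reported susceptible),
# 60 mecA-negative (5 reported resistant)
sens, spec, ppv, npv = diagnostics(tp=208, fn=2, tn=55, fp=5)
```

This reproduces the abstract's 99% sensitivity, 91.7% specificity, 97.6% PPV, and 96.5% NPV for the CoNS group (the PPV computes to 97.65%, reported as 97.6%).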
package com.github.mauricio.async.db.exceptions
import com.github.mauricio.async.db.Connection
class ConnectionNotConnectedException(val connection: Connection)
  extends DatabaseException("The connection %s is not connected to the database".format(connection)) |
United States Court of Appeals
FOR THE DISTRICT OF COLUMBIA CIRCUIT
Argued March 18, 2016 Decided August 19, 2016
No. 14-1253
OZBURN-HESSEY LOGISTICS, LLC,
PETITIONER
v.
NATIONAL LABOR RELATIONS BOARD,
RESPONDENT
UNITED STEEL, PAPER AND FORESTRY, RUBBER,
MANUFACTURING, ENERGY, ALLIED INDUSTRIAL AND SERVICE
WORKERS INTERNATIONAL UNION,
INTERVENOR
Consolidated with 14-1289, 15-1184, 15-1242
On Petitions for Review and Cross-Applications
for Enforcement of Orders
of the National Labor Relations Board
Benjamin H. Bodzy argued the cause for petitioner. With him on the briefs was Stephen D. Goodwin.
David A. Seid, Attorney, National Labor Relations Board, argued the cause for respondent. With him on the briefs were Richard F. Griffin, General Counsel, John H. Ferguson, Associate General Counsel, Linda Dreeben, Deputy Associate General Counsel, and Robert J. Englehart, Supervisory Attorney.
Katharine J. Shaw argued the cause and filed the briefs for intervenor. With her on the briefs was Amanda M. Fisher.
Before: PILLARD and WILKINS, Circuit Judges, and EDWARDS, Senior Circuit Judge.
Opinion for the Court filed by Circuit Judge PILLARD.
PILLARD, Circuit Judge: This appeal is the latest chapter
in an ongoing labor dispute between Ozburn-Hessey
Logistics, LLC (OHL or the Company) and the United Steel,
Paper and Forestry, Rubber, Manufacturing, Energy, Allied
Industrial and Service Workers International Union (the
Union). In 2009, the Union began a campaign to organize
workers at OHL’s warehouse facilities in Memphis,
Tennessee. That campaign culminated in a July 27, 2011,
representation election, which the Union won by a one-vote
margin. The National Labor Relations Board (the Board)
found that the Company committed multiple unfair labor
practices during the months leading up to the representation
election. OHL violated the National Labor Relations Act, the
Board determined, by threatening, interrogating, and
surveilling employees; creating the impression of such
surveillance; confiscating union-related materials; urging
union supporters to resign; and disciplining two employees
because of their pro-union views. In that same decision, the
Board resolved pending ballot challenges and objections
arising from the July 27, 2011, representation election and
directed the Board’s Regional Director to count six of the
remaining challenged ballots, resulting in a wider margin of
victory for the Union. Pursuant to that revised election tally,
the Board’s Regional Director certified the Union as the
exclusive bargaining representative for the Company’s
Memphis employees. The Company nonetheless refused to
bargain with the Union, prompting a separate Board decision
determining that OHL violated the Act.
The Company petitions for review, raising multiple
objections to the Board’s underlying decisions. We have
accorded the Company’s arguments full consideration after
careful examination of the record, but address in detail only
those arguments that warrant further discussion. Having
found no basis to disturb the Board’s well-reasoned decisions,
we deny the petitions for review and grant the Board’s cross-
applications for enforcement of its orders.
I. Background
A. Facts
OHL is a third-party logistics company that provides
transportation, warehousing, and supply-chain management
services for other companies. It operates warehouses
throughout the country, including five in Memphis,
Tennessee. In May 2009, the Union began organizing
employees at OHL’s Memphis warehouses and, later that
year, filed an election petition with the Board to represent
those workers. See Hooks ex rel. NLRB v. Ozburn-Hessey
Logistics, LLC, 775 F. Supp. 2d 1029, 1035-36 (W.D. Tenn.
2011). The Union lost the ensuing representation election in
March 2010 and filed charges against OHL, alleging that the
Company committed multiple unfair labor practices during
the unionization campaign. Id. at 1035-39. The Board found
merit to those allegations and concluded in two separate
decisions that, between June 2009 and March 2010, OHL
violated the Act by threatening employees, confiscating union
materials, and disciplining union supporters. See Ozburn-
Hessey Logistics, LLC, 357 NLRB 1632 (2011) (Ozburn I)
(finding that OHL committed unfair labor practices between
June and October 2009), enforced mem., 609 F. App’x 656
(D.C. Cir. 2015) (per curiam judgment); Ozburn-Hessey
Logistics, LLC, 357 NLRB 1456 (2011) (Ozburn II) (finding
that Company committed unfair labor practices between
November 2009 and March 2010), enforced mem., 605 F.
App’x 1 (D.C. Cir. 2015) (per curiam judgment). 1
The Company’s challenged misconduct did not end there,
however. Just a few months after the election, OHL
disciplined employees Jennifer Smith and Carolyn Jones on
the basis of their union-related conduct. On June 9, 2011, the
Company issued a final warning to Smith, a known union
leader who distributed union literature and handbills, solicited
coworkers to support the Union, and openly wore union hats
and shirts to work. The final warning accused Smith of
violating OHL’s anti-harassment and non-discrimination
policy by calling Stacey Williams, a fellow African-
American, a racial slur on June 8 during a heated argument
about certain office supplies. Final Employee Warning
Notice, 14 J.A. 717. Smith denied having made the
derogatory remark and refused to sign the final warning.
A few days later, on June 14, the Company fired Jones, a
known union leader who distributed union handbills and
organizing materials, solicited coworkers to support the
Union, and routinely attended union meetings. The
1 While these cases were awaiting Board review, the Union
sought, and a federal district court granted, a temporary injunction
prohibiting OHL from committing further unfair labor practices and
ordering the Company to make whole several unlawfully
disciplined employees. See Hooks, 775 F. Supp. 2d at 1034, 1053.
Company’s termination letter gave two reasons for Jones’s
discharge. First, the Company accused Jones of violating the
Company’s “guidelines regarding failure to cooperate with an
internal investigation” by fabricating a witness statement
about a heated verbal exchange that occurred on May 26,
2011. See Jones Termination Letter, 14 J.A. 558. On that
day, Jones had attended a meeting during which OHL
management disseminated information to employees about
union dues. Afterward, Jones went to a break room and told
her coworkers that the President supported their right to
unionize and that it was “stupid” for employees not to want a
union. ALJ Decision of May 15, 2012, 14 J.A. 740-41.
According to Jones, OHL Director of Operations Phil Smith
suddenly appeared behind her and said, “[I] just had two . . .
employees . . . sa[y] they were called stupid. . . . Well, you all
are the ones that are stupid because you’re trying to get a
union in here.” Hearing Transcript, 14 J.A. 25. Jones asked
if Phil Smith was referring to her, to which he replied, “[i]f
the shoe fits, then you wear it.” Id. When Jones explained to
Phil Smith that she did not call anybody “stupid” and tried to
end their conversation, id. at 26, Phil Smith warned her, “you
better watch your back,” id. at 26-27.
Jones soon prepared a witness statement documenting her
encounter with Phil Smith and asked her coworkers to sign it.
Four OHL employees signed the statement, which Jones then
submitted to OHL’s Human Resources Department. After
investigating the incident, OHL determined that Phil Smith
was innocent of any wrongdoing and that Jones had asked her
coworkers to sign a blank sheet of paper before she filled in
the witness statement about Phil Smith’s threatening
comment—conduct the Company characterized as fraudulent.
Second, the Company claimed that Jones was fired
because she violated the Company’s Anti-Harassment Policy
by repeatedly calling fellow employee Lee Smith a racial
epithet. Jones began calling Lee Smith that epithet in the
spring of 2011, shortly after he had voiced his opposition to
the Union. OHL conducted an internal investigation and
concluded that, despite her repeated denials, Jones in fact had
used the racial epithet on multiple occasions.
On June 14, 2011, the same day as Jones’s discharge, the
Union petitioned the Board for a second election to represent
workers at OHL’s Memphis warehouses. The Board held the
representation election on July 27, pursuant to a Stipulated
Election Agreement between OHL and the Union. The
parties agreed that “office clerical and professional
employees” would be excluded from the voting unit and
further stipulated that two administrative assistants would
vote subject to challenge by the Union. The Union won the
election by a vote of 165 to 164. The election tally reflected
fourteen ballot challenges, including the Company’s
challenge to Jones’s ballot and the Union’s challenge to
ballots of the two administrative assistants. OHL and the
Union thereafter each objected to the second election on
several grounds.
B. Decisions Below
1. The Unfair Labor Practice Case
Between June and September 2011, the Union filed a
series of unfair labor practice charges against OHL
challenging the Company’s conduct during the months
preceding the second representation election, including its
punishment of Jennifer Smith and Carolyn Jones. Based on
the Union’s charges, the Acting General Counsel issued a
consolidated complaint alleging, among other things, that the
Company disciplined Smith and Jones on account of their
union-related conduct and support in violation of section
8(a)(3) and (1) of the Act.
On May 15, 2012, the Administrative Law Judge
determined that OHL had committed the charged unfair labor
practices. As relevant here, the ALJ found that, based on
hearing testimony and other evidence, the Company violated
section 8(a)(3) of the Act by issuing a final warning to
Jennifer Smith and terminating Carolyn Jones because of their
pro-union activities and views. 2 Applying the Board’s two-
part analysis from Wright Line, 251 NLRB 1083 (1980), the
ALJ determined that anti-union animus motivated the
Company’s punishment of Smith and Jones and that the
Company’s putative justifications for meting out those
disciplinary measures were pretextual. Because the
Company’s proffered reasons for disciplining Smith and
Jones were “mere pretext[s],” ALJ Decision of May 15, 2012,
14 J.A. 746, the ALJ explained, it “fail[ed] by definition to
show that it would have taken the same [disciplinary] action
for those reasons, absent the protected conduct,” id. (quoting
Rood Trucking Co., 342 NLRB 895, 898 (2004)). The ALJ
therefore directed the Company to post an appropriate
remedial notice regarding its violations of the Act and
imposed three additional remedies. The ALJ ordered OHL
(1) to distribute electronically the remedial notice to all unit
employees; (2) to have the notice read aloud to the Memphis
employees by a Board representative in the presence of two
designated OHL managers; and (3) to cease and desist from
committing the charged unfair labor practices and from
otherwise violating the Act.
2 The ALJ also found that the Company violated section
8(a)(1) by threatening and interrogating employees, surveilling
employees, creating the impression of surveillance, confiscating
union materials, and telling pro-union employees to resign.
In the same decision, the ALJ resolved the pending ballot
challenges and objections arising from the second
representation election. After ruling on the parties’ electoral
disputes largely in the Union’s favor, the ALJ issued a
recommended order to count six of the remaining ten
challenged ballots. The ALJ further recommended that, if the
Union did not prevail after those six votes were counted, the
Regional Director should invalidate the second election so
OHL employees could vote in a third, untainted election.
On May 2, 2013, the Board affirmed the ALJ’s rulings,
findings, and conclusions, rejected all of OHL’s exceptions to
the ALJ’s decision, and adopted the ALJ’s remedial order,
with one modification. 3 Ozburn-Hessey Logistics, LLC, 359
NLRB No. 109, at *1-4 & n.2 (2013) (Ozburn III). The
Board “agree[d]” with the ALJ’s findings that OHL
“discharged employee Carolyn Jones for engaging in
protected activity” and “unlawfully issued employee Jennifer
Smith a written final warning in retaliation for her prounion
activity.” Id. at *1-2. “[A]dditional circumstances,” the
Board emphasized, supported the ALJ’s conclusion that
Jennifer Smith’s discipline was unlawful. Id. at *2. The
Board found that, based on the credited evidence, OHL’s
“purported belief that Smith used a racial slur was not
reasonable.” Id. The Board also determined that OHL “was
highly inconsistent in its response to racial slurs,” noting that
the Company readily applied its Anti-Harassment Policy
against pro-union employees Jones and Smith, while
overlooking grossly offensive statements by OHL supervisor
Phil Smith. Id. That uneven treatment, the Board concluded,
3 The Board’s amended remedy afforded OHL the option to
have its own managers read the notice aloud to employees in the
presence of a Board representative.
suggested that OHL “was using its antiharassment policy to
target union supporters, further corroborating the [ALJ’s]
finding of pretext.” Id. Finally, the Board adopted the ALJ’s
resolution of the parties’ election objections and ballot
challenges and thus directed the Regional Director to count
six of the challenged ballots. Id. at *3-5. OHL petitioned for
review of the Board’s May 2013 Decision.
In compliance with the Board’s May 2013 Decision, the
Regional Director issued a revised election tally of 169-166 in
the Union’s favor and, on May 24, 2013, certified the Union
as the exclusive bargaining representative for the designated
employee unit. In June 2013, OHL refused the Union’s
request to bargain, prompting the Union to file charges under
the Act. Pursuant to those charges, the Acting General
Counsel filed a complaint alleging that OHL’s refusal to
bargain with the Union violated section 8(a)(5) and (1) of the
Act.
The following year, the Supreme Court decided NLRB v.
Noel Canning, 134 S. Ct. 2550 (2014), which invalidated the
appointments of two Board members on the panel that had
issued the Board’s May 2013 Decision on the unfair labor
charges. On June 27, 2014, the Board set aside that decision
in light of Noel Canning and retained the case on its docket.
On November 17, 2014, upon de novo review of the
ALJ’s decision, a lawfully constituted panel of the Board
affirmed the ALJ’s rulings, findings, and conclusions and
adopted with modification the recommended remedial order
“to the extent and for the reasons stated” in its May 2013
Decision, which the Board expressly incorporated by
reference. Ozburn-Hessey Logistics, LLC, 361 NLRB No.
100, at *1 (2014) (Ozburn IV). Although the Board found
that the Regional Director lawfully certified the Union based
on an accurate, revised tally of the representation election, it
nevertheless issued a new Certification of Representative “in
an abundance of caution.” Id. at *1. Shortly thereafter, OHL
petitioned for review of the Board’s November 2014
Decision, and the Board cross-applied for enforcement of the
same. The two unfair-labor-practice cases were consolidated,
and the Union intervened.
2. The Refusal To Bargain Case
Meanwhile, in December 2014, the Union sent another
letter to OHL requesting that the Company bargain, and OHL
once more refused. The following month, with the Board’s
permission, the General Counsel amended its complaint to
allege that the Company in 2014 had again refused to bargain
in violation of section 8(a)(5) and (1) of the Act. OHL
admitted that it had refused to bargain with the Union, but
asserted that it was not obligated to do so because the Board
had erred in resolving the ballot challenges, overruling the
Company’s election objections, and certifying the Union.
OHL also sought dismissal of the General Counsel’s
complaint on the ground that the Union never filed a new
charge following the Board’s 2014 Certification of
Representative.
On June 15, 2015, the Board issued a Decision and Order
finding that OHL’s refusal to bargain with the Union was
unlawful under section 8(a)(5) and (1) of the Act. See
Ozburn-Hessey Logistics, LLC, 362 NLRB No. 118, at *1-5
(2015) (Ozburn V). The Board rejected the Company’s
efforts to relitigate the ballot challenges and election
objections previously adjudicated in the Board’s November
2014 Decision and found no merit to the Company’s
contention that the General Counsel’s amended complaint
was procedurally infirm for want of a separately filed charge
after the Board certified the Union in 2014. See id. at *2.
OHL petitioned for review of the Board’s 2015 Decision, and
the Board cross-applied for enforcement. The two refusal-to-
bargain cases were consolidated, and the Union intervened.
After briefing was completed, we granted the Company’s
request to consolidate the refusal-to-bargain cases with the
unfair-labor-practice cases. We have jurisdiction over the
consolidated appeals under 29 U.S.C. § 160(e) and (f).
II. Analysis
A. Standard of Review
We “accord[] a very high degree of deference to
administrative adjudications by the [Board]” and reverse its
findings “only when the record is so compelling that no
reasonable factfinder could fail to find to the contrary.”
Bally’s Park Place, Inc. v. NLRB, 646 F.3d 929, 935 (D.C.
Cir. 2011) (internal quotation marks omitted). Under that
very deferential standard, we “must uphold the judgment of
the Board unless, upon reviewing the record as a whole, we
conclude that the Board’s findings are not supported by
substantial evidence, or that the Board acted arbitrarily or
otherwise erred in applying established law to the facts of the
case.” Tenneco Auto., Inc. v. NLRB, 716 F.3d 640, 646-47
(D.C. Cir. 2013) (quoting Wayneview Care Ctr. v. NLRB, 664
F.3d 341, 348 (D.C. Cir. 2011)). We also “owe substantial
deference to inferences drawn by the Board from the factual
record,” Tenneco, 716 F.3d at 647 (internal quotation marks
omitted), and “[o]ur review of the Board’s conclusion as to
discriminatory motive is even more deferential, because most
evidence of motive is circumstantial,” Fort Dearborn Co. v.
NLRB, --- F.3d ---, 2016 WL 3361476, at *3 (D.C. Cir. Apr.
12, 2016) (reissued June 17, 2016) (internal quotation marks
omitted); see also Citizens Inv. Servs. Corp. v. NLRB, 430
F.3d 1195, 1198 (D.C. Cir. 2005). Furthermore, we “will
uphold the Board’s adoption of an ALJ’s credibility
determinations unless those determinations are hopelessly
incredible, self-contradictory, or patently unsupportable.”
United Servs. Auto. Ass’n v. NLRB, 387 F.3d 908, 913 (D.C.
Cir. 2004) (internal quotation marks omitted).
B. Section 8(a)(3) Violations
OHL first challenges the Board’s determination that it
violated section 8(a)(3) and (1) of the Act by issuing a final
warning to Jennifer Smith and terminating Carolyn Jones on
account of their union-related activity.
Under section 8(a)(3), it is “an unfair labor practice for an
employer . . . to encourage or discourage membership in any
labor organization” by “discriminati[ng] in regard to hire or
tenure of employment or any term or condition of
employment.” 29 U.S.C. § 158(a)(3). An employer violates
section 8(a)(3) “by taking an adverse employment action,
such as issuing a disciplinary warning, in order to discourage
union activity.” Tasty Baking Co. v. NLRB, 254 F.3d 114,
125 (D.C. Cir. 2001); see Fort Dearborn, 2016 WL 3361476,
at *3. And an employer that violates section 8(a)(3)
derivatively violates section 8(a)(1)’s prohibition on
“interfer[ing] with, restrain[ing], or coerc[ing] employees in
the exercise of the rights guaranteed in section [7 of the Act],”
29 U.S.C. § 158(a)(1). See Metro. Edison Co. v. NLRB, 460
U.S. 693, 698 n.4 (1983).
Where, as here, an employer purports to have disciplined
or discharged an employee for reasons unrelated to protected
union activity, the Board applies the so-called Wright Line
test. Fort Dearborn, 2016 WL 3361476, at *3; Shamrock
Foods Co. v. NLRB, 346 F.3d 1130, 1135 (D.C. Cir. 2003).
Under that test, the General Counsel “must first make a prima
facie showing sufficient to support the inference that
protected [i.e., union-related] conduct was a motivating factor
in the . . . adverse action.” Tasty Baking, 254 F.3d at 125
(alteration and omission in original) (internal quotation marks
omitted). “Relevant factors” in determining an employer’s
motive “include ‘the employer’s knowledge of the
employee’s union activities, the employer’s hostility toward
the union, and the timing of the employer’s action.’” Fort
Dearborn, 2016 WL 3361476, at *3 (quoting Vincent Indus.
Plastics, Inc. v. NLRB, 209 F.3d 727, 735 (D.C. Cir. 2000));
see Fortuna Enters., LP v. NLRB, 665 F.3d 1295, 1303 (D.C.
Cir. 2011). “Once a prima facie case has been established, the
burden shifts to the company to show that it would have taken
the same action in the absence of the unlawful motive.” Tasty
Baking, 254 F.3d at 126.
OHL does not seriously dispute the Board’s conclusion
that the General Counsel met his initial burden, at the first
step of the Wright Line analysis, to show that union animus
motivated the Company’s decisions to issue a warning to
Jennifer Smith and discharge Carolyn Jones. Nor could it.
Substantial evidence in the record supports the Board’s
findings that Smith and Jones were active supporters of the
Union, that OHL had knowledge of their union-related
conduct, and that OHL harbored animus toward the Union
and its supporters. See Fort Dearborn, 2016 WL 3361476, at
*3; Power Inc. v. NLRB, 40 F.3d 409, 418 (D.C. Cir. 1994).
OHL instead contends that the Board misapplied the
Wright Line test by denying the Company a meaningful
opportunity to show, at the second step of the Wright Line
analysis, that it would have issued a final warning to Smith
and discharged Jones even in the absence of the allegedly
unlawful motive. The Board further erred, OHL claims, by
concluding arbitrarily and without any basis in the record that
the Company’s proffered justifications for disciplining Smith
and discharging Jones were pretextual.
1. The Board’s Application of the Wright Line Test
We first consider OHL’s argument that the Board erred
by affirming what OHL characterized as the ALJ’s
misapplication of the Wright Line test. According to OHL,
the ALJ sidestepped the full Wright Line analysis by
concluding that, “[i]f the employer’s proffered defenses are
found to be a pretext, i.e., the reasons given for its actions are
either false or not, in fact, relied on, the employer fails by
definition to show that it would have taken the same action
for those reasons,” rendering it unnecessary “to perform the
second part of the Wright Line analysis.” ALJ Decision of
May 15, 2012, 14 J.A. 746. OHL argues that the ALJ’s
approach, which the Board subsequently affirmed and
adopted, impermissibly skipped over the second step of
Wright Line and thus abridged the Company’s opportunity to
rebut the General Counsel’s prima facie showing that it
disciplined Smith and Jones for unlawful reasons.
Neither the ALJ nor the Board deviated from the
analytical approach set forth in Wright Line. Applying that
test, the ALJ determined that the Company’s decisions to
punish Smith and Jones were motivated by anti-union animus
and rejected each of the reasons the Company claimed to have
relied on in taking those disciplinary actions. In doing so, the
ALJ did not, as OHL contends, deny it the opportunity to
present its affirmative defenses: the ALJ allowed the
Company to advance its defenses but, after considering them
in light of the record, concluded that they were “mere
pretext[s].” ALJ Decision of May 15, 2012, 14 J.A. 746.
Nothing in Wright Line forecloses that approach and,
indeed, the Board’s precedent interpreting and applying
Wright Line expressly authorizes it. In Rood Trucking, for
example, the Board clarified that:
[a] finding of pretext defeats any attempt by the
[company] to show that it would have discharged the
discriminate[e]s absent their union activities . . .
because where “the evidence establishes that the
reasons given for the [company’s] action are
pretextual—that is, either false or not in fact relied
upon—the [company] fails by definition to show that
it would have taken the same action for those reasons,
absent the protected conduct, and thus there is no
need to perform the second part of the Wright Line
analysis.”
342 NLRB at 898 (quoting Golden State Foods Corp., 340
NLRB 382, 385 (2003)); see also Limestone Apparel Corp.,
255 NLRB 722 (1981) (“[W]here an administrative law judge
has evaluated the employer’s explanation for its action and
concluded that the reasons advanced by the employer were
pretextual, that determination constitutes a finding that the
reasons advanced by the employer either did not exist or were
not in fact relied upon.”). Accordingly, the ALJ’s articulation
of the legal standard comported with the Board’s guidance in
Rood Trucking.
The Company insists that even if Rood Trucking
countenances the ALJ’s approach here, that decision
“contravenes Wright Line” by “preclud[ing] the burden from
ever shifting” to the Company, resulting in the Board
“mak[ing] a premature declaration of pretext without ever
considering the employer’s justification for the disciplinary
decision.” 14 Petitioner’s Reply Br. 15-16. To the extent that
OHL asserts that the ALJ failed to consider the Company’s
defenses, it has mischaracterized the ALJ’s decision, which
considered OHL’s proffered reasons and found them to be
pretextual. To the extent that OHL claims legal error, we
decline its invitation to overturn Rood Trucking. To begin,
that decision constitutes the Board’s well-reasoned
“interpretation of its own precedent” in Wright Line and
therefore “is entitled to deference.” Ceridian Corp. v. NLRB,
435 F.3d 352, 355 (D.C. Cir. 2006) (internal quotation marks
omitted). Even absent such deference, however, we perceive
no conflict between Rood Trucking and the Wright Line test.
To be sure, Wright Line dictates that an employer may
rebut the General Counsel’s initial showing of union animus
by establishing that it “would have taken the same [adverse]
action [against the employee] in the absence of” the unlawful
motive. 251 NLRB at 1091. Rood Trucking’s logic is not to
the contrary. If the Board concludes, as it did here, that the
employer’s purported justifications for adverse action against
an employee are pretextual, then the employer fails as a
matter of law to carry its burden at the second prong of
Wright Line. See Rood Trucking, 342 NLRB at 898. Indeed,
the Board has articulated the Wright Line framework in
similar, if not identical, terms in numerous decisions both
before and since Rood Trucking. See, e.g., Ozburn II, 357
NLRB at 1456 n.3 (“We agree with the judge that the
[Company’s] proffered reason for terminating [the employee]
was shown to be pretextual, and that the [Company] therefore
failed to rebut the Acting General Counsel’s initial case by
showing it would have terminated [the employee] in the
absence of her union support.”); U-Haul of Cal., 347 NLRB
375, 388-89 (2006), enforced mem., 255 F. App’x 527 (D.C.
Cir. 2007) (judgment); Golden State Foods, 340 NLRB at
385; In re Sanderson Farms, Inc., 340 NLRB 402, 402
(2003). Courts, too, have formulated the Wright Line burden-
shifting test consistently with both Rood Trucking and the
ALJ’s decision here. See, e.g., USF Red Star, Inc. v. NLRB,
230 F.3d 102, 106 (4th Cir. 2000) (“If the Board believes the
employer’s stated lawful reasons are non-existent or
pretextual, the [employer’s affirmative] defense fails.”); cf.
NLRB v. Transp. Mgmt. Corp., 462 U.S. 393, 398 (1983),
abrogated on other grounds by Dir., Office of Workers’
Comp. Programs, Dep’t of Labor v. Greenwich Collieries,
512 U.S. 267 (1994). Because the ALJ correctly adhered to
the Board’s decisions in Wright Line and Rood Trucking, the
Board did not err in affirming and adopting the ALJ’s
articulation of the controlling legal standard.
2. Final Warning of Jennifer Smith
We next turn to OHL’s contention that the Board
arbitrarily found that the Company’s asserted justification for
issuing a final warning to Jennifer Smith—namely, that she
violated OHL’s Anti-Harassment Policy by calling her
coworker Stacey Williams a racial slur—“was a mere
pretext.” ALJ Decision of May 15, 2012, 14 J.A. 746. That
challenge misses the mark.
The ALJ determined, and the Board agreed, that Smith
never used that racial epithet. In reaching that determination,
the ALJ credited Smith’s testimony that she never called
Williams any such name because he “found her to be an
honest [and cooperative] witness.” Id. at 741. Smith’s
account, the ALJ emphasized, was consistent with the
accounts of other credible witnesses who observed the
altercation. Jennifer Smith’s co-worker, Jerry Smith, testified
that he would have heard the racial slur if Smith had actually
said it because he was “focused enough on what was going
on,” but that he did not hear it. Testimony of Jerry Smith, 14
J.A. 266-67. Likewise, Sheila Childress, a co-worker who
witnessed the altercation from about thirty feet away, stated
that she did not hear Smith utter the epithet. The ALJ
expressly discredited Stacey Williams’s testimony that
Jennifer Smith addressed him with a racial slur because he
“was a confusing, hostile, and argumentative witness,” whose
testimony was “disjointed.” ALJ Decision of May 15, 2012,
14 J.A. 741. The ALJ also found that OHL employee Shirley
Milan, who corroborated Williams’s account of events, was
“a biased witness, who previously made an unsubstantiated
claim that Smith threatened her with a knife, and who also
conceded that she dislikes Smith.” Id. We decline to disturb
the Board’s adoption of those credibility findings, which rest
on substantial record support and are certainly not reversible
as “hopelessly incredible, self-contradictory, or patently
unsupportable.” United Servs. Auto. Ass’n, 387 F.3d at 913
(internal quotation marks omitted); see Monmouth Care Ctr.
v. NLRB, 672 F.3d 1085, 1091-92 (D.C. Cir. 2012) (declining
to overturn administrative law judge’s credibility
determination “based on a combination of testimonial
demeanor and a lack of specificity and internal
corroboration”).
OHL nevertheless maintains that, even accepting the
Board’s factual finding that Jennifer Smith did not use a racial
slur against Stacey Williams, OHL reasonably believed that
she did based on the evidence at its disposal, and punished her
accordingly. Its reasonable belief, OHL claims, was
sufficient to rebut the General Counsel’s prima facie case of
anti-union motive at the second prong of the Wright Line
analysis. In support of that contention, OHL invokes our
decision in Sutter East Bay Hospitals v. NLRB, 687 F.3d 424
(D.C. Cir. 2012), where we held that “[i]f [a company’s]
management reasonably believed [the employee’s] actions
occurred, and the disciplinary actions taken were consistent
with the company’s policies and practice, then [a company]
could meet its burden under Wright Line regardless of what
actually happened.” Id. at 435-36; see also Fort Dearborn,
2016 WL 3361476, at *6.
Sutter East Bay is of little aid to OHL because, as the
Board concluded, “the record establishes that [OHL’s]
purported belief that Smith used a racial slur was not
reasonable.” Ozburn III, 359 NLRB No. 109 at *2 (emphasis
added), incorporated by reference in Ozburn IV, 361 NLRB
No. 100. The Board found that the credited testimony of
Jennifer Smith, Childress, and Jerry Smith, outlined above,
severely undercut the reasonableness of the Company’s belief,
which was based on the accounts of biased and incredible
witnesses. Id. In fact, the day before the Company issued
Jennifer Smith the final warning, Childress furnished to the
Company a signed statement explaining that she did not hear
Smith use any racial epithet during the verbal altercation with
Williams, giving the Company a significant reason to doubt
Williams’s allegation.
The Board also determined that “credited evidence in the
record” established “that [OHL] did not believe that the use of
racial slurs merited discipline.” Id. Most tellingly, that
record evidence showed that OHL supervisor Phil Smith was
not disciplined at all after hurling highly offensive racial and
homophobic slurs at employees in front of other managers
and employees. And several other witnesses testified that use
of racial slurs was commonplace among the workers at
OHL’s Memphis warehouses. Based on that and other
credited record evidence, the Board reasonably inferred that
OHL acted “inconsistent[ly] in its response to racial slurs”
and “was using its antiharassment policy to target union
supporters.” Ozburn III, 359 NLRB No. 109 at *2; see also
infra 23-25. Consequently, the Company cannot avail itself
of Sutter East Bay’s safe harbor, because, as the Board found,
it has not shown that it reasonably believed Jennifer Smith
used a racial epithet or that “it parceled out discipline as it
normally would when confronted with the same kind of
employee misconduct that its managers reasonably believed
had occurred.” See Fort Dearborn, 2016 WL 3361476, at *6.
The Board reasonably concluded, consistent with the
evidence, that, “even assuming [OHL] reasonably believed
that Smith had used a racial epithet,” the Company “could not
and did not establish that it would have disciplined her in the
absence of the union activity.” Ozburn III, 359 NLRB No.
109 at *2. We owe heightened deference to that well-
reasoned assessment of the Company’s discriminatory motive
and find no basis in the law or record to question the Board’s
determination that OHL’s proffered reason for disciplining
Smith was mere pretext. See Fort Dearborn, 2016 WL
3361476, at *3.
In sum, substantial evidence supports the Board’s
findings that Smith never used the alleged racial slur and that
it was unreasonable for the Company to believe that she did.
We therefore deny OHL’s petition for review, and grant the
Board’s cross-application for enforcement, of the Board’s
decision that OHL’s discipline of Smith violated section
8(a)(3) and (1) of the Act.
3. Discharge of Jones
OHL also challenges the Board’s determination that the
Company’s two putative justifications for terminating Jones
were pretextual. OHL maintains that it fired Carolyn Jones
for two legitimate reasons unrelated to her union support and
activity: (1) she violated the Company’s conduct guidelines
by fabricating a witness statement that supervisor Phil Smith
threatened her with the warning, “watch your back”; and (2)
she violated the Company’s Anti-Harassment Policy by
repeatedly using a racial slur against co-worker Lee Smith.
The Board found those reasons to be pretextual. We affirm
that finding.
a. Discharge Reason # 1: OHL Claims Jones
Fabricated Her Witness Statement
Substantial evidence supports the Board’s conclusion that
Carolyn Jones did not fabricate her witness statement
regarding Phil Smith’s alleged threat. All four witnesses who
signed the statement—Annie Ingram, Troy Hughlett, James
Bailey, and Kedric Smith—confirmed that they heard Phil
Smith tell Jones that she had better watch her back. And at
least two of those witnesses, Ingram and Hughlett, credibly
testified that the witness statement prepared by Jones had
some text on it before they had signed it, undercutting the
Company’s suggestion that Jones prepared the witness
statement only after obtaining the signatures. Kedric Smith
testified that Jones handed him a blank page to sign, but the
Board discounted that testimony because it found he had poor
recall of the pertinent issues. We decline to overturn the
Board’s well-reasoned credibility findings, which rested on a
comparison of “testimonial demeanor,” “specificity,” and
“internal corroboration.” Monmouth Care Ctr., 672 F.3d at
1091-92. The Board thus reasonably concluded, based on the
credible evidence, that Jones did not fraudulently manufacture
her witness statement.
Relying once more on our precedent in Sutter East Bay,
687 F.3d at 435-36, OHL insists that it reasonably believed
that Jones falsified her statement because all four witnesses
who signed her statement had given written statements
confirming that Jones handed them a blank page to sign. But
the Board concluded, based on the credible testimony of
Ingram, Hughlett, and Bailey, that OHL pressured or deceived
at least the three of them into signing false written statements
to that effect. Ingram testified that Human Resources
Manager Evangelia Young interviewed her, gave her a blank
piece of paper to sign, and subsequently added false text
about Jones—notably, the very actions of which OHL accuses
Jones. Bailey testified that Young asked him to sign a
prepared statement confirming that Jones had given Bailey a
blank witness statement to sign. Although Bailey admits to
signing Young’s prepared statement, he testified that he did
not closely inspect the document because he assumed Young
was accurately writing “down what [he] said,” and that he
simply signed it because management’s “constant[]”
questioning about the incident “stressed [him] out.”
Testimony of James Bailey, 14 J.A. 139-41. Hughlett
testified that he signed a statement, prepared by Young,
declaring that Jones’s witness statement was blank when he
signed it, but he testified that he did so only because he did
not want to be questioned any more about the incident and felt
“pressure[d]” by management to sign the statement.
Testimony of Troy Hughlett, 14 J.A. 98. Given the ample
testimony suggesting that OHL itself manufactured evidence
to justify Jones’s termination, the Board had a sound basis for
concluding that OHL could not reasonably have believed that
Jones fabricated her witness statement. See Fort Dearborn,
2016 WL 3361476, at *6 (noting that, to rebut prima facie
case of anti-union motive, employer must show that it
“reasonably believed” that misconduct “had occurred”).
Substantial evidence in the record supports the Board’s
determination that OHL’s first reason for firing Carolyn Jones
was pretextual.
b. Discharge Reason # 2: OHL Claims Jones
Repeatedly Used a Racial Slur
We reach the same result with respect to the Company’s
second putative reason for Jones’s termination—her
ostensible use of a racial slur against her coworker Lee Smith.
Although the Board determined that Carolyn Jones did in fact
use that epithet, it rejected as pretextual OHL’s assertion that
Jones was fired for that reason. The Board found that OHL
punished Jones’s infraction far more severely than prior,
similar infractions by other employees. It pointed in
particular to the Company’s willingness to overlook racist and
other offensive statements made by supervisor Phil Smith,
which the Board found inconsistent with OHL’s decision to
fire Jones. The Board further concluded that OHL’s
termination of Jones deviated from the Company’s
progressive disciplinary policy, which sets forth lesser initial
penalties for violations like hers. Based on those findings, the
Board concluded that the Company would not have
discharged Jones based on her use of a racial slur absent her
union-related activity. Substantial evidence supports that
conclusion.
The record evidence confirms that OHL’s punishment of
Jones was far more severe than the discipline the Company
imposed on other, similar offenders. As the Board explained,
in ten prior disciplinary actions involving racial epithets or
other profane language, OHL issued eight warnings, one
suspension arising from recidivism, and one discharge arising
from recidivism and a connected assault. The only other
employee who was discharged, Ashley Burgess, was a repeat
offender who received a verbal warning for using profanity
against a supervisor in January 2006 and was fired after
hurling racial slurs at another employee during a heated
physical confrontation in September 2010. Unlike Burgess,
Jones was not a recidivist, did not assault, threaten, or
otherwise physically confront anyone at work, and had never
before been reported for using vulgar or offensive language.
In addition, OHL’s willingness to turn a blind eye to the racial
slurs and offensive remarks of OHL supervisor Phil Smith
further underscores the unusual harshness of OHL’s discipline
of Jones. As explained above, Phil Smith called an African
American worker a racial slur and another employee a
homophobic epithet. Unlike Jones, who received OHL’s
harshest punishment, however, OHL did not punish Phil
Smith at all.
OHL argues that the disciplinary cases evaluated by the
Board involved employees who committed different offenses
or were otherwise not comparably situated to Jones. But even
if none of those cases involved the exact circumstances or the
same racial epithets involved in Jones’s case, the Board
deemed them materially similar and held that they
demonstrated that no other employee who had engaged in
only verbal misconduct received as severe a punishment for
an initial infraction as she did. The evidence provides
substantial support for the Board’s findings that OHL engaged
in disparate treatment of Jones and that its stated justification
was mere pretext. See, e.g., Southwire Co. v. NLRB, 820 F.2d
453, 460 (D.C. Cir. 1987) (holding that absence of evidence
that employer discharged any other employee for similar
violation supported finding of pretext); La Gloria Oil & Gas
Co., 337 NLRB 1120, 1124 (2002) (observing that disparate
treatment of employees demonstrates pretext).
The record evidence likewise supports the Board’s
determination that OHL’s termination of Jones deviated from
the Company’s progressive disciplinary system. The
Company’s Handbook identifies four forms of discipline, the
most severe of which is termination. Under the Handbook,
termination may be warranted “[i]n cases in which [less
severe] disciplinary action has failed to correct unacceptable
behavior or performance, or in which the performance issue is
so severe as to make continued employment with OHL
undesirable.” OHL Handbook, 14 J.A. 649. The Company
emphasizes that OHL retains discretion under the Handbook
to “apply any level of discipline . . . without resort to prior
disciplinary steps.” Id. at 646. The Handbook makes equally
clear, however, that discipline “will generally be administered
at the lowest level of severity which will effect correction of
the problem.” Id. at 649. Rather than adhere to its general
disciplinary norm of starting out with the least severe penalty
that might accomplish the disciplinary objective, the
Company chose immediately to impose the harshest form of
discipline on Jones for her remarks, even though she was not
a recidivist and had not engaged in any violent conduct.
Accordingly, substantial evidence supports the conclusion
that the Company deviated from its progressive disciplinary
procedure, thus bolstering the Board’s finding of pretext. See
Fort Dearborn, 2016 WL 3361476, at *5 (concluding that
failure to apply progressive disciplinary policy without
explanation supports a finding of pretext).
Because substantial evidence supports the Board’s
determination that OHL’s proffered reasons for firing Jones
were pretextual, and because its decision is not otherwise
arbitrary or unlawful, we deny the Company’s petition for
review, and grant the Board’s cross-application for
enforcement, of the Board’s decision that OHL’s termination
of Jones violated section 8(a)(3) and (1) of the Act.
C. The Company’s Remaining Challenges
OHL challenges the Board’s decisions on several
additional grounds. It contends that the Board’s
determinations that the Company committed numerous
section 8(a)(1) violations were unsupported by substantial
evidence or otherwise erroneous; that the Board abused its
discretion by imposing three additional remedies; 4 and that
the Board denied OHL due process by affirming the decision
of an ALJ whom OHL believes harbors pro-union bias. The
Board then compounded those errors, OHL argues, by
mistakenly counting Carolyn Jones’s vote in the second
representation election, failing to count the votes of two
administrative assistants, rejecting OHL’s election objections,
and ruling on an amended complaint in the absence of an
amended unfair labor practice charge. After carefully
reviewing the Company’s remaining arguments in light of the
record and applicable legal authority, we conclude that they
lack merit and warrant no further discussion. See United
States v. McKeever, --- F.3d ---, 2016 WL 3213035, at *13
(D.C. Cir. June 10, 2016). Accordingly, “we grant without
amplification the Board’s cross-application for enforcement”
as to the remaining findings challenged by the Company.
Stephens Media, LLC v. NLRB, 677 F.3d 1241, 1251 (D.C.
Cir. 2012); see also Tenneco, 716 F.3d at 647-48.
4 We lack jurisdiction to consider OHL’s challenges to two of
the Board’s remedies—the cease-and-desist order and the electronic
distribution requirement—because the Company did not object to
those remedies before the Board. See 29 U.S.C. § 160(e); Nova Se.
Univ. v. NLRB, 807 F.3d 308, 313 (D.C. Cir. 2015); W&M Props.
of Conn., Inc. v. NLRB, 514 F.3d 1341, 1345 (D.C. Cir. 2008).
* * *
For the foregoing reasons, we deny the Company’s
petitions for review and grant the Board’s cross-applications
for enforcement.
So ordered.
|
purified protein derivative of tuberculin
Definition: purified tuberculin containing the active protein fraction; the tuberculin from which it is prepared differs from tuberculin (1) chiefly in that the bacteria are grown in a synthetic rather than in a broth medium.
1. Field of the Invention
The present invention relates to a semiconductor element test apparatus which brings a plurality of probe needles into contact with semiconductor elements fabricated on a semiconductor wafer, as well as to a method of testing a semiconductor element.
2. Background Art
Processes for manufacturing a semiconductor integrated circuit, such as an IC or an LSI, include a test process generally called a wafer test process. As shown in FIG. 8, during the course of a wafer test process, there is employed a semiconductor element test device which brings a plurality of probe needles 7 of a probe card 1 attached to a wafer prober 2 into contact with semiconductor elements fabricated on a semiconductor wafer 5 placed on top of a stage 4. As shown in FIG. 8, the wafer prober 2 is provided with a test head 10, and the test head 10 is connected, by way of a cable 15, to a tester 3 constituted of a computer.
As shown in FIG. 9, the test apparatus performs a test as to whether or not semiconductor elements 6 are non-defective, through the following steps. Namely, the probe needles 7 are brought into contact with respective electrode pads 8 of a plurality of semiconductor elements 6 (i.e., semiconductor chips) fabricated on the semiconductor wafer 5. In this state, an electrical test input signal is sent to the semiconductor elements 6 from the tester 3 by way of the cable 15 and the probe needles 7. A test output signal processed by the semiconductor elements 6 is sent back to the tester 3 by way of the probe needles 7 and the cable 15. FIG. 10 shows a state of contact between the probe needles 7 and the electrode pads 8. The stage 4 is pushed up toward the probe needles 7 during a test, and the electrode pads 8 are brought into contact with the probe needles 7. After completion of the test, the stage 4 is lowered, thereby separating the electrode pads 8 from the probe needles 7.
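The exchange described above (drive a test input signal through the probe needles, read back the processed test output signal, and judge the die) is essentially a per-die stimulus/response comparison. A minimal sketch, with hypothetical, simplified names not taken from the patent; a production tester compares many pins, patterns, and timing limits:

```javascript
// Simplified sketch of the wafer-test pass/fail loop described above.
// All names (testDie, testWafer, respond, ...) are illustrative, not from the patent.

// A die "passes" when the output signal it returns matches the expected response.
function testDie(sendTestInput, expectedOutput) {
  const actual = sendTestInput();   // tester drives a test input through the probes
  return actual === expectedOutput; // compare the returned test output signal
}

// Probe every die on the wafer in turn and collect a pass/fail map.
function testWafer(dies, expectedOutput) {
  return dies.map((die, i) => ({
    die: i,
    pass: testDie(() => die.respond(), expectedOutput),
  }));
}

// Example: two dies, the second one defective.
const dies = [
  { respond: () => 0xA5 }, // good die echoes the expected pattern
  { respond: () => 0x00 }, // defective die returns a wrong value
];
console.log(testWafer(dies, 0xA5)); // pass: true for die 0, false for die 1
```

In the real apparatus the "send input / read output" step happens while the stage presses the electrode pads against the probe needles; the comparison side, however, is exactly this kind of expected-versus-actual check performed by the tester.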
FIG. 11 is a side view showing the constitution of the prober 2 while the stage 4 remains in a lowered position. FIG. 12 is a perspective view showing a probe card 1 having the probe needles 7 mounted thereon. FIG. 13 is a top view showing the probe card 1. The prober 2 is equipped with the probe card 1. The probe card 1 has a probe card substrate 12 which supports the plurality of probe needles 7. The prober 2 has a test head 10 which operates in cooperation with the probe card 1. A plurality of probe needles 7 are supported on the lower surface of the probe card substrate 12, and on the top of the probe card substrate 12 are provided a reinforcement member 13 for reinforcing the probe card substrate 12, and a plurality of ZIF connectors 11. A plurality of ZIF sockets 9 corresponding to ZIF connectors 11 are provided on the lower surface of the test head 10. The semiconductor elements 6 exchange a test input signal and test output signals with the tester 3, by means of the ZIF connectors 11 being coupled to the ZIF sockets 9. The ZIF sockets 9 incorporate springs and are connected to the ZIF connectors 11 by means of meshing action.
As shown in FIG. 14, the probe card substrate 12 is attached to a probe card hold member 26 along with the reinforcement member 13. As shown in FIG. 15, screws 17 are used for attaching the probe card substrate 12 and the reinforcement member 13. As shown in FIG. 15, the wafer prober 2 is provided with the probe card hold member 26, and the probe card hold member 26 is attached to a movable arm 27. The probe card hold member 26 is used in transporting the probe card 1 into the wafer prober 2 or in transporting the probe card 1 outside the wafer prober 2. The probe card hold member 26 is used for fixing the probe card 1 within the prober 2. The probe card hold member 26 is formed into a ring, and the probe card substrate 12 of the probe card 1 is attached to the probe card hold member 26 with the reinforcement member 13 such that the probe needles 7 protrude from an opening of the ring-shaped probe card hold member 26. As shown in FIG. 16, the probe card 1 is held so as to protrude from an opening 25 formed in the top of the prober 2 while being attached to the probe card hold member 26. The probe card 1 is positioned by means of positioning pins 14 of the test head 10. In this state, the probe card 1 opposes the semiconductor wafer 5 provided on top of the stage 4 with a predetermined space therebetween.
In the related-art apparatus using the screws 17, when a test is performed, the stage 4 is elevated, thereby pressing the semiconductor wafer 5 against the probe needles 7. At this time, stress concentrates at the portions of the probe card substrate 12 where the reinforcement member 13 is attached by means of the screws 17, as a result of which load is imposed so as to induce warpage in the probe card 1. Accordingly, warpage partially develops in the probe card substrate 12. When the probe card 1 has been used over a long period of time, the tip ends of the probe needles 7 become offset from their initial locations. Uniform contact between the probe needles 7 and the semiconductor elements 6 is not sustained. As a result, contact failures arise in some of the semiconductor elements 6, such that non-defective elements 6 may be determined to be defective.
In order to prevent occurrence of warpage in the probe card substrate 12, which would otherwise arise while the probe card substrate 12 is in use, the reinforcement member 13 constituted of a flat plate of hard material is used, as shown in FIG. 17. A structure for attaching the reinforcement member 13 to the probe card substrate 12 and to the probe card hold member 26 is specifically shown in FIG. 18. Counterbores 13a to be used for attaching the screws 17 are formed in two attachment arms 13A and 13C from among four attachment arms 13A through 13D of the reinforcement member 13. In contrast, no counterbores 13a are formed in the remaining two attachment arms 13B and 13D. Thus, the attachment structure is not uniform. Such a non-uniform attachment structure is ascribable to the positioning pins 14 of the test head 10. In order to avoid the positioning pins 14, the counterbores 13a are formed in only the attachment arms 13A and 13C. However, the attachment structure is not uniform and fails to sufficiently prevent occurrence of warpage in the probe card substrate 12. Reference numeral 16 designates a through hole through which the attachment screws 17 penetrate.
The test head 10 is a housing in which a plurality of terminals are provided in a concentrated manner for connecting the tester 3 with the probe card 1. As shown in FIGS. 15 and 16, the test head 10 is provided on top of the wafer prober 2 in a reclosable manner. The positioning pins 14 of the test head 10 are provided for enabling the test head 10, the probe card 1, and the wafer prober 2 to be connected together at the same positions at all times. Positioning holes 21 (see FIG. 16) formed in the probe card substrate 12 are located close to the edges of the attachment arms 13A and 13C. Hence, the counterbores 13a are formed in only the attachment arms 13A and 13C.
As shown in FIG. 18, because of such a non-uniform attachment structure, short screws 17 are used for the attachment arms 13A and 13C, and long screws 17 are used for the attachment arms 13B and 13D. The difference in length between the screws 17 also accounts for occurrence of warpage in the probe card substrate 12. Use of two types of screws 17 having different lengths makes attachment and removal of the screws 17 complicated, thus resulting in consumption of excessive time.
When the probe needles 7 are brought into contact with the electrode pads 8 of the semiconductor element 6 under normal conditions, the stage 4 is elevated so as to scrub the surface of the electrode pads 8 after the probe needles 7 have been brought into contact with the electrode pads 8, so as to eliminate an oxide film which naturally arises in the surface of the electrode pads 8. During repetition of a wafer test, insulating material adheres to the tip ends of the probe needles 7, resulting in an increase in contact resistance. As a result, non-defective semiconductor elements 6 are determined to be defective, thereby undesirably deteriorating manufacturing yield of semiconductor elements. In order to prevent such deterioration, abrasion and cleaning of the tip ends of the probe needles 7 is periodically performed. In order to inspect the positional accuracy of the probe needles and the abrasion and cleaning state of the probe needles 7, the probe card substrate 12 is removed from the probe card hold member 26 in conjunction with the reinforcement member 13, by means of removing the screws 17. After inspection, the probe card substrate 12 must be attached again to the probe card hold member 26.
Use of the two types of screws renders attachment and removal of the screws complicated, thereby lengthening working time. As shown in FIGS. 19A and 19B, flat-head screws having flat heads 17A are used as the screws 17. The flat-head screws have shallow slots 17a to be used for rotating screws, and the slots 17a are easily collapsed. Attachment and removal of the screws 17 is performed often, and therefore the screws 17 must be replaced with new ones. Rust-resistant, hard stainless screws have hitherto been used for the screws 17. However, such screws cannot be magnetically attracted to a driver, which deteriorates workability.
The present invention proposes a semiconductor element test apparatus which improves a structure for attaching a probe card reinforcement member to a probe card hold member and can reduce warpage in the probe card substrate.
Further, the present invention proposes a semiconductor element test apparatus which improves a structure for attaching a probe card reinforcement member to a probe card hold member and can reduce warpage in a probe card substrate by means of realizing commonality of screws used for attaching the probe card reinforcement member.
Further, the present invention proposes a semiconductor element test apparatus which improves a structure for attaching a probe card reinforcement member to a probe card hold member, reduces warpage in a probe card substrate, and enables frequent replacement of screws by means of improving mount screws.
Further, the present invention proposes a semiconductor element test apparatus which improves a structure for attaching a probe card reinforcement member to a probe card hold member, reduces warpage in a probe card substrate, and facilitates attachment and removal of screws by means of improving mount screws.
Further, the present invention proposes a semiconductor element test apparatus which improves a structure for attaching a probe card reinforcement member to a probe card hold member and reduces warpage in a probe card substrate by means of improving the reinforcement member so as to increase the reinforcement strength thereof.
Further, the present invention proposes a semiconductor element test apparatus which improves a structure for attaching a probe card reinforcement member to a probe card hold member and reduces warpage in a probe card substrate, by means of increasing the fastening strength acting between the reinforcement member and the probe card substrate.
Further, the present invention proposes a semiconductor element test method which prevents undesirable deterioration in manufacturing yield of semiconductor elements, through use of a semiconductor element test apparatus which improves a structure for attaching a probe card reinforcement member to a probe card hold member and can reduce warpage in a probe card substrate.
According to one aspect of the present invention, a semiconductor element test apparatus comprises a stage on which a semiconductor wafer having semiconductor elements mounted thereon is placed, and a probe card having a plurality of probe needles opposing the semiconductor wafer, and the semiconductor elements are tested by means of bringing the plurality of probe needles into contact with the semiconductor elements of the semiconductor wafer. The probe card has a probe card substrate for supporting the plurality of probe needles and a reinforcement member to be used with the probe card substrate. The semiconductor element test apparatus has a probe card hold member. The probe card substrate is attached to the probe card hold member in a plurality of mount positions, by means of screws and by way of the reinforcement member. Counterbores of substantially the same depth and shape are formed in respective mount positions on the reinforcement member. The probe card substrate is attached to the probe card hold member by means of the screws and by way of the counterbores.
According to another aspect of the present invention, a method of testing a semiconductor element uses a test apparatus which brings a plurality of probe needles provided on a probe card into contact with semiconductor elements of a semiconductor wafer. The probe card has a probe card substrate for supporting the plurality of probe needles, and a reinforcement member to be used with the probe card substrate. The semiconductor element test apparatus has a probe card hold member having the probe card attached thereto. The reinforcement member is attached to the probe card substrate and to the probe card hold member at a plurality of mount positions, by means of screws. Counterbores of substantially the same depth and shape are formed in the respective mount positions on the reinforcement member. The probe card substrate is attached to the probe card hold member by means of the screws and by way of the counterbores.
Other and further objects, features and advantages of the invention will appear more fully from the following description. |
<html>
<head>
<meta name="viewport" content="width=device-width, user-scalable=no">
</head>
<body>
<div id="container">
<img id="zoom" src="https://upload.wikimedia.org/wikipedia/commons/thumb/7/76/Solanum_melongena_24_08_2012_%281%29.JPG/1200px-Solanum_melongena_24_08_2012_%281%29.JPG">
</div>
<script src="./zoom.js"></script>
<script>
// disable native pinch zoom
document.body.addEventListener('touchstart', e => e.preventDefault(), { passive: false })
var elem = document.getElementById('zoom');
new Zoom(elem);
</script>
<style>
#container {
background:rgba(255, 0, 0, 0.1);
overflow: hidden;
width: 250px;
height: 400px;
}
img {
width: 100%;
}
</style>
</body>
</html>
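The `Zoom` behavior itself lives in `./zoom.js`, which is not included in the snippet. A minimal sketch of what such a pinch-zoom helper might look like, assuming a hypothetical implementation (the real `zoom.js` API is unknown); the scale arithmetic is kept in pure helpers so it can run outside a browser:

```javascript
// Hypothetical sketch of a pinch-zoom helper like the one ./zoom.js might provide.
// Only clampScale and touchDistance are pure logic; the DOM wiring is guarded
// so the file also loads in a non-browser environment.

// Keep the zoom factor within sane bounds (assumed limits: 1x to 4x).
function clampScale(scale, min = 1, max = 4) {
  return Math.min(max, Math.max(min, scale));
}

// Distance between two touch points, used to derive the pinch scale factor.
function touchDistance(a, b) {
  return Math.hypot(a.clientX - b.clientX, a.clientY - b.clientY);
}

if (typeof document !== 'undefined') {
  class Zoom {
    constructor(el) {
      this.el = el;
      this.scale = 1;
      this.startDist = 0;
      el.addEventListener('touchstart', (e) => {
        if (e.touches.length === 2) {
          // Remember the initial finger spread when a pinch begins.
          this.startDist = touchDistance(e.touches[0], e.touches[1]);
        }
      });
      el.addEventListener('touchmove', (e) => {
        if (e.touches.length === 2 && this.startDist > 0) {
          const dist = touchDistance(e.touches[0], e.touches[1]);
          // Scale proportionally to how much the fingers spread or close.
          this.scale = clampScale(this.scale * (dist / this.startDist));
          this.startDist = dist;
          el.style.transform = `scale(${this.scale})`; // apply via CSS transform
        }
      });
    }
  }
  window.Zoom = Zoom; // expose the constructor the inline script expects
}
```

Note the interplay with the page above: the `touchstart` handler on `document.body` calls `preventDefault()` to suppress the browser's native pinch zoom, so a helper like this one must implement zooming itself via CSS transforms.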
|
WASHINGTON — Less than halfway through his first term, President Barack Obama has appointed more openly gay officials than any other president in history. Gay activists say the estimate of more than 150 appointments so far – from agency heads and commission members to policy officials and senior staffers – surpasses the previous high of about 140 reached during two full terms under President Bill Clinton.
White House spokesman Shin Inouye confirmed the record number, saying Obama has hired more gay officials than the Clinton and George W. Bush administrations combined. He said Obama “is proud that his appointments reflect the diversity of the American public.”
“He is committed to appointing highly qualified individuals for each post,” Inouye said. “We have made a record number of openly LGBT (lesbian, gay, bisexual or transgender) appointments and we are confident that this number will only continue to grow.”
Item #2 of 2 comes from a pro-homosexual web site that not only helps LGBTs learn how to get appointed but also lets them upload their resumes, and it lists almost 200 of Obama’s LGBT appointments. Source: http://www.glli.org/presidential
Presidential Appointments Project
Help the president lead the nation.
Whether you are a recent graduate or a seasoned professional, if you have ever considered public service, now is the time to share your expertise and give our community a voice at the table.
In addition to full-time appointed positions, there are opportunities to serve on boards, advisory committees and grant reviewing groups across the country. These bodies make recommendations on how government can work better, smarter and more effectively.
Learn what it takes to work for the president and apply today through the Presidential Appointments Project.
Learn about appointments and how the Project works.
Upload your resume now as a Microsoft Word (.doc or .docx) or Rich Text Format (.rtf) file and answer the required questions.
LGBT Appointments in the Obama Biden Administration
To date, the Obama-Biden Administration has appointed more than 190 openly LGBT professionals to full-time and advisory positions in the executive branch; more than all known LGBT appointments of other presidential administrations combined.
Corporation for National and Community Service
Kimberly Allman Deputy Director of Intergovernmental Relations
Frederick Loo Wong Public Affairs Specialist
Department of Housing and Urban Development
Raul Alvillar Congressional Relations Officer
Raphael Bostic* Assistant Secretary, Policy Development and Research
Neill McG. Coleman General Deputy Assistant Secretary, Office of Public Affairs
Jennifer C. Jones Advisor, Office of the Assistant Secretary for Public and Indian Housing
Mercedes Marquez* Assistant Secretary for Community Planning and Development
Patrick Pontius Special Assistant to the Assistant Secretary for Policy Development and Research
Executive Office of the President
Anthony Bernal Scheduler/Trip Director, Office of Dr. Jill Biden, Office of the Vice President
Jeremy Bernard Social Secretary, The White House
Brian Bond Deputy Director, Office of Public Engagement, The White House
Brook Colangelo Chief Information Officer, Office of Administration
Jeffrey Crowley Director, Office of National AIDS Policy, The White House
Monique Dorsainvil Staff Assistant, Office of Public Engagement, The White House
Carlos Elizondo Residence Manager & Social Secretary to the Vice President & Dr. Biden, Office of the Vice President
Alan O. Fitts Deputy Director of Advance and Trip Director, Office of the First Lady
Michael Fleming Member, Council for Community Solutions, The White House
Kathleen Hartnett Associate Counsel to the President and Special Assistant to the President, The White House
Shin Inouye Director, Specialty Media, Office of Communications, The White House
Brad Kiley Director, Office of Management and Administration, The White House
Kei Koizumi Assistant Director for Federal Research and Development, Office of Science and Technology Policy
Jeffrey Lerner Regional Director, Office of Political Affairs, The White House
Denise Maes Director of Administration, Office of the Vice President
Ryan Metcalf Senior Analyst, Office of Presidential Correspondence, The White House
Greg Millett Senior Policy Advisor, Office of National AIDS Policy, The White House
Ven Neralla Director of Priority Placement, Presidential Personnel Office, The White House
Diana Noyes Researcher, Office of the White House Counsel, The White House
Ellie Sue Schafer Director, White House Visitors Office, The White House
Campbell Spencer Regional Director, Office of Political Affairs, The White House
Everette Stubbs Deputy Director, White House Visitors Center, The White House
Nancy Sutley* Chair, Council on Environmental Quality
Chris Van Es Correspondence Analyst, Office of Presidential Correspondence, The White House
Kamala Vasagam Special Assistant to the President, Office of Presidential Personnel, The White House
Ebs Burnough Deputy Social Secretary, Office of the First Lady, The White House
Karine Jean-Pierre Regional Director, Office of Political Affairs, The White House
Zach Liscow Staff Economist, Council of Economic Advisers
Alison Nathan Associate Counsel to the President, The White House
Zachary A. Portilla Assistant, Office of Presidential Personnel, The White House
Moe Vela Director of Operations, Office of the Vice President
William Woolston Staff Economist, Council of Economic Advisers
United States Commission on Civil Rights
Roberta Achtenberg Commissioner
United States Courts
Edward DuMont** Judge, United States Court of Appeals for the Federal Circuit
Emily Hewitt Chief Justice, U.S. Court of Federal Claims
J. Paul Oetken** Judge, United States District Court for the Southern District of New York
Donna M. Ryu Magistrate Judge, District Court for the Northern District of California
United States Interagency Council on Homelessness
Jennifer Ho Deputy Director, Accountability Management
“You may not believe today, but one day you will. There are no unbelievers in heaven AND there are no unbelievers in hell. What you believe now determines where you will spend eternity.”
Lucky for me, I’m not going to either place. Since they don’t exist.
I’m returning to the earth, going to become worm food and to nourish nature as she has nourished me throughout my life.
Some of me will hopefully live on, in the form of organs and tissues donated to those who can still use them thanks to modern medical science. And hopefully more of me donated to research hospitals, to train the next generation of medical professionals to care for the living here on Earth where it is needed.
“I feel very sad for every unrepentant sinful soul who ends up in hell for all eternity in eternal suffering, burning forever and forever, without ceasing. That makes me feel sad.”
I feel sad for those who worship and swear loyalty to a deity who would condemn innocent children to eternal torture simply by being born into a culture that doesn’t practice the same belief system as you.
If your God is omnipotent, omniscient and omni-benevolent, why would He make humans the only creatures in all of Creation about which He cares? Would He not be concerned about the souls of the horses? Or the trees? Or the dung beetle?
And if the souls of humanity are so important, and can only be saved by swearing fealty to Him, why would He create cultures such as the Buddhists in India, or the aboriginal tribes in the Amazon, where worship of your God is not known? Is He creating these people solely for the purpose of populating His Hell with fresh souls, even though He never gave them a chance to be redeemed?
“If King James repented of his sins…then he is living in heaven, praising God the Father, God the Son, and God the Holy Ghost, for all eternity. That is the Good News of the Gospel of Jesus Christ. No one is beyond redemption.”
I feel very sad for every unrepentant sinful soul who ends up in hell for all eternity in eternal suffering, burning forever and forever, without ceasing. That makes me feel sad. If King James did not repent of all his sins, including the one that a commenter below points out, then King James is burning in hell forever and ever. If King James repented of his sins, possibly by reading the Bible named after him because he requested that scholars of the time put it together, then he is living in heaven, praising God the Father, God the Son, and God the Holy Ghost, for all eternity. That is the Good News of the Gospel of Jesus Christ. No one is beyond redemption. All can be forgiven of all their sins. Jesus Christ paid the price for our sins IF we accept Him as our Lord and Savior. Joseph’s brothers sought to do evil and sinned against him grievously, leaving him for dead and selling him to evildoers. But it turned out that that act saved the lives of Joseph’s brothers and his father. Read Genesis starting in chapter 3 and read it all the way to the end where it ends gloriously: It is written: Genesis 50:20 KJV But as for you (Joseph’s brothers), ye thought evil against me (Joseph); but God meant it unto good, to bring to pass, as it is this day, to save much people alive. — and there you have it! St. Paul wrote of this same thing in the New Testament: Romans 8:28 KJV And we know that all things work together for good to them that love God, to them who are the called according to his purpose.
You may not believe today, but one day you will. There are no unbelievers in heaven AND there are no unbelievers in hell. What you believe now determines where you will spend eternity.
Romans 14:11-12 KJV [11] For it is written, As I live, saith the Lord, every knee shall bow to me, and every tongue shall confess to God.
[12] So then every one of us shall give account of himself to God.
Those who don’t love homosexuals will hide the truth from them. They really don’t care where people spend eternity. They pretend to love the homosexual by not telling them what they must do to be saved. What must they do? They must repent of their sin and accept Christ as Lord and Savior. When they do these things the Holy Spirit will help them to live godly lives. Yes, they won’t be perfect. Yes, they might fall back into the sin, but in their hearts, they will know the sin of homosexuality is wrong and not okay with God. Why? They have been saved and they do not want to do things which are not pleasing in God’s sight. My prayer: May God bless each homosexual someone who loves them enough to tell them the truth. May heaven be their eternal home. May they live in true peace and joy as they wait for Christ’s return.
The word sodomite troubles the lost sinner because it calls sin, sin. Truth can feel like hate to the lost soul. I speak the truth in love. I pray for you that God remove the scales from your eyes and the blocks from your ears so that you can be a hearer of the Word of God and be saved. I pray that you can see the truth and that you find it quickly in The Word of God in the powerful and righteous King James Bible. In Jesus name, I pray, for you. Amen.
I’m starting to like the term “sodomite” more and more too. I think the funniest thing we could do is to start wearing it like a mark of pride and endearment. We’d thus take away the power of a word from hate, the one force that sustains the shriveled and blackened souls of “Christians” in this country.
My only issue is this: as a hetero male who fancies himself as libertine as any gay man, what term can I use? Perhaps “Gomoroahan?”
Watchwoman on the Wall
by Donna Calvin
Shouting from the rooftops, proclaiming warning to a sleeping nation, Donna offers her unique perspective on current events
Disclaimer
Some items on Beliefnet may not reflect the views of Doers of the Word Church, Pastor Ernie Sanders, and “What’s Right, What’s Left, the Voice of the Christian Resistance,” all of whom promote this Watchwoman on the Wall blog on their web site <www.WRWL.org> and radio program. The Bible-based, conservative call-in talk radio program can be heard 10 PM-12 midnight, M-F on Salem Christian Radio Network, Cleveland’s powerful 50,000 watt radio station on the dial at 1220 AM & 1440 AM – and over the Internet worldwide at www.WRWL.org. Donna Calvin is a frequent co-host of “What’s Right, What’s Left.” The pro-life, conservative, non-compromising, King James only, Christian program, hosted by Pastor Ernie Sanders, has been airing for over 40 years and has over 6 million listeners.
A little about me...
I consider myself a person of spiritual integrity and a person who my friends say I am fun to be with. Most of the time I enjoy being with myself as well. My friends and family are important elements in the fabric of my life. I especially enjoy open, honest and oftentimes passionate conversations with all persons with whom I come into contact.
My employment background is in teaching and social work. I was born in New Orleans, LA, and have resided also in Alabama, Florida, Texas, and Mississippi. I have lived in North Carolina since 1981. Have traveled USA and abroad. Meeting people from and experiencing other cultures has been my privilege and joy. I have participated in community theatre--enjoy listening to music, reading, dancing, cooking and sometimes just hanging out with myself. I enjoy outdoor activities--hiking, horseback riding and canoeing. I also appreciate the pensive sharing of a glass of wine or cup of coffee with just about anyone who is not dangerous to him/herself or others. Traveling, short and long trips out and about are fun--meeting, learning, experiencing the hidden wholeness in the scheme of things.
About the one I'm looking for...
I am looking for a person who desires a long term, exclusive relationship and one who shares some similar interests. It is important that this person also has some interests different from mine--I'd like to keep expanding my horizons together with my partner. My partner would realize it doesn't matter whether I play golf, snow ski or play tennis since he's not looking for a sports buddy. I especially appreciate a man with a good sense of humor. I am looking for a man who is comfortable with sharing personal feelings, who is physically and verbally affectionate, thoughtful and caring but not possessive--someone open to life's many surprises--someone who relishes a meaningful conversation with me as much as he would relish engaging in one of his favorite hobbies or activities. He would know it is important to stay emotionally connected because if one takes for granted this precious connection with one's partner, by default, it becomes irrevocably damaged and often lost completely. You need a good heart to win a good heart. My ideal partner will have faults along with the wisdom and courage to know when and how to say, "I'm sorry" (That goes for me, too, of course.). He would also love just holding my hand.
I'd just like to add...
A brief suggestion for my potential partner: don't communicate only about how you spend your spare time or what kind of music or food you like. Please tell me where your heart is, what really matters to you in the bare "ongoingness of things." If just being in one another's presence is mutually inviting, I guarantee we will agree on what to do in our spare time, and it won't matter what music is playing. (As romantic songs go, though, I do sort of favor "As Time Goes By" followed up by "Misty.")
Personality Questions:
Q. Do you enjoy cooking?
A. I love it
Q. How patient do you consider yourself?
A. Very patient
Q. Are you romantic?
A. Extremely romantic
Q. How punctual are you typically?
A. Always on time
Q. Do you enjoy going to the movies?
A. Enjoy it
Q. How much do you enjoy going to live theatre?
A. Like it a lot
Q. How much do you like reading?
A. I enjoy it
My Top Interests:
Arts
The arts provide unique opportunities for enhancement for the mind as well as the spirit. Not to mention pure enjoyment
Family and Friends
Family and friends are a given must and are a priority for me.
Nature and Outdoors
Nature and the outdoors provide opportunities for healthful physical exercise and fun. I also find nature to be spiritually exhilarating.
Message Starter Ideas
This member would like to know one or more of the following about you:
The Good: The Room has intensely challenging puzzle gameplay, beautiful steampunk-like graphics, and immersive ambient sounds as you try to progress through the storyline.
The Bad: Viewing and zoom controls can get a little frustrating at times.
The Bottom Line: If you like puzzle games, The Room is a unique and addictive app you should get right away, with great visuals, pitch-perfect sounds, and plenty more to keep you struggling to figure out just how to unravel the mystery.
The Room is an intensely challenging puzzle game with beautiful 3D graphics where the object is to unravel a mystery by opening steampunk-style boxes.
Though "opening boxes" may sound boring, with The Room, each box has several hidden secrets and clues that keep you engrossed, trying to find the right sequence of actions to open the box. There are dials to turn, complex safe combinations to figure out, hidden compartments that contain keys, and much more. You also have an inventory on the left where you store the various keys and other items you find as you search for a way to use them to open the box and discover what's inside. As you progress, you'll also find pages of a diary that fill in the storyline and provide clues to your mission. With so much of the game hinged on gesture-based trial and error, The Room simply sucks you in.
Part of what makes it so compelling are the amazing-looking visuals and the steampunk design of the boxes and various gadgets. The ornate, old-style puzzle boxes have gleaming metal dials, realistic wood grain, and the lighting effects and shadows really make you feel like you're working with real objects. Though you can get The Room on an Android mobile phone, I highly recommend playing on a tablet just so you can appreciate the work that went into the look and overall feel of the game.
The Room isn't just about the visuals, though. The sound design is excellent, with each gear whirring and old wood creaking as you pull open a drawer. You can hear creepy music and howling wind as you work. The overall feel of both the audio and visuals are part of what makes this game great, but it's the complex and challenging puzzles that keep you coming back for more.
The only thing that can get a little annoying with The Room is the control scheme for viewing the box and zooming in on details. Though the swipe-to-rotate controls are as smooth as silk, the pinch gestures for zooming in and out can get a little frustrating. I'm not sure what the developers can do here to make it better, but just be warned that in some instances, it may not feel as smooth as you'd like. Also, the graphics on the Android version, while still impressive, appeared to be just a notch below those on the iOS version of the game.
The Room seemed like a port of some browser-based, point-and-click adventure to me when I first came across it. But now that I've tried it, I can tell you it is much more, and that just about anyone will enjoy piecing together the mystery as you try to figure out how to open each box.
Jason Parker has been at CNET for nearly 15 years. He is the senior editor in charge of iOS software and has become an expert reviewer of the software that runs on each new Apple device. He now spends most of his time covering Apple iOS releases and third-party apps.
Structural characterization of the N-glycosylation of individual soybean β-conglycinin subunits.
Soybean (Glycine max) 7S β-conglycinin is a seed storage protein consisting of homo- and hetero-trimers of three subunits, namely α (~67 kDa), α' (~71 kDa), and β (~50 kDa), non-covalently associated. The N-glycans released from the whole β-conglycinin were already characterized by ¹H NMR some decades ago. Nevertheless, the actual glycosylation of the potential sites and the glycoforms of the individual subunits have not been specifically investigated so far. In this study, up-to-date chromatographic, electrophoretic and mass spectrometric strategies have been combined to achieve the structural characterization of the glycoforms of the three individual β-conglycinin subunits. Glycosylation sites were assigned by analyzing the tryptic glycopeptides of the isolated subunits. Underivatized N-glycans were purified with a two-step clean-up, consisting of sequential reversed-phase and activated porous graphitized carbon micro-chromatography, and profiled by matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry (MS).
Q:
window.open not opening window
I'm trying to open a page in a popup as opposed to new tab - but whichever browser I try this simply opens in a new tab, not popup.
<input type="button" value="new win" onclick="window.open('http://yahoo.com', 'width=500, height=400')" />
Any reason why?
A:
Second parameter must be the window name:
<input type="button" value="new win"
onclick="window.open('http://yahoo.com', 'mywindow', 'width=500, height=400')" />
Working fine in Chrome and Firefox:
http://jsfiddle.net/DvMy5/2/
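As an aside (a minimal sketch, not part of the original answer): since the features string is just comma-separated key=value pairs, building it from an object makes the three parameter slots (URL, then window name, then features) explicit. The option names below are illustrative.

```javascript
// window.open(url, name, features): the features go in the THIRD slot.
// Assembling the string from an object makes the slots easy to see.
var options = { width: 500, height: 400 };
var features = Object.keys(options)
  .map(function (key) { return key + '=' + options[key]; })
  .join(',');

// Guarded so the snippet also runs outside a browser (e.g. in Node):
if (typeof window !== 'undefined') {
  window.open('http://yahoo.com', 'mywindow', features);
}

console.log(features); // "width=500,height=400"
```

This also shows why the question's call opened a tab: its 'width=500, height=400' string landed in the name slot, so no size features were applied and the browser fell back to its default behavior.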
In a nutshell, the authors criticize the proposal by Attorney General Eric Holder that Congress expand the public safety exception to Miranda.
The Attorney General’s proposal takes America back to the 1990s, when the U.S. approached all terrorist attacks as a law-enforcement-only problem. In making this proposal, Holder reveals that he has not learned what the rest of the country has learned from the 1990s and from the years since 9/11: that this unconventional enemy requires the government to use all lawful tools at its disposal, including holding some terrorists captured in the U.S. as enemy combatants. Federal courts are a powerful weapon, but they are not the only weapon.
After 9/11, the White House rightly shifted the focus of counterterrorism operations from investigating attacks to preventing them. By being more concerned about safeguarding the opportunity to prosecute suspects than stopping terrorist plots, Holder is returning to the wrong-headed strategy that characterized this nation’s pre-9/11 approach to fighting terrorism. The Attorney General has forgotten that intelligence gathering, at the outset of capture, is more important than preparing for a trial.
Q:
javascript: Behavior of variables in base object with inheritance
function Resource(){
    var id = '';
    this.getId = function(){
        return id;
    };
    this.setId = function(value){
        id = value;
    };
}
function Model(){ }
Model.prototype = new Resource();
To test the above code, I have the following:
var m1 = new Model();
m1.setId("blah");
alert("m1 = " + m1.getId());
var m2 = new Model();
m2.setId("blahblah");
alert("m2 = " + m2.getId());
alert("m1 = " + m1.getId());
I would expect the following to be alerted: m1 = blah, m2 = blahblah, m1 = blah.
However, the following is being displayed: m1 = blah, m2 = blahblah, m1 = blahblah.
I have written the above code as an experiment so that I can begin to understand inheritance in JavaScript. Perhaps my expectation of what should be displayed is due to my Java/C# background. Clearly each new Model object created shares the same id variable defined in Resource. Why is this?
How can I write this so that each new Model can have its own unique id (other than the obvious - defining in within Model)?
BTW, I have purposely defined id as var id rather than this.id, as I am looking for the private-variable behavior.
A:
A prototype object is shared among all instances.
Because the id variable belongs to the Resource constructor, and because the prototype of Model is that single instance of Resource, all the Model instances are using that single instance, and therefore the same variable.
For each Model instance to have its own id variable, you'll need to create the id variable in the Model constructor, as well as the accessor methods.
function Model() {
    var id = '';
    this.getId = function(){
        return id;
    };
    this.setId = function(value){
        id = value;
    };
}
Another way to accomplish this is to invoke Resource using the Model instances as the context. This is done using .call or .apply like this:
function Model() {
    Resource.apply(this);
}
This will invoke the Resource function, which will create the id variable and its accessor methods, but the accessors will be assigned to the Model instance.
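To make the behavior concrete, here is the .apply variant run end to end as a self-contained sketch, reusing the question's Resource and the question's test values:

```javascript
function Resource() {
  var id = '';
  this.getId = function () { return id; };
  this.setId = function (value) { id = value; };
}

function Model() {
  // Re-run Resource's constructor with `this` bound to the new Model,
  // so each instance closes over its OWN `id` variable.
  Resource.apply(this);
}

var m1 = new Model();
var m2 = new Model();
m1.setId('blah');
m2.setId('blahblah');
console.log(m1.getId()); // "blah" -- no longer clobbered by m2
console.log(m2.getId()); // "blahblah"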
My Glamorous Life
Post navigation
During the week, when there are no shows going on, actors and most of the crew return to their regularly scheduled lives. Do you know what costumers do?
Laundry.
That is my incredibly glamourous life…
That’s just all the neck action, hankies, and socks for 1776 (now to be known as, “That show with too many men in it”.
The problem I keep running into is that these fellows are all wearing rather a lot of stage makeup, but they also all wear jabots (aka, “neck action”). They keep getting makeup on their festive little neck thingsies, and then they come to me all distraught about it and ask if I can fix it (less than an hour before curtain, mind) or if I have a new bit of neck action that they can have instead. I made almost all of these. I barely had time to make what I needed, so I don’t really have a pile of spares lying about. Silly boys. So I say very sensible things at them, like “no” and “stop putting makeup under your chin” and “your costumer doesn’t even wear makeup. I’m not terribly adept at getting it out of clothes.” Then I lie and assure them it will never show under the lights. ;)
But after two weeks of shows, things were getting grim. (And by grim, I mean flesh tone.) So I went to the hardware store, and found this stuff:
I was going to use engine degreaser on them, but I wasn’t sure what that would do to the cloth. Tee…. This is slightly less dramatic, but it worked. Mostly. There were some major offenders, but at least things are back to being generally white. I suppose if I were a better costumer, I’d re-treat the bad ones.
Among the naturally occurring tetraborate ores, relatively few are ubiquitous, thus commercially valuable. As examples, there may be mentioned tincal or borax (Na₂B₄O₇·10H₂O), kernite or rasorite (Na₂B₄O₇·4H₂O), tincalconite (Na₂B₄O₇·5H₂O), and the most common refined tetraborates, such as borax pentahydrate (Na₂B₄O₇·5H₂O) and anhydrous borax (Na₂B₄O₇). These are given as illustrative examples of boron-containing minerals which may be employed in the practice of the present invention, but it is to be understood that the invention is in no way intended to be limited thereto. In fact, this invention is quite versatile and is designed to utilize any of the tetraborate ores and refined tetraborates for manufacturing ammoniumtriborate and ammoniumpentaborate.
If the tetraborate ore is calcined to render it anhydrous prior to treatment, methylborate-ammonia adduct [(CH₃O)₃B·NH₃] is produced which can be further processed into ammoniumpentaborate, as described in my co-pending application, Ser. No. 135,177, filed Mar. 28, 1980, and in my U.S. Pat. No. 4,196,177, issued Apr. 1, 1980.
A quantitative esterification of boric acid to methylborate in the presence of sulfuric acid has been earlier demonstrated by H. I. Schlesinger, H. C. Brown, D. L. Mayfield and J. R. Gilbreath, J. Am. Chem. Soc., 75, 213-215 (1953). Several patents dealt with the recovery of boron content from ores through the formation and distillation of the volatile methylborate (R. P. Calvert et al, U.S. Pat. No. 1,308,577, 1919; F. H. May et al, U.S. Pat. No. 2,833,623, 1958).
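For reference, the esterification referred to above is the acid-catalyzed reaction of boric acid with methanol (the equation is written out here for clarity; it does not appear in the cited work itself):

H₃BO₃ + 3 CH₃OH → (CH₃O)₃B + 3 H₂O

Sulfuric acid serves both as catalyst and to bind the water formed, driving the equilibrium toward the volatile methylborate.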
Addition compounds between methylborate, ammonia and amines have been described by Goubeau et al, (Z. anorg. u. allgem. Chem. 266, 27-37, 1951; ibid, 266, 161-174, 1951). H. A. Lehmann and W. Schmidt (Z. Chem. 5, 65-66 and 111, 1965) have described ammoniumpentaborate formation from boric acid and ammonia in polar solvents. But, the methylborate-ammonia adduct has not been prepared directly from alkali metal borates, such as tincal (borax) or other tetraborate ores.
As is also mentioned above, the commercially important ammonium pentaborate can be produced in accordance with the present invention. Ammoniumpentaborate was previously manufactured exclusively from the less abundantly occurring alkaline earth pentaborates, such as colemanite, Gerstley borate (e.g., U.S. Pat. No. 3,103,412; Swiss Pat. No. 354,760; Belgian Pat. No. 631,217; Italian Pat. No. 794,945) and potassiumpentaborate (e.g., U.S. Pat. No. 2,948,592). Transformation of borax to ammoniumpentaborate in dilute (10%) aqueous ammoniumchloride solution at 100 °C was earlier reported (U.S. Pat. No. 2,867,502; Ch. O. Wilson et al, Advances in Chem., Serial No. 32, 20-26, 1961). In these processes, the separation of sodiumchloride and ammoniumpentaborate was cumbersome. Most importantly, the distillation of a large volume of water from the pentaborate required high energy. As will subsequently be described, the process according to the present invention, which uses different reagents and solvent, requires significantly lower energy.
As a sole product, ammoniumtetraborate (biborate of ammonia) was obtained from alkali metal or alkaline earth metal borate ores upon the treatment of their water suspension with ammoniumcarbonate, ammoniumhydrocarbonate, ammoniumsulfite or ammoniumbisulfite (Ch. Masson et al, Brit. Pat. No. 10,361, 1897). But, in the absence of methanol, no ammoniumtriborate (NH₄B₃O₅·3CH₃OH) could be formed.
BRENTWOOD — The families of eight special needs students sued former teacher Dina Holder, her principal and other Brentwood school district employees Wednesday, claiming they failed to report abuse suspicions appropriately and alert the community that Holder was a convicted child abuser.
All eight students, who were aged 3 to 6 at the time, had very limited communication skills — six registered on the autism spectrum, and two had Down syndrome. They all claimed that Holder subjected them to physical and mental abuse and to neglect. The federal lawsuit alleges former Loma Vista Elementary School teacher Holder, former Loma Vista Principal Lauri James, former Superintendent Merrill Grant, Assistant Superintendent Margaret Kruse, director of special education Margo Olson and her predecessor, Jean Anthony, violated the students’ civil rights and the Americans with Disabilities Act.
It was only after this newspaper reported in January that Holder had been convicted of child abuse for kicking an autistic child while he lay on the floor in 2010, and that the Brentwood Unified School District had transferred her to another classroom, that the parents learned of Holder's history, the lawsuit claims. The district has already settled with the family of the kicked autistic child for $950,000.
District documents show James conducted an internal investigation into the incident, but it was parents who reported it to police. School employees are required by state law to report child abuse suspicions to police or Child Protective Services. The lawsuit alleges that James, other teachers, the director of special education, the school psychologist, speech therapists, speech pathology aides, instructional aides and other employees failed to report abuse suspicions that surfaced as early as 2008, when a parent reported her child was slapped by Holder.
The revelations led to an uprising by parents of special needs students and the eventual ouster of Grant.
Attorney Peter Alfert, who filed Wednesday’s lawsuit on behalf of the eight students and their parents, said the school district allowed Holder to continue teaching after her plea of no contest to a cruelty-to-a-child charge and did not inform parents of her legal issues.
“They were telling parents who asked why she was the new teacher that she was a good, accredited teacher that was good for their children,” Alfert said. “When in fact they knew she was none of those.
“They allowed her to teach knowing one of the conditions of her plea was she couldn’t be alone with a child,” Alfert said, adding that it was a disqualifying act for a teaching credential, but the district never reported it to the state.
New Brentwood Superintendent Dana Eaton said Wednesday he had not seen the lawsuit and was unable to comment.
The plaintiffs seek compensatory and punitive damages. Many claimed the students sometimes came home from school with mysterious bruises and scrapes and started acting out in different ways and regressing in their educations.
Re: Multiple version of NX with teamcenter
In my company, users have admin rights, so the solution works well for us. I doubt it will work for companies where users do not have admin rights. Of course, that can be mitigated by providing a group policy against that registry key which allows write access. The main point is that all the settings needed to enable multiple clients to be launched from the same machine are in a single file and not spread out. No additional files are required.
---
abstract: 'A relation between the stellar mass $M$ and the gas-phase metallicity $Z$ of galaxies, the MZR, is observed up to higher redshifts. It is a matter of debate, however, whether the SFR is a second parameter in the MZR. To explore this issue at $z>1$, we used VLT-SINFONI near-infrared (NIR) spectroscopy of eight zCOSMOS galaxies at $1.3<z<1.4$ to measure the strengths of four emission lines: H$\beta$, \[OIII\]$\lambda$5007, H$\alpha$, and \[NII\]$\lambda$6584, in addition to \[OII\]$\lambda$3727 measured from VIMOS. We derive reliable O/H metallicities based on five lines, and also SFRs from extinction-corrected H$\alpha$ measurements. We find that the MZR of these star-forming galaxies at $z \approx 1.4$ is lower than the local SDSS MZR by a factor of three to five, a larger change than reported in the literature using \[NII\]/H$\alpha$-based metallicities from individual and stacked spectra. After correcting the N2-based O/Hs using recent results by @newman14, the larger FMOS sample at $z \sim 1.4$ of @zahid14 also shows an evolution of the MZR similar to that of the zCOSMOS objects. These observations are also in agreement with a non-evolving FMR when using the physically motivated formulation of the FMR from @lilly13.'
author:
- 'Christian Maier$^1$, Simon J. Lilly$^2$, Bodo L. Ziegler$^1$, zCOSMOS team'
title: 'Oxygen abundances of zCOSMOS galaxies at $z \sim 1.4$ based on five lines and implications for the fundamental metallicity relation'
---
Introduction
============
In the local universe, the metallicity at a given mass also depends on the SFR of the galaxy [e.g., @mannu10], i.e., the SFR appears to be a “second-parameter” in the mass-metallicity relation (MZR). @mannu10 claimed that this local $Z(M,SFR)$ is also applicable to higher redshift galaxies, and coined the phrase fundamental metallicity relation (FMR) to denote this *epoch-invariant* $Z(M,SFR)$ relation. @lilly13 showed that a $Z(M,SFR)$ relation is a natural outcome of a simple model of galaxies in which the SFR is regulated by the mass of gas present in a galaxy.
Most studies of the Z(M,SFR) at $z \sim 1.4$ were based on small samples or samples with limited spectroscopic information, producing contradictory results. Except for the $z \sim 1.4$ study of @maier06, the derived O/H metallicities in the literature at $z \sim 1.4$ have been based just on \[NII\]/H$\alpha$, via the N2-method [@petpag04]. However, @newman14 found that the MZR at high $z$ determined using the N2-method might be up to a factor of three times too high in terms of metallicity, when the effects of both photoionization and shocks are not taken into consideration.
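For reference, the N2 calibration of @petpag04 referred to here is the linear fit (quoted from that paper; valid for $-2.5 < \mathrm{N2} < -0.3$):

$$\mathrm{N2} \equiv \log\left( [\mathrm{NII}]\lambda 6584 / \mathrm{H}\alpha \right), \qquad 12 + \log(\mathrm{O/H}) = 8.90 + 0.57 \times \mathrm{N2}$$

The sensitivity of N2 to ionization conditions and shocks, rather than to O/H alone, is what motivates the downward correction found by @newman14.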
Results
=======
We have carried out NIR follow-up spectroscopy with VLT-SINFONI of eight massive star-forming galaxies at $1.3<z<1.4$, selected from the zCOSMOS-bright sample [@lilly09] based on their \[OII\] emission line observed with VIMOS. Observations with SINFONI in two bands (J to observe H$\beta$ and \[OIII\], and H to observe H$\alpha$ and \[NII\]) were performed between November 2013 and January 2014. SFRs, masses and O/H metallicities were measured as described in @maier14.
![The median local MZR and 1$\sigma$ values of local SDSS galaxies from @trem04 are shown by the three upper black lines, while the three thinner lower black lines show the median SDSS MZR shifted downward by 0.3, 0.5 and 0.7 dex, respectively. The @zahid14 N2-based O/Hs are corrected for the N2-calibration issues discussed in @newman14, and shown as red dots for the individual measurements, and as open red circles for the O/Hs derived from the stacked spectra. The observed MZR at $z \approx 1.4$ is lower than the local SDSS MZR by a factor of three to five, and is also in agreement with the FMR prediction of @lilly13. []{data-label="MZRz14"}](Mass_OH_z14_AUG14.ps){width="0.5\columnwidth"}
In Fig.\[MZRz14\] we compare the MZR of galaxies at $1.3<z<1.7$ with the relation of SDSS. Stellar masses are converted to or computed with a @salp55 IMF, O/Hs are computed or converted to the @kewdop02 calibration. The @zahid14 N2-based O/Hs were converted to the @kewdop02 calibration and then systematically corrected to 0.4 dex lower metallicities, in agreement with recent results of @newman14 who found that the MZR at $z>1$ determined using the N2-method might be $2-3$ times too high in terms of metallicity. We also overplot (as a magenta line) the @lilly13 prediction of a non-evolving Z(M,SFR), the case represented by the dashed lines in Fig.7 of @lilly13. The $z \sim 1.4$ observational data shown in Fig.\[MZRz14\] are in agreement with this FMR prediction.
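Since metallicity is logarithmic, the downward offsets of 0.3, 0.5 and 0.7 dex quoted for the figure translate directly into the "factor of three to five" stated in the text; a quick check:

```python
# Convert logarithmic metallicity offsets (in dex) into linear factors:
# an MZR lower by x dex is lower by a factor of 10**x.
for dex in (0.3, 0.5, 0.7):
    print(f"{dex} dex -> factor {10 ** dex:.1f}")
```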
Maier, C., Lilly, S. J., Ziegler, B. L., et al. 2014, ApJ, 792, 3
Maier, C., Lilly, S. J., Carollo, C. M., et al. 2006, ApJ, 639, 858
Lilly, S. J., Le Brun, V., Maier, C., et al. 2009, ApJS, 184, 218
Kewley, L. J. & Dopita, M. A. 2002, ApJS, 142, 35
Lilly, S. J., Carollo, C. M., Pipino, A., et al. 2013, ApJ, 772, 119
Mannucci, F., Cresci, G., Maiolino, R., et al. 2010, MNRAS, 408, 2115
Newman, S. F., Buschkamp, P., Genzel, R., et al. 2014, ApJ, 781, 21
Pettini, M. & Pagel, B. E. J. 2004, MNRAS, 348, L59
Salpeter, E. E. 1955, ApJ, 121, 161
Tremonti, C. A., Heckman, T. M., Kauffmann, G., et al. 2004, ApJ, 613, 898
Zahid, H. J., Kashino, D., Silverman, J. D., et al. 2014, ApJ, 792, 75
|
Impossible.com
Impossible is an innovation group and incubator. It started as a gift economy platform created by Lily Cole in 2013, and has since expanded to other areas, mainly design and technology. Impossible is working on client projects with potentially far-reaching impacts.
Impossible People
Impossible People (previously Impossible.com) is an altruism-based mobile app which invites people to give their services and skills away to help others. Created by Lily Cole, the app allows users to post something they would like to do or need so that others can grant their wish. In May 2013, Cole presented the app's beta in conjunction with, and with the support of, Wikipedia co-founder Jimmy Wales at a special event at Cambridge University. It is the first Yunus social business in the UK. The project became open source in March 2017.
Funding and support
In the past, the Impossible.com gift economy project received a grant of £200,000 from the Cabinet Office’s Innovation in Giving fund. Other investors include Lily Cole herself and her boyfriend, Impossible's co-founder Kwame Ferreira. The social network was also bolstered by donated services from Muhammad Yunus, from Brian Boylan, chairman of Wolff Olins, and from Tea Uglow, creative director for Google’s Creative Lab; by office space and an "angel investor" role from Jimmy Wales; and by legal services from Herbert Smith Freehills.
|
Canadian uranium could end up in Indian nuclear weapons, triggering a new arms race between India and Pakistan, according to about 200 international experts and delegates of the World Uranium Symposium. They denounced the Canada-India uranium deal, signed on the eve of the NPT review conference to be held in New York City in two weeks’ time, as an attempt by the Harper government to undermine and discredit the key international treaty prohibiting the proliferation of nuclear weapons.
“Despite rules specifying no military use of Canadian materials, some uranium from Canada could well end up in Indian bombs,” said Dr. Gordon Edwards of the Canadian Coalition for Nuclear Responsibility. “At the very least, Canadian uranium will free up more Indian uranium for weapons production purposes.”
India, which maintains an arsenal of nuclear weapons and has never signed the United Nations’ Nuclear Non-Proliferation Treaty (NPT), signed a five-year contract with the Saskatoon-based company Cameco to supply over seven million pounds of uranium to India over the next five years during Indian Prime Minister Narendra Modi’s visit to Canada.
“Canada’s attitude sends a terrible message to the international community regarding the necessity for all countries to respect and to reinforce the Nuclear Non-Proliferation Treaty,” said Arielle Denis, Director of the International Campaign for the Abolition of Nuclear Weapons (ICAN) for Europe, the Middle East and Africa.
“India’s nuclear weapons program is very active, as demonstrated by a series of nuclear test explosions,” said Shri Prakash, one of several participants from India at the World Uranium Symposium. “Moreover, tensions between India and Pakistan, a country with its own nuclear arsenal, are running very high. The attitude of Canada is irresponsible and alarming.”
“We should be reinforcing the NPT and not undermining it,” Arielle Denis of ICAN explained. “Canada is going against the Austrian Agreement launched last December to fill the gap in international law by making it illegal not only to use nuclear weapons, but also to possess them. Nuclear warheads are the only Weapons of Mass Destruction (WMD) not forbidden under existing international conventions.”
The experts point out that India has already broken its promise to Canada in the past by using a Canadian reactor given as a gift in 1956 to produce the plutonium for its first atomic bomb, detonated in 1974.
Canada then broke off all nuclear cooperation with India, a policy that was maintained until the Harper government decided to resume it despite India's nuclear arsenal.
Australian delegates to the World Uranium Symposium also expressed grave misgivings about the negotiations towards a similar agreement between India and Australia, whereby Australian uranium would be sold to India. |
Despite reports to the contrary, Buffalo Bills general manager Brandon Beane said Friday morning the team is not trading for Pittsburgh Steelers wide receiver Antonio Brown.
According to ESPN's Adam Schefter, Beane said: "We inquired about Antonio Brown on Tuesday and kept talks open with the Steelers. We had positive discussions, but ultimately it didn't make sense for either side. As great a player as Antonio Brown is, we have moved on, and our focus is on free agency."
NFL Network's Ian Rapoport reported Thursday night the Bills were "closing in" on a deal for Brown, but Brown referred to it as "fake news" in responding to an Instagram post by the NFL.
Early Friday morning, Schefter reported that a trade to Buffalo was "unlikely." He added that a deal wasn't as close as Rapoport initially indicated.
Rapoport then reported the Bills and Steelers engaged in "intense talks" overnight and "almost [got] there on trade compensation" before Buffalo dropped out.
There were several factors that may have played a role, and chief among them is compensation, as it's possible Pittsburgh was asking for the No. 9 overall pick in the 2019 draft. It would have been tough for the Bills to justify moving that pick for a 30-year-old wideout since Buffalo has several holes to fill on both sides of the ball.
Also, there is no guarantee that Brown would have reported to the Bills. Albert Breer of The MMQB reported that Brown "made it clear" he didn't want to go to Buffalo.
NFL Network's Tom Pelissero also reported on that possibility, noting Brown could have threatened retirement in the event of a trade to Buffalo.
Jason La Canfora of CBS Sports reported that it wasn't about money, adding that Brown was simply "unwilling" to go to Buffalo. He also called the market for Brown "bleak."
Rapoport counted the Green Bay Packers as another team not in on Brown and noted that "many teams" are out of the running.
Brown has been named to the Pro Bowl in each of the past six seasons, and he has also reached at least 100 catches for 1,200 yards and eight touchdowns in each of those campaigns. Last season, Brown led the NFL with 15 touchdown receptions to go along with 104 grabs for 1,297 yards.
Of course, Brown would have provided a huge boost to a Bills offense devoid of a go-to wideout. Last season, second-year man Zay Jones led the Bills with 652 receiving yards, while undrafted rookie Robert Foster was second with 541.
Quarterback Josh Allen needs better targets if he's to make strides in his second season, but Buffalo will have to look toward the draft or free agency to make that happen. |
Factor -311364*z**2 - 12276*z - 121.
-(558*z + 11)**2
Suppose 2*t**2/9 + 1616*t/9 - 1618/9 = 0. What is t?
-809, 1
Suppose -4*h**3 - 68*h**2 - 224*h - 208 = 0. Calculate h.
-13, -2
Factor 2*d**4/9 + 8*d**3/3 + 2*d**2 - 44*d/9.
2*d*(d - 1)*(d + 2)*(d + 11)/9
What is u in 2*u**4/3 - 12*u**3 + 30*u**2 = 0?
0, 3, 15
Factor 4*o**4 + 580*o**3 + 9528*o**2 + 49280*o + 65024.
4*(o + 2)*(o + 8)**2*(o + 127)
What is x in -45*x**4 + 25*x**3 + 65*x**2 - 25*x - 20 = 0?
-1, -4/9, 1
Determine y so that -2*y**5/21 + 52*y**4/21 + 62*y**3/21 - 208*y**2/21 - 72*y/7 = 0.
-2, -1, 0, 2, 27
Find m, given that 84*m**2 + 1755*m - 189 = 0.
-21, 3/28
Factor 2*z**5 - 6*z**4 - 2*z**3 + 6*z**2.
2*z**2*(z - 3)*(z - 1)*(z + 1)
Let -d**4/2 - 22*d**3 - 21*d**2 + 22*d + 43/2 = 0. Calculate d.
-43, -1, 1
Factor w**4/4 - 91*w**3/2 + 2502*w**2 - 72917*w/2 - 156025/4.
(w - 79)**2*(w - 25)*(w + 1)/4
Factor 2*n**4/13 - 8*n**3/13 + 10*n**2/13 - 4*n/13.
2*n*(n - 2)*(n - 1)**2/13
Let -2*x**4/7 + 5504*x**3/7 + 1574*x**2 - 5504*x/7 - 11016/7 = 0. Calculate x.
-2, -1, 1, 2754
Let -6*o**2/7 + 300*o/7 - 3750/7 = 0. What is o?
25
Factor 2*z**4 - 128*z**3 + 2588*z**2 - 17280*z + 36450.
2*(z - 27)**2*(z - 5)**2
Let 4*l**5/11 - 4*l**4/11 - 39*l**3/11 - 34*l**2/11 - 8*l/11 = 0. What is l?
-2, -1/2, 0, 4
Factor -5*n**3 - 45*n**2 + 50*n.
-5*n*(n - 1)*(n + 10)
Suppose 4*n**5 + 24*n**4 - 76*n**3 - 64*n**2 + 240*n - 128 = 0. Calculate n.
-8, -2, 1, 2
Factor 3*f**4 + 393*f**3 + 12261*f**2 - 40401*f.
3*f*(f - 3)*(f + 67)**2
Let -k**3/2 + 2*k**2 - 3*k/2 = 0. Calculate k.
0, 1, 3
Factor 3*v**2/5 + 6*v/5 - 189/5.
3*(v - 7)*(v + 9)/5
Find c, given that c**2 + 352*c - 353 = 0.
-353, 1
Suppose 12*n**4 - 20*n**3 - 16*n**2 + 16*n = 0. What is n?
-1, 0, 2/3, 2
Determine c so that 64*c**3 + 5472*c**2 + 2724*c + 340 = 0.
-85, -1/4
Suppose -3*j**4 - 183*j**3 - 2295*j**2 - 10725*j - 17250 = 0. What is j?
-46, -5
Factor 2*h**5 + 616*h**4 - 3744*h**3 + 7516*h**2 - 6274*h + 1884.
2*(h - 3)*(h - 1)**3*(h + 314)
Let -2*x**5 + 30*x**4 + 2*x**3 - 30*x**2 = 0. Calculate x.
-1, 0, 1, 15
Factor -60*x**3 + 32*x**2.
-4*x**2*(15*x - 8)
Factor z**5 + 10*z**4 - 108*z**3 + 182*z**2 - 85*z.
z*(z - 5)*(z - 1)**2*(z + 17)
Solve b**3 - b = 0.
-1, 0, 1
Factor 3*z**3/8 + 15*z**2 + 111*z/8 - 117/4.
3*(z - 1)*(z + 2)*(z + 39)/8
What is h in -2*h**5/7 - 2*h**4 - 4*h**3 - 4*h**2/7 + 30*h/7 + 18/7 = 0?
-3, -1, 1
Suppose -2*y**5/9 - 4*y**4/9 + 4*y**3/9 + 16*y**2/9 + 14*y/9 + 4/9 = 0. What is y?
-1, 2
Let -5*o**2 + 3920*o - 768320 = 0. Calculate o.
392
Solve 21*t**4 + 510*t**3 + 1089*t**2 + 732*t + 132 = 0 for t.
-22, -1, -2/7
Let -9*p**5 - 60*p**4 + 33*p**3 + 84*p**2 = 0. Calculate p.
-7, -1, 0, 4/3
Find r, given that 10*r**4/13 + 8*r**3/13 - 2*r**2/13 = 0.
-1, 0, 1/5
Solve 5*m**4 - 50*m**3 - 265*m**2 - 210*m = 0.
-3, -1, 0, 14
Solve -3*p**3/7 + 240*p**2/7 = 0 for p.
0, 80
Determine b so that 10*b**2 - 2158*b - 432 = 0.
-1/5, 216
Factor -2*y**4 + 56*y**3 + 58*y**2.
-2*y**2*(y - 29)*(y + 1)
Factor -2*y**2 + 56*y + 58.
-2*(y - 29)*(y + 1)
Find i such that 4*i**2 - 376*i + 8836 = 0.
47
Factor 345744*w**3 - 1039584*w**2 + 7060*w - 12.
4*(w - 3)*(294*w - 1)**2
Find i such that i**5/3 - 40*i**4 + 3718*i**3/3 - 2360*i**2 + 3481*i/3 = 0.
0, 1, 59
Factor -177*d**4/4 - 171*d**3/4 + 3*d**2/2.
-3*d**2*(d + 1)*(59*d - 2)/4
Factor -4*t**4 - 68*t**3 + 148*t**2 - 76*t.
-4*t*(t - 1)**2*(t + 19)
Factor -5*u**2 - 425*u + 430.
-5*(u - 1)*(u + 86)
Suppose 167*o**2/6 + 57*o/2 + 2/3 = 0. Calculate o.
-1, -4/167
Let -8*q**3 + 23*q**2 + 4*q - 36 = 0. What is q?
-9/8, 2
Factor 212*r**4 + 200*r**3 - 436*r**2 + 24*r.
4*r*(r - 1)*(r + 2)*(53*r - 3)
Let -5*x**4 + 95*x**3 + 330*x**2 + 20*x - 440 = 0. Calculate x.
-2, 1, 22
Find g such that g**5 - 34*g**4/3 + 83*g**3/3 + 28*g**2 - 12*g = 0.
-1, 0, 1/3, 6
Suppose -2*y**5 + 114*y**4 + 244*y**3 - 456*y**2 - 944*y = 0. Calculate y.
-2, 0, 2, 59
Solve 4*b**3 + 83*b**2 - 205*b + 46 = 0.
-23, 1/4, 2
Find t, given that t**5 - 5*t**4 - 62*t**3 + 206*t**2 + 1085*t - 1225 = 0.
-5, 1, 7
Let -14*b**4/19 + 13220*b**3/19 - 3135926*b**2/19 + 7106448*b/19 - 1774728/19 = 0. What is b?
2/7, 2, 471
Determine s, given that -3*s**3 - 33*s**2 + 126*s = 0.
-14, 0, 3
Determine p, given that -3*p**3 - 372*p**2 - 5967*p - 25758 = 0.
-106, -9
Factor -4*n**5 + 40*n**4 - 64*n**3.
-4*n**3*(n - 8)*(n - 2)
Suppose -2*n**4/17 - 72*n**3/17 - 618*n**2/17 - 812*n/17 + 4704/17 = 0. Calculate n.
-24, -7, 2
Factor -b**3/9 + 254*b**2/9 - 4928*b/3 - 19360.
-(b - 132)**2*(b + 10)/9
Determine k so that k**2/5 + 134*k/5 + 1464/5 = 0.
-122, -12
Determine m so that -170*m**3/11 + 174*m**2/11 + 336*m/11 - 8/11 = 0.
-1, 2/85, 2
Factor 115*q**2 + 1055*q + 180.
5*(q + 9)*(23*q + 4)
What is f in -3*f**2/5 + 189*f/5 - 2976/5 = 0?
31, 32
Factor -85*v**3 - 75*v**2 + 180*v - 20.
-5*(v - 1)*(v + 2)*(17*v - 2)
Factor a**3/3 - 46*a**2/3 + 88*a/3.
a*(a - 44)*(a - 2)/3
Factor 3*q**2/7 + 13572*q/7 + 15349932/7.
3*(q + 2262)**2/7
Determine q so that q**3 + 14*q**2 + 43*q + 30 = 0.
-10, -3, -1
Factor 2*h**3/7 - 2*h/7.
2*h*(h - 1)*(h + 1)/7
Solve 4*d**2 + 6024*d + 2268036 = 0.
-753
Factor -v**4/5 + 2*v**3/5 + 72*v**2/5 - 416*v/5 + 128.
-(v - 4)**3*(v + 10)/5
Determine m, given that -5*m**2/6 + 190*m/3 = 0.
0, 76
Determine a, given that 14*a**2/15 + 100*a/3 - 48/5 = 0.
-36, 2/7
Factor 3*c**3 - 7686*c**2 + 4930560*c - 9830400.
3*(c - 1280)**2*(c - 2)
Let -4*o**5/3 + 116*o**4 - 316*o**3/3 - 820*o**2 + 4496*o/3 - 688 = 0. What is o?
-3, 1, 2, 86
Determine v so that -4*v**3 + 720*v**2 - 11988*v + 52488 = 0.
9, 162
Let -3*s**5 - 6*s**4 + 174*s**3 - 588*s**2 + 693*s - 270 = 0. What is s?
-10, 1, 3
Determine y so that 10*y**5/11 + 6*y**4 + 46*y**3/11 - 234*y**2/11 - 344*y/11 - 120/11 = 0.
-5, -2, -1, -3/5, 2
Determine s so that s**5/10 - 3*s**4/5 - 7*s**3/2 - 24*s**2/5 - 2*s = 0.
-2, -1, 0, 10
Suppose -4*r**2 - 2212*r = 0. Calculate r.
-553, 0
Factor d**4 - 66*d**3 + 949*d**2 + 4620*d + 4900.
(d - 35)**2*(d + 2)**2
Let 3*h**3 + 60*h**2 + 57*h = 0. What is h?
-19, -1, 0
Factor b**2/4 + 25*b/4 + 6.
(b + 1)*(b + 24)/4
Find q, given that q**5 - 6*q**4 - 28*q**3 + 110*q**2 - 117*q + 40 = 0.
-5, 1, 8
Factor -i**4/4 + 143*i**3/4 - 4887*i**2/4 - 16571*i/4 + 5329.
-(i - 73)**2*(i - 1)*(i + 4)/4
Determine i so that -2*i**5 + 3652*i**4 - 2221632*i**3 + 449507772*i**2 + 451733058*i = 0.
-1, 0, 609
Find p such that -2*p**5/9 - 62*p**4/3 - 2204*p**3/3 - 110900*p**2/9 - 96250*p - 281250 = 0.
-25, -9
Factor i**3/9 + 7*i**2/3 + 11*i - 121/9.
(i - 1)*(i + 11)**2/9
Factor -1458*o**3 + 6156*o**2 - 5760*o - 1600.
-2*(9*o - 20)**2*(9*o + 2)
What is y in -y**2 - 786*y - 154449 = 0?
-393
What is l in 2*l**3/3 + 10*l**2/3 = 0?
-5, 0
Suppose m**5/4 - 69*m**4/2 + 4761*m**3/4 = 0. What is m?
0, 69
Factor 4*l**2 - 400*l + 10000.
4*(l - 50)**2
What is t in 2*t**4/5 - 6*t**3/5 - 2*t**2/5 + 6*t/5 = 0?
-1, 0, 1, 3
What is g in 27*g**4/5 - 646*g**3/5 - 15*g**2 + 646*g/5 + 48/5 = 0?
-1, -2/27, 1, 24
What is c in -5*c**3 - 150*c**2 + 5*c + 150 = 0?
-30, -1, 1
Factor -j**2/4 - 49*j/2 + 99/4.
-(j - 1)*(j + 99)/4
Factor -874*w**3/13 - 752*w**2/13 + 72*w/13.
-2*w*(19*w + 18)*(23*w - 2)/13
Factor 9*r**2 + 915*r + 606.
3*(r + 101)*(3*r + 2)
Factor 5*v**5 + 50*v**4 + 20*v**3 - 950*v**2 - 1625*v + 2500.
5*(v - 4)*(v - 1)*(v + 5)**3
Solve 5*n**2 + 1635*n + 3250 = 0.
-325, -2
Factor 3*t**2 - 54*t + 135.
3*(t - 15)*(t - 3)
Factor 65*x**4 + 1395*x**3 + 10185*x**2 + 26705*x + 10290.
5*(x + 7)**3*(13*x + 6)
Solve -r**5/7 - 33*r**4/7 - 326*r**3/7 - 72*r**2 + 4320*r/7 - 3456/7 = 0.
-12, 1, 2
Let 3*x**4 - 375*x**3 + 2865*x**2 - 6705*x + 4212 = 0. What is x?
1, 3, 4, 117
Factor -3*a**3 - 21798*a**2 - 52794756*a - 42622966344.
-3*(a + 2422)**3
Let -2*k**2/3 + 344*k = 0. Calculate k.
0, 516
Determine u so that -2*u**4/3 + 4*u**3/3 + 1288*u**2/3 + 3920*u = 0.
-14, 0, 30
Factor 3*y**4/4 + 21*y**3/4 - 21*y**2/2 - 36*y.
3*y*(y - 3)*(y + 2)*(y + 8)/4
Factor 5*b**3/6 + 3*b**2 - 4*b/3.
b*(b + 4)*(5*b - 2)/6
Factor -5*d**5 + 35*d**4 - 50*d**3 - 10*d**2 + 55*d - 25.
-5*(d - 5)*(d - 1)**3*(d + 1)
Factor -3*f**4/2 + 1929*f**3/2 - 412155*f**2/2 + 28983075*f/2 + 29815125.
-3*(f - 215)**3*(f + 2)/2
Let 3*y**2 + 264*y + 4080 = 0. Calculate y.
-68, -20
Factor 9*b**4/7 - 44*b**3/7 + 67*b**2/7 - 4*b - 4/7.
(b - 2)**2*(b - 1)*(9*b + 1)/7
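Answers like the factorization of -311364*z**2 - 12276*z - 121 above can be spot-checked by expanding the factored form and comparing coefficients; a small stdlib-only Python sketch (the helper name is ours):

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# Check: -311364*z**2 - 12276*z - 121 == -(558*z + 11)**2
lhs = [-121, -12276, -311364]            # constant term first
sq = poly_mul([11, 558], [11, 558])      # (558*z + 11)**2
rhs = [-c for c in sq]
print(lhs == rhs)                        # → True
```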
|
Near daily we got emails, messages, telegrams, and feedback at events asking a simple question: When Android?
It was a busy summer! 🌞
This post details our experiences in launching a Blockchain native wallet on top of the Android platform.
First Some Background ✅
Monolith is a non-custodial banking alternative, powered by Ethereum. You can securely store Ethereum-based tokens in your own decentralised Monolith wallet. You can then exchange them to fiat and load them onto your Monolith Visa debit card.
This means that in order to deliver a great User Experience (UX), we have to enable users to seamlessly interact with both the decentralised world of Ethereum as well as the traditional financial world.
Monolith was conceived back in 2016 by our founders, Mel and David. After acquiring the appropriate licenses and partnerships, we set out to build what we thought the bank of the future should look like.
Launching on iOS first 📲
The bank of the future is accessible anywhere, so we started working on our mobile app. We built on iOS first due to its homogeneous operating system distribution and the availability of the secure enclave on its devices. It’s a public secret at this point that building on iOS first is generally easier (with some exceptions).
We built our application in a cross-platform way using react-native. Our User Interfaces (UIs) were designed in a platform-agnostic way, whilst still respecting native elements and patterns such as UI components and controls. At such an early stage it made sense to prioritise go-to-market speed and iterative design over a platform-specific design.
We launched our invite-only iOS closed alpha in March, and publicly on the App Store in June. Our time exclusively on iOS helped us really nail down what a good onboarding experience feels like for our product and hammer out the kinks of interacting with something as nuanced as the Ethereum network.
Creation of the Android squad 💪
3rd of June, 2019. We had finally released on iOS, and whilst we knew that we could probably get the Android app out relatively quickly, we needed to make sure to do it in a way that protected our users.
Given the diverse Android landscape, we got a team together and defined our goal.
Demystifying the Android landscape 🌪
A key assumption in our security model is that customer keys are securely stored on their devices. On iOS, we could rely on the Secure Enclave. However, Android has a number of different Keystore implementations: some are backed by a Secure Element (SE), some by a Trusted Execution Environment (TEE), and most recently some by StrongBox. Furthermore, not all manufacturers implement Android APIs consistently. We very quickly had a number of questions to answer. For example:
What type of ECC signature schemes does Android natively support, if any?
To generate the mnemonic, we use BIP39, which requires 256 bits (32 bytes) of entropy. On iOS, we use the PrimeRandom secure enclave API. What are the alternatives for this on Android?
Can we perform feature detection for devices that do not have a hardware Keymaster implementation (backing the Keystore) and restrict based on this? How? What are the different features to consider? Which ones would we restrict?
…and so on.
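For illustration, the entropy-and-checksum step that BIP39 prescribes can be sketched as follows (in Python rather than our react-native/native stack; the final wordlist lookup is omitted, and the helper name is ours):

```python
import hashlib
import secrets

def bip39_word_indices(entropy: bytes) -> list[int]:
    """Map entropy to BIP39 word indices: append a checksum of
    len(entropy)*8/32 bits taken from SHA-256, then split the whole
    bit stream into 11-bit groups (one group per mnemonic word)."""
    ent_bits = len(entropy) * 8          # BIP39 allows 128..256 bits
    cs_bits = ent_bits // 32
    checksum = hashlib.sha256(entropy).digest()
    stream = int.from_bytes(entropy, "big") << cs_bits
    stream |= checksum[0] >> (8 - cs_bits) if cs_bits < 8 else checksum[0]
    total_bits = ent_bits + cs_bits
    return [(stream >> shift) & 0x7FF
            for shift in range(total_bits - 11, -1, -11)]

entropy = secrets.token_bytes(32)        # 256 bits -> 24-word mnemonic
indices = bip39_word_indices(entropy)
print(len(indices))                      # → 24
```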
So we got busy researching and prototyping:
Given our findings, the minimum OS level that we were comfortable with supporting was Android Nougat (API 24) as it introduced unified support for the Keystore interface in a manner that ensures that users don’t accidentally remove their private key whilst messing around with their biometrics.
But what devices did our users have? ⁉️
To get this answered we had a look through the Google Analytics data for our website. We then manually researched the minimum OS level available for Android for each device.
From this, we concluded that even though the global distribution dashboards estimated that only 58% of global users had Android Nougat or higher, for our mostly European-based user base the number was closer to 85%. We also understood that this would improve over time, and we didn’t want to compromise on security.
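An estimate like this can be reproduced with a few lines over per-device analytics rows; the devices, shares and API levels below are entirely made up for illustration:

```python
# Estimate the share of visitors able to run Android Nougat (API 24) or
# newer, from per-device analytics rows. All figures are hypothetical.
visits = [
    # (device model, share of Android sessions, max API level the device supports)
    ("Pixel 2",   0.30, 30),
    ("Galaxy S8", 0.35, 28),
    ("Galaxy S5", 0.20, 23),   # stuck below Nougat
    ("Moto G4",   0.15, 24),
]

eligible = sum(share for _, share, api in visits if api >= 24)
total = sum(share for _, share, _ in visits)
print(f"{eligible / total:.0%}")    # → 80%
```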
Building, Testing, Fixing 🤖
The road ahead was now clearer. First, we had to make the existing functionality work on Android. While most of the react-native components worked straight out of the box, there were native bridges for a number of APIs including key generation and derivation, transaction signing and customer onboarding which needed to be built.
As soon as July came around, we had a basic version and asked our team members to test it furiously. 😀
Mel, our founder, was demoing a (partially broken!) internal Android app on our summer livestream on July the 10th.
This phase of development actually took a bit longer than expected. It turned out that our app looked inconsistent on a couple of devices. Once we were confident with the security aspect of things, we released the app as a beta app to 100 community members in August.
We relied on Instabug and Discord to have meaningful conversations with our customers. Over the summer, product managers at Monolith personally responded to most customer messages, bug reports and suggestions! 😊
Two months and 5 Android app iterations later, we were ready to launch!
Sweet Victory ❤
As a team, we’ve gone through a lot to get here, but it’s clear that this is just the beginning. We’ll keep iterating on our product as we strive to understand our customers, and how they use our Wallet and Card in the real world. 🙏
Our goal is to one day be able to confidently ask them to close their bank accounts and live their financial lives on this new paradigm of Decentralised Finance. 🚀
We want to thank everyone who has supported us on this journey. 🎉
Summary |
Heavier outboard engines usually are permanently mounted to the transom of the boat. In many cases it is not desirable to mount the ski line bracket to the outboard engine. Boats with inboard engines also benefit from a ski line bracket. |
The economist Marc Blaug defined economic models as “mental exercises without the slightest possibility of ever being practically relevant.” These models and their fanciful assumptions look convincing on the back of an envelope, but they provide little real-world policy guidance.
Enter University of Calgary economist Trevor Tombe’s timely and counterintuitive back-of-the-envelope argument against rejecting new oil pipelines as a way of reducing greenhouse gas (GHG) emissions. Tombe concedes that a moratorium on new pipelines would lower GHG emissions domestically (upstream) and globally (downstream). But he thinks there’s a more efficient approach. Namely, a carbon price of approximately $50 per tonne.
Tombe’s admittedly rough calculations estimate the implied costs for Alberta if new pipelines are blocked. Rejecting new pipelines, he argues, poses a massive economic opportunity cost for Alberta oil producers, “far above the likely damages from a tonne of GHG emissions.”
The likely damage of a tonne of carbon emitted into the atmosphere is called the “social cost of carbon,” for which Tombe relies on the federal government’s updated estimate of $40. Tombe argues that pricing carbon at about $50 per tonne—which the government’s plan does by 2022—will reduce emissions more cheaply than prohibiting pipelines.
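Tombe's comparison reduces to simple arithmetic: whichever of the implied abatement cost per tonne and the social cost of carbon is lower determines the cheaper instrument. A sketch with placeholder figures (these are not Tombe's numbers):

```python
def cheaper_instrument(implied_cost_per_tonne: float, social_cost: float) -> str:
    """Pick the lower-cost abatement route, under the (contested) assumption
    that the social cost of carbon is correctly estimated."""
    if implied_cost_per_tonne > social_cost:
        return "carbon price"
    return "block pipelines"

# With a low social-cost estimate, pricing carbon looks cheaper; with the
# higher ~$200/t estimates cited below, the comparison can flip.
print(cheaper_instrument(150.0, 40.0))    # → carbon price
print(cheaper_instrument(150.0, 200.0))   # → block pipelines
```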
Tombe’s argument is just the kind of economic thinking best left to classroom chalkboards. As a basis for Canada’s climate policy, it falls short in three ways.
First, Tombe assumes without justification that Alberta oil producers’ implied costs—purely notional costs not actually borne by anyone—are relevant to a public policy whose purpose is to reduce emissions and the very real costs we’ll all actually have to bear if we fail to stabilize our climate.
Second, the government’s estimated social cost of carbon might vastly underestimate the true cost. While most economists agree that pricing carbon is the ideal way to reduce emissions, economists also appear to agree that economic models seriously underestimate the future costs of climate change. Models of the social cost of carbon depend on modellers’ assumptions, including how much to discount future damages. Estimates vary, but many economists think the true cost is closer to $200 than Tombe’s $50.
Think of it this way: if Canada’s proposed price of $50 by 2022 were really high enough to reduce emissions and stimulate “clean growth” by sending a clear signal to the market to shift production from fossil fuels to renewable energy, we wouldn’t even be having this debate! In a world where carbon is properly priced and not substantially subsidized, building new pipelines would be economically unthinkable. Which is kind of the point.
But because $50 is almost assuredly too low, we can’t rely on it alone, no matter how economically ideal it may seem. Instead, we must pursue a mix of policies, including one Tombe agrees will work – blocking pipelines.
The Minister of the Environment and Climate Change gets this—mostly—when she explained that “some people say just have a price on carbon. If you were to do that, the price would be so high it wouldn’t make any sense. So that’s why you have to have a variety of different measures.” Minister McKenna’s absolutely right, of course, if what she means is that a full carbon price wouldn’t make sense politically.
Finally, Tombe assumes other oil producers like Saudi Arabia will fill the gap left by Alberta's forgone production. As his calculations show, however, even if Saudi Arabia fully makes up for Alberta's reduced production, global emissions will still decline. More importantly, Tombe's assumption reveals a corrosive cynicism about the Paris Agreement, which both Canada and Saudi Arabia have ratified. As a developed country, Canada is legally obligated to reduce its absolute economy-wide emissions. Building new pipelines squarely contradicts this obligation, and it tells developing countries that Canada is prepared neither to lead nor to innovate. If each oil-producing signatory to the Paris Agreement looked at the world the way Tombe seems to, each would conclude that its forgone opportunity costs were too high to phase out oil production, leading to a tragedy of the global commons through a collectively-fulfilling prophecy.
But as the renowned climate scientist James Hansen and his colleagues recently concluded, “we have a global emergency. Fossil fuel CO2 emissions should be reduced as rapidly as practical.”
Canada needs to heed this policy guidance. Otherwise, we’ll never have Paris, only pipelines that will soon outlive their purpose.
Jason MacLean is an assistant professor at the Bora Laskin Faculty of Law, Lakehead University |
A comparison of the prenatal health behaviors of women from four cultural groups in Turkey: an ethnonursing study.
This research was conducted to uncover women's health behaviors during prenatal periods using a transcultural approach. The qualitative ethnonursing method was used, and the research was conducted at the family health center in Bornova District in Izmir. The data were collected between November 2007 and August 2008 using the purposive sampling method. Eighteen pregnant women were included in the study and in-depth face-to-face semi-structured interviews were recorded on an audio recording device. A thematic analysis revealed four main themes: family, social learning-tradition transfer, perceptions, and behavioral changes. |
---
author:
-
-
-
bibliography:
- 'bibliography.bib'
title: |
ArduSoar: an Open-Source Thermalling Controller\
for Resource-Constrained Autopilots
---
|
Release of serotonin from human platelets in vitro by radiographic contrast media.
Both ionic and nonionic, monomeric and dimeric contrast media were found to release serotonin from intact human platelets in vitro. The monomeric contrast media were compared at a concentration of 25 mg I/ml: iothalamate was the strongest releaser, and the statistically equal metrizamide, iopamidol, and P-297 were the weakest. Monomeric and dimeric contrast media were compared at concentrations of 50 and 100 mg I/ml. They ranked, in descending order of serotonin-releasing potency: iodipamide, iothalamate, P-127, iopamidol, and a statistically indistinguishable group of the monoacid dimer P-286, the nonionic dimer ZK 74 435, and metrizamide. The capability of contrast media to release serotonin seems to be a composite result of their specific physical and molecular structural properties.
To answer the question of how young people in upper secondary school perceive the problems associated with the new technology of brain-computer interfaces, we surveyed the work of participants in Stage 2 of the IV Media Olympiad, in which participants described their attitude toward brain-computer interface technology. The survey results are an attempt to determine young people's attitude toward technological progress and toward the questions associated with the place of humans in a world of evolving technology.
<loading data-bind="css: classes('loading-panel'), visible: $component.loading()" params="status: 'Loading Cohort Characterization list...'"></loading>
<div data-bind="visible: !$component.loading()">
<characterizations-tabbed-grid params="{
activeTab: $component.gridTab,
isViewPermitted: $component.isGetCCListPermitted,
gridColumns: $component.gridColumns,
gridOptions: $component.gridOptions,
data: $component.data,
createNew: $component.createCharacterization,
createNewEnabled: $component.isCreatePermitted
    }"></characterizations-tabbed-grid>
</div> |
Despite near everyone claiming how omg scary they were, I saw both and pretty much was bored to death through PA while I wanted to crack up laughing through Insidious. The people actually getting scared in the theatre during Insidious just made me laugh even more.
Well, I would like to formally apologize for everyone in the world that was scared by either of those films. We are all sorry that you are so steely nerved that nothing can shake you and the rest of us do have moments where films can scare us. We bow down to your iron fisted resolve in not being scared...protect us?
Sure those movies aren't over the top, shit your pants scary through and through. But, there are a few moments in both of them that I did jump and wonder WTF!!!
One that would come to mind for me is 'The Descent'. It was okay, but I don't see how it's in any way, shape or form 'revolutionary' for the horror genre. It isn't the first time that a horror film has had an all specific gender cast, and it won't be the last.
I think The Texas Chainsaw Massacre is overrated. The original has hardly any gore and it's actually tame compared to other films around in the 70s, yet they banned it anyway. Its sequels are way more gory. |