(REUTERS) - England need to secure at least a draw in their Euro 2016 opener against Russia on Saturday to avoid putting themselves on the back foot in the fight for places in the knockout rounds, midfielder James Milner has said. England face Russia in their first Group B game in Marseille and Milner urged his team mates to focus on giving themselves a platform to build on by taking at least a point should they fail to win the game. "What I've learned more than anything is: don't lose your first game, whatever happens," the Liverpool midfielder told British media. "It's great to win it and get off to a flying start but if you can't get that win, make sure you are solid and don't concede late on looking for the winner. "Make sure you get a solid start, at least a draw, because as soon as you lose that first game in a three-game group you put yourselves under pressure straight away. That's the biggest thing." England drew criticism for the manner in which they played during their drab 1-0 win over Portugal in their final warm-up game on Thursday but Milner said the team would raise their game at the tournament. "We've had three wins out of three against different, but difficult, opposition without playing our best football," he added of the 2-1 wins over Turkey and Australia prior to the Portugal match. "We've got gears to go through and we want to peak at the tournament," he added. "We can play a variety of formations, there are players with so many different strengths - pace, trickery, solid players and more defensive-minded players - so I think as a squad we've got so many options."
package org.wikipedia;

public final class Constants {

    // Keep loader IDs unique to each loader. If the loader specified by the ID already exists, the
    // last created loader is reused.
    public static final int RECENT_SEARCHES_FRAGMENT_LOADER_ID = 101;

    public static final String PLAIN_TEXT_MIME_TYPE = "text/plain";

    public static final int ACTIVITY_REQUEST_SETTINGS = 41;
    public static final int ACTIVITY_REQUEST_CREATE_ACCOUNT = 42;
    public static final int ACTIVITY_REQUEST_RESET_PASSWORD = 43;
    public static final int ACTIVITY_REQUEST_WRITE_EXTERNAL_STORAGE_PERMISSION = 44;
    public static final int ACTIVITY_REQUEST_VOICE_SEARCH = 45;
    public static final int ACTIVITY_REQUEST_LANGLINKS = 50;
    public static final int ACTIVITY_REQUEST_EDIT_SECTION = 51;
    public static final int ACTIVITY_REQUEST_GALLERY = 52;
    public static final int ACTIVITY_REQUEST_LOGIN = 53;
    public static final int ACTIVITY_REQUEST_DESCRIPTION_EDIT_SUCCESS = 54;
    public static final int ACTIVITY_REQUEST_DESCRIPTION_EDIT = 55;
    public static final int ACTIVITY_REQUEST_DESCRIPTION_EDIT_TUTORIAL = 56;
    public static final int ACTIVITY_REQUEST_INITIAL_ONBOARDING = 57;
    public static final int ACTIVITY_REQUEST_FEED_CONFIGURE = 58;
    public static final int ACTIVITY_REQUEST_ADD_A_LANGUAGE = 59;
    public static final int ACTIVITY_REQUEST_ADD_A_LANGUAGE_FROM_SEARCH = 60;
    public static final int ACTIVITY_REQUEST_BROWSE_TABS = 61;
    public static final int ACTIVITY_REQUEST_OPEN_SEARCH_ACTIVITY = 62;
    public static final int ACTIVITY_REQUEST_SUGGESTED_EDITS_ONBOARDING = 63;
    public static final int ACTIVITY_REQUEST_IMAGE_CAPTION_EDIT = 64;
    public static final int ACTIVITY_REQUEST_IMAGE_TAGS_ONBOARDING = 65;
    public static final int ACTIVITY_REQUEST_IMAGE_TAGS_EDIT = 66;

    public static final String INTENT_RETURN_TO_MAIN = "returnToMain";
    public static final String INTENT_FEATURED_ARTICLE_FROM_WIDGET = "featuredArticleFromWidget";
    public static final String INTENT_APP_SHORTCUT_CONTINUE_READING = "appShortcutContinueReading";
    public static final String INTENT_APP_SHORTCUT_RANDOMIZER = "appShortcutRandomizer";
    public static final String INTENT_APP_SHORTCUT_SEARCH = "appShortcutSearch";
    public static final String INTENT_EXTRA_REVERT_QNUMBER = "revertQNumber";
    public static final String INTENT_EXTRA_DELETE_READING_LIST = "deleteReadingList";
    public static final String INTENT_EXTRA_VIEW_FROM_NOTIFICATION = "viewFromNotification";
    public static final String INTENT_EXTRA_NOTIFICATION_SYNC_PAUSE_RESUME = "syncPauseResume";
    public static final String INTENT_EXTRA_NOTIFICATION_SYNC_CANCEL = "syncCancel";
    public static final String INTENT_EXTRA_GO_TO_MAIN_TAB = "goToMainTab";
    public static final String INTENT_EXTRA_GO_TO_SE_TAB = "goToSETab";
    public static final String INTENT_EXTRA_INVOKE_SOURCE = "invokeSource";
    public static final String INTENT_EXTRA_ACTION = "intentAction";

    public static final int SUGGESTION_REQUEST_ITEMS = 5;
    public static final int API_QUERY_MAX_TITLES = 50;
    public static final int PREFERRED_CARD_THUMBNAIL_SIZE = 800;
    public static final int PREFERRED_GALLERY_IMAGE_SIZE = 1280;
    public static final int MAX_TABS = 100;
    public static final int MAX_READING_LIST_ARTICLE_LIMIT = 5000;
    public static final int MAX_READING_LISTS_LIMIT = 100;
    public static final int MIN_LANGUAGES_TO_UNLOCK_TRANSLATION = 2;

    public enum InvokeSource {
        CONTEXT_MENU("contextMenu"),
        LINK_PREVIEW_MENU("linkPreviewMenu"),
        PAGE_OVERFLOW_MENU("pageOverflowMenu"),
        NAV_MENU("navMenu"),
        MAIN_ACTIVITY("main"),
        PAGE_ACTIVITY("page"),
        NEWS_ACTIVITY("news"),
        READING_LIST_ACTIVITY("readingList"),
        MOST_READ_ACTIVITY("mostRead"),
        RANDOM_ACTIVITY("random"),
        ON_THIS_DAY_ACTIVITY("onThisDay"),
        GALLERY_ACTIVITY("gallery"),
        READ_MORE_BOOKMARK_BUTTON("readMoreBookmark"),
        BOOKMARK_BUTTON("bookmark"),
        SUGGESTED_EDITS("suggestedEdits"),
        ONBOARDING_DIALOG("onboarding"),
        FEED("feed"),
        NOTIFICATION("notification"),
        APP_SHORTCUTS("appShortcuts"),
        TOOLBAR("toolbar"),
        WIDGET("widget"),
        INTENT_SHARE("intentShare"),
        INTENT_PROCESS_TEXT("intentProcessText"),
        FEED_BAR("feedBar"),
        VOICE("voice"),
        ON_THIS_DAY_CARD_BODY("onThisDayCard"),
        ON_THIS_DAY_CARD_FOOTER("onThisDayCardFooter"),
        LEAD_IMAGE("leadImage"),
        TABS_ACTIVITY("tabsActivity"),
        FILE_PAGE_ACTIVITY("filePage"),
        SNACKBAR_ACTION("snackbar"),
        PAGE_ACTION_TAB("pageActionTab"),
        TALK_ACTIVITY("talkActivity");

        private String name;

        InvokeSource(String name) {
            this.name = name;
        }

        public String getName() {
            return name;
        }
    }

    public enum ImageEditType {
        ADD_CAPTION("addCaption"),
        ADD_CAPTION_TRANSLATION("addCaptionTranslation"),
        ADD_TAGS("addTags");

        private String name;

        ImageEditType(String name) {
            this.name = name;
        }

        public String getName() {
            return name;
        }
    }

    private Constants() { }
}
The complementary interaction of epidemiological and experimental animal studies: a key foundation of the preventive effort. This overview will examine the contributions made by animal models of atherosclerosis to our understanding of its pathogenesis and will describe the principles for prevention of this disease process. In addition, some of the recent biochemical and cell biological discoveries, many utilizing animal cells and animal lipoproteins, will be discussed. Further, the increasing knowledge brought to worldwide efforts for the prevention and regression of atherosclerosis by the aforementioned research approaches will be reviewed. The results of some of the most promising animal studies of advanced atherosclerotic plaque regression will be briefly summarized, emphasizing the quantitation of changes. The prospects for verifying some of these findings in quantitative studies of human atherosclerosis will be presented along with some of the obstacles to be overcome.
The release of the annual Social Security Trustees Report brought out the same stale arguments from both the optimists and the pessimists who have been around for decades. The pessimists insist that Social Security is going broke, and the optimists continue to argue that Social Security has enough money to pay full benefits for 20 more years. How could the optimists and pessimists reach such radically different conclusions from the same data? They couldn't, if they were being honest with the public. As an economist who has been researching and writing about Social Security financing for the past 14 years, I am appalled at the distorted misinformation the American people are...
In this paper I have chosen to discuss the history of a political idea that, given Thailand’s dominant royalist political culture today, counts as the most seditious—a culturally foreign, almost unthinkable idea for Thais to entertain: republicanism...
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-2.0.xsd">

    <preConditions>
        <dbms type="sybase"/>
        <runningAs username="${loginUser}"/>
    </preConditions>

    <changeSet id="1" author="nvoxland">
        <comment>
            You can add comments to changeSets. They can even be multiple lines if you would like.
            They aren't used to compute the changeSet MD5Sum, so you can update them whenever you
            want without causing problems.
        </comment>
        <createTable tableName="person">
            <column name="id" type="int" autoIncrement="true">
                <constraints primaryKey="true" nullable="false"/>
            </column>
            <column name="firstname" type="varchar(50)"/>
            <column name="lastname" type="varchar(50)">
                <constraints nullable="false"/>
            </column>
        </createTable>
    </changeSet>

    <changeSet id="2" author="nvoxland">
        <comment>Add a username column so we can use "person" for authentication</comment>
        <addColumn tableName="person">
            <column name="username" type="varchar(8)"/>
        </addColumn>
    </changeSet>

    <!--<changeSet id="3" author="nvoxland">-->
    <!--<comment>Fix misspelled "username" column</comment>-->
    <!--<renameColumn tableName="person" oldColumnName="usernae" newColumnName="username"/>-->
    <!--</changeSet>-->

    <changeSet id="5" author="nvoxland" context="test">
        <insert tableName="person">
            <column name="firstname" value="John"/>
            <column name="lastname" value="Doe"/>
            <column name="username" value="jdoe"/>
        </insert>
        <insert tableName="person">
            <column name="firstname" value="Jane"/>
            <column name="lastname" value="Doe"/>
            <column name="username" value="janedoe"/>
        </insert>
        <insert tableName="person">
            <column name="firstname" value="Bob"/>
            <column name="lastname" value="Johnson"/>
            <column name="username" value="bjohnson"/>
        </insert>
    </changeSet>

    <changeSet id="6" author="nvoxland">
        <comment>Don't keep username in the person table</comment>
        <dropColumn tableName="person" columnName="username"/>
    </changeSet>

    <changeSet id="7" author="nvoxland">
        <createTable tableName="employee">
            <column name="id" type="int" autoIncrement="true">
                <constraints primaryKey="true" nullable="false"/>
            </column>
            <column name="name" type="varchar(50)">
                <constraints nullable="false"/>
            </column>
        </createTable>
    </changeSet>

    <changeSet id="7" author="bjohnson" context="test">
        <insert tableName="employee">
            <column name="name" value="ACME Corp"/>
        </insert>
        <insert tableName="employee">
            <column name="name" value="Widgets Inc."/>
        </insert>
    </changeSet>

    <changeSet id="7a" author="nvoxland">
        <addColumn tableName="employee">
            <column name="company_id" type="int">
                <constraints nullable="true" foreignKeyName="fk_employee_company" references="employee(id)"/>
            </column>
        </addColumn>
    </changeSet>

    <changeSet id="8" author="bjohnson">
        <dropNotNullConstraint tableName="employee" columnName="name" columnDataType="varchar(50)"/>
    </changeSet>

    <changeSet id="8.1" author="bjohnson">
        <comment>I guess name needs to be not-null</comment>
        <addNotNullConstraint tableName="employee" columnName="name" defaultNullValue="UNKNOWN" columnDataType="varchar(50)"/>
    </changeSet>

    <changeSet id="9" author="nvoxland">
        <renameTable oldTableName="employee" newTableName="company"/>
    </changeSet>

    <changeSet id="10" author="nvoxland">
        <createTable tableName="testtable">
            <column name="id" type="int" autoIncrement="true">
                <constraints primaryKey="true" nullable="false"/>
            </column>
            <column name="value" type="varchar(50)">
                <constraints nullable="false"/>
            </column>
            <column name="person_id" type="int">
                <constraints nullable="false" foreignKeyName="fk_test_person" references="person(id)"/>
            </column>
        </createTable>
    </changeSet>

    <changeSet id="11" author="nvoxland">
        <dropTable tableName="testtable"/>
    </changeSet>

    <changeSet id="12" author="nvoxland">
        <createIndex indexName="idx_company_name" tableName="company">
            <column name="name"/>
        </createIndex>
        <createIndex indexName="idx_person_lastname" tableName="person">
            <column name="lastname"/>
        </createIndex>
    </changeSet>

    <changeSet id="13" author="nvoxland">
        <dropIndex indexName="idx_person_lastname" tableName="person"/>
    </changeSet>

    <changeSet id="14" author="nvoxland">
        <createTable tableName="liquibaseRunInfo">
            <column name="timesRan" type="int"/>
        </createTable>
        <insert tableName="liquibaseRunInfo">
            <column name="timesRan" valueNumeric="1"/>
        </insert>
    </changeSet>

    <changeSet id="15" author="nvoxland" runAlways="true">
        <sql>update liquibaseRunInfo set timesRan=timesRan+1</sql>
    </changeSet>

    <changeSet id="16" author="nvoxland">
        <createView viewName="personView">
            select * from liquibase.person
        </createView>
    </changeSet>

    <changeSet id="18" author="nvoxland">
        <dropView viewName="personView"/>
    </changeSet>

    <changeSet id="19" author="nvoxland">
        <mergeColumns tableName="person" column1Name="firstname" joinString=" " column2Name="lastname"
                      finalColumnName="fullname" finalColumnType="varchar(100)"/>
    </changeSet>

    <changeSet id="20" author="nvoxland">
        <createView viewName="personView">
            select id, fullname from person
        </createView>
    </changeSet>

    <changeSet id="21" author="nvoxland">
        <renameView oldViewName="personView" newViewName="v_person"/>
    </changeSet>

    <changeSet id="22" author="nvoxland">
        <addColumn tableName="person">
            <column name="employer_id" type="int"/>
        </addColumn>
    </changeSet>

    <changeSet id="23" author="nvoxland">
        <addForeignKeyConstraint baseTableName="person" baseColumnNames="employer_id"
                                 constraintName="fk_person_employer" referencedTableName="company"
                                 referencedColumnNames="id" deleteCascade="true"/>
    </changeSet>

    <changeSet id="24" author="nvoxland">
        <dropForeignKeyConstraint baseTableName="person" constraintName="fk_person_employer"/>
    </changeSet>

    <changeSet id="25" author="nvoxland">
        <createTable tableName="address">
            <column name="id" type="int" autoIncrement="true"/>
            <column name="line1" type="varchar(255)"/>
            <column name="line2" type="varchar(255)">
                <constraints nullable="true"/>
            </column>
            <column name="city" type="varchar(255)"/>
            <column name="state" type="char(2)">
                <constraints nullable="true"/>
            </column>
            <column name="postalcode" type="varchar(15)"/>
        </createTable>
    </changeSet>

    <!-- This test does not work on Sybase because it results in no action. Since its not possible
         to create an int column which is nullable this statement wont work if the constraint is
         used above in the createTable (id=25). As such I removed this test. Paul Keeble
    <changeSet id="25.1" author="nvoxland">
        <addNotNullConstraint tableName="address" columnName="id" columnDataType="int"/>
    </changeSet>
    -->

    <changeSet id="25.2" author="nvoxland">
        <addPrimaryKey tableName="address" columnNames="id" constraintName="pk_address"/>
    </changeSet>

    <changeSet id="26" author="nvoxland">
        <insert tableName="address">
            <column name="line1" value="123 4th St"/>
            <column name="line2" value="Suite 432"/>
            <column name="city" value="New York"/>
            <column name="state" value="NY"/>
            <column name="postalcode" value="01235"/>
        </insert>
        <insert tableName="address">
            <column name="line1" value="6123 64th St"/>
            <column name="city" value="New York"/>
            <column name="state" value="NY"/>
            <column name="postalcode" value="01235"/>
        </insert>
        <insert tableName="address">
            <column name="line1" value="One Liquibase Way"/>
            <column name="city" value="Fargo"/>
            <column name="state" value="ND"/>
            <column name="postalcode" value="58103"/>
        </insert>
        <insert tableName="address">
            <column name="line1" value="123 Main Ave"/>
            <column name="city" value="City With No State"/>
            <column name="postalcode" value="00000"/>
        </insert>
    </changeSet>

    <changeSet id="27" author="nvoxland">
        <addLookupTable existingTableName="address" existingColumnName="state"
                        newTableName="state" newColumnName="id" newColumnDataType="char(2)"/>
    </changeSet>

    <changeSet id="28" author="nvoxland">
        <addDefaultValue tableName="address" columnName="line2" defaultValue="N/A"/>
    </changeSet>

    <changeSet id="30" author="nvoxland">
        <dropPrimaryKey tableName="address" constraintName="pk_address"/>
    </changeSet>

    <changeSet id="31" author="nvoxland">
        <addPrimaryKey tableName="address" columnNames="id" constraintName="pk_address"/>
    </changeSet>

    <changeSet id="32" author="nvoxland">
        <addUniqueConstraint tableName="address" columnNames="line1, line2" constraintName="uq_address_line1line2"/>
    </changeSet>

    <changeSet id="33" author="nvoxland">
        <dropUniqueConstraint tableName="address" constraintName="uq_address_line1line2"/>
    </changeSet>

    <changeSet id="50" author="nvoxland">
        <modifyDataType tableName="address" columnName="postalcode" newDataType="varchar(20)"/>
    </changeSet>

    <include file="changelogs/sybase/complete/included.changelog.xml"/>
    <include file="changelogs/sybase/complete/renamed.changelog.xml"/>
    <include file="changelogs/common/common.tests.changelog.xml"/>
    <include file="changelogs/common/autoincrement.tests.changelog.xml"/>

    <changeSet id="56" author="nvoxland">
        <customChange class="liquibase.change.custom.ExampleCustomSqlChange">
            <param name="tableName" value="person"/>
            <param name="columnName" value="employer_id"/>
            <param name="newValue" value="3"/>
        </customChange>
    </changeSet>

    <changeSet id="57" author="nvoxland">
        <customChange class="liquibase.change.custom.ExampleCustomSqlChange"
                      tableName="person" columnName="employer_id" newValue="4"/>
    </changeSet>

    <changeSet id="58" author="nvoxland">
        <customChange class="liquibase.change.custom.ExampleCustomTaskChange" helloTo="world"/>
    </changeSet>

    <changeSet id="60" author="nvoxland">
        <executeCommand executable="getmac" os="Windows XP">
            <arg value="/FO"/>
            <arg value="CSV"/>
        </executeCommand>
    </changeSet>

</databaseChangeLog>
---
abstract: 'To meet growing spectrum demands, future cellular systems are expected to share the spectrum of other services such as radar. In this paper, we consider a network multiple-input multiple-output (MIMO) model with partial cooperation, where radar stations cooperate with cellular base stations (BSs) to deliver messages to intended mobile users; the radar stations thus act as BSs in the cellular system. However, due to the high power transmitted by radar stations for detection of far targets, cellular receivers could burn out when receiving these high radar powers. Therefore, we propose a new projection method called small singular values space projection (SSVSP) to mitigate this harmful high power and enable radar stations to collaborate with cellular base stations. In addition, we formulate the problem as a MIMO interference channel with general constraints (MIMO-IFC-GC). Finally, we provide a solution to the weighted sum mean square error minimization (WSMMSE) problem while enforcing power constraints on both radar and cellular stations.'
author:
- |
    Ahmed Abdelhadi and T. Charles Clancy\
    Hume Center, Virginia Tech, Arlington, VA, 22203, USA\
    {aabdelhadi, tcc}@vt.edu
bibliography:
- 'pub.bib'
title: Network MIMO with Partial Cooperation between Radar and Cellular Systems
---

MIMO Radar, Small Singular Values Space Projection, Radar Cellular Coexistence, Network MIMO

Introduction
============

Federal Communications Commission (FCC) and National Telecommunications and Information Administration (NTIA) studies show very low utilization of huge chunks of spectrum held by federal agencies, especially in urban areas. Meanwhile, the spectrum held by commercial operators, e.g. cellular operators, is very heavily utilized in these same urban areas. The President's Council of Advisors on Science and Technology (PCAST) has therefore recommended sharing federal spectrum with commercial operators in order to utilize it more efficiently [@PCAST12]. Such sharing would bring enormous economic and social advances to the nation. At the same time, it should not endanger the main mission of federal incumbents; sharing radar spectrum, for example, should not affect radar target-tracking capabilities. Therefore, new approaches should be developed with these considerations in mind.

A recent NTIA report [@NTIA12] concluded that sharing radar spectrum with WiMAX requires huge exclusion zones, up to tens of kilometers, to protect WiMAX receivers from the harmful interference signal transmitted by radar. This is because WiMAX receivers are designed to handle power levels on the order of watts, while the power transmitted by radar is on the order of kilowatts to megawatts. This includes shipborne radars deployed on military ships on the east and west coasts of the United States, which in turn deprives these areas, where the majority of the US population lives, of the benefits of sharing radar spectrum.

On the other hand, within the cellular system, interference is a major obstacle to achieving the spectral efficiency expected from developed multiple-antenna techniques [@linear_precoding_journal]. It is shown in [@DMP_Inter1; @MIMO_Inter] that multiple-input multiple-output (MIMO) capacity gains are degraded by inter-cellular interference. In a radar/cellular coexistence scenario, radar systems have highly sensitive receivers for detecting signals reflected from far targets.
Therefore, they are highly susceptible to interference from commercial wireless systems operating in radar bands. In the past, radar has been guaranteed exclusive rights to radio spectrum allocations to keep its operation from being affected by interference from commercial wireless systems [@NTIA12; @KAC14WTS]. Network-level radar/cellular interference management is therefore of fundamental importance for sustaining radar/cellular coexistence while limiting inter-cellular interference and harnessing the advantages of cellular MIMO technology.

To address the aforementioned challenges, we propose a novel coexistence scenario and model for radar and cellular systems. In this model, the radar signal is steered into the null space plus the small singular values space of the interference channel between the radar and the cellular system. The approach benefits both systems. On the radar side, it increases the dimension of the projection space, so radar performance metrics improve compared to projection onto smaller spaces, e.g. null-space projection [@KAC+14ICNC]; see [@KAC_QPSK] for more details. On the cellular side, the approach suppresses the high radar power in the direction of the cellular network so that it does not burn out the cellular receivers. In addition, the transmitted radar signal can be used to carry communication messages, enhancing the overall system performance and the quality of service (QoS) of the cellular system. In our model, we propose network MIMO with partial cooperation for merging radar stations into the cellular network.

Related Work
------------

To benefit from radar spectrum, researchers have proposed using the spatial domain to mitigate MIMO radar interference to communication systems [@KAC+14ICNC]. One study proposed projecting the radar signal onto the null space of the interference channel between the radar and communication systems [@KAC14_MILCOM]. In another study, researchers designed a radar waveform that does not cause harmful interference to the communication system while still achieving the radar mission objectives [@KAC14DySPANWaveform]. In the past, sharing of government bands has been allowed for commercial wireless systems under the condition of low-power transmission in order to protect incumbents from harmful interference [@KAC_IEEE_Sensors]. Famous examples are WiFi and Bluetooth in the 2450-2490 MHz band, wireless local area networks (WLANs) at 5.25-5.35 and 5.47-5.725 GHz [@FCC_5GHz_Radar06], and the recently proposed 3550-3650 MHz radar band for small-cell usage, i.e. wireless BSs operating at low power [@FCC12_SmallCells].

In [@MultiCellMIMO_inter], network MIMO, also known as multi-cell cooperation, has been shown to provide network-level interference management that significantly improves cellular system performance. In network MIMO, multiple BSs coordinate their transmissions to each user. Network MIMO reduces to the MIMO broadcast channel (BC) in the case of full cooperation between all BSs, as shown in [@MIMO_capacity]. In the absence of collaboration between BSs, it reduces to a MIMO interference channel (MIMO-IFC), as shown in [@MIMO_capacity; @IA_DoF; @MIMO_X_IA]. The general case is forming clusters of BSs that collaborate to transmit to a certain user [@MIMO_X_IA; @MIMO_downlink; @MIMO_cluster; @Ahmed_ITW10; @Ahmed_INFOCOM10].
Our Contributions
-----------------

Our contributions in this paper are as follows:

-   We propose a small singular values space projection method that facilitates coexistence between radar and cellular systems.

-   We incorporate radar stations into the cellular system and show the equivalence of the new model to the MIMO interference channel with general constraints (MIMO-IFC-GC) model for network MIMO with partial cooperation of [@linear_precoding_journal].

-   We provide a suboptimal solution to the weighted sum-MSE minimization (WSMMSE) problem in Section \[sec:opt\] for our proposed model.

*Notation*: Matrices and vectors are denoted by bold upper and lower case letters, respectively. The transpose and Hermitian operators are denoted by $(\cdot)^T$ and $(\cdot)^H$, respectively.

The paper is organized as follows. Section \[sec:sys\_model\] discusses the system model for a MIMO downlink system with radar/cellular coexistence and the precoding of user messages at the radar and cellular stations. Section \[sec:projection\_matrix\] describes how to construct the small singular values space projection matrix. We show the equivalence of our proposed model with the MIMO-IFC-GC in Section \[sec:equ\_model\]. Section \[sec:opt\] formulates the WSMMSE minimization problem under investigation, and Section \[sec:min\] presents its solution. Section \[sec:conc\] concludes the paper.

System Model {#sec:sys_model}
============

We consider a MIMO downlink system with $L$ radar stations (including shipborne radars) forming a set $\mathcal{L}$, $M$ cellular base stations (BSs) forming a set $\mathcal{M}$, and $K$ mobile users forming a set $\mathcal{K}$; see Figure \[fig:system\_model\]. Each BS has $n_t$ transmit antennas, each radar station has $n_{\text{rad}}$ transmit antennas, and each mobile user has $n_r$ antennas. The $m$th BS has messages for a set of users $\mathcal{K}_m \subseteq \mathcal{K}$, where $|\mathcal{K}_m| = K_m$. Similarly, the $l$th radar has messages for a set of users $\mathcal{K}_l \subseteq \mathcal{K}$, where $|\mathcal{K}_l| = K_l$. Therefore, the $k$th user receives its intended message from a subset $\mathcal{M}_k \subseteq \mathcal{M}$ of $M_k$ BSs and a subset $\mathcal{L}_k \subseteq \mathcal{L}$ of $L_k$ radar stations; in total, the $k$th user receives its message from $L_k + M_k$ stations $\mathcal{M}_k \cup \mathcal{L}_k \subseteq \mathcal{M} \cup \mathcal{L}$. This channel is generally referred to as the MIMO interference channel with partial message sharing; see [@linear_precoding_journal]. If $\mathcal{K}_m$ (or $\mathcal{K}_l$) contains one user for each transmitter $m$ (or $l$), the model reduces to a standard MIMO interference channel (MIMO-IFC). When all transmitters cooperate in transmitting to all users, i.e. $M_k = M$ and $L_k = L$, we have the MIMO broadcast channel (MIMO-BC); when only some transmitters cooperate, i.e. $M_k < M$ or $L_k < L$, we have a multicast interference channel [@Ahmed_ITW10]. In this paper, we consider the MIMO interference channel with partial message sharing (MIMO-IFC-PMS).

![System Model.[]{data-label="fig:system_model"}](system_model.eps){width="\linewidth"}

Precoding
---------

We define ${{\mathbf{u}}_k = [u_{k,1} \cdots u_{k,d_k}]^T} \in \mathbb{C}^{d_k}$ to represent the $d_k \leq \min({L_k} {n_{\text{rad}}} + {M_k} {n_t}, n_r)$ independent streams sent to user $k$. It is assumed that ${\mathbf{u}}_k \sim \mathcal{CN}(0,{\mathbf{I}})$.
The data streams ${\mathbf{u}}_k$ are known to all the cellular base stations in the set $\mathcal{M}_k$ and all the radar stations in the set $\mathcal{L}_k$. Assuming $l \in \mathcal{L}_k \subset \mathcal{L}$, the $l$th radar station precodes the vector ${\mathbf{u}}_k$ via a matrix ${\mathbf{F}}_{k,l} \in \mathbb{C}^{n_{\text{rad}} \times d_k}$, then projects it using the projection matrix ${\mathbf{P}}_{l} \in \mathbb{C}^{n_{\text{rad}} \times n_{\text{rad}}}$ described in Section \[sec:projection\_matrix\]. The signal $\tilde{{\mathbf{x}}}_{l} \in \mathbb{C}^{n_{\text{rad}}}$ sent by the $l$th radar station is therefore
$$\tilde{{\mathbf{x}}}_{l}={\mathbf{P}}_{l}\sum_{k\in \mathcal{K}_l} {\mathbf{F}}_{k,l}{\mathbf{u}}_k.$$
Assuming $P_l$ is the radar power level allowed toward the communication system, the power constraint is given by
$$\begin{aligned} \mathbb{E}\left[\left\|\tilde{{\mathbf{x}}}_{l}\right\|^2\right]&=& \text{tr} \left\{\mathbb{E}\left[\tilde{{\mathbf{x}}}_{l} {\tilde{{\mathbf{x}}}^{H}_{l}}\right]\right\}\\ \notag &=&\sum_{k\in \mathcal{K}_l} \text{tr}\left\{{\mathbf{P}}_{l} {\mathbf{F}}_{k,l} {\mathbf{F}}^{H}_{k,l} {\mathbf{P}}^{H}_{l} \right\} \leq P_l, \quad l=1,\cdots,L.\end{aligned}$$
Similarly, assuming $m \in \mathcal{M}_k$, the $m$th base station transmits
$$\tilde{{\mathbf{x}}}_m=\sum_{k\in \mathcal{K}_m} {\mathbf{F}}_{k,m} {\mathbf{u}}_k,$$
with
$$\begin{aligned} \mathbb{E}\left[\left\|\tilde{{\mathbf{x}}}_m\right\|^2\right]&=& \text{tr} \left\{\mathbb{E}\left[\tilde{{\mathbf{x}}}_m {\tilde{{\mathbf{x}}}^{H}_{m}}\right]\right\}\\ \notag &=&\sum_{k\in \mathcal{K}_m} \text{tr} \left\{{\mathbf{F}}_{k,m} {\mathbf{F}}^{H}_{k,m}\right\} \leq P_m, \quad m=1,\cdots,M.\end{aligned}$$
The $k$th user receives the signal
$$\begin{aligned} {{\mathbf{y}}}_k & = & \sum^{L}_{l=1} \tilde{{\mathbf{H}}}_{k,l} {\tilde{{\mathbf{x}}}_{l}} + \sum^{M}_{m=1} \tilde{{\mathbf{H}}}_{k,m} {\tilde{{\mathbf{x}}}_{m}} + {\tilde{{\mathbf{n}}}_{k}}\\ \notag & = &\sum_{l\in \mathcal{L}_k} \tilde{{\mathbf{H}}}_{k,l} {\mathbf{P}}_{l} {{{\mathbf{F}}}_{k,l}} {{{\mathbf{u}}}_{k}}+\sum_{o\neq k}\sum_{j\in \mathcal{L}_o} \tilde{{\mathbf{H}}}_{k,j} {\mathbf{P}}_{j}{{{\mathbf{F}}}_{o,j}} {{{\mathbf{u}}}_{o}} \\ \notag & & + \sum_{m\in \mathcal{M}_k} \tilde{{\mathbf{H}}}_{k,m} {{{\mathbf{F}}}_{k,m}} {{{\mathbf{u}}}_{k}} + \sum_{o\neq k}\sum_{j\in \mathcal{M}_o} \tilde{{\mathbf{H}}}_{k,j} {{{\mathbf{F}}}_{o,j}} {{{\mathbf{u}}}_{o}} + {\tilde{{\mathbf{n}}}_{k}}\end{aligned}$$
where $\tilde{{\mathbf{H}}}_{k,l} \in \mathbb{C}^{n_r \times n_{\text{rad}}}$ and $\tilde{{\mathbf{H}}}_{k,m} \in \mathbb{C}^{n_r \times n_t}$ are the channel matrices from the $l$th radar station and the $m$th BS to the $k$th user, respectively, and $\tilde{{\mathbf{n}}}_{k} \sim \mathcal{CN}(0,{\mathbf{I}})$ is additive complex Gaussian noise.

Small Singular Values Space Projection {#sec:projection_matrix}
======================================

In this section, we design our projection matrix such that steering the radar power in the direction of small singular values results in radar power that is on the order of the communication transmit power, i.e. $P_l = \sigma_{\text{th}}P_{\text{rad}} = \Theta(P_{m}) \; \forall m, l$, where $P_{\text{rad}}$ is the radar transmit power and $\sigma_{\text{th}}$ is a singular value threshold. As a result, the diminished radar power received at the communication system will not burn out or saturate the communication receivers.
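Before the formal construction that follows, a minimal NumPy sketch may make the idea concrete (this is our own illustration, not code from the paper; the shapes, the toy random channels, and the threshold value are assumptions): stack the channels of the users served by radar $l$, take an SVD, and keep only the right-singular directions whose singular values are at or below $\sigma_{\text{th}}$, which always includes the null space.

```python
import numpy as np

def ssvsp_projector(H_per_user, sigma_th):
    """Build the SSVSP matrix P_l = V diag(sigma') V^H for one radar station.

    H_per_user : list of (n_r x n_rad) channels from radar l to each user in K_l
                 (hypothetical toy shapes for illustration).
    sigma_th   : singular-value threshold sigma_th.
    """
    H_aug = np.vstack(H_per_user)              # augmented channel H_l
    _, s, Vh = np.linalg.svd(H_aug)            # H_l = U Sigma V^H
    # Pad with the implicit zero singular values (the null space is always kept).
    s_full = np.concatenate([s, np.zeros(Vh.shape[0] - len(s))])
    keep = (s_full <= sigma_th).astype(float)  # sigma' entries in {0, 1}
    return (Vh.conj().T * keep) @ Vh           # V diag(keep) V^H

# Toy usage: n_rad = 8 radar antennas, K_l = 3 users with n_r = 2 antennas each.
rng = np.random.default_rng(0)
H_users = [rng.standard_normal((2, 8)) + 1j * rng.standard_normal((2, 8))
           for _ in range(3)]
P = ssvsp_projector(H_users, sigma_th=1.0)
# P is Hermitian and idempotent, as required of a projection matrix.
assert np.allclose(P, P.conj().T) and np.allclose(P @ P, P)
```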
Assume the $l$th MIMO radar has channel state information $\tilde{{\mathbf{H}}}_{k,l}$ for each user $k \in \mathcal{K}_l$. Let $(o)_l$ denote the $o$th user in the set $\mathcal{K}_l$. The augmented channel matrix between the $l$th radar and the set of users $\mathcal{K}_l$ is then given by
$$\tilde{{\mathbf{H}}}_l = [\tilde{{\mathbf{H}}}_{(1)_l,l}^{T} \ \tilde{{\mathbf{H}}}_{(2)_l,l}^{T} \cdots \tilde{{\mathbf{H}}}_{(K_l)_l,l}^{T}]^{T}.$$
We proceed by first finding the SVD of $\tilde{{\mathbf{H}}}_l$, i.e.,
$$\tilde{{\mathbf{H}}}_l = {\mathbf{U}}_l {\boldsymbol{\Sigma}}_l {\mathbf{V}}_l^H.$$
Now, let us define
$${\widetilde{\boldsymbol{\Sigma}}}_l \triangleq \text{diag} ({\widetilde{\sigma}}_{l,1}, {\widetilde{\sigma}}_{l,2}, \ldots, {\widetilde{\sigma}}_{l,p})$$
where $p \triangleq \min (K_l n_r, n_{\text{rad}})$ and ${\widetilde{\sigma}}_{l,1} > {\widetilde{\sigma}}_{l,2} > \cdots > {\widetilde{\sigma}}_{l,q} > {\widetilde{\sigma}}_{l,q+1} = {\widetilde{\sigma}}_{l,q+2} = \cdots = {\widetilde{\sigma}}_{l,p} = 0$ are the singular values of $\tilde{{\mathbf{H}}}_l$. Next, we define
$${\widetilde{\boldsymbol{\Sigma}}}_l^\prime \triangleq \text{diag} ({\widetilde{\sigma}}_{l,1}^\prime,{\widetilde{\sigma}}_{l,2}^\prime, \ldots, {\widetilde{\sigma}}_{l,n_{\text{rad}}}^\prime)$$
where (with ${\widetilde{\sigma}}_{l,u} \triangleq 0$ for $u > p$)
$$\begin{aligned} {\widetilde{\sigma}}_{l,u}^\prime \triangleq \begin{cases} 0, \quad \text{for} \; {\widetilde{\sigma}}_{l,u} > \sigma_{\text{th}},\\ 1, \quad \text{for} \; {\widetilde{\sigma}}_{l,u} \leq \sigma_{\text{th}}. \end{cases}\end{aligned}$$
Using the above definitions, we can now define our small singular values space projection matrix ${\mathbf{P}}_l \triangleq {\mathbf{V}}_l {\widetilde{\boldsymbol{\Sigma}}}_l^\prime {\mathbf{V}}_l^H$. Note that ${\mathbf{P}}_l \in {\mathbb{C}}^{n_{\text{rad}} \times n_{\text{rad}}}$ is a projection matrix, as it satisfies ${\mathbf{P}}_l^H = ({\mathbf{V}}_l {\widetilde{\boldsymbol{\Sigma}}}_l^\prime {\mathbf{V}}_l^H)^H = {\mathbf{P}}_l$ and ${\mathbf{P}}_l^2 = {\mathbf{V}}_l {\widetilde{\boldsymbol{\Sigma}}}_l^\prime {\mathbf{V}}^H_l {\mathbf{V}}_l {\widetilde{\boldsymbol{\Sigma}}}_l^\prime {\mathbf{V}}^H_l = {\mathbf{V}}_l ({\widetilde{\boldsymbol{\Sigma}}}_l^\prime)^2 {\mathbf{V}}^H_l = {\mathbf{P}}_l$, since ${\widetilde{\boldsymbol{\Sigma}}}_l^\prime$ is diagonal with entries in $\{0,1\}$.

Equivalence with MIMO-IFC-GC {#sec:equ_model}
============================

The MIMO-IFC-GC consists of $K$ transmitters with $m_{t,k}$ antennas at the $k$th transmitter and $K$ receivers with $m_{r,k}$ antennas at the $k$th receiver. The signal received at the $k$th receiver is given by
$${{\mathbf{y}}_k} = {{\mathbf{H}}_{k,k}} {{\mathbf{x}}_k} + \sum_{o\neq k}{{\mathbf{H}}_{k,o}} {{\mathbf{x}}_o} + {\mathbf{n}}_k$$
where ${{\mathbf{n}}}_{k} \sim \mathcal{CN}(0,{\mathbf{I}})$ is additive complex Gaussian noise, ${\mathbf{x}}_k \in \mathbb{C}^{m_{t,k}}$ is the signal sent by the $k$th transmitter, and ${\mathbf{H}}_{k,o} \in \mathbb{C}^{m_{r,k}\times m_{t,o}}$ is the channel matrix between the $o$th transmitter and the $k$th receiver. The information stream vector intended for the $k$th user is ${\mathbf{u}}_k \in \mathbb{C}^{d_k}$, where $d_k \leq \min(m_{t,k},m_{r,k})$ and ${{\mathbf{u}}}_{k} \sim \mathcal{CN}(0,{\mathbf{I}})$. In this model, the $k$th user's precoding matrix is ${\mathbf{F}}_k \in \mathbb{C}^{m_{t,k} \times d_k}$, so that ${\mathbf{x}}_k = {\mathbf{F}}_k {\mathbf{u}}_k$.
The input vectors ${\mathbf{x}}_k$ have to satisfy both the $L$ generalized linear constraints
$$\sum^{K}_{k=1} \text{tr}\left\{\mathbf{\Phi}_{k,l} \mathbb{E}\left[{\mathbf{x}}_k {{\mathbf{x}}^{H}_{k}}\right]\right\} =\sum^{K}_{k=1} \text{tr}\left\{\mathbf{\Phi}_{k,l} {\mathbf{F}}_k {{\mathbf{F}}^{H}_{k}}\right\} \leq P_l$$
and the $M$ generalized linear constraints
$$\sum^{K}_{k=1} \text{tr}\left\{{\mathbf{\Phi}}_{k,m} \mathbb{E}\left[{\mathbf{x}}_k {{\mathbf{x}}^{H}_{k}}\right]\right\} =\sum^{K}_{k=1} \text{tr}\left\{{\mathbf{\Phi}}_{k,m} {\mathbf{F}}_k {{\mathbf{F}}^{H}_{k}}\right\} \leq P_m,$$
where $\mathbf{\Phi}_{k,l} \in \mathbb{C}^{m_{t,k} \times m_{t,k}}$, $l = 1,\ldots,L$, and $\mathbf{\Phi}_{k,m} \in \mathbb{C}^{m_{t,k} \times m_{t,k}}$, $m = 1,\ldots,M$, are weight matrices such that $\sum^{L}_{l=1} \mathbf{\Phi}_{k,l} + \sum^{M}_{m=1} \mathbf{\Phi}_{k,m}$ is positive definite for all $k = 1,\ldots,K$. Assume that the $o$th base station or radar station in the subset $\mathcal{M}_k \cup \mathcal{L}_k$, denoted by the index $(o)_k$, is informed about user $k$'s message. The MIMO-IFC-PMS is then a special case of the MIMO-IFC-GC with $m_{r,k} = n_r$, $m_{t,k} = M_k n_t + L_k n_{\text{rad}}$, channel matrices
$$\begin{aligned} &{\mathbf{H}}_{k,o} = & &[\tilde{{\mathbf{H}}}_{k,(1)_o} \cdots \tilde{{\mathbf{H}}}_{k,(M_o)_o} \ \tilde{{\mathbf{H}}}_{k,(M_o+1)_o}{\mathbf{P}}_{(M_o+ 1)_o} \\ & & & \cdots \tilde{{\mathbf{H}}}_{k,(M_o + L_o)_o}{\mathbf{P}}_{(M_o + L_o)_o} ], \end{aligned}$$
augmented precoding (beamforming) matrices
$${\mathbf{F}}_{k}={\left[{{\mathbf{F}}}^{T}_{k,\left(1\right)_k} \cdots {{\mathbf{F}}}^{T}_{k,\left(M_k\right)_k} {{\mathbf{F}}}^{T}_{k,\left(1+M_k\right)_k} \cdots {{\mathbf{F}}}^{T}_{k,\left(M_k+ L_k\right)_k}\right]}^T,$$
and weight matrices defined as follows: $\mathbf{\Phi}_{k,l}$ has the $o$th $n_{\text{rad}} \times n_{\text{rad}}$ block on its main diagonal equal to ${\mathbf{P}}_l^H{\mathbf{P}}_l$ if $l = (o)_k$, with all remaining entries zero, and $\mathbf{\Phi}_{k,m}$ has the $o$th $n_t \times n_t$ block on its main diagonal equal to ${\mathbf{I}}_{n_t}$ if $m = (o)_k$, with all remaining entries zero. If $k \notin \mathcal{K}_m$ then $\mathbf{\Phi}_{k,m} = \mathbf{0}$, and if $k \notin \mathcal{K}_l$ then $\mathbf{\Phi}_{k,l} = \mathbf{0}$. This equivalence is easy to verify by inspection. The effect of the small singular values space projection on the equivalent MIMO-IFC-GC model appears only in the augmented channel matrices and the weight matrices $\mathbf{\Phi}_{k,l}$, which are inputs to the optimization problem in Section \[sec:opt\].

Optimization Problem {#sec:opt}
====================

Given the equivalence between the MIMO-IFC-GC and the proposed network MIMO model with partial cooperation between radar and cellular systems, the rest of the paper shows how to use a modified version of the receiver design in [@linear_precoding_journal] for our proposed system model.
The $k$th user applies an equalization matrix ${\mathbf{G}}_k \in \mathbb{C}^{m_{r,k} \times d_k}$ to estimate its message ${\mathbf{u}}_k$ as
$$\hat{{\mathbf{u}}}_k={\mathbf{G}}^{H}_{k} {\mathbf{y}}_k.$$
Therefore, the mean square error (MSE) matrix for user $k$ is given by
$${\mathbf{E}}_k=\mathbb{E}\left[\left(\hat{{\mathbf{u}}}_k - {\mathbf{u}}_k \right) \left(\hat{{\mathbf{u}}}_k - {\mathbf{u}}_k \right)^H \right].$$
Using the equivalent MIMO-IFC-GC model, the MSE matrix can be written as
$$\begin{aligned} \label{eqn:error} \notag {\mathbf{E}}_k={\mathbf{G}}^{H}_{k} {\mathbf{H}}_{k,k} {\mathbf{F}}_k {\mathbf{F}}^{H}_{k} {\mathbf{H}}^{H}_{k,k} {\mathbf{G}}_{k} - {\mathbf{G}}^{H}_{k} {\mathbf{H}}_{k,k} {\mathbf{F}}_{k} \\ -{\mathbf{F}}^{H}_{k} {\mathbf{H}}^{H}_{k,k} {\mathbf{G}}_{k} + {\mathbf{G}}^{H}_{k} {\mathbf{\Omega}}_{k} {\mathbf{G}}_{k}+{\mathbf{I}}_{d_k}\end{aligned}$$
where
$${\mathbf{\Omega}}_{k}={\mathbf{I}} + \sum_{o\neq k} {\mathbf{H}}_{k,o} {\mathbf{F}}_{o} {\mathbf{F}}^{H}_{o} {\mathbf{H}}^{H}_{k,o}.$$
For each user $k$, the equalization matrix can be evaluated using the MMSE solution as
$$\label{eqn:equalization} {\mathbf{G}}_k=\left({\mathbf{H}}_{k,k} {\mathbf{F}}_k {\mathbf{F}}_k^{H} {\mathbf{H}}_{k,k}^{H} + \mathbf{\Omega}_k\right)^{-1}{\mathbf{H}}_{k,k} {\mathbf{F}}_k.$$
In this paper, we focus on the weighted sum-MSE minimization (WSMMSE) problem given by
$$\label{eqn:optimizaiton} \begin{aligned} & \underset{{\mathbf{F}}_{k},{\mathbf{G}}_{k},k = 1,\cdots,K}{\text{min}} & & \sum^{K}_{k=1} \text{tr} \left\{{\mathbf{W}}_k {\mathbf{E}}_k \right\}\\ & \text{subject to} & & \sum^{K}_{k=1} \text{tr}\left\{{\mathbf{\Phi}}_{k,m} {\mathbf{F}}_k {{\mathbf{F}}^{H}_{k}}\right\} \leq P_m, \; m=1,\cdots,M\\ & & & \sum^{K}_{k=1} \text{tr}\left\{{\mathbf{\Phi}}_{k,l} {\mathbf{F}}_k {{\mathbf{F}}^{H}_{k}}\right\} \leq P_l, \; l=1,\cdots,L \end{aligned}$$
where the ${\mathbf{W}}_k \in \mathbb{C}^{d_k \times d_k}$ are diagonal weight matrices with non-negative entries.

MMSE Minimization {#sec:min}
=================

The extended MMSE interference alignment (eMMSE-IA) technique of [@linear_precoding_journal], originally applied to an interference channel with per-transmitter power constraints in which each receiver is endowed with multiple antennas, is extended here to include a radar system coexisting with the communication system. The technique starts with arbitrary precoding matrices ${\mathbf{F}}_k$.
Then, at each iteration $j$, the equalization matrix ${\mathbf{G}}^{\left(j\right)}_{k}$ is evaluated using (\[eqn:equalization\]), resulting in
$${\mathbf{G}}^{\left(j\right)}_{k} = \left({\mathbf{H}}_{k,k} {\mathbf{F}}^{\left(j-1\right)}_{k} {\mathbf{F}}^{\left(j-1\right)H}_{k} {\mathbf{H}}^{H}_{k,k} + \mathbf{\Omega}^{\left(j-1\right)}_{k}\right)^{-1}{\mathbf{H}}_{k,k} {\mathbf{F}}^{\left(j-1\right)}_{k}$$
where
$$\mathbf{\Omega}^{\left(j-1\right)}_{k} = {\mathbf{I}} + \sum_{o\neq k} {\mathbf{H}}_{k,o} {\mathbf{F}}^{\left(j-1\right)}_{o} {\mathbf{F}}^{\left(j-1\right)H}_{o} {\mathbf{H}}^{H}_{k,o}.$$
Given the matrices ${\mathbf{G}}^{\left(j\right)}_{k}$, the optimization problem (\[eqn:optimizaiton\]) becomes
$$\label{eqn:optimizaiton2} \begin{aligned} & \underset{{\mathbf{F}}_{k},k = 1,\cdots,K}{\text{min}} & & \sum^{K}_{k=1} \text{tr} \{{\mathbf{W}}_k {\mathbf{E}}^{\left(j\right)}_{k} \}\\ & \text{subject to} & & \sum^{K}_{k=1} \text{tr} \left\{{\mathbf{\Phi}}_{k,m} {\mathbf{F}}_k {{\mathbf{F}}^{H}_{k}}\right\} \leq P_m, \; \forall m\\ & & & \sum^{K}_{k=1} \text{tr}\left\{{\mathbf{\Phi}}_{k,l} {\mathbf{F}}_k {{\mathbf{F}}^{H}_{k}}\right\} \leq P_l, \; \forall l \end{aligned}$$
where ${\mathbf{E}}^{\left(j\right)}_{k}$ is as in (\[eqn:error\]) with ${\mathbf{G}}^{\left(j\right)}_{k}$ in place of ${\mathbf{G}}_{k}$. For fixed ${\mathbf{G}}^{\left(j\right)}_{k}$, the optimization problem in (\[eqn:optimizaiton2\]) is convex, and therefore there exists a unique global optimal solution for ${\mathbf{F}}^{\left(j\right)}_{k}$. Using the KKT conditions, we have
$$\begin{aligned} & \notag {\mathbf{F}}^{(j)}_{k} = & & \Bigl(\sum^{K}_{o=1} {\mathbf{H}}^{H}_{o,k} {\mathbf{G}}^{(j)}_{o} {\mathbf{W}}_{o} {\mathbf{G}}^{(j)H}_{o} {\mathbf{H}}_{o,k} + \sum_{m} \mu_m \mathbf{\Phi}_{k,m} + \\ & & & \sum_{l} \mu_l \mathbf{\Phi}_{k,l}\Bigr)^{-1} {\mathbf{H}}^{H}_{k,k} {\mathbf{G}}^{(j)}_{k} {\mathbf{W}}_{k} \end{aligned}$$
where the $\mu_m \geq 0$ are Lagrange multipliers satisfying
$$\mu_m\left(\sum^{K}_{k=1} \text{tr} \left\{\mathbf{\Phi}_{k,m} {\mathbf{F}}^{\left(j\right)}_{k} {\mathbf{F}}^{\left(j\right)H}_{k}\right\}-P_m\right)=0$$
and the $\mu_l \geq 0$ are Lagrange multipliers satisfying
$$\mu_l\left(\sum^{K}_{k=1} \text{tr} \left\{\mathbf{\Phi}_{k,l} {\mathbf{F}}^{\left(j\right)}_{k} {\mathbf{F}}^{\left(j\right)H}_{k}\right\}-P_l\right)=0.$$
Using ${\mathbf{F}}^{\left(j\right)}_{k}$, the iterative algorithm continues with the $(j+1)$th iteration.

Conclusion {#sec:conc}
==========

In this paper, we considered a network MIMO model with partial cooperation in which radar stations cooperate with cellular base stations (BSs) to deliver messages to intended mobile users. We designed a new projection matrix to mitigate radar station interference to the cellular system. In addition, this projection usefully enhances cellular system performance and QoS when radar stations cooperate in delivering communication messages. We showed that our constructed model, in which the radar stations act as BSs in the cellular system, is equivalent to a MIMO interference channel under generalized linear constraints (MIMO-IFC-GC). Finally, we provided a solution to the weighted sum mean square error minimization (WSMMSE) problem while enforcing power constraints on both radar and cellular stations.

Future Work
===========

Comparison with other MSE minimization schemes will be considered in our future work.
In addition, we plan to study sum-rate maximization for the newly proposed model.
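For concreteness, the following minimal NumPy sketch illustrates the alternating updates of Section \[sec:min\] for a deliberately simplified instance; it is our own illustration, not the paper's full algorithm. The assumptions: one user per transmitter (plain MIMO-IFC), identity MSE weights ${\mathbf{W}}_k = {\mathbf{I}}$, and a plain per-transmitter power constraint $\text{tr}\{{\mathbf{F}}_k {\mathbf{F}}_k^H\} \leq P$ in place of the generalized $\mathbf{\Phi}$ constraints, so the multiplier $\mu_k$ can be found per transmitter by bisection from complementary slackness. Channel values are random toys.

```python
import numpy as np

def mmse_equalizer(H, F, k):
    """G_k = (H_kk F_k F_k^H H_kk^H + Omega_k)^{-1} H_kk F_k."""
    n_r = H[k][k].shape[0]
    Omega = np.eye(n_r, dtype=complex)
    for o in range(len(F)):
        if o != k:
            T = H[k][o] @ F[o]
            Omega += T @ T.conj().T            # interference-plus-noise covariance
    S = H[k][k] @ F[k]
    return np.linalg.solve(S @ S.conj().T + Omega, S)

def precoder_update(H, G, k, P_max):
    """F_k from the KKT condition, with mu_k found by bisection."""
    n_t = H[k][k].shape[1]
    A = np.zeros((n_t, n_t), dtype=complex)
    for o in range(len(G)):
        T = H[o][k].conj().T @ G[o]            # H_ok^H G_o (W_o = I here)
        A += T @ T.conj().T
    B = H[k][k].conj().T @ G[k]
    F = lambda mu: np.linalg.solve(A + mu * np.eye(n_t), B)
    power = lambda mu: np.trace(F(mu) @ F(mu).conj().T).real
    if power(0.0) <= P_max:                    # constraint inactive: mu_k = 0
        return F(0.0)
    lo, hi = 0.0, 1.0
    while power(hi) > P_max:                   # power is decreasing in mu
        hi *= 2.0
    for _ in range(100):                       # bisect to meet the power budget
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if power(mid) > P_max else (lo, mid)
    return F(hi)

# Toy run: K = 3 links, n_t = 4, n_r = 2, d = 2 streams, per-transmitter power 1.
rng = np.random.default_rng(1)
K, n_t, n_r, d, P_max = 3, 4, 2, 2, 1.0
H = [[(rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t)))
      / np.sqrt(2) for _ in range(K)] for _ in range(K)]
F = [rng.standard_normal((n_t, d)) + 1j * rng.standard_normal((n_t, d)) for _ in range(K)]
F = [Fk * np.sqrt(P_max / np.trace(Fk @ Fk.conj().T).real) for Fk in F]
for _ in range(50):                            # alternate G- and F-updates
    G = [mmse_equalizer(H, F, k) for k in range(K)]
    F = [precoder_update(H, G, k, P_max) for k in range(K)]
```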
Q: C# TryParseExact CultureInfo.InvariantCulture changing behavior when culture changes I am trying to convert dates with the following format "d/MM/yyyy", for example: 5/11/2014, 12/12/2013, 31/01/2012. I am using the TryParseExact method: private const string DateFormat = "d/MM/yyyy"; Culture = CultureInfo.InvariantCulture; DateTime creationDateConverted; DateTime.TryParseExact(creationDate, DateFormat, Culture, DateTimeStyles.None, out creationDateConverted); The display method is: @Html.DisplayFor(m => object.date), with the date having: DisplayFormat(DataFormatString="{0:d/MM/yyyy}"). The server which hosts the website has a US culture. If I set the culture to fr-FR using Web.config, it works well. But if I don't set that culture, it does not work: the date is converted to MM/dd/yyyy format, and of course some dates don't parse with that format. I don't understand why the culture would influence that method, because I provide a specific format and an invariant culture. I also tried to provide the fr-FR culture to the method, but no luck. EDIT: when I say fail, I mean that the TryParseExact method returns false. A: A DateTime value has no format of its own; formatting happens only when it is converted to a string, and the output of ToString (and of display attributes such as DisplayFormat) depends on the current culture. In particular, the "/" in a custom format string like "d/MM/yyyy" is a placeholder for the current culture's date separator, which is why the displayed result changes with the server's culture. If you want a specific format regardless of culture, format explicitly: creationDateConverted.ToString("d/MM/yyyy", CultureInfo.InvariantCulture);
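A minimal console sketch of the separator behavior (the culture names and the sample date are illustrative assumptions, not taken from the question):

```csharp
using System;
using System.Globalization;

class Program
{
    static void Main()
    {
        var date = new DateTime(2014, 11, 5);

        // "/" in a custom format string is a placeholder for the culture's
        // date separator, so the output changes with the formatting culture.
        Console.WriteLine(date.ToString("d/MM/yyyy", new CultureInfo("fr-FR")));     // 5/11/2014
        Console.WriteLine(date.ToString("d/MM/yyyy", new CultureInfo("de-DE")));     // 5.11.2014
        Console.WriteLine(date.ToString("d/MM/yyyy", CultureInfo.InvariantCulture)); // 5/11/2014

        // Escaping the separator pins a literal "/" regardless of culture.
        Console.WriteLine(date.ToString(@"d\/MM\/yyyy", new CultureInfo("de-DE")));  // 5/11/2014
    }
}
```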
In the early nineties, the console market was rising to prominence. The Super Nintendo Entertainment System (SNES) and Sega Mega Drive were popular but were largely dismissed as kids' toys because they featured tiny red and green plumbers and spiky blue hedgehogs. However, that all changed with the launch of the Sony PlayStation, which featured three-dimensional gameplay and gave millions their first taste of 32-bit gaming. The original PlayStation was a big risk for Sony. It had never developed a computer game, let alone a complete system. In 1991, the company was reeling from a failed partnership with Nintendo -- which would have resulted in it supplying CD-ROM technology for the popular SNES -- so it decided to go it alone. Back then, console makers like Nintendo, SEGA and Atari would develop their own titles in order to showcase what their platforms could do. Sony didn't have its own game development studio, but it did promise powerful new features and flexibility for developers thanks to its disc-based architecture. With the chance to render 3D worlds and deliver stereo CD-quality sound, game studios jumped on board and the rest, as they say, is history. In this episode of Tech Hunters, Julia Hardy traces the history of the console that made gaming the massive industry it is today and takes a look at giving the trailblazing system a modern makeover.
The long-term objective of this proposal is to develop an organizational and scientific infrastructure necessary to establish the regional Craniofacial Pain Research Center for Discovery (CPRCD). The CPRCD Administrative Core Planning Committee has analyzed clinical needs as well as institutional and regional scientific strengths. A multidisciplinary organizational concept has been drafted to guide planning and development, with an emphasis on analysis of the state-of-the-art in craniofacial pain in its broadest context, from molecules to the delivery of care. An Administrative Core Planning Committee will integrate the activities of a Basic Science Core Planning Committee, a Clinical Sciences Core Planning Committee, a Technology Transfer Core Planning Committee and a Health Service Delivery Core Planning Committee. To continue to plan and develop the Center for Discovery, we will: 1) develop a comprehensive mission statement; 2) prepare a careful analysis of clinical needs, the state-of-the-art, and our regional resources; 3) analyze organizations that catalyze the process of discovery; 4) implement an organizational paradigm entirely dedicated to the mission and the process of discovery; 5) install infrastructure, communication, and computer resources for regional multidisciplinary collaboration; 6) recruit selected scientists, clinicians and administrators, and support personnel; and 7) collaboratively develop goals and objectives and a comprehensive multidisciplinary research plan for the next grant phase. The CPRCD will emerge from this strategy for planning and development. The organization will be poised to understand the biology of craniofacial pain in the context of patients and caregivers, identify issues, discover new solutions and assess the impacts of the discoveries on pain patients and on society. Regional collaborations have been set up with renowned scientists at the Mayo Clinic, University of Wisconsin (Madison), University of Iowa (Iowa City) and University of Washington (Seattle). Several health maintenance organizations will collaborate with the center, including the Group Health Foundation, Inc., which has a comprehensive database and access to 700,000 members. Collaborations and support have been offered by the University of Minnesota Office of Research and Technology Transfer, Minnesota Project Innovation, Inc. and Medical Alley, Inc. for the transfer of technology. Collaborations have been set up with the University of Minnesota Institute for Health Services Research and other Institutes and Centers in the University of Minnesota Academic Health Center.
Consumers often prefer to cook food in a microwave oven rather than conventional ovens because of the reduced cooking time required to heat foods in a microwave oven. Consumers also want to be provided with the opportunity to cook pre-packaged food products in the package in which they were purchased without the hassle of transferring the food from the package to a piece of cookware. Unfortunately, foods cooked in a microwave oven tend to be tough and/or dry in texture and consistency, rather than tender and moist. When liquid is added to the food in an attempt to retain moisture, the food can become soggy and undesirable. In addition, microwave ovens do not evenly distribute heat to the product being cooked. This results in a cooked food product that may be very hot in one area, but cold in another area. Because of these problems, many people consider microwave cooking to be problematic and generally undesirable. One method for improving the texture and consistency of food cooked in a microwave oven is to use steam generated by the heated food product to assist in cooking the food. Cooking with steam not only provides moisture for the food being cooked, but also results in more consistent heating throughout the food product. Specifically, a device which uses the steam generated by the heated food product to assist in cooking the food takes advantage of the ideal gas law, a distillation of several kinetic theories including Boyle's Law and Gay-Lussac's Law. More specifically, such a device takes advantage of the proportional relationship between pressure and temperature when volume and number of gas molecules remain constant. This proportional relationship can be expressed as a mathematical equation, (P2/P1)=(T2/T1), where P1 is the initial pressure, P2 is the final pressure, T1 is the initial temperature, and T2 is the final temperature. Accordingly, any increase in pressure will result in a proportional increase in temperature that would not occur at ambient pressures. For example, if the pressure were to increase 1.2-fold (e.g., from 1 to 1.2 atmospheres), the temperature would also increase 1.2-fold (e.g., from 275 K to 330 K, which is an increase from 35° F. to 134° F.). Such a steam cooking method is described in commonly assigned U.S. Pat. No. 6,559,431, which is incorporated herein by this reference. In order to steam cook by this method, the volume of the device must remain substantially constant; accordingly, the device must be at least partially sealed. When a sealed device is used to heat a food product contained therein, pressure rapidly builds as steam is generated from the heated food product. As heating continues, this pressure will continue to escalate until the device's seal is broken, thereby relieving the pressure. This relief often comes in the form of an explosion forcing the seal to open and resulting in food being ejected from the device. Not only does such an explosion create a mess, but it also undermines the attempt to use steam to cook the food product because the explosion causes a rapid release of the collected steam from the no longer sealed cooking environment. The release of pressure can be controlled by including vents within the seal of the device, resulting in weakened portions in the seal. When the sealed cooking environment attains a pressure creating a risk of explosion, the weakened portions in the seal allow for a controlled pressure release at the vents.
In addition to providing a point for the release of pressure and steam, the opening formed at the weakened portion in the seal creates one or more tabs which may be grasped and pulled, breaking the remainder of the seal such that consumption of the food product may occur. Such a venting configuration is described in commonly assigned U.S. Pat. No. 6,559,431 which has been incorporated herein by reference. Although this type of venting configuration allows some degree of control over the location that the pressure release will occur, any one or more of the weakened portions in the seal may allow venting during a particular heating session. For example, during one heating session, two vents in a first portion of the seal may allow for pressure release while the seal remains intact at the other vents, while during another heating session, a single vent in a second portion of the seal may allow for pressure release while the seal remains intact at the other vents. Thus, the exact location and number of points at which the seal will break is difficult, if not impossible, to predict. Furthermore, the surface area over which the seal is broken dictates how rapidly the pressure within the cooking environment will drop, thereby affecting the texture and consistency of the prepared food. Also affecting the texture and consistency of the prepared food is the location at which the seal is broken, which dictates the path of the steam being used to cook the food. Finally, since the opening formed in the seal when venting occurs creates one or more tabs which may be grasped and pulled to open the container for consumption of the food product, the location at which the seal is broken affects the convenience with which the device may be used. Therefore, it would be desirable to provide a microwave cooking device for steam cooking which allows for improved venting control.
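As a quick numeric check of the pressure-temperature proportionality worked through in the background above, here is a small Python sketch (purely illustrative; the function names are our own) reproducing the 1.2-fold example:

```python
# Gay-Lussac proportionality at constant volume: P2/P1 = T2/T1.
def final_temp_kelvin(t1_k, p1, p2):
    return t1_k * (p2 / p1)

def kelvin_to_fahrenheit(t_k):
    return (t_k - 273.15) * 9 / 5 + 32

t1 = 275.0                                   # initial temperature, K
t2 = final_temp_kelvin(t1, p1=1.0, p2=1.2)   # 1 atm -> 1.2 atm gives 330 K
print(t2)                                    # 330.0
print(kelvin_to_fahrenheit(t1))              # ~35 F
print(kelvin_to_fahrenheit(t2))              # ~134 F
```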
1. Field of the Invention The present invention relates generally to videoconferencing systems, and more particularly to microphone arrays used in videoconferencing systems. 2. Description of Related Art Videoconferencing is rapidly becoming a popular choice of communication among corporations and individuals. Business transactions, for example, increasingly occur between participants in widely separated geographic locations. Since it is often difficult for all such participants to meet in a single location, many business participants rely on teleconferencing mechanisms such as videoconference systems. Videoconferencing systems are generally preferable to other teleconferencing mechanisms because these systems allow participants to view other participants, observe remote demonstrations, and more easily identify a speaking participant at any given moment. In effect, videoconferencing allows people at two or more locations to interact with each other. More importantly, information and communication are exchanged essentially in real time. Referring to FIG. 1A, a conventional videoconferencing system 100 is shown. The videoconferencing system 100 includes a video display 102, speakers 106, a microphone 108, and a videoconference unit 110 further comprising a camera 112. The conventional videoconferencing system 100 may be used with a personal computer or, alternatively, may have the videoconference unit 110 coupled to a large display or projection system located in a large videoconferencing room. A disadvantage of the conventional videoconferencing system 100 is that it does not have the ability to focus on an individual who is speaking. The focusing process requires determination of the position of the individual, movement (i.e., panning, tilting, and zooming) of the camera 112 to the proper position of the individual, and adjustment of lenses so that the camera 112 is in focus on the individual. When more than one individual is involved in a videoconference, it may be desirable to focus the camera 112 on each individual as that individual is speaking. This focusing task is often difficult, however, because the position of the individual speaking must be determined and the camera 112 moved to that position relatively quickly and smoothly. Therefore, the videoconferencing system 100 is typically left in a stationary position, and thus captures an image of the entire room or of whatever is directly in front of the camera 112. Although some videoconferencing systems 100 may have the ability to pan and tilt to focus on individuals, the pan and tilt functions are usually manually controlled. Further, some conventional videoconferencing systems 100 may have the ability to localize an acoustic source. These videoconferencing systems 100 often use a vertical and a horizontal microphone array to locate an acoustic source within a room. As shown in FIG. 1B, the videoconference unit 110 includes a plurality of microphones 120 arranged in a horizontal array 122 and a vertical array 124. In order to accurately determine the position of the sound source, both the horizontal array 122 and the vertical array 124 of microphones 120 must be used. The microphones 120 are typically placed so that the distance between the microphones 120 in each array 122 and 124 is precisely known. Further, the horizontal array 122 and the vertical array 124 are situated so that the relative angle between the arrays 122 and 124 is precisely known.
Typically, a processor (usually located within the videoconference unit 110) is used to determine the acoustic source location. Initially, the microphones 120 detect sound, produce signals representing these sounds, and transmit these signals to the processor. The processor then uses this signal information, which may include signal strength, arrival time, and the positions of the microphones 120, to calculate an acoustic source location. Conventional methods used to determine the sound source location, such as cross-correlation techniques, are typically slow, inaccurate, and unreliable. Further, because the information cannot be processed quickly or accurately enough, camera manipulation is not smooth and focused. Furthermore, accuracy in determining the sound source location increases with the number of microphones 120 used in the horizontal array 122 and the vertical array 124. Therefore, it is desirable to have as many microphones as possible positioned in both the horizontal array 122 and the vertical array 124. Unfortunately, it is often not feasible or economical to have so many microphones 120. Referring back to FIG. 1A, the conventional videoconferencing system 100 may have the horizontal and vertical arrays 122 and 124, respectively, mounted to a top section of the videoconference unit 110. Since the relative angle between the horizontal array 122 and the vertical array 124 must be precisely known in order for the camera 112 to track the acoustic source location, the horizontal array 122 and the vertical array 124 must be permanently mounted to the videoconference unit 110. This configuration has the further disadvantage of limiting the number of microphones used, because increasing the number of microphones would require making the videoconference unit 110 both taller and wider. Additionally, a larger videoconference unit 110 is more difficult to set up and support on top of the video display 102, and is therefore less appealing to consumers. Therefore, there is a need for a videoconferencing system which uses horizontal and vertical microphone arrays which may be mounted in various locations. There is a further need for a method of analyzing data from these microphone arrays which is fast and accurate enough to properly manipulate a camera.
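As a rough illustration of the cross-correlation techniques mentioned above (our sketch; the sampling rate and signal model are assumptions, not taken from the patent), the time difference of arrival between two microphones can be estimated from the peak of their cross-correlation:

```python
# Cross-correlation estimate of the time difference of arrival (TDOA) between two
# microphones; the peak lag gives the relative delay, which constrains the source
# position relative to the microphone pair.
import numpy as np

fs = 48_000                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
source = rng.standard_normal(4096)            # stand-in for a talker's signal

true_delay = 23                               # samples by which mic 2 lags mic 1
mic1 = source + 0.05 * rng.standard_normal(source.size)
mic2 = np.roll(source, true_delay) + 0.05 * rng.standard_normal(source.size)

xcorr = np.correlate(mic2, mic1, mode="full")
lag = int(np.argmax(xcorr)) - (source.size - 1)   # peak position minus zero-lag index
print(f"estimated delay: {lag} samples ({lag / fs * 1e3:.3f} ms), true: {true_delay}")
```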
Q: Generating a template My professor gave me a template which I compiled but it gave me lots of errors. The code is: \documentclass[12pt]{article} \usepackage{rotating,graphics,psfrag,epsfig} \usepackage{graphicx} \usepackage{color} \usepackage{latexsym} \usepackage[utf8]{inputenc} \usepackage[spanish]{babel} \usepackage[T1]{fontenc} \usepackage{multicol} \usepackage[dvipdfm]{hyperref} \usepackage{fancyhdr} \pagestyle{fancy} \usepackage{lastpage} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{marvosym} \newcommand{\ff}{Marzo del 2007} \renewcommand{\headrulewidth}{0.0pt} \renewcommand{\footrulewidth}{0.4pt} %\usepackage[thickspace]{SIunits} \usepackage[a]{esvect} \usepackage{shortlst} \usepackage{marvosym} \usepackage[version=3]{mhchem} \newcommand{\solucion}{\noindent\textbf{Soluci\'on}} \newcommand{\comentario}{\noindent\textbf{Comentario}} \newcommand{\nota}{\noindent\textbf{Nota}:} \newcommand{\suge}{\noindent\textbf{Sugerencia}:} \newcommand{\analisis}{\noindent\textbf{An\'alisis}} \newcommand{\modelo}{\noindent\textbf{Modelo}} \fancyhead[L]{} \fancyhead[C]{} \fancyhead[R]{} % \fancyfoot[L]{http://fisica.usach.cl/$\sim$cesparza/} %\fancyfoot[C]{\ff} \fancyfoot[C]{} \fancyfoot[R]{TeoIngMat-EXA1s11.pdf -- 15 noviembre 2011} \headwidth 18.0cm \textwidth 18.0cm %18.0cm \textheight 25.00cm % era25 \oddsidemargin -0.8cm \evensidemargin -0.8cm \topmargin -3.5000cm %era -2.0cm \columnsep 0.3cm \parindent 0pt \newcommand{\ia}{\'{\i}} \newcommand{\xu}{\rm {\widehat{\bf{x}}}} \newcommand{\yu}{\rm {\widehat{\bf{y}}}} \newcommand{\zu}{\rm {\widehat{\bf{z}}}} \newcommand{\iu}{\mathbf{\hat{\imath}}} \newcommand{\ju}{\mathbf{\hat{\jmath}}} \newcommand{\ku}{\mathbf{\hat{\kappa}}} \newcommand{\kuw}{\mathbf{\widehat{\kappa}}} \usepackage[dvipdfm]{hyperref} \hypersetup{ pdfauthor = Carlos Esparza Barrera, pdftitle = TeoF1-IngMat-EXA1s11.pdf, pdfsubject = 2red, pdfkeywords =carlos.esparza@usach.cl, pdfcreator = {LaTeX con la paqueteria hyperref }, pdfproducer = {dvpdfm} } \usepackage{type1cm} \usepackage{eso-pic} \usepackage{color} \begin{document} \psfrag{xu}{$\widehat{\bf{x}}$} \psfrag{yu}{$\widehat{\bf{y}}$} \psfrag{iu}{$\mathbf{\hat \imath}$} \psfrag{ju}{$\mathbf{\hat \jmath}$} \psfrag{i}{$\hat \imath$} \psfrag{j}{$\hat \jmath$} \textbf{Departamento de Fisica \hfill Universidad de Santiago de Chile} Teoria a Física I (22104)\hfill \textbf{EXAMEN} \hfill Ingeniería Matemática \vspace{0.1cm} \hline \hline \vspace{0.2cm} Prof. Carlos H. Esparza-Barrera \hfill \begin{LARGE} \raisebox{-0.5ex}{\Email} \end{LARGE} ~carlos.esparza@usach.cl \hfill \begin{LARGE}\Telefon \end{LARGE} ~(56-2-)7181239 \hfill Oficina 27 \\[0.1cm] \hline \begin{itemize} \item \begin{large} \textbf{Esta prueba consta de cuatro problemas, usted debe responder tres de ellos. La elección es suya y debe indicarla en la hoja. Si responde los cuatro, se considerarán los tres peor evaluados}\end{large}. \hfill Cada problema vale $2,0$ puntos. \item Fundamente su razonamiento; haga esquemas explicatorios. Use l\'apiz indeleble; en caso contrario, no puede pretender una segunda revisi\'on de su trabajo. Exprese sus repuestas numéricas con 3 CS. \item Las situaciones y los valores num\'ericos de los datos suministrados no necesariamente corresponden a los de un experimento o situaci\'on pr\'actica real. Se pretende que usted trabaje cada problema con el modelo adecuado en el marco de la teor\'ia pertinente y obtenga las predicciones correspondientes. 
\end{itemize} \hline \begin{multicols}{2} \begin{enumerate} \item problema 1 \item ... \item ... \item ... \item ... \item ... \end{enumerate} Compiling in Latex=>PS=>PDF, I have the following errors: Misplaced \noalign x4 and Emergency stop. I don't know how to fix this. The result should be the following: After the last horizontal line, I edited the problems so there's no text in there as you can see in the code. I had to repost this because I messed up in the other thread. I appreciate the help and thanks in advance. A: I left numerous comments in the code below highlighting bad code, however this version runs without error in latex and pdflatex \documentclass[12pt]{article} % never load epsfig it is just for emulating latex2.09 documents from 1980's % rotating might be needed for sideways table, otherwise not % psfrag doesn't work as is with pdflatex or dvipdfm % graphics is loaded later %\usepackage{rotating,graphics,psfrag,epsfig} \usepackage{graphicx} % color loaded later %\usepackage{color} \usepackage{latexsym} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage[spanish]{babel} %\usepackage[T1]{fontenc} \usepackage{multicol} %%% this is loaded later with different options %\usepackage[dvipdfm]{hyperref} \usepackage{fancyhdr} \pagestyle{fancy} \usepackage{lastpage} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{marvosym} \newcommand{\ff}{Marzo del 2007} \renewcommand{\headrulewidth}{0.0pt} \renewcommand{\footrulewidth}{0.4pt} %\usepackage[thickspace]{SIunits} \usepackage[a]{esvect} % no idea what this is, not in texlive %\usepackage{shortlst} \usepackage{marvosym} \usepackage[version=3]{mhchem} \newcommand{\solucion}{\noindent\textbf{Soluci\'on}} \newcommand{\comentario}{\noindent\textbf{Comentario}} \newcommand{\nota}{\noindent\textbf{Nota}:} \newcommand{\suge}{\noindent\textbf{Sugerencia}:} \newcommand{\analisis}{\noindent\textbf{An\'alisis}} \newcommand{\modelo}{\noindent\textbf{Modelo}} \fancyhead[L]{} \fancyhead[C]{} \fancyhead[R]{} % % don't use math \sim \fancyfoot[L]{\url{http://fisica.usach.cl/~cesparza/}} %\fancyfoot[C]{\ff} \fancyfoot[C]{} \fancyfoot[R]{TeoIngMat-EXA1s11.pdf -- 15 noviembre 2011} \headwidth 18.0cm \textwidth 18.0cm %18.0cm \textheight 25.00cm % era25 \oddsidemargin -0.8cm \evensidemargin -0.8cm \topmargin -3.5000cm %era -2.0cm \columnsep 0.3cm % I left this but having \parindent and \parskip both 0pt % means that paragraph breaks are not really visible \parindent 0pt \newcommand{\ia}{\'{\i}} % The \rm never did anything here and both % \rm and \bf have been deprecated since 1994 %\newcommand{\xu}{\rm {\widehat{\bf{x}}}} %\newcommand{\yu}{\rm {\widehat{\bf{y}}}} %\newcommand{\zu}{\rm {\widehat{\bf{z}}}} \newcommand{\xu}{\widehat{\mathbf{x}}} \newcommand{\yu}{\widehat{\mathbf{y}}} \newcommand{\zu}{\widehat{\mathbf{z}}} \newcommand{\iu}{\mathbf{\hat{\imath}}} \newcommand{\ju}{\mathbf{\hat{\jmath}}} \newcommand{\ku}{\mathbf{\hat{\kappa}}} \newcommand{\kuw}{\mathbf{\widehat{\kappa}}} %\usepackage[dvipdfm]{hyperref} \usepackage{hyperref} \hypersetup{ pdfauthor = Carlos Esparza Barrera, pdftitle = TeoF1-IngMat-EXA1s11.pdf, pdfsubject = 2red, pdfkeywords =carlos.esparza@usach.cl, pdfcreator = {LaTeX con la paqueteria hyperref }, pdfproducer = {dvpdfm} } \usepackage{type1cm} \usepackage{eso-pic} \usepackage{color} \DeclareUnicodeCharacter{00ED}{\'{\i}} \DeclareUnicodeCharacter{00F3}{\'{o}} \DeclareUnicodeCharacter{00E1}{\'{a}} \begin{document} % psfrag doesn't work with pdf backends % \psfrag{xu}{$\widehat{\bf{x}}$} % 
\psfrag{yu}{$\widehat{\bf{y}}$} %\psfrag{iu}{$\mathbf{\hat \imath}$} % \psfrag{ju}{$\mathbf{\hat \jmath}$} % \psfrag{i}{$\hat \imath$} % \psfrag{j}{$\hat \jmath$} \textbf{Departamento de Fisica \hfill Universidad de Santiago de Chile} Teoria a Física I (22104)\hfill \textbf{EXAMEN} \hfill Ingeniería Matemática \vspace{0.1cm} %\hline is only used in tables %\hline \hline \vspace{0.2cm} Prof. Carlos H. Esparza-Barrera \hfill \begin{LARGE} \raisebox{-0.5ex}{\Email} \end{LARGE} ~carlos.esparza@usach.cl \hfill \begin{LARGE}\Telefon \end{LARGE} ~(56-2-)7181239 \hfill Oficina 27 % never end a paragraph with \\ %\\[0.1cm] %\hline \begin{itemize} \item \begin{large} \textbf{Esta prueba consta de cuatro problemas, usted debe responder tres de ellos. La elección es suya y debe indicarla en la hoja. Si responde los cuatro, se considerarán los tres peor evaluados} %Always include a paragraph end _within_ a size change this blank line is needed \end{large}. \hfill Cada problema vale $2,0$ puntos. \item Fundamente su razonamiento; haga esquemas explicatorios. Use l\'apiz indeleble; en caso contrario, no puede pretender una segunda revisi\'on de su trabajo. Exprese sus repuestas numéricas con 3 CS. \item Las situaciones y los valores num\'ericos de los datos suministrados no necesariamente corresponden a los de un experimento o situaci\'on pr\'actica real. Se pretende que usted trabaje cada problema con el modelo adecuado en el marco de la teor\'ia pertinente y obtenga las predicciones correspondientes. \end{itemize} %\hline \begin{multicols}{2} \begin{enumerate} \item problema 1 \item ... \item ... \item ... \item ... \item ... \end{enumerate} \end{multicols} \end{document} A: Here is an example more in the spirit of LaTeX, as I see it. (Not to compete with the other answers but to demonstrate the benefits of the LaTeX approach, for those who see it as beneficial). If you needed to produce many such documents, you could design a custom preamble (or package) along these lines. This completely separates content from presentation, so that the markup inside the document environment is completely semantic. And I think it also uses fairly simple solutions to achieve the desired layout style.
\documentclass{article} \usepackage{lmodern} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage[spanish]{babel} \usepackage[margin=1.5in]{geometry} \usepackage{multicol} \usepackage{marvosym} % for icons \usepackage{url} % for e-mail address \usepackage{sectsty} % for customizing section font \sectionfont{\normalsize\bfseries} % Horizontal rule \newlength{\normalrulethickness} \setlength{\normalrulethickness}{0.4pt} \newcommand{\thinrule}{\noindent\rule{\linewidth}{\normalrulethickness}} \newcommand{\thickrule}{\noindent\rule{\linewidth}{2\normalrulethickness}} \newcommand{\doublerule}{% \thickrule% \llap{\rule[2pt]{\linewidth}{\normalrulethickness}}% } % Letterhead at top of page \newcommand{\letterhead}{% \noindent% \textbf{\department}\hfill \textbf{\university}\newline \course\hfill \documentname\hfill \subject\newline \doublerule\par\smallskip {\small\noindent% \professor\hfill \emailicon\ \email\hfill \phoneicon\ \phone\hfill \office% }\par\bigskip% } % Commands to set the values in the \letterhead \newcommand{\setdepartment}[1]{\gdef\department{#1}} \newcommand{\setuniversity}[1]{\gdef\university{#1}} \newcommand{\setcourse}[1]{\gdef\course{#1}} \newcommand{\setsubject}[1]{\gdef\subject{#1}} \newcommand{\setdocumentname}[1]{\gdef\documentname{#1}} \newcommand{\setprofessor}[1]{\gdef\professor{#1}} \newcommand{\setemail}[1]{\gdef\email{\url{#1}}} \newcommand{\setphone}[1]{\gdef\phone{#1}} \newcommand{\setoffice}[1]{\gdef\office{#1}} % Icons in letterhead \newcommand{\emailicon}{\Email} % from marvosym \newcommand{\phoneicon}{\Telefon} % Block of instructions with rule before and after \newenvironment{instructions}{\thickrule}{\thickrule} % 2-column exam questions \newenvironment{examquestions} {\begin{multicols}{2}} {\end{multicols}} \newcommand{\pointvalue}[1]{\hfill\mbox{#1}} \newcommand{\strong}[1]{\textbf{#1}} \begin{document} \setdepartment{Departamento de Física} \setuniversity{Universidad de Santiago de Chile} \setcourse{Teoría Física (22104)} \setsubject{Ingeniería Matemática} \setdocumentname{EXAMEN} \setprofessor{Prof.~Carlos H.~Esparza-Barrera} \setemail{carlos.esparza@usach.cl} \setphone{(56-2-)7181239} \setoffice{Oficina 27} \letterhead \begin{instructions} \begin{itemize} \item \strong{% Esta prueba consta de cuatro problemas, usted debe responder tres de ellos. La elección es suya y debe indicarla en la hoja. Si responde los cuatro, se considerarán los tres peor evaluados. }\pointvalue{Cada problema vale $2,0$ puntos.} \item Fundamente su razonamiento; haga esquemas explicatorios. [\dots] \item Las situaciones y los valores numéricos de los datos suministrados [\dots] \end{itemize} \end{instructions} \begin{examquestions} \section{Insectos} Algunos coleópteros pueden proyectarse verticalmente [\dots] \section{Esferas de aluminio y fierro} La masa de un metro cúbio de Al es $2,70 \times 10^3$~kg [\dots] %better to use siunitx \end{examquestions} \end{document}
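For completeness, the four Misplaced \noalign errors in the question come from the four \hline commands used outside any table: \hline expands to \noalign material that is legal only inside tabular, array, or a raw \halign. A minimal document (our own snippet, not from either answer) isolates the problem:

```latex
\documentclass{article}
\begin{document}
Some text before the rule.
% \hline  % uncommenting this line reproduces "Misplaced \noalign"
\hrule    % a plain TeX rule is the drop-in replacement between paragraphs
\end{document}
```

Replacing \hline with \hrule, or with \noindent\rule{\linewidth}{0.4pt} as in the second answer, compiles cleanly.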
Electronic structure and elastic properties of Y(n+1)Co(3n+5)B(2n) (n = 1, 2, 3, ∞). We have studied the electronic structure and elastic properties of Y(n+1)Co(3n+5)B(2n) (space group P6/mmm), where n = 1, 2, 3 and ∞, using ab initio calculations. These ternary borides exhibit a bulk-modulus-to-C(44) ratio from 1.6 to 1.9, which is rather unusual for ceramics. This may be understood on the basis of the electronic structure: predominantly covalent-ionic YCo(3)B(2) layers are interleaved with predominantly metallic YCo(5) layers. Covalent-ionic bonding between B and Co may give rise to a large bulk modulus, while weak coupling between the YCo(3)B(2) and YCo(5) layers may be responsible for the low C(44) value. On the basis of the similarity in electronic structure and elasticity data, it is reasonable to assume that the Y(n+1)Co(3n+5)B(2n) compounds investigated here may exhibit similar properties to the so-called MAX phases (Barsoum 2000 Prog. Solid State Chem. 28 201).
--- abstract: 'We prepare a maximally entangled state of two ions and couple both ions to the mode of an optical cavity. The phase of the entangled state determines the collective interaction of the ions with the cavity mode, that is, whether the emission of a single photon into the cavity is suppressed or enhanced. By adjusting this phase, we tune the ion–cavity system from sub- to superradiance. We then encode a single qubit in the two-ion superradiant state and show that this encoding enhances the transfer of quantum information onto a photon.' author: - 'B. Casabone' - 'K. Friebe' - 'B. Brandst[ä]{}tter' - 'K. Sch[ü]{}ppert' - 'R. Blatt' - 'T. E. Northup' bibliography: - 'bibliography.bib' - 'misc\_arxiv.bib' nocite: '[@Nielsen2000; @Efron93; @Tan99]' title: 'Enhanced Quantum Interface with Collective Ion-Cavity Coupling' --- Sub- and superradiance are fundamental effects in quantum optics arising in systems that are symmetric under the interchange of any pair of particles [@Dicke54; @Gross82; @Garraway11]. Superradiance has been widely studied in many-atom systems, in which effects such as a phase transition [@Baumann10; @Baden14] and narrow-linewidth lasing [@Bohnet12] have recently been observed. For few-atom systems, each atom’s state and position can be precisely controlled, and thus collective emission effects such as Rydberg blockade [@Lukin01] and the Lamb shift [@Meir13] can be tailored. In a pioneering experiment using two trapped ions, variation of the ions’ separation allowed both sub- and superradiance to be observed, with the excited-state lifetime extended or reduced by up to 1.5%  [@DeVoe1996]. The contrast was limited because spontaneous emission from the ions was not indistinguishable, as the ions’ separation was on the order of the wavelength of the emitted radiation. [This limitation can be overcome by observing preferential emission into a single mode, such as the mode defined by incident radiation [@Dicke54] or by an optical cavity. In a cavity setting, indistinguishability is guaranteed]{} when the emitters are equally coupled to the mode, even if they are spatially separated. Subradiance corresponds to a suppressed interaction of the joint state of the emitters with the cavity mode, while for the superradiant state, the interaction is enhanced. In the context of quantum networks [@Kimble08a; @Duan10], superradiance can improve a [ quantum interface when one logical qubit is encoded across $N$ physical qubits. In the DLCZ protocol for heralded remote entanglement, efficient retrieval of stored photons is based on superradiance [@Duan01; @Oliveira14]. Superradiance can also improve the performance of a deterministic, cavity-based interface, which enables the direct transmission of quantum information between network nodes [@Cirac97]. If a qubit is encoded in the state $\frac{1}{\sqrt{N}}\sum_i^N|\downarrow_1...\uparrow_i...\downarrow_N\rangle$, the coupling rate to the cavity is enhanced from the single-qubit rate $g$ to the effective rate $g \sqrt{N}$, relaxing the technical requirements for strong coupling between light and matter [@Lamata11]. This state corresponds to the first step in the superradiant cascade described by Dicke [@Dicke54]. In contrast, subradiant states are antisymmetrized, resulting in suppressed emission. 
From a quantum-information perspective, subradiant states are interesting because they span a decoherence-free subspace [@Plenio99; @Beige00; @Lidar03].]{} A subradiant state of two superconducting qubits coupled to a cavity has recently been prepared [@Filipp11]. ![(a) Two $^{40}$Ca$^{+}$ ions in a linear Paul trap couple with equal strength to the mode of a high-finesse optical cavity. A magnetic field orthogonal to the cavity axis defines the quantization axis. Quantum information stored in the ions is manipulated using two 729 nm beams: the global beam couples to both ions, while the addressing beam is focused onto one ion. A 393 nm laser beam drives a cavity-mediated Raman transition, generating a single photon in the cavity. At the cavity output, two wave plates ($\lambda/2$, $\lambda/4$) select the basis in which photon polarization is analyzed. Two avalanche photodiodes (APD1 and APD2) detect the horizontally (H) or vertically (V) polarized photons at the output of a polarizing beamsplitter (PBS). (b) Populations of the states ${\ensuremath{\,|S\rangle}}{\ensuremath{\,|S\rangle}}$ (red diamonds), ${\ensuremath{\,|D\rangle}}{\ensuremath{\,|D\rangle}}$ (blue circles), and $ {\ensuremath{\,|S\rangle}}{\ensuremath{\,|D\rangle}}$ or ${\ensuremath{\,|D\rangle}}{\ensuremath{\,|S\rangle}}$ (green triangles) as a function of the M[ø]{}lmer–S[ø]{}rensen gate duration. After 55 $\mu$s (dashed vertical line) a maximally entangled state is generated. Solid lines indicate the ideal time evolution of the gate [operation [@Benhelm08]. (c) Oscillations in the parity of the ion populations as a function of the phase of a $\pi$/2 pulse on the ${\ensuremath{\,|S\rangle}} \leftrightarrow {\ensuremath{\,|D\rangle}}$ transition, following entanglement. The dashed vertical line at phase $1.2\,\pi$ corresponds to ${\ensuremath{\,|\Psi^+\rangle}}$. ]{} Error bars represent projection noise.[]{data-label="fig1"}](fig1.pdf){width="47.50000%"} Here, we generate collective states of two ions coupled to an optical cavity and use a state [ that maximizes the coupling rate]{} to improve ion–photon quantum information transfer. Our system is described by the Tavis–Cummings Hamiltonian [@Tavis68], the interaction term of which is $$\label{TCH} H_\text{int} = \hbar g \left( \sigma^{(1)}_{-} + e^{i\zeta} \sigma^{(2)}_{-}\right)a^{\dagger} + \text{h.~c.},$$ where $\sigma^{(j)}_{-}$ is the lowering operator for the $j$th ion, $\zeta$ represents a relative phase [@SM], and $a^\dagger$ is the creation operator of a photon in the cavity mode. We prepare a maximally entangled two-ion state and tune its emission properties between sub- and superradiance, that is, between a dark state ${\ensuremath{\,|\Psi_\text{sub}\rangle}}$ and a state ${\ensuremath{\,|\Psi_\text{super}\rangle}}$ that couples with enhanced strength $g \sqrt{2}$ to the cavity. Furthermore, we transfer quantum information from a state with enhanced emission probability onto a single photon and show that the process fidelity and efficiency are higher than for a single-ion qubit. In these experiments, two ${\ifmmode ^{40}\text{Ca}^{+} \else $^{40}$Ca$^{+}$~\fi}$ ions [ separated by $5.6~\mu$m]{} are confined along the axis of a linear Paul trap and coupled to an optical cavity in an intermediate coupling regime [@SM]. We position the ions so that $g_1 \approx g_2 $, where $g_j$ represents the coupling strength of the $j$th ion to the cavity [@Casabone13].
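As a worked step (our addition, taking $\zeta = 0$ for clarity), acting with the interaction term of Eq. (\[TCH\]) on ${\ensuremath{\,|\Psi^+\rangle}}$ with the cavity in vacuum makes the collective enhancement explicit: $$H_\text{int}\, {\ensuremath{\,|\Psi^+\rangle}}{\ensuremath{\,|0\rangle}} = \hbar g\, \frac{\sigma^{(1)}_{-}{\ensuremath{\,|S\rangle}}{\ensuremath{\,|D\rangle}} + \sigma^{(2)}_{-}{\ensuremath{\,|D\rangle}}{\ensuremath{\,|S\rangle}}}{\sqrt{2}}\, a^{\dagger}{\ensuremath{\,|0\rangle}} = \sqrt{2}\, \hbar g\, {\ensuremath{\,|D\rangle}}{\ensuremath{\,|D\rangle}}{\ensuremath{\,|1\rangle}},$$ so the symmetric state couples at the enhanced rate $g\sqrt{2}$, while the same calculation for the antisymmetric combination $\left({\ensuremath{\,|S\rangle}}{\ensuremath{\,|D\rangle}} - {\ensuremath{\,|D\rangle}}{\ensuremath{\,|S\rangle}}\right)/\sqrt{2}$ gives zero: that state is dark.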
In a cavity-mediated Raman process, each ion prepared in a state from the $4^2S_{1/2}$ manifold produces a single cavity photon [@Barros09]. The process is driven both by a laser at $393$ nm detuned from the $4^2S_{1/2}-4^2P_{3/2}$ transition and by the cavity, whose detuning from the 854 nm $4^2P_{3/2}-3^2D_{5/2}$ transition satisfies a Raman resonance condition [@Stute12a]. Together, laser and cavity provide the interaction term of Eq. (\[TCH\]), in which the relative phase $\zeta$ between the ions’ coupling arises from the angle between the Raman beam and the ion-trap axis [@SM]. Photons leave the cavity preferentially through one mirror and are detected on photodiodes (Fig. \[fig1\]a). Entanglement between the ions is generated using a ‘global’ 729 nm laser beam (Fig. \[fig1\]a) that couples with equal strength to both ions on the $4^2S_{1/2}-3^2 D_{5/2}$ quadrupole transition. The target state $${\ensuremath{\,|\Psi^+\rangle}} \equiv \left({\ensuremath{\,|S\rangle}}{\ensuremath{\,|D\rangle}}+{\ensuremath{\,|D\rangle}}{\ensuremath{\,|S\rangle}}\right)/\sqrt{2}$$ is prepared via a M[ø]{}lmer–S[ø]{}rensen gate operation followed by a $\pi / 2 $ rotation, where ${\ensuremath{\,|S\rangle}} \equiv {\ensuremath{\,|4^2S_{1/2}, m_j=-1/2\rangle}}$ and ${\ensuremath{\,|D\rangle}} \equiv {\ensuremath{\,|3^2D_{5/2}, m_j=-1/2\rangle}}$. [ In the M[ø]{}lmer–S[ø]{}rensen gate, a bichromatic field that drives blue and red motional sidebands generates a spin-dependent force, coupling the ion’s motion and internal state [@Sorensen99].]{} Fig. \[fig1\]b shows the evolution of the two-ion state populations during application of the gate. [ A maximally entangled state ${\ensuremath{\,|\Phi\rangle}}= \big ({\ensuremath{\,|S\rangle}} {\ensuremath{\,|S\rangle}} +i{\ensuremath{\,|D\rangle}} {\ensuremath{\,|D\rangle}}\big )/\sqrt{2}$ is generated for a gate duration of 55 $\mu$s. Subsequently, a $\pi / 2 $ rotation maps ${\ensuremath{\,|\Phi\rangle}}$ to ${\ensuremath{\,|\Psi^+\rangle}}$. A lower bound of 95(2)% on the state fidelity with respect to ${\ensuremath{\,|\Phi\rangle}}$ is determined by varying the phase of the $\pi / 2 $ rotation and measuring the parity of the ions’ populations, which oscillates as a function of phase (Fig. \[fig1\]c) [@Sackett00]. ]{} ![ (a) The two ions are prepared in either a separable state ${\ensuremath{\,|\psi_1\rangle}}$ or ${\ensuremath{\,|\psi_2\rangle}}$ or an entangled state ${\ensuremath{\,|\Psi (\varphi)\rangle}}$ for various values of $\varphi$. The global beam then drives a Raman transition between ${\ensuremath{\,|S\rangle}}$ and ${\ensuremath{\,|D\rangle}}$, generating a single cavity photon for each ion in ${\ensuremath{\,|S\rangle}}$. Since ${\ensuremath{\,|D'\rangle}}$ is decoupled from the cavity interaction, both ${\ensuremath{\,|\psi_1\rangle}}$ and ${\ensuremath{\,|\psi_2\rangle}}$ represent a single ion interacting with the cavity. (b) Ratio $r(\varphi)$ of the probability to detect a photon for ${\ensuremath{\,|\Psi (\varphi)\rangle}}$ to that of ${\ensuremath{\,|\psi_1\rangle}}$ as a function of the phase $\varphi$ for the first 6 $\mu$s of the Raman process. The reference single-ion case is shown as a dashed horizontal line. 
(c) Temporal shape of the photon at the cavity output as a function of detection time $t$, for the entangled states ${\ensuremath{\,|\Psi_{\text{super}}\rangle}}$ (circles) and ${\ensuremath{\,|\Psi_{\text{sub}}\rangle}}$ (diamonds) and the single-ion cases ${\ensuremath{\,|\psi_1\rangle}}$ and ${\ensuremath{\,|\psi_2\rangle}}$ (up and down triangles, respectively). The temporal photon shapes are calculated by normalizing the detected photon counts [ per 1 $\mu$s time bin]{} by the number of photon generation attempts. Data are shown until 20 $\mu$s, the time scale for which enhancement and suppression are most prominent. Lines are simulations. The shaded area represents the time window used in Fig. \[fig2\]b. Error bars represent Poissonian statistics and are mostly smaller than the plot symbols. []{data-label="fig2"}](fig2.pdf){width="47.50000%"} A second, ‘addressing’ 729 nm beam with a waist smaller than the ion–ion separation couples to just one ion. When detuned, this beam induces AC-Stark shifts in the addressed ion, which contribute a phase $\varphi$ to the entangled state [@schindler13]: $$\begin{aligned} {\ensuremath{\,|\Psi (\varphi)\rangle}} \equiv \left({\ensuremath{\,|S\rangle}}{\ensuremath{\,|D\rangle}} + e^{i\varphi} {\ensuremath{\,|D\rangle}}{\ensuremath{\,|S\rangle}}\right)/\sqrt{2}. \label{eq_Psi} \end{aligned}$$ By adjusting the length of the Stark-shift pulse, we shift this phase, which determines the effective coupling $g_\text{eff}$ of ${\ensuremath{\,|\Psi (\varphi)\rangle}}$ to the cavity mode under the action of $H_\text{int}$. Specifically, the superradiant and subradiant states are given by $$\begin{aligned} \label{eq_super} &{\ensuremath{\,|\Psi_{\text{super}}\rangle}} \equiv {\ensuremath{\,|\Psi (\varphi= -\zeta)\rangle}} \\ &{\ensuremath{\,|\Psi_{\text{sub}}\rangle}} \equiv {\ensuremath{\,|\Psi (\varphi= -\zeta+\pi)\rangle}}. \notag\end{aligned}$$ Note that if $\zeta$ were zero, ${\ensuremath{\,|\Psi_{\text{super}}\rangle}}$ and ${\ensuremath{\,|\Psi_{\text{sub}}\rangle}}$ would be the Bell states ${\ensuremath{\,|\Psi^+\rangle}}$ and ${\ensuremath{\,|\Psi^-\rangle}}$, respectively. The Raman process between ${\ensuremath{\,|S\rangle}}$ and ${\ensuremath{\,|D\rangle}} $ generates a single cavity photon from ${\ensuremath{\,|\Psi (\varphi)\rangle}}$, as only one ion is in ${\ensuremath{\,|S\rangle}}$. This photon has a temporal shape initially determined by $g_\text{eff}$ between the two-ion state and the cavity mode. For later times, the shape is determined by the rates of both cavity decay and off-resonant scattering. Varying $g_\text{eff}$ by changing the phase $\varphi$ of ${\ensuremath{\,|\Psi (\varphi)\rangle}}$ thus modifies the temporal shape, that is, the probability to generate the photon early in the Raman process. Ideally, in the absence of scattering, the coupling of ${\ensuremath{\,|\Psi_{\text{sub}}\rangle}}$ to the cavity vanishes ($g_\text{eff}=0$) so that no photon is generated. For ${\ensuremath{\,|\Psi_{\text{super}}\rangle}}$, in contrast, the coupling is maximized such that $g_\text{eff}=g\sqrt{2} $. [ Thus, the probability to generate and detect a photon from ${\ensuremath{\,|\Psi_{\text{super}}\rangle}}$ early in the process is expected to be twice that of one ion. For time scales much shorter than $1/g$, a photon generated in the cavity has not yet been reabsorbed, and therefore, cavity back-action does not play a role. ]{} We now determine this probability for a range of phases $\varphi$. 
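The dependence of the effective coupling on $\varphi$ follows directly from Eq. (1); the short numerical sketch below (ours, with $\zeta = 0$ and $g = 1$ for illustration) reproduces the ideal bounds $r = 2$ and $r = 0$:

```python
# Effective coupling of |Psi(phi)> to the cavity mode, read off from Eq. (1):
# H_int |Psi(phi)>|0> has amplitude g * |1 + exp(i(zeta + phi))| / sqrt(2) on |DD>|1>.
import numpy as np

def g_eff(phi, zeta=0.0, g=1.0):
    return g * abs(1 + np.exp(1j * (zeta + phi))) / np.sqrt(2)

# Early in the Raman process the photon generation rate scales as g_eff**2, so the
# two-ion/single-ion ratio r(phi) ideally ranges from 2 (superradiant) to 0 (subradiant).
for phi in (0.0, np.pi / 2, np.pi):
    print(f"phi = {phi:.2f} rad: g_eff = {g_eff(phi):.3f} g, r = {g_eff(phi)**2:.2f}")
```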
The experimental sequence starts with 1 ms of Doppler cooling. The ions are then optically pumped to ${\ensuremath{\,|S\rangle}}$, followed by 1.3 ms of sideband cooling on the axial center-of-mass mode [@Wineland98]. Next, global and addressing 729 nm pulses generate the state ${\ensuremath{\,|\Psi (\varphi)\rangle}}$. In the last step, the cavity-mediated Raman transition is driven for 55 $\mu$s and photons are detected (Fig. \[fig2\]a). In order to determine whether we achieve enhancement and suppression of the cavity coupling with respect to the single-ion rate $g$, we carry out a reference measurement. For this single-ion case, one of the two ions is hidden in a state ${\ensuremath{\,|D'\rangle}} \equiv {\ensuremath{\,|3^2D_{5/2}, m_j=3/2\rangle}}$ that is decoupled from the Raman process. Thus, the initial state is ${\ensuremath{\,|\psi_{1}\rangle}} \equiv {\ensuremath{\,|S\rangle}}{\ensuremath{\,|D'\rangle}}$ or ${\ensuremath{\,|\psi_{2}\rangle}} \equiv {\ensuremath{\,|D'\rangle}}{\ensuremath{\,|S\rangle}}$. For the states ${\ensuremath{\,|\Psi(\varphi)\rangle}}$, we calculate $\eta(\varphi)$, the probability to detect a photon in the first 6 $\mu$s of the Raman process, an interval in which the effective coupling rate determines the initial slope. For the single-ion cases, we calculate $\eta_{\psi}$, the average value of the photon detection probability for ${\ensuremath{\,|\psi_1\rangle}}$ and ${\ensuremath{\,|\psi_2\rangle}}$ in the same time window. Fig. \[fig2\]b shows the ratio $r(\varphi)=\eta(\varphi)/\eta_{\psi}$ as the phase $\varphi$ is varied. For $\varphi = 0.68 \, \pi$, the experimentally determined minimum, the ratio is 0.22(9): photon generation is strongly suppressed. We therefore identify ${\ensuremath{\,|\Psi({\varphi = 0.68\,\pi})\rangle}}$ with ${\ensuremath{\,|\Psi_{\text{sub}}\rangle}}$. As $\varphi$ is increased, the ratio approaches one, then enters the superradiant regime. A maximum value of $r(\varphi)$ is found for $\varphi = 1.58\, \pi$. For the corresponding state, identified with ${\ensuremath{\,|\Psi_{\text{super}}\rangle}}$, the ratio $r(\varphi)$ is 1.84(4), [ close to its maximum value of two]{}, thus demonstrating strong enhancement in photon generation. For these states ${\ensuremath{\,|\Psi_{\text{sub}}\rangle}}$ and ${\ensuremath{\,|\Psi_{\text{super}}\rangle}}$, we now analyze the temporal photon shapes at the detector (Fig. \[fig2\]c). The temporal shapes corresponding to ${\ensuremath{\,|\psi_1\rangle}}$ and ${\ensuremath{\,|\psi_2\rangle}}$ are considered as a reference; from their overlap, we find the coupling strengths of the two ions, $g_1$ and $g_2$, to be within 10% of one another. Photons generated from ${\ensuremath{\,|\Psi_{\text{super}}\rangle}}$ exhibit a steeper initial slope than the single-ion case, while ${\ensuremath{\,|\Psi_{\text{sub}}\rangle}}$ has a flatter slope. The photon shapes are consistent with enhanced and suppressed coupling to the cavity and are in good agreement with simulations. The simulations are based on numerical integration of the master equation and include imperfect preparation of the initial state, which together with off-resonant scattering accounts for the small but nonzero probability to generate photons from ${\ensuremath{\,|\Psi_{\text{sub}}\rangle}}$. For ${\ensuremath{\,|\Psi_{\text{super}}\rangle}}$, these effects reduce the photon generation probability by about 10% for the first 6 $\mu$s of the process [@SM].
We now describe the implementation of a quantum interface that exploits the enhanced coupling of the superradiant state to the cavity [@Lamata11]. The state ${\ensuremath{\,|\Psi({\varphi})\rangle}}$ as defined in Eq. \[eq\_Psi\] contains two contributions: one from the ground state ${\ensuremath{\,|S\rangle}}$ and the other from ${\ensuremath{\,|D\rangle}}$. We extend this definition so that the ground-state component can be stored in either of two states, that is, in ${\ensuremath{\,|S\rangle}}$ or in ${\ensuremath{\,|S'\rangle}} \equiv {\ensuremath{\,|4^2S_{1/2}, m_j=+1/2\rangle}}$. A logical qubit is encoded in these two states, and this qubit is mapped onto the polarization state of a single cavity photon. To perform the mapping process, we use a phase-stable bichromatic Raman transition that coherently transfers ${\ensuremath{\,|S\rangle}}$ to ${\ensuremath{\,|D\rangle}}$, producing a horizontally polarized photon ${\ensuremath{\,|H\rangle}}$, and ${\ensuremath{\,|S'\rangle}}$ to ${\ensuremath{\,|D\rangle}}$, producing a vertically polarized photon ${\ensuremath{\,|V\rangle}}$ [@Stute13] (Fig. \[fig3\]a). Defining a superposition state $${\ensuremath{\,|\alpha,\beta\rangle}} \equiv \cos \alpha {\ensuremath{\,|S\rangle}} + e^{i\beta} \sin \alpha {\ensuremath{\,|S'\rangle}},$$ the mapping process can be represented by $$\begin{aligned} &\left({\ensuremath{\,|\alpha,\beta\rangle}}{\ensuremath{\,|D\rangle}} + e^{i\varphi} {\ensuremath{\,|D\rangle}} {\ensuremath{\,|\alpha,\beta\rangle}}\right) {\ensuremath{\,|0\rangle}} /\sqrt{2} \nonumber \\ &\quad\mapsto {\ensuremath{\,|D\rangle}}{\ensuremath{\,|D\rangle}} \left(\cos \alpha {\ensuremath{\,|H\rangle}} + e^{i\beta} \sin \alpha {\ensuremath{\,|V\rangle}}\right) , \label{mapping} \end{aligned}$$ where ${\ensuremath{\,|0\rangle}}$ stands for the cavity vacuum and the phase is set to $\varphi = 1.58\pi$, corresponding to ${\ensuremath{\,|\Psi_{\text{super}}\rangle}}$. ![(a) A bichromatic Raman transition maps a superposition of ${\ensuremath{\,|S\rangle}}$ and ${\ensuremath{\,|S'\rangle}}$ onto a superposition of single-photon polarization states ${\ensuremath{\,|H\rangle}}$ and ${\ensuremath{\,|V\rangle}}$. The superposition is encoded either in two entangled ions or in a single ion, with the other ion decoupled in ${\ensuremath{\,|D'\rangle}}$. (b) Process fidelity for ${\ensuremath{\,|\Psi_\text{super}\rangle}}$ (filled blue circles) and ${\ensuremath{\,|\psi_{1}\rangle}}$ (open black circles) as a function of the photon detection time window. Lines are simulations (continuous line: two entangled ions; dashed line: single-ion case). Inset: absolute value of the process matrix $\chi_{ij}$ for ${\ensuremath{\,|\Psi_\text{super}\rangle}}$ reconstructed from photons detected between 2 and 4 $\mu$s, yielding the maximum process fidelity $|\chi_{00}| = 96.0(3)\%$. Error bars are derived from non-parametric bootstrapping. (c) Cumulative process efficiency for ${\ensuremath{\,|\Psi_\text{super}\rangle}}$ (filled blue circles) and ${\ensuremath{\,|\psi_{1}\rangle}}$ (open black circles) as a function of the photon detection time window. Error bars represent Poissonian statistics and are smaller than the plot symbols. 
[]{data-label="fig3"}](fig3.pdf){width="47.50000%"} In order to characterize the mapping, we extract the process matrix $\chi$, which describes the transformation from the input to the output density matrix: $\rho_\text{out} = \sum_{i,j} \chi_{ij} \, \sigma_i \, \rho_\mathrm{in} \, \sigma_j$, where $\sigma_i \in \{\mathbb{1}, \sigma_x, \sigma_y, \sigma_z \}$ are the Pauli operators [@Chuang97]. Following Doppler cooling, optical pumping, and sideband cooling as above, the two ions are prepared in ${\ensuremath{\,|\Psi_\text{super}\rangle}}$. Next, two global 729 nm pulses prepare one of the four orthogonal input states ${\ensuremath{\,|\alpha, \beta\rangle}}$, with $(\alpha, \beta) \in \{(\pi /2, 0), (0, 0), (\pi /4 ,0), (\pi/4, \pi/2)\}$. Finally, the Raman transition is driven and the photon is detected in one of three orthogonal polarization bases  [@James01]. This set of measurements allows $\chi$ to be reconstructed via the maximum likelihood method. As the target mapping corresponds to the identity operation, the process fidelity is given by the matrix entry $\chi_{00}$. For comparison, we carry out reference measurements in which enhancement is not present, for which the ions are prepared in ${\ensuremath{\,|\psi_1\rangle}}$. The mapping process is then given by $$\begin{aligned} {\ensuremath{\,|\alpha,\beta\rangle}}{\ensuremath{\,|D'\rangle}} {\ensuremath{\,|0\rangle}} \mapsto {\ensuremath{\,|D\rangle}}{\ensuremath{\,|D'\rangle}} \left(\cos \alpha {\ensuremath{\,|H\rangle}} + e^{i\beta} \sin \alpha {\ensuremath{\,|V\rangle}}\right). \label{mapping} \end{aligned}$$ Fig. \[fig3\]b shows the process fidelities $\chi_{00}$ for ${\ensuremath{\,|\Psi_\text{super}\rangle}}$ and ${\ensuremath{\,|\psi_1\rangle}}$ as a function of the photon detection time window. Not only is the fidelity of the superradiant case higher for all data points, but also the improvement over the single-ion case increases with the length of the detection window. For a detection time window of 6 $\mu$s, the fidelity is 93.3(3)% for ${\ensuremath{\,|\Psi_\text{super}\rangle}}$ and 90.9(5)% for ${\ensuremath{\,|\psi_1\rangle}}$, indicating that in both cases the logical qubit is correctly mapped onto photon polarization with very high probability. A maximum value of 96.0(3)% is found for ${\ensuremath{\,|\Psi_\text{super}\rangle}}$ for photons detected between 2 and 4 $\mu$s. As the detection window length is increased, $\chi_{00}$ decreases for both cases because the probability for off-resonant excitation to the $4^2P_{3/2}$-manifold increases with time. If such an event happens during the Raman process, the initial state ${\ensuremath{\,|\alpha, \beta\rangle}}$ is randomly projected onto ${\ensuremath{\,|0,0\rangle}}={\ensuremath{\,|S\rangle}}$ or ${\ensuremath{\,|\pi/2,0\rangle}}={\ensuremath{\,|S'\rangle}}$, and the qubit is then mapped onto either ${\ensuremath{\,|H\rangle}}$ or ${\ensuremath{\,|V\rangle}}$, regardless of the information in the initial superposition [@Stute13]. However, while the probability for scattering is the same for both states, photons are produced earlier from ${\ensuremath{\,|\Psi_\text{super}\rangle}}$ because of the enhanced effective coupling. Thus, the improvement in the fidelity stems from an increased probability to generate a photon before scattering occurs. After $55~\mu$s, we find $\chi_{00}=73.4(3)$% for ${\ensuremath{\,|\Psi_\text{super}\rangle}}$ in comparison with 68.7(2)% for ${\ensuremath{\,|\psi_1\rangle}}$. 
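As an aside, the linear-algebra core of such a process reconstruction can be sketched compactly. The following is our simplified single-qubit version using linear inversion via the Pauli transfer matrix, with an assumed test channel standing in for measured data; the actual analysis here uses a maximum-likelihood fit:

```python
# Simplified single-qubit process reconstruction by linear inversion (the paper uses a
# maximum-likelihood fit instead). A small test channel stands in for measured data.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def dm(ket):
    k = np.asarray(ket, dtype=complex).reshape(2, 1)
    return k @ k.conj().T / np.vdot(k, k).real

# Tomographically complete inputs: |0>, |1>, |+>, |+i>
rho_in = [dm([1, 0]), dm([0, 1]), dm([1, 1]), dm([1, 1j])]

def channel(rho, theta=0.2, p=0.05):
    """Assumed imperfection: a small Z rotation plus weak depolarization."""
    U = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * Z
    return (1 - p) * (U @ rho @ U.conj().T) + p * I2 / 2

rho_out = [channel(r) for r in rho_in]

# Recover the channel's action on the Pauli basis by linearity
L = {"I": rho_out[0] + rho_out[1], "Z": rho_out[0] - rho_out[1]}
L["X"] = 2 * rho_out[2] - L["I"]
L["Y"] = 2 * rho_out[3] - L["I"]

# Pauli transfer matrix R_ij = Tr[sigma_i Lambda(sigma_j)] / 2
paulis = [("I", I2), ("X", X), ("Y", Y), ("Z", Z)]
R = np.array([[0.5 * np.trace(si @ L[nj]).real for nj, _ in paulis]
              for _, si in paulis])

# For a target identity process, the process fidelity chi_00 equals Tr[R] / 4
print(f"chi_00 ~ {np.trace(R) / 4:.3f}")
```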
Simulations that take into account detector dark counts, imperfect state initialization, [ different coupling strengths of the ions to the cavity, ]{} and magnetic field fluctuations are in good agreement with the data. We also investigate the cumulative process efficiency $\varepsilon(t)$, defined as the probability to detect a photon before time $t$ (Fig. \[fig3\]c). For $t=6~\mu$s, the process efficiency for ${\ensuremath{\,|\Psi_\text{super}\rangle}}$ is $\varepsilon_\text{s}(t) = 0.33(1)$%, while for ${\ensuremath{\,|\psi_1\rangle}}$, it is $\varepsilon_1 (t) = 0.17(1) $%, corresponding to a ratio $\varepsilon_\text{s}/\varepsilon_1$ of 1.94(13). The ratio decreases monotonically with $t$, and by $t=55~\mu$s, it is 1.34(5). While the enhanced coupling modifies the temporal shape of the photons early in the process, for longer times its effect on the cumulative process efficiency is small, such that the ratio is expected to approach one. A single photon generated in the cavity is detected with an efficiency of [ 8(1)%]{}, due to losses in the cavity mirrors, optical path losses, and the detection efficiency of the avalanche photodiodes. [ The enhanced fidelity and efficiency of quantum state transfer in the superradiant regime can be understood in terms of a stronger effective ion–cavity coupling. Further improvements are thus expected by encoding the logical qubit across more physical qubits, as in a planar microfabricated trap [@Cetina13]. Maximum enhancement would be achieved by encoding not just one but $N/2$ excitations in a symmetrized $N$-ion state. The cooperative emission rate would then be $g\sqrt{\frac{N}{2}\left(\frac{N}{2}+1\right)}$, which scales with $N$ for large $N$, as observed in atomic ensembles [@Baumann10; @Baden14; @Bohnet12]. However, it remains an open question how to transfer quantum information between such states and single photons, as required for a quantum transducer [@Lamata11]. Finally, we emphasize two advantages of ions as qubits in these experiments: first, that the coupling strength of each ion to the cavity can be precisely controlled, and second, that a universal set of gate operations [@Haeffner08] allows preparation of a range of states, from sub- to superradiant. By tuning over this range, one could selectively turn off and on the coupling of logical qubits to the cavity. This technique would provide a versatile tool for addressable read–write operations in a quantum register. ]{} We thank L. Lamata and F. Ong for helpful discussions and A. Stute for early contributions to the experiment design. We gratefully acknowledge support from the Austrian Science Fund (FWF): Project Nos. F4003 and F4019, the European Commission via the Atomic QUantum TEchnologies (AQUTE) Integrating Project, and the Institut für Quanteninformation GmbH. [ While preparing this manuscript, we learned of related work with two neutral atoms coupled to a cavity [@Reimann14].]{} Appendix ======== System parameters ================= Two ${\ifmmode ^{40}\text{Ca}^{+} \else $^{40}$Ca$^{+}$~\fi}$ ions are confined in a linear Paul trap and coupled to an optical cavity. The cavity decay rate is $\kappa = 2\pi \times 50$ kHz, and the atomic decay rate is $\gamma = 2\pi \times 11.5$ MHz, which is the sum of the decay channels from $\mathbb{P}$ to $ \mathbb D$ and from $\mathbb P$ to $ \mathbb S$, where the manifolds are defined as $\mathbb P \equiv 4^2P_{3/2} $, $ \mathbb D\equiv 3^2D_{5/2}$, and $\mathbb S\equiv 4^2S_{1/2}$.
The coupling strength of a single ion to the cavity mode on the $\mathbb {P - D}$ transition is $g^{\,}_{PD} = 2 \pi \times 1$ MHz. A Raman beam with Rabi frequency $\Omega$ is used to drive the $\mathbb {S - P}$ transitions. The cavity parameters are described in further detail in Ref. [@Casabone13Sup]. The three-level system $\mathbb S$-$\mathbb P$-$\mathbb D$ can be mapped onto an effective two-level system $\mathbb S$-$\mathbb D$ if a Raman resonance condition is met, i.e., when both Raman beam and cavity resonance have the same detuning from $\mathbb P$ [@Stute12a]. During a cavity-mediated process, the electronic population of the ion is coherently transferred from a state in $\mathbb S$ to a state in $\mathbb D$, generating a cavity photon. For sufficiently large $\Delta$, negligible population is transferred to $\mathbb P$. The rates of the effective two-level system are [$g =\frac{\xi_{SD} \, \Omega \, g^{\,}_{PD} }{2 \Delta}$]{} and $\gamma_{\text{eff}}=\gamma \left(\frac{\Omega}{2 \Delta}\right)^2$. Here, $\Delta\sim 400$ MHz [and $\xi_{SD}$ is a geometric factor that takes into account both the projection of the vacuum-mode polarization onto the atomic dipole moment and the Clebsch-Gordan coefficients of the $\mathbb {S - P}$ and $\mathbb {D - P}$ transitions [@Stute12a].]{} Ten individual Raman transitions between $\mathbb S$ and $\mathbb D$ can be identified when all Zeeman sublevels are considered. A magnetic field of $B=4.5$ G, orthogonal to both the cavity axis and the wavevector of the Raman beam, lifts the degeneracy of the Zeeman sublevels such that each transition can be individually addressed [@Stute12a]. The strength of the magnetic field is determined via spectroscopy of the $\mathbb S-\mathbb D$ transitions. In the main text, two experiments are presented. In the first experiment, we examine the probability to generate a photon as a function of the phase of the two-ion entangled state. To perform the experiment, a Raman beam with Rabi frequency [$\Omega=19$ MHz]{} drives the ${\ensuremath{\,|S\rangle}} \equiv {\ensuremath{\,|\mathbb S, m_j=-1/2\rangle}}$ to ${\ensuremath{\,|D\rangle}} \equiv {\ensuremath{\,|\mathbb D, m_j=-1/2\rangle}}$ transition. [For $\Omega=19$ MHz, the rates of the effective two-level system are $\gamma_{\text{eff}}=2\pi \times 6$ kHz and $g_{\text{eff}}=2\pi \times 18$ kHz. The cavity decay rate $\kappa = 2\pi \times 50$ kHz is the fastest of the three, placing the system in the bad cavity regime.]{} In the second experiment, we use a superradiant state to enhance the performance of a cavity-based quantum interface. In this case, a bichromatic Raman beam with Rabi frequencies [$19$ and $9.5$ MHz]{} drives the ${\ensuremath{\,|S\rangle}}$ to $ {\ensuremath{\,|D\rangle}}$ and ${\ensuremath{\,|S'\rangle}} \equiv {\ensuremath{\,|\mathbb S, m_j=1/2\rangle}}$ to ${\ensuremath{\,|D\rangle}}$ transitions. These transitions do not have equal transition probabilities and, additionally, the orthogonally polarized photons couple differently to the cavity because of the orientation of the cavity with respect to the magnetic field [@Stute13]. By choosing the Rabi frequency for the ${\ensuremath{\,|S'\rangle}}$ to ${\ensuremath{\,|D\rangle}}$ transition to have twice the value of the ${\ensuremath{\,|S\rangle}}$ to ${\ensuremath{\,|D\rangle}}$ transition, these differences are balanced and both transitions are driven with equal strength.
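These expressions can be cross-checked numerically (our arithmetic; the value of the geometric factor $\xi_{SD}$ below is inferred from the quoted rates rather than stated in the text):

```python
# Numerical cross-check of the effective two-level rates quoted above.
import numpy as np

two_pi = 2 * np.pi
gamma = two_pi * 11.5e6   # P-state decay rate, rad/s
g_PD = two_pi * 1.0e6     # single-ion coupling on the P-D transition, rad/s
Omega = two_pi * 19e6     # Raman Rabi frequency, rad/s
Delta = two_pi * 400e6    # Raman detuning, rad/s

gamma_eff = gamma * (Omega / (2 * Delta)) ** 2
print(f"gamma_eff = 2pi x {gamma_eff / two_pi / 1e3:.1f} kHz")  # ~6.5 kHz, cf. quoted 6 kHz

g_quoted = two_pi * 18e3  # quoted effective coupling, rad/s
xi_SD = g_quoted * 2 * Delta / (Omega * g_PD)
print(f"implied geometric factor xi_SD = {xi_SD:.2f}")  # ~0.76 (our inference)
```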
In both experiments, the Rabi frequencies are first determined experimentally via Stark-shift measurements with an uncertainty on the order of [10%]{}. Next, in simulations of single-photon generation, we adjust the Rabi frequencies within the experimental uncertainty range and find values for which the temporal photon shapes have the best agreement with data. Relative Raman phase ==================== In the first experiment, the part of the Hamiltonian that describes the interactions of the Raman laser and the cavity with the ions is $$\begin{aligned} H_\text{int} = & g^{\,}_{PD} \, \big (\sigma_{PD}^{(1)} - \sigma^{(2)}_{PD} \big )a^\dagger + \nonumber \\ & \Omega \, \big( e^{i\phi_{R_1}}\sigma^{(1)}_{SP} + e^{i\phi_{R_2}}\sigma^{(2)}_{SP} \big) + \text{h.c.}, \label{H_int} \tag{sm 1}\end{aligned}$$ where $\sigma^{(i)}_{PD} \equiv {\ensuremath{\,|D\rangle}}{\ensuremath{\langle P|\,}}$, $\sigma^{(i)}_{SP} \equiv {\ensuremath{\,|P\rangle}}{\ensuremath{\langle S|\,}}$, $a^{\dagger}$ is the photon creation operator, and $\phi_{R_i}$ is the optical phase of the Raman beam when interacting with the $i$th ion. Here, the rotating wave approximation has been used and an appropriate transformation to the interaction picture has been applied such that the Hamiltonian is time-independent. In this model, both ions are coupled to the cavity with the same strength, and the minus sign between the first and the second terms of Eq. (\[H\_int\]) accounts for the fact that in our cavity system the two ions are located in adjacent antinodes [@Casabone13]. When the Raman resonance condition is met, Eq. (\[H\_int\]) can be rewritten as Eq. (1), identifying $\zeta = (\phi_{R_1} - \phi_{R_2})$ and $\sigma_-={\ensuremath{\,|D\rangle}}{\ensuremath{\langle S|\,}}$. The relative phase $\zeta$ is given by $\zeta=2 \pi\, d \, \sin\theta / \lambda$, where $d$ is the ions’ separation, $\theta \approx 45^\circ$ is the angle between trap axis and Raman beam, and $\lambda = 393$ nm is the wavelength of the Raman beam. Initial state preparation ========================= To generate ${\ensuremath{\,|\Psi(\phi)\rangle}} = \left({\ensuremath{\,|S\rangle}}{\ensuremath{\,|D\rangle}} + e^{i\phi} {\ensuremath{\,|D\rangle}}{\ensuremath{\,|S\rangle}}\right)/\sqrt{2}$, we first produce a maximally entangled state [${\ensuremath{\,|\Phi\rangle}}= \big ({\ensuremath{\,|S\rangle}} {\ensuremath{\,|S\rangle}} +i{\ensuremath{\,|D\rangle}} {\ensuremath{\,|D\rangle}}\big )/\sqrt{2}$]{} by means of a M[ø]{}lmer–S[ø]{}rensen gate[-operation]{} [@Sorensen99]. To perform the gate, we off-resonantly drive the blue and red sidebands of the axial center-of-mass motion of the ${\ensuremath{\,|S\rangle}} \leftrightarrow {\ensuremath{\,|D\rangle}}$ transition with a detuning $\delta$. The ions are initialized in ${\ensuremath{\,|S\rangle}}{\ensuremath{\,|S\rangle}}$. After a time $T=1/\delta=55~\mu$s, with a detuning $\delta=18.2$ kHz, the two ions are prepared in the entangled state [${\ensuremath{\,|\Phi\rangle}}$]{} (see Fig. (1b)). For comparison, the coherence time for information stored in the $\mathbb {S-D}$ qubit is 475 $\mu$s. We calculate the fidelity [$F_{\Phi}$]{} of the experimental state with respect to [${\ensuremath{\,|\Phi\rangle}}$ ]{} in the following way  [@Benhelm08]. After [${\ensuremath{\,|\Phi\rangle}}$ ]{}is created, we apply an ‘analysis’ $\pi/2$ pulse on the ${\ensuremath{\,|S\rangle}} \leftrightarrow {\ensuremath{\,|D\rangle}}$ transition with a variable phase with respect to the previous entangling pulse.
Subsequently, the parity operator $P=p_{SS} + p_{DD} - p_{SD,DS}$ is calculated from fluorescence measurements of the ion populations, where $p_{SS}$ and $p_{DD}$ are the probabilities to find both ions in ${\ensuremath{\,|S\rangle}} {\ensuremath{\,|S\rangle}}$ and ${\ensuremath{\,|D\rangle}} {\ensuremath{\,|D\rangle}}$, respectively, and $p_{SD,DS}$ is the probability to find one ion in ${\ensuremath{\,|S\rangle}}$ and the other in ${\ensuremath{\,|D\rangle}}$. Fig. (1c) shows the parity $P$ as a function of the phase of the analysis pulse. If $A$ is the amplitude of the parity oscillation, then the fidelity [$F_{\Phi}$]{} is bounded from below via [$F_{\Phi}\ge A$]{}. From a fit to the data of Fig. (1c), we calculate that [${\ensuremath{\,|\Phi\rangle}}$]{} is created with a fidelity of at least 95(2)%. After the state [${\ensuremath{\,|\Phi\rangle}}$]{} is generated, a $\pi/2$-pulse on the ${\ensuremath{\,|S\rangle}} \leftrightarrow {\ensuremath{\,|D\rangle}}$ transition rotates the state to $({\ensuremath{\,|S\rangle}} {\ensuremath{\,|D\rangle}}+{\ensuremath{\,|D\rangle}} {\ensuremath{\,|S\rangle}})/\sqrt{2}$, identified in Fig. (1c). Finally, to convert $({\ensuremath{\,|S\rangle}} {\ensuremath{\,|D\rangle}} +{\ensuremath{\,|D\rangle}} {\ensuremath{\,|S\rangle}})/\sqrt{2}$ to ${\ensuremath{\,|\Psi(\phi)\rangle}}$, we perform a single-ion rotation, introducing AC-Stark shifts to one ion using the addressing beam [@schindler13]. The phase $\phi$ of ${\ensuremath{\,|\Psi(\phi)\rangle}}$ is proportional to the duration $\tau$ of the Stark-shift pulse, where the proportionality constant depends on the Rabi frequency of the addressing beam, $\Omega_{\text{AC}}$, and the detuning of the laser from the ${\ensuremath{\,|S\rangle}} \leftrightarrow {\ensuremath{\,|D\rangle}}$ transition, $\delta_\text{AC}$. We choose $\delta_\text{AC}=10$ MHz and $\Omega_\text{AC}=8.6$ MHz for a rotation that has a period of $5.3~ \mu$s. The implementation of the Stark-shift gate is demonstrated via the generation of the state ${\ensuremath{\,|S\rangle}} {\ensuremath{\,|D\rangle}}$. After optical pumping of both ions to ${\ensuremath{\,|S\rangle}} {\ensuremath{\,|S\rangle}}$, we apply a $\pi/2$ rotation on the ${\ensuremath{\,|S\rangle}} \leftrightarrow {\ensuremath{\,|D\rangle}}$ transition using the global beam. Next, the Stark-shift gate is applied to one ion for a time $\tau$. Subsequently, another global $\pi/2$ rotation on the ${\ensuremath{\,|S\rangle}} \leftrightarrow {\ensuremath{\,|D\rangle}}$ transition is applied with the same phase as the first $\pi/2$ rotation. Finally, ${\ensuremath{\,|S\rangle}} {\ensuremath{\,|S\rangle}}$, ${\ensuremath{\,|D\rangle}} {\ensuremath{\,|D\rangle}}$ and ${\ensuremath{\,|S\rangle}} {\ensuremath{\,|D\rangle}}$ populations are extracted via fluorescence detection. The results are shown in Fig. (\[fig4\]) as a function of $\tau$. After $2.6~\mu$s, [the ions are in a state with a fidelity of 91(4)% with respect to ${\ensuremath{\,|S\rangle}} {\ensuremath{\,|D\rangle}}$.]{} [There are at least two other methods by which one could tune the phase $\phi$ in the experiment. First, the angle of the Raman beam could be changed. Second, the ion–ion separation could be changed by means of the voltages that determine the trap potential. Both methods would shift the relative phase seen by each ion.
In initial experiments, we used the second method; however, when the ion–ion separation is adjusted to correspond to a desired phase, both ions must also remain equally and near-maximally coupled to the cavity [@Casabone13], and it is not straightforward to satisfy both conditions simultaneously. In practice, we found the Stark-shift gate described above to be the most precise and reproducible approach. ]{} To generate the single-ion cases ${\ensuremath{\,|\psi_1\rangle}}$ and ${\ensuremath{\,|\psi_2\rangle}}$, we use the addressing beam. In this case, the frequency of the addressing beam is set to drive the ${\ensuremath{\,|S\rangle}} \leftrightarrow {\ensuremath{\,|D'\rangle}}\equiv {\ensuremath{\,|\mathbb D, m_j=3/2\rangle}}$ transition on resonance. As the addressing beam interacts with the second ion, a $\pi$-pulse transfers the state ${\ensuremath{\,|S\rangle}} {\ensuremath{\,|S\rangle}}$ to ${\ensuremath{\,|\psi_1\rangle}} = {\ensuremath{\,|S\rangle}} {\ensuremath{\,|D'\rangle}}$. To generate ${\ensuremath{\,|\psi_2\rangle}} = {\ensuremath{\,|D'\rangle}} {\ensuremath{\,|S\rangle}}$, we subsequently apply a $\pi$-rotation on the ${\ensuremath{\,|S\rangle}} \leftrightarrow {\ensuremath{\,|D'\rangle}}$ transition to both ions, such that ${\ensuremath{\,|S\rangle}} {\ensuremath{\,|D'\rangle}}$ is rotated to ${\ensuremath{\,|D'\rangle}} {\ensuremath{\,|S\rangle}}$. The single-ion cases are prepared with a fidelity of 95(3)%. ![Populations of the states ${\ensuremath{\,|S\rangle}}{\ensuremath{\,|S\rangle}}$ (red diamonds), ${\ensuremath{\,|D\rangle}}{\ensuremath{\,|D\rangle}}$ (blue circles) and ${\ensuremath{\,|S\rangle}}{\ensuremath{\,|D\rangle}}$ (green triangles) as a function of the duration of the AC-Stark shift pulse. After $2.6~\mu$s, state ${\ensuremath{\,|S\rangle}} {\ensuremath{\,|D\rangle}}$ is generated with a fidelity of 91(4)%. Error bars represent projection noise.[]{data-label="fig4"}](fig4.pdf){width="47.50000%"} Two-ion crystal as a single-photon source ========================================= We have previously demonstrated that one ion in $\mathbb S$ produces a single photon when a Raman transition between $\mathbb S$ and $\mathbb D$ is driven [@Barros09]. In the experiments presented in the main text, we consider two ions in the entangled state ${\ensuremath{\,|\Psi(\phi)\rangle}}$, in which exactly one ion is in ${\ensuremath{\,|S\rangle}}$. When a Raman transition is driven between ${\ensuremath{\,|S\rangle}}$ and ${\ensuremath{\,|D\rangle}}$, the entangled state ${\ensuremath{\,|\Psi(\phi)\rangle}}$ is transferred to ${\ensuremath{\,|D\rangle}} {\ensuremath{\,|D\rangle}}$ and a single photon is expected. However, imperfect preparation of ${\ensuremath{\,|\Psi(\phi)\rangle}}$ leaves some population in ${\ensuremath{\,|S\rangle}} {\ensuremath{\,|S\rangle}}$, resulting in the generation of two photons. In order to estimate the number of two-photon detection events, we consider detector dark counts and imperfect preparation of the ions’ state. The following four events are relevant and contribute to two-photon detections: 1. State ${\ensuremath{\,|S\rangle}} {\ensuremath{\,|S\rangle}}$ is generated; two photons are produced and detected. 2. State ${\ensuremath{\,|S\rangle}} {\ensuremath{\,|S\rangle}}$ is generated; two photons are produced, one is lost and the other is detected together with a dark count. 3. State ${\ensuremath{\,|\Psi (\phi)\rangle}}$ is generated; one photon is produced and is detected together with a dark count. 4.
Two dark counts are detected. State tomography reveals that in 3(2)% of attempts to generate $|\Psi(\phi)\rangle$, the state $|S\rangle |S\rangle$ is prepared instead. The probability to detect one photon during the $55~\mu$s duration of the Raman process is 5.4(3)%, which is mainly limited by cavity absorption and detector efficiencies [@Stute12a]. Detector dark count rates are 3.2(1) s$^{-1}$ and 3.8(1) s$^{-1}$ for the two avalanche photodiodes. With these values, we expect one two-photon event in $8.2(8) \times 10^3$ attempts to generate a single photon. To measure two-photon events, we generate $\big (|S\rangle |D\rangle + |D\rangle |S\rangle \big ) / \sqrt{2}$ and $\big (|S'\rangle |D\rangle + |D\rangle |S'\rangle \big ) / \sqrt{2}$ and drive a cavity-mediated Raman transition such that a horizontally or a vertically polarized photon is generated. Photons leaving the cavity cross a half-wave plate aligned such that 50% of the light is reflected and 50% transmitted by a polarizing beam splitter. Photons are detected by the two avalanche photodiodes at each beam-splitter output, and the second-order correlation function $g^{(2)}(0)$ is calculated. After 223,106 attempts to generate photons, 28 two-photon events were measured, while 27(3) two-photon events were expected from the considerations above. The observed number of two-photon detection events is thus consistent with single-photon generation. Process fidelity ================ Tomography of the state-mapping process consists of state tomography of the photonic output qubit for four orthogonal input states. Measurements in the three bases of horizontal/vertical, diagonal/antidiagonal and right/left circular polarization constitute state tomography of the photonic qubit [@Nielsen2000]. Each basis is measured a second time with the APDs swapped by rotating the $\lambda/2$- and $\lambda/4$-waveplates. The analysis is done with the sum of the two measurements to compensate for the different detection efficiencies of the two APDs. Process matrices $\chi_{ij}$ are reconstructed using a maximum-likelihood method. The process fidelity $\chi_{00}$ is given by the overlap of the reconstructed process matrix with the target process (i.e., the identity operation). Uncertainties in the process fidelities are given as one standard deviation, derived from non-parametric bootstrapping assuming a multinomial distribution [@Efron93]. Simulations =========== Numerical simulations are based on the Quantum Optics and Computation Toolbox for MATLAB [@Tan99] via integration of the master equation. We simulate two $^{40}$Ca$^{+}$ ions interacting with an optical cavity and a Raman beam. For each ion, we consider six levels: $|S\rangle$, $|S'\rangle$, $|D\rangle$, $|D'\rangle$, $|\mathbb P, m_j=-1/2\rangle$ and $|\mathbb P, m_j=1/2\rangle$. For the optical cavity, we consider two orthogonal modes $a$ and $b$ with the Fock-state basis truncated at 2 for each mode. Additional input parameters for the simulations are the cavity parameters $g$, $\kappa$, and $\gamma$; the magnetic field amplitude $B$; the Rabi frequency $\Omega$ of the Raman laser; the Raman laser linewidth; and the output path losses.
The laser linewidth, atomic decay, and cavity decay are introduced in Lindblad form. The Raman laser linewidth is set to the measured value of $30$ kHz. For the simulation of the first experiment, the initial density matrix $\rho_0$ is assigned 5% of the population, distributed equally between $|S\rangle|S\rangle$ and $|D\rangle|D\rangle$, and the coherence terms between $|S\rangle|S\rangle$ and $|D\rangle|D\rangle$ are set to zero, consistent with measurements. The rest of the population is distributed between $|S\rangle|D\rangle$ and $|D\rangle|S\rangle$, preserving the coherences such that $\rho_0$ has an overlap of 95% with $|\Psi (\phi)\rangle$. In the case of the second experiment, the superposition state $$|\alpha,\beta\rangle \equiv \cos \alpha |S\rangle + e^{i\beta} \sin \alpha |S'\rangle,$$ is introduced via an operator $\hat M$ that performs the mapping $$|S\rangle \to \cos \alpha |S\rangle + e^{i\beta} \sin \alpha |S'\rangle$$ for each ion. This operator $\hat M$ is applied to $\rho_0$. From the integration of the master equation up to a time $t$, we obtain the time-dependent density matrix $\rho(t)$. The mean photon numbers of the cavity modes are calculated via the expectation values $\langle a^\dagger a (t)\rangle$ and $\langle b^\dagger b (t) \rangle$. Contributions of the detector dark counts are added to the mean photon number. Errors in the generation of the superposition state and magnetic-field fluctuations are introduced by scaling the off-diagonal terms of $\rho(t)$ by a factor of 0.96 and by the Gaussian decay $e^{-(2t/\tau)^2}$, respectively, where $\tau=190~\mu$s is the coherence time of the qubit stored in $|S\rangle$ and $|S'\rangle$. Finally, the coupling of one of the ions to the cavity mode is reduced to 90% of its maximum value. This reduction is based on measured drifts over the course of the experiment. Fig. (2c) shows the simulated and experimental temporal photon shapes as a function of detection time. In order to obtain good agreement between the experimental data and the simulations, we adjusted the Rabi frequency $\Omega$ within the experimental uncertainty range (see the “System Parameters” section). In the main text, we note that scattering and imperfect state preparation reduce the photon-generation probability of the entangled state during the first 6 $\mu$s of the Raman process. In order to quantify this effect, we simulate the temporal photon shape as a function of detection time for the superradiant state, for the case of perfect state initialization and no scattering from $\mathbb P$ to $\mathbb S$ and $\mathbb D$. Comparing the area under this curve to that from the simulation in Fig. (2c), which takes both imperfect state initialization and scattering into account, we extract a reduction of 9.1%.
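As an illustration of the structure of such a master-equation simulation (not the code used here), the sketch below sets up a deliberately reduced version of the model in Python with QuTiP: a single three-level ion ($|S\rangle$, $|D\rangle$ and one $|\mathbb P\rangle$ level) coupled to one cavity mode, with cavity decay, spontaneous emission and laser phase noise entering as Lindblad operators. All parameter values and the reduced level scheme are illustrative assumptions, not the values quoted above.

import numpy as np
import qutip as qt

# Reduced toy model: one three-level ion (S, D, P) and one cavity mode.
# All rates are illustrative, given as angular frequencies in rad/us.
N = 2                                 # Fock-space truncation
g, kappa, gamma = 2*np.pi*1.0, 2*np.pi*0.05, 2*np.pi*11.0
Omega = 2*np.pi*5.0                   # Raman-drive Rabi frequency
Delta = 2*np.pi*400.0                 # common detuning from the P level
lw = 2*np.pi*0.030                    # 30 kHz laser linewidth

S, D, P = (qt.basis(3, i) for i in range(3))
idc = qt.qeye(N)
a = qt.tensor(qt.qeye(3), qt.destroy(N))   # cavity annihilation operator
sm_SP = qt.tensor(S * P.dag(), idc)        # |S><P|
sm_DP = qt.tensor(D * P.dag(), idc)        # |D><P|
proj_P = qt.tensor(P * P.dag(), idc)

# Raman configuration: the laser drives S<->P and the cavity couples
# P<->D, both detuned by Delta so that P is only virtually populated.
H = (Delta * proj_P
     + 0.5 * Omega * (sm_SP + sm_SP.dag())
     + g * (a.dag() * sm_DP + a * sm_DP.dag()))

c_ops = [np.sqrt(2 * kappa) * a,      # cavity field decay
         np.sqrt(gamma) * sm_SP,      # spontaneous emission P -> S
         np.sqrt(gamma) * sm_DP,      # spontaneous emission P -> D
         np.sqrt(2 * lw) * proj_P]    # crude model of laser phase noise

rho0 = qt.tensor(S * S.dag(), qt.fock_dm(N, 0))  # ion in S, cavity empty
times = np.linspace(0.0, 55.0, 551)              # microseconds
result = qt.mesolve(H, rho0, times, c_ops, e_ops=[a.dag() * a])
# result.expect[0] is the mean intracavity photon number versus time;
# the emitted temporal photon shape is proportional to 2*kappa*<n>(t).

Scaling this up to the six-level, two-ion, two-mode model described above is a matter of enlarging the tensor product; the Lindblad structure stays the same.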
Q: Conversion From String to Clob and Clob to String I am trying to save an XML response which comes to my method as a StringBuffer. I need to save this string data to SQL as a CLOB and retrieve the same. The response can be up to 3-4 KB, and it should be read back into a StringBuffer again. Can anyone give me a tutorial or sample code which doesn't include DB-specific methods/jars? I can change the DB column type if CLOB is not suitable, or use any other better alternative. Please suggest. A: You can create a CLOB by calling connection.createClob(), then you can populate it with methods such as setString, setCharacterStream or setAsciiStream. Create a PreparedStatement and call its setClob() to store your CLOB in the database. Retrieving the data is just the same: read it from a ResultSet and call getCharacterStream, getAsciiStream or getSubString on it.
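For completeness, here is a minimal, driver-agnostic JDBC sketch of both directions (the table and column names are made up; adjust them to your schema, and note that for a 3-4 KB payload a plain VARCHAR column would also work on most databases):

import java.sql.*;

// Write: StringBuffer -> CLOB (assumes an open Connection 'conn')
String xml = responseBuffer.toString();        // your StringBuffer
try (PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO xml_responses (id, payload) VALUES (?, ?)")) {
    Clob clob = conn.createClob();
    clob.setString(1, xml);                    // CLOB positions are 1-based
    ps.setLong(1, 42L);
    ps.setClob(2, clob);
    ps.executeUpdate();
    clob.free();
}

// Read: CLOB -> StringBuffer
try (PreparedStatement ps = conn.prepareStatement(
        "SELECT payload FROM xml_responses WHERE id = ?")) {
    ps.setLong(1, 42L);
    try (ResultSet rs = ps.executeQuery()) {
        if (rs.next()) {
            Clob clob = rs.getClob(1);
            StringBuffer result = new StringBuffer(
                    clob.getSubString(1, (int) clob.length()));
            clob.free();
            // use 'result' here
        }
    }
}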
AFLP analysis of Cynodon dactylon (L.) Pers. var. dactylon genetic variation. Cynodon dactylon (L.) Pers. var. dactylon (common bermudagrass) is geographically widely distributed between about lat 45 degrees N and lat 45 degrees S, penetrating to about lat 53 degrees N in Europe. The extensive variation of morphological and adaptive characteristics of the taxon is substantially documented, but information is lacking on DNA molecular variation in geographically disparate forms. Accordingly, this study was conducted to assess molecular genetic variation and genetic relatedness among 28 C. dactylon var. dactylon accessions originating from 11 countries on 4 continents (Africa, Asia, Australia, and Europe). A fluorescence-labeled amplified fragment length polymorphism (AFLP) DNA profiling method was used to detect the genetic diversity and relatedness. On the basis of 443 polymorphic AFLP fragments from 8 primer combinations, the accessions were grouped into clusters and subclusters associating with their geographic origins. Genetic similarity coefficients (SC) for the 28 accessions ranged from 0.53 to 0.98. Accessions originating from Africa, Australia, Asia, and Europe formed major groupings as indicated by cluster and principal coordinate analysis. Accessions from Australia and Asia, though separately clustered, were relatively closely related and most distantly related to accessions of European origin. African accessions formed two distant clusters and had the greatest variation in genetic relatedness relative to accessions from other geographic regions. Sampling the full extent of genetic variation in C. dactylon var. dactylon would require extensive germplasm collection in the major geographic regions of its distributional range.
Have you been affected by poor dental work? Patients ‘at risk’ as insurer turns its back on disgraced dentist David Corless-Smith, dentist, solicitor, barrister and director at the Dental Law Partnership – a law firm with nearly 15 years’ experience specialising in dental negligence claims – said: “The DDU has said it will not indemnify D’Mello, effectively putting thousands of claimants at risk by forcing them to claim directly against the disgraced dentist and his assets, with no guarantee of any damages. “This is a disgraceful failure of the safety net that patients fully deserve and expect to be in place! That’s why we are pushing the government for urgent reforms in the way that dentists are insured, to safeguard patients and ensure that victims of healthcare scandals, such as this one, have access to proper legal recourse.” Late last year, the Nottingham-based dental practice was thrust into the media spotlight when serious malpractice around cleanliness and infection control was exposed. Following this, health chiefs launched an appeal urging all of D’Mello’s patients from throughout his 32 years of practice to come forward for testing – some 22,000 patients, in what is thought to be the biggest recall in British medical history. David continued: “This was, and still is, a recall of epic proportions. We are already acting for hundreds of ex-D’Mello patients and expect hundreds more. These people have a right to justice and should be pursuing their claims through the DDU. However, the DDU’s Board of Management has specifically refused to indemnify D’Mello, meaning claimants will have to pursue D’Mello directly in order to seek any recompense, and the more claimants there are, the smaller any potential pay-out will be. “We believe that adequate professional liability insurance should be mandatory for dentists, to prevent this sorry state of affairs from ever happening again. Through our ‘Bridge the Gap’ campaign, we have been working to raise awareness of current legal loopholes and to force all dentists to have adequate liability insurance, policed by the General Dental Council. Unfortunately, it appears that this loophole has been exploited again, leaving claimants potentially high and dry. “This level of mandatory insurance should be there to protect patients. However, the irony is that although insurance for dentists is mandatory, patients are still not necessarily covered, as bodies like the DDU which were set up to insure dentists can use their discretionary powers to withdraw cover, as in this case – it just makes a mockery of the whole system!” David concluded: “It’s a basic right to expect good quality dental treatment every time you visit the dentist, and we should be able to trust all practitioners. Only by taking rogue dentists to task and bringing them to justice through the courts can we ever expect to stamp out malpractice in the dental profession.”
This red-shouldered hawk was an early-bird holiday traveler when it got stuck in a Penn Station transit booth Sunday night before being rescued by elite Emergency Service Unit cops. MTA employees watched the raptor chase a pigeon into the otherwise-empty booth — where information clerks generally sit — just before 10 o’clock Sunday night and called cops. Detective Joseph Bucchignano showed up and found that the poor bird was trapped. He pulled it out of the booth, caged it, and brought it to the West Side Highway, where he released it over the Hudson River. NYPD Special Ops tweeted out a photo of Bucchignano with the bird after he made the grab. “Meet Penn Station’s Early Bird traveler,” the photo was captioned. “This Red Shouldered Hawk attempted to beat the holiday rush last night but missed the train.” A source said, “The officers used their eagle eye to take the hawk into custody and set him free.” Even though all ended well for the hawk, we know what you’re thinking — but there was no word on whether the pigeon survived the encounter and joined the many other rail birds that flock to the station.
R. Kelly wants ex-wife to stop talking about him publicly FILE - In this June 26, 2019, file photo, Musician R. Kelly departs from the Leighton Criminal Court building after a status hearing in his criminal sexual abuse trial in Chicago. Lawyers for Kelly are asking a judge in Chicago to fine and jail his ex-wife for breaking a confidentiality clause in their divorce agreement. The Chicago Sun-Times reports that Kelly's attorney Steve Greenberg said in a Tuesday, July 2, 2019, filing that Andrea Kelly appeared "on countless media outlets disparaging Robert Kelly and making accusations and charges." (AP Photo/Amr Alfiky, File) CHICAGO (AP) — Lawyers for R&B singer R. Kelly are asking a judge in Chicago to fine and jail his ex-wife for breaking a confidentiality clause in their divorce agreement. The Chicago Sun-Times reports that Kelly’s attorney Steve Greenberg said in a Tuesday filing that Andrea Kelly appeared “on countless media outlets disparaging Robert Kelly and making accusations and charges.” The singer faces more than 20 sex-related counts in Cook County criminal courts alleging he abused and assaulted women. He denies all wrongdoing. Andrea Kelly’s lawyer says the singer has broken the confidentiality agreement himself many times. The couple’s divorce has garnered attention since the singer was jailed for three days in March for failing to pay about $160,000 in child support. The next hearing in the case is July 10. ___ Information from: Chicago Sun-Times, http://chicago.suntimes.com/
Q: DMA Engine Timeout and DMA Memory Mapping I am trying to use a Linux DMA driver. Currently, when I send the transaction out and begin waiting, my request times out. I believe this has to do with the way I am setting up my buffers when I am performing DMA mapping. char *src_dma_buffer = kmalloc(dma_length, GFP_KERNEL); char *dest_dma_buffer = kzalloc(dma_length, GFP_KERNEL); tx_dma_handle = dma_map_single(tx_chan->device->dev, src_dma_buffer, dma_length, DMA_TO_DEVICE); rx_dma_handle = dma_map_single(rx_chan->device->dev, dest_dma_buffer, dma_length, DMA_FROM_DEVICE); In Xilinx's DMA driver, they take special care to look at memory alignment. In particular, they use a property of the dma_chan->dma_device called copy_align. @copy_align: alignment shift for memcpy operations const int dma_length = 16*1024; len = dmatest_random() % test_buf_size + 1; len = (len >> align) << align; if (!len) len = 1 << align; src_off = dmatest_random() % (test_buf_size - len + 1); dst_off = dmatest_random() % (test_buf_size - len + 1); src_off = (src_off >> align) << align; dst_off = (dst_off >> align) << align; It looks like the original address is totally random, coming from dmatest_random(). I am not sure what can be said or what guarantees can be made about that memory. static unsigned long dmatest_random(void) { unsigned long buf; get_random_bytes(&buf, sizeof(buf)); return buf; } They then use these offsets to set up their source and destination buffers for DMA. u8 *buf = thread->srcs[i] + src_off; dma_srcs[i] = dma_map_single(tx_dev->dev, buf, len, DMA_MEM_TO_DEV); I am very confused as to what this does. My only guess is that it will page-align the beginning of the source and destination buffers in virtual memory. Looking at the way I set up my buffers with kmalloc and kzalloc, do I have any guarantee that my buffers start at page boundaries? Am I right in that I need my buffers to start at page boundaries? The source code of the Xilinx DMA test driver is here: https://github.com/Xilinx/linux-xlnx/blob/master/drivers/dma/xilinx/axidmatest.c You can find a high-level description of the problem I am trying to solve here: https://forums.xilinx.com/t5/Embedded-Linux/AXI-DMA-Drivers-for-Kernel-v-4-9-PetaLinux-2017-3/td-p/828917 A: Having a look at this link, it seems that you don't have any guarantee that your memory allocation will start at the beginning of a page frame. However, this other link can be helpful; it explains alloc_pages, which may fit better with what you need. Regarding the alignment of the memory to be used in DMA transactions, in this link we can read the following: What memory is DMA'able? The first piece of information you must know is what kernel memory can be used with the DMA mapping facilities. There has been an unwritten set of rules regarding this, and this text is an attempt to finally write them down. If you acquired your memory via the page allocator (i.e. __get_free_page*()) or the generic memory allocators (i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from that memory using the addresses returned from those routines. This means specifically that you may not use the memory/addresses returned from vmalloc() for DMA. It is possible to DMA to the underlying memory mapped into a vmalloc() area, but this requires walking page tables to get the physical addresses, and then translating each of those pages back to a kernel address using something like __va(). [ EDIT: Update this when we integrate Gerd Knorr's generic code which does this.
] This rule also means that you may use neither kernel image addresses (items in data/text/bss segments), nor module image addresses, nor stack addresses for DMA. These could all be mapped somewhere entirely different than the rest of physical memory. Even if those classes of memory could physically work with DMA, you'd need to ensure the I/O buffers were cacheline-aligned. Without that, you'd see cacheline sharing problems (data corruption) on CPUs with DMA-incoherent caches. (The CPU could write to one word, DMA would write to a different one in the same cache line, and one of them could be overwritten.) Also, this means that you cannot take the return of a kmap() call and DMA to/from that. This is similar to vmalloc(). What about block I/O and networking buffers? The block I/O and networking subsystems make sure that the buffers they use are valid for you to DMA from/to. So we only need the address to be aligned to the cacheline size; we don't need memory aligned to a page frame (that would work too, but it is not needed). Regarding the kmalloc manual, if we specify the GFP_DMA flag we get memory suitable for DMA transactions (aligned to the cacheline size).
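Putting those points together, below is a minimal kernel-side sketch of the mapping step (an illustration, not a drop-in fix for the Xilinx driver; 'dev' stands for the DMA channel's device, e.g. tx_chan->device->dev). The dma_mapping_error() check is the piece the snippet in the question omits, and skipping it is a common cause of silently broken transfers:

#include <linux/dma-mapping.h>
#include <linux/slab.h>

static int map_tx_buffer(struct device *dev, size_t len)
{
    /* kmalloc() memory is DMA'able and at least cacheline-aligned,
     * so page alignment is not required here. */
    char *src = kmalloc(len, GFP_KERNEL);
    dma_addr_t handle;

    if (!src)
        return -ENOMEM;

    handle = dma_map_single(dev, src, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, handle)) {   /* never skip this check */
        kfree(src);
        return -EIO;
    }

    /* ... submit 'handle' to the DMA engine and wait for completion ... */

    dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
    kfree(src);
    return 0;
}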
2*4/4 - -1. Let q(o) be the first derivative of 4*o - 1/3*o**3 - r - 3/2*o**2. Give q(-4). 0 Let j(t) = -t - 3. Let m be (-37)/13 + (-32)/208 - 3. Give j(m). 3 Let j(l) be the second derivative of -l**5/20 - l**4/4 + l**3/2 + l**2 - 4*l. Calculate j(-4). 6 Let m(j) = 3*j. Let i(y) = y**2 - 7*y - 5. Suppose -4*v + 9*v = 40. Let a be i(v). Calculate m(a). 9 Let r(a) = a**2 + 11*a - 32. Let i be r(-13). Let t(s) = 7*s**2 + 6 + 3*s**3 - s**2 - 2*s**3. What is t(i)? 6 Suppose 3*z = 4*d + 7 + 6, 0 = -2*z + 6. Let r be (d + 1 - -4)/(-1). Let g(k) = k**3 + 4*k**2 - k + 3. Give g(r). 7 Let i(t) be the first derivative of -t**2/2 - 2*t - 1. Suppose -5*b - 7 = -27. Suppose -u = -3*u + 2*n, -3*u - b*n = -35. What is i(u)? -7 Let j(h) = h + 6. Suppose 5*q + 0*q = 10. Let m(w) = 2 - 3 + q*w**2 + 0*w**2 + 6*w**3. Let c be m(-1). Calculate j(c). 1 Let u(o) be the third derivative of o**6/120 + 7*o**5/60 + o**4/4 + o**3 - 39*o**2. Calculate u(-6). 6 Suppose 0*x + 4*x = 8. Let p(i) = 5*i - x*i - 7 + 0*i - i. Let o = -4 + 9. What is p(o)? 3 Let f(u) = -2*u - 2. Let h be 2*(0/2 + -2). Calculate f(h). 6 Let r(j) = j**2 + 1. Let w be r(-1). Let k(z) = 2*z + 3. Let a(b) = -b + 12. Let g be a(7). Let l(o) = -o - 1. Let f(p) = g*l(p) + 2*k(p). Determine f(w). -1 Let b(k) = 7*k**2 - 3*k - 9. Let d(s) = s**2 - s - 1. Let l be 0 - 9/3 - -3. Suppose 0*c + 3*c - 18 = l. Let p(n) = c*d(n) - b(n). Give p(-4). -1 Let x(o) = -3*o + 3. Let d(j) be the second derivative of -j**4/12 - j**3/2 + 7*j**2/2 + 5*j. Let c be d(-4). Give x(c). -6 Let r(v) be the first derivative of v**4/4 - v**3 - 3*v**2/2 + 2*v - 5. What is r(3)? -7 Suppose -5*a + 5*z - 20 = 0, z + 44 = -5*a + 12. Let y(o) be the second derivative of o**5/20 + 5*o**4/12 - 7*o**3/6 - 3*o**2/2 + 4*o. Give y(a). 3 Let l(n) = -n**3 - 8*n + 5 + 5*n**2 + n - 1. Let u = -1 - -5. Determine l(u). -8 Suppose 2 = s, 7*x - 11 = 2*x - 3*s. Let n(t) = 66*t + 27. Let j(y) = -5*y - 2. Let u(a) = -54*j(a) - 4*n(a). Give u(x). 6 Let n(l) be the second derivative of l**4/12 - 4*l**3/3 + 5*l**2/2 - 2*l. Let a be n(6). Let q be 33/(-7) - (-2)/a. Let z(h) = -h**2 - 6*h. Determine z(q). 5 Let p = -5 + 3. Let w = 0 + p. Let t(q) = -q**2 - 2*q. Determine t(w). 0 Let u = 0 + 2. Suppose -u*l + 7 = 1. Let d(y) = -2 - 1 + y + 2*y. Determine d(l). 6 Let o be 14*(0 - -1)*-1. Let r be 132/28 + (-4)/o. Let c(h) = -3*h**2 + 16*h + 14. Let d(w) = w**2 - 5*w - 5. Let u(x) = -2*c(x) - 7*d(x). Calculate u(r). -3 Let z(h) be the second derivative of -7/6*h**3 - 2*h + 3/2*h**2 + 0. Let q(u) = 3*u - 2. Let j(n) = -5*q(n) - 2*z(n). Give j(3). 1 Let h(m) = m**3 - 5*m**2 + 4*m - 5. Let o(s) = -s + 7. Let r be o(3). Calculate h(r). -5 Let i(u) be the second derivative of u**5/20 - u**4/3 - 5*u**3/3 + 6*u**2 + 10*u. Let h(o) = o - 1. Let b(k) = -5*h(k) - i(k). What is b(5)? -7 Let a(m) = 2*m + 2. Suppose 0 = z + 3*z - 12. Suppose -4*p - h + 16 = -z*h, -4*h = -2*p + 20. Give a(p). 6 Let g(q) = 2 + 15*q - 48*q + 17*q + 17*q. Give g(-3). -1 Let i(v) = v**2 + 6*v + 3. Let q = -6 - -10. Suppose -5*y = -5*a - 20, 5*y + 0*a = -a - q. Let x = y - 4. What is i(x)? -5 Let n(z) be the second derivative of -z**4/2 + z**2/2 - 5*z. Determine n(1). -5 Let r(w) be the second derivative of -w**4/12 + w**3/6 - 5*w**2/2 - w. What is r(0)? -5 Let u(q) = q**2 - 13*q - 7. Let r(d) = 2*d**2 - 27*d - 15. Let s(v) = -6*r(v) + 13*u(v). Suppose 19 = -5*y + 44. Give s(y). -11 Let r be 300/(-78) + 2/(-13). Let w(u) = -u**2 - 3*u. Give w(r). -4 Suppose 4*b - 2*m = m + 16, 0 = -b - 3*m + 4. 
Let k(j) = -3*j**2 - b*j + 3*j - 1 + j**3 + 2*j**2 + 3. Calculate k(2). 4 Let r(d) = 24*d**2 - 8*d**2 + 0*d + 3*d + 7 - 17*d**2. Calculate r(5). -3 Let u(l) be the second derivative of 1/6*l**4 + 1/2*l**2 + 0 - 1/3*l**3 - 3*l. Calculate u(1). 1 Let w(c) be the first derivative of -c**4/4 - 10*c**3/3 - 6*c**2 - 13*c + 5. Calculate w(-9). 14 Suppose 4*s - 2*m = 8, -4*m - 7 = 2*s - 7*s. Let j(b) = -5*b**2 - 35*b + 16. Let l(a) = -2*a**2 - 12*a + 5. Let x(z) = s*j(z) - 8*l(z). Determine x(7). -6 Let v(w) = w**3 - w**2 - 2*w + 2. Let l = -35 + 37. What is v(l)? 2 Let t(o) = -5 - o + 16 + 0 - 8. Calculate t(-6). 9 Let g(c) = -c**3 - 5*c**2 + 3*c - 6. Let m(b) = -2*b**2 + 2*b - 2. Let r be m(2). Calculate g(r). 12 Let d(q) = q - 8. Let p(k) = -k + 7. Let f(r) = -2*d(r) - 3*p(r). Give f(0). -5 Let l(r) = r**2 + r + 1. Let a(g) = 5*g**2 + g. Let x(o) = a(o) - 3*l(o). What is x(3)? 9 Let a(c) = -2*c**2 + c**3 - 5*c + 0*c + 0*c**3 + 2. What is a(4)? 14 Let o be (-9)/(-4) + 2/(-8). Let g(q) = 0*q**2 - 7 - q - q**o + 7*q. Give g(5). -2 Let z(o) = -8*o - 1. Let u(q) = 77*q + 17. Let d(k) = 26*k + 6. Let f(g) = 8*d(g) - 3*u(g). Let b(n) = -6*f(n) + 17*z(n). Give b(1). 3 Let n(c) = -2*c**2 + 4*c + 4 - c**3 - 5*c - 6. What is n(-3)? 10 Let d(v) = -2*v - 10. Let k(r) = 7*r - 1 + 30 + 3*r - 4*r. Let y(a) = -11*d(a) - 4*k(a). Calculate y(-6). 6 Let j(a) = -a**3 + a. Let n(b) = 2*b**3 + 2*b**2 - 2*b - 3. Let p(r) = -j(r) - n(r). Suppose -2*f - 1 = 5*v + 14, -2*v - 6 = -5*f. Determine p(v). 9 Let k = 9 - 13. Let g(c) = c**2 + 3*c - 2. Determine g(k). 2 Let b be (-10)/(-7) - 27/63. Let l(x) be the first derivative of -3*x**4/2 + x**2/2 - 2. Give l(b). -5 Let d(m) = -2*m - 4. Let b(u) = u**2 - 4*u + 1. Let f be b(3). Let k be f + -1 + -1*1. Let i = k - 0. What is d(i)? 4 Let k(w) = -2*w**2 - 2*w - 1. Let a be k(-1). Let i = -5 - a. Let o(x) be the first derivative of -x**4/4 - x**3 + 3*x**2 + 5*x + 3. Calculate o(i). -3 Let q(r) = -r + 1. Let m(z) = -4*z. Let t(n) = m(n) - 2*q(n). Calculate t(4). -10 Let k(w) = -w - 11. Let m = 41 - 46. Give k(m). -6 Let h(y) = -y**2 + 4*y + 2. Suppose -6 = -2*c, -4*c - 22 = -5*f - 3*c. What is h(f)? -3 Let k(m) = 6*m**2 - 4*m - 7. Let p(d) = 6*d**2 - 4*d - 8. Let j(t) = 4*k(t) - 3*p(t). Let s(u) = -11*u**2 + 7*u + 7. Let a(l) = 7*j(l) + 4*s(l). Give a(-1). -2 Let v(h) = h**3 + 4*h**2 - h + 1. Suppose w = -4*w - 4*f - 48, 3*w + 16 = 4*f. Let q = 4 + w. Determine v(q). 5 Suppose 6*r + 5 = 5*r. Let k(w) be the third derivative of w**4/24 + 5*w**3/3 + w**2. Give k(r). 5 Let o(n) = -n - 2. Let h be (10/(-3))/(((-64)/12)/(-8)). Calculate o(h). 3 Let x(y) = -3*y**3 + 17*y**2 + 8*y + 1. Let f(w) = 2*w**3 - 9*w**2 - 4*w - 1. Let p(h) = -5*f(h) - 3*x(h). Calculate p(-5). -3 Let y(j) = -5*j - 18*j**2 + 2*j + 2*j + 21*j**2. Let q(s) = s + 7. Let o be q(-6). Give y(o). 2 Let t(k) be the second derivative of -k**5/20 + k**4/6 - 2*k**3/3 + 3*k**2/2 - 2*k. Suppose -2*l = 10, -4*c - 12 = -3*l + 7*l. Calculate t(c). -5 Let m(s) = 2*s**2 + 3*s - 3. Suppose 2 = -12*g + 13*g. Determine m(g). 11 Let z(u) = u**3 - 4*u**2 - 10*u + 3. Let m(g) = g + 1. Let f(h) = -3*m(h) - z(h). Let y(d) = -3*d - 25. Let a be y(-10). What is f(a)? 4 Let k(t) = t**2 + t + 1. Let v be k(-3). Let g(n) = -9*n. Let r(c) = 10*c. Let y(o) = v*g(o) + 6*r(o). Give y(-1). 3 Let h(z) = z**3 + 6*z**2 - z + 2. Let j = 9 - 15. Determine h(j). 8 Let x(y) = -y**2 + 3*y - 3. Let s(p) = 2*p**2 - 6*p + 6. Let q(r) = -3*s(r) - 7*x(r). Suppose -a + 2 + 4 = -o, 4*a + 4*o = -8. What is q(a)? 
1 Let q(r) be the first derivative of r**4/4 + 7*r**3/3 + 3*r**2 + 3*r + 79. Let g(w) = -w**3 + 3*w**2 + 3*w - 2. Let i be g(4). What is q(i)? 3 Let c(t) = t - 7. Suppose -3*k - k + 5*v = -26, v + 25 = 3*k. Let n = -6 + k. Give c(n). -4 Suppose 25 = 2*a + t + 9, -3*a + 5*t = 2. Let d(z) = -z**3 + 5*z**2 + 6*z - 2. Give d(a). -2 Let z be 2*(120/(-50) - 3/5). Let q(p) = -p**2 - 7*p - 2. Calculate q(z). 4 Let u(i) = -i**3 + 8*i**2 - 5*i - 10. Suppose l - 4*l = -21. What is u(l)? 4 Let s(k) = k**3 + 4*k**2 - 6*k. Let a be s(-5). Let m(n) = 0*n + 2 + a*n - 6*n. Determine m(3). -1 Let a(n) = 4*n - n + n**2 - 2*n**2. Suppose 10*q - 12 = 7*q. Suppose -q*i - 3 = 1, 0 = o - i - 6. Give a(o). -10 Let t(v) = v**2 + 2*v + 1. Suppose 5*y - 5 = 2*j, -5*y + 3*j = 2*j. What is t(y)? 0 Let v be (5/2)/((-3)/(-6)). Let j(k) be the third derivative of k**6/120 - k**5/12 + k**3/3 - 4*k**2. Calculate j(v). 2 Suppose 5*n + 5 + 0 = 5*h, 5*h + 4*n = 14. Let i(j) be the third derivative of 0 + 1/2*j**3 - 2*j**2 - 1/12*j**4 + 0*j - 1/60*j**5. Determine i(h). -5 Let x(a) = 8*a**3 - 2*a**2 - a + 3. Let p(v) = 9*v**3 - v**2 - v + 2. Let r(u) = 3*p(u) - 2*x(u). What is r(1)? 11 Let u(z) = -2*z + 1. Le
FSH suppression of nitric oxide synthesis in porcine oocytes. The present study was designed to evaluate the regulation of nitric oxide (NO) synthesis in porcine oocytes during follicular development. Cumulus-oocyte complexes were obtained by aspirating the small follicles of immature porcine ovaries and cultured at 39 degrees C for 24-72 h with FSH in a serum-free medium. The oocyte-surrounding cumulus cells markedly proliferated and expressed LH receptor mRNA in response to FSH. The endothelial type of NO synthase (eNOS) (130 kDa) was detected in the oocyte, but not in the proliferated cumulus cells, by immunoblotting. The amount of oocyte eNOS did not change significantly during culture, but measurement of nitrite and nitrate revealed FSH suppression of NO synthesis by approximately 50%. NO-releasing agents were added to the cultures to examine the effect of NO on the growth of cumulus cells. These agents inhibited proliferation of the cumulus cells and expression of LH receptor mRNA. Thus, synthesis of eNOS-derived NO is suppressed in the porcine oocyte during development with no change in the amount of the enzyme, and this NO is suggested to have an inhibitory function in the growth of cumulus cells.
[Impulse cytophotometry studies of bone marrow and blood cells in children with acute lymphoblastic leukemia (ALL). 2. Lymphatic cells in blood]. The DNA content of mononuclear cells of the peripheral blood of infantile and juvenile ALL patients was investigated using pulse cytophotometry. The fraction of cells in the S- and G2 + M-phases is significantly increased in comparison with samples from healthy probands. The fractions of DNA-synthesising cells (S-phase) in the peripheral blood (mononuclear cells) and the bone marrow of leukemia patients cannot be significantly distinguished by mathematical methods. On the other hand, the fraction of cells in later phases of the cell cycle (G2 + M-phase) is significantly enhanced in the bone marrow in comparison with the peripheral blood. A high correlation was found between the number of leukocytes and the fraction of G2 + M-phase cells in the peripheral blood of SR- and MR-patients. No correlation was found between the number of leukocytes and the S-phase fraction. The occurrence of aneuploid cell populations in the mononuclear fraction of peripheral blood in the acute state of ALL could be of importance for prognosis and the therapy regimen.
Q: Class constructor declaration... Two ways of declaring the same thing? I would like an explanation of the difference between, for example, this declaration: class Clazz(param1: String, param2: Integer) and this one: class Clazz(param1: String)(param2: Integer) Does the second declaration affect just the way of instantiating the objects, or is there any deeper reason I don't know about? One reason I thought of would be multiple variable-length parameter lists, for example: class Clazz(param1: String*)(param2: Integer*) So are there any others? A: #1 Type inference. It goes from left to right and is done per parameter list. scala> class Foo[A](x: A, y: A => Unit) defined class Foo scala> new Foo(2, x => println(x)) <console>:24: error: missing parameter type new Foo(2, x => println(x)) ^ scala> class Foo[A](x: A)(y: A => Unit) defined class Foo scala> new Foo(2)(x => println(x)) res22: Foo[Int] = Foo@4dc1e4 #2 Implicit parameter list. scala> class Foo[A](x: A)(implicit ord: scala.Ordering[A]) { | def compare(y: A) = ord.compare(x, y) | } defined class Foo scala> new Foo(3) res23: Foo[Int] = Foo@965701 scala> res23 compare 7 res24: Int = -1 scala> new Foo(new {}) <console>:24: error: No implicit Ordering defined for java.lang.Object. new Foo(new {}) ^ A: In the second version you are declaring a curried primary constructor for Clazz. So the difference between the two versions is the same as the difference between "normal" and curried functions in Scala, i.e. def foo(param1: String, param2: Int) def foo(param1: String)(param2: Int) Most of the time both declarations can be used interchangeably, but if you often need to curry a function then it makes more sense to declare it in curried form. Note you can also convert a normal function or even a constructor into curried form; e.g. you could transform your normal Clazz constructor into curried form using: (new Clazz(_, _)).curried You also need multiple parameter lists if you are passing an implicit value (as the keyword implicit applies to the complete parameter list)
Berkeley man denied bail over break-ins A man accused of committing a string of break-ins during a three-week crime spree in November last year, is likely to be charged with more property offences this week. Craig Douglas Thompson, 41, of Berkeley, already faces 43 counts of stealing and fraud after he allegedly went on a robbery bender, breaking into several homes and stealing everything from perfume to a Jaguar car. Opposing Thompson's latest bid for bail yesterday, prosecutor Sergeant Rob Casey told Wollongong Local Court that Thompson was likely to be charged over a separate crime spree in coming days. He said the fresh allegations related to similar offences. The court was told several of Thompson's matters had already been referred to the Director of Public Prosecutions, including claims Thompson attacked an elderly couple when they disturbed him stealing from their home. Sergeant Casey said a hat had been left at a November 13 break-in and he was confident DNA on the item would match Thompson. He also noted Thompson had allegedly gone to great lengths to evade police, including jumping out of a second-storey window. Thompson was refused bail on December 27 after he was charged over the break-in blitz. Police claimed Thompson had broken into Wollongong homes between November 5 and November 23 last year, often pinching car keys which he would then use to steal the home owners' vehicles. It is alleged Thompson also used stolen credit cards to buy items at service stations. Officers had been searching for Thompson for weeks before his Boxing Day arrest. Defence solicitor Angela Melouney claimed yesterday the case against Thompson was weak, arguing there was no evidence linking her client to the allegations. Ms Melouney said Thompson could report to police and comply with a curfew. Registrar Nicole Hoffmann refused bail and adjourned the matter to January 7.
Genital Herpes Simplex is a chronic recurring disease associated with physical and psychological discomfort which affects 20 million Americans. One third of those affected experience four or more recurrences per year during which painful blisters or ulcers are present on their genitals for one or two weeks. Patients and health professionals believe that recurrences are associated with stress. However, there is sparse empirical evidence for such a relationship. This investigation will examine the effects of major life stress and minor stresses on reactivations of genital herpes. To do this two studies are being proposed. In Study 1, 150 HSV seropositive male and female subjects with a history of genital HSV will be followed prospectively for a six month period during which stressful events will be monitored monthly and recurrent lesions will be cultured to document their occurrence objectively. Indices of both major and minor stressful events will be utilized to account for a wide range of stressful events--from a death in the family to traffic problems. Because stress affects individuals differently, evaluations will be made at the beginning and end of the investigation of characteristics that have been shown to modify the effects of stress: perceived degree of personal control, arousal seeking tendency and social support. The research will identify characteristics of patients which make them susceptible to frequent HSV recurrences under varying conditions of stress. This will clarify our understanding of the stress/illness relationship and ultimately lead to more effective preventive and therapeutic approaches. Study 2 will expand and elaborate on the findings of Study 1. Based on the findings of Study 1, 15 female subjects who are determined to be at risk for frequent HSV recurrences will be recruited for participation in an intensive, month long investigation. They will monitor minor life stresses on a daily basis and provide daily samples of vaginal secretions for detection of asymptomatic viral shedding, by viral culture and by immunoperoxidase staining of imprint smears. In this way, a temporal relationship of stress to viral activity may be demonstrated, without the bias of retrospective reporting of stressful events.
Q: Is this way of factorizing an integer faster than trial division? I recently found a way to quickly find a factor of a given integer, starting from another integer not too far away from that factor, and I was wondering whether this way is faster than trial division. It all depends on a simple lemma. Lemma: Let $n,d\in\mathbb{N}$ with $d<n$, $d\mid n$ and let $k\in\mathbb{Z}_{\ge 0}$ with $$k<\frac{d^2}{n-d}.$$ Let $r$ be the least positive integer congruent to $-n$ modulo $d+k$. Now, either $\gcd(n,d+k)$, $\gcd(n,r)$ or $2$ is a non-trivial divisor of $n$. Proof: We have: $$(d+k)\frac nd=n+\frac{nk}{d}\implies -n\equiv \frac{nk}{d}\pmod {d+k}$$ Also: \begin{align*} k &< \frac{d^2}{n-d}\\ nk-dk &< d^2\\ nk &< d^2+dk\\ \frac{nk}{d} &< d+k \end{align*} from which it follows that $r=\frac{nk}{d}$, so either $\gcd(r,n)$ is a non-trivial divisor of $n$ or $d\mid k$, in which case $\gcd(n,d+k)$ is a non-trivial divisor of $n$, except if $d=k=n/2$, in which case we have $2\mid n$. Q.E.D. Now, in order to find a $d$, we just have to find $d+k$, so instead of finding one value ($d$), we need to find only one of the $d^2/(n-d)$ values of $d+k$. We no longer have a single needle in a haystack, but a whole bunch of them. And we only need to find one. Questions First, did I make any mistakes? If not, how long would it take to find a single non-trivial factor of a given $n$? And how long would complete prime factorization take? Is this faster than trial division (trivial if you've got the answers to the above questions)? Edit: From @mixedmath's answer, we know that we can ignore the time complexity of computing the $\gcd$ when calculating the time complexity of finding a single non-trivial factor. Let $$S(n):=\sum_{d\mid n, d\neq n}\left\lfloor\frac{d^2}{n-d}\right\rfloor$$ This is how many values of $d+k$ will give us a non-trivial divisor of $n$. When randomly guessing a value of $d+k$, the chance of success is: $$\frac{S(n)}{n}$$ which means the chance of failure is $1-S(n)/n$. This means the time complexity should be: \begin{align*} &O(\log_{1-S(n)/n}0.5)\\ &= O\left(\frac{-\log 2}{\log(1-S(n)/n)}\right)\\ &= O\left(\left(\log\left(1-\frac{S(n)}{n}\right)\right)^{-1}\right) \end{align*} A: Let's try your method on factoring $17\cdot 19 = 323$. There are two factors to identify. Let's suppose that we were to identify the factor $d = 17$. What range of $k$ suffices? You say we need $$ 0 \leq k < \frac{d^2}{n-d} = \frac{17^2}{323 - 17} = \frac{289}{306} = 0.944...$$ This only leaves $k = 0$. So to guess $d$, one would need to guess exactly the factor $17$. This saves no time. Perhaps the other factor is more helpful. Let's suppose we were to identify $d = 19$. Then $$ 0 \leq k < \frac{19^2}{323 - 19} = \frac{361}{304} = 1.1875.$$ So we would need to guess $d+k$ as either $19$ or $20$. This is not giving much space. Of the three possible guesses for $d+k$, the only one that isn't a factor of $n$ directly (i.e. the only one that your method proposes that would be "another needle in the haystack") is $20$. Let's check that your method on $20$ works. We compute $\gcd(n, d+k) = \gcd(323, 20) = 1$. You say $r \equiv -n \pmod{d+k}$, and is chosen as the least positive residue. Here, that means $r = 17$, and $\gcd(n, r) = \gcd(323, 17) = 17$. So this identifies $17$ as a divisor. The good news is that your method does appear to work.
The bad news is that in cases where $n$ is a product of two primes that are relatively near each other in size (the classic challenge), your method does not do very well and adds almost nothing to trial division. Even if the numbers are larger, you often gain nothing. Your method works best when $n$ contains many factors. In particular, your method is best at finding the smallest nontrivial factor (corresponding to the largest nontrivial $d$, which has the largest $k$ range). When $n$ has many factors, you identify them much more rapidly than with random-guess trial division. Once you've identified the small factors, the method becomes worse and worse at identifying larger factors. As a final note, you ask about the time complexity of computing gcds. Through the Euclidean algorithm, one can compute gcds in logarithmic time. You can pretend that this is instant, as gcds will not be what holds the computation back. Computing gcds is so fast that many implementations of factorization algorithms first compute the gcd of $n$ with the product of the first very many primes, just to get all the small factors out of the way.
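To experiment with the trade-off numerically, here is a small Python sketch of the method as I read it (random guesses of a candidate $m = d+k$, testing both $\gcd(n, m)$ and $\gcd(n, r)$); it is a toy for exploration, not an optimized factoring routine:

from math import gcd
from random import randrange

def find_factor(n, attempts=10**6):
    """Randomized sketch of the proposed method: guess m = d + k,
    then test gcd(n, m) and gcd(n, (-n) % m) as candidate divisors.
    Returns a nontrivial factor of n, or None if all guesses fail."""
    if n % 2 == 0:
        return 2
    for _ in range(attempts):
        m = randrange(3, n)        # candidate value of d + k
        g = gcd(n, m)
        if 1 < g < n:              # lucky: m shares a factor with n
            return g
        r = (-n) % m               # least nonnegative residue of -n mod m
        g = gcd(n, r)
        if 1 < g < n:              # the lemma's second case
            return g
    return None

# Example from the answer: find_factor(323) returns 17 or 19.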
Send with your love to your dear ones this Pretty Adrenaline Blush Bouquet to enchant them with its beauty. Our bestselling holiday arrangement is hand-designed with a fragrant mix of Sunflowers and Tube Roses in a classic Tin Pot. (Delivery available only in Peninsula Malaysia)... Uplift the mood while you are on a picnic with your close ones by adding to the moment this Exquisite A Slice of Heaven Gourmet Hamper, containing elements of fun, excitement and happiness. Inside this gorgeous Hamper your recipient will discover Taiwanese Kumquat Lemon Tea Cubes of 320 grams, Macau Monkey Mushroom Cookies of 380 grams, Cordycep Flower Sinensis of 50 grams, Himematsutake Mushroom... I must say I appreciate it; I have received excellent service from your online store. Very efficient and 100% delivery accuracy. – Oliver, Singapore “Thank you so much, it was a wonderful experience when the delivery boy knocked on the door with a pleasant surprise”. – Harvinder Singh, Ludhiana This is to tell you that a marvelous job has been done as far as the gift and its delivery are concerned. My family is very pleased. Congratulations on a job well done. – Daishell, Zambia Thanks for such a fast service; the product arrived well packed. The family is very happy. I would definitely make future purchases with you, and will 100% recommend you to all family and friends. - Daniel, Germany Appreciate the customer service and the handling of the issue. – Zane, Ireland **Declaration: Wine, Whiskey and Liquor can only be purchased and delivered to persons who are at least 21 years of age. Placing an order validates that you and the person who accepts delivery will be 21 years of age or older. When such items are delivered, the person accepting delivery may be required to show identification.
Honeycomb – Albany Marina Residences Building 1 | BIG The Honeycomb will become the tallest structure in Albany, visible from every point of the resort, and will serve as a beacon from the sea. Surrounded by extraordinary views of the marina and its surrounding community, the residences in the building offer a variety of floor plans that will suit the diverse lifestyles of its tenants. Albany Marina Residences Building – Courtesy of BIG To emphasize the relationship with its surroundings, the building has a full glass façade and is wrapped in balconies, connecting the interiors with the natural beauty of the island. Since the prime views are towards the south, these balconies are deep enough to provide not only an outdoor space, but also summer kitchens and a pool sunken into the balcony floor of each unit. These unique pools have a transparent edge towards the plaza, eliminating the visual barrier between the pool and the surrounding environment and allowing bathers to become fully immersed in the view of the marina and the ocean beyond. Albany Marina Residences Building – Courtesy of BIG The additional weight of the water is supported by a floor-high beam below, which also provides privacy between adjacent apartments. The repetition of this motif creates a hexagonal pattern in the facade, alluding to natural geometries found in certain coral formations or honeycombs. Different unit sizes introduce dynamic irregularities in the pattern that further amplify this notion. The transparent fronts of the pools that span each balcony literally become the face of the building. At night they are illuminated, creating lively reflections on the ceilings of the balconies and activating the facade.
Q: What are the correct utility functions? It is common to talk about utility functions. For example, in a universe with only two goods, we might assume each person (or group of people) carries a function $u(x,y)$ in their heads. When offered some baskets $B_j$ containing $x_j$ amount of good 1 and $y_j$ amount of good 2, the person will select the basket $B_k$ that maximises $u(x_k,y_k)$. I wonder, has anyone ever tried to find out what these utility functions look like experimentally? Of course they might vary from person to person, and depend on the type of goods. But if the functions are a realistic model, it should be possible in principle to see what they look like. Have there been any attempts to do so? The closest I can find are regression attempts, where data is gathered and then fit to some class of functions. For example, linear regression tries to draw the straight line (hyperplane) that best fits the data. This is not what I am looking for, however, since regression assumes the type of function in advance. I'm looking for something as simple as offering a bunch of baskets, plotting the data, and comparing to a bunch of different types of functions. A: It would be really tough to find papers (other than structural models) that empirically determine a particular utility function. Because of their subjective nature, it is incredibly tough to obtain observable data to estimate them. So researchers study the primitive preference relations that utility functions represent. Arguably, the preference relation is what you want to study when talking about rationality and choice. Utility functions are useful insofar as they open up a tool-box of real-analysis tools which a binary relation doesn't typically allow. So you should rather look for papers that test rationality from observable choice data. I have elaborated a bit on this in an answer to a related question: Click Here
Words She ran through the forest and leaves rained down, like teardrops dipped in tangerine paint. She stood, breathless, on the mud-stained path, amongst the snarled roots and impressively tall rocks. This moment was simple. Beautiful. And yet, it was something so much more... It was the pinnacle of everything she had been working towards for years: With no one but the ancient trees as her witness, she wrapped her shaky arms around her entire being & hugged herself, with unfamiliar softness. This moment of supreme, silky tenderness shook her to the core. She wept tears, like diamonds, of exquisite joy. It was not quite that she had learned to love herself; she was still working on that. It was that she had learned what self-compassion tasted like. She had learned to get out of her own damn way and melt, like caramel, into who she always wished to be: Herself, unedited. As raw as the falling leaves. As deliciously imperfect as the straggly, windswept branches. And on that unsuspecting day, on the damp forest floor, she finally found it.
Q: How to number each row in UITableView? I have a UITableView and am programmatically adding new rows to it. How can I number each row? Is there a storyboard option for this? Any help will be appreciated, thank you. A: There is no “storyboard option” to number the rows, assuming you mean you want the rows to display their row numbers on the screen. In tableView:cellForRowAtIndexPath:, you need to set the text of a label to contain the cell's row number. For example: cell.textLabel.text = [NSString stringWithFormat:@"Row %d", indexPath.row]; If you don't understand that line of code, or don't understand where to put it, you need to work through some table view tutorials before proceeding.
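For a fuller picture, here is a minimal sketch of the whole data-source method (the reuse identifier is arbitrary, and the cast to long just keeps the format specifier safe on 64-bit):

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *kCellID = @"RowNumberCell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:kCellID];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                      reuseIdentifier:kCellID];
    }
    // Display the row number; rows are zero-based, so add 1 for a
    // human-friendly count if desired.
    cell.textLabel.text = [NSString stringWithFormat:@"Row %ld", (long)indexPath.row];
    return cell;
}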
Ok, since it seemed like no one else had created an Endomondo challenge or any other such challenge for us commuters, I have created one on Endomondo. I know there was talk about using other sites like Strava or such, but since no one has actually done anything I thought I may as well set something up... I need something to push me. Same deal as 2012... pure commuting kilometres, and the challenge runs from 00:01 01/01/2013 - 23:59 31/12/2013 TailWind wrote:Well, I'm already eating your dust, gretaboy. I think for this year I'll be pretty chuffed if I finish in the top 3....giddyup! Tailwind... with your consistency I know I have to put a lead on you as much as I can, as soon as I can, in order to have any chance... the 1000/1500 lead you got on me last year was simply too much to reel in ##EDIT: I am hoping to put in close to 2k in commuting for Jan, if all goes well.
Dwarf cornel Dwarf cornel may refer to the following plants: Canadian dwarf cornel, Cornus canadensis Eurasian dwarf cornel, Cornus suecica
Letters released at the end of last week show Abbott meddling in the affairs of the national broadcaster, demanding the show move from the light entertainment section of the ABC into the news division. Up to that point, government ministers had been banned from appearing on the program. But if you wander through their Twitter timelines, the boycott really does seem a little silly.
Psoriasis and the arachidonic acid cascade. Arachidonic acid (5,8,11,14-eicosatetraenoic acid, C20:4 n-6) is released from the cell membrane by the action of phospholipases on membrane phospholipids. Metabolites of arachidonic acid, which are generically termed eicosanoids, including prostaglandins, thromboxane, leukotrienes and hydroxyeicosatetraenoic acids, have been implicated as mediators or modulators of a number of physiological functions and pathological conditions in both normal and diseased human skin. Particularly, eicosanoids have been suspected to play an important role in the pathogenesis of psoriasis, because a number of phenomena observed in psoriasis can be explained, at least in part, by the action of eicosanoids. This review will focus on recent progress regarding the significance of eicosanoids in the pathogenesis of psoriasis. Recent developments in the molecular biology of the eicosanoids have renewed interest in the role of eicosanoids in psoriasis. New understanding of the etiology of psoriasis and advances in its treatment due to recent progress in eicosanoid biology will also be presented.
array(
    'platform' => 'SCANNER',
    'service'  => 'RAILS',
    'name'     => 'RailsJsonYamlScanner',
    'path'     => '/',
)
1. Field of the Invention The invention relates to a connection device in a semiconductor device and a method for forming the connection, and more particularly, to a connection device which accurately electrically connects a conducting layer to an internal wired layer on a silicon substrate, wherein the wired layer has a line width which is the minimally attainable width utilizing present lithographic etching process techniques. 2. Information Disclosure Statement Generally, in manufacturing an integrated circuit semiconductor device, in order to electrically connect a conducting layer through an insulating layer to an internal wired layer deposited on a substrate, a portion of the insulating layer formed on the internal wired layer should be precisely etched to define a contact region which exposes only the surface of the underlying internal wired layer. Therefore, the contact region for forming a contact hole through which an electrical connection is accomplished should be defined by precisely arranging a contact mask at a predetermined position in accordance with the design rules for the semiconductor device. Furthermore, the width of the underlying internal wired layer to which the conducting layer is to be connected should be greater than the bore/width of the contact hole, considering the minimum misalignment tolerance required for the mask patterning process for the internal wired layer and the critical dimension loss which occurs during the lithographic etching process. If the width of the contact hole is either larger than that of the internal wired layer or is the same as that of the internal wired layer, and the contact mask arranged on the insulating layer for forming the contact hole is misaligned, the resulting etched contact hole is also misaligned, which exposes unintended portions of the underlying substrate. Therefore, when a conducting layer is deposited on the resulting structure including the exposed portion of the substrate, the conducting layer is undesirably connected through the misaligned contact hole to the exposed portion of the substrate, so that the semiconductor device fails to operate as designed. Therefore, according to the prior art, in order to form a contact hole for connecting a conducting layer through the contact hole to the underlying internal wired layer formed on the substrate, the width of the wired layer should be larger than that of the contact hole, so that the problems set forth above can be avoided or at least minimized. However, upon increasing the width of the wired layer, the cell width of the resulting semiconductor device is undesirably increased, so that the degree of integration of the semiconductor device decreases. Accordingly, it is an object of the present invention to solve the problems set forth above in the prior art, and to provide a connection device in a semiconductor device by providing an etch stop layer on an insulating layer at a position overlapping with one side of the underlying internal wired layer, forming a contact hole in the insulating layer in a position extending from the portion of the etch stop layer to the portion of the wired layer, and depositing a conducting layer on the resulting structure to electrically connect the conducting layer through the contact hole to the wired layer, so that an electrical connection therebetween is reliably accomplished even if the resulting contact hole is misaligned during the contact mask patterning process, and a method for manufacturing the same.
According to the present invention, the etching process for forming the contact hole prevents the underlying substrate from being exposed during formation of the contact hole, so that the conducting layer to be deposited is reliably connected, even through a misaligned contact hole, to a wired layer formed on the substrate with a line width that is the minimum attainable utilizing present lithographic etching process techniques. Further, because the minimum attainable width of the wired layer can be used, the area of the resulting semiconductor device can be decreased, so that its degree of integration increases. The preceding objects should be construed as merely presenting a few of the more pertinent features and applications of the invention. Many other beneficial results can be obtained by applying the disclosed invention in a different manner or modifying the invention within the scope of the disclosure. Accordingly, other objects and a fuller understanding of the invention may be had by referring to both the summary of the invention and the detailed description, below, which describe the preferred embodiment in addition to the scope of the invention defined by the claims considered in conjunction with the accompanying drawings.
A ferroelectric memory (FeRAM), which has attracted attention as a new memory device in recent years, reads and writes information by actively utilizing the spontaneous polarization characteristics of a ferroelectric thin film, and is expected to be an excellent memory which can overcome the drawbacks of existing DRAM, SRAM, FLASH memory and the like in view of volatility, write speed, reliability, cell area and the like. As ferroelectric materials for FeRAM, metal oxide materials such as lead zirconate titanate (PZT, PLZT) and bismuth layer-structure Perovskite ferroelectrics (BLSF) have been proposed and studied. Usually, for the formation of such a ferroelectric thin film, film formation methods such as physical vapor deposition (PVD, e.g. sputtering), chemical vapor deposition (e.g. MOCVD), or chemical solution film formation (the solution method) have been proposed. Among them, the solution method is known to be useful for forming a ferroelectric thin film most simply and at the lowest cost, without requiring any special and expensive apparatus. Further, the solution method has the advantages that the composition can easily be controlled precisely and that a change in characteristics due to a difference in composition, which is shown in many ferroelectric materials, can be suppressed, and accordingly it is being studied as a very useful process for producing a ferroelectric thin film. Preparation of a ferroelectric thin film by the solution method is a process for forming a ferroelectric thin film by coating on a substrate a solution in which a metal compound (precursor) of each component is homogeneously dissolved, drying the resulting coating film, pre-baking the coating film as the case requires, and then baking the coating film, for example, in the air at about 700° C. or higher temperature to form a thin film of a crystalline metal oxide. As a soluble metal compound as the material, an organic metal compound such as a metal alkoxide or its partial hydrolysate, or an organic acid salt or a chelate complex compound has been commonly used. Further, with respect to a cell structure employing the above FeRAM, several cell structures have been proposed, and the one practically used at present is a so-called planar structure, wherein a ferroelectric capacitor and a transistor are connected with local wiring; this structure is disadvantageous in view of reduction of the cell area, i.e. high integration. As a structure which overcomes the above drawback, a stack structure wherein a ferroelectric capacitor is formed on a plug has been proposed, but the reducing atmosphere at the time of formation of the multilevel interconnection causes fatal deterioration of the characteristics of the ferroelectric thin film. Further, as a structure which overcomes such problems, it has been proposed that after formation of the multilevel interconnection, i.e. after completion of the logic process, a ferroelectric thin film and a plate line be formed on an outermost layer. In such a structure, a film is formed on a logic circuit, and accordingly the baking temperature at the time of formation of a ferroelectric thin film is required to be decreased to a level of from 400° C. to 450° C. To cope with the above, various means have been proposed to reduce the crystallization temperature also in preparation of a ferroelectric thin film by the solution method. They may, for example, be a method of appropriately controlling the structure of a precursor as shown in e.g. U.S. Pat. No.
5,925,183, a method of preliminarily adding bismuth silicate as a paraelectric to a coating liquid (Ferroelectrics, vol. 271, p. 289 (2002)), a method of using a lead titanate layer as a seed layer (Jpn. J. Appl. Phys., vol. 35, p. 4,896 (1996)), selection of a proper substrate (J. Am. Ceram. Soc., vol. 75, p. 2,785 (1992)), and a vacuum annealing method (Jpn. J. Appl. Phys., vol. 38, p. 5,346 (1999)). However, the reduction of the baking temperature achievable by such conventional methods is limited to a level of about 550° C. Accordingly, formation of a ferroelectric thin film on a logic circuit, which is required for high integration, has heretofore been considered practically difficult with the solution method. Further, it has been attempted to form a thin film by using a composition wherein fine ferroelectric particles coexist with a soluble metal salt (Jpn. J. Appl. Phys., vol. 41, p. 6,969 (2002)). However, since the fine ferroelectric particles are obtained by long-term mechanical grinding, the crystallinity and the like of the particles tend to decrease, and no desired characteristics have been obtained.
Biogas Plant Could Supply Bournemouth Residents With Fuel

Residents in Bournemouth could have their fuel generated by biogas production in the near future, if plans for a new plant at an existing sewage treatment works are given the go-ahead. Wessex Water is in talks about building the site at the Berry Hill Sewage Treatment Works facility in Watery Lane, north-east of Muscliff, reported the Bournemouth Echo.

The plans come about because the site is "currently producing more biogas than the combined heat and power unit can combust". The planning statement added that a feasibility study "concluded that the best course of action is to install a gas to grid plant at Berry Hill for injection into the gas main". The biogas plant, to be built by Wessex Water Enterprises, would turn sewage sludge into biomethane, using equipment such as tank breather valves. This gas could then be injected into the National Grid, supplying power to residents in the region.

However, Wessex Water might face some objections to its proposals, as the site would be located in the green belt. It would therefore need to demonstrate "special circumstances" for its plans to be approved. While the plant is expected to include an 18-metre-tall column and four further stacks ranging between eight and 14 metres high, the organisation claims there will be little impact on locals, with minimal noise, odours or transport problems.

This comes after the Leader Live revealed that a new biogas facility has been approved in Deeside. Flintshire Council agreed to Logik WTE's proposals for a large waste management plant at Deeside Industrial Park, which is expected to handle 182,000 tons of rubbish a year and produce two megawatts of green energy as a result.
Blizzard isn't just famous for its games, it's famous for a lengthy and considered development output. It's one of a privileged few studios in the world that can say "we'll release the game when it's ready", and really mean it. The agonising wait fans have to endure for new Blizzard games is testament to this.

That philosophy didn't just turn up overnight. Something had to happen to convince Blizzard's leadership that it was the right thing to do. And that thing was StarCraft's torturous development. In this piece, former Blizzard exec Patrick Wyatt looks back and details the mountain of problems the company's sci-fi epic had to overcome to even be playable, let alone a successful commercial product.

The beginnings of StarCraft

During the development of StarCraft, a two-and-a-half-year slog with over a year of crunch time prior to launch, the game was as buggy as a termite nest. While its predecessors (Warcraft I and II) were far more reliable games than their industry peers, StarCraft crashed frequently enough that play-testing was difficult right up until release, and the game continued to require ongoing patching efforts post-launch.

Why? There were sooooo many reasons.

Orcs in space

StarCraft was originally envisioned as a game with modest goals that could fit into a one-year development cycle so that it could be released for Christmas 1996. The project leadership was comprised of the same folks who had started Shattered Nations (video), a turn-based strategy game along the lines of X-COM that Blizzard announced in May 1995 but canceled some months later. The team members were regrouped to build something that could reach market quickly so Blizzard wouldn't have a long gap between game launches.

The decision to rush the game's development seems ludicrous in retrospect, but Allen Adham, the company's president, was under pressure to grow revenue. While Blizzard's early games had been far more successful than expected, that just raised expectations for future growth. Given a short timeframe and limited staff, the StarCraft team's goal was to implement a modest game - something that could best be described as "Orcs in space". A picture from around the time of the E3 game show in Q2 1996 shows the path the game team originally chose:

StarCraft as it appeared in May 1996 at the Electronic Entertainment Expo.

Yeah - I wouldn't play it either.

But a higher-priority project overshadowed StarCraft and stole its developers one by one. Diablo, a role-playing game being developed by Condor Studios in Redwood City, California, was in need of additional help. Condor, a company formed by Dave Brevik along with Max Schaefer and his brother Erich Schaefer, was given a budget of only $1.2 million - ridiculously small even in those days. The Condor team had no hope of making the game they aspired to build, but they did such ground-breaking work in developing something fun that it made sense for Blizzard to acquire Condor, rename it Blizzard North, and start pouring in the money and staff the game really deserved.

Initially Collin Murray, a programmer on StarCraft, and I flew to Redwood City to help, while other developers at Blizzard "HQ" in Irvine, California worked on network "providers" for battle.net, modem and LAN games, as well as the user-interface screens (known as "glue screens" at Blizzard) that performed character creation, game joining, and other meta-game functions.
As Diablo grew in scope, eventually everyone at Blizzard HQ - artists, programmers, designers, sound engineers, testers - worked on the game, until StarCraft had no one left working on the project. Even the project lead was co-opted to finish the game installer that I had half-written but was too busy to complete.

After the launch of Diablo at the end of 1996, StarCraft development was restarted, and everyone got a chance to see where the game was headed, and it wasn't pretty. The game was dated, and not even remotely impressive, particularly compared to projects like Dominion Storm, which looked great in demos at E3 six months before. The massive success of Diablo reset expectations about what Blizzard should strive for: StarCraft became the game that defined Blizzard's strategy of not releasing games until they were ready. But a lot of pain had to occur along the way to prove out this strategy.

Something to prove

With everyone looking critically at StarCraft, it was clear that the project needed to be vastly more ambitious than our previous ground-breaking efforts in defining the future of the real-time strategy (RTS) genre with the first two Warcraft games. At the time of the StarCraft reboot, according to Johnny Wilson, then Editor-in-Chief of Computer Gaming World, the largest-distribution gaming magazine of that time, there were over eighty (80!!) RTS games in development. With so many competitors on our heels, including Westwood Studios, the company that originated the modern RTS play-style, we needed to make something that kicked ass. And we were no longer an underdog; with the successes of Warcraft and Diablo continuing to fill the news, we sure wouldn't be getting any slack from players or the gaming press. In the gaming world you're only ever as good as your last game. We needed to go far beyond what we'd done previously, and that required taking risks.

New faces

Warcraft II had only six core programmers and two support programmers; that was too few for the larger scope of StarCraft, so the dev team grew to include a cadre of new and untested game programmers who needed to learn how to write game code without much mentoring. Our programming leadership was weak: we hadn't yet learned how essential it is to provide guidance to less experienced developers early in the project so they learn much-needed lessons before the game launches, so it was very much a sink-or-swim proposition for the new Padawans. A big part of the problem was just how thin we were on the ground - every programmer was coding like mad to meet goals, with no time for reviews, code-audits, or training.

And not only were there inexperienced junior members on the team, the leader of the StarCraft programming effort had never architected a shipping game engine. Bob Fitch had been programming games for several years with great results, but his previous efforts were game ports, where he worked within an existing engine, and feature programming for Warcraft I and II, which didn't require large-scale engine design. And while he had experience as the tech lead for Shattered Nations, that project was canceled, so its architectural decisions were never validated.

The team was incredibly invested in the project, and put in unheard-of efforts to complete it while sacrificing personal health and family life. I've never been on a project where every member worked so fiercely.
But several key coding decisions in the project, which I'll detail presently, would haunt the programming team for the remainder of the project.

Some things have changed

After spending months working to launch Diablo, and further months of cleanup effort and patching afterwards, I returned to help with the reboot of StarCraft. I wasn't looking forward to diving into another bug-fest, but that's exactly what happened. I thought it would be easy to jump back into the project because I knew the Warcraft code so well - I'd literally worked on every component. I was instead terrified to discover that many components of the engine had been thrown away and partially rewritten. The game's unit classes were in the process of being rewritten from scratch, and the unit dispatcher had been thrown out. The dispatcher is the mechanism I created to ensure that each game unit gets time to plan what it wants to do. Each unit periodically asks: "what should I do now that I finished my current behavior?", "should I re-evaluate the path to get where I'm going?", "is there a better unit to attack instead of the one that I'm targeting now?", "did the user give me a new command?", "I'm dead, how do I clean up after myself?", and so forth.

There are good reasons code needs to be rewritten, but excising old code comes with risks as well. Joel Spolsky said it most eloquently in Things You Should Never Do, Part I:

"It's important to remember that when you start from scratch there is absolutely no reason to believe that you are going to do a better job than you did the first time. First of all, you probably don't even have the same programming team that worked on version one, so you don't actually have 'more experience'. You're just going to make most of the old mistakes again, and introduce some new problems that weren't in the original version."

The Warcraft engine had taken months of programming effort to get right, and while it needed rework for new gameplay features, a fresh programming team was now going to spend a great deal of time relearning lessons about how and why the engine was architected the way it was in the first place.

Game engine architecture

I wrote the original Warcraft engine for Microsoft DOS in C using the Watcom compiler. With the switch to releasing on Microsoft Windows, Bob chose to use the Visual Studio compiler and re-architected the game engine in C++. Both were reasonable choices but for the fact that, at that point, few developers on the team had experience with the language, and more especially with its many pitfalls. Though C++ has strengths, it is easy to misuse. As Bjarne Stroustrup, the language's creator, so famously said: "C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do it blows your whole leg off."

History tells us that programmers feel compelled to try every feature of their new language during the first project, and so it was with class inheritance in StarCraft. Experienced programmers will shudder when seeing the inheritance chain that was designed for the game's units:

CUnit < CDoodad < CFlingy < CThingy

CThingy objects were sprites that could appear anywhere on the game map, but didn't move or have behaviors, while CFlingys were used for creating particles; when an explosion occurred, several of them would spin off in random directions. CDoodad - after 14 years I think this is the class name - was an uninstantiated class that nevertheless had important behaviors required for the proper functioning of derived classes.
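To make the shape of that chain concrete, here is a minimal sketch - every field and method below is invented for illustration, since the real StarCraft source isn't public - of how behavior gets smeared across the layers:

// A reconstruction for illustration only: the class names come from the
// article, but the members are invented.
class CThingy {                      // a sprite placed on the map
public:
    virtual ~CThingy() = default;
    int x = 0, y = 0;                // map position
    int spriteId = 0;                // which image to draw
};

class CFlingy : public CThingy {     // adds particle-style motion
public:
    float dx = 0, dy = 0;            // velocity for explosion debris
    void Fling() { dx = 3; dy = -2; }
};

class CDoodad : public CFlingy {     // never created directly, yet holds
protected:                           // behavior its children depend on
    CDoodad() = default;
    void UpdateAnimation() {}
};
// ...and CUnit derived from CDoodad, as described next.

Even in this toy version, answering a question like "where does a unit's position live?" already means walking several classes up the chain.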
And CUnit was layered on top of that. The behavior of units was scattered all throughout these various modules, and it required an understanding of each class to be able to accomplish anything. And beyond the horror of the class hierarchy, the CUnit class itself was an unholy mess, defined across multiple header files, each of them several hundred lines long, leading to an overall class definition that could at best be called amusing. It wasn't until many years later that the mantra "favor composition over inheritance" gained credence among programmer-kind, but those who worked on StarCraft learned the hard way much earlier.

We're only two months from launch

With its troubled early history, after the reboot the development team was pressured to finish up, and so schedules were bandied about that showed the game could be launched in two months. Given the number of game units and behaviors that needed to be added, the changes necessary to switch from top-down to isometric artwork, a completely new map editor, and the addition of Internet play over battle.net, it was inconceivable that the game actually could ship in that time, even assuming that the art team, designers, sound engineers, game-balancers and testers could finish their end of the bargain. But the programming team continually worked towards shipping in only two months - for the next fourteen months!

The entire team worked long hours, with Bob working stretches of 40 hours, 42 hours, even 48 hours programming. As I recall, no one else attempted these sorts of masochistic endeavors, though everyone was putting in massive, ridiculous hours. My experiences developing Warcraft, with frequent all-nighters coding, and later Diablo, where I coded fourteen-plus-hour days seven days a week for weeks at a time, taught me that there wasn't any point in all-nighters. Any code submissions [ha! what an appropriate word] written after a certain point in the evening would only be regretted and rewritten in the clear light of following days. Working these long hours made people groggy, and that's bad when trying to accomplish knowledge-based tasks requiring an excess of creativity, so there should have been no surprises about the number of mistakes, misfeatures and outright bugs. Incidentally, these sorts of crazy hours weren't mandated - it was just the kind of stuff we did because we wanted to make great games. In retrospect it was foolish - we could have done better work with more reasonable efforts. One of my proudest accomplishments was to ship four Guild Wars campaigns in a two-year window without leading the development team down that dark path.

The most common cause of StarCraft game crashes

While I implemented some important features in StarCraft, including fog-of-war, line-of-sight, flying-unit pathing-repulsion, voice-chat, AI reinforcement points, and others, my primary job gravitated to fixing bugs.

Wait: voice-chat! In 1998?!? Yeah: I had it all working in December 1997. I used a 3rd-party voice-to-phoneme compressor, and wrote the code to send the phonemes across the network, decompress them, and then play them back on the other seven players' computers. But every single sound-card in our offices required a driver upgrade to make it work, if the sound card was even capable of full-duplex sound (simultaneous recording and playback of sounds), so I regretfully made the recommendation to scrap the idea. The tech-support burden would have been so high that we would have spent more money on game support than we would have made selling the game.
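As a rough illustration of that voice-chat pipeline, here is a sketch; the function names and data flow are invented stand-ins (the actual compressor was a third-party product whose API is unknown), stubbed out so the sketch compiles:

#include <cstdint>
#include <vector>

// Invented stand-ins, stubbed for the sketch:
std::vector<int16_t> CaptureMicrophone() { return {}; }           // audio in
std::vector<uint8_t> EncodePhonemes(const std::vector<int16_t> &) { return {}; }
std::vector<int16_t> DecodePhonemes(const std::vector<uint8_t> &) { return {}; }
void BroadcastToPeers(const std::vector<uint8_t> &) {}            // to 7 peers
bool ReceiveFromPeer(std::vector<uint8_t> &) { return false; }
void PlayMixed(const std::vector<int16_t> &) {}                   // audio out

// Called once per game tick: phoneme codes are tiny compared with raw
// samples, which is what made this plausible over a 1997-era modem.
void PumpVoiceChat() {
    BroadcastToPeers(EncodePhonemes(CaptureMicrophone()));

    std::vector<uint8_t> packet;
    while (ReceiveFromPeer(packet))
        PlayMixed(DecodePhonemes(packet));
}

The hard part, as noted above, wasn't a loop like this - it was full-duplex sound-card support on 1997 hardware.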
So anyway, I fixed lots of bugs. Some of my own, sure, but mostly the elusive bugs written by other tired programmers. One of the best compliments I've received came just a few months ago, when Brian Fitzgerald, one of the two best programmers I've had occasion to work with, mentioned a code-review of StarCraft; they were blown away by how many changes and fixes I had made over the entire code-base. At least I got some credit for the effort, if only well after the fact!

Given all the issues working against the team, you might think it was hard to identify a single large source of bugs, but based on my experiences the biggest problems in StarCraft related to the use of doubly-linked lists. Linked lists were used extensively in the engine to track units with shared behavior. With twice the number of units of its predecessor - StarCraft had a maximum of 1600, up from 800 in Warcraft 2 - it became essential to optimize the search for units of specific types by keeping them linked together in lists. Recalling from distant memory, there were lists for each player's units and buildings, lists for each player's "power-generating" buildings, a list for each Carrier's fighter drones, and many, many others.

All of these lists were doubly-linked to make it possible to add and remove elements from the list in constant time - O(1) - without the necessity to traverse the list looking for the element to remove - O(N). Unfortunately, each list was "hand-maintained": there were no shared functions to link and unlink elements from these lists; programmers just manually inlined the link and unlink behavior anywhere it was required. And hand-rolled code is far more error-prone than simply using a routine that's already been debugged. Some of the link fields were shared among several lists, so it was necessary to know exactly which list an object was linked into in order to safely unlink. And some link fields were even stored in C unions with other data types to keep memory utilization to a minimum.

So the game would blow up all the time. All the time.
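Here is a sketch of the hand-maintained pattern being described; the field and function names are invented, but the shape of the bug is the real one - every call site has to pick exactly the right pair of link fields:

// Invented names; the real CUnit had many more links, some in unions.
struct Unit {
    Unit *playerPrev, *playerNext;   // links for the per-player list
    Unit *powerPrev,  *powerNext;    // links for the power-building list
};

// Logic like this was manually inlined wherever a unit left a list. Use
// the player links on a unit sitting in the power list (or forget to
// clear the fields afterwards) and both lists are quietly corrupted,
// crashing the game much later in someone else's code.
void UnlinkFromPowerList(Unit **head, Unit *u) {
    if (u->powerPrev)
        u->powerPrev->powerNext = u->powerNext;
    else
        *head = u->powerNext;
    if (u->powerNext)
        u->powerNext->powerPrev = u->powerPrev;
    u->powerPrev = u->powerNext = nullptr;
}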
But why did you do it that way?

Tragically, there was no need for these linked-list problems to exist. Mike O'Brien, who, along with Jeff Strain, cofounded ArenaNet with me, wrote a library called Storm.DLL, which shipped with Diablo. Among its many features, Storm contained an excellent implementation of doubly-linked lists using templates in C++. During the initial development of StarCraft, that library was used. But early in the development the team ripped out the code and hand-rolled the linked lists, specifically to make writing save-game files easier. Let me talk about save games for a second to make this all clearer.

Save games

Many games that I played before developing Warcraft had crappy save-game functionality. Gamers who played any game created by Origin will remember how looooooong it took to write save-game files. I mean, sure, they were written by slow microprocessors onto hard-drives that - by today's standards - are as different as tricycles and race cars. But there was no reason for them to suck, and I was determined that Warcraft wouldn't have those problems. So Warcraft did some tricks to enable it to write large memory blocks to disk in one chunk instead of meandering through memory writing a bit here and there.

The entire unit array (600 units times a few hundred bytes per unit) could be written to disk in one chunk, and all non-pointer-based global variables could similarly be written in one chunk, as could each of the game-terrain and fog-of-war maps. Oddly enough, this ability to write the units to disk in one chunk wasn't essential to the speed of writing save-game files, though it did drastically simplify the code. But it worked primarily because Warcraft units didn't contain "pointer" data.

StarCraft units, which as mentioned previously contained scads of pointers in the fields for linked lists, were an entirely different beast. It was necessary to fixup all the link pointers (taking special care of unioned pointer fields) so that all 1600 units could be written at once. And then unfixup the link pointers to keep playing. Yuck.
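To make the fixup/unfixup dance concrete, here is a simplified sketch (one link field, invented names) of what writing pointer-laden units to disk in a single chunk requires:

#include <cstdint>

// Sketch only: the real units had many link fields, some unioned.
struct Unit {
    Unit *next;              // intrusive list link
    int   hitPoints;
    // ...a few hundred more bytes per unit...
};

static Unit units[1600];

// Before the block write: turn each pointer into an array index (+1 so
// that 0 still means "null"), making the chunk position-independent.
void FixupForSave() {
    for (Unit &u : units)
        u.next = reinterpret_cast<Unit *>(
            u.next ? (u.next - units) + 1 : 0);
}

// After the write (or after loading): turn indices back into pointers
// so the game can keep running on the same array.
void UnfixupAfterSave() {
    for (Unit &u : units) {
        intptr_t idx = reinterpret_cast<intptr_t>(u.next);
        u.next = idx ? &units[idx - 1] : nullptr;
    }
}

Multiply that by every list and every unioned link field in the game, and "yuck" starts to feel generous.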
Change back!

So after fixing many, many linked-list bugs, I argued vehemently that we should switch back to using Storm's linked lists, even if that made the save-game code more complicated. When I say "argued vehemently", I should mention that was more or less the only way we knew how to argue at Blizzard - with our youthful brashness and arrogant hubris, there was no argument that wasn't vehement, unless it was about what was for lunch that day, which no one much wanted to decide. I didn't win that argument. Since we were only "two months" from shipping, making changes to the engine for the better was regularly passed over in favor of band-aiding existing but sub-optimal solutions. That led to many months of suffering - so much that it affected my approach to coding (for the better) ever since, which is what I'll discuss in part two of this article.

More Band-Aids: path-finding in StarCraft

I wanted to mention one more example of patching over bugs instead of fixing the underlying problem: when StarCraft switched from top-down artwork to isometric artwork, the background tile-graphics rendering engine, which dated back to code I had written in 1993/4, was left unchanged. Rendering isometric-looking tiles using a square tile engine isn't hard, though there are difficulties in getting things like map-editors to work properly, because laying down one map tile on another requires many "edge fixups" - the map editor is trying to place diagonally-shaped images drawn in square tiles. While rendering isn't so bad, isometric path-finding on square tiles was very difficult. Instead of large (32×32 pixel) diagonal tiles that were either passable or impassable, the map had to be broken into tiny 8×8 pixel tiles, multiplying the amount of path-searching by a factor of 16 as well as creating difficulties for larger units that couldn't squeeze down a narrow path. Had Brian Fitzgerald not been a stellar programmer, the path-finding problem would have delayed the game's launch indefinitely. As it was, pathing was one of the problems that was only finalized at the end of the project. I plan to write more about path-finding in StarCraft because there are lots of interesting technical and design bits.

As a game developer with more than 22 years in the industry, Patrick has helped build small companies into big ones (VP of Blizzard, Founder of ArenaNet, COO of En Masse Entertainment); led the design and development efforts for best-selling game series (Warcraft, Diablo, StarCraft, Guild Wars); written code for virtually every aspect of game development (networking, graphics, AI, pathing, sound, tools, installers, servers, databases, ecommerce, analytics, crypto, dev-ops, etc.); designed many aspects of the games he's shipped; run platform services teams (datacenter operations, customer support, billing/accounts, security, analytics); and developed state-of-the-art technologies required to compete in the AAA+ game publishing business.
[Properties of phospholipase A2 from the venom of the large hornet Vespa orientalis].

Some properties (catalytic and hemolytic activity, pH and temperature optima, stability, substrate specificity, effects of detergents and metal ions, N-terminal sequence, chemical modification of histidine in the enzyme active center, etc.) of phospholipase A2 from hornet (Vespa orientalis) venom were studied. It was shown that phospholipase A2 from hornet venom differs essentially from other enzymes of this type in terms of stability, catalytic properties and structural features. The active center of the enzyme contains an essential histidine residue, similar to other phospholipases A2 from various sources. Unlike other known forms of phospholipase A2, the enzyme under study exerts a pronounced hemolytic action. The hemolysis is inhibited by Ca2+ at concentrations capable of activating the hydrolytic activity of the enzyme.
please print. also make a folder for enron india. thanks.

---------------------- Forwarded by Jeffrey A Shankman/HOU/ECT on 10/02/2000 12:23 PM ---------------------------

Rick Bergsieker@ENRON_DEVELOPMENT
09/29/2000 03:19 PM
To: Mike McConnell/HOU/ECT@ECT, Jeffrey A Shankman@ECT
cc: Wade Cline/ENRON_DEVELOPMENT@ENRON_DEVELOPMENT, Clay Harris/ENRON_DEVELOPMENT@ENRON_DEVELOPMENT
Subject: Re: DPC/Metgas Equity Sale

Mike/Jeff: you need to intervene on this at a higher level to make sure that we don't give up any important LNG rights when we sell down DPC

---------------------- Forwarded by Rick Bergsieker/ENRON_DEVELOPMENT on 09/29/2000 03:17 PM ---------------------------

From: Wade Cline
09/28/2000 12:45 PM
To: Clay Harris/ENRON_DEVELOPMENT@ENRON_DEVELOPMENT
cc: Bill Gathmann/ENRON_DEVELOPMENT@ENRON_DEVELOPMENT, Rick Bergsieker/ENRON_DEVELOPMENT@ENRON_DEVELOPMENT, Jeffrey A Shankman@ECT, V V Rao/SIN/ECT@ECT, Sanjay Bhatnagar/ENRON_DEVELOPMENT@ENRON_DEVELOPMENT
Subject: Re: DPC/Metgas Equity Sale

Clay, we've been banging this drum so long in India that we've broken all our drumsticks. We've always maintained that access to the Metgas terminal and storage and pipeline is key to any plans Enron has in the LNG business. Personally, I have always thought Enron should just finance and build the pipeline itself and then sell it once it becomes operational, maintaining access to the terminal and pipeline capacity. Glad to see you're on board, because we can use the help. But this is going to require some Enron commitment. To meet Enron's objectives (no debt consolidation, cash flow and earnings), the current selldowns being considered are to industry and related players who want their share of the upsides of the business. So giving Enron any preferred rights is going to be tough with these players. Let's discuss next week in India. I have requested that the Metgas meetings with the EGM LNG team be on Wednesday.

Clay Harris
09/29/2000 03:41 AM
To: Wade Cline/ENRON_DEVELOPMENT@ENRON_DEVELOPMENT, Bill Gathmann/ENRON_DEVELOPMENT@ENRON_DEVELOPMENT
cc: Rick Bergsieker/ENRON_DEVELOPMENT@ENRON_DEVELOPMENT, Jeffrey A Shankman@ECT, V V Rao/SIN/ECT@ECT
Subject: DPC/Metgas Equity Sale

One of the keys to growing the LNG business is access to LNG terminals and fuel supply opportunities. We should ensure that the sell-down of DPC and Metgas be structured such that Enron/EGM retain terminal rights and fuel supply rights at Dabhol, if appropriate. The Metgas terminal/regas rights under the TARA and EFI's fuel management/fuel supply rights under the Fuel Management Agreement need to stay with Enron/EGM. Let's discuss next week in Mumbai.

Clay
Molsidomine and lipid metabolism.

A postmarketing surveillance study was carried out in 3,757 patients suffering from chronic stable angina pectoris or angina at rest and treated with molsidomine alone or in combination with other antianginal drugs. During the 12-month survey, there was a significant decrease in total cholesterol, triglycerides, apolipoprotein B and the B/A1 ratio, but no change in apolipoprotein A1. These results indicate that further investigations should be undertaken to analyze the possible favorable effect of molsidomine on lipid metabolism.
1. Field of the Invention

The invention relates to a multi-beamlet multi-column particle-optical system, in particular a multi-beamlet multi-column particle-optical system comprising a plurality of multi-beamlet particle-optical columns wherein at least one of the multi-beamlet particle-optical columns comprises an electrode element having an aperture of a noncircular shape. The invention further relates to a method of exposing a substrate by multi-beam multi-column exposure using the multi-beamlet multi-column particle-optical system.

2. Brief Description of Related Art

The increasing demand for ever smaller and more complex microstructured devices, and the continuing demand for an increase of throughput in their manufacturing and inspection processes, have been an incentive for the development of particle-optical systems that use multiple charged-particle beamlets in place of a single charged-particle beam, thus significantly improving the throughput of such systems. The use of multiple beamlets is associated with a whole range of new challenges for the design of particle-optical components, arrangements and systems, such as microscopes and lithography systems. A particle-optical arrangement for forming a plurality of charged-particle beamlets wherein the beamlets are arranged in an array pattern is described in U.S. Pat. Nos. 5,369,282 and 5,399,872, for instance.

Multi-beamlet particle-optical systems make use of a pattern of multiple charged-particle beamlets focused on a substrate to be exposed. For example, in an inspection system, a single beam of charged particles is provided by a particle source or, alternatively, multiple beamlets may be provided by an array of charged-particle sources. The beam or beamlets are then typically directed onto a multi-aperture plate having a plurality of apertures formed therein, for generating multiple beamlets from those charged particles of the single beam or beamlets that pass through the apertures of the multi-aperture plate. The multiple beamlets are generally subsequently focused on the substrate, typically by means of a focussing particle-optical lens downstream of the multi-aperture plate. An array of charged-particle spots is thus formed on the substrate. Secondary charged particles such as secondary electrons may be emitted by the substrate to be inspected, follow a secondary beamlet path, and be incident on a detector.

Furthermore, in particle-optical lithography, methods of so-called maskless lithography have been established which, for instance, make use of a blanking aperture array. Such a blanking aperture array typically comprises a multi-aperture plate wherein each of a plurality of apertures is further equipped with a deflecting arrangement, generally comprising electrodes, which, in a "switched-on" or activated state, is capable of deflecting a beamlet passing through the respective aperture from the beam path of the beamlets to such an extent that it does not reach the specimen and does not contribute to an exposure of the substrate or specimen. The deflecting arrangement of each individual aperture can thus be switched between a de-activated state, which lets a beamlet pass undisturbed through the respective aperture, and a switched-on state, in which a passing beamlet is deflected away from the beam path and is incident on an obstacle in the form of a non-transmitting portion of an aperture or the like, such that it will not be incident onto the specimen.
By suitable movement of the blanking aperture array relative to the specimen to be exposed, and suitable switching sequences of the individual apertures, a pattern can be generated and written onto the specimen, as described, for instance, in US 2003/0155534 A1, the entire content of which is incorporated by reference herein.

In addition to using a plurality of beamlets, systems employing a plurality of two or more particle-optical systems, or columns, operating in parallel to simultaneously expose or inspect the same substrate are being developed. Given that, due to interactions of charged particles, the throughput and performance of an individual particle-optical system (column) are generally limited by a maximum acceptable current of charged particles in the system, the multi-column approach makes it possible to increase the throughput of such a particle-optical lithography system without further increasing the current through an individual column, and therefore avoids a decrease in performance due to space-charge effects. Thus, multi-column particle-optical systems comprise a plurality of particle-optical columns which each, in terms of their components and their arrangement, largely correspond to a conventional particle-optical system as described above. An example of a multi-column particle-optical system is described in US Patent Application publication number US 2005/0104013 A1, the entire content of which is incorporated by reference herein.

Using an array or pattern of beamlets of charged particles requires a multi-beamlet particle-optical system to provide those beamlets in a reliable and accurate manner, such that the individual beamlets show little, if any, variation in intensity, deviation from a predetermined position within the array and target position on a substrate, or variation in optical properties, such as aberrations and the like. The quality of the pattern of beamlets and, correspondingly, the quality of the pattern of charged-particle spots generated in an image or substrate plane will generally depend, amongst other things, on properties of the beamlet-generating arrangement used as well as on characteristics of the focussing arrangement, such as a lens.

In addition, external factors originating in the environment of the particle-optical system may also influence its performance. An example of such an external factor exerting a negative influence on imaging performance is electromagnetic fields from outside the charged-particle system penetrating into the system, as discussed, for instance, in US 2005/0072933 A1, the entire content of which is incorporated by reference herein. The electrostatic lens system described therein comprises an electrostatic lens arrangement having more than three electrode elements which are arranged coaxially in series along an optical axis of the electrostatic lens arrangement. Additional shielding is provided by provision of an outer member ring to fill a space between two adjacent electrode elements, thus preventing intrusion of interfering electromagnetic fields. The system described therein is a single-column system.

In multi-column systems, additional problems arise from the close arrangement of individual columns and their electrostatic and/or electromagnetic fields, which may cause interferences in neighbouring columns.
These interferences, for instance a disturbance of a focussing electrostatic and/or electromagnetic field, may cause imaging errors such as particle-optical aberrations and thus degrade imaging performance.

It is therefore an object of the present invention to provide a multi-beamlet multi-column particle-optical system providing an improved imaging performance. It is a further object of the present invention to provide a multi-beamlet multi-column particle-optical system configured to decrease the influence of one or more neighbouring columns on imaging performance. It is another object to provide an improved method of multi-beam multi-column particle-optical exposure.
1. Field of the Invention

The invention relates generally to integrated circuits, and in particular to function blocks for use in integrated circuits such as gate arrays.

2. Related Art

Gate arrays are popular among integrated circuit (IC) designers as a generally economical way of customizing ICs to obtain application-specific integrated circuits. Gate arrays are generally composed of a predefined matrix (or array) of configurable transistor blocks or, in general, function blocks, which can be formed into a specified circuit by interconnecting them.

Of great importance to an IC designer in implementing circuit designs with a gate array is the functionality available from the gate array. That is, the IC designer may have circuit designs which include a large number of different combinational functions (e.g., Boolean logic), sequential functions (e.g., flip-flops, latches), and/or memory functions (e.g., SRAM), and the designer would prefer a gate array which efficiently implements a significant majority of his or her design so that the overall design is implemented in the smallest space possible. Since gate arrays are formed of a matrix of function blocks, the functionality available in each gate array will be primarily determined by the function-block architecture. Also important to an IC designer is customization time. Particularly during the design stages, the IC designer wants to obtain a model, or prototype, of his or her designs quickly so that the designs can be tested and used with other circuitry.

One approach to gate arrays is to create a function block with primarily freestanding transistors, that is, transistors that have few, if any, internal connections to one another within the function block. The transistors within such a function block often vary in size and drive capability with respect to one another to aid in achieving various functions. In order to customize a function block with freestanding transistors, routing of connections between the transistors within the function block must be undertaken as specified by an IC designer. There are generally three to five layers of connecting wires formed over the transistor layer, and each layer requires at least two masking steps to form (one step to form vias to the layer below and one step to form connecting wires). Thus, six to ten masking steps must be undertaken to fully customize a gate array of this type. So although this approach allows for circuit flexibility, permitting implementation of combinational and sequential functions as well as memory functions, it bears additional costs due to the multiple masking and routing steps. In addition, because of the multiple masking steps required, production time for customizing the gate array can be considerable.

A second approach to gate arrays, and one having a more rapid customization time, is field-programmable gate arrays (FPGAs). The function-block configuration in an FPGA is often composed of a fixed circuit of multiplexers and other logic gates and is usually arranged such that varying the input signals to the function block will form various useful functions. Thus, to customize a gate array, an IC designer can specify signals to be coupled to the inputs and outputs of each function block. FPGA customization time tends to be more rapid than for other types of gate arrays because the transistor layer and all connection layers (all vias and wires) are fixed.
Also fixed, and in between the function blocks in the matrix, is an interconnect structure formed of a plurality of intersecting wires. At each intersection is either a fuse or a programmable RAM bit. Thus, to program function-block functionality (i.e., to control the input signals to each function block), either a fuse is stressed to melt and form a connection at the intersection, or a RAM bit is programmed to form this connection. Since the entire FPGA structure is fixed by the manufacturer, no additional mask steps are required, and FPGA programming can actually be done by the IC designer with equipment and software at his or her own place of business. Commonly, an IC designer will specify a function (often from a library) which the designer wishes the function block to perform, and the signals to be coupled to function-block inputs and outputs are then determined and programmed by software.

Despite rapid and easy customization, FPGAs currently available have drawbacks. First, FPGAs are often used in intermediate design steps for test purposes, but often cannot be used in a final product: because of the nature of the FPGA interconnect structure, an FPGA often will not meet the performance expectations of the final product (e.g., timing) and thus has only limited use in test situations. Second, few, if any, FPGA manufacturers have developed a function-block architecture which can fully support the functionality (e.g., combinational, sequential, and memory functions) required by an IC designer. Almost all FPGA producers produce function blocks capable of implementing a variety of combinational circuits (e.g., Boolean functions). A few FPGA suppliers, in addition to providing circuits capable of combinational logic, will also provide distinct function blocks for sequential logic (e.g., flip-flops, latches) spaced periodically throughout the FPGA array. While providing the designer with periodic function blocks for sequential-function support is helpful, these sequential function blocks may not be in an ideal location with respect to other function blocks (e.g., those supporting combinational functions), may not occur often enough to adequately support IC designs, and particularly may be less than ideal with respect to routing, timing, and other placement issues. Other FPGA providers provide function blocks which can support both combinational and sequential functions. However, these function blocks are usually designed so that the circuitry supporting each of these function types is separate and distinct within the function block. While providing more options to the designer, this approach will significantly limit gate arrays in size, since each function block takes up considerably more space in accommodating distinct circuitry to support each function type. Nonetheless, most FPGA providers using this approach still tend to only place function blocks containing both combinational and sequential logic at periodic intervals throughout the array.

As IC designers create more and more complex IC designs, they are demanding more functional capabilities from gate arrays, while further demanding that customization time remain low, that gate-array die size remain small, and that device reliability remain high. So, although available gate arrays allow some flexibility to the IC designer, improved architectures for gate arrays are always desirable. Particularly desirable is any architectural design that allows increased flexibility and functionality while reducing customization time.
This model kit requires assembly. Cement, paint and other construction materials not included unless specifically stated in the description. The Israeli Air Force is the world's second-largest operator of F-16s, just behind the U.S. Air Force. Israel was among the first countries to receive the F-16A in 1981, courtesy of the American Foreign Military Sales arrangement Peace Marble I. By April of that year, the F-16 scored its first air-to-air "kills": a Syrian Mi-8 and a MiG-21 "Fishbed." This limited edition kit features a new resin chaff/flare dispenser and antenna parts.
/* linux/drivers/i2c/busses/i2c-s3c2410.c
 *
 * Copyright (C) 2004,2005,2009 Simtec Electronics
 *	Ben Dooks <ben@simtec.co.uk>
 *
 * S3C2410 I2C Controller
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/i2c.h>
#include <linux/init.h>
#include <linux/time.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/clk.h>
#include <linux/cpufreq.h>
#include <linux/slab.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/pinctrl/consumer.h>
#include <linux/mfd/syscon.h>
#include <linux/regmap.h>

#include <asm/irq.h>

#include <linux/platform_data/i2c-s3c2410.h>

/* see s3c2410x user guide, v1.1, section 9 (p447) for more info */

#define S3C2410_IICCON			0x00
#define S3C2410_IICSTAT			0x04
#define S3C2410_IICADD			0x08
#define S3C2410_IICDS			0x0C
#define S3C2440_IICLC			0x10

#define S3C2410_IICCON_ACKEN		(1 << 7)
#define S3C2410_IICCON_TXDIV_16		(0 << 6)
#define S3C2410_IICCON_TXDIV_512	(1 << 6)
#define S3C2410_IICCON_IRQEN		(1 << 5)
#define S3C2410_IICCON_IRQPEND		(1 << 4)
#define S3C2410_IICCON_SCALE(x)		((x) & 0xf)
#define S3C2410_IICCON_SCALEMASK	(0xf)

#define S3C2410_IICSTAT_MASTER_RX	(2 << 6)
#define S3C2410_IICSTAT_MASTER_TX	(3 << 6)
#define S3C2410_IICSTAT_SLAVE_RX	(0 << 6)
#define S3C2410_IICSTAT_SLAVE_TX	(1 << 6)
#define S3C2410_IICSTAT_MODEMASK	(3 << 6)

#define S3C2410_IICSTAT_START		(1 << 5)
#define S3C2410_IICSTAT_BUSBUSY		(1 << 5)
#define S3C2410_IICSTAT_TXRXEN		(1 << 4)
#define S3C2410_IICSTAT_ARBITR		(1 << 3)
#define S3C2410_IICSTAT_ASSLAVE		(1 << 2)
#define S3C2410_IICSTAT_ADDR0		(1 << 1)
#define S3C2410_IICSTAT_LASTBIT		(1 << 0)

#define S3C2410_IICLC_SDA_DELAY0	(0 << 0)
#define S3C2410_IICLC_SDA_DELAY5	(1 << 0)
#define S3C2410_IICLC_SDA_DELAY10	(2 << 0)
#define S3C2410_IICLC_SDA_DELAY15	(3 << 0)
#define S3C2410_IICLC_SDA_DELAY_MASK	(3 << 0)

#define S3C2410_IICLC_FILTER_ON		(1 << 2)

/* Treat S3C2410 as baseline hardware, anything else is supported via quirks */
#define QUIRK_S3C2440		(1 << 0)
#define QUIRK_HDMIPHY		(1 << 1)
#define QUIRK_NO_GPIO		(1 << 2)
#define QUIRK_POLL		(1 << 3)

/* Max time to wait for bus to become idle after a xfer (in us) */
#define S3C2410_IDLE_TIMEOUT	5000

/* Exynos5 Sysreg offset */
#define EXYNOS5_SYS_I2C_CFG	0x0234

/* i2c controller state */
enum s3c24xx_i2c_state {
	STATE_IDLE,
	STATE_START,
	STATE_READ,
	STATE_WRITE,
	STATE_STOP
};

struct s3c24xx_i2c {
	wait_queue_head_t	wait;
	kernel_ulong_t		quirks;
	unsigned int		suspended:1;

	struct i2c_msg		*msg;
	unsigned int		msg_num;
	unsigned int		msg_idx;
	unsigned int		msg_ptr;

	unsigned int		tx_setup;
	unsigned int		irq;

	enum s3c24xx_i2c_state	state;
	unsigned long		clkrate;

	void __iomem		*regs;
	struct clk		*clk;
	struct device		*dev;
	struct i2c_adapter	adap;

	struct s3c2410_platform_i2c	*pdata;
	int			gpios[2];
	struct pinctrl		*pctrl;
#if defined(CONFIG_ARM_S3C24XX_CPUFREQ)
	struct notifier_block	freq_transition;
#endif
	struct regmap		*sysreg;
	unsigned int		sys_i2c_cfg;
};
static const struct platform_device_id s3c24xx_driver_ids[] = {
	{
		.name		= "s3c2410-i2c",
		.driver_data	= 0,
	}, {
		.name		= "s3c2440-i2c",
		.driver_data	= QUIRK_S3C2440,
	}, {
		.name		= "s3c2440-hdmiphy-i2c",
		.driver_data	= QUIRK_S3C2440 | QUIRK_HDMIPHY | QUIRK_NO_GPIO,
	}, { },
};
MODULE_DEVICE_TABLE(platform, s3c24xx_driver_ids);

static int i2c_s3c_irq_nextbyte(struct s3c24xx_i2c *i2c, unsigned long iicstat);

#ifdef CONFIG_OF
static const struct of_device_id s3c24xx_i2c_match[] = {
	{ .compatible = "samsung,s3c2410-i2c", .data = (void *)0 },
	{ .compatible = "samsung,s3c2440-i2c", .data = (void *)QUIRK_S3C2440 },
	{ .compatible = "samsung,s3c2440-hdmiphy-i2c",
	  .data = (void *)(QUIRK_S3C2440 | QUIRK_HDMIPHY | QUIRK_NO_GPIO) },
	{ .compatible = "samsung,exynos5440-i2c",
	  .data = (void *)(QUIRK_S3C2440 | QUIRK_NO_GPIO) },
	{ .compatible = "samsung,exynos5-sata-phy-i2c",
	  .data = (void *)(QUIRK_S3C2440 | QUIRK_POLL | QUIRK_NO_GPIO) },
	{},
};
MODULE_DEVICE_TABLE(of, s3c24xx_i2c_match);
#endif

/*
 * Get controller type either from device tree or platform device variant.
 */
static inline kernel_ulong_t s3c24xx_get_device_quirks(struct platform_device *pdev)
{
	if (pdev->dev.of_node) {
		const struct of_device_id *match;

		match = of_match_node(s3c24xx_i2c_match, pdev->dev.of_node);
		return (kernel_ulong_t)match->data;
	}

	return platform_get_device_id(pdev)->driver_data;
}

/*
 * Complete the message and wake up the caller, using the given return code,
 * or zero to mean ok.
 */
static inline void s3c24xx_i2c_master_complete(struct s3c24xx_i2c *i2c, int ret)
{
	dev_dbg(i2c->dev, "master_complete %d\n", ret);

	i2c->msg_ptr = 0;
	i2c->msg = NULL;
	i2c->msg_idx++;
	i2c->msg_num = 0;
	if (ret)
		i2c->msg_idx = ret;

	if (!(i2c->quirks & QUIRK_POLL))
		wake_up(&i2c->wait);
}

static inline void s3c24xx_i2c_disable_ack(struct s3c24xx_i2c *i2c)
{
	unsigned long tmp;

	tmp = readl(i2c->regs + S3C2410_IICCON);
	writel(tmp & ~S3C2410_IICCON_ACKEN, i2c->regs + S3C2410_IICCON);
}

static inline void s3c24xx_i2c_enable_ack(struct s3c24xx_i2c *i2c)
{
	unsigned long tmp;

	tmp = readl(i2c->regs + S3C2410_IICCON);
	writel(tmp | S3C2410_IICCON_ACKEN, i2c->regs + S3C2410_IICCON);
}

/* irq enable/disable functions */
static inline void s3c24xx_i2c_disable_irq(struct s3c24xx_i2c *i2c)
{
	unsigned long tmp;

	tmp = readl(i2c->regs + S3C2410_IICCON);
	writel(tmp & ~S3C2410_IICCON_IRQEN, i2c->regs + S3C2410_IICCON);
}

static inline void s3c24xx_i2c_enable_irq(struct s3c24xx_i2c *i2c)
{
	unsigned long tmp;

	tmp = readl(i2c->regs + S3C2410_IICCON);
	writel(tmp | S3C2410_IICCON_IRQEN, i2c->regs + S3C2410_IICCON);
}

static bool is_ack(struct s3c24xx_i2c *i2c)
{
	int tries;

	for (tries = 50; tries; --tries) {
		if (readl(i2c->regs + S3C2410_IICCON)
			& S3C2410_IICCON_IRQPEND) {
			if (!(readl(i2c->regs + S3C2410_IICSTAT)
				& S3C2410_IICSTAT_LASTBIT))
				return true;
		}
		usleep_range(1000, 2000);
	}
	dev_err(i2c->dev, "ack was not received\n");
	return false;
}

/*
 * put the start of a message onto the bus
 */
static void s3c24xx_i2c_message_start(struct s3c24xx_i2c *i2c,
				      struct i2c_msg *msg)
{
	unsigned int addr = (msg->addr & 0x7f) << 1;
	unsigned long stat;
	unsigned long iiccon;

	stat = 0;
	stat |= S3C2410_IICSTAT_TXRXEN;

	if (msg->flags & I2C_M_RD) {
		stat |= S3C2410_IICSTAT_MASTER_RX;
		addr |= 1;
	} else
		stat |= S3C2410_IICSTAT_MASTER_TX;

	if (msg->flags & I2C_M_REV_DIR_ADDR)
		addr ^= 1;

	/* todo - check for whether ack wanted or not */
	s3c24xx_i2c_enable_ack(i2c);

	iiccon = readl(i2c->regs + S3C2410_IICCON);
	writel(stat, i2c->regs + S3C2410_IICSTAT);

	dev_dbg(i2c->dev, "START: %08lx to IICSTAT, %02x to DS\n", stat, addr);
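	/*
	 * Note on the sequence below: the byte written to IICDS is the
	 * 7-bit slave address shifted left by one with the read/write
	 * flag in bit 0 (set up above); raising the START bit in IICSTAT
	 * afterwards is what makes the controller transmit that byte as
	 * the I2C address phase.
	 */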
	writeb(addr, i2c->regs + S3C2410_IICDS);

	/*
	 * delay here to ensure the data byte has gotten onto the bus
	 * before the transaction is started
	 */
	ndelay(i2c->tx_setup);

	dev_dbg(i2c->dev, "iiccon, %08lx\n", iiccon);
	writel(iiccon, i2c->regs + S3C2410_IICCON);

	stat |= S3C2410_IICSTAT_START;
	writel(stat, i2c->regs + S3C2410_IICSTAT);

	if (i2c->quirks & QUIRK_POLL) {
		while ((i2c->msg_num != 0) && is_ack(i2c)) {
			i2c_s3c_irq_nextbyte(i2c, stat);
			stat = readl(i2c->regs + S3C2410_IICSTAT);

			if (stat & S3C2410_IICSTAT_ARBITR)
				dev_err(i2c->dev, "deal with arbitration loss\n");
		}
	}
}

static inline void s3c24xx_i2c_stop(struct s3c24xx_i2c *i2c, int ret)
{
	unsigned long iicstat = readl(i2c->regs + S3C2410_IICSTAT);

	dev_dbg(i2c->dev, "STOP\n");

	/*
	 * The datasheet says that the STOP sequence should be:
	 * 1) I2CSTAT.5 = 0	- Clear BUSY (or 'generate STOP')
	 * 2) I2CCON.4 = 0	- Clear IRQPEND
	 * 3) Wait until the stop condition takes effect.
	 * 4*) I2CSTAT.4 = 0	- Clear TXRXEN
	 *
	 * Where, step "4*" is only for buses with the "HDMIPHY" quirk.
	 *
	 * However, after much experimentation, it appears that:
	 * a) normal buses automatically clear BUSY and transition from
	 *    Master->Slave when they complete generating a STOP condition.
	 *    Therefore, step (3) can be done in doxfer() by polling I2CCON.4
	 *    after starting the STOP generation here.
	 * b) HDMIPHY bus does neither, so there is no way to do step 3.
	 *    There is no indication when this bus has finished generating
	 *    STOP.
	 *
	 * In fact, we have found that as soon as the IRQPEND bit is cleared
	 * in step 2, the HDMIPHY bus generates the STOP condition, and then
	 * immediately starts transferring another data byte, even though the
	 * bus is supposedly stopped. This is presumably because the bus is
	 * still in "Master" mode, and its BUSY bit is still set.
	 *
	 * To avoid these extra post-STOP transactions on HDMI phy devices, we
	 * just disable Serial Output on the bus (I2CSTAT.4 = 0) directly,
	 * instead of first generating a proper STOP condition. This should
	 * float SDA & SCK terminating the transfer. Subsequent transfers
	 * start with a proper START condition, and proceed normally.
	 *
	 * The HDMIPHY bus is an internal bus that always has exactly two
	 * devices, the host as Master and the HDMIPHY device as the slave.
	 * Skipping the STOP condition has been tested on this bus and works.
	 */
	if (i2c->quirks & QUIRK_HDMIPHY) {
		/* Stop driving the I2C pins */
		iicstat &= ~S3C2410_IICSTAT_TXRXEN;
	} else {
		/* stop the transfer */
		iicstat &= ~S3C2410_IICSTAT_START;
	}
	writel(iicstat, i2c->regs + S3C2410_IICSTAT);

	i2c->state = STATE_STOP;

	s3c24xx_i2c_master_complete(i2c, ret);
	s3c24xx_i2c_disable_irq(i2c);
}

/*
 * helper functions to determine the current state in the set of
 * messages we are sending
 */

/*
 * returns TRUE if the current message is the last in the set
 */
static inline int is_lastmsg(struct s3c24xx_i2c *i2c)
{
	return i2c->msg_idx >= (i2c->msg_num - 1);
}

/*
 * returns TRUE if this is the last byte in the current message
 */
static inline int is_msglast(struct s3c24xx_i2c *i2c)
{
	/*
	 * msg->len is always 1 for the first byte of smbus block read.
	 * Actual length will be read from slave. More bytes will be
	 * read according to the length then.
	 */
	if (i2c->msg->flags & I2C_M_RECV_LEN && i2c->msg->len == 1)
		return 0;

	return i2c->msg_ptr == i2c->msg->len - 1;
}

/*
 * returns TRUE if we reached the end of the current message
 */
static inline int is_msgend(struct s3c24xx_i2c *i2c)
{
	return i2c->msg_ptr >= i2c->msg->len;
}

/*
 * process an interrupt and work out what to do
 */
static int i2c_s3c_irq_nextbyte(struct s3c24xx_i2c *i2c, unsigned long iicstat)
{
	unsigned long tmp;
	unsigned char byte;
	int ret = 0;

	switch (i2c->state) {
	case STATE_IDLE:
		dev_err(i2c->dev, "%s: called in STATE_IDLE\n", __func__);
		goto out;

	case STATE_STOP:
		dev_err(i2c->dev, "%s: called in STATE_STOP\n", __func__);
		s3c24xx_i2c_disable_irq(i2c);
		goto out_ack;

	case STATE_START:
		/*
		 * last thing we did was send a start condition on the
		 * bus, or started a new i2c message
		 */
		if (iicstat & S3C2410_IICSTAT_LASTBIT &&
		    !(i2c->msg->flags & I2C_M_IGNORE_NAK)) {
			/* ack was not received... */
			dev_dbg(i2c->dev, "ack was not received\n");
			s3c24xx_i2c_stop(i2c, -ENXIO);
			goto out_ack;
		}

		if (i2c->msg->flags & I2C_M_RD)
			i2c->state = STATE_READ;
		else
			i2c->state = STATE_WRITE;

		/*
		 * Terminate the transfer if there is nothing to do
		 * as this is used by the i2c probe to find devices.
		 */
		if (is_lastmsg(i2c) && i2c->msg->len == 0) {
			s3c24xx_i2c_stop(i2c, 0);
			goto out_ack;
		}

		if (i2c->state == STATE_READ)
			goto prepare_read;

		/*
		 * fall through to the write state, as we will need to
		 * send a byte as well
		 */

	case STATE_WRITE:
		/*
		 * we are writing data to the device... check for the
		 * end of the message, and if so, work out what to do
		 */
		if (!(i2c->msg->flags & I2C_M_IGNORE_NAK)) {
			if (iicstat & S3C2410_IICSTAT_LASTBIT) {
				dev_dbg(i2c->dev, "WRITE: No Ack\n");

				s3c24xx_i2c_stop(i2c, -ECONNREFUSED);
				goto out_ack;
			}
		}

 retry_write:
		if (!is_msgend(i2c)) {
			byte = i2c->msg->buf[i2c->msg_ptr++];
			writeb(byte, i2c->regs + S3C2410_IICDS);

			/*
			 * delay after writing the byte to allow the
			 * data setup time on the bus, as writing the
			 * data to the register causes the first bit
			 * to appear on SDA, and SCL will change as
			 * soon as the interrupt is acknowledged
			 */
			ndelay(i2c->tx_setup);

		} else if (!is_lastmsg(i2c)) {
			/* we need to go to the next i2c message */

			dev_dbg(i2c->dev, "WRITE: Next Message\n");

			i2c->msg_ptr = 0;
			i2c->msg_idx++;
			i2c->msg++;

			/* check to see if we need to do another message */
			if (i2c->msg->flags & I2C_M_NOSTART) {
				if (i2c->msg->flags & I2C_M_RD) {
					/*
					 * cannot do this, the controller
					 * forces us to send a new START
					 * when we change direction
					 */
					s3c24xx_i2c_stop(i2c, -EINVAL);
				}

				goto retry_write;
			} else {
				/* send the new start */
				s3c24xx_i2c_message_start(i2c, i2c->msg);
				i2c->state = STATE_START;
			}

		} else {
			/* send stop */
			s3c24xx_i2c_stop(i2c, 0);
		}
		break;

	case STATE_READ:
		/*
		 * we have a byte of data in the data register, do
		 * something with it, and then work out whether we are
		 * going to do any more read/write
		 */
		byte = readb(i2c->regs + S3C2410_IICDS);
		i2c->msg->buf[i2c->msg_ptr++] = byte;

		/* Add actual length to read for smbus block read */
		if (i2c->msg->flags & I2C_M_RECV_LEN && i2c->msg->len == 1)
			i2c->msg->len += byte;

 prepare_read:
		if (is_msglast(i2c)) {
			/* last byte of buffer */
			if (is_lastmsg(i2c))
				s3c24xx_i2c_disable_ack(i2c);

		} else if (is_msgend(i2c)) {
			/*
			 * ok, we've read the entire buffer, see if there
			 * is anything else we need to do
			 */
			if (is_lastmsg(i2c)) {
				/* last message, send stop and complete */
				dev_dbg(i2c->dev, "READ: Send Stop\n");

				s3c24xx_i2c_stop(i2c, 0);
			} else {
				/* go to the next transfer */
				dev_dbg(i2c->dev, "READ: Next Transfer\n");

				i2c->msg_ptr = 0;
				i2c->msg_idx++;
/*
 * process an interrupt and work out what to do
 */
static int i2c_s3c_irq_nextbyte(struct s3c24xx_i2c *i2c, unsigned long iicstat)
{
	unsigned long tmp;
	unsigned char byte;
	int ret = 0;

	switch (i2c->state) {

	case STATE_IDLE:
		dev_err(i2c->dev, "%s: called in STATE_IDLE\n", __func__);
		goto out;

	case STATE_STOP:
		dev_err(i2c->dev, "%s: called in STATE_STOP\n", __func__);
		s3c24xx_i2c_disable_irq(i2c);
		goto out_ack;

	case STATE_START:
		/*
		 * last thing we did was send a start condition on the
		 * bus, or started a new i2c message
		 */
		if (iicstat & S3C2410_IICSTAT_LASTBIT &&
		    !(i2c->msg->flags & I2C_M_IGNORE_NAK)) {
			/* ack was not received... */
			dev_dbg(i2c->dev, "ack was not received\n");
			s3c24xx_i2c_stop(i2c, -ENXIO);
			goto out_ack;
		}

		if (i2c->msg->flags & I2C_M_RD)
			i2c->state = STATE_READ;
		else
			i2c->state = STATE_WRITE;

		/*
		 * Terminate the transfer if there is nothing to do
		 * as this is used by the i2c probe to find devices.
		 */
		if (is_lastmsg(i2c) && i2c->msg->len == 0) {
			s3c24xx_i2c_stop(i2c, 0);
			goto out_ack;
		}

		if (i2c->state == STATE_READ)
			goto prepare_read;

		/*
		 * fall through to the write state, as we will need to
		 * send a byte as well
		 */

	case STATE_WRITE:
		/*
		 * we are writing data to the device... check for the
		 * end of the message, and if so, work out what to do
		 */
		if (!(i2c->msg->flags & I2C_M_IGNORE_NAK)) {
			if (iicstat & S3C2410_IICSTAT_LASTBIT) {
				dev_dbg(i2c->dev, "WRITE: No Ack\n");

				s3c24xx_i2c_stop(i2c, -ECONNREFUSED);
				goto out_ack;
			}
		}

 retry_write:
		if (!is_msgend(i2c)) {
			byte = i2c->msg->buf[i2c->msg_ptr++];
			writeb(byte, i2c->regs + S3C2410_IICDS);

			/*
			 * delay after writing the byte to allow the
			 * data setup time on the bus, as writing the
			 * data to the register causes the first bit
			 * to appear on SDA, and SCL will change as
			 * soon as the interrupt is acknowledged
			 */
			ndelay(i2c->tx_setup);

		} else if (!is_lastmsg(i2c)) {
			/* we need to go to the next i2c message */

			dev_dbg(i2c->dev, "WRITE: Next Message\n");

			i2c->msg_ptr = 0;
			i2c->msg_idx++;
			i2c->msg++;

			/* check to see if we need to do another message */
			if (i2c->msg->flags & I2C_M_NOSTART) {

				if (i2c->msg->flags & I2C_M_RD) {
					/*
					 * cannot do this, the controller
					 * forces us to send a new START
					 * when we change direction
					 */
					s3c24xx_i2c_stop(i2c, -EINVAL);
				}

				goto retry_write;
			} else {
				/* send the new start */
				s3c24xx_i2c_message_start(i2c, i2c->msg);
				i2c->state = STATE_START;
			}

		} else {
			/* send stop */
			s3c24xx_i2c_stop(i2c, 0);
		}
		break;

	case STATE_READ:
		/*
		 * we have a byte of data in the data register, do
		 * something with it, and then work out whether we are
		 * going to do any more read/write
		 */
		byte = readb(i2c->regs + S3C2410_IICDS);
		i2c->msg->buf[i2c->msg_ptr++] = byte;

		/* Add actual length to read for smbus block read */
		if (i2c->msg->flags & I2C_M_RECV_LEN && i2c->msg->len == 1)
			i2c->msg->len += byte;
 prepare_read:
		if (is_msglast(i2c)) {
			/* last byte of buffer */

			if (is_lastmsg(i2c))
				s3c24xx_i2c_disable_ack(i2c);

		} else if (is_msgend(i2c)) {
			/*
			 * ok, we've read the entire buffer, see if there
			 * is anything else we need to do
			 */
			if (is_lastmsg(i2c)) {
				/* last message, send stop and complete */
				dev_dbg(i2c->dev, "READ: Send Stop\n");

				s3c24xx_i2c_stop(i2c, 0);
			} else {
				/* go to the next transfer */
				dev_dbg(i2c->dev, "READ: Next Transfer\n");

				i2c->msg_ptr = 0;
				i2c->msg_idx++;
				i2c->msg++;
			}
		}
		break;
	}

	/* acknowledge the IRQ and get back on with the work */

 out_ack:
	tmp = readl(i2c->regs + S3C2410_IICCON);
	tmp &= ~S3C2410_IICCON_IRQPEND;
	writel(tmp, i2c->regs + S3C2410_IICCON);
 out:
	return ret;
}

/*
 * top level IRQ servicing routine
 */
static irqreturn_t s3c24xx_i2c_irq(int irqno, void *dev_id)
{
	struct s3c24xx_i2c *i2c = dev_id;
	unsigned long status;
	unsigned long tmp;

	status = readl(i2c->regs + S3C2410_IICSTAT);

	if (status & S3C2410_IICSTAT_ARBITR) {
		/* deal with arbitration loss */
		dev_err(i2c->dev, "deal with arbitration loss\n");
	}

	if (i2c->state == STATE_IDLE) {
		dev_dbg(i2c->dev, "IRQ: error i2c->state == IDLE\n");

		tmp = readl(i2c->regs + S3C2410_IICCON);
		tmp &= ~S3C2410_IICCON_IRQPEND;
		writel(tmp, i2c->regs + S3C2410_IICCON);
		goto out;
	}

	/*
	 * pretty much this leaves us with the fact that we've
	 * transmitted or received whatever byte we last sent
	 */
	i2c_s3c_irq_nextbyte(i2c, status);

 out:
	return IRQ_HANDLED;
}

/*
 * Disable the bus so that we won't get any interrupts from now on, or try
 * to drive any lines. This is the default state when we don't have
 * anything to send/receive.
 *
 * If there is an event on the bus, or we have a pre-existing event at
 * kernel boot time, we may not notice the event and the I2C controller
 * will lock the bus with the I2C clock line low indefinitely.
 */
static inline void s3c24xx_i2c_disable_bus(struct s3c24xx_i2c *i2c)
{
	unsigned long tmp;

	/* Stop driving the I2C pins */
	tmp = readl(i2c->regs + S3C2410_IICSTAT);
	tmp &= ~S3C2410_IICSTAT_TXRXEN;
	writel(tmp, i2c->regs + S3C2410_IICSTAT);

	/* We don't expect any interrupts now, and don't want to send acks */
	tmp = readl(i2c->regs + S3C2410_IICCON);
	tmp &= ~(S3C2410_IICCON_IRQEN | S3C2410_IICCON_IRQPEND |
		 S3C2410_IICCON_ACKEN);
	writel(tmp, i2c->regs + S3C2410_IICCON);
}

/*
 * get the i2c bus for a master transaction
 */
static int s3c24xx_i2c_set_master(struct s3c24xx_i2c *i2c)
{
	unsigned long iicstat;
	int timeout = 400;

	while (timeout-- > 0) {
		iicstat = readl(i2c->regs + S3C2410_IICSTAT);

		if (!(iicstat & S3C2410_IICSTAT_BUSBUSY))
			return 0;

		msleep(1);
	}

	return -ETIMEDOUT;
}

/*
 * wait for the i2c bus to become idle.
 */
static void s3c24xx_i2c_wait_idle(struct s3c24xx_i2c *i2c)
{
	unsigned long iicstat;
	ktime_t start, now;
	unsigned long delay;
	int spins;

	/* ensure the stop has been through the bus */

	dev_dbg(i2c->dev, "waiting for bus idle\n");

	start = now = ktime_get();

	/*
	 * Most of the time, the bus is already idle within a few usec of the
	 * end of a transaction. However, really slow i2c devices can stretch
	 * the clock, delaying STOP generation.
	 *
	 * On slower SoCs this typically happens within a very small number of
	 * instructions so busy wait briefly to avoid scheduling overhead.
	 */
	spins = 3;
	iicstat = readl(i2c->regs + S3C2410_IICSTAT);
	while ((iicstat & S3C2410_IICSTAT_START) && --spins) {
		cpu_relax();
		iicstat = readl(i2c->regs + S3C2410_IICSTAT);
	}

	/*
	 * If we do get an appreciable delay as a compromise between idle
	 * detection latency for the normal, fast case, and system load in the
	 * slow device case, use an exponential back off in the polling loop,
	 * up to 1/10th of the total timeout, then continue to poll at a
	 * constant rate up to the timeout.
	 */
	delay = 1;
	while ((iicstat & S3C2410_IICSTAT_START) &&
	       ktime_us_delta(now, start) < S3C2410_IDLE_TIMEOUT) {
		usleep_range(delay, 2 * delay);
		if (delay < S3C2410_IDLE_TIMEOUT / 10)
			delay <<= 1;

		now = ktime_get();
		iicstat = readl(i2c->regs + S3C2410_IICSTAT);
	}

	if (iicstat & S3C2410_IICSTAT_START)
		dev_warn(i2c->dev, "timeout waiting for bus idle\n");
}
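/*
 * Illustrative timing for the back-off loop above (figures assumed, not
 * measured): the poll interval grows roughly as 1, 2, 4, 8, ... usec until
 * it reaches S3C2410_IDLE_TIMEOUT / 10, then stays constant until the
 * timeout, so an already-idle bus is seen within a few microseconds while a
 * clock-stretching slave only costs a handful of extra wakeups.
 */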
/*
 * this starts an i2c transfer
 */
static int s3c24xx_i2c_doxfer(struct s3c24xx_i2c *i2c,
			      struct i2c_msg *msgs, int num)
{
	unsigned long timeout;
	int ret;

	if (i2c->suspended)
		return -EIO;

	ret = s3c24xx_i2c_set_master(i2c);
	if (ret != 0) {
		dev_err(i2c->dev, "cannot get bus (error %d)\n", ret);
		ret = -EAGAIN;
		goto out;
	}

	i2c->msg = msgs;
	i2c->msg_num = num;
	i2c->msg_ptr = 0;
	i2c->msg_idx = 0;
	i2c->state = STATE_START;

	s3c24xx_i2c_enable_irq(i2c);
	s3c24xx_i2c_message_start(i2c, msgs);

	if (i2c->quirks & QUIRK_POLL) {
		ret = i2c->msg_idx;

		if (ret != num)
			dev_dbg(i2c->dev, "incomplete xfer (%d)\n", ret);

		goto out;
	}

	timeout = wait_event_timeout(i2c->wait, i2c->msg_num == 0, HZ * 5);

	ret = i2c->msg_idx;

	/*
	 * Having these next two as dev_err() makes life very
	 * noisy when doing an i2cdetect
	 */
	if (timeout == 0)
		dev_dbg(i2c->dev, "timeout\n");
	else if (ret != num)
		dev_dbg(i2c->dev, "incomplete xfer (%d)\n", ret);

	/* For QUIRK_HDMIPHY, bus is already disabled */
	if (i2c->quirks & QUIRK_HDMIPHY)
		goto out;

	s3c24xx_i2c_wait_idle(i2c);

	s3c24xx_i2c_disable_bus(i2c);

 out:
	i2c->state = STATE_IDLE;

	return ret;
}

/*
 * first port of call from the i2c bus code when a message needs
 * transferring across the i2c bus.
 */
static int s3c24xx_i2c_xfer(struct i2c_adapter *adap,
			    struct i2c_msg *msgs, int num)
{
	struct s3c24xx_i2c *i2c = (struct s3c24xx_i2c *)adap->algo_data;
	int retry;
	int ret;

	ret = clk_enable(i2c->clk);
	if (ret)
		return ret;

	for (retry = 0; retry < adap->retries; retry++) {

		ret = s3c24xx_i2c_doxfer(i2c, msgs, num);

		if (ret != -EAGAIN) {
			clk_disable(i2c->clk);
			return ret;
		}

		dev_dbg(i2c->dev, "Retrying transmission (%d)\n", retry);

		udelay(100);
	}

	clk_disable(i2c->clk);
	return -EREMOTEIO;
}

/* declare our i2c functionality */
static u32 s3c24xx_i2c_func(struct i2c_adapter *adap)
{
	return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL | I2C_FUNC_NOSTART |
		I2C_FUNC_PROTOCOL_MANGLING;
}

/* i2c bus registration info */
static const struct i2c_algorithm s3c24xx_i2c_algorithm = {
	.master_xfer	= s3c24xx_i2c_xfer,
	.functionality	= s3c24xx_i2c_func,
};

/*
 * return the divisor settings for a given frequency
 */
static int s3c24xx_i2c_calcdivisor(unsigned long clkin, unsigned int wanted,
				   unsigned int *div1, unsigned int *divs)
{
	unsigned int calc_divs = clkin / wanted;
	unsigned int calc_div1;

	if (calc_divs > (16*16))
		calc_div1 = 512;
	else
		calc_div1 = 16;

	calc_divs += calc_div1-1;
	calc_divs /= calc_div1;

	if (calc_divs == 0)
		calc_divs = 1;
	if (calc_divs > 17)
		calc_divs = 17;

	*divs = calc_divs;
	*div1 = calc_div1;

	return clkin / (calc_divs * calc_div1);
}
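/*
 * Worked example for s3c24xx_i2c_calcdivisor() above (clock figures are
 * assumed for illustration, not taken from real hardware): with a
 * 50000 KHz input clock and a wanted rate of 100 KHz, clkin/wanted = 500,
 * which is greater than 16*16, so the /512 prescaler is selected; divs
 * then becomes DIV_ROUND_UP(500, 512) = 1, and the achieved rate is
 * 50000 / (1 * 512) = 97 KHz. The result is always rounded down, so the
 * bus is never driven faster than requested.
 */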
/*
 * work out a divisor for the user requested frequency setting,
 * either by the requested frequency, or scanning the acceptable
 * range of frequencies until something is found
 */
static int s3c24xx_i2c_clockrate(struct s3c24xx_i2c *i2c, unsigned int *got)
{
	struct s3c2410_platform_i2c *pdata = i2c->pdata;
	unsigned long clkin = clk_get_rate(i2c->clk);
	unsigned int divs, div1;
	unsigned long target_frequency;
	u32 iiccon;
	int freq;

	i2c->clkrate = clkin;
	clkin /= 1000;	/* clkin now in KHz */

	dev_dbg(i2c->dev, "pdata desired frequency %lu\n", pdata->frequency);

	target_frequency = pdata->frequency ? pdata->frequency : 100000;

	target_frequency /= 1000; /* Target frequency now in KHz */

	freq = s3c24xx_i2c_calcdivisor(clkin, target_frequency, &div1, &divs);

	if (freq > target_frequency) {
		dev_err(i2c->dev,
			"Unable to achieve desired frequency %luKHz."
			" Lowest achievable %dKHz\n", target_frequency, freq);
		return -EINVAL;
	}

	*got = freq;

	iiccon = readl(i2c->regs + S3C2410_IICCON);
	iiccon &= ~(S3C2410_IICCON_SCALEMASK | S3C2410_IICCON_TXDIV_512);
	iiccon |= (divs-1);

	if (div1 == 512)
		iiccon |= S3C2410_IICCON_TXDIV_512;

	if (i2c->quirks & QUIRK_POLL)
		iiccon |= S3C2410_IICCON_SCALE(2);

	writel(iiccon, i2c->regs + S3C2410_IICCON);

	if (i2c->quirks & QUIRK_S3C2440) {
		unsigned long sda_delay;

		if (pdata->sda_delay) {
			sda_delay = clkin * pdata->sda_delay;
			sda_delay = DIV_ROUND_UP(sda_delay, 1000000);
			sda_delay = DIV_ROUND_UP(sda_delay, 5);
			if (sda_delay > 3)
				sda_delay = 3;
			sda_delay |= S3C2410_IICLC_FILTER_ON;
		} else
			sda_delay = 0;

		dev_dbg(i2c->dev, "IICLC=%08lx\n", sda_delay);
		writel(sda_delay, i2c->regs + S3C2440_IICLC);
	}

	return 0;
}
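/*
 * Worked example for the QUIRK_S3C2440 SDA delay above (figures assumed for
 * illustration): sda_delay comes in as ns and clkin is in KHz, so with a
 * 66000 KHz clock and pdata->sda_delay = 100 ns,
 * DIV_ROUND_UP(66000 * 100, 1000000) = 7 pclk cycles, and
 * DIV_ROUND_UP(7, 5) = 2 picks the second step of the IICLC delay field,
 * which counts in units of 5 pclks and is clamped at 3 (= 15 pclks).
 */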
#if defined(CONFIG_ARM_S3C24XX_CPUFREQ)

#define freq_to_i2c(_n) container_of(_n, struct s3c24xx_i2c, freq_transition)

static int s3c24xx_i2c_cpufreq_transition(struct notifier_block *nb,
					  unsigned long val, void *data)
{
	struct s3c24xx_i2c *i2c = freq_to_i2c(nb);
	unsigned int got;
	int delta_f;
	int ret;

	delta_f = clk_get_rate(i2c->clk) - i2c->clkrate;

	/*
	 * if we're post-change and the input clock has slowed down
	 * or at pre-change and the clock is about to speed up, then
	 * adjust our clock rate. <0 is slow, >0 speedup.
	 */
	if ((val == CPUFREQ_POSTCHANGE && delta_f < 0) ||
	    (val == CPUFREQ_PRECHANGE && delta_f > 0)) {
		i2c_lock_adapter(&i2c->adap);
		ret = s3c24xx_i2c_clockrate(i2c, &got);
		i2c_unlock_adapter(&i2c->adap);

		if (ret < 0)
			dev_err(i2c->dev, "cannot find frequency (%d)\n", ret);
		else
			dev_info(i2c->dev, "setting freq %d\n", got);
	}

	return 0;
}

static inline int s3c24xx_i2c_register_cpufreq(struct s3c24xx_i2c *i2c)
{
	i2c->freq_transition.notifier_call = s3c24xx_i2c_cpufreq_transition;

	return cpufreq_register_notifier(&i2c->freq_transition,
					 CPUFREQ_TRANSITION_NOTIFIER);
}

static inline void s3c24xx_i2c_deregister_cpufreq(struct s3c24xx_i2c *i2c)
{
	cpufreq_unregister_notifier(&i2c->freq_transition,
				    CPUFREQ_TRANSITION_NOTIFIER);
}

#else
static inline int s3c24xx_i2c_register_cpufreq(struct s3c24xx_i2c *i2c)
{
	return 0;
}

static inline void s3c24xx_i2c_deregister_cpufreq(struct s3c24xx_i2c *i2c)
{
}
#endif

#ifdef CONFIG_OF
static int s3c24xx_i2c_parse_dt_gpio(struct s3c24xx_i2c *i2c)
{
	int idx, gpio, ret;

	if (i2c->quirks & QUIRK_NO_GPIO)
		return 0;

	for (idx = 0; idx < 2; idx++) {
		gpio = of_get_gpio(i2c->dev->of_node, idx);
		if (!gpio_is_valid(gpio)) {
			dev_err(i2c->dev, "invalid gpio[%d]: %d\n", idx, gpio);
			goto free_gpio;
		}
		i2c->gpios[idx] = gpio;

		ret = gpio_request(gpio, "i2c-bus");
		if (ret) {
			dev_err(i2c->dev, "gpio [%d] request failed (%d)\n",
				gpio, ret);
			goto free_gpio;
		}
	}
	return 0;

 free_gpio:
	while (--idx >= 0)
		gpio_free(i2c->gpios[idx]);
	return -EINVAL;
}

static void s3c24xx_i2c_dt_gpio_free(struct s3c24xx_i2c *i2c)
{
	unsigned int idx;

	if (i2c->quirks & QUIRK_NO_GPIO)
		return;

	for (idx = 0; idx < 2; idx++)
		gpio_free(i2c->gpios[idx]);
}
#else
static int s3c24xx_i2c_parse_dt_gpio(struct s3c24xx_i2c *i2c)
{
	return 0;
}

static void s3c24xx_i2c_dt_gpio_free(struct s3c24xx_i2c *i2c)
{
}
#endif

/*
 * initialise the controller, set the IO lines and frequency
 */
static int s3c24xx_i2c_init(struct s3c24xx_i2c *i2c)
{
	struct s3c2410_platform_i2c *pdata;
	unsigned int freq;

	/* get the platform data */

	pdata = i2c->pdata;

	/* write slave address */

	writeb(pdata->slave_addr, i2c->regs + S3C2410_IICADD);

	dev_info(i2c->dev, "slave address 0x%02x\n", pdata->slave_addr);

	writel(0, i2c->regs + S3C2410_IICCON);
	writel(0, i2c->regs + S3C2410_IICSTAT);

	/* we need to work out the divisors for the clock... */

	if (s3c24xx_i2c_clockrate(i2c, &freq) != 0) {
		dev_err(i2c->dev, "cannot meet bus frequency required\n");
		return -EINVAL;
	}

	/* todo - check that the i2c lines aren't being dragged anywhere */

	dev_info(i2c->dev, "bus frequency set to %d KHz\n", freq);
	dev_dbg(i2c->dev, "S3C2410_IICCON=0x%02x\n",
		readl(i2c->regs + S3C2410_IICCON));

	return 0;
}

#ifdef CONFIG_OF
/*
 * Parse the device tree node and retrieve the platform data.
 */
static void s3c24xx_i2c_parse_dt(struct device_node *np,
				 struct s3c24xx_i2c *i2c)
{
	struct s3c2410_platform_i2c *pdata = i2c->pdata;
	int id;

	if (!np)
		return;

	pdata->bus_num = -1; /* i2c bus number is dynamically assigned */
	of_property_read_u32(np, "samsung,i2c-sda-delay", &pdata->sda_delay);
	of_property_read_u32(np, "samsung,i2c-slave-addr", &pdata->slave_addr);
	of_property_read_u32(np, "samsung,i2c-max-bus-freq",
			     (u32 *)&pdata->frequency);
	/*
	 * Exynos5's legacy i2c controller and new high speed i2c
	 * controller have muxed interrupt sources. By default the
	 * interrupts for 4-channel HS-I2C controller are enabled.
	 * If nodes for first four channels of legacy i2c controller
	 * are available then re-configure the interrupts via the
	 * system register.
	 */
	id = of_alias_get_id(np, "i2c");
	i2c->sysreg = syscon_regmap_lookup_by_phandle(np,
						"samsung,sysreg-phandle");
	if (IS_ERR(i2c->sysreg))
		return;

	regmap_update_bits(i2c->sysreg, EXYNOS5_SYS_I2C_CFG, BIT(id), 0);
}
#else
static void s3c24xx_i2c_parse_dt(struct device_node *np,
				 struct s3c24xx_i2c *i2c)
{
}
#endif
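/*
 * Illustrative device tree fragment for the properties parsed above (the
 * node name, unit address and values are made-up examples, not taken from
 * a real board file):
 *
 *	i2c@13860000 {
 *		samsung,i2c-sda-delay = <100>;
 *		samsung,i2c-slave-addr = <0x10>;
 *		samsung,i2c-max-bus-freq = <100000>;
 *	};
 */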
static int s3c24xx_i2c_probe(struct platform_device *pdev)
{
	struct s3c24xx_i2c *i2c;
	struct s3c2410_platform_i2c *pdata = NULL;
	struct resource *res;
	int ret;

	if (!pdev->dev.of_node) {
		pdata = dev_get_platdata(&pdev->dev);
		if (!pdata) {
			dev_err(&pdev->dev, "no platform data\n");
			return -EINVAL;
		}
	}

	i2c = devm_kzalloc(&pdev->dev, sizeof(struct s3c24xx_i2c), GFP_KERNEL);
	if (!i2c)
		return -ENOMEM;

	i2c->pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
	if (!i2c->pdata)
		return -ENOMEM;

	i2c->quirks = s3c24xx_get_device_quirks(pdev);
	i2c->sysreg = ERR_PTR(-ENOENT);
	if (pdata)
		memcpy(i2c->pdata, pdata, sizeof(*pdata));
	else
		s3c24xx_i2c_parse_dt(pdev->dev.of_node, i2c);

	strlcpy(i2c->adap.name, "s3c2410-i2c", sizeof(i2c->adap.name));
	i2c->adap.owner = THIS_MODULE;
	i2c->adap.algo = &s3c24xx_i2c_algorithm;
	i2c->adap.retries = 2;
	i2c->adap.class = I2C_CLASS_DEPRECATED;
	i2c->tx_setup = 50;

	init_waitqueue_head(&i2c->wait);

	/* find the clock and enable it */

	i2c->dev = &pdev->dev;
	i2c->clk = devm_clk_get(&pdev->dev, "i2c");
	if (IS_ERR(i2c->clk)) {
		dev_err(&pdev->dev, "cannot get clock\n");
		return -ENOENT;
	}

	dev_dbg(&pdev->dev, "clock source %p\n", i2c->clk);

	/* map the registers */

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	i2c->regs = devm_ioremap_resource(&pdev->dev, res);

	if (IS_ERR(i2c->regs))
		return PTR_ERR(i2c->regs);

	dev_dbg(&pdev->dev, "registers %p (%p)\n", i2c->regs, res);

	/* setup info block for the i2c core */

	i2c->adap.algo_data = i2c;
	i2c->adap.dev.parent = &pdev->dev;

	i2c->pctrl = devm_pinctrl_get_select_default(i2c->dev);

	/* initialise the i2c gpio lines */

	if (i2c->pdata->cfg_gpio)
		i2c->pdata->cfg_gpio(to_platform_device(i2c->dev));
	else if (IS_ERR(i2c->pctrl) && s3c24xx_i2c_parse_dt_gpio(i2c))
		return -EINVAL;

	/* initialise the i2c controller */

	ret = clk_prepare_enable(i2c->clk);
	if (ret) {
		dev_err(&pdev->dev, "I2C clock enable failed\n");
		return ret;
	}

	ret = s3c24xx_i2c_init(i2c);
	clk_disable(i2c->clk);
	if (ret != 0) {
		dev_err(&pdev->dev, "I2C controller init failed\n");
		clk_unprepare(i2c->clk);
		return ret;
	}

	/*
	 * find the IRQ for this unit (note, this relies on the init call to
	 * ensure no current IRQs pending)
	 */
	if (!(i2c->quirks & QUIRK_POLL)) {
		i2c->irq = ret = platform_get_irq(pdev, 0);
		if (ret <= 0) {
			dev_err(&pdev->dev, "cannot find IRQ\n");
			clk_unprepare(i2c->clk);
			return ret;
		}

		ret = devm_request_irq(&pdev->dev, i2c->irq, s3c24xx_i2c_irq,
				       0, dev_name(&pdev->dev), i2c);
		if (ret != 0) {
			dev_err(&pdev->dev, "cannot claim IRQ %d\n", i2c->irq);
			clk_unprepare(i2c->clk);
			return ret;
		}
	}

	ret = s3c24xx_i2c_register_cpufreq(i2c);
	if (ret < 0) {
		dev_err(&pdev->dev, "failed to register cpufreq notifier\n");
		clk_unprepare(i2c->clk);
		return ret;
	}

	/*
	 * Note, previous versions of the driver used i2c_add_adapter()
	 * to add the bus at any number. We now pass the bus number via
	 * the platform data, so if unset it will now default to always
	 * being bus 0.
	 */
	i2c->adap.nr = i2c->pdata->bus_num;
	i2c->adap.dev.of_node = pdev->dev.of_node;

	platform_set_drvdata(pdev, i2c);

	pm_runtime_enable(&pdev->dev);

	ret = i2c_add_numbered_adapter(&i2c->adap);
	if (ret < 0) {
		dev_err(&pdev->dev, "failed to add bus to i2c core\n");
		pm_runtime_disable(&pdev->dev);
		s3c24xx_i2c_deregister_cpufreq(i2c);
		clk_unprepare(i2c->clk);
		return ret;
	}

	dev_info(&pdev->dev, "%s: S3C I2C adapter\n", dev_name(&i2c->adap.dev));
	return 0;
}

static int s3c24xx_i2c_remove(struct platform_device *pdev)
{
	struct s3c24xx_i2c *i2c = platform_get_drvdata(pdev);

	clk_unprepare(i2c->clk);

	pm_runtime_disable(&pdev->dev);

	s3c24xx_i2c_deregister_cpufreq(i2c);

	i2c_del_adapter(&i2c->adap);

	if (pdev->dev.of_node && IS_ERR(i2c->pctrl))
		s3c24xx_i2c_dt_gpio_free(i2c);

	return 0;
}

#ifdef CONFIG_PM_SLEEP
static int s3c24xx_i2c_suspend_noirq(struct device *dev)
{
	struct platform_device *pdev = to_platform_device(dev);
	struct s3c24xx_i2c *i2c = platform_get_drvdata(pdev);

	i2c->suspended = 1;

	if (!IS_ERR(i2c->sysreg))
		regmap_read(i2c->sysreg, EXYNOS5_SYS_I2C_CFG,
			    &i2c->sys_i2c_cfg);

	return 0;
}

static int s3c24xx_i2c_resume_noirq(struct device *dev)
{
	struct platform_device *pdev = to_platform_device(dev);
	struct s3c24xx_i2c *i2c = platform_get_drvdata(pdev);
	int ret;

	if (!IS_ERR(i2c->sysreg))
		regmap_write(i2c->sysreg, EXYNOS5_SYS_I2C_CFG,
			     i2c->sys_i2c_cfg);

	ret = clk_enable(i2c->clk);
	if (ret)
		return ret;

	s3c24xx_i2c_init(i2c);

	clk_disable(i2c->clk);

	i2c->suspended = 0;

	return 0;
}
#endif

#ifdef CONFIG_PM
static const struct dev_pm_ops s3c24xx_i2c_dev_pm_ops = {
	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(s3c24xx_i2c_suspend_noirq,
				      s3c24xx_i2c_resume_noirq)
};

#define S3C24XX_DEV_PM_OPS (&s3c24xx_i2c_dev_pm_ops)
#else
#define S3C24XX_DEV_PM_OPS NULL
#endif

static struct platform_driver s3c24xx_i2c_driver = {
	.probe		= s3c24xx_i2c_probe,
	.remove		= s3c24xx_i2c_remove,
	.id_table	= s3c24xx_driver_ids,
	.driver		= {
		.name	= "s3c-i2c",
		.pm	= S3C24XX_DEV_PM_OPS,
		.of_match_table = of_match_ptr(s3c24xx_i2c_match),
	},
};

static int __init i2c_adap_s3c_init(void)
{
	return platform_driver_register(&s3c24xx_i2c_driver);
}
subsys_initcall(i2c_adap_s3c_init);

static void __exit i2c_adap_s3c_exit(void)
{
	platform_driver_unregister(&s3c24xx_i2c_driver);
}
module_exit(i2c_adap_s3c_exit);

MODULE_DESCRIPTION("S3C24XX I2C Bus driver");
MODULE_AUTHOR("Ben Dooks, <ben@simtec.co.uk>");
MODULE_LICENSE("GPL");
"Filmworker" focused on Stanley Kubrick's right-hand man. Tony Zierra isn’t done with Stanley Kubrick yet. After directing “Filmworker,” a documentary about the meticulous auteur’s right-hand man, Zierra is set to make a movie about “Eyes Wide Shut.” In an interview with Variety, he reveals that “SK13” — shorthand for Stanley Kubrick’s 13th film — was originally meant to precede “Filmworker.” “The one movie that I feel is the wrinkle in Kubrick’s filmography is ‘Eyes Wide Shut.’ The people that love him always say, ‘He’s a genius, but I’m not sure what the hell that movie was about,’” says Zierra. It makes no sense to them. The casting doesn’t make any sense to them. The story doesn’t make any sense to them.” Tom Cruise and Nicole Kidman star in the film, an erotic drama about a married couple. Kubrick died shortly after completing it. “There are a lot of great directors, but really to date, he is probably the only one that has that long shelf life. There’s still this mystery about all his films,” Zierra continues. “People are still trying to figure out what the ending meant in ‘2001,’ what ‘The Shining’ meant, what ‘Eyes Wide Shut’ was about.” No word yet on when “SK13” will be completed, though it’s likely to eventually make the festival rounds. Sign Up: Stay on top of the latest breaking film and TV news! Sign up for our Email Newsletters here.
Computer tomography of traumatic intra- and extracerebral lesions. Computer tomography was performed in 29 patients with intra- or extracerebral lesions, and the results were evaluated retrospectively and compared with angiography, operation or autopsy findings. Many subdural haematomas were difficult to distinguish, and in two cases it was impossible; angiography proved more reliable in these cases. In all cases the intracerebral haematoma was considerably more evident at computer tomography than at angiography, and for contusions the information provided is fully comparable with that given by angiography.
The United States, France and Britain together launched military strikes in Syria to punish President Bashar Assad for a suspected chemical attack against civilians and to deter him from doing it again, President Donald Trump announced Friday. Explosions lit up the skies over Damascus, the Syrian capital, as Trump announced the airstrikes from the White House.

President Barack Obama urged US senators Tuesday to help him seek a diplomatic response to Syria’s use of chemical weapons but said he wants Congress to maintain the threat of force. The president spent more than two hours on Capitol Hill in closed…
Bill Gates ‘not satisfied’ with Microsoft’s innovations

“Although Bill Gates stepped away from his day-to-day role at Microsoft nearly five years ago, he still keeps a close eye on the company he co-founded – and he isn’t always happy with what he sees. During a recent interview broadcast this morning on CBS This Morning, the Microsoft chairman was asked by Charlie Rose whether he was happy with Steve Ballmer’s performance as chief executive. Noting that there have been ‘many amazing things’ accomplished under Ballmer’s leadership in the past couple of years, Gates said he was not satisfied with the company’s innovations.”

It’s impossible to deny at this point that Microsoft hasn’t done well in mobile. It would be more surprising if Gates had denied it.

I was really looking forward to the Surface RT, but it was overpriced. I can’t justify buying a device that I can’t program on. The Nokia range of Lumias is nice, but the market (here in Spain, at least) is swamped with cheap and pretty good Android devices. My neighbour on the bus was showing me her phone and it looked nicer than my old HTC Desire for less than €100. I am a bit of a Microsoft fanboy and I have to admit that full Windows on an ARM device was what I wanted on a notebook, and because of RT that isn’t going to happen, which is a real shame. Microsoft is going the way of IBM and tbh that is expected. I still love their developer tools and I still like using Windows. I am not saying they aren’t doing well. But Microsoft in the 80s and 90s had massive growth, like Google did at the turn of the century. IBM doesn’t have that, but it’s still there and it’s still making money. I think Microsoft will become more like that.

Agreed. I hope Windows 8 fails (keeps on failing?) miserably and the board decides to finally get rid of Ballmer. We’ve seen some very interesting research projects coming out of MS in the past and it only shows that there’s potential there, which is currently wasted. Kinect is a very good example. Huge success in my opinion, not for what it did to the Xbox and gaming but for what it did to academia. Lots of interesting projects were started because of it, mainly because MS sold it quite cheap to fuel its Xbox economy.

Looks to me like Microsoft is the one that’s fading, not IBM. While Microsoft has been essentially devolving into a two-trick pony (Windows/Office on the desktop and Windows/SQL Server in the datacenter), IBM has an incredible range of successes over the same period. You just don’t hear about them much unless you work in or near datacenters. That has been their historical strength and they continue to do some really nifty stuff there. By contrast, Microsoft is in danger of seeing all of its markets erode away. In the broadly defined personal device market, they are in real trouble. They no longer have the #1 OS by market share, and by the middle of this year they won’t even have the #1 installed base. Android has grown that fast and that big. Microsoft is going to have to retreat into the enterprise and watch the consumer market evolve away from them. Ironic, when the source of their greatest strength in their early days was people sneaking in their own PCs to get work done that the datacenter guys didn’t have time or inclination to do! Meanwhile, the SQL Server/Windows Server combination is coming under attack from multiple directions. The supercomputer and mainframe battles were lost long ago.
IIS never had a chance in the Internet facing space, although it was frequently the webserver of choice inside firewalls. That’s changing rapidly, though. The SOHO market is being lost to external hosting providers building upon open source solutions. Even the bigger companies are moving more and more work off Microsoft solutions to Linux based ones. And, since IBM has been supporting FLOSS solutions across their entire hardware line for more than a decade, they are ready, willing, and able to take that business away from Microsoft. Nope, if you’re looking for a company that’s fading away, you’re looking in the wrong direction. Microsoft is the one that’s in trouble, not IBM. You have plotted IBM’s stock price against MSFT’s stock price. That tells you nothing about how well the company has done financially. All it does is indicate how the stock market perceives each company. Since the end of 2007, IBM’s stock price has clearly outperformed MSFT’s. However, when you look at actual financial performance, you see something quite different. Microsoft’s financial results have clearly outperformed IBM’s over the past ten years. Operating income, net income, dividends, net cash, you name it. On all of these bottom-line metrics, Microsoft has done better than IBM over the past ten years. None of which invalidates my main point: When comparing the two companies, Microsoft is the one who is in danger of fading away, not IBM. Microsoft’s annual reports only serve to highlight what I stated. Read the financial breakdown by segment in the 2012 annual report that starts on page 22 as a typical example. I only pointed to the share price comparison because it shows the fact that investors have become disenchanted with Microsoft’s lack of ability to innovate in any market. There’s a lot of smart money out there that sees the same set of conditions that I pointed to. Now, if you have some evidence that can refute my argument I’d love to hear it. You have presented a very distorted view of Microsoft, by taking the dozens of products that it sells and splitting it into two groups — and then calling it a “two-trick pony.” Well, I can do the same thing to IBM, and show that it is also a two-trick pony. One trick is mainframe/services. The other is everything else. IBM gets just 4% of revenues from mainframe hardware, but 40% of its profit is tied to its dominance of the mainframe world. ( See http://www.nytimes.com/2012/08/28/technology/ibm-mainframe-evolves-… ) Oh, but you say, this is unfair. You can’t look at the 40% of profit that comes from mainframes, because it is a very diversified revenue stream. Hardware, software, storage, management, consulting services, etc. And what about the other 60%? “IBM has an incredible range of successes over the same period. You just don’t hear about them much unless you work in or near datacenters.” Well, yes, it is unfair. But so is your characterization of Microsoft. “Windows/Office on the desktop” encompasses everything from the familiar Word and Excel, to consumers products like Skype, to pricey enterprise software like Dynamics CRM, to Lync unified communications, to management tools like InTune. “Windows/SQL Server in the datacenter” ignores things like Sharepoint, Exchange, Exchange Online, Azure, Hyper-V, System Center, etc. You could just as easily say that Microsoft “has an incredible range of successes over the same period. You just don’t hear about them much” — unless you work in IT. SQL Server is coming under attack from open-source? Sure. 
But so are Oracle and DB2. Why is Microsoft any worse off than IBM, which sells DB2?

Supercomputers, mainframe, IIS? You’re getting hung up on markets where Microsoft never made much money. So if Microsoft never breaks into these markets, how does this cause Microsoft to disappear? Is IIS losing share in enterprises? I don’t know — perhaps. Yet the Server & Tools division at Microsoft has grown faster than Microsoft overall in recent years. If Microsoft is disappearing in the enterprise, then why aren’t its revenues going down in the enterprise market? What Microsoft is doing is moving from full-spectrum dominance, in which it had a hand in every pie, to a situation in which the IT world has outgrown it. In the enterprise market, Microsoft has discovered that even a non-monopoly position can be quite profitable. This is not so different from the transformation that IBM has undergone.

OK, I see your point. Fair enough. I just have a hard time seeing them as ‘fading away’ these days after having lived through IBM’s doldrums in the 70s, 80s, and 90s before they bounced back as far as they have today. I hope we can both agree that it’s a good thing that they no longer own the computer market, right? A monoculture is never healthy, for one. For another, more choice in a marketplace is always beneficial to the customer because it forces vendors to compete on quality, price, and innovation.

Why do people accuse me of missing the point when clearly they’re the ones missing my point? Did it occur to you that I wasn’t disagreeing with your comment? Way to go, missing the point.

IBM is still massive but it doesn’t really lead the tech world like it used to. I’d argue Watson is such a huge leap that no one is sure how much of a lead it will end up being. Much like when IBM first invested heavily in general purpose computing. No other tech organization has a Watson in the works.

Not comparable. Ballmer missed opportunities but Microsoft is still there, making money, keeping customers, and it has a future, even if not as glorious as the past. Microsoft is far from vanishing or walking dead. They will be there next year too.

It seems to be obvious to the whole world apart from Microsoft: why can’t their software and services integrate, when Angry Birds is written for one platform and runs on them all? Why isn’t there a consistent brand (Xbox Music, Zune Music, how many more will there be)? These are all the reasons why I would fire Ballmer; he’s been CEO of Microsoft for a decade and is responsible for blunder after blunder. Microsoft have a lot of very clever people and the resources, and are in a position to really achieve amazing things, but they keep missing it. It’s like watching grandpa wake from his slumber and taking a few minutes to orientate himself again: “what? who.. where!”

One Visual Studio project, one toolkit and one code base for all 3 platforms. It’s three Visual Studio projects, wrestling with always having to wait for a shitty content pipeline which was conjoined at the hip with the particular version of VS (2008 at first, then 2010, which it’s stuck at) and being a second-class citizen on Xbox. XNA has undoubtedly left a void where it used to be, but it was never really any good anyway.

– XNA comes out. It’s better than MDX (Managed DX) but it’s pretty much panned for being weird and not as pretty as it could be. Remember, XNA came out, I think, a little bit prior to .NET 3 coming out. I was actually wrong before, it was annoying to use on VS 2005, 2008, and 2010. I hate their damn customized IDE.
– XNA stagnates with not many fixes or advances. Some bones are thrown to us devs here and there. XBLIG placement is still awkward, the Creators Club was terrible and the approval process took fucking forever.
– A lot of time passes.
– Microsoft takes XNA away from us in Windows 8 and WP8. Developers seemingly forgot how much they hated XNA because they got so used to it, but are now up in arms. It was never a good framework at all.

If Microsoft dropped it I hope it is because they’re working on something better. Xbox needs an app store. Maybe this will all tie into that. Hopefully. Xbox has the potential to disrupt gaming and do for games what the phone app stores did for normal software development: democratize it.

Xbox needs an app store. Maybe this will all tie into that. Hopefully. Xbox has the potential to disrupt gaming and do for games what the phone app stores did for normal software development: democratize it.

It wouldn’t be much of a disruption, as Microsoft have already been beaten to that goal at least twice before:
* Nintendo already did that on the Wii. Not just for professional games either, but homebrew as well.
* The Dreamcast also did this, but to a far less sophisticated level (but back then the vast majority of people were still on dial-up, so the DC still deserves some kudos).
I will grant you that the Dreamcast (as awesome as it was) didn’t take off, and the Wii isn’t really a “gamers’ console”, but even so, it’s already been done.

Sure, but this isn’t really about “firsts”, it’s about who executes it well. The fact that I didn’t even know this was true about the Wii should say a lot. It’s a great idea, I just think it needs something on a grander scale. Imagine writing a Windows Store app and having it work across Windows Phone, Windows, and Xbox with minimal changes and a streamlined, unified submission and commerce system. That’d be a disruption. Look at how other phones did touch prior to the iPhone, but it took the iPhone to radically change the direction of the consumer smartphone market. Another example is Danger with the Sidekicks. (And before someone points it out: Linux distros and other OSes had repositories, which still are great.) With the Sidekicks they had an app store of sorts prior to Apple, but it really took Apple to set the framework for how most app stores will be run in the future.

Sure, but this isn’t really about “firsts”, it’s about who executes it well. Both the Dreamcast and Wii are well executed.

The fact that I didn’t even know this was true about the Wii should say a lot. It’s a great idea, I just think it needs something on a grander scale. It is on a grand scale on the Wii. The only way you wouldn’t know about the Wii’s store would be if you have never taken your Wii online or don’t own a Wii – in either case, you’re adding bias to skew your opinion in favour of Microsoft.

Imagine writing a Windows Store app and having it work across Windows Phone, Windows, and Xbox with minimal changes and a streamlined, unified submission and commerce system. That’d be a disruption. I’d already voiced my opinion about that with XNA. The real question is: will MS ever pull their finger out of their arse and actually do that? They have a history of badly uniting their hardware ranges and I’m not convinced that Win8 does a better job than their past attempts, but only time will tell.

Look at how other phones did touch prior to the iPhone, but it took the iPhone to radically change the direction of the consumer smartphone market. You mean like the LG Prada for example?
That “revolution” would have happened with or without Apple. After all, we already had similar interfaces on other embedded devices (e.g. TomToms), multi-touch hardware had been prototyped for years in labs, and most people used their finger as a stylus for basic tasks on even the more fiddly WinCE interfaces. So it was just a matter of the technology being released (i.e. capacitive touch screens) to make the final step viable. Once that happened, a number of technology firms were making the switch – Apple just being one of them. But as always, history is written by the victors; so everyone thinks Apple invented touch screen phones…

Another example is Danger with the Sidekicks. (And before someone points it out: Linux distros and other OSes had repositories, which still are great.) With the Sidekicks they had an app store of sorts prior to Apple, but it really took Apple to set the framework for how most app stores will be run in the future.

It took Apple to popularise it. Sony Ericsson had app stores years before the iPhone, but back then bandwidth was very expensive, so few people wanted to waste it downloading games and themes when they could just pirate them on their home internet connection. I do agree that the iPhone did a grand job of taking a number of existing ideas and marketing and popularising them. And in business, that’s the number 1 priority (sales > originality). But since we’re talking about technology and innovation rather than a company’s ability to sell a product, Apple were not the pioneers of the vast majority of the technology they (and many others) claim Apple invented.

>Xbox needs an app store. Maybe this will all tie into that. Hopefully. Xbox has the potential to disrupt gaming and do for games what the phone app stores did for normal software development: democratize it.

It wouldn’t be much of a disruption, as Microsoft have already been beaten to that goal at least twice before:
* Nintendo already did that on the Wii. Not just for professional games either, but homebrew as well.
* The Dreamcast also did this, but to a far less sophisticated level (but back then the vast majority of people were still on dial-up, so the DC still deserves some kudos).
I will grant you that the Dreamcast (as awesome as it was) didn’t take off, and the Wii isn’t really a “gamers’ console”, but even so, it’s already been done.

I don’t recall anything like that for the Dreamcast… was it for software running on its memory cards? Japan-only? And Nintendo is notoriously unfriendly to indies… Either way, it would still be a disruption, given the scale of the likely future Xbox ecosystem.

They essentially do this every other Windows OS release anyways. If I didn’t know any better I’d say it was intentional. One OS release that is experimental/groundbreaking/severely flawed, followed by one OS release that refines the strengths and irons out the wrinkles. Wash, rinse and repeat.

The idea here isn’t that they need to retreat from their current course, but that they need to double down and increase the pace of innovation and the coherency of their product lineup. They are doing very good things. Commercial success is a lagging indicator, believe it or not, but no doubt a lot of people who reply to this comment will try to use it to prove Microsoft is headed in the wrong direction.

Microsoft is a company very much in transition. It isn’t a simple effort to do such a management and cultural change inside of a company of that scale.
There won’t always be messaging consistency and sometimes things will seem disjointed, but over time, the story for Microsoft is becoming a lot clearer. Look at Microsoft just a few years ago. Every product looked different; there was an absence of designers in the equation. They’ve since coalesced around Metro. Xbox, Windows Phone, and Windows all running a familiar user interface. Look at their developer tools just a few years ago. You had XNA on the Xbox, NETCF on WinMobile and pretty much everything on Windows. Now it’s XAML on the Phone, XAML on Xbox, and XAML on Windows. That’s a stunning achievement in so little time. Look at how dramatically the Windows OS has been rearchitected to work on devices thinner and lighter than an iPad. Windows. On a tablet. That doesn’t suck. That’s progress.

Sinofsky et al. ushered in a new era of discipline at Microsoft. He whipped WinDiv into shape, took what worked from DevDiv, and made a forward-looking product in Windows 8. All only 3 years from Windows 7’s launch, which was regarded as a tremendous success. Another example being their phone efforts: they had Windows Mobile, Kin, and Windows Phone. Three competing divisions. Now there is one. That’s a huge turnaround. There is less internal duplication of effort and more work towards a common goal. The NT kernel is used across devices ranging from small phones to beefy PC towers and server racks.

Azure has seen a phenomenal turnaround too. It was basically useless for a good portion of its life. Now it is a very compelling PaaS and IaaS solution for developers. It’s a complete joy to use. Microsoft is also creating new opportunity. Look at SharePoint, look at their Yammer acquisition, look at Bing, look at how they’ve grown Server+Tools and their Office divisions, especially with Office 365.

My point in all of this is that Microsoft isn’t exactly stagnant, or fading into irrelevancy. It’s just managing a mid-life crisis reasonably well. Transitions take time. Remember — this is the company that was ridiculed over its release of the Xbox, but they stuck with it and turned it into a success, especially with Kinect, which pretty much obsoleted Nintendo’s offering. Microsoft is a company with iffy execution, but loads of cash, and very fat cash cows. They are in print-money mode for the next decade. To count them out is shortsighted and ignorant of history.

Look at how dramatically the Windows OS has been rearchitected to work on devices thinner and lighter than an iPad. Windows. On a tablet. That doesn’t suck. That’s progress.

If it doesn’t run Windows applications, can you still call it Windows? What is the difference then between Linux, Android, iOS, BB10, etc.? That ‘thinner and lighter than an iPad’ device doesn’t even run Firefox.

My full-blown Win8 tablet, which is thinner and lighter than an iPad and has 10-hour battery life, can run Windows apps. At this point though, it’s important not to split hairs: Windows Store apps are Windows apps. They’re built using the Windows development platform, and share a lot of commonality with Microsoft’s other XAML platforms.
– And just ignore that having to rewrite your application for windows xaml is the same as rewriting it for linux, android, ios, bb10 – So just ignore that real applications like Firefox don’t run and will never run. – And just ignore that having to rewrite your application for windows xaml is the same as rewriting it for linux, android, ios, bb10 I think you missed, so I’ll bold it for emphasis: My full blown Win8 tablet which is thinner and lighter than an iPad, and has 10 hour battery life, can run Windows apps. I can run Win32 apps on my tablet. And FYI: A lot of these applications already exist on iOS (a rewrite) and Android(a rewrite). Needing to rewrite an app to fully take advantage of the hardware (via Windows Store apps) doesn’t necessarily mean there’s an incentive to bring it to Linux or something, because again, developers can now easily make a good amount of income on the Windows Store. You’ve seen Netflix, Skype, Hulu, CNN, etc rewrite their apps for the Windows Store because their apps were largely platform agnostic to begin with. Netflix even uses Windows technology (Rx Framework, adaptive bitrate) behind the scenes so XAML with a media engine that can readily consume that content was a natural shift for them. CNN usually contracts their work out, and given the return (millions of people downloading your app) its a wise investment. Hulu is platform agnostic like Netflix because they target many screens. There is probably a lot of re-use of code between Android, iOS, and the Windows Store app. There are similar stories for many other apps, and its all about making the business case for bringing an app to Windows. I can read. It is just that I don’t know of any x86 tablet that even comes close to the ipad in terms of size, weight or price. Maybe mention what device you have cause the guessing game is getting old. Windows has a 90% marketshare because of the apps. When you take away the apps windows is less useful than android or iOS. I can read. It is just that I don’t know of any x86 tablet that even comes close to the ipad in terms of size, weight or price. Maybe mention what device you have cause the guessing game is getting old. Windows has a 90% marketshare because of the apps. When you take away the apps windows is less useful than android or iOS. Acer W510. Thinner, Lighter, 9hr battery life (18hr with dock) and for $499 which is the same price as an iPad and includes twice the RAM and twice the storage. Look at how dramatically the Windows OS has been rearchitected to work on devices thinner and lighter than an iPad. Windows. On a tablet. That doesn’t suck. That’s progress. A 16GB OS footprint for a tablet does suck. It’s simply unacceptable. The issue has never been that Microsoft are unable to progress, it’s that their progress has been significantly slower than their competition. And why this is I don’t know because Microsoft Research come out with some awesome technology. The issue has never been that Microsoft are unable to progress, it’s that their progress has been significantly slower than their competition. And why this is I don’t know because Microsoft Research come out with some awesome technology. Is it really? I know this is heresy on OSNews, but people do actually embrace cloud computing in real life. I personally store all my photos on SkyDrive. Take a photo on my phone, have it automatically replicate on my Tablet, Desktop PC, and Xbox 360. I was grandfathered in so I have 25GB of additional space for my pictures, videos, and documents. 
As for music, I’ve never really been enough of a music junkie to have gigabytes and gigabytes of music, and to be honest, I don’t think most people are either. My mom usually puts Pandora on and lets that play through. I tend to do the same; I simply don’t see the need to purchase that much music or rip my entire CD collection to my tablet for the hell of it. But your point is taken: Windows is still too big and has a ways to go, and that will only happen when we can further reduce the dependency hell that sometimes manifests itself, and reduce the need for side-by-side deployment of libraries. However, I don’t think there’s any denying that a few years ago the thought of Windows on a tablet was a pipedream.

Is it really? I know this is heresy on OSNews, but people do actually embrace cloud computing in real life.

Some do. Some like to have local copies of their content. Like videos to watch on the train to dull the pain of their daily commute.

As for music, I’ve never really been enough of a music junkie to have gigabytes and gigabytes of music, and to be honest, I don’t think most people are either.

If that was the case then the early iPods, with their microdisks, would never have taken off. Plus what about movies? (see my point above).

But your point is taken: Windows is still too big and has a ways to go, and that will only happen when we can further reduce the dependency hell that sometimes manifests itself, and reduce the need for side-by-side deployment of libraries.

Indeed. Backwards compatibility is a double-edged sword.

However, I don’t think there’s any denying that a few years ago the thought of Windows on a tablet was a pipedream.

Microsoft have had Windows tablets since the early days of XP. Probably before then – that was just when I first took an interest in tablets. It’s just back then the UI was lousy (or, to be more accurate, it was lousy for the tablet paradigm). To be completely honest, I think Metro is also a lousy UI (I hated it when I was running Win Phone) but I’ll happily concede that’s just my personal preference.

Some do. Some like to have local copies of their content. Like videos to watch on the train to dull the pain of their daily commute.

Which is fair, and you can. Some people do, by renting or buying movies from Xbox Video, or ripping off of a DVD, or something. For others who don’t do this as often, Netflix or Hulu or YouTube or just private cloud versions of their videos work fine. Me, personally, I do more music on the go than video on the go, but then again, it’s almost always using my phone, which has an LTE connection, and streaming Pandora isn’t an issue. Again, YMMV as it does with most things like this, but my point, I guess, is that while they are legitimate issues, they are less impactful than might be suggested. I just don’t think tablets can reasonably accommodate a storage junkie on any platform. SSDs for tablets don’t go up much farther than 128GB (unless I’m wrong, is there a tablet with 256GB?) so for people with like 100GB of music and videos, it will present a problem.

If that was the case then the early iPods, with their microdisks, would never have taken off. Plus what about movies? (see my point above).

Funny thing, a lot of the iPods I saw (including my own) never really reached capacity on even just an 80GB unit. And iPod Touches never really had that much space, but were still successful (as successful as the dwindling sector can be).

Indeed. Backwards compatibility is a double-edged sword.
Thankfully the Windows Store solves this DLL hell, with versioning as a forethought instead of an afterthought. I think the first interim release of Windows, with some decent improvements to WinRT and an expansion of the kinds of Windows Store apps you can write, should make Metro a lot more palatable to people.

Microsoft have had Windows tablets since the early days of XP. Probably before then – that was just when I first took an interest in tablets. It’s just back then the UI was lousy (or, to be more accurate, it was lousy for the tablet paradigm). To be completely honest, I think Metro is also a lousy UI (I hated it when I was running Win Phone) but I’ll happily concede that’s just my personal preference.

I had a Windows Slate back in the day, but they were clunky and slow and ugly (but great pen support, MSFT has always been consistently ahead here) and I don’t even classify them as “Tablets” as we know them today. But just think before Windows 8. Conventional wisdom was that Windows was too slow, or too heavy, or whatever to run on small devices with resource constraints. That’s wildly different than today, and it’s thanks to the disciplines put in place during Windows 7 that carried over to Windows 8.

But just think before Windows 8. Conventional wisdom was that Windows was too slow, or too heavy, or whatever to run on small devices with resource constraints. That’s wildly different than today, and it’s thanks to the disciplines put in place during Windows 7 that carried over to Windows 8.

That’s because conventional wisdom was to use CE for such devices rather than arse about trying to run a desktop OS on an embedded device. Just as the conventional wisdom is not to run Android on PCs nor iOS on my MacBooks. This whole “one Windows to rule them all” seems backwards to me. I’m 100% for cross-platform portability but this is the wrong way to go about that.

I meant, for others who don’t do much video watching while commuting. For those that do, they can store a few videos locally. It’s not ideal, but then again, no tablet has a good solution for this on a decent scale. Once you pass 128GB of content, you have a ceiling on how much you can extend it with microSD, if you even can (you can’t on an iPad). I think the cloud is a more scalable solution, and though it may not be ideal for all situations as you point out, it certainly can help in a lot of others. What needs to be done is to make the cloud less visible in users’ lives. If the user ever has to think about the cloud, you’ve lost from a UX POV. Cloud needs to be treated as just another storage medium.

Well, yours wouldn’t reach capacity, as you already said you’re not all that into music. So your argument is somewhat redundant.

Me? No, but are a lot of other people really that different from me? I’ve seen a lot of people who buy such outrageous sizes (me included) but don’t use anywhere near that amount of space.

This whole “one Windows to rule them all” seems backwards to me. I’m 100% for cross-platform portability but this is the wrong way to go about that.

I’m not 100% happy today with how things are, but I do see the potential and where things are going. I don’t dispute there are rough edges and limitations, I just dispute the impact of said limitations. They’re no more major than the rough edges that Windows Vista or Windows 7 had. Product engineering is not perfect, and given a finite set of resources and the logistics of the matter, I can understand why trade-offs had to be made.
I meant, for others who don’t do much video watching while commuting. For those that do, they can store a few videos locally. It’s not ideal, but then again, no tablet has a good solution for this on a decent scale.

erm, any tablet is a decent solution for this. I think you’re getting a little carried away with the > 128GB figures. My point was just that many people do use local storage on tablets.

Me? No, but are a lot of other people really that different from me? I’ve seen a lot of people who buy such outrageous sizes (me included) but don’t use anywhere near that amount of space.

My music collection is > 250GB and I still have a stack of vinyl and CDs yet to copy. So yeah, some other people really are that different from you.

I’m not 100% happy today with how things are, but I do see the potential and where things are going. I don’t dispute there are rough edges and limitations, I just dispute the impact of said limitations. They’re no more major than the rough edges that Windows Vista or Windows 7 had. Product engineering is not perfect, and given a finite set of resources and the logistics of the matter, I can understand why trade-offs had to be made.

To be frank, that’s just making excuses. Microsoft have some of the deepest pockets and best research labs in IT. If anyone should have been capable of pulling off a decent tablet, it should have been Microsoft.

My music collection is > 250GB and I still have a stack of vinyl and CDs yet to copy.

That’s lossless, I presume? (If lossy, it would be easily half a year of music without repeating.) In that case, “encode to lossy on copy” to a portable player is a very workable solution, and would end up at something like 30 GiB.

But under today’s constraints? Consumers expect fast+fluid OSes with great battery life in thin, light, and cool (as in temperature) form factors. The tablet idea as Microsoft saw it in the early 2000s was a failed idea. Apple really brought about and defined this new market segment, and it is the game Microsoft is playing now. I think that running full Windows on a device designed to compete with the iPad wasn’t really seen as logical by many prior to Windows 8. Including myself. I was one of the people who argued for scaling Windows Phone up to tablet sizes. However, after using Windows 8, after seeing its battery life, and its new APIs, I see that the point is to unify the codebases. Windows devices run Windows at heart. Real Windows, not WinCE with a limited kernel.

But under today’s constraints? Consumers expect fast+fluid OSes with great battery life in thin, light, and cool (as in temperature) form factors. The tablet idea as Microsoft saw it in the early 2000s was a failed idea. Apple really brought about and defined this new market segment, and it is the game Microsoft is playing now.

No it’s not. Microsoft are doing the same thing they were doing a decade ago: they’re still trying to release a desktop OS for the tablet. Except this time they’ve bolted on a jarring shell that is so inconsistent and counter-intuitive that even IT professionals are struggling to use it.

I think that running full Windows on a device designed to compete with the iPad wasn’t really seen as logical by many prior to Windows 8. Including myself. I was one of the people who argued for scaling Windows Phone up to tablet sizes.

That would have made sense.

However, after using Windows 8, after seeing its battery life, and its new APIs, I see that the point is to unify the codebases.

You don’t need to unify code bases.
I repeat, no other OS vendor does this. Google aren’t releasing Android on laptops; they have ChromeOS for that. OS X and iOS are hugely different as well.

Windows devices run Windows at heart. Real Windows, not WinCE with a limited kernel.

You mean that “limited” kernel that’s been powering a far more varied range of hardware and embedded systems than NT ever had? Though let’s be honest, the kernel is pretty much beside the point. Android and Ubuntu Desktop run the same kernel (more or less), but very different userlands. I very much doubt that NT itself needs 16GB of userland. Or let’s put it another way: what’s NT’s footprint on the Xbox 360? It’s only a few hundred megs on the original, but I’ve never owned a 360. If MS can pull it off for a games console then they have no excuses for the fail on the tablets.

Well, NT is an easier way of saying the kernel, the driver model, the userland components that make up the OS. It’s not just the kernel itself that gets ported; there’s a lot of infrastructure that even makes something like WinRT possible. WinCE had limitations with regards to multiprocessing and things; it was for very, very embedded devices with severe resource constraints. Phones of today are more aligned with traditional PCs. Gigs of RAM instead of megs, super fast GPUs and processors. WinCE was clearly a limiting factor in Windows Phone, and would’ve been one for Windows on tablets without significant re-engineering. Resources were better spent on scaling NT down and reducing duplication. It’s clear your argument against “desktop OS on a tablet” will fade away once Windows Store apps make Win32 apps irrelevant, or mostly irrelevant. Windows RT can’t run Win32 apps and it is still full Windows. Full Windows userland, full Windows kernel, but very iPad-like both in performance and experience.

Well, NT is an easier way of saying the kernel, the driver model, the userland components that make up the OS. It’s not just the kernel itself that gets ported; there’s a lot of infrastructure that even makes something like WinRT possible.

I’m aware of that but -and I repeat- a tablet edition of Windows shouldn’t have 16GB of userland! Period.

WinCE had limitations with regards to multiprocessing and things; it was for very, very embedded devices with severe resource constraints. Phones of today are more aligned with traditional PCs. Gigs of RAM instead of megs, super fast GPUs and processors.

In my experience it’s easier to scale up than scale down. But I’ll take your word for it that CE would have been less practical.

It’s clear your argument against “desktop OS on a tablet” will fade away once Windows Store apps make Win32 apps irrelevant, or mostly irrelevant. Windows RT can’t run Win32 apps and it is still full Windows. Full Windows userland, full Windows kernel, but very iPad-like both in performance and experience.

But not in the disk footprint. Which was the crux of my point. I could live with a “desktop OS on a tablet” if it was only ~2GB in size (I don’t think the paradigm would be right, but I don’t think Metro is either – so that’s just a matter of personal tastes). However, 16GB is just inexcusable. But the beauty of IT (and Microsoft’s dwindling relevance) is that I have a choice of tablets. I can run Android or iOS (or even webOS, as I did for a while) and avoid WinRT entirely. Just as you can choose to run Win8 if you really think a 16GB footprint is a non-issue.

Or let’s put it another way: what’s NT’s footprint on the Xbox 360?
It’s only a few hundred megs on the original, but I’ve never owned a 360. If MS can pull it off for a games console then they have no excuses for the fail on the tablets.

If 1GB of data will be spent during even casual browsing and media playback, I think you have a bigger issue on your hands than Windows 8. I’d look into switching carriers; that’s ridiculous. But I hear your point, and it isn’t a perfect solution. Some apps (I think the SkyDrive app included) are conscious of mobile plans, data caps, and roaming thanks to WinRT APIs, and become more conservative with data bandwidth accordingly.

Commercial success is a lagging indicator, believe it or not, but no doubt a lot of people who reply to this comment will try to use it to prove Microsoft is headed in the wrong direction.

Using commercial success as the yardstick to measure the soundness of a commercial company’s strategy is not entirely unreasonable.

Microsoft is a company very much in transition. It isn’t a simple effort to pull off such a management and cultural change inside a company of that scale. There won’t always be messaging consistency and sometimes things will seem disjointed, but over time, the story for Microsoft is becoming a lot clearer. … My point in all of this is that Microsoft isn’t exactly stagnant, or fading into irrelevancy. It’s just managing a midlife crisis reasonably well. Transitions take time. … Microsoft is a company with iffy execution, but loads of cash, and very fat cash cows. They are in print-money mode for the next decade. To count them out is shortsighted and ignorant of history.

I think you’re right about not writing them off yet. They could just buy their way back into the game if all else fails. As to the success of their recent ‘innovations’ as a way back to the top of the pile, I think it requires a degree of faith that goes beyond cold analysis to hold the view you expressed above. Here’s my view: I have no affection for Microsoft, or any of their products, which I don’t use any more. But perhaps the best thing that could happen in the computer industry in the next decade would be for Microsoft to get richer and richer and richer, either by their cash cows, by innovations, or by good fortune. The richer they get, the richer Gates gets, and almost all of that goes to the eradication of deadly diseases in the real world. I think eradicating polio, which will pave the way for the eradication of TB, even AIDS, would put all the ups and downs of MS/Google/Apple, Win/OSX/Linux, iOS/Android/BB/Win into the shade of relative irrelevance.

Using commercial success as the yardstick to measure the soundness of a commercial company’s strategy is not entirely unreasonable.

Eventually, no, but I don’t think it comes right away. Plenty of eventual market leaders did not become so overnight. It took Android years, many, many terrible years, before it became this self-sustaining powerhouse. Similarly with the 360.

I think you’re right about not writing them off yet. They could just buy their way back into the game if all else fails. As to the success of their recent ‘innovations’ as a way back to the top of the pile, I think it requires a degree of faith that goes beyond cold analysis to hold the view you expressed above.

I think if you’ve seen posts where I’ve recently been critical, you’ll see I don’t make excuses for them where I think they don’t deserve one. I’m very candid with my opinion of what Microsoft should fix. I just don’t think that they need to fundamentally change direction.
I feel that they actually have a compelling value proposition; they just really, really suck at closing the deal. Another example I’d give of a dramatic turnaround is Windows Live. Anyone remember the inexcusable mess that was? Look how it is now. Outlook. SkyDrive. People (contacts in the cloud). Office 365. It’s becoming way more coherent.

Bill Gates remains the [chairman of the board at Microsoft](http://www.microsoft.com/en-us/news/exec/billg/), and hand-picked all the other board members – who pick the CEO and evaluate his performance, give him goals and guidance, set his pay, bonuses and options, and set policy. Bill Gates is still very much responsible for what goes on there, and weighs in on every big decision.

Windows was their best-selling product and they did poorly in the mobile space, so they decided to sell Windows on mobile devices, too. Bad idea. Unifying desktop and mobile operating systems might be a nice idea, but not if you do it at any cost. Cramming Windows onto mobile devices sucks, and forcing desktop users into using a mobile UI like Metro is a terrible idea, too. They should have gone the Apple way: one OS for the desktop with a desktop UI and one very slim OS for phone and tablet. Try to unify the mobile and desktop OS little by little. They still don’t have a native framework for GUI RAD like Qt. MFC is a joke in 2013. WPF is cool but you can’t develop native apps, which is a shame. Why not enable WPF for native apps? And if you kill XNA, why not let C# programmers use DirectX? IMO they should do the following: let both C# and C++ developers target WPF and Metro using both managed and unmanaged code, and let C# developers link with DirectX using both managed and native code.

Windows was their best-selling product and they did poorly in the mobile space, so they decided to sell Windows on mobile devices, too. Bad idea.

Windows Mobile actually achieved decent sales success, and Windows Phone has sold millions of devices. The only issue is that it is not hundreds of millions of devices, but hey, things are improving. I’m not entirely sure it’s a bad idea, and people who have tried Windows Phones find they generally review very well, perform very well, and have acceptable battery life.

Unifying desktop and mobile operating systems might be a nice idea, but not if you do it at any cost. Cramming Windows onto mobile devices sucks, and forcing desktop users into using a mobile UI like Metro is a terrible idea, too.

I don’t think desktop users are forced into using Metro at all; in fact, I’m writing this comment in IE10 running on the Windows 8 desktop. This is a myth.

They should have gone the Apple way: one OS for the desktop with a desktop UI and one very slim OS for phone and tablet. Try to unify the mobile and desktop OS little by little.

Apple’s strategy made sense for Apple. They bootstrapped their tablet OS on their phone OS. Microsoft has bootstrapped their tablet OS on their desktop OS. Both leveraged success in one area to attempt to create success in another area. It’s paying off for Microsoft. Their Windows Store is still growing at an astonishing rate. What has actually happened is amazing. Now any developer, be it a student or just a hobbyist in their basement, can write a good app using very good tools and make some additional income through the Store. This has significantly lowered the barrier to entry on Windows when it comes to software development.

They still don’t have a native framework for GUI RAD like Qt. MFC is a joke in 2013.
WPF is cool but you can’t develop native apps, which is a shame. Why not enable WPF for native apps? And if you kill XNA, why not let C# programmers use DirectX?

WPF is a large, complex, largely managed-code beast. It is amazingly powerful, but with that power comes a complexity that boggles the mind. You can do anything in WPF, including shoot your foot off. Metro takes WPF, slims it down, simplifies it, and makes it suitable for development in 2013. It also enables you to use it from C++, which is something you asked for. As for DirectX, Windows 8 has made this easier with the Windows Runtime. Now you can just write a simple DirectX wrapper in C++/CX, deploy it as a Windows Runtime component, and project it into C# and use it as if it were a C# class. In fact, this is exactly what SharpDx does on their Metro profile. If you’re looking for something close to XNA, try MonoGame, which is Mono’s implementation of XNA and works on Windows 8.

Metro takes WPF, slims it down, simplifies it, and makes it suitable for development in 2013. It also enables you to use it from C++, which is something you asked for.

No, I didn’t ask for this. WinRT is forcing me to distribute an application through the Windows Store, which is something I dislike. What I asked for is a quality (as in Qt) framework with RAD tools for developing GUI apps which is usable from both C# and C++ and which is available to both managed and native applications. Metro/WinRT isn’t something like that.

As for DirectX, Windows 8 has made this easier with the Windows Runtime. Now you can just write a simple DirectX wrapper in C++/CX, deploy it as a Windows Runtime component, and project it into C# and use it as if it were a C# class.

I know that and I dislike it. I also know about SharpDx, and I dislike the fact that I need it to write DirectX apps using C#. I want Microsoft to add official support for C# to DirectX.

No, I didn’t ask for this. WinRT is forcing me to distribute an application through the Windows Store, which is something I dislike.

I’m sorry, but this is a reality you’ll have to accept. The Windows Store is the future of application development on Windows. You wanted a native XAML stack; sure, you got it, Microsoft agreed with you. With the Windows Store, you come to a fundamental disagreement with their vision for the future. That’s fine, you can’t please everyone on everything they want. I’m with you on the need for a native XAML stack though, and for me, I’m pleased I got it.

What I asked for is a quality (as in Qt) framework with RAD tools for developing GUI apps which is usable from both C# and C++ and which is available to both managed and native applications. Metro/WinRT isn’t something like that.

Metro is exactly that. I just don’t think you’ve used it enough. Everything in that paragraph I can do using WinRT. In fact, I can do even more. I can write half of my app in C++ and have it seamlessly project itself into C#, where I don’t need to do any interop. A C++ WinRT component would still be new’d up like a C# object. That’s amazing.

I know that and I dislike it. I also know about SharpDx, and I dislike the fact that I need it to write DirectX apps using C#. I want Microsoft to add official support for C# to DirectX.

It’d be nice if they did, and they might in the future. I just don’t know if the performance hit would be worth it for a lot of scenarios. The mapping of types in WinRT would make writing a low-level API component pretty much a non-starter. But who knows.
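To make the projection mechanism being described here concrete, here is a minimal sketch of such a wrapper in C++/CX. It is only an illustration under stated assumptions, not anyone’s actual implementation: the names (DxWrapper, Renderer, Clear) are hypothetical, and the Direct3D plumbing is elided. The point it shows is that a public ref class built as a Windows Runtime component produces metadata that C# can consume directly.

// Hypothetical C++/CX Windows Runtime component; all names are illustrative.
// Building this as a WinRT component produces WinMD metadata, which is what
// lets C# construct and call Renderer as if it were an ordinary managed class.
namespace DxWrapper
{
    public ref class Renderer sealed
    {
    public:
        Renderer()
        {
            // Device creation (e.g. D3D11CreateDevice) would go here.
        }

        // Only WinRT-compatible types (value types, ref classes, delegates)
        // may cross the ABI boundary of a public method.
        void Clear(float r, float g, float b)
        {
            // Would forward to ID3D11DeviceContext::ClearRenderTargetView.
        }
    };
}
// From C#, consumption is then simply:
//     var renderer = new DxWrapper.Renderer();
//     renderer.Clear(0f, 0f, 0f);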
If money is the problem, they can try to resurrect Steve Jobs from the dead and hire him as CEO. I bet that in 2 years MSFT shares will be worth 10 times what they’re worth now. If resurrection is not possible, they can use a medium and hire Jobs over a long distance. That way they can fire 90% of the employees, because they can copy most of the software and throw some flashy UI over that.

Microsoft are not fading away. They are just as successful as before when it comes to desktop, mobile and server market share. On desktop they do great, on mobile devices very poorly, on consoles they do great, on servers and data centers they do an average job, and when it comes to cloud, they do average. They aren’t doing much better than 3 or 5 years ago, but they aren’t doing worse either. That’s not fading away, that’s just stagnation. And if Bill isn’t happy, why the heck doesn’t he push for new management? And try to hire guys with visions and ideas, not just technically skilled guys. Look at Google, damn it! I did a set of Google interviews and I know what kind of people they seek, how they manage employees, how they assign and evaluate work, and what it is like to work at Google. Microsoft is just too rigid, inflexible, frozen in a stiff corporate mentality. If Eric Schmidt is ever in search of a job, they should hire him.

The Xbox isn’t about making money. It’s about having influence in the living room. That you can’t see that is very telling. Microsoft is now in the best position it can possibly be in for conquering the living room; the TV is the next stage of the war of ecosystems, and no one is better off than MS. Even Apple is itching to get into this space. Microsoft was way, way ahead of the curve with IPTV, and what they’ve done with mindshare, turning people’s perception of the Xbox from a console into an entertainment hub, is important in their next step to this end. If Microsoft can keep their execution up, keep landing content deals, then the living room is theirs (that was, after all, the entire reason for Microsoft even launching a console). Keep in mind that consoles are a cutthroat market with razor-thin margins. It’s never been about gaming influence as much as it’s been about maintaining Windows and pushing Windows into the living room. It’s also had a halo effect in keeping DirectX relevant via the Xbox 360. Ensuring that the majority of games that used OpenGL were slow ports was a huge victory for them when it comes to changing developer hearts and minds.

Bing is actually doing well, and moving to do better, and I think this is one area of Microsoft where the pace of innovation is high. Bing has come a long way and it is extremely well managed. They just need to make a bet on a larger scale. Bing is the other half of a one-two punch for entertainment. I’m actually very bullish on Bing; this is one sector where they have nowhere to go but up, and they have been doing so at a good pace.

I’d say this is what happens to all large tech companies, especially if they are enterprise-oriented. They start out by making innovative, new, “disruptive” products. Then they grow. Being bigger = being more bureaucratic = taking fewer risks. Why risk the company if you are successful? So growth leads to “milking the cows”. Also, their new corporate customers are, by definition, risk-averse. So the formerly innovative IT company gets big and risk-averse, and has clients who are big and risk-averse. And that means there is room for a new “disrupter”. And so the wheel spins…
I also think that open source software is almost always more at the forefront of innovation, because there are no companies behind it trying to lock in customers. Instead, this innovation is tech-driven and not profit-driven.

Not satisfied? It’s a freakin’ Greek tragedy. Nokia, which jumped all aboard the Microsoft ship, went from 35% of the market 2 years ago to 3% today. That’s pretty much a catastrophic failure. I wouldn’t be surprised to see them out of business soon. Of course, that’s one less big supporter of Linux and open source in the market. I think that was the whole point anyway.

I think Microsoft just couldn’t fill the vacuum left by their decline. Instead Samsung came along and scooped it up. Better them than Microsoft anyway.
Soon, eBay will let you shop on its app by snapping pictures. The e-commerce company on Wednesday said it's developing two new image recognition tools for its mobile app. They'll only be available in the US and are set for release this fall, with a desktop version arriving later. The first feature, simply called Image Search, will let you take a photo of a pair of sneakers or a handbag (or use an existing photo in your phone's library) and use it to search eBay for similar listings. The other, called Find It On eBay, lets you tap images on any online site and "share" them to eBay to get a list of similar-looking items. "We want to make the entire internet shoppable by the image," Mohan Patt, eBay vice president of buyer experience, said. How do Image Search and Find It On eBay work? By using machine learning and artificial intelligence to mine eBay's 1.1 billion live listings, as well as its troves of past sales. The new features might be used on the street or in a store when an item of clothing or piece of furniture catches your eye and you want to see if something like it is available on eBay, hopefully for less. The features also show eBay is now one step closer to a concept offered by CEO Devin Wenig earlier this year, in which people can take pictures of stuff they want to sell and eBay automatically fills in titles, descriptions, listing information and even prices. The new tools could also help eBay catch up to Amazon, which has been offering image-recognition tech in its mobile app for several years. eBay last year bought up three AI-focused companies in hopes of rolling out features like these, so customers should expect more in the future. Demoing the new features, Patt used a picture he snapped in a magazine of knee-high black boots, fed it to eBay and immediately retrieved more than 500 live listings of the same kind of boots. The intent wasn't to get the identical item, but to provide a range of similar looks and prices for customers, he added. eBay said it will store images shared with the company and the corresponding search results to evaluate the relevance of the results and provide users with their past searches. The tools will be available across categories on eBay, but the company will develop its AI models mostly on soft goods like clothes and furniture that often don't include a bar code. (You can already search by bar code using eBay, Amazon and Walmart's apps.) When the new tools launch, Image Search will be available on both Android and iOS, and Find It On eBay on Android, with an iOS version expected later.
A Cat Cafe (exactly what it sounds like) has made its way to New York City, but folks in Japan believe in a different way to dine with animals. Introducing Japan’s Moomin Cafe, which seats those who are dining alone with large stuffed animals to keep them company. Moomin Cafe is a theme restaurant based on a series of Finnish picture books about a family of hippopotamus-like creatures. Twitter user Haru recently paid the restaurant’s Tokyo Dome City LaQua branch a visit. FirstWeFeast reports: [Haru] bravely stepped inside, took a seat, and placed her order. But as she was sitting by herself, waiting for her food to come, she was approached by one of the staff, who came over to inform Haru there was someone who’d like to sit with her, if she didn’t mind. It turned out the server was making the request on behalf of The Snork Maiden, the girlfriend of Moomin’s main character, Moomintroll. “The waitress had such a big smile on her face, I couldn’t say no,” Haru recalls, embarrassedly [sic]. “But it was also so cute!” Haru snapped a photo of her new “date”, and the entire restaurant lives up to its theme, from the patrons to the food. Lots of users on Instagram have been uploading photos of their Moomin Cafe experiences, and everyone looks like they’re having fun.
Benefits of Postangioplasty Beta-Blocker Treatment Remain Unclear By staff New York—Treatment with beta-blockers after angioplasty appears to have minimal effect for patients with stable angina and no history of heart attack or heart failure. That’s according to a study published in JACC: Cardiovascular Interventions, which found that, after the procedure, beta-blockers did not significantly improve mortality rates or reduce the number of future cardiovascular incidents for older patients with stable angina but no history of heart attack or heart failure. For the study, led by researchers from New York-Presbyterian Hospital/Columbia University, data from the NCDR CathPCI Registry were linked with Medicare information. The study team examined records from 755,215 patients at 1,443 sites between January 2005 and March 2013, 71.4% of whom received a prescription for beta-blockers. Participants on beta-blockers were more likely to be younger and female, and more likely to have a history of hypertension, diabetes, high cholesterol, smoking, dialysis, and prior angioplasty. Results indicate that after adjustment for age, gender, body mass index, smoking status, hypertension, and other variables, no significant differences in outcomes had occurred at 30 days. In fact, mortality rates and the occurrence of cardiac events were under 1.0% in both groups. At the 3-year mark, the following outcomes occurred in patients prescribed beta-blockers compared to those who had not been: Yet, over the 8-year study, researchers also note that the use of beta-blockers increased for patients undergoing angioplasty. Lead author Apurva A. Motivala, MD, FACC, FSCAI, an interventional cardiologist affiliated with New York-Presbyterian Hospital/Columbia University, suggested that because patients in the intervention group had a higher prevalence of traditional risk factors that lead to adverse cardiac events, they might have done even less well without beta-blockers. Motivala said, however, that the increased incidence of hospital readmissions due to heart failure in that group is a mystery and requires further study. In an accompanying editorial, Anthony G. Nappi, MD, a cardiologist at Albany Stratton Veterans Affairs (VA) Medical Center, and William F. Boden, MD, FACC, professor of medicine at Albany Medical College and chief of medicine at Albany Stratton VA Medical Center, posited that the increased frequency of beta-blocker prescriptions over time “is perhaps not surprising.” “This study, along with others, raises questions about the continued role of beta-blockers in patients with coronary artery disease undergoing angioplasty, especially since there is no evidence of clinical benefit in patients without prior heart attack or heart failure,” Nappi and Boden write.
Factors affecting seasonal variation in 90-day nonreturn rate to first service in lactating Holstein cows in a hot climate. The objective was to determine factors controlling seasonal variation in 90-d nonreturn rate to first service (90-d NRR), including effects of location, milk yield, and weather variables on specific days before and after breeding. Dairy Herd Improvement Association records on first services from 8124 Holstein cows in south Georgia (GA, n = 7 herds), north Florida (NF, n = 5), and south Florida (SF, n = 5) were used. The 90-d NRR was affected by location × month of breeding. The summer drop in 90-d NRR was of lower magnitude and duration in GA than in NF or SF, and of lower magnitude and duration for NF than SF. When cows were grouped according to mature equivalent milk yield, there was a milk yield class × month of breeding interaction. As milk yield class increased, the summer depression in 90-d NRR was more pronounced. In a second series of analyses, effects of average air temperature at d -10, 0, and 10 relative to breeding were evaluated with subsets of cows in which average air temperatures on the 10 d before the reference day were cool (< 25 degrees C). The 90-d NRR for cows with average temperatures < or = 20 degrees C on d -10 was greater than that for cows with average temperatures > 20 degrees C on d -10 (60.1 vs. 36.5%). Similar results were found on d 0 (59.6 vs. 41.4%) and d 10 (56.9 vs. 41.1%). Thus, heat stress before and after breeding, and on the day of breeding, is associated with low 90-d NRR.
Q: Why doesn't a form added by a PHP script work? I'm trying to build the following form: id/name; on submit, the record is added to the database and rendered on the page. Each rendered record has a delete button, which should remove it from the database and, accordingly, from the page. Everything goes through AJAX requests, and this is where the problem arises. Here is the JS code: a function that outputs all records from the database, a submit handler for adding, and a submit handler for deleting.

function showAllRecords() {
    $.ajax({
        url: 'showallrecords.php',
        success: function(html) {
            $('#records').html(html);
        }
    });
    return false;
}

showAllRecords();

$('#addrecord').submit(function() {
    var form = $(this).serialize();
    $.ajax({
        type: 'POST',
        url: 'newobj.php',
        data: form
    });
    showAllRecords();
    return false;
});

$('#deleteRecord').submit(function() {
    var form = $(this).serialize();
    $.ajax({
        type: 'POST',
        data: form,
        url: 'deleteobj.php'
    });
    showAllRecords();
    return false;
});

The PHP script that adds a record to the database:

<?php
require_once('login.php');
$connection = new mysqli($db_hostname, $db_username, $db_password, $db_database);
if ($connection->error) die($connection->error);
$id = $_POST['id'];
$name = $_POST['name'];
$query = "insert into test values('$id','$name')";
$result = $connection->query($query);
if (!$result) die($connection->error);
?>

The PHP script that deletes a record:

<?php
require_once('login.php');
$connection = new mysqli($db_hostname, $db_username, $db_password, $db_database);
if ($connection->error) die($connection->error);
// delete the record whose id comes from the hidden deletingRecordID field
$id = $_POST['deletingRecordID'];
$query = "delete from test where id='$id'";
$result = $connection->query($query);
if (!$result) die($connection->error);
?>

The PHP script that outputs all the records to the page:

<?php
require_once('login.php');
$connection = new mysqli($db_hostname, $db_username, $db_password, $db_database);
if ($connection->error) die($connection->error);
$query = "select * from test";
$result = $connection->query($query);
if (!$result) die($connection->error);
$rows = $result->num_rows;
for ($i = 0; $i < $rows; $i++) {
    $result->data_seek($i);
    $row = $result->fetch_array(MYSQLI_NUM);
    echo <<<_END
<div class="aRecord">
id: $row[0] <br>
name: $row[1] <br>
<form id="deleteRecord">
<input type="hidden" name="deletingRecordID" value="$row[0]">
<input type="submit" value="delete">
</form>
</div>
_END;
}
?>

The question is this: adding and displaying records works without problems, everything behaves as it should, but the delete handler, which is triggered by the button added by the PHP script, does nothing at all. And yet, if I add an identical form to the HTML by hand and try to delete some arbitrary record from the database, everything works. Why doesn't the form added by the script work?

A: Got it working; I changed the handler code:

$(document).on('submit', '#deleteRecord', function() {
    var form = $(this).serialize();
    $.ajax({
        type: 'POST',
        data: form,
        url: 'deleteobj.php'
    });
    showAllRecords();
    return false;
});

(The original $('#deleteRecord').submit(...) call binds only to elements that exist at the moment it runs; the record forms are injected later by AJAX, so they never receive the handler. Delegating the handler to document makes it fire for matching elements added at any time.)
I was going to take part in a new extension pak focused on industries, but the project is halted for now, so I'm going to post some advances here. The pakset includes some green powerplants (wave power, geothermal, biomass and combined cycle) and some new chains (oil rigs, recycling chain, etc.). I hope I'm able to finish all I have in mind. The first preview is for the geothermal powerplant, which gets its energy from the earth's heat to produce electricity.

That's my idea. The original one seems to be taken from a photograph, and imho doesn't fit as well as it should, and it's very small too. The one I'm developing is based on the Exxon Mobil oil platform in the North Sea, with the extraction tower separated from the processing plant.

Not yet. I have some representation problems with the yellow structures (cranes, oil pump, towers) and I need to make some scale tests too. It's a complex project, so you must be patient. It will be finished sooner or later...

Not yet. I have some representation problems with the yellow structures (cranes, oil pump, towers) and I need to make some scale tests too. It's a complex project, so you must be patient. It will be finished sooner or later...

@prissi I guess vilvoh used that colour to make some contrast with the dark parts of the structure, to give it more life. My opinion is that it doesn't need to be all realistic, especially at pak64, which is the smaller size. There is some room to play when it's this small...

Colonyan is right. I just wanted to add some bright colors to such a dark building. In addition, I've planned another industry chain that will also have a rig, so I needed to distinguish one from the other, and I thought the color of the main structures was a good choice.

These are the recycling points (aka green points), the first element of the chain, which will sit within the cities producing waste. There should be at least one in every city. I'll try to create 4 or 5 different models. Then the waste will be consumed by the biomass powerplant, which will produce electricity, and the recycling plant, which will produce paper, plastic and steel in small amounts.

Vilvoh: awesome. Too bad there isn't coding to allow it to produce only those goods that are already being produced by primary factories on the map... but, then, I guess if you don't have a paper mill, you won't have anything that consumes paper, either... But... what if the recycling plant comes first and then the printing press connects to the recycling plant, which isn't producing enough paper? Hmm... that could end up being complicated. However, in any case, I like the idea of recycling plants and collection points a lot, and I can't wait to see it in a pak(set). Good work on the image, too. Will the waste they produce be the same as the waste produced by scrapyards? Does a new recyclables good need to be created?

I'm glad you like the chain. The initial idea is to create a combined industry chain that produces energy through a new powerplant and also goods that will be consumed by end consumers (the final elements of other industry chains, if they exist).

Will the waste they produce be the same as the waste produced by scrapyards? Does a new recyclables good need to be created?

As waste already exists as a good in Simutrans, I prefer to use it. Creating a new good is a hard task, mainly because you have to do the balancing of the whole industry chain, so I prefer to take advantage of what's already done and tested.
It's a matter of laziness, but on the other hand, it's my first complete industry chain, thus I don't want to make something very complex...

All of that is cool. The potential 'problems' I mentioned could also be considered not as 'problems' but as an optional 'challenge'. Some of us enjoy challenges in our games. Others of us don't. I think that the way you're planning this is a good thing. When setting the recovery amounts, keep in mind that different materials are more easily recoverable than others. However, on the subject of power generation, I think you can't eat your cake and have it, too. I'm no expert, but my understanding is that recycling that generates power does not generate reusable materials. E.g., if you use waste paper to generate power by incinerating it (by the by, does the incinerator generate power, or just consume waste?), you won't get recycled paper. So, you might want to think about that. It's my understanding that recycling anything requires a power input to go from waste => plastic/glass/aluminum/steel/paper. Now, if you're accepting as waste things that should go into compost, then yes, you can generate the power for it, but I don't know if you'll get any net production of electricity. Someone with more knowledge in this area, please correct/confirm this. By the by, I think it would be cool if one of the recycling functions produced fertilizer, which would act on farms like electrification does on other industries. The chemical plants could also produce it. But that's a big project for another place and time.

As you can see in the graphic above, the original design includes the idea of methane generated by the biomass powerplant, but I'll leave it aside for now. Waste consists of several elements: some of them are organic and go to the powerplant in order to produce energy, and others are inorganic, so they go to the recycling plant. Imho, it's the most realistic approach I can achieve without adding more complexity to the chain.
The FBI has authorized for release all previously withheld materials in its JFK assassination files. Currently, the limited redactions relate to individuals who provided information during the course of the investigation, and whose lives may be at risk if they are publicly identified. Every effort is being made to lift the remaining redactions going forward as those personal safety concerns are balanced with the goal of maximum transparency. The National Archives will release all remaining records on a rolling basis in the coming weeks.
J20170720-0030—Eriogonum grande var rubescens—RPBG—DxO Eriogonum grande var. rubescens—red buckwheat. Included in the CNPS Inventory of Rare and Endangered Plants on list 1B.2 (rare, threatened, or endangered in CA and elsewhere). Endemic to San Miguel, Santa Rosa and Santa Cruz Islands in the northern Channel Islands. One of the showiest buckwheats, it is widely available in the nursery trade and sometimes naturalized on mainland California. Photographed at Regional Parks Botanic Garden located in Tilden Regional Park near Berkeley, CA
Udo Schuklenk Over the last two weeks, Ian Boyne decided to call a spade a spade as far as we annoying atheists are concerned. They were two overly long columns, saturated with the names of people he likes and scorns. Their authority is typically celebrated by means of affiliation, or Oxford University generally. Boyne even manages to ascribe competencies to Christian writers he agrees with that they demonstrably do not have. Alistair McGrath, a Christian theologian trained in history whose qualifications even include a doctorate in molecular biology, is declared without further ado a philosopher by Mr Boyne. While he says he is braced for ad hominem attacks by 'trite atheists', it strikes me that such generalised statements about a very diverse group of people are, well, ad hominem themselves, aren't they? Reading his columns, I tried to understand what his message to the reading public is, other than walking away with the bragging rights of having read more books than those 'trite atheists'. Well, I have only one column in which to respond to Mr Boyne. I am actually a trained philosopher, in fact a professor of philosophy, and I happen to be an atheist. I can truthfully say that I've read the works of most people he mentions in his columns. Some of these authors I happen to know well personally. Alas, that has not persuaded me of the reasonableness of theism, and that, surely, is what Mr Boyne is after. I will not spend the next few paragraphs dropping names on you, or at least there will be very few. I will focus on arguments, not prestige, affiliation and whatnot. What I will do is to address - hidden under all those names and Oxford University Press volumes - what I take to be Mr Boyne's main bones of contention with philosophical atheists. They seem to be these: As human beings, our capacity to understand things in the world is limited by our biological limitations. There could be realities that are beyond our scientific abilities to discover. Among others, one of Mr Boyne's favourite Christian apologists, Alvin Plantinga, has developed this kind of argument. He claims that we would have no reason to assume that our cognitive faculties are reliable if they were just the product of evolutionary processes. So, he ends up proposing a form of evolution - many of Mr Boyne's fellow Christians will shudder in disbelief - that includes an element of divine guidance, as only that would give us reason to trust our faculties. After all, God wouldn't fool around with us, or would He/She/It? Well, most philosophical atheists happen to be philosophical naturalists. Guilty as charged, Mr Boyne. We acknowledge our scientific limitations. To us, the fact that our intellectual capacities are limited by the state of our evolution is not evidence that there is something else to be discovered that is outside our senses and that we just cannot grasp. BOYNE DESPERATE Incidentally, talking in this context vaguely about 'non-scientific ways of knowing', as Mr Boyne does, sounds a tad desperate to me. Unless he, or his fellow Christian apologists, give us a bit more meat to play with, let me just say that I do think this theological emperor is pretty naked. It appears to me that naturalistic processes provide us with the necessary reliability in selecting true beliefs about the world around us. How can we test that claim? One way would be to point to our never-ending and ever-accelerating scientific progress. We know more about the world and the universe than we ever did.
Insisting that there could be something else around us that we just cannot grasp by means of scientific inquiry is, for all practical intents and purposes, just hand-waving by the religious. Is it possible that there is something else in the universe that we cannot grasp because of how we have evolved? Sure, it's possible. Just as it is possible that our planet rests on a metaphysical teapot that our scientific methods have so far been unable to discover and that requires Boynian 'non-scientific ways of knowing' to understand it. What I am trying to get at is that raising this exceedingly unlikely possibility is clutching at straws. It's a desperate attempt by theists to avoid drowning in an ever-increasing sea of scientific knowledge. So, even if Professor Schellenberg has a philosophical point, nothing follows with regard to the reasonableness of theism. Perhaps that is the reason why he is an atheist. At the end of the day, you have to assign probabilities to these sorts of theoretical possibilities. And the probabilities are vanishingly low for the God proposition. To support his views, Mr Boyne cites an atheistic philosopher, Thomas Nagel, at great length. The thing about Nagel is that his book on the subject matter (as well as an earlier article in a leading philosophy journal) was ripped to pieces by evolutionary biologists and philosophers specialising in the study of biology. Nagel does not appear to have a sound grasp of evolutionary theory. Hence his tacit support for 'intelligent design' is not based on a sound understanding of the scientific matters at stake. MISPLACED EXCITEMENT Mr Boyne also gets excited about another poster boy of current-day Christian apologetics, William Lane Craig. Boyne claims that he has seen many an atheist debate Lane Craig, but he has not seen a single one floor him. Funnily enough, I have seen many of these debates, too, and it seems to me that Lane Craig looks bad in pretty much all of them, but so it goes, I guess. You'll always give more credence to those batting for your team. Lane Craig's claim to fame has been his attempt to recycle medieval Christian and Islamic theologians' attempts at proving the existence of God by means of a cosmological argument. Basically, what's done in this argument is to use remarkable features of our natural world, particularly its origin in the Big Bang about 14 billion years ago, and posit God as the best explanation. Of course, 'God' is really a place-holder indicating what we do not know today. If history is anything to go by, we are likely to find out tomorrow. However, even if we never found out, 'God' would still not constitute an explanation for things we do not understand in the world around us. And if we do find out, there would probably be some further mystery for which 'God' will be offered as an 'explanation'. At the heart of this all, seemingly, is the need of religious believers to attain something approaching certainty about their various godly saviours. If they had simply decided to stick to believing that their God exists, everything would be hunky-dory. But no, they started fantasising about ways of 'knowing' about their invisible friend in the sky. They tried hard to develop logical proofs for the existence of their gods, and whatnot. All that failed. Even if one granted them everything they're saying about the limitations of scientific inquiry, nothing follows at all with regard to the existence of 'God'.
Scientists would have no problems at all adapting their methods if such 'non-scientific ways of knowing' turned out to deliver new insights. Meanwhile, vague reference to 'non-scientific ways of knowing' won't do. To give credit where credit is due, Mr Boyne seems to search seriously for answers to the obvious doubts that he must have about his beliefs. Why else would he spend this much time engaging in debates with 'trite' atheists in the pages of this paper? After all, he could squander words beating up on homosexuals, as his fellow columnist Mr Espeut is wont to do. It's a good thing that Mr Boyne, even if he cannot let go of his beliefs, is looking sincerely at the arguments. There is some empirical evidence to suggest many people might never be able to let go of their deeply held religious beliefs. It could well be biological and irreversible. No, I am not kidding here. In case you care about religiosity as a biological phenomenon, you might want to check out Andrew Newberg and Eugene d'Aquili's book Why God Won't Go Away: Brain Science and the Biology of Belief. Udo Schuklenk is a professor of philosophy at Queen's University in Kingston, Canada, and with Russell Blackford co-author of '50 Great Myths About Atheism'. Email feedback to columns@gleanerjm.com. Schuklenk tweets @Schuklenk.
Preoperative carcinoembryonic antigen level predicts prognosis in patients with pseudomyxoma peritonei treated with cytoreductive surgery and hyperthermic intraperitoneal chemotherapy. Even though the management of pseudomyxoma peritonei (PMP) was improved with cytoreductive surgery (CRS) and hyperthermic intraperitoneal chemotherapy (HIPEC), several aspects of PMP still need to be optimized, including patient selection for surgery and prognostic factors. We assessed the role of preoperative carcinoembryonic antigen (CEA) levels in PMP patients treated with CRS and HIPEC. A total of 449 PMP patients with documented preoperative CEA levels referred to our center between 2005 and 2011 underwent CRS and HIPEC. The association between CEA levels and characteristics of patients with PMP was assessed with the χ² test, linear correlation, and logistic regression analyses. Survival analysis was performed with the Cox proportional hazards model. Median age was 55 (range 19-84) years. There were 245 (54.5%) females and 204 (45.5%) males. Preoperative CEA levels were elevated in 328 (73%; sensitivity) patients with PMP. Preoperative CEA levels were also related to the peritoneal cancer index (P < 0.0001), cytoreductive surgery scores (P < 0.0001), progression-free survival (P < 0.001) and overall survival (P < 0.001) in patients with PMP. Our results indicate that preoperative CEA levels are useful in predicting the extent of disease and surgical success as well as progression-free and overall survival in patients with PMP treated with cytoreductive surgery and HIPEC.
Q: Relation of the kernels of one bounded operator and its extension Sorry for this long and formal post. The application in PDEs is mentioned just at the end. Let $$V \hookrightarrow H \text{ and } Q_H' \hookrightarrow Q',$$ where $V$ and $Q$ are Banach and $H$ and $Q_H$ are Hilbert spaces. The hooked arrow $\hookrightarrow$ denotes the continuous embedding, which is basically $V \subset H$ and $\|v\|_V \geq \|v\|_H$ for all $v\in V$. The same with $Q_H'\hookrightarrow Q'$. Also, assume that the embeddings are dense, i.e. $\overline V = H$ and $\overline {Q_H'} = Q'$, where the overline denotes the closure of the space with respect to the norm of its superspace. Consider the linear bounded operator $$J\colon V \to Q_H'.$$ Then $V_0:=\ker(J)$ is a closed subspace of $V$. From an inf-sup condition, I have that $$\|Jv\|_{Q_H'} \geq \gamma \|v\|_V\quad (1)$$ for all $v \in V_1$, where $V_1$ is complementary to $V_0$, i.e. $V=V_0 \oplus V_1$. Furthermore, I have that $J\colon V\subset H\to Q'$ is bounded, so that one can define the natural extension $\bar J\colon H \to Q'$ (using that $V$ is dense in $H$), which is bounded as well. Also for $\bar J$ I assume this boundedness from below, see $(1)$, for functions that are not in the kernel of $\bar J$. Now my question is: Is the kernel of $J$ dense in the kernel of $\bar J$? Or, equivalently, is $\overline{V_0} = H_0$, where $H_0$ is the kernel of $\bar J$? What I have tried so far: I have shown that $\overline{V_0} \subset H_0$. To show the converse direction, I thought of taking $h_0 \in H_0$ and showing that there is a sequence $\{v_{0,n}\} \subset V_0$ that goes to $h_0$ (in the norm of $H$). Since $\overline V = H$, there is $\{v_n\} \subset V$ that goes to $h_0$ (in the norm of $H$). Because of $(1)$ there is a bounded projector $P\colon V \to V$ with $P(V)=V_0$. Then one can split every $v_n$ into $v_{0,n}:=Pv_n$ and the remainder $v_{1,n}$ that is in $V_1$. Now I want to show that $\{v_{1,n}\}$ goes to $0$ (in $H$), which would make $\{v_{0,n}\} \subset V_0$ approach $h_0$. ... In terms of PDEs, this would answer the question of whether the (sub)space of divergence-free elements of $H_0^1(\Omega)^3$ is dense in the (sub)space of such functions in $L^2(\Omega)^3$. In this case: $J:=\mathrm{div}$, $V:= H_0^1(\Omega)^3$ and $H:=L^2(\Omega)^3$, $Q_H := L^2(\Omega)/\mathbb R$ and $Q' = (H^1(\Omega)/\mathbb R)'$. And the question becomes: Is $$ \{v\in H_0^1(\Omega)^3:\text{div } v = 0 \in L^2(\Omega)/\mathbb R \} \text{ dense in } \{v \in L^2(\Omega)^3:\text{div } v = 0 \in (H^1(\Omega)/\mathbb R)' \} $$ A: Some updates: somehow I think the argument doesn't need to be so long. Actually, the divergence-free vector fields in $H(\mathrm{div})$ can be defined directly: the $L^2$-divergence-free vector fields are the closure of the $C^{\infty}$-divergence-free vector fields in the $L^2$-norm. This is in Luc Tartar's book, page 35, here. Then the density argument for $C^{\infty}\subset H^1 \subset L^2$ would imply the density of the $H^1$-divergence-free vector fields in the $L^2$-divergence-free vector fields. The proof I presented was just an attempt to replicate what Tartar did for the density of $C^{\infty}$-divergence-free vector fields in $H^1$-divergence-free vector fields. The tool we use to prove density: suppose $\mathscr{X} \subset X$ is a subspace, both Banach.
Define $$X^{\perp} = \{l\in X': \langle l,v\rangle =0 \;\forall v\in X\},$$ and $$\mathscr{X}^{\perp} = \{l\in X': \langle l,v\rangle =0 \;\forall v\in \mathscr{X}\},$$ where $X'$ is the set of all bounded linear functionals on $X$. Then we have Claim: $\mathscr{X}$ is dense in $X$ $\Longleftrightarrow$ $\mathscr{X}^{\perp} =X^{\perp}$. Sketch of the proof: First, $\mathscr{X}^{\perp} \supset X^{\perp}$ always holds. "$\Rightarrow$" is a standard exercise. For "$\Leftarrow$", we prove the contrapositive: suppose the density does not hold; then we could find an open subset $Z\subset X$ such that $\overline{\mathscr{X}} \cap Z =\varnothing$. Choose $z\in Z$; by Hahn-Banach we can find a non-zero bounded linear functional $g\in X'$ such that $\langle g,z\rangle \neq 0$. Consider a functional $L$ on $\overline{\mathscr{X}} + \mathbb{R}z$: $$ \langle L,x+tz\rangle = \langle l,x\rangle + t\langle g,z\rangle, \quad \text{ for } x\in \mathscr{X}, t\in \mathbb{R}, l\in \mathscr{X}^{\perp}, $$ then we can extend $L$ to the whole of $X$. It can be checked that $L\in \mathscr{X}^{\perp}$, but $\langle L,z \rangle = \langle g,z \rangle\neq 0$ implies $L\notin X^{\perp}$. Thus $\mathscr{X}^{\perp} \not\subset X^{\perp}$, which proves the claim. Now we move on to prove: Divergence-free vector fields in $H^1$ are dense in divergence-free vector fields in $L^2$. Denote $$ V :=H_0^1(\Omega)^3,\quad V_0 := \{v\in H_0^1(\Omega)^3:\mathrm{div}\, v = 0\}, $$ and $$ H := L^2(\Omega)^3,\quad H_0 := \{v \in L^2(\Omega)^3:\mathrm{div}\, v = 0 \}, $$ then what you wanted to show is: (A) $V_0$ is dense in $H_0$. We can prove this using the above claim. Define $$ H(\mathrm{div}) = \{v \in L^2(\Omega)^3:\mathrm{div}\, v \in L^2(\Omega) \}, $$ and we can check this is a Hilbert space under the norm: $$ \|\cdot\|_{H(\mathrm{div})}^2 = \|\cdot \|_{L^2(\Omega)^3}^2 + \|\mathrm{div}(\cdot)\|_{ L^2(\Omega)}^2. $$ All the relevant spaces are now Hilbert, and we can associate each bounded linear functional with a specific inner product. First let $l\in H(\mathrm{div})'$; the representation theorem in Hilbert space says there is some $u_l \in H(\mathrm{div}) \subset L^2(\Omega)^3$ with: $$ \langle l,v\rangle = \int_{\Omega} u_l \,v + \int_{\Omega}(\mathrm{div} \,u_l )\,(\mathrm{div}\, v). $$ Consider some $l$ that vanishes on $V_0$: $$V_0^{\perp} = \{l\in H(\mathrm{div})': \langle l,v\rangle =0 \;\forall v\in V_0\}\subset \{l\in (H_0^1(\Omega)^3)': \langle l,v\rangle =0 \;\forall v\in V_0\}. $$ We also know that $$ \mathrm{div}: H_0^1(\Omega)^3 \to L^2(\Omega),\quad \text{ and }\quad \mathrm{div}^* = -\nabla : (L^2(\Omega))'\simeq L^2(\Omega) \to ( H_0^1(\Omega)^3)'. $$ The closed range theorem reads: $$ R(-\nabla ) = (\mathrm{ker}(\mathrm{div}))^{\perp} = \{l\in ( H_0^1(\Omega)^3)': \langle l,v\rangle =0 \;\forall v\in \mathrm{ker}(\mathrm{div}) = V_0 \} \supset V_0^{\perp}, $$ and since $\langle l,v\rangle =0 $ for any $v\in V_0$, it follows that $u_l = \nabla \phi$ for some $\phi\in L^2(\Omega)/\mathbb{R}$ in the sense of the isomorphism: $$ \langle l,v\rangle = \int_{\Omega} u_l \,v = \int_{\Omega} \nabla \phi \,v, $$ for $v\in H^1_0(\Omega)^3$ and divergence-free. Now we want to show $$ V_0^{\perp}\subset \{l\in H(\mathrm{div})': \langle l,v\rangle =0 \;\forall v\in H_0\} = H_0^{\perp}.
$$ For the above $l$ that vanishes on $V_0$, we have $u_l = \nabla \phi$ with $u_l \in H(\mathrm{div})\subset L^2(\Omega)^3$, and we can pin down this $\phi\in H^1_0(\Omega)$ by solving: $$ \int_{\Omega} \nabla \phi \cdot \nabla v = \int_{\Omega} u_l \cdot \nabla v,\quad \forall v\in H^1_0(\Omega). $$ We can use Green's identity, which is valid for $u\in H(\mathrm{div})$ and $\phi \in H^1$ (this result can be found in Tartar's book): for $u\in H_0\subset H(\mathrm{div})$, $$ \langle l,u\rangle = \int_{\Omega} \nabla \phi \cdot u = -\int_{\Omega} \phi\,\mathrm{div}\,u + \int_{\partial \Omega} (u\cdot n)\phi \,dS, $$ and the boundary term vanishes for $\phi \in H^1_0(\Omega)$. Therefore $\langle l,u\rangle = 0$ for $u\in H_0$, and we have: (B) $V_0^{\perp}\subset H_0^{\perp}$. By the claim, we have (A).
Last week there was an exclusive in the Times – widely followed up – revealing majority support for NHS-linked tax rises. 'For the first time in more than a decade, a majority of Britons say that they are personally willing to pay more to increase spending, according to the respected British Social Attitudes survey'. It followed this up with a leading article on this breakthrough, saying:

'Ministerial hearts may be gladdened, therefore, by a new poll published in The Times today. It suggests that 61 per cent of voters back higher taxes to fund the health service, with 25 per cent saying that the government should raise existing taxes and 36 per cent supporting a new tax. A dedicated health tax would be difficult to get right, but it is a good idea.'

Only Mr S suspects ministers ought not to get too carried away. You see, one set of hearts that would certainly be gladdened were those at The King's Fund, which planted the story – and, unbeknown to Times readers, also paid for this question to be slipped into the normal BSA survey. The King's Fund is perhaps the most effective of the groups lobbying for more NHS spending. If a poll is commissioned by a group with a clear vested interest in its outcome, the reader is informed. 'X is happening, according to a [pollster] survey conducted for [sponsor]' – so the reader can add their own pinch of salt. Although The Times is normally very straight with its readers on sponsored polls, on this occasion it said merely that the King's Fund 'analysed the data'. Why were the Times and everyone else misled? Perhaps because the King's Fund summarised the findings as below:

'some 61 per cent of respondents support tax rises to increase NHS funding'

However, the actual poll phrased the question in the following way:

'If the NHS needed more money, which of the following would you be prepared to accept?'

Note the first word: 'if'. It loads the question. For example, dear reader, if you were forced to give Mr S £1,000, would you rather pay with PayPal, Visa or cash? Answering the question doesn't mean you want to give me money. But it can be spun that way. A genius ruse from the King's Fund. If the military thought about this tactic – slipping a question into the BSA survey, with an 'if' before it – the MoD budget might not be so neglected. There is an overall trend of public support for NHS-related tax rises, but a YouGov poll (taken after the King's Fund poll) puts it at 46pc, rather than the 61pc reported last week. The King's Fund now says that this 'if' question 'was not designed to test support for tax rises specifically'. So it must have been shocked – shocked! – to read The Times leader having assumed the opposite. And shocked to read headlines like 'majority of voters back tax rises to bolster NHS' in The Guardian and 'Nearly two thirds of Britons are ready to pay MORE TAX to bail out the crisis-hit NHS' in the Mail. As Donald Trump would say: mission accomplished.
Dozens (disambiguation) Dozens are groupings of twelve. Dozens may also refer to: The Dozens, a contest common among African Americans in which participants insult each other.
1. Field of the Invention The present invention relates to a ringing signal sending apparatus, and more particularly to an apparatus for sending a ringing signal to inform the called terminal of the presence of an incoming call addressed to it. 2. Description of the Related Art Telephone exchanges employ subscriber interface units to provide various control functions over local loop connections, such as handling of incoming and outgoing calls to/from customer premises equipment, or telephones. Sending a ringing signal (or call signal) is one of those subscriber interface functions. The subscriber interface unit notifies a remote telephone set that there is an incoming call addressed to it, by sending a predetermined pattern of ringing and silent intervals. FIG. 20 shows the duty cycle and waveform of a typical ringing signal. The ringing signal is an alternating current (AC) signal with a frequency of 20 to 25 Hz superimposed on a certain direct current (DC) bias voltage (e.g., −48 volts). The very first cycle of ringing and silence is referred to as the “initial ringing cycle”; similar cycles that follow are called the “cadence ringing cycles.” The illustrated signal has a duty cycle of 1.2-second ringing and 2.8-second silence for the initial ringing cycle, and 1.2-second ringing and 3.0-second silence for the cadence ringing cycles. The subscriber interface unit supplies the called telephone set with a ringing signal with such a cyclic pattern. The silent period in the initial ringing cycle is often used to provide various service functions, during which a data transfer path is established between the called telephone set and the local exchange. For example, the customer can see the caller's phone number displayed on his/her telephone set when an incoming call is signified. This service, known as the “caller number display” or “calling line identification presentation” (CLIP), exploits the silent period in the initial ringing cycle to transfer the originating phone number information. FIG. 21 shows subscriber interface units, where the function of sending a ringing signal is illustrated in a simplified form. Here, a plurality of subscriber interface units 101-1 to 101-n are installed in a telephone exchange 100. The subscriber interface unit 101-1 comprises a relay switch 111, a feed voltage source 112, and a ringing voltage source 113. The illustrated subscriber interface unit 101-1 supports only one channel for simplicity. In actual implementations, however, a single subscriber interface unit serves multiple telephone channels (e.g., 32 channels). The terminal “a” of the relay switch 111 is connected to the feed voltage source 112, which produces a feed voltage of −48 volts. The terminal “b” is connected to the ringing voltage source 113. The terminal “c” is connected to a telephone set 20 through the ring wire (also called the “B wire”) of its local loop. The terminal “e” is grounded. The relay switch 111 is actuated by a driving command supplied to its terminal “d.” The tip wire (also called the “A wire”) extending from the telephone set 20 is grounded at the subscriber interface unit 101-1. Suppose here that the telephone exchange 100 is to send a ringing signal to the telephone set 20. This process is initiated by giving a driving command to the relay switch 111. The armature contact (sw) of the relay switch 111 then moves to the “b” side. This creates a circuit that connects the ring wire with the ringing voltage source 113, thus sending out a ringing signal to the telephone set 20.
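For illustration only (this sketch is not part of the patent text), the cadence of FIG. 20 is simple enough to encode as plain data; all names below are hypothetical:

// Illustrative encoding of the ringing cadence described above: a 20-25 Hz
// AC burst superimposed on a -48 V DC bias, alternating with silence.
#include <cstdio>

struct CadencePhase {
    double ring_s;    // ringing interval, in seconds
    double silent_s;  // silent interval, in seconds
};

int main() {
    const CadencePhase initial = {1.2, 2.8};  // initial ringing cycle
    const CadencePhase cadence = {1.2, 3.0};  // cadence ringing cycles (repeat)
    std::printf("initial cycle: %.1f s ringing, %.1f s silence\n",
                initial.ring_s, initial.silent_s);
    std::printf("cadence cycle: %.1f s ringing, %.1f s silence (repeats)\n",
                cadence.ring_s, cadence.silent_s);
    return 0;
}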
Referring to FIG. 22, a conventional subscriber interface unit 101, which is configured as above, is electrically connected to a telephone set 20 via tip and ring wires. When modeling the telephone set 20 and subscriber line for transient analysis, one should consider their inductance and stray capacitance. The lower half of FIG. 22 shows such parasitic components. In this model, the telephone set 20 is represented as an inductor L21 and a capacitor C21 connected in series between the tip and ring wires. The tip wire itself is modeled by series inductors L22 and L23 and a capacitor C22 representing its stray capacitance to the ground. Likewise, the ring wire is modeled by series inductors L24 and L25 and a capacitor C23 representing its stray capacitance to the ground. Further, there is a parallel capacitor C24 between the tip and ring wires. The above-described parasitic components may cause noise interference, particularly when the subscriber interface unit 101 is sending a ringing signal. At every transitional point between a ringing period and a silent period, it is likely that the parasitic inductors produce counter electromotive forces, and the parasitic capacitors discharge their electric energy, thus causing impulse noises. Since the subscriber interface unit 101 simultaneously handles many telephone channels, impulse noises developed on a channel may cause interference with its adjacent channels via closely arranged wiring patterns on the printed circuit board. Frequent impulses on a channel would be heard by the user of an adjacent channel as unpleasant crosstalk noises. As previously mentioned, a single telephone exchange 100 is designed to accommodate many subscriber interface units 101. This means that impulse noises developed on a certain unit could reach the adjacent units through the wiring on the backplane, causing adverse effects on their operation. As described earlier in FIG. 20, the telephone exchange 100 establishes a path to a remote telephone set 20 to transport data during the silent period in the initial ringing cycle. Conventionally, however, the hardware of the subscriber interface unit 101 is configured to maintain the established path in the cadence ringing cycles that follow. In other words, an unnecessary path is activated during such silent periods that have no data to send. This brings about a practical problem. To activate a data transfer path is to reduce the impedance of the loop. This low loop impedance helps the development of larger impulse noises, making it difficult to suppress them.
Q: Is the homology class of a compact complex submanifold non-trivial?

Let $X$ be a connected complex manifold (not necessarily compact). Let $C \subset X$ be a compact complex $k$-dimensional submanifold (for some $k>0$). Is it true, in this generality, that the homology class $[C] \in H_{2k} (X,\mathbb{Z})$ is non-trivial?

EDIT - some motivating observations: a first striking fact in the study of complex manifolds is that there is no analogue of the Whitney embedding theorem for compact ones; indeed, by the maximum modulus principle $\mathbb{C}^n$ has no compact complex submanifolds. I am not very familiar with complex manifolds of dimension $n>1$ (and of course in dimension $1$ this problem is not very interesting). The examples of compact complex submanifolds I have and can handle (as far as the above problem is concerned) are the following:

- the first factor in the product $K \times X$, where $K$ is any compact complex manifold and $X$ any complex manifold
- the base of a vector bundle over a compact manifold $K$
- complex projective subspaces $\mathbb{CP}^k \subseteq \mathbb{CP}^n$

and in these cases it is easy to see that I get something which is non-trivial in homology, by quite general facts not really related to complex geometry. Moreover, I stumbled upon the fact that there exist many (non-algebraic) 2-dimensional tori without compact complex (1-dimensional) submanifolds, as discussed for instance here. This has boosted my impression that if we manage to find a compact complex submanifold, then it must be very special indeed, in some sense. I would like to know if there is some counterexample to the sentence above, or if it can be proved by general methods in complex geometry. I am asking it in this generality also because I am not very familiar with Kähler or algebraic geometry, but of course I appreciate answers under the additional hypothesis that $X$ is compact/projective/Kähler/...

A: As Mike Miller points out in the comments, if $X$ is a Kähler manifold (not necessarily compact), and $C$ is a $k$-dimensional compact complex submanifold, then $i_*[C] \in H_{2k}(X, \mathbb{Z})$ is non-trivial (here $i : C \to X$ is the inclusion map and $[C] \in H_{2k}(C, \mathbb{Z})$ is the fundamental class of $C$). To see this, let $\omega$ be the Kähler form; then $\int_C\omega^k = \operatorname{Vol}(C)$ by Wirtinger's Theorem (actually, Wirtinger's Theorem is much stronger than this). Now note that $\int_C\omega^k$ is actually a pairing of homology and cohomology classes, namely

$$\int_C\omega^k = \langle i_*[C], [\omega]^k\rangle.$$

Keep in mind, this is a pairing of real homology and cohomology classes, not integral ones. Although $i_*[C] \in H_{2k}(X, \mathbb{Z})$, we only have $[\omega] \in H^2(X, \mathbb{R})$ - provided $X$ is compact, finding a Kähler metric with $[\omega]$ integral is equivalent to $X$ being projective. We're identifying $i_*[C] \in H_{2k}(X, \mathbb{Z})$ with its image under the map $H_{2k}(X, \mathbb{Z}) \to H_{2k}(X, \mathbb{R})$ induced by the inclusion $\mathbb{Z} \to \mathbb{R}$. If $i_*[C] \in H_{2k}(X, \mathbb{Z})$ were trivial, then its image in $H_{2k}(X, \mathbb{R})$ would also be trivial, in which case the pairing $\langle i_*[C], [\omega]^k\rangle$ would be zero. As $\operatorname{Vol}(C) > 0$, we therefore see that $i_*[C]$ is non-trivial.

A common misconception with this argument is that if a class in $H_{2k}(X, \mathbb{Z})$ is non-zero, then its image in $H_{2k}(X, \mathbb{R})$ will also be non-zero.
At no point of the argument did I make such a claim, which is good because it is false: $H_{2k}(X, \mathbb{Z})$ may have torsion, which will necessarily be mapped to zero in $H_{2k}(X, \mathbb{R})$.

As for the non-Kähler case, the result is no longer true. Let $X$ be the standard Hopf surface: $(\mathbb{C}^2\setminus\{(0,0)\})/\mathbb{Z}$ where the $\mathbb{Z}$-action is generated by the map $(z_1, z_2) \mapsto (2z_1, 2z_2)$. The image of $\mathbb{C}^*\times\{0\}$ under the natural projection $\pi : \mathbb{C}^2\setminus\{(0,0)\} \to X$ is

$$C := \{[(w, 0)] : w \in \mathbb{C}^*\} \cong \mathbb{C}^*/\mathbb{Z}$$

where the $\mathbb{Z}$-action is given by $w \mapsto 2w$. This is a one-dimensional compact complex submanifold of $X$, namely a torus. To see that the image of the fundamental class of $C$ is trivial in $H_2(X, \mathbb{Z})$, note that $X$ is diffeomorphic to $S^1\times S^3$, so by the Künneth Theorem, $H_2(X, \mathbb{Z}) \cong \bigoplus_{i+j=2} H_i(S^1, \mathbb{Z}) \otimes H_j(S^3, \mathbb{Z}) = 0$, since $S^1$ has no homology above degree $1$ and $S^3$ has none in degrees $1$ and $2$ (and the Tor correction terms vanish because all the groups involved are free).

Combining the considerations in the Kähler case with this example in the non-Kähler case, Donu Arapura gave a nice example of a non-compact complex surface which is not Kähler.
Ok, yeah, this is a blog post about Title Insurance. You've been warned. I know, I know...title insurance is probably the most boring thing you could think of reading about. And you're right. It probably is. But hey, it's a part of this thing called BUYING AND SELLING REAL ESTATE, so you should probably know at least a little about it, right? If you thought or mouthed to yourself silently, "yep", then read on, my friend. So what does a title company do, and what is title insurance? In short, they perform a title search to verify that the seller in a transaction has the legal title necessary to actually sell the property. They do this through the use of a title plant, which is a repository of all publicly recorded information available for a specific piece of property. Upon completing the title search, the title company will issue a preliminary title report, and will be prepared to issue title insurance upon close of escrow on a purchase. In Oregon, the seller of a property pays for the new owner’s title insurance policy. This title insurance policy protects the new owner from a number of items, including claims to title from other entities or people. However, there are exclusions to coverage, in much the same way that any other insurance has exclusions. The purpose of title insurance is to offer protection to the buyer that when they take ownership of the property, they have full legal title, free and clear of any other claims to the property. In addition to the owner’s title insurance policy, in any financed purchase the lender will require a lender’s title insurance policy. This policy covers a few of the items that are excluded from a standard owner’s title policy. The lender’s title policy is paid for by the buyer as part of their closing costs. There are a number of great title and escrow companies in Portland to choose from, and your broker likely has one that they can recommend, so don’t be afraid to ask them what their experience has been like with multiple companies. Finally, in Oregon, it is ultimately the buyer who gets to choose the title company. A seller cannot force a buyer to use a specific title insurance company. Usually the MLS listing information has an escrow preference mentioned, but the buyer does not have to necessarily choose that company, although it can be helpful to show good will. If you have any questions, or feel like we missed something important, don't hesitate to get in touch and let us know!
jots on linux/UNIX system administration, bash and perl -Tom Rodman

GNU/copyleft.org

Bradley Kuhn (GPL license expert/enforcer) and lawyer Karen Sandler have a podcast that covers the copyleft licenses. Their podcast has been running for years now, is called Free as in Freedom, and is hosted at http://faif.us/.

Why GNU matters, GNU history:

overview history of GNU
The GNU Project
GNU people

a few GNU licensed projects

https://www.gnu.org/encyclopedia/encyclopedia.html:
Nupedia was licensed initially under its own Nupedia Open Content License, switching to the GNU Free Documentation License before Wikipedia's founding at the urging of Richard Stallman.

http://en.wikipedia.org/wiki/History_of_Linux:
1st GPL'd Linux: In 1992, Torvalds suggested releasing the kernel under the GNU General Public License. He first announced this decision in the release notes of version 0.12.[15] In the middle of December 1992 he published version 0.99 using the GNU GPL.

overview/summary: GNU SCM repo hosts for FLOSS projects

Savannah

very new: Kallithea
Software Freedom Conservancy is pleased to announce today its newest member project, Kallithea. Kallithea is a system for hosting and managing Mercurial and Git repositories. In contrast to GitHub (which serves only projects using Git, and which projects cannot host locally nor modify), Kallithea supports both Mercurial and Git, and is released freely under the GNU General Public License, version 3 (GPLv3).
http://sfconservancy.org/blog/2014/jul/15/why-kallithea/

The GNU date command (part of the coreutils package) has a wide range of options, including relative offset strings like tomorrow, yesterday, "2 weeks ago". It supports some date math, and time zone conversions.

~ $ date
Tue, Sep 28, 2010 3:32:17 PM
~ $ date --date '2 days ago'
Sun, Sep 26, 2010 3:32:28 PM
~ $ date -d '6:00pm 2 days ago'
Sun, Sep 26, 2010 6:00:00 PM
~ $ date --date '11am yesterday'
Mon, Sep 27, 2010 11:00:00 AM
~ $ date --date '6pm tomorrow'
Wed, Sep 29, 2010 6:00:00 PM
~ $ date --date "$(date --date 'next month' '+%m/1/%Y') -1 day"
Thu, Sep 30, 2010 12:00:00 AM
~ $ : above is last day in month
~ $ date --date 'now +10 days'
Fri, Oct 08, 2010 3:33:14 PM
~ $ date -d "1am +3 weeks" '+%H:%M %D'
01:00 10/19/10
~ $ date --date 'Jan 10 00:00 -0600 - 1 hour - 50 min'
Sat, Jan 09, 2010 10:10:00 PM
~ $ date --date "4:59:54 1 hour ago 53 min ago 46 sec ago"
Tue, Sep 28, 2010 3:06:08 AM
~ $ date --date 'Dec 25'
Sat, Dec 25, 2010 12:00:00 AM
~ $ date --date 'Jan 9 11pm + 1 hour'
Sun, Jan 10, 2010 12:00:00 AM

--snip

~ $ date
Fri, Nov 19, 2010 10:18:49 AM
~ $ date --date "last sunday"
Sun, Nov 14, 2010 12:00:00 AM
~ $ date --date "next tue"
Tue, Nov 23, 2010 12:00:00 AM

--snip/daylight savings

$ date --date "3/14/2010 1:59am + 2 min"
Sun, Mar 14, 2010 3:01:00 AM
$ date --date "3/14/2010 1:59am + 1 min"
Sun, Mar 14, 2010 3:00:00 AM
$ date --date "3/15/2010 1:59am + 1 min"
Mon, Mar 15, 2010 2:00:00 AM

time zone conversions, epoch sec

~ $ TZ=Asia/Calcutta date --date '7pm fri CDT'
Sat, Oct 02, 2010 5:30:00 AM
~ $ TZ=Europe/Berlin date -d "1970-01-01 UTC $(TZ=America/Chicago date --date "6:15am" '+%s') sec"
Tue, Sep 28, 2010 1:15:00 PM
~ $ date -d '1970-01-01 UTC 0 sec'
Wed, Dec 31, 1969 6:00:00 PM
~ $ TZ=America/Chicago date -d '1970-01-01 UTC 0 sec'
Wed, Dec 31, 1969 6:00:00 PM
~ $ TZ=America/New_York date -d '1970-01-01 UTC 0 sec'
Wed, Dec 31, 1969 7:00:00 PM

Many more formats are available than shown here. Some '+FORMAT' examples:

~ $ date +%-m/%-d/%Y
9/28/2010
~ $ date '+%F_%H%M%S'
2010-09-28_154837
~ $ date '+%a %F %T.%N'
Tue 2010-09-28 15:49:31.362339100
~ $ date --date='25 Dec' +%j
359
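A terser spelling of the "1970-01-01 UTC N sec" idiom above is the '@' epoch-seconds syntax; a small hedged example (assuming a GNU coreutils recent enough to support '@'):

~ $ date -d @0
Wed, Dec 31, 1969 6:00:00 PM
~ $ TZ=Europe/Berlin date -d @"$(TZ=America/Chicago date --date '6:15am' '+%s')"
Tue, Sep 28, 2010 1:15:00 PM
~ $ : same result as the longer "1970-01-01 UTC ... sec" form shown above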
For learning or reviewing complex tools that take months to master, an approach I use is to gather all the related help into a single vim edit session. For example, consider the tool 'gpg'. Here's the commandline I use to concatenate the texinfo files, man pages, and selected help webpages:

true; ( set -x;: {{{;gpg2 --help; : }}}; : {{{;_vwg http://www.gnupg.org/gph/en/manual.html;: }}}; : {{{;_vwg http://www.dewinter.com/gnupg_howto/english/GPGMiniHowto.txt; : }}}; : {{{; zcat /usr/share/info/{gnupg.info*gz,pinentry.info.gz}; : }}};: {{{; _m gpg2 ;: }}};: {{{; _m gpg-agent ; : }}} ) 2>&1 | 2v -i my-GPG-help

'true' is there only for ease of mouse-selecting the text for copy/pasting. 'set -x' lets you see which commands ran. {{{ and : }}} introduce vim folds, which place each help topic in a separate fold or block. In vim, type ":help fold". _vwg is a tool from uqjau which uses wget and pandoc to convert a webpage to markdown. _m is a 4-line bash function that runs "man" "$@" | col -bx, thus converting a man page to ASCII. _2v ("to vim") is a personal bash function/filter that creates a temp file with all the output content. It also creates a 1-line vim command in a file in a fixed location, that I source from within vim. So within vim I can import the content using a 2-keystroke custom vim "leader command". In vim, type ":help leader".

ex 1 line command created by _2v:

e /var/home/rodmant/.vim/tmp/2v.STDIN.my-GPG-help.SunJan04.0512.548

ex snip of output:

$ head /var/home/rodmant/.vim/tmp/2v.STDIN.my-GPG-help.SunJan04.0512.548
+ : '{{{'
+ gpg2 --help
gpg (GnuPG) 2.0.10
libgcrypt 1.4.4
Copyright (C) 2009 Free Software Foundation, Inc.
--snip

Function "Bc" below starts a 'bc' session, echoes commands to bc STDIN initially to set its scale and define a function, and then uses 'cat' to connect the starting shell's STDIN with bc, so you can interact w/bc (w/the keyboard for example).

Bc() {
  : --------------------------------------------------------------------
  : Synopsis: Wrapper for 'bc'. Defines an exponential function
  : 'p (a,b) { return (e ( l (a) * b )) }'
  : --------------------------------------------------------------------
  {
    echo 'define p (a,b) { return (e ( l (a) * b )) }'
    echo scale=3
    cat
  }| bc -lq
}

I like to minimize the number of shells I have open, so when a command takes more than 5 seconds, I background it; there are several approaches. In the general case consider foo to be a builtin command or external command. Where noted, 'foo' could represent a complex bash command, as in:

for x in a b c; do true|false|true; done

The simplest way to background is:

foo&

This does not always work smoothly. In some shells foo will suspend itself if it generates STDOUT. If you have permissions to run 'at', you can:

echo foo|batch
# or
echo foo | at now +45 min
at 8am Sun <<\END
foo -xyz
for x in a b c; do true|false|true; done
END

setsid will run the job in a separate process group from your current shell.

setsid foo
# or:
setsid bash <<\END
{
  du /var
  date
} > /tmp/var-df 2>&1
END

The job will run in the bg, with no tty (no terminal), and no association with your shell session (it will not show up in 'jobs' output). With setsid, logging out of your shell session should never impact the job. I have a script called '_bg' in uqjau, which is a wrapper for setsid.
$ head -23 $_C/_bg
#!/usr/bin/env bash
# --------------------------------------------------------------------
# Synopsis: Run simple command in background in separate
# process session. Will not be seen by your shell as a job. Log
# STDOUT and STDERR to file. Simple command => exactly 1 command
# and its args.
# --------------------------------------------------------------------
# Usage:
#  ourname SIMPLE-COMMAND_HERE
#  ourname -
#  ourname
#
#  (in last 2 cases above) => shell script to run is from STDIN
#  (complex shell commands OK)
# --------------------------------------------------------------------
# Options:
#  -l            run in bash login shell w/ -i
#  -e            set pathname env vars per _29r_dirdefs output
#  -o LOGPATHNAME
#  -n JOBNAME    becomes part of log name
#
#  -W            run nothing, but show recent logs
# --------------------------------------------------------------------

I seldom use '_bg'. The simple workaround I use all day is in ~/.inputrc:

"\C-xB": "\C-a(: set -x;: pwd; \C-e) < /dev/null 2>&1|ff &\C-b"
# (works for both simple and complex commands)
# For help on ~/.inputrc, see 'man bash' (Readline Initialization).

When I type:

foo\C-xB
# foo can be a complex bash commandline, with pipes, switches etc
# result is:
(: set -x;: pwd; foo) < /dev/null 2>&1|ff &
# Remove the leading colons above for verbose runs.

By redirecting foo's STDIN to /dev/null, you prevent it from trying to access your tty. foo's STDOUT and STDERR are piped to 'ff', which will log the job to a new tempfile; when foo completes, 'ff' will beep and rudely display the log pathname. If you use 'ff -i baz', then 'baz' will be part of the logfile name. ff is part of uqjau.

When one of my cron jobs fails, the wrapper script that launched and logged it places an appropriately named symbolic link to the log file into a normally empty directory. Another cron job watches that dir and emails when a link exists, alerting one to the failed job and positioning you to see the detailed log. The wrapper script is called 'jobmon', and is part of uqjau. jobmon has a fair number of options; for example, it supports passing in, via its args, another meta-quoted shell commandline for the script you want to run. A minimal sketch of the watching side of this scheme follows.
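This is not the actual jobmon companion job; the directory layout and the recipient address are made-up illustrations of the idea:

#!/usr/bin/env bash
# Sketch: cron job that emails when the normally-empty failure
# dir contains symlinks left behind by a failed job's wrapper.
# FAILDIR and the mail address are hypothetical.
FAILDIR=~/var/failed-jobs
shopt -s nullglob
links=( "$FAILDIR"/* )
if (( ${#links[@]} > 0 )); then
  printf '%s\n' "${links[@]}" |
    mail -s "failed cron job(s) on $(hostname)" jdoe@example.com
fi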
'/usr/sbin/tmpwatch' is typically run in cron to clean up /tmp. Here is a man snip:

If the --atime, --ctime or --mtime options are used in combination, the decision about deleting a file will be based on the maximum of these times. The --dirmtime option implies ignoring atime of directories, even if the --atime option is used.

-u, --atime
Make the decision about deleting a file based on the file's atime (access time). This is the default. Note that the periodic updatedb file system scans keep the atime of directories recent.

-m, --mtime
Make the decision about deleting a file based on the file's mtime (modification time) instead of the atime.

-c, --ctime
Make the decision about deleting a file based on the file's ctime (inode change time) instead of the atime; for directories, make the decision based on the mtime.

The last two args for tmpwatch are always: <hours> <dirs>; unfortunately the -u, -m, and -c all refer to the single argument <hours>.

In my personal (non root) crontab, I run a modified copy of the shell script /etc/cron.daily/tmpwatch:

$ egrep 'flags=|days=|/usr/sbin/tmpwatch' ~/bin/tmpwatch
#flags=-umc
flags=${tmpwatch_flags:--cm}
days=${tmpwatch_days:-5}
/usr/sbin/tmpwatch --verbose "$flags" $[24 * $days] "${@:-${HOME}/tmp}"

I suggest you study the timestamps in your tmp dirs to see if atimes or ctimes are being freshened by other processes; only after that should you finalize your tmpwatch <hours> argument and -u, -m, and -c switches. Here I run my bash function '_tmpf_timestamps' to look at timestamps below ~/tmp:

$ _tmpf_timestamps -c 10 ~/tmp
Total non dirs: [114] in [/var/home/rodmant/tmp]  Dirs: [127]  Empty Dirs: [7]

count of non dirs w/[mca] timestamp-age older than 'col 1'-days :

i:  0 m:   114 c:   114 a:   114
i:  1 m:    64 c:    46 a:    45
i:  2 m:    54 c:    36 a:    36
i:  3 m:    51 c:    33 a:    33
i:  4 m:    40 c:    22 a:    22
i:  5 m:    39 c:    21 a:    21
i:  6 m:    39 c:    21 a:    21
i:  7 m:    39 c:    21 a:    21
i:  8 m:    39 c:    21 a:    21
i:  9 m:    39 c:    21 a:    21
i: 10 m:    39 c:    21 a:    21

My theory is that tmpwatch does not clean up sockets or named pipes (the 21 items above).

$ ls -lct $(find . ! -type d -ctime +5) |head -2
srwxr-xr-x 1 jdoe crew 0 Oct 21 07:41 ./sock=
prw-rw-rw- 1 jdoe crew 0 Feb 13 2014 ./_untartmp.dl.Irli3917/home/jdoe/s2f|
$ file ./_untartmp.dl.Irli3917/home/torodman/s2f
./_untartmp.dl.Irli3917/home/torodman/s2f: fifo (named pipe)

Theory now confirmed. See another man snip:

-a, --all
Remove all file types, not just regular files, symbolic links and directories.

Here is my bash function '_tmpf_timestamps':

/usr/local/etc/team/mke/iBASHrc $ _bashfunccodegrep _tmpf_timestamps < ./functions
_tmpf_timestamps() {
  : --------------------------------------------------------------------
  : Synopsis: Analyze timestamps of either tmpfiles or empty dirs. An
  : aid in debugging the behavior of tmpwatch script.
  : --------------------------------------------------------------------
  : Usage: $ourname [-d] DIRPATHNAME
  : ' -d Look only at empty dirs instead of files.'
  local opt_true=1 opt_char badOpt=
  OPTIND=1 # OPTIND=1 for 2nd and subsequent getopt invocations; 1 at shell start
  local OPT_d= OPT_c=
  while getopts dc: opt_char
  do # save info in an "OPT_*" env var.
    [[ $opt_char != \? ]] && eval OPT_${opt_char}="\"\${OPTARG:-$opt_true}\"" || badOpt=1
  done
  shift $(( $OPTIND -1 ))
  # If badOpt: If in function return 1, else exit 1:
  [[ -z $badOpt ]] || { : help; return 1 &>/dev/null || exit 1; }
  #unset opt_true opt_char badOpt
  (
    [[ $OPT_d == -d ]] && action="-type d -empty" || action="-type f"
    tdir=${1:-/tmp}
    [[ -d $tdir ]] || { echo $FUNCNAME:\[$tdir] not a dir; return 1; }
    tdir=$(cd "$tdir";pwd -P) # make tdir "find friendly"
    emptydirs=$(find $tdir -type d -empty 2>/dev/null|wc -l)
    echo Total files: \[$(find $tdir -type f 2>/dev/null |wc -l)] in \[$tdir] \
      " "Dirs: \[$(find $tdir -type d 2>/dev/null|wc -l)] \
      " "Empty Dirs: \[$emptydirs]
    if [[ $emptydirs = 0 && $action =~ -type\ d\ -empty ]] ;then
      return 1
    fi
    echo
    echo "count of files w/[mca] timestamp-age less than 'col 1'-days :"
    echo
    for (( i=1; $i <= ${OPT_c:-15} ;i += 1));do
      m=$(find $tdir $action -mtime -$i 2>/dev/null|wc -l)
      c=$(find $tdir $action -ctime -$i 2>/dev/null|wc -l)
      a=$(find $tdir $action -atime -$i 2>/dev/null|wc -l)
      printf "i:%3d m:%6d c:%6d a:%6d\n" $i $m $c $a
    done |sed -e 's~^~ ~'
  )
}

-- construct similar to 'eval'

$ cmd='set -- a s d ;for f in "$@";do echo $f;done'
$ source <( echo "$cmd" )  ## Only works in bash 4.x
a
s
d

Below is a bash function '_diskfull' used to help identify large files to manually delete. The bash function _bashfunccodegrep is used to display '_diskfull' from the file "functions":

/usr/local/etc/team/mke/iBASHrc $ _bashfunccodegrep _diskfull < functions
_diskfull() {
  : _func_ok2unset_ team function
  : Size-sorted output of: cd ARG1 ... du -xSma
  : Safe to run on /, because of -x switch to du, stays in / fs -- this has been tested.
  : -S == do not include size of subdirectories
  : Advantages of -S:
  : .. dirs w/small files only at their top level get low sort rank, top level as in "GNU find's depth 1"
  : .. fewer size sum calculations
  (
    set -eu
    local fs="${1:-$PWD}"
    fs_bn="$(basename "$(canPath "$fs")")"
    : canPath "$fs", could be replaced with: readlink -f "$fs"
    if [[ $fs_bn == / ]] ;then
      fs_bn=ROOT
    fi
    local tmpdir=${TMPDIR:-~/tmp}
    [[ -d $tmpdir ]] || tmpdir=/tmp
    local out="$( mktemp $tmpdir/$FUNCNAME.$fs_bn.$(hostnameshort).XXXXX)"
    du_stderr=$(mktemp $tmpdir/$FUNCNAME.du_stderr.XXXXX)
    sort_stderr=$(mktemp $tmpdir/$FUNCNAME.sort_stderr.XXXXX)
    cd "$fs"
    echo $FUNCNAME: writing to $out
    (
      set -x
      : CWD: $PWD writing to $out
      nice du -xSma 2>$du_stderr|nice sort -T $tmpdir -k1,1rn 2>$sort_stderr
      : cat $du_stderr
      cat $sort_stderr
    ) > $out 2>&1
  )
  rm -f $du_stderr $sort_stderr
}

I run cron scheduled backups to rsync.net, and tape backups to either DDS4 or LTO tapes. GNU tar supports tar backup to a tape drive on a remote host. From GNU tar texinfo help:

`--rsh-command=CMD'
Notifies `tar' that it should use CMD to communicate with remote devices.

For example: tar --rsh-command=/usr/bin/ssh ...

The code below is available in uqjau. I put together wrapper functions for tar and mt in a file to be sourced by bash (uqjau file: "_tape_utils.shinc"):

$ _bashfuncgrep _tar < ./_tape_utils.shinc
_tar() {
  # --------------------------------------------------------------------
  # Synopsis: GNU tar wrapper to support remote tape drive
  # --------------------------------------------------------------------
  (set -x;sleep 5;time tar ${_use_ssh+--rsh-command=$_use_ssh} "$@")
  # _use_ssh if defined is path to ssh, typically /usr/bin/ssh
}

The script I use for backing up a linux host to (remote or local) tape w/tar is called "_backupall", and is also part of uqjau. The bash function '_bashfuncgrep' is in iBASHrc.
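A hedged usage sketch of the wrapper (the host name and tape device are made up; _use_ssh is the hook shown in the wrapper above):

$ _use_ssh=/usr/bin/ssh
$ _tar -cvf backuphost:/dev/nst0 /etc
# GNU tar treats "HOST:FILE" as a remote archive and, with
# --rsh-command supplied by the wrapper, reaches it over ssh.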
( applies to GNU: ln, mv, and cp )

Ex of snafu:

~ jdoe $ ls -ldog *
lrwxrwxrwx 1 2 Mar 20 20:21 latest -> d3/
lrwxrwxrwx 1 2 Mar 20 20:20 prev -> d1/
~ jdoe $ ln -sf d2 prev # WRONG
~ jdoe $ ls -ldog *
lrwxrwxrwx 1 2 Mar 20 20:21 latest -> d3/
lrwxrwxrwx 1 2 Mar 20 20:20 prev -> d1/
~ jdoe $ ls -ld d1/*
lrwxrwxrwx 1 2 Mar 20 20:23 d1/d2 -> d2

solution:

~ jdoe $ ln -Tsf d2 prev # RIGHT
~ jdoe $ ls -ldog *
--snip
lrwxrwxrwx 1 2 Mar 20 20:23 prev -> d2/

Ex: rename an existing symbolic link and redefine another existing symbolic link:

mv -Tf saz yap
# -T, --no-target-directory == treat DEST as a normal file
# Without the -T, if yap had been a symbolic link to a dir, then
# the symbolic link 'saz' would have ended up under that dir.

_cg() {
  : Regex grep of: all commands in PATH, and bash: aliases, built-ins, keywords, and functions.
  : Usage: $FUNCNAME [REGEX]
  : --http://stackoverflow.com/questions/948008/linux-command-to-list-all-available-commands-and-aliases
  : compgen -c will list all the commands you could run.
  : compgen -a will list all the aliases you could run.
  : compgen -b will list all the built-ins you could run.
  : compgen -k will list all the keywords you could run.
  : compgen -A function will list all the functions you could run.
  : compgen -A function -abck will list all the above in one go.
  local filter
  if [[ $# == 1 ]];then
    filter="| egrep -i '$1'"
  fi
  (set -x; eval "compgen -A function -abck ${filter:-}")
}

'_cg' is part of iBASHrc. Output is not sorted.

Example listing all commands, snipped by sed:

$ _cg 2>&1 |sed -ne 2115,2120p
pax
eu-readelf
nano
fusermount
gitk
xxd

Example grep for "pk.*er":

$ _cg 'pk.*er'
+ PATH+=:/usr/local/7Rq/scommands/cur
+ eval 'compgen -A function -abck | egrep -i '\''pk.*er'\'''
++ compgen -A function -abck
++ egrep -i 'pk.*er'
pklogin_finder
pkinit-show-cert-guid

I try to stay in a single vim session, typically open for weeks, so the number of buffers can get out of control. Here are a couple of simple housekeeping custom .vimrc commands that I use all day long:

command Kb :b#|bdel#
command KB :b#|bw!#

where 'b#' switches to the previous buffer, then bdel# deletes the buffer you were in when you ran this 'Kb' command.

Just created; tips for improving gratefully accepted. Thx to 'zapper' for the regex.

function Mfind(...)
  let searchStg=""
  let i = 0
  for stg in a:000
    let searchStg .= i == 0 ? ".*" . stg : "\\&.*" . stg
    let i += 1
  endfor
  exe "g;" . searchStg
endfunction
" ex: :call Mfind("red","blue","white")

Run this command as root:

dd < /dev/sda > /dev/null

This reads all blocks on the entire 'sda' device (i.e. the first hard drive). Only read errors are displayed -- you should have none. Be very careful whenever /dev/sda shows up on the root commandline! A crude test, but very simple. (A hedged variant with a larger block size is sketched just after the man page pointers below.)

Related: http://www.techrepublic.com/blog/linux-and-open-source/using-smartctl-to-get-smart-status-information-on-your-hard-drives/

For unmounted drive partitions:

man badblocks
man e2fsck
man dumpe2fs
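Going back to the dd read test above, a variant with a larger block size (usually faster) and an explicit status check; 'sda' is illustrative, pick your own device:

# read every block of /dev/sda, discarding the data
if dd if=/dev/sda of=/dev/null bs=1M; then
  echo "read test passed"
else
  echo "read errors detected" >&2
fi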
'cd_' is a simple bash function to create, manage, and use a directory of symbolic links that point to your favorite directories. I create a wrapper function with a shorter name to call 'cd_'. 'cd_' is part of iBASHrc.

ex. using my directory shortcut 'zz':

~ $ c zz # Where 'c' is alias for 'cd_'.
/usr/local/7Rq/package/cur/sys-2012.03.25/shar/lib $

cd_() {
  : team function _func_ok2unset_ manages directory shortcuts
  : --------------------------------------------------------------------
  : Synopsis: cd using favorite single word nicknames, or manage
  : related symbolic links
  : --------------------------------------------------------------------
  : $FUNCNAME , "(no args) to list all shortcuts"
  : $FUNCNAME -a SHORTCUTBASENAME, add sym link for \$PWD
  : $FUNCNAME -a REALPATH SHORTCUTBASENAME, add sym link for REALPATH
  : $FUNCNAME -d SHORTCUTBASENAME , delete
  : $FUNCNAME -h , show recently created favorites
  local dirs=~/dirs
  mkdir -p ~/dirs
  local hist=$dirs/hist
  local opt_true=1 OPTIND=1
  local OPT_l= OPT_d= OPT_a= OPT_h
  while getopts lad:h opt_char
  do # save info in an "OPT_*" env var.
    test "$opt_char" != \? && eval OPT_${opt_char}="\"\${OPTARG:-$opt_true}\"" || return 1
  done
  shift $(( $OPTIND -1 ))
  unset opt_true opt_char
  if [[ -z $OPT_l && -z $OPT_d && -z $OPT_a && $# = 1 ]];then
    if [[ -L $dirs/$1 ]] ;then
      cd "$dirs/$1"
      return 0
    elif [[ -f $dirs/$1 ]];then
      # $1 is a script that echos the dest dir.
      cd "$(source "$dirs/$1")"
    else
      echo "$FUNCNAME: [$1] not a shortcut" >&2
      return 1
    fi
  elif [[ -n $OPT_a ]];then
    if [[ $# == 2 ]];then
      (set -x;ln -Tsf "$1" "$dirs/$2") 2>&1 |tee -a $hist
      return ${PIPESTATUS[0]}
    elif [[ $# == 1 ]];then
      (set -x;ln -Tsf "$PWD" "$dirs/$1" ) 2>&1 |tee -a $hist
      return ${PIPESTATUS[0]}
    else
      echo "$FUNCNAME:oops:[$*]" >&2
      return 64
    fi
  elif [[ -n $OPT_d ]];then
    (set -x;rm -f "$dirs/$OPT_d")
    return 0
  elif [[ $OPT_l ]];then
    ls -ld $dirs/{*,.[^.]*}
    return 0
  elif [[ -n $OPT_h ]];then
    ( set -x;tail -4 $hist )
    return 0
  elif [[ $# = 0 ]];then
    ( set -x;cd "$dirs";ls -ld * ) 2>&1 |less
    return 0
  else
    echo $FUNCNAME:internal error >&2
    return 1
  fi
}

The environment for cron jobs is minimal. This is close to the env that cron jobs see:

$ env -i USER=$USER HOME=~ PATH=/usr/bin:/bin /bin/bash -c set
BASH=/bin/bash
BASH_ARGC=()
BASH_ARGV=()
BASH_EXECUTION_STRING=set
BASH_LINENO=()
BASH_SOURCE=()
BASH_VERSINFO=([0]="3" [1]="2" [2]="25" [3]="1" [4]="release" [5]="i386-redhat-linux-gnu")
BASH_VERSION='3.2.25(1)-release'
DIRSTACK=()
EUID=--snip
GROUPS=()
HOME=--snip
HOSTNAME=--snip
HOSTTYPE=i386
IFS=$' \t '
MACHTYPE=i386-redhat-linux-gnu
OPTERR=1
OPTIND=1
OSTYPE=linux-gnu
PATH=/usr/bin:/bin
PPID=3237
PS4='+ '
PWD=/var/home/rodmant/tmp
SHELL=/bin/bash
SHELLOPTS=braceexpand:hashall:interactive-comments
SHLVL=1
TERM=dumb
--snipped USER and UID
_=/bin/bash

This one liner is an example of running a script w/args to see if it will run in a sparse env, like a cron job:

$ env -i USER=$USER HOME=~ PATH=/usr/bin:/bin /bin/bash -c "$_C/argsshow a 'b c'"
_01:a$
_02:b c$

Swap your script and its args into the double quotes above.

A bash function I wrote for ~/.bash_profile to de-dup $PATH. It requires a bash associative array, so it works only in bash 4.x or later.

_deDupPATH() {
  local path=$1
  if [[ ${BASH_VERSION%%.*} < 4 ]];then
    : Requires at least bash 4.x.
    echo "$path"
    return 0
  fi
  local oIFS="$IFS"
  local p nPATH
  declare -A seen
  local started=""
  IFS=:
  for p in $path;do
    if [[ -n $started ]];then
      if [[ -n ${seen["$p"]:-} ]];then
        continue
      else
        nPATH+=:"$p"
      fi
    else
      started=1
      nPATH="$p"
    fi
    seen["$p"]=1
  done
  IFS="$oIFS"
  unset seen
  echo "$nPATH"
}

# ex
$ _deDupPATH a:a:z
a:z
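To put _deDupPATH to use at login, something like the following near the end of ~/.bash_profile should do (a sketch; it assumes the function was defined earlier in the login sequence):

# de-dup the PATH built up by the login sequence
PATH=$(_deDupPATH "$PATH")
export PATH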
New scratch files are created below ~/tmp/_ff/. A symbolic link ~/tmp/ff.txt is made pointing to the current scratchfile. Old scratch files are not deleted (let cron do that). I also have vim functions to call 'ff' for reading and writing.

$ ff --help
ff: Convenience cut and paste tool.
Type, edit, pipe to an auto created, unique scratchfile.

date|ff        date > $scratchfile # ( new $scratchfile ), pathname of $scratchfile shown on STDERR
seq 5|ff -t    seq 5|tee $scratchfile # ( new $scratchfile )
ff -c          cat $scratchfile
ff -w          show pathname of current $scratchfile
ff -C COMMENT  prepend COMMENT to $scratchfile basename
ff -l          less $scratchfile
ff -n          edit new $scratchfile
ff -nE         new $scratchfile, echo pathname
ff -P          windows print (cygwin only)
ff ~/mystuff   cp ~/mystuff $scratchfile # ( new $scratchfile )
ff -e          ed $scratchfile
ff -5          tail -5 $scratchfile
ff +5          head -5 $scratchfile
ff -gc         clipboard to new $scratchfile (cygwin only)
ff -pc         copy $scratchfile to clipboard (cygwin only)
ff -R -- REMOPTS REMARGS   REMOPTS and ARGS are sent to a remote instance of ff
ff -h HOST -- REMOPTS REMARGS
ff -r          use readline; read 1 line from STDIN, write new $scratchfile
ff             cat > $scratchfile # reads STDIN from terminal ( new $scratchfile )
ff >foo        cat $scratchfile > foo

This bash function is part of uqjau.

'rm -rf foo' fails below, due to 'chmod a-x foo/':

$ uname -ro; rpm -qf /bin/rm
2.6.18-348.6.1.el5 GNU/Linux
coreutils-5.97-34.el5_8.1
$ id -u;mkdir foo;chmod a-x foo/;ls -logd foo
4187
drw-r--r-- 2 4096 Oct 17 07:46 foo/
$ rm -rf foo; echo $?
rm: cannot chdir from `.' to `foo': Permission denied
1

Pretty sure this is intended behaviour. Last time I was able to check, Solaris did not have this "feature".

Assume you have a corrupt or faulty ~/.bash_profile, which prevents you from logging in. This should position you to login and edit it:

ssh -t localhost bash --norc -i
# or, remotely:
ssh -t johndoe@foobar.com bash --norc -i
# -t forces a tty; --norc, else ~/.bashrc would be sourced; -i for interactive

$ echo hi | ( set -e; <&- read foo ; echo notSeen >&2 )
bash: read: read error: 0: Bad file descriptor

$ :|(TTY=/dev/$(command \ps -o tty= -p $$);exec <$TTY;read -p '> ';echo got: $REPLY)
> hi
got: hi

$ printf "z\000j\000a"|sort -z |od -c
0000000   a  \0   j  \0   z  \0
--snip

$ printf 'hi\000ho\000'|while read -r -d "" foo ;do echo $foo;done
hi
ho

A bash function "_sa" ( as in "sane" ) using vim, that has been working for me:

_sa () {
  : --------------------------------------------------------------------
  : Synopsis: reset terminal, terminal reset, sanity reset.
  : Warning: has hardcoded: 'stty sane erase ^H', and depends on vim
  : --------------------------------------------------------------------
  [[ ${OSTYPE:-} = cygwin ]] || {
    reset
    : in one case reset fixed line-drawing characters snafu
  }
  stty sane erase ^H
  vim +:q # Has side effect of fixing up the terminal.
}

$ printf '\000hi\000' > foo
$ wc -c foo
4 foo
$ echo -n "$(<foo)" | od -c
0000000   h   i
0000002

See bash 'help set'. Not sure where 'set - ARGS' is documented.

# compare:
set -- $ans
# vs
set - $ans
# /1st better when $ans is undefined

example:

$ echo $BASH_VERSION
4.1.10(4)-release
$ set -- -foo
$ echo $1
-foo
$ set -
$ echo $1
-foo
$ set --
$ echo $1/
/

$ (set -e; foo(){ false; echo hi; }; foo ) # Works ok if in simplest form.
$ echo $?
1

# Four "not safe" examples:
$ (set -e; foo(){ false; echo hi; }; if foo; then :;fi; ! foo; foo || : ; foo && : )
hi
hi
hi
hi

Simple statements calling function 'foo' are not a problem, but notice that some compound statements like:

if foo ...
! foo
foo || :
foo && :

effectively disable 'set -e' (the errexit flag) within function 'foo'. Consider avoiding a dependency on 'set -e' in your functions.
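One hedged alternative inside functions is to check each step explicitly, so behavior does not change when the caller wraps the function in 'if', '!', '||' or '&&' ('some_step' is a placeholder for your real command):

foo() {
  # explicit per-step error handling; no reliance on errexit
  some_step || { echo "$FUNCNAME: some_step failed" >&2; return 1; }
  echo hi
}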
Related links:

Despite how negative the above threads are, I think 'set -e' is still useful.

$ type -a _login
_login is a function
_login ()
{
    : --------------------------------------------------------------------;
    : Synopsis: Start new bash login shell using 'env -i ...' which minimizes;
    : environment vars picked up by new shell. 'SSH_' related vars for;
    : example will not be inherited. PATH also is fresh.;
    : --------------------------------------------------------------------;
    env -i USER=$USER HOME=$HOME TERM=$TERM $SHELL --login
}

$ (: $* is immune from set -u; set -eu;set --; echo "$# [$*]")
0 []

'set -u' does not apply to unexecuted code:

$ (set -eu;[[ -z $PATH || -n $bar ]]; echo hi )
-bash: bar: unbound variable
$ (set -eu;[[ -n $PATH || -n $bar ]]; echo hi ) # short circuit op works, no err for nounset :->
--snip
$ ( set -eu; if false;then : $bar;fi;echo hi )
hi
$ ( set -eu; if true;then : $bar;fi;echo hi )
bash: bar: unbound variable
$

Linux 'ps -p PID...' supports multiple pids:

$ command ps -wwH -o pid,ppid,sess,user,tty,state,bsdstart,args -p 1 4
  PID  PPID  SESS USER     TT       S  START COMMAND
    1     0     1 root     ?        S Feb 22 init [3]
    4     1     1 root     ?        S Feb 22 [watchdog/0]

uqjau

SCRIPTS_OVERVIEW synopsis of the best scripts

uqjau.tar.gz: >200 GPL'd bash scripts; perl scripts; bash functions...
home: http://www.nongnu.org/uqjau/README.html#README

README

interactive bash function library; scheme to manage bash login sequence
download: http://trodman.com/pub/iBASHrc.tar.gz

README

A scheme for managing ~/.{bashrc,bash_profile} and other 'rc' files. A suite of over 160 day-to-day sysadmin/general bash functions, 100+ aliases, and several ~/.applicationrc files; for interactive use in Linux and Cygwin. Supports an approach for managing functions, aliases, and env vars on multiple hosts (selectively sharing code). Typically, I update the tar archive (content) at least once per week.

The login sequence is broken up into *many* separate files, that are sourced. Host specific modifications are placed in a sub directory named './noshar', so all else can be shared across hosts. Run the '_lsq' (login sequence) bash function to get an idea of the flow. It's ugly/messy/a bit fragile, but I use it every day, on several hosts. For now I suggest you just look it over for ideas. I have no design docs, but it is reasonably commented.

Although it should be safe to install on your primary (non root) account, it's a major set of changes, so I suggest you create a new account to test it. These startup routines have some dependencies w/my GPL'd bash shell scripts: http://www.nongnu.org/uqjau/README.html#README

Hope you get some idioms/ideas from the code.

BUGS: Some bash functions (and aliases?) are included that will not work without uqjau tools installed; and some of the tools are very provincial. I will try to move them out as time permits. 'set -e' is enabled for most of the login sequence, so any failing command will abort your login; easy to change this, but be warned!

If you like UNIX cp, 'cp -r', 'mkdir -p', and touch; you have to use windows and you want destination files and dirs w/normal windows permissions... Take a look at these bash cygwin wrapper scripts:

_wtouch
_wmkdir
_cp
_cpd

I use them frequently so they're reasonably mature. They're part of http://trodman.com/blog/#uqjau

_cp is available in $_lib/_cp.shinc; it will also be automatically loaded as a shell function if you install http://trodman.com/pub/iBASHrc.tar.gz

The above approach applies to $_lib/_wtouch.shinc, and $_lib/_wmkdir.shinc.
_cpd is a script which will be in your PATH.
<?php

namespace Biz\Thread\Dao;

use Codeages\Biz\Framework\Dao\GeneralDaoInterface;

interface ThreadPostDao extends GeneralDaoInterface
{
    public function deletePostsByThreadId($threadId);

    public function deletePostsByParentId($parentId);
}
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.

.macro NESTED_ENTRY Name, Section, Handler
        LEAF_ENTRY \Name, \Section
        .ifnc \Handler, NoHandler
#if defined(__APPLE__)
        .cfi_personality 0x9b, C_FUNC(\Handler) // 0x9b == DW_EH_PE_indirect | DW_EH_PE_pcrel | DW_EH_PE_sdata4
#else
        .cfi_personality 0x1b, C_FUNC(\Handler) // 0x1b == DW_EH_PE_pcrel | DW_EH_PE_sdata4
#endif
        .endif
.endm

.macro NESTED_END Name, Section
        LEAF_END \Name, \Section
#if defined(__APPLE__)
        .set LOCAL_LABEL(\Name\()_Size), . - C_FUNC(\Name)
        .section __LD,__compact_unwind,regular,debug
        .quad C_FUNC(\Name)
        .long LOCAL_LABEL(\Name\()_Size)
        .long 0x04000000 # DWARF
        .quad 0
        .quad 0
#endif
.endm

.macro PATCH_LABEL Name
        .global C_FUNC(\Name)
C_FUNC(\Name):
.endm

.macro LEAF_ENTRY Name, Section
        .global C_FUNC(\Name)
#if defined(__APPLE__)
        .text
#else
        .type \Name, %function
#endif
C_FUNC(\Name):
        .cfi_startproc
.endm

.macro LEAF_END Name, Section
#if !defined(__APPLE__)
        .size \Name, .-\Name
#endif
        .cfi_endproc
.endm

.macro LEAF_END_MARKED Name, Section
C_FUNC(\Name\()_End):
        .global C_FUNC(\Name\()_End)
        LEAF_END \Name, \Section
.endm

.macro NOP_6_BYTE
        .byte 0x66
        .byte 0x0F
        .byte 0x1F
        .byte 0x44
        .byte 0x00
        .byte 0x00
.endm

.macro NOP_3_BYTE
        nop dword ptr [rax]
.endm

.macro NOP_2_BYTE
        xchg ax, ax
.endm

.macro REPRET
        .byte 0xf3
        .byte 0xc3
.endm

.macro TAILJMP_RAX
        .byte 0x48
        .byte 0xFF
        .byte 0xE0
.endm

.macro PREPARE_EXTERNAL_VAR Name, HelperReg
        mov \HelperReg, [rip + C_FUNC(\Name)@GOTPCREL]
.endm

.macro push_nonvol_reg Register
        push \Register
        .cfi_adjust_cfa_offset 8
        .cfi_rel_offset \Register, 0
.endm

.macro pop_nonvol_reg Register
        pop \Register
        .cfi_adjust_cfa_offset -8
        .cfi_restore \Register
.endm

.macro alloc_stack Size
        .att_syntax
        lea -(\Size)(%rsp), %rsp
        .intel_syntax noprefix
        .cfi_adjust_cfa_offset \Size
.endm

.macro free_stack Size
        .att_syntax
        lea \Size(%rsp), %rsp
        .intel_syntax noprefix
        .cfi_adjust_cfa_offset -\Size
.endm

.macro set_cfa_register Reg, Offset
        .cfi_def_cfa_register \Reg
        .cfi_def_cfa_offset \Offset
.endm

.macro save_reg_postrsp Reg, Offset
        __Offset = \Offset
        mov qword ptr [rsp + __Offset], \Reg
        .cfi_rel_offset \Reg, __Offset
.endm

.macro restore_reg Reg, Offset
        __Offset = \Offset
        mov \Reg, [rsp + __Offset]
        .cfi_restore \Reg
.endm

.macro save_xmm128_postrsp Reg, Offset
        __Offset = \Offset
        movdqa xmmword ptr [rsp + __Offset], \Reg
        // NOTE: We cannot use ".cfi_rel_offset \Reg, __Offset" here,
        // the xmm registers are not supported by the libunwind
.endm

.macro restore_xmm128 Reg, ofs
        __Offset = \ofs
        movdqa \Reg, xmmword ptr [rsp + __Offset]
        // NOTE: We cannot use ".cfi_restore \Reg" here,
        // the xmm registers are not supported by the libunwind
.endm

.macro PUSH_CALLEE_SAVED_REGISTERS
        push_register rbp
        push_register rbx
        push_register r15
        push_register r14
        push_register r13
        push_register r12
.endm

.macro POP_CALLEE_SAVED_REGISTERS
        pop_nonvol_reg r12
        pop_nonvol_reg r13
        pop_nonvol_reg r14
        pop_nonvol_reg r15
        pop_nonvol_reg rbx
        pop_nonvol_reg rbp
.endm

.macro push_register Reg
        push \Reg
        .cfi_adjust_cfa_offset 8
.endm

.macro push_eflags
        pushfq
        .cfi_adjust_cfa_offset 8
.endm

.macro push_argument_register Reg
        push_register \Reg
.endm

.macro PUSH_ARGUMENT_REGISTERS
        push_argument_register r9
        push_argument_register r8
        push_argument_register rcx
        push_argument_register rdx
        push_argument_register rsi
        push_argument_register rdi
.endm

.macro pop_register Reg
        pop \Reg
        .cfi_adjust_cfa_offset -8
.endm

.macro pop_eflags
        popfq
        .cfi_adjust_cfa_offset -8
.endm

.macro pop_argument_register Reg
        pop_register \Reg
.endm

.macro POP_ARGUMENT_REGISTERS
        pop_argument_register rdi
        pop_argument_register rsi
        pop_argument_register rdx
        pop_argument_register rcx
        pop_argument_register r8
        pop_argument_register r9
.endm

.macro SAVE_FLOAT_ARGUMENT_REGISTERS ofs
        save_xmm128_postrsp xmm0, \ofs
        save_xmm128_postrsp xmm1, \ofs + 0x10
        save_xmm128_postrsp xmm2, \ofs + 0x20
        save_xmm128_postrsp xmm3, \ofs + 0x30
        save_xmm128_postrsp xmm4, \ofs + 0x40
        save_xmm128_postrsp xmm5, \ofs + 0x50
        save_xmm128_postrsp xmm6, \ofs + 0x60
        save_xmm128_postrsp xmm7, \ofs + 0x70
.endm

.macro RESTORE_FLOAT_ARGUMENT_REGISTERS ofs
        restore_xmm128 xmm0, \ofs
        restore_xmm128 xmm1, \ofs + 0x10
        restore_xmm128 xmm2, \ofs + 0x20
        restore_xmm128 xmm3, \ofs + 0x30
        restore_xmm128 xmm4, \ofs + 0x40
        restore_xmm128 xmm5, \ofs + 0x50
        restore_xmm128 xmm6, \ofs + 0x60
        restore_xmm128 xmm7, \ofs + 0x70
.endm

// Stack layout:
//
// (stack parameters)
// ...
// return address
// CalleeSavedRegisters::rbp
// CalleeSavedRegisters::rbx
// CalleeSavedRegisters::r15
// CalleeSavedRegisters::r14
// CalleeSavedRegisters::r13
// CalleeSavedRegisters::r12
// ArgumentRegisters::r9
// ArgumentRegisters::r8
// ArgumentRegisters::rcx
// ArgumentRegisters::rdx
// ArgumentRegisters::rsi
// ArgumentRegisters::rdi      <- __PWTB_StackAlloc, __PWTB_TransitionBlock
// padding to align xmm save area
// xmm7
// xmm6
// xmm5
// xmm4
// xmm3
// xmm2
// xmm1
// xmm0                        <- __PWTB_FloatArgumentRegisters
// extra locals + padding to qword align
.macro PROLOG_WITH_TRANSITION_BLOCK extraLocals = 0, stackAllocOnEntry = 0, stackAllocSpill1, stackAllocSpill2, stackAllocSpill3
        set_cfa_register rsp, 8

        __PWTB_FloatArgumentRegisters = \extraLocals

        .if ((__PWTB_FloatArgumentRegisters % 16) != 0)
        __PWTB_FloatArgumentRegisters = __PWTB_FloatArgumentRegisters + 8
        .endif

        __PWTB_StackAlloc = __PWTB_FloatArgumentRegisters + 8 * 16 + 8 // 8 floating point registers
        __PWTB_TransitionBlock = __PWTB_StackAlloc

        .if \stackAllocOnEntry >= 4*8
        .error "Max supported stackAllocOnEntry is 3*8"
        .endif

        .if \stackAllocOnEntry > 0
        .cfi_adjust_cfa_offset \stackAllocOnEntry
        .endif

        // PUSH_CALLEE_SAVED_REGISTERS expanded here

        .if \stackAllocOnEntry < 8
        push_nonvol_reg rbp
        mov rbp, rsp
        .endif

        .if \stackAllocOnEntry < 2*8
        push_nonvol_reg rbx
        .endif

        .if \stackAllocOnEntry < 3*8
        push_nonvol_reg r15
        .endif

        push_nonvol_reg r14
        push_nonvol_reg r13
        push_nonvol_reg r12

        // ArgumentRegisters
        PUSH_ARGUMENT_REGISTERS

        .if \stackAllocOnEntry >= 3*8
        mov \stackAllocSpill3, [rsp + 0x48]
        save_reg_postrsp r15, 0x48
        .endif

        .if \stackAllocOnEntry >= 2*8
        mov \stackAllocSpill2, [rsp + 0x50]
        save_reg_postrsp rbx, 0x50
        .endif

        .if \stackAllocOnEntry >= 8
        mov \stackAllocSpill1, [rsp + 0x58]
        save_reg_postrsp rbp, 0x58
        lea rbp, [rsp + 0x58]
        .endif

        alloc_stack __PWTB_StackAlloc
        SAVE_FLOAT_ARGUMENT_REGISTERS __PWTB_FloatArgumentRegisters

        END_PROLOGUE
.endm

.macro EPILOG_WITH_TRANSITION_BLOCK_RETURN
        free_stack __PWTB_StackAlloc
        POP_ARGUMENT_REGISTERS
        POP_CALLEE_SAVED_REGISTERS
        ret
.endm

.macro EPILOG_WITH_TRANSITION_BLOCK_TAILCALL
        RESTORE_FLOAT_ARGUMENT_REGISTERS __PWTB_FloatArgumentRegisters
        free_stack __PWTB_StackAlloc
        POP_ARGUMENT_REGISTERS
        POP_CALLEE_SAVED_REGISTERS
.endm

.macro RESET_FRAME_WITH_RBP
        mov rsp, rbp
        set_cfa_register rsp, 16
        pop_nonvol_reg rbp
        .cfi_same_value rbp
.endm
Titanic Belfast: Putting Northern Ireland's capital back on the map

Titanic Belfast has already been nicknamed The Iceberg. A neck-craning 125ft tall, resembling the immense hulls of four ocean-liners and clad in glistening silver shards, it's not difficult to see why. This 150,000 sq ft exhibition space, which cost almost £100million to build, opened its doors on March 31, marking the centenary of Titanic's completion. The world's biggest Titanic attraction, she rises from the dockside where her namesake was created, in what was once the busiest shipyard in the world.

Standing outside on the redundant slipway, it's hard to imagine the glory days of the 1900s, when Belfast was a thriving international hub for the linen, tobacco and ropemaking industries. But walk through the exhibition's glass-roofed entrance hall, enter Gallery One and you're back in boomtime Belfast, striding to work alongside flat-capped yard workers, swerving at the clatter of horses' hooves on the cobbles and jumping at the whistle of steam behind you. It's a sensory experience that encourages visitors to empathise with the people of the era. You can really feel the optimism that filled Belfast's streets back then.

In Gallery Two, a lift shoots 70ft up a reconstruction of one of the huge pillars of the giant Arrol Gantry, for years a Belfast landmark, as if to start a day's work, hammering iron rivets into Titanic's inch-thick steel plates. Across nine galleries altogether, visitors are guided on a journey through the ship's lifespan, from her conception and construction to her resting place, 13,000ft deep on the Atlantic seabed.

Historic journey: A postcard featuring the legendary liner

'This is not a traditional museum,' says local tour guide Dee Morgan. 'You won't find any old spoons and dusty suitcases here.' And she is right, I realise, as I dance around on the interactive floor panels, ride a steel cart through the giant gantry and pilot a submersible craft in search of underwater artefacts - experiences that bring Titanic back to life through 3D film and talking holograms, rather than rusty relics.

In Gallery Seven, icy air fills your nostrils as you look down to find watery ripples licking at your shoes and hear the fervent clicking of Morse Code. But rather than dramatise the sinking, this exhibition is about celebrating the craftsmanship behind the boat. 'In Belfast, everyone's granddaddy built the Titanic,' Dee explains. 'We're proud to be reviving our heritage.'

Every so often, the exhibition reconnects with the landscape outside - the dry dock beneath, the original drawing offices, the River Lagan where the Titanic first set sail - reminding visitors that this building could not have been placed anywhere else in the world. Until recently, 'Titanic' was a dirty word in Belfast - the city was eager to erase its connection with the greatest maritime disaster in history. But this innovative exhibition is about bringing the Titanic back to the Belfast of today, using the epic story of her creation to put Northern Ireland's capital back on the map.

Getting there

Titanic Belfast is open from 9am to 7pm, Monday to Saturday; 10am to 5pm Sundays; and 10am to 5pm every day between October and March. Entrance tickets include all interactive experiences and cost £13.50 for adults and £6.75 for children up to the age of 16. Entrance is free for under-fives. For further information and tickets, visit titanicbelfast.com.
Should I prefer sleep, or cut it short to exercise? That's one heck of a choice. You want to gain weight naturally, and you don't want to break your resolutions, but then sleep spoils your spirit every morning. Most of us are lost in this dilemma, repeatedly breaking our commitment to exercise to embrace sleep and then cursing ourselves for not leaving the bed.

Do you think sleep is not that important for gaining weight? Think again. Sound sleep is as important as exercise, though the balance differs from person to person. While some need to give a couple more hours to sleep, others should dedicate some of their sleep time to exercise. What to do to find peace? We also wavered between these two ends. Then we found an explanation that settles on an equilibrium point for what to choose and what not. Take a look:

Take no decision at all:

Yes, that's the answer to this. Take no decision at all. Don't let your exercise regimen impact your sleep, and don't let your sleep override your exercise time. If you need expert opinions to back this statement, here they are. Dr Charles Czeisler from Harvard Medical School says, "That's a terrible choice," while Dr Edward Laskowski of Mayo Clinic told Time, "I couldn't choose between the two." Dr Hashmi from Hashmi Pharmacy said in an interaction, "Sleep is as important for the body as exercise. I can't think of compromising either of them."

To bring clarity, here are the pros and cons of both aspects, which should help you figure out where you stand.

When you sleep enough (and skip exercise):

Positive: Better workout chances next time - When you get complete sleep, your body tissues recover from the previous day's losses, your energy is restored, and harmony settles in your mind. This prepares the body for a better workout next time. Thanks to complete relaxation and restoration, you work more efficiently.

Negative: You may miss out on some basic health benefits - Excess sleep cuts into the time you could invest in exercise. A 30-minute workout improves your cardiovascular health, and early morning is a good time to fit one in.

Positive: You give yourself a recovery period - Sometimes your mind and body need only rest. Not delicious food, not icy water, not an entertaining movie. Just sleep. And this sleep period may extend into the morning too. What should one do? Simply give your body its needed recovery time.

Negative: Sleeping more and sleeping less are equally harmful - Whether you sleep for 5 hours or lie in bed for 10, both damage your body and nervous system. To achieve optimum efficiency, the right balance is very important. Too much sleep or too little sleep both pose risks to heart health. 7-9 hours of sleep is sufficient for a healthy body.

When you exercise more (and sleep less):

Positive: You will get better sleep - If you work out, your chances of getting better sleep that night increase. Many studies confirm that people who suffer from insomnia or sleeping disorders should do a daily workout to find comfort at night.

Negative: You will feel hungrier - If you are not trying to gain weight, this might affect you negatively. Skipping sleep regularly signals your brain to slow calorie consumption, increases fat storage, and intensifies the urge to eat. This can undermine your efforts to stay fit.

Positive: Your body will strengthen - The more you exercise, the more you gain in terms of strength and stability.
Mental and physical limits can be stretched and maintained if a person works out every day.

Negative: You could often face dull moments - Just like the mind, the body follows circadian rhythms. Your body may get an awkward sleepy feeling at its usual sleep time, simply out of habit, even after you have left the bed.

After weighing the pros and cons above, one can see that there is no fixed rule here. Both things are important for the body. The question is how to manage both, because most of us keep the alarm on snooze for another 30-40 minutes every day, while fitness freaks are ready to wake up at 5 am even after a late-night dinner with friends.

You can reschedule your routine. 7-9 hours of sleep is a must for a human being within a 24-hour day, and a 30-45 minute workout is also necessary within those same 24 hours. So plan accordingly: go to bed early enough to complete your sleep and still spare 30-45 minutes in the morning for exercise. It takes just that much effort - little tweaks in your daily schedule.

So what is the best combination to gain weight naturally? To gain weight properly, it is important to keep a balance between the two. While good sleep repairs body tissues and relaxes the nervous system, a good workout helps in toning the muscles, healthy fat accumulation, and proper body shaping.

A supplement to help you all the way:

Though we don't recommend the weight-gain protein supplements available in the market, this particular natural supplement is different enough to mention. While most supplements emphasize quick weight gain, Vetoll XL works to increase the body's capability, correct digestive function, and regulate energy circulation for better body buildup. It does not add bulk to the body but supports the body in gaining weight naturally.
To figure out how dinosaurs walked, you could build an elaborate computer model... or you could attach a fake tail to a chicken's butt. Scientists from the University of Chile, the University of Chicago and the University of Illinois at Chicago opted for the latter approach in their research, as you can see here:

Modern birds descend from a group of dinosaurs called theropods, a vast suborder that includes both the 40-foot long, 7-ton Tyrannosaurus rex and the diminutive Anchiornis huxleyi, which was little more than a foot long and weighed in at just under 4 ounces. It might be possible to glean some knowledge about the way theropod dinosaurs moved by closely studying their descendants. But, while chickens and dinosaurs have similar feet and feathers, there's one key component missing in poultry -- the long, heavy tail. So Chilean researcher Bruno Grossi and colleagues decided to fake it.

In the team's experiments, four experimental chickens were raised from birth wearing an artificial tail made from a wooden stick, attached to the chicken's behind with modeling clay. The researchers replaced the tail every few days as the chicken grew, keeping the tail at 15 percent of the bird's weight -- a proportion they think is probably similar to the weight of a theropod dinosaur's tail. They shared their observations on Wednesday in the journal PLOS ONE.

They found that the chickens raised with the tails walked differently and stood with a slightly altered posture compared with the control birds. The prosthetic-tailed chickens stood with their femurs (the big uppermost leg bones) held in a more vertically-oriented position, and moved their femurs and knees differently while walking:

Photo: Grossi et al/PLOS ONE

"These results indicate a shift from the standard bird, knee-driven bipedal locomotion to a more hip-driven locomotion, typical of crocodilians... mammals, and hypothetically, bipedal non-avian dinosaurs," the authors wrote.

To ensure that it wasn't just the extra weight of the tail causing the changes, the researchers also had another control group of chickens raised wearing coats with lead weights held close to the chicken's center of mass. When they crunched the numbers, the control-weight group's posture and walking gait were pretty close to those of the control chickens that were raised without any lead coats or tails.

"Our experimental approach, although not perfect, was effective in displacing the [chickens' center of mass] and recreating locomotor patterns expected in non-avian theropods," the authors wrote. "Thus, we expect that careful phenotypic manipulation of extant birds can open new avenues of experimental investigation into unexplored facets of dinosaur locomotor mechanics and energetics, providing a more nuanced understanding of the relationship between form and function in dinosaur evolution."

SOURCE: Grossi et al. "Walking Like Dinosaurs: Chickens with Artificial Tails Provide Clues about Non-Avian Theropod Locomotion." PLOS ONE, published online 5 February 2014.
Binary Robot 365 Review

Is the Binary Robot 365 system a scam? We would like to state categorically that Binary Robot 365 is not a scam. Here is our 100% honest Binary Robot 365 review. Read my detailed Binary Robot 365 review before joining this auto trading system.

The use of robots in auto trading is a trend that is already causing ripples in the binary options world. The ability of robots to trade round the clock is becoming increasingly popular because it widens exposure to financial markets and hence boosts profitability. With technological evolution gathering pace, the number of binary options robots has taken a sharp upward curve. So are you conversant with binary options robots? Ever heard of Binary Robot 365?

Binary Robot 365 Review: Is Binary Robot 365 Scam or Legit?

This Binary Robot 365 system is just one of the newly released automated investment programs online. Despite being new, it has managed to prove its legitimacy among a huge number of users. The robot is not just compatible with a good number of legitimate and reliable brokers; it even gives you the option to choose which one you want to make trades with.

One of the Binary Robot 365 robot's main characteristics is that it allows you to use a variety of useful trading tools. It is even possible to adjust risk levels, and there is a stop-loss feature which helps protect your deposits and investments. In other words, it is very clear that Binary Robot 365 is NOT a scam. The majority of the Binary Robot 365 reviews you can find online seem to emphasize serious negatives without explaining their claims in detail. Also, if you take a deeper look at these reviews, you'll find out that they're actually promoting other binary robots which seem rather doubtful.

A Brief Overview of Binary Robot 365 - Pros & Cons Stated!

Established just recently in 2016, the Binary Robot 365 website has been able to attract thousands of clients by providing risk-free investment opportunities, a good level of customization options, and a number of useful features. It is an automated trading tool that allows you to set up your preferred settings and let it do the rest. You have the freedom to set your trading amount, the type of assets you want to trade, time of expiration, and the number of investments you want to make.

The BinaryRobot365.com website also allows you to choose one of three money management strategies, namely Martingale, Fibonacci, and Classic. Aside from autopilot trading, the system also provides algorithmic trading on manual investments, allowing you to switch anytime depending on your preferences. Though this might sound too complicated for a complete beginner, the Binary Robot 365 program is actually a really user-friendly system. It's designed to allow both newbie and advanced traders to use the system with ease.

How the Binary Robot 365 System Works: Easy Steps Discussed!
The Binary Robot 365 app makes use of advanced algorithms along with built-in trading strategies to perform auto trading. The system has been tested and massively optimized by its developers via back testing and even real-time trading. Due to its user-friendly interface, you can use it easily even as a beginner. Not to mention, it even works well on smaller devices through their mobile platform. Aside from automated trading, you can also use the Binary Robot 365 system for creating your own trading strategies. It comes with different technical indicators which you can use for developing your own strategies, such as TREND, RSI, CCI, STOCH, MACD, and WILLIAMS. Additionally, you can minimize your risks with the help of some advanced parameters like assets to trade, expiry time, and more. However, you should keep in mind that the Binary Robot 365 system doesn’t choose the risk parameters automatically. Instead, you will have to choose the parameters manually. Guide to Using the Binary Robot 365 Software The Binary Robot 365 app works together with a number of EU-regulated brokers such as CherryTrade, Tradorax, StockPair, Banc de Binary, and more. But before you can even go trading, you need to create your account first to start using the auto trading service. Once you sign up on the Binary Robot 365 website, you have to fund your trading account to start auto trading. All that follows is choosing which filters you want to apply. You can choose which trading system, indicators, assets, and expiry time to apply in your trades. Afterward, you select the auto trading option to allow the Binary Robot 365 software to automatically place your trades depending on the signals created by your chosen filters. Some of the filters you should take note of are the following: selecting your preferred assets to trade; selecting which technical indicators to apply; selecting which trading system to use; and many others. You also need to enter the amount you want to invest for every trade and choose whether to perform the trading process automatically or manually. Once you have chosen automated trading, it will be the job of the Binary Robot 365 platform to take care of the rest of the trading activity depending on your chosen filters. The Binary Robot 365 app is developed in such a way that both newbie and advanced traders can benefit from its use. Here are a few of the advantages you can enjoy when using the software: High-Level Customization – the Binary Robot 365 system comes with a high level of customization which both newbie and veteran traders can take advantage of. You are able to choose which risk parameters to set, or even create your own binary options trading strategies with the help of the seven technical indicators offered by the robot, along with the three advanced money management strategies that were mentioned earlier. You are also guided to the best configuration settings you need to perform a trade. The Binary Robot 365 website offers you a lot of control, as many features of the system can be customized to suit your preferences. Proven and Tested – the Binary Robot 365 program has been known for its 90% win ratio, which its users find really advantageous. It makes use of advanced binary options trading strategies and algorithms for performing auto trades.
Moreover, the software has undergone extensive performance tests via back-testing and live trading environments by its developers to make sure it is properly optimized before it goes to market. Risk Management Features – the Binary Robot 365 software comes with advanced risk management parameters to minimize your risks while maximizing your profits. Some of the parameters you can choose include the amount per trade, which assets you want to trade, and more. How to Fund Your Binary Robot 365 Account To fund your account, follow the guidelines below. Is Binary Robot 365 Free? Any Costs Involved? The Binary Robot 365 app is absolutely free; no monthly costs or upfront fees will be charged once you register for this software. You just sign up on the official website and you will be requested to join one of the associated brokers. Once you fund your account you will be eligible to initiate trades without any deduction being made. This has been one cause for concern among novices, but the software is offered for free to increase the number of successful traders, who will eventually refer other traders to join the associated brokers. In this way commission is generated from the partnered brokers, exempting the trader from charges and putting the cost on the recommended brokers. What Is the Best Setting for the BinaryRobot365 Software? The Binary Robot 365 app uses trading strategies based on frequently used indicators, including TREND, RSI, CCI, STOCH, MACD, and WILLIAMS. The software uses three money management strategies: Martingale, Fibonacci, and Classic (illustrated in the sketch at the end of this section). CLASSIC The robot will place every trade with the fixed amount set by the trader. This strategy is good for traders with a low-risk appetite; the robot will keep placing trades with the same amount unless it is adjusted. MARTINGALE Under this strategy, the Binary Robot 365 system will double the trade amount after every lost trade. This is a high-risk strategy, but it brings high returns, and it gives traders the chance to recoup all the losses from the previous trades. FIBONACCI This strategy is like the opposite of the Martingale management strategy. Under this strategy, the robot will double the trade amount after every trade won. This is the most popular strategy and is preferred by a good number of binary traders across the globe. It is a greedy strategy: the idea is that you win a trade when market conditions are favorable, so the probability of winning the subsequent trade is high. The Binary Robot 365 app is based on these advanced strategies since they deliver profitable trading opportunities. You can view the latest results of this trading robot on the website. Assets & Indexes Available with Binary Robot 365 The Binary Robot 365 software gives traders the opportunity to trade EURGBP, AUDUSD, EURJPY, GBPUSD, EURUSD, GBPJPY, NZDUSD, USDJPY, and USDCHF. However, only EURUSD, USDJPY, and AUDUSD options are available for 24-hour trading. This robot has legal status and offers services as a licensed company. The company has entrusted regulated brokers with the responsibility of guiding traders and has expressed total confidence in them. The company is also out to serve its clients professionally, and any questions can be directed to the management or the support team.
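To make the three money management strategies concrete, here is a minimal sketch of the stake progressions as described above. This is an illustration only, not vendor code from BinaryRobot365; the function name, base stake, and outcomes are hypothetical, and the "Fibonacci" branch follows this review's description (double after a win) rather than a true Fibonacci sequence.

# Minimal sketch of the three stake-sizing rules described above.
# Illustration only, not vendor code; all names and numbers are hypothetical.

def next_stake(strategy, base, current, won):
    """Return the next trade amount given the outcome of the last trade."""
    if strategy == "classic":
        return base  # fixed stake on every trade
    if strategy == "martingale":
        return base if won else current * 2  # double after a loss, reset after a win
    if strategy == "fibonacci":
        # As described in the review: double after a win, reset after a loss.
        return current * 2 if won else base
    raise ValueError(f"unknown strategy: {strategy}")

# Example: a losing streak under Martingale escalates the stake geometrically.
stake = 10
for won in [False, False, False, True]:
    print(stake)  # prints 10, 20, 40, 80
    stake = next_stake("martingale", base=10, current=stake, won=won)

The example also shows why Martingale is labeled high-risk: after three straight losses the stake has already grown from 10 to 80 units.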
More on Binary Robot 365 Customer Support Binary Robot 365 offers some of the best customer support you can find in an online trading community. They are available 24/7 and will address any concerns you have no matter where you’re from. Moreover, they even have a Live Chat feature to answer your questions instantly. Conclusion: Binary Robot 365 is Not a Scam The Binary Robot 365 system is one of the best binary trading robots available in the modern markets. Its partnership with licensed and regulated brokers is geared towards helping traders achieve profits in the binary options world. We have reviewed the Binary Robot 365 website’s results page, and its performance record appears genuine and scam free. There is a lot of fuss about this software, and this has confused many, especially beginners. But one thing is for sure: the Binary Robot 365 software is designed with every trader in mind, including beginners. No experience is required to make profits with this software: open an account, fund it, and let the robot work its magic. Jessica Glenza Trader for more than 10 years and author of several blogs. After the success of my Binary Options training lessons, I decided to offer my services on a worldwide basis. That is how this blog was born. I hope you enjoy it and share it. Thank You.
Q: Azure Function POST interpreted as GET when using http rather than https I have an Azure Functions .NET Core 3.1 endpoint to which I am sending a POST command from Postman. When I send a request to my endpoint using https (SSL), everything is normal. When I hit the same endpoint using http, the Method of the incoming HttpRequest is "GET" rather than "POST". I have nothing fancy in the host.json file and nothing else unusual about this function app other than the fact that my function can take either GET or POST methods. Here is my host.json: { "version": "2.0" } And my function: [FunctionName("GraphQL")] public IActionResult Get ( [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = "graphql")] HttpRequest req, ILogger log ) { // ... req.Method == "GET" here over plain http, even when sending a POST } A: Your request sent via http is most likely receiving a 302 redirect response. When a client follows a 302 redirect, it re-issues the request as a GET rather than a POST, so your function sees a GET. In your Azure Function App settings, under Custom Domains, try setting HTTPS Only to true to see if it fixes the issue.
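To confirm that a redirect really is the culprit, you can probe the http endpoint without following redirects. Below is a minimal sketch using Python's requests library; the URL is a placeholder for your function app, and it assumes the behavior described above (the platform answering http with a 302).

# Probe the http endpoint to see the raw first response.
# The URL is a hypothetical placeholder; requires `pip install requests`.
import requests

url = "http://myfunctionapp.azurewebsites.net/api/graphql"  # placeholder

resp = requests.post(url, json={"query": "{ ping }"}, allow_redirects=False)
print(resp.status_code)              # a 302 here would confirm the redirect
print(resp.headers.get("Location"))  # typically the https:// version of the URL

# For comparison: requests, like browsers, re-issues the request as a GET
# when it follows a 302 -- which is exactly what the function is seeing.
resp2 = requests.post(url, json={"query": "{ ping }"})
print(resp2.request.method)          # "GET" after following a 302

If the first call prints 302 with an https Location header, the POST body is being dropped when the client re-issues the request as a GET, which matches the behavior you are seeing.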
Rodent models for ovarian cancer research. Animal models that are biologically and clinically relevant are essential for investigating the pathophysiologic progression of disease and for developing diagnostic or therapeutic strategies. Several rodent models that vary in their methods of induction allow appropriate in vivo evaluation of ovarian cancer. The types of rodent models discussed include chemically (nonhormonal and hormonal) induced, genetic (knockout and transgenic), xenograft, and syngeneic. A summary of the available rodent models is provided, with a discussion of the advantages and disadvantages of each. Optimization and application of these rodent models to future research may benefit the detection and treatment of ovarian cancer.
GREENVILLE, S.C. — SmartTruck has come out with a new TopKit fairing system for dry van and reefer trailers, which mounts to the back of the trailer and provides fuel savings of 5.5%, according to SmartWay evaluations. The company said…
1. Field of the Invention The present invention relates to a VCA-based lens driving device, and more particularly, to a VCA-based lens driving device that enables auto-focusing (AF) and optical image stabilization (OIS). 2. Description of the Prior Art Optical image stabilization (OIS) with a common magnet scheme has been developed as an anti-shake solution for conventional lens driving devices. Driving two sets of coils, e.g. the auto-focusing (AF) coil and the OIS coils, with a common magnet set to adjust the lens assembly makes the lens driving device compact, and this approach is thus becoming widely adopted for enhancing image quality. Nevertheless, such a conventional scheme has some disadvantages. For example, in the known lens driving device, the OIS coils formed with the coil plate are typically different in length and/or are asymmetric. Such difference or asymmetry leads to a yawing effect, which makes the lens assembly and the magnet rotate unintentionally. This rotation prevents the displacement sensor from detecting the displacement correctly, such that the lens driving device cannot perform auto-focusing and image stabilization. Besides, in order to realize the AF and OIS functions with greater precision, the magnetic sensitivity of the OIS coil structure needs to be further improved.
Plasminogen activator inhibitor-1 and prognosis in primary breast cancer. The aims were to evaluate the relationship between plasminogen activator inhibitor-1 (PAI-1) and the metastatic potential of primary breast cancer, and to compare the prognostic impact of PAI-1 in multivariate analysis with those of conventional prognostic factors, including steroid-hormone receptors, and those of urokinase plasminogen activator (uPA), pS2-protein (PS2), and cathepsin D. Cell biologic prognostic factors were analyzed in 657 cytosols routinely prepared from frozen-tissue biopsies that were submitted to our laboratory for the assessment of steroid-hormone receptor status. The median duration of follow-up in patients still alive at the time of analysis was 48 months. Estrogen receptor (ER) and progesterone receptor (PgR) status were assessed by radioligand binding assay; PS2 and cathepsin D by radiometric immunoassay; and uPA and PAI-1 by enzyme-linked immunosorbent assay (ELISA). PAI-1 levels were found to be strongly positively correlated with the rates of relapse (P < .0001) and death (P < .001). Relating the levels of PAI-1 to those of other cytosolic prognostic factors, we found a positive association with the metastasis-related proteases uPA (P < .0001) and cathepsin D (P < .0001). On the other hand, PAI-1 levels were found to be negatively correlated with ER (P < .005) and PgR (P < .001), and with the estrogen-regulated pS2-protein (P < .001), proteins associated with a favorable prognosis. In multivariate regression analysis for 5-year relapse-free survival, using an optimized cutoff point for discrimination between PAI-1-positive and -negative, independent predictors of the rate of relapse were found to be PAI-1 (P < .0001) and uPA (P = .01) among the cytosolic parameters, and tumor size, lymph node status, and premenopausal age among the clinical parameters. In multivariate analysis in patients with node-negative disease, only PAI-1 (P < .001) and tumor size (P = .03) were positively, and premenopausal age negatively (P < .001), associated with the rate of relapse. In patients with node-positive disease, PAI-1 (P < .001), uPA (P = .02), tumor size (P < .001), and the number of positive lymph nodes (P < .001) were all positively associated with the rate of relapse. We conclude that the PAI-1 level measured in routinely prepared cytosols is an important parameter for predicting metastatic potential in both node-negative and node-positive human primary breast cancer.
Pigeonholing Pigeonholing is a process that attempts to classify disparate entities into a limited number of categories (usually, mutually exclusive ones). The term usually carries connotations of criticism, implying that the classification scheme inadequately reflects the entities being sorted, or that it is based on stereotypes. When considering various classification schemes, one must be aware of the following pitfalls: Using categories that are poorly defined (e.g., because they are subjective). Entities may be suited to more than one category. Example: rhubarb is both 'poisonous' and 'edible'. Entities may not fit into any available category. Example: asking somebody from Washington, DC which state they live in. Entities may change over time, so they no longer fit the category in which they have been placed. Example: certain species of fish may change from male to female during their life. Discretizing properties that would be better viewed as a continuum should be done with caution. Example: when sorting people into 'introverted' and 'extroverted', one must keep in mind that most people exhibit both traits to some degree. The term pigeonholing is sometimes used disparagingly in instances where one unnecessarily assigns masculine or feminine qualities to a topic, person or group, as with overgenderization, when the situation could more reasonably have been referred to in a gender-neutral fashion. Another example of pigeonholing in everyday conversation occurs when a person assigns wingism to apolitical or barely political people, without ascertaining what the political stance of the individual actually is. Such a designation is especially erroneous when applied to people who live in places where the right-wing/left-wing dichotomy is not present. See also Archetype Labelling Pigeonhole principle Typecasting (acting)
In this video I show you how to use the VS Code Unity debugger, including how to set code breakpoints, how to use watch expressions, and how to use the debugger console. We also look at how you can install the Unity3D debugger extension through Visual Studio Code, just in case you haven't done so already.
Ballet Ariel infuses new life into an age-old tale. The Sorcerer’s Apprentice is an original and fanciful production flawlessly danced by Ballet Ariel’s professional company. Choreographed by Director Ilena Norton and her creative team, The Sorcerer’s Apprentice will beguile children of all ages, introducing them to the enduring art of ballet in an exciting and delightful afternoon spent with a bumbling apprentice, dancing goblins, and pots and pans spinning out of control. Inspired by the age-old tale, Ballet Ariel performs this original work to Paul Dukas’s score made famous in Walt Disney’s Fantasia. Described as music gone wild, this spirited composition perfectly sets the stage as the little apprentice discovers the dangers that occur when power prevails over wisdom. Excerpts from Princess Aurora’s Wedding in The Sleeping Beauty will further captivate children, both young and old. Little Red Riding Hood and Puss in Boots will be joined by other favorite characters, making for an enchanted interlude. A Little Tea Party with the fair and lovely Princess Aurora and other costumed cast members following the performance is a special treat little ones won’t want to miss.
Hey guys, thanks for stopping by and visiting! I am glad you like this site. Please go to my Facebook and like me there. I look forward to seeing you! Sunday School is a vital part of any ministry. In it, one is able to experience a deeper knowledge of God’s Word. Here at “Word For Life Says,” I want to help you help others. Below you will find resources to help you prepare for your upcoming lessons and my personal summary notes that I use when teaching. May God bless you! “God Made a Covenant with Abram” Genesis 15:1-6, 17-21 PDF Lesson Print Out is now located at the bottom of the lesson. Please scroll down, click and enjoy! Blessings. Fear and uncertainty can act as a barrier when waiting on the revealed promises of God. Find out in today’s lesson how God addressed these issues with Abram when He made a covenant with him. Genesis 15:1 “After these things the word of the LORD came unto Abram in a vision, saying, Fear not, Abram: I am thy shield, and thy exceeding great reward.” “After these things.” Our lesson story picks up where another story left off. Lot, Abram’s nephew, along with his family and his things, had fallen victim to the raids that came against Sodom and Gomorrah by a coalition of opposing kings led by Chedorlaomer, king of Elam (Genesis 14). After Abram gathered his men he fought and recovered “all the goods, and also brought again his brother Lot, and his goods, and the women also, and the people” (Genesis 14:16). Upon his return came his famous encounter with Melchizedek, the king of Salem who was also a priest, who blessed Abram, and to whom Abram gave tithes of all. The king of Sodom, whose army had previously fled and fallen to their demise in the area of the slime pits or into the slime pits themselves, with some of the army running to and hiding in the mountain (Genesis 14:10), now came out and offered Abram all the goods that he had rescued. Abram refused to accept anything from the king. He would depend, in faith, on God to increase him, and not the king. After all this occurred, God came to Abram and spoke. And the first three words He speaks to him in a vision are, “Fear not, Abram.” Why the need for reassurance? Did Lot getting captured by raiding kings and the fight to get him back shake Abram up? Was there a little trepidation or hesitation over actually receiving the promises of God? God had already spoken to him no less than three times before this point regarding his promised blessings and increase (see Genesis 12:2-3, 7; 13:14-17). Was the waiting, along with the contentions faced, causing doubt or an unsure feeling about everything? Fear will cause one to take their eyes off the faithfulness of God and focus more on the problems that try to attack the promises of God. So God encourages Abram, saying, “I am thy shield, and thy exceeding great reward.” Another way of looking at it is as if God is saying, “I am your protector and the fulfiller of the promise. And if you have nothing else, you have Me. I am here for you. There’s no need to fear.” After what was probably not only a fight for Lot’s life but his own, and with fear of retaliation in the air, I imagine these words from God were meant to soothe, encourage, and direct Abram to lean into his faith. He would need to muster that same faith he initially placed in God, the faith which propelled him to leave home and follow God’s leading, trusting completely in God. God is for those who trust in Him (2 Samuel 22:3; Psalm 3:3; 33:20; Proverbs 30:5).
I am “thy exceeding great reward.” Was the refusal of the recovered treasure weighing heavy on his mind? One can only speculate. But God gave no wiggle room in the matter of Abram’s faith, nor ours. He lets him and us know that He is the greatest treasure one can have or hope to obtain. For any payback or compensation one can hope to receive – God is your greatest benefit! Thus, do not fear, Abram. God is here and He is in control of it all. This promise for all that are His comes from the Word: “Be not afraid of sudden fear, neither of the desolation of the wicked, when it cometh. For the LORD shall be thy confidence, and shall keep thy foot from being taken,” (Proverbs 3:25-26). Genesis 15:2-3 “And Abram said, Lord GOD, what wilt thou give me, seeing I go childless, and the steward of my house is this Eliezer of Damascus? And Abram said, Behold, to me thou hast given no seed: and, lo, one born in my house is mine heir.” Abram was rich in cattle, silver, and gold (see Genesis 13:2). Abram was rich in servants. He had at least 318 armed male servants (see Genesis 14:14). Abram was rich in promise and faith. But there was one area of his life where he lacked, the one area that tied directly to the promises God spoke to him. He was “childless.” Abram did not have a direct “heir” from his own body. Taking inventory of his life, his substance, and his future, the only answer he could see with his natural eyes and human thinking was for everything to come through his “steward, Eliezer of Damascus.” The implications of having no biological heir were serious. Emotionally, the desire was there to have a child of their very own, one to love, nurture, and raise. Many consider having a baby of their own one of the greatest joys in life, and the lack of one often leaves a void that yearns to be filled. After all, children are considered to be a “heritage of the LORD” (Psalm 127:3), and it was considered a blessing to have them (see Psalm 128:3 and 144:12). When God told Abram that he personally would be the father of a “great nation” (Genesis 12:2), I imagine he expected that blessing to be manifested by now in his life – yet, currently, he was still without, still feeling the emotional void. Materially, and heritage-wise, the need for an heir was very real. All that Abram had accrued and would gain had to be passed on to someone else when his time came to depart from this earth. The responsibility for everything and everyone now in his ownership, and for what was to come as a result of the promises given, needed a leading successor who would care for it all. Prior to this, Abram was not only promised descendants (Genesis 12:2), but that they would be as great as the “dust of the earth” in number (Genesis 13:16). His seed would be too numerous to count. He was not only given the promise of actual descendants in great number, but they would have a great inheritance of land (see Genesis 12:7; 13:14-17). None of which he could physically see at the moment because, biologically, he was “childless.” The only person he currently had in his reserves, in hopes of carrying on everything he had, was “Eliezer of Damascus.” His only hope, as far as he could see, was “one born in mine house” as the heir, and not one born from his own body. But God was about to shake up what Abram believed and give him something greater to look forward to.
Genesis 15:4-5 “And, behold, the word of the LORD came unto him, saying, This shall not be thine heir; but he that shall come forth out of thine own bowels shall be thine heir. And he brought him forth abroad, and said, Look now toward heaven, and tell the stars, if thou be able to number them: and he said unto him, So shall thy seed be.” “This shall not be thine heir.” Basically, God is saying, “Negatory! That plan is not a go for Me.” How Abram viewed this big plan of God’s coming to pass was different from the way God viewed it. Why is that? Why do we view things so differently than God does? There are a few reasons for that, but ultimately it all goes back to this: God is omniscient, and we are not. He knows all things (1 John 3:20) past, present, and future. Thus, He is able to promise with complete assurance that everything will happen according to His plan. Opposite of that, mankind’s reality is tethered to time, to life, and to the knowledge gained from the experiences he has been exposed to. Since God is not bound by these limitations that man deals with, He is able, on a grander scale, to see beyond it all. God knows it all, and what He knows is far superior to how any one of us thinks a promise should come out. So, to counter Abram’s assumption about whom he believed the “heir” should or could be, God shows him what He has in mind: something so grand that all of history will point back to this moment of promise and realize it was nothing but God that brought it to pass. To the childless Abram God speaks His plan: “He that shall come forth out of thine own bowels shall be thine heir.” This man of well-advanced age was going to produce an “heir” from his own body. The promise God gave Abram originated with Abram, it was going to be procreated by Abram, and it was going to multiply, grow, and be fruitful through Abram. Not his servant. To give Abram another visual representation (much like the dust of the earth analogy mentioned before) of what He had in mind, God “brought him forth abroad, and said, Look now toward heaven, and tell the stars, if thou be able to number them . . .” “Look now toward heaven.” Can I paraphrase and say, “Gain a higher vision for your life”? Abram’s thinking about his future was still consumed with his right-now situation. God wanted him to visualize more. While Abram was trying to figure out how 1 Abram + 1 Sarai was going to = a great nation, God had already done the math and was already on the move to show him a blessing so great that he wouldn’t be able to count it: “tell the stars, if thou be able to number them . . . So shall thy seed be.” God lovingly exhibited the stars before Abram as a display of the greatness of the promise He had for him, and used that demonstration of the heavenly view to center his eyes of faith on what would be, through God’s intervention and blessing, instead of what his present was currently speaking. Genesis 15:6 “And he believed in the LORD; and he counted it to him for righteousness.” “And he believed in the LORD.” Abram’s faith in God was alive and active. He “believed” everything God spoke to him was true and was going to come to pass. “Sometimes in life, I think we tend to downplay the importance of our faith and what we believe. We hear about it so much that I think as Christians we have become desensitized to its power in both our spiritual and our natural lives.
This is not what it should be, since we are told in four verses of the Bible, “The just shall live by his faith,” (Hab. 2:4; Rom. 1:17; Gal. 3:11; Heb. 10:38, KJV). Our faith, our belief, should be the marker for everything in our lives. Not just for salvation, but for every action and prayer. We know all the right words to say and all the proper “Christian” motions to make, but is our faith alive and put into full force action?” (Word For Life Says; I urge you to read the entire article for steps to build your faith and belief in the promises of God.) And Abram didn’t just believe, but he believed “in the LORD;” he believed in God! Not just in the promises He said He would perform, but Abram believed, with an unchanging heart, in God! Where one’s faith is centered lies at the core of all they believe, and for Abram, that faith not only took his feet out of the land of Ur, but caused his heart to be tied in total trust to God. Romans gives us this picture of Abram’s faith: “He staggered not at the promise of God through unbelief; but was strong in faith, giving glory to God, And being fully persuaded that, what he had promised, he was able also to perform,” (Romans 4:20-21). He didn’t see it, but Abram’s heart was totally convinced that God is who He said He is and that He was going to do everything He said He would do. “And he counted it to him for righteousness.” The heart of all true “righteousness” is belief in God. Because he believed; because he had a faith relationship with God; because he placed himself, his life, and his future in total confidence in God, and moved and ordered his life in that way – it is through that obedient faith that he is accepted and received by God, and God credits it to him as righteousness. With the whole of Abram fixed on God as the center of his life, everything else took its rightful place behind Him. His faith in God realigned his thinking, his choices, and his living. God was his filter for everything and every move. Thus God, through grace (yes, grace in the Old Testament), imputed “righteousness” to him (compare Romans 4:3). As one puts money in their bank account and it is added to their credit, so too did God for Abram: through his faith and belief in God, He [God] put it on his [Abram’s] spiritual account and “counted it to him for righteousness.” The verses between this section of our printed text and the next, which are not covered in today’s lesson, hold valuable information for how we get from one part to the other. Verses 7-16 talk more of God’s plan for Abram and his seed after him. In verse 7, God reiterates that He is the same God who called him out of Ur of the Chaldees and that there was a promised land that Abram would inherit and his descendants would possess. And in verse 8 Abram wanted to know more details about how this was going to come to pass. In verses 9-11, Abram was directed to divide certain animals as a covenant offering. They were going to be used in the enacting of this covenant God was making with Abram (more on this later). After this, a deep sleep fell upon Abram (vs. 12) and God spoke to him concerning his seed, their inheritance, their future bondage in Egypt, and their deliverance and possession of the land (vss. 13-16). Picking back up in our printed lesson text, we see the purpose for the divided animals and their use.
Genesis 15:17 “And it came to pass, that, when the sun went down, and it was dark, behold a smoking furnace, and a burning lamp that passed between those pieces.” With the covenant offerings still divided on the ground, Abram saw a vision of what would represent the presence of God, in the form of a “smoking furnace and a burning lamp,” passing between the pieces. God not only told him of the promises but demonstrated through this sign that He was in the midst of it and would work it according to His perfect will. This is God’s covenant with Abram. He is the one who enacted it, and the fulfilling of it would fully rest on Him and His promise. How do we know this? This sort of covenant usually required both parties to the agreement to pass in the midst of the divided animals. But God didn’t have Abram pass through. Only the symbol of His presence strode down the middle of those divided parts, indicating He alone is responsible for this covenant. Not Abram, or, at times, his wayward descendants. This covenant was initiated by God, endorsed by God, and it would be carried out and completed by God according to His holy plan. God’s promises to Abram are sure. What He says goes. Period. That’s it – end of discussion. Genesis 15:18-21 “In the same day the LORD made a covenant with Abram, saying, Unto thy seed have I given this land, from the river of Egypt unto the great river, the river Euphrates: The Kenites, and the Kenizzites, and the Kadmonites, And the Hittites, and the Perizzites, and the Rephaims, And the Amorites, and the Canaanites, and the Girgashites, and the Jebusites.” God then names off a list of many nations. Although they may be many in number and cities, the place where they currently dwell is no longer for them. They may live there right now, temporarily, but God is giving and has given the permanent residence of those lands to Abram and his seed as part of the covenant. “In the same day the Lord made a covenant with Abram, saying, Unto thy seed have I given this land.” We see it beginning to be fulfilled through the conquest of Joshua, beginning at Jericho, and through other events in history; and even in the time of King David (fourteen generations later, see Matthew 1:17), when “David defeated Hadadezer king of Zobah as far as Hamath, as he went to establish his power by the river Euphrates” (1 Chronicles 18:3, NKJV), gaining part of the land noted in the promise above. As nation after nation was listed, the realm of God’s promise grew in the eyes of Abram. It was going to happen. Signed, sealed, and delivered by God for His people. No wonder it’s called the Promised Land, for that’s exactly what it is and that’s exactly what God did. Conclusion: For all the promises of God, wait. For He will surely keep and fulfill everything He has spoken. God’s promises never fail (see Joshua 21:45; 1 Kings 8:56).
Why Not MoCo For Marriott’s New Headquarters? From Bethesda Now - By Dan Reed Marriott’s plans to move out of their headquarters on Fernwood Drive will launch an epic battle between D.C., Maryland and Virginia for the international hotel chain. But as local and state leaders seek to retain Montgomery County’s fourth-largest employer, Marriott’s stated interest in being close to a Metro station makes this fight a little different than previous attempts to woo big companies. For much of the 20th century, big companies favored suburban locations, like the office park where Marriott International sits today. William Whyte studied where 38 Fortune 500 companies leaving New York City relocated to, and found they all moved to areas with the same things: exclusive country clubs, high-performing schools, and room for big, lavish homes. In fact, all of those companies moved their headquarters to within eight miles of the chief executive’s home. That’s how Bethesda, as well as places like Tysons Corner in Northern Virginia, became major job centers: they were already sought-after places to live, and so the heads of large companies chose to locate their offices nearby. Lately, companies are increasingly seeking urban locations: places with shops and restaurants, walkable and bikeable streets, and transit access that attracts young, educated workers. Urban doesn’t necessarily mean the District of Columbia. Many corporate heads still live in Maryland and Virginia (Marriott Executive Vice President Kathleen Matthews lives in Chevy Chase and is mulling a run for the area’s House seat) and Marriott says they’re considering locations throughout the region. Those new criteria should be an advantage for Montgomery County, which has long boasted the kind of high-end suburban neighborhoods where executives might choose to live, as well as walkable, transit-served urban places that their young workers may prefer. In recent years, the county has tried to encourage a nighttime economy in the hopes that attracting young people would draw the companies who want to hire them. There are a number of places in Montgomery County that would suit Marriott’s needs. Downtown Bethesda is a natural choice, as is downtown Silver Spring, both of which have Metro access. But neither has a lot of office construction going on. Plans to make White Flint an urban hub are a long way from fruition, but it has a Metro station. Long-term plans for developments like Pike & Rose, North Bethesda Center and White Flint Mall do include substantial amounts of office space. However, some things haven’t changed. Despite evidence to the contrary, Virginia has long been perceived as more business-friendly than Maryland. In recent years, Virginia beat out Maryland in landing the headquarters of Hilton Hotels, Northrop Grumman, Intelsat and Bechtel, which received tax incentives from Maryland before leaving. And Fairfax County is catching up to Montgomery on urbanism, with plans to remake existing job centers like Tysons Corner and Merrifield as urban districts. The Silver Line has made Tysons, which was already a very attractive place for large companies, even more desirable, as employees from all over the D.C. area can get there without being stuck in Beltway traffic. As a result, Virginia has emerged as an early favorite for Marriott’s new headquarters. A list of seven potential sites includes two in D.C., four in Northern Virginia, and not a single one in Montgomery County.
It may be hard to convince Marriott bigwigs that Montgomery County is, in fact, a better place to do business. But what if Montgomery County leaders made the case for Montgomery County as a better place? In many white-collar fields, a lot of work happens outside the office, whether on the golf course or at the bar. Tysons may have the Silver Line. But unlike Bethesda or Silver Spring, it doesn’t have many places for an evening stroll or a round of drinks with coworkers, at least outside of the mall. This approach might be challenging for new Maryland Gov. Larry Hogan, who campaigned on improving the state’s business climate, but has been ambivalent about public transit and the state’s urban areas. His repeated delays of the Purple Line between Bethesda and New Carrollton, which was ready to break ground this year, might send the wrong message to a corporation that wants to locate near a Metro station. However, it might be the ticket to keeping Marriott in Montgomery County. We may not be able to beat Fairfax County at providing tax incentives, but we’ve got thirteen Metro stations and some great places along them. It’s time we made them part of our business strategy too. Photo via Marriott International Dan Reed is an urban planner who grew up in Montgomery County and remembers eating Gifford’s ice cream before it was on Bethesda Row. He sits on the board of Action Committee for Transit, an organization dedicated to sustainable transportation in Montgomery County. He also writes at Just Up The Pike, a blog about Silver Spring, and Greater Greater Washington, a regional blog about planning.
platform :ios, '9.0'

target 'Survey' do
  # Build pods as dynamic frameworks (required for Swift pods).
  use_frameworks!

  # Use the local development copy of Deli two directories up.
  pod 'Deli', :path => '../../'
end
Di-, tri-, and tetranuclear zinc hydroxamate complexes as structural models for the inhibition of zinc hydrolases by hydroxamic acids. Attempts to produce Zn analogues of the structural model complexes [M2(mu-O2CR)2(O2CR)2(mu-H2O)(tmen)2] (M = Ni, Co, Mn; R = CH3, C(CH3)3, CF3) by the reaction of a series of zinc carboxylates with N,N,N',N'-tetramethylethylenediamine (tmen) resulted in the mononuclear complexes [Zn(OAc)2(tmen)] (1) and [Zn(crot)2(tmen)].(0.5)H2O (2) for R = CH3 and (CH)2CH3, respectively, and the dinuclear complexes [Zn2(mu-piv)2(piv)2(mu-H2O)(tmen)2] (3) and [Zn2(mu-OAc(F))2(OAc(F))2(mu-H2O)(tmen)2] (4) for R = C(CH3)3 and CF3, respectively. In contrast to the analogous imidazole series, i.e., [M2(mu-O2CR)2(O2CR)2(mu-H2O)(Im)4] (M = Ni, Co, Mn; R = CH3, C(CH3)3, CF3), zinc carboxylates react with imidazole to give only the mononuclear complexes [Zn(OAc)2(Im)2] (5), [Zn(crot)2(Im)2].H2O (6), [Zn(piv)2(Im)2].(0.5)H2O (7), and [Zn(OAc(F))2(Im)2] (8). Reaction of 1, 2, and 3 with either acetohydroxamic acid (AHA) or benzohydroxamic acid (BHA) gives the dinuclear complexes [Zn2(O2CR)3(R'A)(tmen)], where R'A = acetohydroxamate (AA) (9, 10, 11) or benzohydroxamate (BA) (13, 14, 15). In these complexes, the zinc atoms are bridged by a single hydroxamate and two carboxylates, with a capping tmen ligand on one zinc and a monodentate carboxylate bonded to the second zinc atom. This composition closely models the observed structure of the active site of the p-iodo-D-phenylalanine hydroxamic acid-inhibited Aeromonas proteolytica aminopeptidase enzyme. In contrast, 4 reacts with AHA to give [Zn2(OAc(F))3(tmen)2(AA)] (12) with an additional tmen ligand so that both Zn atoms are 6-coordinate, whereas reaction with BHA gives the trinuclear complex [Zn3(OAc(F))4(tmen)2(BA)2] (16). Reactions of 3 and 4 with glutarodihydroxamic acid (GluH2A2) produce the tetranuclear complexes [Zn4(piv)6(tmen)4(GluA2)] (18) and [Zn4(OAc(F))6(tmen)4(GluA2)] (19).
Marble has been in use for centuries for furniture table tops and is still popular for accent tables and kitchen and bathroom counter tops. Bakers believe it is the best material for rolling pie dough. Marble top furniture remains cool to the touch regardless of the weather, making it resistant to bacteria. Unfortunately, it sometimes becomes stained, scratched, nicked, or broken. Cleaning and repairing marble top furniture is no more difficult than for other table tops. This article will provide you with cleaning, repair, and periodic maintenance tips to keep your marble tops looking great. Dull marble can be returned to like-new condition by polishing it with a commercial marble polish, which can be purchased anywhere marble is sold. Apply a small amount of marble polish to a cleaning cloth and scrub the surface in small circular motions. Let dry, wipe off the residue, and polish with a paste wax. Stains are more difficult to remove. Most stains can be removed with peroxide, but if it’s an oil stain you will need an absorption compound to absorb the oil. Corn starch works well at removing most oil stains. Apply it to the stained area and let it stand for a while before wiping it off. Scrub the area with a detergent, rinse with clear water, dry, and wax. Rust stains, caused by water, are common on bathroom marble tops. A common commercial rust remover will remove rust stains with little effort. After the stain is removed, wax the surface with a paste wax. Scratches and nicks can’t be completely removed, but they can be diminished by buffing and polishing with a marble polishing compound; then wax with paste wax. Marble is a porous material and should be treated with marble sealer before waxing. To keep marble tables and counter tops in good condition, they should be sealed and waxed periodically. Once a year should be adequate for tables, and more frequently for bathroom and kitchen counter tops. Repairing marble top nicks and scratches by polishing is easy; however, a broken top is more difficult to deal with. A clean break can be repaired. Marble dealers have the facilities and equipment to cement broken pieces back together and restore the surface to its natural appearance. These types of repairs should be left to the professionals. Your marble table and counter tops can retain their beauty for centuries if you give them a little tender loving care. Never discard a marble top unless it is broken beyond repair. Replacement tops are very expensive because they have to be tailor-cut to fit your application.
Background {#Sec1} ========== Mobile elements have been extremely successful at colonizing human genomes. As much as two thirds of human genomic content is either made up of mobile elements or mobile element relics \[[@CR1],[@CR2]\]. The two most significant classes of human mobile elements are long interspersed elements (LINEs) and short interspersed elements (SINEs), and together, these elements account for at least 38% of genomic content \[[@CR2],[@CR3]\]. In humans, the most significant members of the LINE and SINE families are L1 and Alu, respectively. L1 elements have been amplifying in mammalian genomes since before the divergence of placental and marsupial mammals 170 million years ago \[[@CR4]\]. Since L1 became established in primate genomes (\~40 million years ago), there has been a linear evolution of L1 subfamilies, with each newly active L1 subfamily replacing the one that came before \[[@CR5],[@CR6]\]. Today, the predominant active L1Hs family of L1 retrotransposons is responsible for the majority of known L1 retrotransposition events \[[@CR4],[@CR7]-[@CR10]\]. L1 promoters and other functional sequences vital to L1 retrotransposition accumulate disruptive mutations as a function of time. Hence, it is not surprising that studies have shown the elements most likely to be active are those that have most recently inserted and therefore may not have become fixed in the population \[[@CR11]-[@CR13]\]. These polymorphic L1 elements are often capable of significant levels of L1 retrotransposition \[[@CR11],[@CR14]\]. Active full length L1 is a \~6 kb long element that codes for two open reading frames (ORFs): ORF1, which encodes a protein with nucleic acid chaperone and RNA binding properties, and ORF2, which encodes a protein with endonuclease (EN) and reverse transcriptase (RT) activities \[[@CR15],[@CR16]\]. L1 mobilizes through a process called retrotransposition \[[@CR17],[@CR18]\]. Briefly, full length, intact L1 loci are transcribed by RNA polymerase II to generate a bicistronic L1 mRNA encoding two proteins, ORF1p and ORF2p \[[@CR19]\]. The L1 mRNA and L1 proteins form a ribonucleoprotein (L1 RNP) which enters the nucleus, where the ORF2p-encoded endonuclease (EN) and reverse transcriptase (RT) create a cDNA copy of the L1 mRNA in a new genomic location through a process called target primed reverse transcription (TPRT) \[[@CR17],[@CR18]\]. Through retrotransposition, L1 has been able to amplify itself to its current copy number of \~500,000 copies per genome. However, the majority of these L1 elements are non-functional relics due to severe truncation at their 5' ends caused by aborted TPRT events and accumulation of deleterious mutations either over time or as a result of unfaithful RT activity \[[@CR2],[@CR20]-[@CR23]\]. An estimated 10-30% of new L1 insertions are full length with the potential for further retrotransposition \[[@CR11]\]. Negative evolutionary selection leads to depletion of the full-length elements, and there are only about 5000 full-length elements in the human genome \[[@CR2]\]. Most of these are old and mutated, and current estimates of the number of potentially active full length L1s stand at about 80--100 per individual \[[@CR11],[@CR14]\]. Next generation sequencing (NGS) studies have predominantly focused on detecting L1 insertional mutagenesis through detection of the 3' end of L1 elements \[[@CR24]-[@CR27]\].
Through these NGS techniques, various groups have identified the insertional burden of L1 in germline and somatic tissues, including various cancers. Few studies, however, focus on the specific detection of full-length L1 elements in these genomes that could potentially be hot for retrotransposition and responsible for the accumulation of further L1 insertion events. This is an important facet of the retrotranspositional burden in cancers because these full length L1 elements have been shown to be capable of extremely high levels of continued retrotransposition in tumors with newly inserted hot L1 copies continuing to be actively mobile \[[@CR28]-[@CR30]\]. Studies that have looked at full-length L1 elements in the reference build of the human genome have indicated that the bulk of retrotransposition occurs from a handful of hot L1 elements \[[@CR11]\]. Additionally, full length L1 elements that are polymorphic between individuals are significantly more likely to be hot than fixed, reference full length L1s \[[@CR11],[@CR31]\]. Recent reports support the idea that different L1 elements are active in different individuals and that single hot L1 elements can result in incredibly high levels of insertional mutagenesis \[[@CR28],[@CR29]\]. Additionally, hot L1 elements in tumors have recently been shown to mobilize to hundreds of new locations within some tumors, with some of these new insertions also further mobilizing to additional locations \[[@CR30]\]. Despite the extensive mutagenesis caused by hot L1s, few attempts have been made to identify full length L1 elements in human populations that may lead to successive insertion events. In this report, we describe a NGS-based method to identify full length L1 elements in human genomes that we call Sequencing Identification and Mapping of Primed L1 Elements (SIMPLE). Using SIMPLE we have identified 228 polymorphic L1 elements in seven independent individuals. SIMPLE has 94% sensitivity and is robustly able to identify nearly all full length L1 elements in a human genome, making it possible to identify hot L1s capable of causing high degrees of insertional mutagenesis. Results {#Sec2} ======= Description of SIMPLE library generation {#Sec3} ---------------------------------------- Various anchored PCR strategies have been effectively employed to identify mobile element insertions in a whole-genome setting. They all share the use of one primer from within the known sequence of the element, and use different strategies to target a second primer outside of the element. However, there are several significant drawbacks to most of those methods. Those that ligate a linker to a restriction site located randomly outside of the mobile element insertion only create one amplified fragment size from each element. Thus, it is impossible to tell whether duplicate sequences are independent ligations or PCR duplicates. A similar argument can be made for the use of an arbitrary primer to prime outside the mobile element. Furthermore, restriction sites located at different lengths away from the element will amplify with differential efficiencies, as will different arbitrary primers. Thus, these methods require pooling of multiple experiments using different restriction enzymes or primers to come close to saturating the potential insertions. In addition, even those protocols that utilize randomly sheared DNA to apply anchors still use a major ligation step in their preparation. 
These ligation steps, such as those used in the preparation of Illumina libraries, lead to low levels of chimeric fragment ligation between genomic fragments that can confound data analysis \[[@CR32]\]. Sequencing Identification and Mapping of Primed L1 Elements (SIMPLE) is a unique L1-detection method based on the principles of random shearing combined with t-linker ligation-mediated PCR that takes advantage of the massive parallelization offered by high throughput NGS technology \[[@CR33]\]. A primer specific to the 5' UTR of full length L1 elements is used in a single round of primer extension in the 5' direction of the L1 element, which allows for priming on all of the \~5000 full length L1 elements in the human genome and the generation of a single adenine overhang at the end of those extended fragments (Figure [1](#Fig1){ref-type="fig"}A). The vast majority of the genome will remain single-stranded so that little besides the extended mobile element fragments will have the A overhang necessary for duplex linker ligation via a 3' thymidine overhang. Following linker ligation, PCR amplification using L1 5' UTR and linker-specific primers allows for specific amplification of only those DNA fragments anchored by an L1 extension event (Figure [1](#Fig1){ref-type="fig"}B). Additional Illumina adapter sequences are added in a subsequent PCR reaction and the SIMPLE library is size fractionated on an agarose gel before final library amplification (Figure [1](#Fig1){ref-type="fig"}C). SIMPLE libraries can be directly loaded onto the next generation sequencing platform for sequence analysis. Thus, each sample is generated from a randomly sheared fragmentation, making it easy to differentiate PCR duplicates from authentic detection of the same element multiple times. In addition, we reasoned that the ligation step used in SIMPLE would be less likely to create chimeras than a traditional linker ligation because of the nature of the 3' overhangs generated by primer extension and that most of the fragments will remain single-stranded.Figure 1**Description of SIMPLE. (A)** Randomly sheared genomic DNA (black line) is subjected to a single round of primer extension using a primer specific to the L1 5' UTR (blue arrow). Only DNA fragments containing a full length L1 element (large red arrow) will undergo linear extension from the L1-specific primer. DNA fragments containing no L1 sequence or truncated L1 copies (bottom left and right respectively) will not be extended. Following primer extension, a duplex "t-linker" (orange) is ligated to L1-primer extended ends. **(B)** Following ligation of duplex linkers, adapter sequences needed for Illumina sequencing (purple bars) are added by nested PCR to only those fragments anchored by an L1 extension event. **(C)** Following gel based size fractionation, SIMPLE libraries are run on an agarose gel and analyzed by Agilent BioAnalyzer to confirm size and library quality. Validation of SIMPLE {#Sec4} -------------------- To assess the ability of SIMPLE to detect full length L1 elements from the human genome, we sought to determine the efficiency of SIMPLE at pulling out known, reference full length L1 elements that are fixed in the human population. These L1 elements represent evolutionarily established L1 elements that inserted in a primate genome before the split of *Homo sapiens*, and thus represent L1 elements universally present in all human genomes at a diploid level. 
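As an illustration of the duplicate-calling logic described above, the sketch below (ours, not code from the study) groups mapped reads by the locus of the L1 anchor and counts distinct linker (shear) positions: PCR duplicates share a shear point, while independent detections of the same element do not. All loci and coordinates are hypothetical.

# Illustrative sketch (not from the study): count independent SIMPLE
# detections of an L1 element. Random shearing means independent events
# carry different linker (shear) coordinates, while PCR duplicates of the
# same ligation share one. All loci and coordinates are hypothetical.
from collections import defaultdict

# Mapped reads as (L1 anchor locus, linker/shear coordinate) pairs.
reads = [
    ("chr1:1500000", 1499100),
    ("chr1:1500000", 1499100),  # PCR duplicate: same shear point
    ("chr1:1500000", 1498750),  # independent detection of the same element
    ("chr5:9000000", 8999500),
]

events = defaultdict(set)
for locus, shear_pos in reads:
    events[locus].add(shear_pos)  # distinct shear points = distinct events

for locus, shear_points in sorted(events.items()):
    print(locus, "independent SIMPLE reads:", len(shear_points))
# chr1:1500000 -> 2 independent reads; chr5:9000000 -> 1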
We employed SIMPLE to determine how many known, fixed full-length L1 elements on three randomly selected chromosomes we could detect. We limited our initial analysis to three chromosomes to allow a more exhaustive manual analysis of any variants. Of the 543 fixed full-length L1 elements on three randomly selected chromosomes, SIMPLE successfully detected the vast majority with an average read depth of ten independent (having a different linker location) reads per element. Of these, 511/543 (94%) were detected by at least one SIMPLE read, with 502/543 (92%) detected by more than three independent SIMPLE events (reads) (Figure [2](#Fig2){ref-type="fig"}A and B). The 32/543 (6%) full-length L1 elements not detected by SIMPLE were located in regions of repetitive DNA (nearly all other genomic L1 elements), which confounds the bioinformatics mapping of these elements.
Figure 2**Validation of 5'-SIMPLE using known full length L1 elements. (A)** CLC Genomics Workbench window view of a representative fixed full length L1 element detected by SIMPLE. L1PA4 represents a known full length L1 element on chromosome 1. Small green lines at bottom are forward SIMPLE reads, reading into the 5' end of the L1PA4 element. **(B)** Frequency distribution of the number of reference full length L1 elements detected by SIMPLE versus the number of independent SIMPLE hits detected for that element. On average, a reference full length L1 was assayed by nine independent SIMPLE reads, with the vast majority of reference full length L1 elements assayed by at least two independent reads. **(C)** CLC Genomics Workbench window view of a representative non-reference full length L1 element detected by SIMPLE. As in **(A)**, green bars represent forward SIMPLE reads; however, in this case, there is no reference full length L1 at the expected site in the genome. **(D)** Frequency distribution of the number of non-reference, known polymorphic full length L1 elements detected by SIMPLE versus the number of independent SIMPLE hits detected for that element. On average, a non-reference full length L1 was assayed by seven independent SIMPLE reads, with the vast majority of non-reference full length L1 elements assayed by at least two independent reads.
To determine how SIMPLE performed for non-reference full-length L1 elements (ie, those elements either present as a single copy or not present in any given individual) we assessed the detection rate of previously described polymorphic full-length L1 elements on the same three chromosomes \[[@CR11],[@CR25]-[@CR27]\]. While the three chromosomes selected were found to have 152 previously described full-length, polymorphic elements on them, we would only expect a small proportion of these elements to be assayed in our individuals by SIMPLE given the polymorphic nature of these elements. In fact, SIMPLE determined that 23 of these known polymorphic full-length L1 elements were present in our tested population of seven individuals. As with fixed L1 elements, SIMPLE performed well for polymorphic elements, with a median of seven independent SIMPLE reads per polymorphic element (Figure [2](#Fig2){ref-type="fig"}C and D). Additionally, the results of our SIMPLE analysis were compared to a lower-throughput method which identified 68 novel full length L1 elements in the genomes of six individuals of diverse backgrounds \[[@CR31]\].
Using SIMPLE, we confirmed the presence of 29/68 of these non-reference full length L1 elements in at least one of our seven individuals, including elements with estimated allele frequencies as low as 2%, many of which were known to be hot for retrotransposition (Additional file [1](#MOESM1){ref-type="media"}: Table S3) \[[@CR31]\]. These data indicate SIMPLE is a powerful tool to detect virtually all full length L1 elements in a given individual in a single experiment, allowing for the detection of novel non-reference full length L1s that may contribute to mutagenesis by mobilization in these individuals. Extent of full length L1 polymorphism {#Sec5} ------------------------------------- To determine the extent of human full length L1 polymorphism, we applied SIMPLE to genomic DNA samples from seven unrelated Caucasian individuals. For ease of analysis, SIMPLE reads mapping to known full-length L1 elements were discarded, and only those reads that did not represent reference elements were considered further. Using SIMPLE, we detected 228 non-reference full-length L1 elements in the genomes of the seven unrelated individuals. The number of non-reference, full-length L1 elements varied from a low of 73 per individual to a high of 134 per individual, with an average of 96 non-reference full length L1 elements per individual (Table [1](#Tab1){ref-type="table"}). Of the non-reference, full-length L1 elements detected by SIMPLE, 160 were elements present in dbRIP or described by previous studies \[[@CR24],[@CR26],[@CR27],[@CR31],[@CR34]\]. The remaining 68 non-reference full length L1 elements detected by SIMPLE in these individuals were novel elements that had not previously been described (Table [1](#Tab1){ref-type="table"}, Additional file [2](#MOESM2){ref-type="media"}: Table S2).Table 1**Summary of polymorphic full length L1 elements in the seven individuals tested in this study**

| Individual | Total polymorphics | Total Unique (New) |
|------------|--------------------|--------------------|
| 1          | 73                 | 7 (6)              |
| 2          | 75                 | 8 (8)              |
| 3          | 134                | 28 (14)            |
| 4          | 120                | 13 (6)             |
| 5          | 96                 | 6 (3)              |
| 6          | 91                 | 4 (2)              |
| 7          | 80                 | 3 (2)              |
| Avg.       | 96                 | 10 (6)             |

Column two represents the total number of polymorphic full length elements detected in a particular individual. Column three represents the number of non-reference full length L1 elements unique to one individual (and, in parentheses, the number of those that represent novel elements). To get a better understanding of the frequency distribution of non-reference, full-length L1 elements detected by SIMPLE, we performed pair-wise comparisons of the SIMPLE-detected non-reference L1 elements between each of the seven individuals (Figure [3](#Fig3){ref-type="fig"}). Our data reveal that any two individuals shared between 29 and 94 non-reference full length L1 elements, with the average number of non-reference L1s shared between any two individuals being 52 (Additional file [3](#MOESM3){ref-type="media"}: Figure S1A). Similar pairwise comparisons revealed that any one individual has between 13 and 91 non-reference, full-length L1 elements that are not present in another single individual, with an average of 44 non-reference, full-length L1 elements not shared between any two individuals (Figure [3](#Fig3){ref-type="fig"}A).
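The pairwise counts above (and in Figure 3 and Additional file 3: Figure S1) reduce to set operations over each individual's detected loci. A minimal sketch, assuming one set of locus identifiers per individual; the toy values are illustrative only:

# Hypothetical input: detected non-reference loci per individual, keyed 1-7.
loci = {1: {"chr1:10500", "chr2:20300"}, 2: {"chr1:10500"}}  # toy example

individuals = sorted(loci)
shared = {(a, b): len(loci[a] & loci[b])   # Figure S1A-style shared counts
          for a in individuals for b in individuals if a != b}
private = {(a, b): len(loci[a] - loci[b])  # Figure 3A-style "in a, not in b"
           for a in individuals for b in individuals if a != b}

For example, private[(1, 2)] is the number of loci detected in individual one but absent from individual two, which is exactly the quantity plotted in each cell of the Figure 3A matrix.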
Analysis of novel non-reference, full-length L1 elements (i.e., those not previously reported) revealed individuals contained between 9 and 26 novel non-reference full length L1 elements, with an average of 15 novel non-reference elements per individual (Figure [4](#Fig4){ref-type="fig"}B). Of these, most are unique to a single individual, but as many as 10% are shared between at least two individuals in our sample (Figure [3](#Fig3){ref-type="fig"}B, Figure [4](#Fig4){ref-type="fig"}A, Additional file [3](#MOESM3){ref-type="media"}: Figure S1A). Taken together, these data indicate an individual genome may contain upwards of 90 non-reference full length L1 elements with the potential to be active.Figure 3**Significant variation in the number of non-reference full length L1 elements is observed between individuals. (A)** Two-way matrix showing the total number of polymorphic full length L1 loci present in one individual but not in another. For example, the first row represents the number of polymorphic full length L1 elements present in individual one but not in individuals two through seven (left to right). Total numbers of polymorphic full length L1 elements per individual are included at the end of each row and the bottom of each column for reference. Numbers vary from a low of 13 elements in individual seven that are not found in individual six (red square) to a high of 91 elements that are found in individual three, but are not found in individual two (blue square). **(B)** Similar to **(A)** except showing the subset of only novel (previously unreported) polymorphic full length L1 loci present in one individual but not in another. For example, the first row represents the number of novel polymorphic full length L1 elements present in individual one but not in individuals two through seven (left to right). Total numbers of novel polymorphic full length L1 elements per individual are included at the end of each row and the bottom of each column for reference. Numbers vary from a low of five elements in individuals five and six that are not found in individual seven (red squares) to a high of 24 elements that are found in individual three, but are not found in individual five (blue square). **(C)** Representative gels depicting 5' flank PCR of two randomly selected non-reference full length L1 elements. Lane labels: NTC=no template control, M=1 kb DNA ladder, Arabic numerals=individual number as per Table [1](#Tab1){ref-type="table"}. **(D)** Representative gel depicting 3' flank PCR of a non-reference full length L1 element. Lane labels are as in **(C)**.Figure 4**Further SIMPLE validation. (A)** Frequency distribution showing the number of novel (i.e., previously undescribed) non-reference SIMPLE-detected full length L1 elements (Y-axis) versus the number of individuals in which they were detected (X-axis). A majority (42/68, 62%) of novel non-reference full length L1 elements were detected in only one individual; however, 26/68 (38%) were detected in at least two individuals and 7/68 (10%) were detected in three or more individuals. **(B)** Graph showing the number of novel non-reference full length L1 elements found in each individual. Numbers ranged from three novel non-reference elements in individual seven to 28 in individual three. The average number of novel non-reference full length L1 elements per individual was ten.
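The Figure 4A distribution is likewise a per-locus carrier count over the same per-individual sets. A minimal sketch of that tally, again with illustrative toy inputs:

from collections import Counter

loci = {1: {"chr1:10500", "chr2:20300"}, 2: {"chr1:10500"}}  # toy example

carriers = Counter()  # locus -> number of individuals carrying it
for person_loci in loci.values():
    carriers.update(person_loci)

histogram = Counter(carriers.values())  # carrier count -> number of loci
# histogram[1] = loci unique to one individual; histogram[3] = shared by three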
Validation of SIMPLE-detected polymorphisms {#Sec6} ------------------------------------------- Non-reference, full-length L1 polymorphisms detected by SIMPLE were subjected to PCR validation to confirm the presence of these elements. We randomly chose 15 SIMPLE-detected non-reference, full-length L1 elements and performed PCR-based validation using L1 junction PCR to amplify across the 5' L1 junction. This method allowed us to determine the 5' junction of each non-reference L1 element by Sanger sequencing of the PCR product. We successfully validated 15/15 (100%) SIMPLE-detected L1 polymorphisms by 5' junction PCR and confirmed the insertions by Sanger sequencing of the 5' junction (Figure [3](#Fig3){ref-type="fig"}C, Additional file [4](#MOESM4){ref-type="media"}: Table S4). Additionally, a random sample of the validated elements was also assayed at the 3' junction. We successfully validated seven full length SIMPLE junctions at the 3' end and were able to determine the polyA tail length, L1 EN cleavage site, and target site duplication size for these polymorphic elements; all displayed the features expected of insertion by retrotransposition (short target site duplications, polyA tails, and L1 endonuclease consensus cleavage sites) (Additional file [5](#MOESM5){ref-type="media"}: Table S5). Taken together, these data show SIMPLE is a robust method for detecting non-reference full length L1 elements. Distribution of non-reference full length L1 elements {#Sec7} ----------------------------------------------------- We next analyzed the chromosomal distribution of the non-reference, full-length L1 elements detected in our study. SIMPLE successfully detected non-reference L1 elements on all 22 autosomes and the X chromosome, with no significant bias for any chromosome to harbor either a particularly high or particularly low number of non-reference, full-length L1 elements. Novel full-length L1 elements were detected on all chromosomes except 8 and 19 (Additional file [6](#MOESM6){ref-type="media"}: Figure S2). Analysis of the number of individuals in our cohort sharing any given non-reference full length L1 reveals that the majority of novel full-length L1 elements detected by SIMPLE are present in only a single individual, indicating they occur at low allele frequency. However, surprisingly, a sizeable number (38%) of novel full length L1s were detected in at least two independent individuals. In fact, 7/68 (10%) novel full length L1 elements detected by SIMPLE were present in at least three independent individuals (Figure [4](#Fig4){ref-type="fig"}A). Additionally, our data show that any individual contains on average 10 unique non-reference, full-length L1 elements, with more than half of those (on average, six) not previously described (Figure [4](#Fig4){ref-type="fig"}B and Table [1](#Tab1){ref-type="table"}). However, the vast majority of the novel full length L1s detected by our study represent relatively rare alleles that are limited to only one or two individuals in our sample. To highlight this, we compared the allele frequencies of the 68 novel full length L1s described by this study to the 160 previously described polymorphic full length L1s also detected in our seven individuals.
We find that while the previously described polymorphic L1 elements range from rare to common among our seven individuals, the majority of the novel polymorphic full length L1 elements detected by SIMPLE are unique to a single individual (Additional file [7](#MOESM7){ref-type="media"}: Figure S3). Taken together, these data suggest that the extent of low frequency (rare) non-reference, full-length L1 insertion polymorphisms in the population may be greater than expected from previous estimates \[[@CR11],[@CR12],[@CR35]\]. Allele frequency estimates of novel full length L1s {#Sec8} --------------------------------------------------- Because our study identified numerous novel full-length L1s that we anticipate are present at low allele frequency, based on their frequency within our seven-individual population (Additional file [7](#MOESM7){ref-type="media"}: Figure S3), we sought to determine the allele frequencies of the polymorphic full length L1 elements detected in our study in a larger population of individuals. We conducted a population study to determine the allele frequency of 43 randomly chosen polymorphic elements in 80 individuals from four geographically diverse backgrounds (African American, Asian, German Caucasian, and South American). The vast majority (79%) of full-length polymorphic L1 elements tested existed at an allele frequency less than 50%, with most (58%) existing at less than 25% allele frequency. Additionally, 33% of polymorphic full length L1s were rare in the population, with allele frequencies less than 10% (Figure [5](#Fig5){ref-type="fig"}). These frequencies remained consistent whether we looked at the total population or within each of the four subpopulations (Additional file [8](#MOESM8){ref-type="media"}: Figure S4). Together, these data suggest that the polymorphic elements detected by SIMPLE represent low frequency polymorphisms in the population that are younger and most likely to be active.Figure 5**Allele frequency determination of full length polymorphic L1s.** The allele frequency of 40 randomly selected full length polymorphic L1s detected by SIMPLE was determined by screening a panel of 80 unrelated individuals from four distinct populations. Most polymorphic L1s tested were found to be low allele frequency elements, with only very few elements existing at allele frequencies above 50%. Application of SIMPLE to L1 3' ends {#Sec9} ----------------------------------- To determine if SIMPLE could be easily adapted to detect all novel L1 insertions, we sought to modify SIMPLE to target the 3' end of human specific L1 elements (L1Hs), because the vast majority of new L1 insertions are 5' truncated \[[@CR2],[@CR20],[@CR36],[@CR37]\]. We designed primers targeting only the youngest, most active L1 subfamilies and performed SIMPLE targeting L1Hs 3' ends in a single individual. To determine the sensitivity of SIMPLE for L1 3' ends, we performed an analysis similar to that in Figure [2](#Fig2){ref-type="fig"}B. By mapping 3'-SIMPLE data to the human genome reference build and assaying three random chromosomes for detection of known L1Hs 3' ends, we determined that SIMPLE performs as well for L1 3' ends as it does for 5' ends, detecting 93% of known L1Hs elements on the three chromosomes assayed (Figure [6](#Fig6){ref-type="fig"}A). As with 5'-SIMPLE, the elements not detected by 3'-SIMPLE are located in regions rich in repetitive DNA sequences, which confounds mapping.
Like 5'-SIMPLE, 3'-SIMPLE is able to detect L1Hs 3' ends with high confidence, with an average of seven independent 3'-SIMPLE hits per known L1Hs (Figure [6](#Fig6){ref-type="fig"}B). These results support the use of SIMPLE for detection of *de novo* L1 insertion events, which would prove useful in studying L1 mutagenesis in various cancer types.Figure 6**Adaptation of SIMPLE to detect L1 3' ends. (A)** CLC Genomics Workbench window view of a representative fixed human specific L1 element (L1Hs) detected by 3'-SIMPLE. L1Hs represents a known human specific L1 element on chromosome 1. Small red lines at bottom are reverse 3'-SIMPLE reads, reading into the 3' end of the L1Hs element. **(B)** Frequency distribution of the number of reference L1Hs elements detected by 3'-SIMPLE versus the number of independent 3'-SIMPLE hits detected for that element. On average, a reference L1Hs element was assayed by seven independent 3'-SIMPLE reads, with the vast majority of reference L1Hs elements assayed by at least two independent reads. Discussion {#Sec10} ========== Full length polymorphic L1 elements are responsible for virtually all of the disease-causing retroelement insertions and the bulk of retroelement mobilization observed in tumors. We have developed a robust tool to identify full-length polymorphic L1 elements in individual genomes, which also has the potential to be applied to the detection of *de novo* somatic L1 insertion events in tumors or other samples. Because DNA fragmentation for SIMPLE occurs by random shearing of genomic DNA by sonication, SIMPLE is able to detect L1 elements that may be located too far from the restriction enzyme cut sites used by other high-throughput sequencing methods \[[@CR25],[@CR26]\]. Additionally, SIMPLE has benefits over fosmid-based approaches used in the past in that it is extremely high throughput, allowing the detection of essentially all genomic L1 elements in a single SIMPLE reaction \[[@CR31]\]. The rare elements missed by SIMPLE, which reside within repetitive regions of the genome (i.e., other L1 elements), are likely missed as a function of read length. It is possible that future iterations of SIMPLE using longer reads would allow for exact mapping of reads that lie partially within repetitive genomic regions. Using SIMPLE, we have determined that the typical individual contains about 100 non-reference, full-length L1 elements. Based on previous estimates, we expect 63/100 of these elements to be active, with 54/100 exhibiting particularly high levels of activity \[[@CR31]\]. These 100 non-reference, full-length L1 elements are in addition to the active polymorphic L1 elements present in the human genome reference build, indicating the potential burden of active L1s in any one individual exceeds previous estimates of 80--100 elements per individual \[[@CR11]\]. Additionally, our results indicate that the number of non-reference, full-length L1 elements in the population likely exceeds previous estimates, given the finding that a number of novel elements detected by SIMPLE were present in multiple individuals.
Further, the variation we see between individuals in the number of non-reference, full-length L1 elements implies there are likely individual differences in each person's "retrotransposition potential", with some individuals in a population being more prone to L1 jumping than others. This is consistent with previous hypotheses \[[@CR38]\] as well as with recent data showing some hot L1s are differentially active between different cancers \[[@CR28]-[@CR30]\]. It is interesting to note that although all seven individuals assayed in this study were Caucasian, there is still significant variation between individuals with respect to non-reference, full-length L1 elements. We anticipate, however, that analysis of diverse populations will yield similar results in terms of absolute numbers to those reported here. We would expect an individual from any single population to harbor the average of \~100 non-reference full length L1 elements, with the only difference being the identity of the particular elements found in that individual. Two individuals from similar backgrounds will share a greater number of non-reference elements than two individuals from diverse backgrounds, but both will have \~100 non-reference full length L1 elements each regardless of their background. Conclusion {#Sec11} ========== SIMPLE represents a useful tool for both population- and tumor-based studies of L1 insertion polymorphism. The vast majority of L1 retrotransposition is likely to arise from the approximately 100 polymorphic L1 loci in each individual. Work looking at L1 mobilization in tumors has identified a number of tumor types that appear to support L1 mobilization (colorectal, prostate, lung, ovarian) as well as others that do not (glioblastoma, multiple myeloma) \[[@CR25]-[@CR27],[@CR39],[@CR40]\]. While this preference for particular tumor types can be explained by cell-specific factors affecting L1 mobilization, there is also significant variation in the extent of L1 mobilization within cancers of the same type that could be due to differences in polymorphic, active L1 elements between two individuals \[[@CR27],[@CR40]\]. Using SIMPLE, it may be possible to determine the nature of biases in L1 retrotransposition rates between different individuals or tumors and attribute them to differences in the number or location of active L1 elements. Additionally, with the application of SIMPLE to L1Hs 3' ends, it can be used to determine the total levels of L1 retrotransposition in different individuals and tumors. Methods {#Sec12} ======= Cell lines and oligonucleotides {#Sec13} ------------------------------- Fibroblast cell lines GM01631, GM01632, GM05510, GM05568, GM16094, GM15983, and GM08207 were obtained from the Coriell Institute (Camden, NJ). Cell lines were maintained in EMEM supplemented with non-essential amino acids, sodium pyruvate, and 10% fetal bovine serum. DNA oligonucleotides and duplex linkers were obtained from Integrated DNA Technologies (Coralville, IA). Oligonucleotide sequences used in this study are presented in Additional file [9](#MOESM9){ref-type="media"}: Table S1. 5' SIMPLE library generation {#Sec14} ---------------------------- Genomic DNA from fibroblast cell lines was extracted using the DNeasy Blood and Tissue Kit from Qiagen (Germantown, MD). DNA was sheared to approximately 750-1200 bp using a Diagenode BioRuptor on the High setting, 30 s on/30 s off, for 12 minutes.
Fifty nanograms of sheared gDNA was subjected to a primer extension reaction using Taq polymerase and an L1 5' UTR-specific primer (L15'UTRP1), which sits \~100 bp from the start of the L1 element. Phosphorylated duplex T-linkers (IDT, Coralville, IA) were ligated using T4 DNA ligase. First-round PCR was performed for 20 cycles using primers L15'UTRP1 and LinkerP1. The first-round PCR product was diluted one-million fold, and 1 μL of this dilution was subjected to a nested PCR using primers L15'UTRP2 and LinkerP1 for 25 cycles. PCR products were run on a gel, and a gel slice at \~500-700 bp was extracted using the Qiagen Gel Extraction kit. Following extraction, the final library was amplified using Phusion polymerase (Thermo, Waltham, MA) for 12 cycles as per the Illumina library generation protocol and gel purified to yield the final 500-700 bp library. Library quantitation and Illumina sequencing {#Sec15} -------------------------------------------- Illumina sequencing libraries were quantified by qPCR and checked for quality by Agilent BioAnalyzer trace at Elim Biopharmaceuticals (Hayward, CA). Paired-end 100 bp Illumina sequencing was performed on an Illumina HiSeq 2000 by Elim Biopharmaceuticals (Hayward, CA). Reference genome masking {#Sec16} ------------------------ Reference repetitive element annotation files were downloaded from the UCSC genome browser as GTF files and used to annotate the GRCh37 (hg19) reference genome in CLC Genomics Workbench (CLC Bio, Cambridge, MA). Two custom GFF annotation files were also generated: 1) representing known polymorphic elements and 2) representing all known full length L1 elements plus 600 bp of upstream genomic flanking sequence. To identify and annotate 5' L1 regions of the genome, the first 300 base pairs of the L1.3 consensus sequence were aligned to the human reference genome (GRCh37/hg19) via NCBI BLAST (blastn algorithm). Initial search parameters were relaxed to increase sensitivity for older, more mutated elements (word size=7, gap cost=3, gap extension cost=3, match score=2, mismatch penalty=−3, minimum E score of 10). BLAST output was generated in tabular format. Because many of the hits obtained from this search consisted of isolated smaller regions (20-30 bp) that were independent of any identifiable L1 element, we further filtered for those hits where alignments were made across \>=250 bp of the 300 bp query. This greatly improved our specificity while maintaining sensitivity to older, more mutated elements. Tabular-format BLAST results were converted to GFF via Perl scripting, and an additional 600 bp of flanking sequence was added upstream and downstream of each matching position. Annotation of 3' L1 ends and flanking sequences was performed using a similar strategy, except that initial identification of 3' end locations was based on the UCSC Genome Browser hg19 RepeatMasker table track, which allowed for subfamily classification based on all available sequence from the insertion. 5' SIMPLE mapping strategy {#Sec17} -------------------------- FASTQ sequencing files were analyzed for quality using FastQC. PCR duplicate reads were removed using custom in-house Perl scripting (this step is sketched below). Sequence read mappings were performed using CLC Genomics Workbench. Single-end genomic flank reads were mapped using a two-pronged mapping strategy. First, reads that mapped within 600 bp of known, fixed full-length L1 elements, as well as reads that did not map unambiguously, were discarded, as they represented either known elements or unmappable reads.
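Because every authentic SIMPLE event carries a random shear point at its linker end, PCR duplicates are precisely the reads sharing both the primer-side mapping position and the linker (shear) coordinate. The published pipeline did this in custom Perl; the following Python sketch is only illustrative, and the read fields are assumed:

def remove_pcr_duplicates(reads):
    # reads: iterable of objects with hypothetical .chrom, .primer_pos and
    # .linker_pos attributes. PCR duplicates are exact copies of one ligation
    # event, so they share the random shear (linker) coordinate; independent
    # detections of the same L1 differ there and are all kept.
    seen = set()
    unique = []
    for read in reads:
        signature = (read.chrom, read.primer_pos, read.linker_pos)
        if signature not in seen:
            seen.add(signature)
            unique.append(read)
    return unique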
The remaining reads were then mapped uniquely to an annotated GRCh37/hg19 reference genome to identify novel polymorphic or potentially *de novo* full length L1 elements. Polymorphic full length L1 allele frequency determination {#Sec18} --------------------------------------------------------- ### DNA samples {#Sec19} Confirmed novel L1 elements were genotyped to determine the allele frequency of each insert on a DNA panel of 80 diverse individuals (20 African Americans, 20 Asians, 20 Europeans and 20 South Americans) obtained from the Coriell Institute for Medical Research, Camden, NJ. The efficacy of each primer pair was initially assessed through PCR using DNA from the human cell line HeLa (ATCC CCL-2) and/or the chimpanzee cell line Clint (S006006), confirming the size of the predicted PCR products. ### PCR primer design {#Sec20} BLAT was used to locate candidate L1 sequences and their flanking DNA in the hg19 human reference genome and to add 1,000 bp of flanking sequence both upstream and downstream. Sequences were then screened through RepeatMasker, confirming the absence of candidate L1 inserts from the hg19 sequence and localizing unique genomic regions for primer design. For the most part, prospective PCR primers flanking each candidate L1 locus were designed using Primer3 software. Once screened through the BLAT genome browser, primers were selected for PCR if they were predicted to amplify a single locus. In addition, a virtual PCR was performed for each locus using the in silico function of BLAT to compute the expected PCR product size and annealing temperature, and to further verify that only one locus would be amplified. For candidate L1 loci directly flanked by substantial stretches (more than 1 kb) of repetitive genomic sequence, primers were designed manually. In such instances, flanking sequences (up to 1 kb) were screened using the BLAT genome browser and aligned with closely related matches (usually 4--12 loci) in BioEdit (Ibis Biosciences). Point mutations, insertions, and/or deletions specific to the candidate L1 loci were manually positioned toward the 3' end of the primers. In addition, internal primers designed to anneal to the 3' end of a full length human specific L1 element were available \[[@CR41]\]. ### PCR analysis {#Sec21} PCR assays were performed in four stages. (1) An empty site PCR analysis was initially conducted using HeLa DNA. External primers flanking each candidate L1 locus were used to amplify PCR products matching the predicted empty site band lengths previously generated using BLAT's in silico PCR feature. This assay was capable of amplifying PCR products no larger than 1,500 bp. As a result, alleles lacking the L1 insertion were exclusively amplified. Neither alleles possessing an L1 insertion nor unsuitable primer pairs amplified PCR products. Thus, it was impossible to distinguish suitable primer pairs flanking L1 inserts that were homozygous present in HeLa DNA from an unsuccessful reaction (failure to amplify an existing empty site). Therefore, given the human specific nature of the predicted L1 inserts, chimpanzee DNA was used in a control PCR analysis examining each primer pair that failed to amplify an empty site during the initial PCR. (2) An internal primer test was conducted to verify the presence of novel L1 inserts and to confirm that they were full length.
These PCR analyses were performed using DNA samples from the individual(s) in which the inserts were originally identified, an external primer flanking the predicted 3' end of the L1, and a 3' internal primer \[[@CR41]\]. (3) A long amplification PCR was conducted to further confirm the presence or absence of the candidate L1 insertions characterized in the earlier stages. This assay was capable of amplifying PCR products up to 15 kb; thus, the flanking external primers were used to amplify the entire L1 insertion. (4) An allele frequency PCR analysis was conducted using a DNA panel of 80 individuals (20 African Americans, 20 Asians, 20 Europeans and 20 South Americans). This assay subjected each sample to both an empty site assay (stage 1) and an internal primer test (stage 2). Stages 1, 2, and 4 were performed in 25 μL reactions containing 25 ng of template DNA, 200 nM of each oligonucleotide primer, 1.5 mM MgCl2, 10× PCR buffer (50 mM KCl, 10 mM Tris-HCl; pH 8.4), 0.2 mM deoxyribonucleotide triphosphates and 1 to 2 U Taq DNA polymerase. PCR reactions were performed as follows: initial denaturation at 94°C for 60 seconds, followed by 32 cycles of denaturation at 94°C for 30 seconds, 30 seconds at the primer annealing temperature (determined previously with HeLa DNA), and extension at 72°C for 30 seconds. PCR reactions were terminated with a final extension at 72°C for 2 minutes. Fractionation of 20 μL of each PCR product was performed in a horizontal gel chamber on a 2% agarose gel containing 0.2 μg/mL ethidium bromide for 45 minutes at 200 V. UV fluorescence was used to visualize the amplified DNA fragments. Stage 3 was performed using Takara LA-Taq (long amplification) DNA polymerase (Clontech Laboratories, Inc., Mountain View, CA). These PCR reactions were performed in 25 μL reactions containing 25 ng of template DNA and LA-Taq PCR reagents according to the manufacturer's suggested protocol: initial denaturation at 94°C for 1 minute and 20 seconds, followed by 32 cycles of denaturation at 94°C for 20 seconds, 20 seconds at the optimized annealing temperature, and a long extension step at 68°C for eight minutes and 30 seconds. These long-amplification reactions were terminated with a final extension at 68°C for 10 minutes. PCR products were size fractionated on a 1% agarose gel for 150 to 180 minutes at 150 V. UV fluorescence was used to visualize the DNA fragments. 3'-SIMPLE library generation {#Sec22} ---------------------------- Library generation for 3'-SIMPLE was performed in the same manner as for 5'-SIMPLE, with the following modifications. Primer extension used the L1HsP1 primer. First-round PCR was 22 cycles rather than 20 and used primers L1HsP1 and LinkerP1. Second-round PCR used primers L1HsP2 and LinkerP1. All remaining steps of 3'-SIMPLE library generation were identical to those for 5'-SIMPLE. 3'-SIMPLE mapping strategy {#Sec23} -------------------------- Raw reads were processed for quality and duplicate removal in the same manner as for 5'-SIMPLE. A similar two-pronged mapping strategy was used for 3'-SIMPLE: reads were first mapped to known L1Hs elements, and the remaining unmapped reads were then mapped unambiguously to an annotated GRCh37/hg19 genome. Additional files {#Sec24} ================ Additional file 1: Table S3.Detection of previously reported polymorphic full length L1s.
This table shows details for 68 previously identified polymorphic L1s \[[@CR31]\], some of which were also detected by this study.Additional file 2: Table S2.Polymorphic full length L1 elements identified in this study. This table shows details for all 228 polymorphic full length L1s identified by this study.Additional file 3: Figure S1.Pairwise comparison of polymorphic L1 elements shared between any two individuals. (A) Two-way matrix showing the total number of polymorphic full length L1 loci shared between any two individuals. For example, the first row represents the number of polymorphic full length L1 elements shared between individual one and each of individuals two through seven (left to right). Total numbers of polymorphic full length L1 elements per individual are included at the end of each row and the bottom of each column for reference. Numbers vary from a low of 29 elements shared between individuals two and seven (red square) to a high of 94 elements shared between individuals three and four (blue square). (B) Similar to (A) except showing the subset of only novel (previously unreported) polymorphic full length L1 loci shared between any two individuals. For example, the first row represents the number of novel polymorphic full length L1 elements shared between individual one and each of individuals two through seven (left to right). Total numbers of novel polymorphic full length L1 elements per individual are included at the end of each row and the bottom of each column for reference. Numbers vary from a low of zero elements shared between individuals one and five and individuals two and five (red squares) to a high of eight elements that are shared between individuals one and two and individuals three and four (blue squares).Additional file 4: Table S4.Validation of SIMPLE-detected full length L1s by 5' flank PCR. This table shows details of the validation of SIMPLE-detected polymorphic L1s using 5' junction PCR.Additional file 5: Table S5.Validation of SIMPLE-detected full length L1s by 3' flank PCR. This table shows details of the validation of SIMPLE-detected polymorphic L1s using 3' junction PCR.Additional file 6: Figure S2.Chromosomal distribution of polymorphic full length L1 elements across seven individuals. Polymorphic full length L1 elements were detected by SIMPLE on all autosomes and chromosome X without any apparent bias for one genomic region over another. Novel full length L1 elements (grey) were also detected across nearly all autosomes and chromosome X without apparent bias.Additional file 7: Figure S3.Allele frequency of known and novel full length L1s among the seven individuals in our population. Graph shows the frequency (as a percentage) of known (blue bars) or novel to this study (red bars) polymorphic full length L1 elements within our population. Previously described polymorphic L1s show an even distribution with some representing rare alleles (present in only one or two individuals) and others representing common alleles (present in all or most individuals). However, the novel polymorphic L1s show a distribution much more skewed to rare alleles, with most novel polymorphic L1s being unique to only a single individual.Additional file 8: Figure S4.Allele frequency of 40 selected polymorphic full length L1s from four diverse populations. Histogram showing the allele frequencies of 40 randomly selected polymorphic full length L1s broken down by population subtype. 
Twenty unrelated individuals from each of four geographically distinct ethnic groups (African American, Asian, German Caucasian, and South American) were tested for the presence or absence of each selected polymorphic full length L1.Additional file 9: Table S1.List of oligonucleotides used in this study. NNNNN represents the random barcodes used for multiplexing. **Competing interests** The authors declare that they have no competing interests. **Authors' contributions** VAS designed and performed most experiments, analyzed data, and wrote the manuscript. VEJ conducted population study experiments. SL and DJH created data analysis tools. MAB analyzed data and revised the manuscript. PLD designed and conceived experiments, analyzed data, and revised the manuscript. All authors read and approved the final manuscript. The authors thank Dr. Cecily DeFreece and the members of the Deininger lab for critical comments on this manuscript. This work was supported by the National Institutes of Health \[R01GM045668, P20RR020152, and P20GM103518 to PLD; R01GM59290 to MAB\]. Vincent Streva was supported by a Louisiana State Board of Regents Fellowship. Funding for open access charge: National Institutes of Health.
The long-term objective of the proposed research is to gain a better understanding of the processes and representations underlying spoken word recognition. Substantial empirical evidence now exists in support of the hypothesis that surface variability in speech has both processing and representational implications. The processing work demonstrates that variability in speech, such as a change in the identity of a talker, hinders perception, resulting in slower and/or less accurate spoken word recognition. The representational work demonstrates that variability also has long-term consequences for the perception of spoken words, suggesting that the representations underlying spoken word recognition can include highly specific surface details. Furthermore, recent work also suggests that variability effects follow a time course, manifesting themselves at predictable points during perceptual processing. More specifically, some types of surface variability do not seem to affect early stages of spoken word recognition. Rather, abstract linguistic representations appear to drive the initial on-line perception of spoken words, and more specific representations, containing surface details associated with variability, only affect later stages of processing. The aim of the proposed work is to provide a more immediate and fine-grained analysis of the time course of variability effects. In particular, behavioral measures of the time course of the effects of two sources of variability, talker identity and speaking rate, will be collected using the eye-tracking paradigm. The proposed research should ultimately lead to better theories and models of both normal and disordered spoken language perception.
Q: How to use a foreach loop, but do something different on the last iteration? This is probably a simple question, but how do you iterate through an array, doing something to each one, until the last one and do something different? I have an array of names. I want to output the list of names separated by commas.

Joe, Bob, Foobar

I don't want a comma at the end of the last name in the array, nor if there is only one value in the array (or none!).

Update: I can't use implode() because I have an array of User model objects where I get the name from each object.

$users = array();
$users[] = new User();

foreach ($users as $user) {
    echo $user->name;
    echo ', ';
}

How can I achieve this and still use these objects?

Update: I was worrying too much about how many lines of code I was putting in my view script, so I decided to create a view helper instead. Here's what I ended up with:

$array = array();
foreach ($users as $user) {
    $array[] = $user->name;
}
$names = implode(', ', $array);

A: Use implode:

$names = array('Joe', 'Bob', 'Foobar');
echo implode(', ', $names); # prints: Joe, Bob, Foobar

To clarify, if there is only one item in the array, the ', ' separator will not be used at all, and a string containing the single item will be returned.

EDIT: If you have an array of objects, and you want to do it in a way other than a for loop with tests, you could do this:

function get_name($u) {
    return $u->name;
}

echo implode(', ', array_map('get_name', $users)); # prints: Joe, Bob, Foobar

A: $array = array('joe', 'bob', 'Foobar');
$comma_separated = join(",", $array);

output: joe,bob,Foobar

A: Sometimes you might not want to use implode. The trick then is to use an auxiliary variable to monitor not the last, but the first time through the loop, viz:

$names = array('Joe', 'Bob', 'Foobar');
$first = true;
$result = '';
foreach ($names as $name) {
    if (!$first) {
        $result .= ', ';
    } else {
        $first = false;
    }
    $result .= $name;
}
Tumor Vessel Normalization, Immunostimulatory Reprogramming, and Improved Survival in Glioblastoma with Combined Inhibition of PD-1, Angiopoietin-2, and VEGF. Glioblastoma (GBM) is a non-T-cell-inflamed cancer characterized by an immunosuppressive microenvironment that impedes dendritic cell maturation and T-cell cytotoxicity. Proangiogenic cytokines such as VEGF and angiopoietin-2 (Ang-2) have high expression in glioblastoma in a cell-specific manner and not only drive tumor angiogenesis and vascular permeability but also negatively regulate T-lymphocyte and innate immune cell responses. Consequently, the alleviation of immunosuppression might be a prerequisite for successful immune checkpoint therapy in GBM. We here combined antiangiogenic and immune checkpoint therapy and demonstrated improved therapeutic efficacy in syngeneic, orthotopic GBM models. We observed that blockade of VEGF, Ang-2, and programmed cell death protein-1 (PD-1) significantly extended survival compared with vascular targeting alone. In the GBM microenvironment, triple therapy increased the numbers of CTLs, which inversely correlated with myeloid-derived suppressor cells and regulatory T cells. Transcriptome analysis of GBM microvessels indicated a global vascular normalization that was highest after triple therapy. Our results propose a rationale to overcome tumor immunosuppression and the current limitations of VEGF monotherapy by integrating the synergistic effects of VEGF/Ang-2 and PD-1 blockade to reinforce antitumor immunity through a normalized vasculature.
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <stdint.h>

/* XOR-mask a websocket payload (RFC 6455) with a 4-byte mask. */
static PyObject* websocket_mask(PyObject* self, PyObject* args) {
    const char* mask;
    Py_ssize_t mask_len;
    uint32_t uint32_mask;
    uint64_t uint64_mask;
    const char* data;
    Py_ssize_t data_len;
    Py_ssize_t i;
    PyObject* result;
    char* buf;

    if (!PyArg_ParseTuple(args, "s#s#", &mask, &mask_len, &data, &data_len)) {
        return NULL;
    }

    uint32_mask = ((uint32_t*)mask)[0];

    result = PyBytes_FromStringAndSize(NULL, data_len);
    if (!result) {
        return NULL;
    }
    buf = PyBytes_AsString(result);

    /* On 64-bit platforms, XOR eight bytes at a time with a widened mask. */
    if (sizeof(size_t) >= 8) {
        uint64_mask = uint32_mask;
        uint64_mask = (uint64_mask << 32) | uint32_mask;

        while (data_len >= 8) {
            ((uint64_t*)buf)[0] = ((uint64_t*)data)[0] ^ uint64_mask;
            data += 8;
            buf += 8;
            data_len -= 8;
        }
    }
    /* Then four bytes at a time... */
    while (data_len >= 4) {
        ((uint32_t*)buf)[0] = ((uint32_t*)data)[0] ^ uint32_mask;
        data += 4;
        buf += 4;
        data_len -= 4;
    }
    /* ...and finish the remaining tail byte by byte. */
    for (i = 0; i < data_len; i++) {
        buf[i] = data[i] ^ mask[i];
    }

    return result;
}

static PyMethodDef methods[] = {
    {"websocket_mask", websocket_mask, METH_VARARGS, ""},
    {NULL, NULL, 0, NULL}
};

#if PY_MAJOR_VERSION >= 3
static struct PyModuleDef speedupsmodule = {
    PyModuleDef_HEAD_INIT,
    "speedups",
    NULL,
    -1,
    methods
};

PyMODINIT_FUNC
PyInit_speedups(void) {
    return PyModule_Create(&speedupsmodule);
}
#else  // Python 2.x
PyMODINIT_FUNC
initspeedups(void) {
    Py_InitModule("tornado.speedups", methods);
}
#endif
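For reference, the operation this extension accelerates is the RFC 6455 payload masking: each payload byte is XORed with the 4-byte mask, cycled. A pure-Python equivalent (slow, but handy for sanity-checking the C fast path) could look like this:

def websocket_mask_py(mask: bytes, data: bytes) -> bytes:
    # XOR each byte of data with the repeating 4-byte mask (RFC 6455).
    assert len(mask) == 4
    return bytes(b ^ mask[i % 4] for i, b in enumerate(data))

The C version above gets its speed by XORing eight bytes at a time on 64-bit platforms, then four, then finishing the tail byte by byte; the result is identical to this byte-at-a-time version.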
Nikolay Ivanovich Fedorov Nikolay Ivanovich Fedorov (Russian: Николай Иванович Фёдоров; 13 October 1918, Vyatka (now Kirov) – 16 November 1990) was a Russian painter and textile designer. Collections of his works were acquired by the Russian Museum in St. Petersburg and by the Museum of Decorative and Applied Arts in Moscow, and are also permanently exhibited in the State Darwin Museum in Moscow and in the museums in Tomsk and Krasnoyarsk. Honors and exhibitions He was a member of the USSR's Union of Artists from 1956 and in 1978 was awarded the title of Honored Artist of Russia. The collection of textiles elaborated with his participation won the Grand Prix, a Diploma of the 1st degree and a gold medal at the World Exhibition in Brussels in 1958. His tapestries have been exhibited several times at the Leipzig Fair. He is one of the authors of the curtains for the Bolshoi Theatre in Moscow, the curtains for the concert hall in the Hotel Russia (along with Kausov), the assembly hall of the Palace of Culture of Moscow's Textile Institute (along with Shubnikova), and the curtains for the concert hall at the Palace of Culture of the Ministry of Internal Affairs. The joint collection of his and Shubnikova's art works represented Russian textile art of the 1940s-1950s in the exposition of the Russian Museum in St. Petersburg in 1993. This exhibition made a long tour through several countries in Europe in the 1990s. Documenting the search for the Tunguska meteorite In 1939, he participated as an artist in Leonid Kulik's last expedition searching for the Tunguska "meteorite" (what exactly caused the event remains controversial). Later, in 1984 and 1988, he also participated in the Tunguska meteorite expeditions under the guidance of Academician Vasiliev. His paintings depicting eyewitness reports and later scientific theories were exhibited in many museums and used in several books. Textiles Nikolay's textiles were produced for many years by the Moscow Weaving and Finishing Complex (MTOK) and were widely used. Some textile samples were purchased by the Moscow Film Studio (Mosfilm) and used as set curtains in many popular films. One textile based on a French classical tapestry was used in the popular Russian TV series "Twelve Chairs". References Roy A. Gallant, The day the sky split apart: investigating a cosmic mystery, Atheneum Books for Young Readers, 1995. Roy A. Gallant, Meteorite Hunter: The Search for Siberian Meteorite Craters, McGraw-Hill, 2002. External links Nikolay Ivanovich Fedorov Official web site Category:1918 births Category:1990 deaths Category:20th-century Russian painters Category:Russian male painters
Q: Enforce Unique consumer group id in Kafka We are building a Kafka queue into which messages are published from a source system. Multiple consumers can connect to this queue to read messages. While doing this, each consumer has to specify a groupId, based on which the messages are distributed; if two apps share the same groupId, each message will be delivered to only one of them, not both. Is there a way I can enforce that every app has a unique consumer groupId? A: Kafka is not a naming registry, but there are many other strategies that you might use to assign unique group names without a coordinator (a sketch of these strategies follows below): Name the group based on the application. Two apps will each get all the data since their names are unique. Name the group based on the data store that they are writing to (e.g. the JDBC connection string). If you start an app instance for a different database, it will have its own offsets. Name the group based on the (canonical) class name that implements the consumer. If your app has two consumers implemented in different Java classes, they'll get different groupIds. Assign a random value (e.g. a UUID) to the group. On every restart a new group will be created.
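A minimal sketch of these naming strategies, assuming the kafka-python client; the topic, servers and names below are placeholders rather than values from the question:

import uuid
from kafka import KafkaConsumer  # assumes the kafka-python package

APP_NAME = "billing-service"           # strategy 1: one group per application
SINK = "jdbc:postgresql://db/reports"  # strategy 2: one group per destination

group_id = APP_NAME                    # or SINK, or type(handler).__qualname__
# group_id = f"adhoc-{uuid.uuid4()}"   # strategy 4: fresh group on every run

consumer = KafkaConsumer(
    "events",                          # placeholder topic name
    group_id=group_id,
    bootstrap_servers=["localhost:9092"],
)

With the UUID variant, each run starts a brand-new group with no committed offsets, which is handy for ad-hoc inspection but means there is no resume-on-restart.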
Glycosyltransferases and glycan structures contributing to the adhesive activities of L-, E- and P-selectin counter-receptors. In mammals, leucocytes of the adaptive and innate immune systems must move from their sites of origin to sites of maturation, or to where they are deployed against the invasion of pathogens. The vascular tree serves as the primary thoroughfare by which leucocytes move to these various destinations. Adhesion must be established between the leucocyte and the endothelial cells that line the vascular tree to enable leucocytes to escape the vascular compartment and then contribute to extravascular immune processes. A major fraction of these leucocyte-endothelial-cell adhesive events initiate with, and require interactions between, the selectin family of cell adhesion molecules and their glycoconjugate counter-receptors. This article will review the structures of the glycan components of these counter-receptors, and the glycosyltransferases that control their expression.
Q: Unable to install python2.7-dev on Ubuntu 12.04.2 LTS The command $ sudo apt-get install python2.7-dev yields this result: Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: python2.7-dev : Depends: libexpat1-dev but it is not going to be installed E: Unable to correct problems, you have held broken packages. I have run sudo apt-get update, sudo apt-get upgrade and sudo apt-get dist-upgrade before running the install command. I have tried switching the repository server from my local server (Czech Republic) to the Main server. I have tried installing the package using aptitude; the result was: The following NEW packages will be installed: libexpat1-dev{ab} libssl-dev{a} libssl-doc{a} python2.7-dev 0 packages upgraded, 4 newly installed, 0 to remove and 0 not upgraded. Need to get 32.4 MB of archives. After unpacking 47.9 MB will be used. The following packages have unmet dependencies: libexpat1-dev : Depends: libexpat1 (= 2.0.1-7.2ubuntu1.1) but 2.1.0-1ubuntu1 is installed. The following actions will resolve these dependencies: Keep the following packages at their current version: 1) libexpat1-dev [Not Installed] 2) python2.7-dev [Not Installed] Accept this solution? [Y/n/q/?] Y No packages will be installed, upgraded, or removed. 0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Need to get 0 B of archives. After unpacking 0 B will be used. Aptitude's suggested "solution" doesn't actually do anything and has no effect. After printing out the above, it exits without an error code or any error message. I have tried removing libexpat1, but way too many packages I need depend on it, including, it seems, a huge chunk of GNOME. None of the things I have tried had any effect; the result of the install command was still the same. I have searched the web for the issue and found a few posts that seem to be related to my problem, but though they may be "resolved", none of them helped me in any way (here are the two I didn't close before asking this question): Unable to install python2.7-dev due to unmet dependencies Ubuntu 12.04 https://askubuntu.com/questions/275861/problem-installing-python-dev A: The interesting bit here is: libexpat1-dev : Depends: libexpat1 (= 2.0.1-7.2ubuntu1.1) but 2.1.0-1ubuntu1 is installed. According to the online package search tool, 12.04 LTS ships libexpat1 version 2.0.1-7.2ubuntu1.1 for both precise and precise-updates. The question is, how did you get version 2.1.0-1ubuntu1 installed in the first place? Using the aptitude interactive UI, you can search for libexpat1 and select the appropriate version for installation, i.e., downgrade the package to match the libexpat1-dev dependency. You can do that from the command line as well: sudo aptitude install package=version (in this case, sudo aptitude install libexpat1=2.0.1-7.2ubuntu1.1)
Eastern Area Agency on Aging sees increase in requests for services DOVER-FOXCROFT, Maine — More senior citizens, adults with disabilities and caregivers in Piscataquis, Washington, Hancock and Penobscot counties reached out for more help in 2010 than in 2009 from the Eastern Area Agency on Aging. Piscataquis County commissioners learned recently that the nonprofit agency's 53 employees, some of whom are part-time, along with more than 700 volunteers, served 14,332 clients in 2010, a 29 percent increase over 2009. Leslie Lizotte, an outreach social worker with the agency, which is funded in part by the Department of Health and Human Services and municipal donations, told the commissioners last week that the agency provides information, community outreach including minor home repairs, health insurance assistance, transportation, health and wellness programs, and the agency's popular Meals on Wheels and Community Cafes. Income-eligible senior pet owners also receive some help with pet food that has been donated to the agency. "We really look at the whole picture" when someone reaches out for help from the agency, Lizotte said. "We try to find out if there are programs that can help them." For those caring for people with dementia and related disorders, the agency offers respite care for short periods of time as a stop-gap measure. "We have a very small respite program, small but powerful," Lizotte said. The agency provided 1,571 families with respite and caregiver services in 2009. Staff members and volunteers also spend considerable time helping seniors through insurance issues and programs. "Insurance is no longer simple," considering all of the different insurance plans, Lizotte said. Another area of need is addressed through the Meals for Me program. About 623 meals per day were served to seniors in 2009, both in the various dining rooms set up throughout the counties and through home deliveries, according to Lizotte. In addition, volunteers and a part-time employee helped 138 people with minor home repairs, ranging from installing ramps and steps to lawn care and simple home modifications. Those home visits sometimes turn up something unexpected, according to Tom Iverson Jr., Piscataquis County's emergency management agency director. He said on a couple of occasions volunteers have found homeowners deceased. Iverson suggested this week that the agency should provide volunteers with training on how to handle such incidents. Lizotte concurred that training would be helpful. "We could not do our work without the volunteers," she said. The 737 volunteers who work with the agency provided 58,742 hours of services, the equivalent of 28 full-time people year-round, according to agency officials.
Hi, I'm Peg from Sony, and I'm here to show you how to troubleshoot no display on your Sony Vaio notebook. For my demonstration I'm using the VPCEB. Each Vaio model is slightly different, and the operating system you're using could be a factor, so be sure to check the owner's manual or visit us at esupport.Sony.com for specific information on your model and operating system. Start by checking that your AC power adapter is properly connected to the computer. You should also check the wall outlet and verify that all connections are good. I suggest that you use AC power if possible; if you are using the battery, make sure that the battery pack has a full charge. It is possible that the computer has gone into standby mode; just press any key to activate the display. The monitor should flicker or light up. If your monitor still is not showing a display, you should at least see lights on the computer. This will show that the computer is awake. If no lights are on, push the power button and wait for your Vaio PC to start up completely. If you still have no display, check the brightness and ensure that it has not been turned down all the way. Press the Function key and the F5 key at the same time, then use the right arrow key to increase the brightness. If the brightness did not change and you have an external monitor connected, check your display by pressing the Function and F7 keys to cycle between the video output modes. If you still have no display, you're going to power the unit off, disconnect all external devices and displays, and remove any PC cards. Now I'm going to show you how to go into Safe Mode. As your computer is powering up, when you see the Vaio screen, start tapping the F8 key. Now you are in Safe Mode, and you should have a display. At this point you will need to go to esupport.Sony.com for more information on how to do a restore point for your computer. If at this point you still cannot see your monitor, you're going to need further technical support; go to esupport.Sony.com. I hope you enjoyed this video. For more tips, tricks and how-to videos, please visit us at esupport.Sony.com or on our SonyListens YouTube channel. Thank you.