AI and the Art of Go
Lucas Baker, Mar 12, 2016
Today, a once-distant milestone arrived much sooner than anyone predicted: AlphaGo has defeated Lee Sedol, and computers have conquered Go.
The world’s oldest and most intricate board game now joins chess, Jeopardy, and the host of other pursuits that, once exclusive to humans, have become the province of ever-more-sophisticated artificial intelligence. But this time is different: whereas previous conquests could be explained away as “brute force,” Go required the development of true intuition. Deep Blue’s chess differed from the human kind as dramatically as airplanes do from birds. In contrast, AlphaGo learned Go much as a human would: through observation, experimentation, self-improvement, the organic development of heuristics, and, ultimately, a keen feeling for master play. That feeling may be expressed in the form of neural nets and floating-point numbers, but it is feeling nonetheless, for what do you call a sense that tames the near-infinity of possible variations into one move, if not “intuition”?
As expected, the advent of AlphaGo has sent shockwaves throughout Asia and the world. Many have recognized it as a historic milestone in artificial intelligence, whose elements will prove valuable in technology, medicine, and a long list of other practical applications. But AlphaGo’s triumph has also provoked a special sadness. This is no ordinary game that has fallen; it is one of humanity’s oldest, most profound, and most beautiful intellectual arts. Edward Lasker once claimed that Go is “so elegant, organic and rigorously logical that, if intelligent life forms exist elsewhere in the universe, they almost certainly play Go.” Throughout its 3000-year history, the game has been considered a hallmark of culture (one of the Four Arts of ancient China), an expression of human genius, and a mirror to the soul. One must wonder if a computer’s victory in such a game tarnishes that mirror. And if the mirror is tarnished, isn’t our image of the human soul dimmed as well?
I’ve wrestled with these thoughts myself, particularly as some of my closest friends share this perspective. To me, and to many of the 40 million other players in the world, Go represents more than a game. They are strangely bewitching, these patterns of wood, slate, and shell, this system of such simple design and such impossible complexity. I have never found anything so intellectually captivating, and I doubt I ever will. Mathematics itself must be benevolent if the wonderful patterns of the Go board can arise merely from placing one stone down after another, in conformance with a minimal set of rules.
And now to the crux of the matter: as everyone who has felt this way knows, there is something mysterious and nearly magic about this process. That sense of mystery has always been deeply cherished among Go players. It is the reason we were so proud that Go, not merely a game but an art, stood unconquered among games of strategy. But now a program reigns supreme, not a human mind, and in the presence of science there can be no mystery.
That sense of loss: it feels as if the art of Go is gone.
But must we write the epitaph of Go, now that it has been subdued? Has the game that endured for millennia at last entered its twilight?
What will be the fate of Go?
To answer that question requires a more precise definition, because Go is not one game only, but three: Go the science, Go the mind sport, and Go the art. Each of these will react very differently to AlphaGo’s victory.
Go the science is essentially finished, because the interesting questions about the game are essentially solved. The first Go program was an ALGOL script written in 1968, and since that time we have progressed through three epochs of research. The first, until 2007, devoted its effort to hard-coded expert knowledge and heuristics. The second, until 2014, built on the Monte Carlo tree search techniques pioneered by Coulom, Gelly, Silver, and others. The third, driven by DeepMind, lasted only two years but comprised the greatest leaps in strength, delving into deep neural networks and reinforcement learning. The interesting questions of Go were: Can computers match humans? Can they do it by brute force, or must they develop intuition? And is it possible to create a strong value function for the game? The answers were yes, intuition, and definitely. But science succeeds best when it renders further study unnecessary. For the research community, the existence of AlphaGo is an unmitigated triumph, in that it has done just that. If our appetite for research is not yet sated, we will need to devise an entirely new set of questions.
The sport of Go will change rapidly, in much the same way as the sport of chess did. After Kasparov was vanquished by Deep Blue, chess computers quickly matched humans at smaller scales, first on desktops and then even on phones. No longer were stables of grandmasters necessary to prepare a champion for the next game, as grandmasters, aspiring professionals, and amateurs alike gained access to superhuman-level instruction and analysis. The theory of the game experienced a renaissance as computers introduced a flurry of new ideas. Cheating also ran rampant for a while, then was quickly curtailed as the same computers that had enabled cheating prevented it through move-by-move Elo analysis. Amateur spectators gained a better understanding of masters’ games, with live engine-based analysis indicating who had the advantage and what alternative lines of play they could have chosen.
Whether AlphaGo is involved in these developments or suffers the sad fate of Deep Blue, shut away forever by IBM after its last match, the Go world will see all these changes take place over the next several years. And though the sport of chess looks different now, with computers integrated into every facet of competitive play, the chess community has not suffered unduly. Magnus Carlsen is a household name in Norway, famous enough even to create his own clothing line, though he has never beaten a top computer and never will. The name of Lee Sedol will remain dear to Koreans, regardless of his defeat in this series, and his place in history as one of the greatest players of his era is secure.
That leaves only the art of Go. In this perspective, which finds expression throughout East Asian history, Go is not a contest of victory and defeat, but a poetic and semi-mystical game that both captivates and reflects the spirit. One famous legend, Lan Ke (“Rotten Axe”), finds a young man lost in the forest where he has gone to cut wood. He stumbles on two strange old men playing Go, one of whom gives him a date to eat. He sits down to watch their game and, rapt with attention, falls into a trance. When he finally arises, he realizes that so much time has passed that the handle of his axe has rotted away.
The art of Go also implies a certain insight into a player’s personality. The Go board is large enough to accommodate every type of attitude: aggression, calm, fear, desire, greed, hope, rebelliousness, and lust for power are only a few of the traits that can be directly recognized in any given move. In Edo-era Japan, where Go reached its cultural height, players would joke that a true master could look at a game and determine when the student was calm, when they were confident, when they were desperate, and when the maid came by with tea.
And in all three of the major Go-playing countries (China, Korea, and Japan), the concept of the “divine move” still commands the imagination: that is, a move so profound that only God could play it. The idea is ancient, but it survives in many forms today. In fact, a movie called The Divine Move came out in Korea in 2014, to enormous commercial success. The anime Hikaru no Go, which has inspired a large surge in popularity for Go over the past fifteen years, also makes frequent references to “Kami no Itte”. It seems that Go players simply cannot help but inject an element of spirituality into the game.
All of this sentiment is tied to one perspective: that not only is Go a human game, it is specially human. It is part of what makes humans unique, the ability to fathom something so mysterious so deeply, and synthesize all this information into one right move. It belongs to that part of human character that approaches the divine.
And it is that sacredness that the victory of a computer has taken away. For that is the essence of science: replacing mystery with understanding.
Many articles on AlphaGo have discussed the future of Go, and most of them use a physical analogy. Did people cease to run when the car was invented? Of course not, they say, and so it will be with Go. But if Go is an art, this analogy remains deeply unsatisfying. The car does not run like we run. But AlphaGo plays like we play, only better. In the words of Lee Sedol’s rival Gu Li, commenting after the second match, “it played some moves that only God could have played.” AlphaGo, it seems, has proven more divine than we have.
Yet I do not believe that this match ends the art of Go. I believe I have a better analogy.
Astronomy has always been a deeply inspiring subject: what lies above us is perhaps the only thing as astounding as what lies within us. And astronomy, too, has undergone a transformation in knowledge, but its moment came a very long time ago. We once thought the sun revolved around the Earth; that is, that the universe itself was anthropocentric. Copernicus proved that theory false, and suffered for it, but ultimately the universe became no less marvelous for his discovery. Instead it grew more so, as we looked first beyond the bounds of our own planet, then of our solar system, then of our galaxy. Our horizons did not contract, but expanded, when the universe in our minds stopped revolving around us.
Now we have discovered that Go does not revolve around us, either. We may soon find that this is true of other pursuits we hold dear, such as music or writing. But Go itself is no less sublime, no less fantastic, for its submission, and nor would any of these others be. A game of Go holds as much interest for me today as it did yesterday. In fact, it holds more, as the creative and counterintuitive moves in this very match illustrate how much uncharted territory remains. With that clarity comes a chance for understanding, an opportunity to map that territory. What is the true komi? What is the best variation of the avalanche? What is the true strength of any given move? And who was the strongest player ever to have lived? If computers had never mastered Go, we would forever continue to answer these questions in a limited and temporary way. But now, one day, we will know. And the emergence of such clarity offers an opportunity to unite the Go world in a way that has never happened in its millennia of existence.
The moment may be bittersweet, but the art of Go has not died today. Today, the art of Go is reborn.
Dr. Michelson and others emphasize that while the new Fermi results do not yet eliminate the prospect, further observations with more gamma-ray bursts could eventually verify or refute the hypothesis. That would have a major effect on physicists’ efforts to unify the Einsteinian gravity that governs outer space with the weird quantum laws that govern the inner space of the atom.
Mario Livio, an astronomer at the Space Telescope Science Institute in Baltimore, called the Fermi results an interesting effect but not revolutionary by any stretch. “The beauty of the experiment is not as much in what it achieves,” Dr. Livio said, “as in the fact that you can use astronomical observations to place some interesting limits on very fundamental physics.”
Quantum theory, as Einstein discovered to his chagrin, reduces life on subatomic scales to a game of chance in which elementary particles can be here or there but not in between. One consequence is that space-time itself should become discontinuous and chaotic when viewed at very close distances, the way an ocean that looks smooth from an airplane appears choppy and foamy up close.
This, the story goes, could have an effect on the propagation of light — or photons, as they are called in quantum-speak — slowing light with short wavelengths relative to light with longer wavelengths. The higher the energy of a photon, the shorter is its wavelength. One way to think about it is to envision the photons as boats on this choppy sea. The small ones, like tugboats, have to climb up and down the waves to get anywhere, while the bigger ones can slice through the waves and bumps like ocean liners, and thus go a little faster.
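The inverse relation between photon energy and wavelength that the tugboat analogy rests on is simply lambda = hc/E. A minimal sketch of the arithmetic, using the standard shortcut value hc ≈ 1239.84 eV·nm (the function name here is my own, for illustration):

```python
# Convert photon energy to wavelength via lambda = h*c / E.
# hc ~ 1239.84 eV*nm is the standard shortcut value for this conversion.
HC_EV_NM = 1239.841984

def wavelength_nm(energy_ev):
    """Photon wavelength in nanometres for a given energy in electron volts."""
    return HC_EV_NM / energy_ev

low_energy = wavelength_nm(10_000)   # 10 keV, the low end of the burst
high_energy = wavelength_nm(31e9)    # 31 GeV, the high end
print(low_energy, high_energy)       # ~0.124 nm vs ~4.0e-8 nm
```

The 31 GeV gamma rays have wavelengths more than a million times shorter than the 10 keV photons, which is why, in these models, they would be the ones slowed most by the "choppy sea" of space-time.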
Until now such quantum gravity theories have been untestable. Ordinarily you would have to see details as small as 10⁻³³ centimeters — the so-called Planck length, which is vastly smaller than an atom — to discern the bumpiness of space. Getting that kind of information is far beyond the wildest imaginations of the builders of even the most modern particle accelerators, and that has left quantum gravity theorists with little empirical guidance.
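For reference, the Planck length quoted here follows directly from three fundamental constants via l_P = sqrt(ħG/c³). A quick sketch of the calculation (constant values are rounded CODATA figures):

```python
import math

# Fundamental constants in SI units (rounded CODATA values)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

# Planck length: the scale at which quantum gravity effects are expected
planck_length_m = math.sqrt(hbar * G / c**3)
planck_length_cm = planck_length_m * 100

print(f"Planck length = {planck_length_cm:.2e} cm")  # ~1.62e-33 cm
```

The result, about 1.6 × 10⁻³³ centimeters, is the scale the article describes as vastly smaller than an atom.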
“What’s really lacking,” Dr. Michelson explained, “is a laboratory experiment that tells us anything. So we have to use cosmology: we use the universe as the lab.”
The photons from GRB 090510, detected on May 10, 2009, ranged from 10,000 electron volts — the energy unit of choice in physics — to 31 billion electron volts, a factor of more than a million, in seven brief bursts over about two seconds.
The spread in travel time of 0.9 second between the highest- and lowest-energy gamma rays, if attributed to quantum effects rather than the dynamics of the explosion itself, suggested that any quantum effects in which the slowing of light is proportional to its energy do not show up until you get down to sizes about eight-tenths of the Planck length, according to the Nature paper, whose lead author was Sylvain Guiriec of the University of Alabama.
But Dr. Livio emphasized that this was only one of many classes of models. “It would be amazing that in effect we don’t need a quantum theory of gravity,” he said. “This only tells us where there are the dead ends.”
Indeed, other physicists said that even this model would not be ruled out until the size limit had been set much below the Planck size.
The good news, astronomers said, is that more data expected from Fermi could decide the question. As Lee Smolin, a quantum gravity theorist from the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, said, “So a genuine experimental test of a hypothesized quantum gravity effect is in progress.”
In the meantime, the last word belongs to Einstein, Robert P. Kirshner of the Harvard-Smithsonian Center for Astrophysics wrote in an e-mail message paraphrasing a 1919 headline in The New York Times about observations that confirmed Einstein’s general relativity. “But the Nature story,” Dr. Kirshner wrote, “is ‘Einstein found right again. Heavens not askew! Savants not agog!’ ”
Land ownership in South Africa remains heavily skewed across racial lines twenty years after the end of apartheid. But is 80% of the country really in the hands of only 40,000 white families?
Do around 40,000 white families own 80% of the land in South Africa?
It is a claim that has been widely circulated since Andile Mngxitama, a Member of Parliament and “commissar for land and agrarian revolution” with the Economic Freedom Fighters (EFF), raised the issue in an open letter to business tycoon Richard Branson in May 2014.
In Mngxitama’s letter – written after the Virgin founder purchased a 40-hectare farm near Franschhoek in the Western Cape province – Branson’s acquisition was described as “stolen land”.
‘Native majority are landless’
“The dominant idiom since 1652 is that of the settler, who imposed it upon the native majority through force of arms,” Mngxitama wrote. “The result of this conquest is that, about 350 years later, the native majority is landless and only about 40,000 white families own up to 80% of our land.”
Mngxitama later repeated the claim on Twitter, writing: “#Land101 SA is constituted by 123-million hectares. 80% of SA land owned by only 40,000 white families. SA population [about] 53-million”.
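Taking the tweeted figures at face value, a quick back-of-the-envelope calculation shows what the claim would imply; this is only arithmetic on Mngxitama's own numbers, not a statement about actual ownership:

```python
# Sanity check on the tweeted claim: 80% of 123 million hectares
# split among 40,000 families.
total_hectares = 123_000_000   # Mngxitama's figure for South Africa's land area
claimed_share = 0.80
families = 40_000

hectares_per_family = total_hectares * claimed_share / families
print(hectares_per_family)     # 2460.0 hectares per family, on average
```

On those figures, each of the 40,000 families would hold an average of nearly 2,500 hectares, which is the scale of holding the rest of the fact-check tests against the available data.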
In a subsequent television debate with Cornelius Janse van Rensburg from the Afrikaans “business rights watchdog” AfriSake, the EFF’s spokeswoman in Gauteng, Mandisa Mashego, was adamant that “80% of this country’s land is deemed as agricultural land and 80% of that land is owned by 40,000 white families”. Glaring, Janse van Rensburg responded: “It’s nonsense, it’s not so.”
Can the claim be dismissed as nonsense, or is there some truth to it?
79% of SA in private hands
Mngxitama was emphatic when we spoke to him: “40,000 white families own 80% of the land. Deal with that.” He said his claim was supported by a recent state land audit, data collected by Statistics South Africa (Stats SA) and research conducted by the Institute for Poverty, Land and Agrarian Studies (PLAAS) at the University of the Western-Cape.
Mashego did not respond to questions.
So what does the data tell us?
The state land audit, carried out by the office of South Africa’s Chief Surveyor-General and published in 2013, did indeed find that 79% of South Africa’s landmass was in private hands.
But that includes land owned by individuals, companies and trusts and all urban real estate as well as agricultural and mining land in South Africa.
Therefore, according to Mmuso Riba, the Chief Surveyor-General, “there is no basis” for the claim that whites own 80% of South Africa.
‘Land ownership deeply skewed’
One possible source for Mashego’s claim is a dataset on land utilisation that is still used by the Department of Agriculture, Forestry and Fisheries (DAFF) despite the fact that it is more than two decades old. The data was compiled by the Development Bank of Southern Africa in 1991. (According to the department’s spokesman, Makenosi Maro, updated data will only be released towards the end of 2016.)
The 1991 dataset shows that 100,665,792 hectares – or 82.3% of South Africa’s surface area – consisted of farmland. Of this, 81.9% (or 86,186,026 hectares) was considered commercial agricultural land. The rest – situated in what were formerly “black homelands” established under the auspices of the apartheid state – remains classified as “developing agriculture”.
Prof. Cherryl Walker, professor of sociology at the University of Stellenbosch and author of Landmarked: Land Claims and Land Restitution in South Africa, prepared a fact sheet on land distribution for PLAAS last year.
According to Walker: “Land ownership is still deeply skewed along racial lines, but these figures [by the EFF] do not illuminate the current land dispensation.”
One farmer, one farming unit?
Should the EFF’s Mngxitama and Mashego be referring to 80% of farmland – and not 80% of South Africa’s landmass – it is possible that the most recent census of commercial agriculture is the primary source of their claims. It was carried out seven years ago by Stats SA.
The census found that there were slightly fewer than 40,000 farming units, defined as “one or more separate farms in the same provinces that are farmed as a single unit”.
Importantly, the census report explained: “The number of farming units… does not represent the number of farmers, as a specific farming unit can be operated by more than one farmer, and one farmer can operate more than one farming unit.”
The census also did not reflect the racial composition of farm owners, nor the surface area of the farming units.
Small farms likely excluded
There is another caveat. For a farming unit to be included in the census it had to be registered for Value Added Tax (VAT). In South Africa it is compulsory to register for VAT when a business’s turnover reaches a certain threshold. In the census year the bar was set at R300,000 over a twelve-month period.
Peter O’Halloran, who writes on tax matters for Farmer’s Weekly, says this would have excluded smallholdings surrounding the major cities and farms that are too small to make them economically viable.
“Commercial farms might number 40,000 or so according to the census, but in terms of land owners who own farms, this number could be much higher. VAT registration and compliance is highly onerous and the small operator will shy away from that.
“My take is that the smaller farmers and recreational farmers make up the majority of farmland owners in South Africa.”
Unions join the fray
Of the farming units registered for VAT in 2007, only 39,966 were identified as “active” at the time of the census and included. The majority of farming units (33,249) were owned by individuals, with 2,167 belonging to companies, 2,259 to close corporations and 874 described as “family-owned”.
How many are owned by black or white farmers? It is difficult to say for certain.
A Black Economic Empowerment (AgriBEE) scorecard – that measures elements such as black ownership and skills development – has been introduced for the agricultural sector. But the Agricultural Business Chamber (Agbiz) said in its latest survey report it is “very difficult to measure the BEE compliance of the agricultural sector as whole, as so few enterprises have determined their score, never mind obtained accredited scorecards”.
Black or white?
Frustrated by pressure from legislators and politicians, agricultural unions have carried out land audits of their own. To date, two have been completed.
The KwaZulu-Natal Agricultural Union (Kwanalu) did not publicly release their audit so it cannot be independently assessed. Its CEO, Sandy la Marque, forwarded Africa Check a copy of a presentation which put white ownership at 15.4% of the province’s surface area with the ownership of a further 23.11% listed as “unknown”.
Agri Free State had their audit assessed by the Bureau for Food and Agricultural Policy (BFAP), a university based research network. They found that only 2.96% of commercial agricultural land in the province was black-owned. Another 10% could not be fully accounted for.
(Note: This is the case, for example, where land is owned by trusts or companies and it becomes virtually impossible to define ownership as either white or black. Agri Free State refers to the Anglo American Corporation, which has a BEE rating, but not necessarily an AgriBEE rating, and is listed on foreign stock exchanges but has significant domestic shareholding.)
To complicate matters further, both unions’ counts of state-owned land are at odds with the state land audit.
The Surveyor-General said his office would refine their audit in time. At this stage they are surveying and registering land owned by the state. This includes a great number of schools, health facilities, police stations, vast tracts of land in the Eastern Cape and a significant chunk of the Kruger National Park – all of which were not previously recorded as state land.
Conclusion: The claim is incorrect
Claims that 40,000 white families own either 80% of South Africa, or 80% of the country’s farmland, are incorrect and not supported by the available data.
Although a state land audit has shown that 79% of South Africa is privately owned, this includes land owned by individuals, companies and trusts, and includes all urban real estate and agricultural and mining land in South Africa. This would include land owned by both black and white South Africans.
It is also unlikely that the number of commercial farming units captured in the 2007 census – slightly less than 40,000 – reflects the true status of all commercial agricultural land in South Africa.
Certainly, huge disparities remain and land ownership continues to be heavily skewed across racial lines twenty years after the end of apartheid.
But none of the datasets support the claims made by Mngxitama and Mashego. Given the inherent sensitivity of the land debate and the importance of land reform in South Africa, it is vital that debate around the issue and policy decisions is informed by accurate, current data.
Edited by Julian Rademeyer
© Copyright Africa Check 2019.
José Inés García Zárate said death of Kate Steinle was accidental in case that fueled debate over immigration and ‘sanctuary cities’
A jury on Thursday found a Mexican man not guilty of murder in the killing of a woman on a San Francisco pier that touched off a national immigration debate two years ago.
José Inés García Zárate had been deported five times and was wanted for a sixth deportation when Kate Steinle was fatally shot in the back while walking with her father on the pier.
García Zárate did not deny shooting Steinle but said it was an accident.
The shooting came in the middle of the presidential campaign in July 2015 and touched off a fierce debate over the country’s immigration policies. It spotlighted San Francisco’s “sanctuary city” policy, which limits local officials from cooperating with US immigration authorities.
Politics, however, did not come up in the month-long trial, which featured extensive testimony from ballistics experts. Defense attorneys argued that García Zárate was a hapless homeless man who killed Steinle in a freak accident. Prosecutors said he meant to shoot and kill her.
García Zárate was found guilty of being a felon in possession of a firearm.
The San Francisco deputy district attorney Diana Garcia said during the trial that she did not know why García Zárate fired the weapon, but he created a risk of death by bringing the firearm to the pier and twirling around on a chair for at least 20 minutes before he fired.
“He did kill someone. He took the life of a young, vibrant, beautiful, cherished woman by the name of Kate Steinle,” she said.
A defense attorney, Matt Gonzalez, said in his closing argument that he knew it was difficult to believe García Zárate found an object that turned out to be a weapon, which fired when he picked it up.
But he told jurors that García Zárate had no motivation to kill Steinle and that as awful as her death was, “nothing you do is going to fix that”.
The bullet ricocheted on the pier’s concrete walkway and fatally struck Steinle in the back.
The gun was stolen from the SUV of a US Bureau of Land Management ranger that was parked in San Francisco. The city has been plagued by an epidemic of car burglaries in recent years.
Before the shooting, García Zárate had finished a federal prison sentence for illegal re-entry into the United States and had been transferred to San Francisco’s jail in March 2015 to face a 20-year-old charge for selling marijuana.
The sheriff’s department released him a few days later after prosecutors dropped the marijuana charge, despite a request from federal immigration officials to detain him for deportation.
Donald Trump said during the presidential campaign that Steinle’s death was another reason the United States needed to build a wall on its southern border and tighten its immigration policies.
Trump signed an executive order to withhold funding from sanctuary cities, but a federal judge recently blocked it in a lawsuit from two California counties, San Francisco and Santa Clara. The administration has appealed.
After the verdict, the president tweeted:
Donald J. Trump (@realDonaldTrump): “A disgraceful verdict in the Kate Steinle case! No wonder the people of our Country are so angry with Illegal Immigration.”
Another lawyer for the defense, Francisco Ugarte, said that the death of Kate Steinle was an “incomprehensible tragedy”, but the ruling was a vindication for immigrants.
Ugarte said the case was used “to foment hate” and used “to catapult a presidency along that philosophy of hate of others”.
He said the immigration status of García Zárate had no relevance to the case and the verdict was a correct reflection of what had transpired.
Dozens of Turkish journalists have been jailed, and several foreign journalists have been harassed, detained, denied entry or expelled from Turkey in recent months, particularly since an unsuccessful coup attempt in July. A Wall Street Journal correspondent, Dion Nissenbaum, was held incommunicado for three days in December without explanation before leaving the country along with his wife and infant daughter.
After having arrived at the Istanbul airport from London, Mr. Nordland said in an email that he had been stopped by the border police. They told him that his name was on an Interior Ministry order denying him entry and that they were placing him on the next flight back, “no reason given,” he wrote. A Turkish lawyer for The Times, Orcun Cetinkaya, said the airport police had told a colleague that the reason was “national security,” with no further details.
A spokesman for the Turkish presidency, who spoke on condition of anonymity under government protocol, and Huseyin Muftuoglu, a spokesman for the Foreign Ministry, said they would investigate the circumstances and had no further immediate comment. Telephone attempts to reach Ali Ozturk, a spokesman for the Interior Ministry, were met with a busy signal.
Turkish officials had earlier expressed unhappiness over some articles by Mr. Nordland from November and December, particularly one from the southeast city of Diyarbakir, the former stronghold of an outlawed Kurdish group, the Kurdistan Workers’ Party, or P.K.K. The article described the aftermath of months of fighting there between P.K.K. guerrillas and government forces.
In a statement, Mr. Baquet said: “The Turkish government’s action is an affront to freedom of the press and an effort to keep the world from having access to independent reporting from Turkey. Rod is a veteran correspondent who has done groundbreaking journalism from around the world. There was no justification for today’s action. The Times remains committed to covering Turkey fairly, accurately and fully.”
There are more than four million Muslims in Pakistan who are prohibited by law from freely professing their faith. They are systematically discriminated against, excluded from social and political spheres, and their homes, places of worship, and community members are the targets of brutal violence. These are the people of the Ahmadiyya Muslim community, disciples of Mirza Ghulam Ahmad.
Mirza Ghulam Ahmad of Qadiyan (British Punjab) emerged towards the end of the nineteenth century as a leading polemicist and amassed a number of followers, and in 1899 he declared himself the promised messiah of Islam. By believing in the prophethood of Mirza Ghulam Ahmad, Ahmadi Muslims have earned the ire of Muslims who hold a different interpretation of the finality of the Prophet Mohammad (khatam-i-nabuwat).
Historically, the Ahmadiyya community supported the Muslim League during the crucial phase of the independence movement of the 1940s. Mirza Basheer-ud-Din Mahmood Ahmad, then Caliph of the Ahmadiyya Muslim Community, urged his followers to support the Muslim League, as he felt that Muslims would not be granted equal rights and representation in post-partition India. He oversaw the safe migration of Ahmadis from what was Indian Punjab to present-day Pakistan, building a community in Rabwah, their present-day headquarters. Ironically, it is in India where members of the Ahmadiyya community are recognized as Muslims and enjoy equal rights.
While anti-Ahmadi sentiment dates back to the time of the partition, Ahmadis had the same legal protections as any other group during the early decades of Pakistan, and Jinnah himself had close relations with the community and accepted them as Muslims. In fact, the first foreign minister of Pakistan, Zafarullah Khan, was an Ahmadi Muslim.
Anti-Ahmadi sentiment continued, as members of the Jamaat-i-Islami raised objections to members of the community acting like a close-knit minority while reaping the benefits of being part of the majority. Furthermore, Deobandi leaders raised objections over the acquisition of land in Rabwah, as well as what they perceived to be a disproportionate Ahmadi presence in higher administration and the military. More common were fiery speeches that accused the Ahmadiyya community of harming the sentiments of Muslims by disbelieving in the finality of the prophet Muhammad. Anti-Ahmadi sentiment gradually rose until an ultimatum was delivered to the Prime Minister on January 21, 1953, which demanded:
The removal of Zafarullah Khan from the foreign ministry;
The removal of Ahmadis from top government offices;
The declaration of Ahmadis as non-Muslim.
When these demands were rejected, riots followed in which between 200 and 2000 Ahmadi Muslims were killed and martial law had to be imposed. Following these riots, a Court of Inquiry was established to look into these disturbances, the report of which is referred to as the "Munir Report", after Chief Justice of the time Muhammad Munir. The report famously highlights the fact that no two clerics could agree on the definition of a Muslim, setting a dangerous precedent.
What the religio-political parties and the ulema failed to achieve in 1953 finally took place in 1974, when Pakistan officially declared Ahmadis non-Muslim under the populist regime of Zulfiqar Ali Bhutto. By this point in time, the masses in Pakistan had grown disillusioned with military rule and the massive developmental schemes which had resulted in large gaps between the rich and the poor. Stephen Cohen in The Idea of Pakistan also credits the secession of Bangladesh as one of the causes for the empowerment of conservative elements in what remained of Pakistan. Bhutto, who capitalized on increasing religiosity in the country with fiery speeches and use of slogans such as “Islamic socialism”, struck a chord with the populace, as highlighted by Ali Usman Qasmi in The Ahmadis and the Politics of Religious Exclusion in Pakistan. That year, major riots against the Ahmadiyya community took place, which resulted in widespread loss of life and property, and ended in the second amendment to the Constitution, which declared the sect non-Muslim.
Anti-Ahmadi prohibitions were only exacerbated under the military dictator Zia-ul-Haq, who succeeded Bhutto. Ordinance XX was passed in 1984, which severely limited the religious freedom of the Ahmadiyya community, who were prohibited from:
Calling their places of worship a mosque.
Using the Azan (Muslim call to prayer) in their places of worship.
Referring to themselves as Muslim.
“In any manner whatsoever” outraging the religious feelings of Muslims.
All these egregious developments were stepping stones towards stripping the community of some of their basic rights. Today, most Ahmadis in Pakistan live in all-pervasive fear. Their places of worship are destroyed over allegations of blasphemy, they are the victims of targeted violence, they are arrested for propagating the Ahmadiyya Muslim faith, there are movements to boycott Ahmadi-owned products, and their graves are desecrated with impunity. Police officers are oftentimes complicit, and local media fan the flames of bigotry. Public school curricula pander to prejudice and alienate anyone who isn't Muslim.
Anti-Ahmadi sentiment is only increasing, and these attitudes have been exported to other countries, including the United Kingdom and the United States, where attacks against members of the Ahmadiyya community continue.
The only way for Pakistan to truly progress as a nation is to work towards the principles of democracy and inclusion that were envisioned by the nation’s founders. By continuing to pander to divisive elements, Pakistani society will continue to be bogged down in intolerance and hatred. The state should take steps to repeal Ordinance XX, and work towards the separation of religion and state. Unfortunately, recent surveys have shown that a majority of Pakistanis want Sharia law (a legal system derived from the religious precepts of Islam), and whether or not such a political system safeguards the rights of minorities is highly questionable.
It’s a common figure of speech to say that x is worth its weight in y, where y is usually (but not always) gold. But most of us don’t buy and weigh gold very often, so how do you connect that to real life? Does “worth its weight” in pennies or $100 bills make any more sense?
We have collected here a bunch of examples of different things that represent a wide range of monetary value per unit weight, in what might make a useful calibration chart for your future idiomatic usage.
Let’s start this off with a down-to-earth question. Which has a higher monetary density: dimes or quarters? In other words, if you had to carry around $1000 worth of either dimes or quarters, which should you ask for?
And… surprisingly enough, dimes and quarters have the same monetary density, about $20 per pound, so you can pick either. (But as you can see, nickels are mighty inefficient. Avoid carrying them in your pockets whenever possible.)
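The arithmetic behind that comparison is simple enough to sketch. The coin masses below are the US Mint's published figures (dime 2.268 g, quarter 5.670 g, nickel 5.000 g):

```javascript
// Dollars per pound of a coin, from its face value and its mass in grams.
const GRAMS_PER_POUND = 453.592;

function dollarsPerPound(faceValue, massGrams) {
  return (faceValue / massGrams) * GRAMS_PER_POUND;
}

console.log(dollarsPerPound(0.10, 2.268).toFixed(2)); // dimes:    "20.00"
console.log(dollarsPerPound(0.25, 5.670).toFixed(2)); // quarters: "20.00"
console.log(dollarsPerPound(0.05, 5.000).toFixed(2)); // nickels:  "4.54"
```

Dimes and quarters come out identical because a quarter weighs exactly 2.5 times as much as a dime and is worth exactly 2.5 times as much.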
On a related topic, paper bills weigh about 1 gram each. The monetary density of paper currency makes much more sense– just look at that beautiful curve. Clearly, bigger bills are better.
Alongside US coins we have such staples as all-purpose flour and base metals. It is interesting that copper is worth so much more than pennies are – but pennies these days are only 2.5% copper; the rest is cheap zinc. (In fact, pennies haven't been made of plain copper since 1837.)
[Aside: The commodity prices that we cite here are all rough estimates, believed to be more-or-less correct as of August 2008. See the table at the end for references.]
Kopi Luwak coffee costs approximately the same amount per pound as human blood. (Knowing where it comes from, I think I’d rather drink the blood. It’s been pointed out before that printer ink is also up there, but I’d rather not drink that either.)
Would you have guessed that peacock feathers can be worth more than their weight in dollar bills? Or that a fancy steak costs twice as much as its weight in dollar coins?
People have been saying that the new industrial grade swimsuits like the LZR Racer are worth their weight in gold. As you can see, this is clearly inaccurate. But such a suit is worth its weight in marijuana or industrial diamonds.
At the high end of this graph is gold (the only thing worth exactly its own weight in gold!), right next to the cost of launching a pound of stuff to the International Space Station in low earth orbit. Putting that into perspective here: You might as well build your whole spaceship out of $20 bills– it still would cost less than putting it up there. It could almost be made of solid gold for that price.
Of course, gold isn’t the only precious metal, or even the most expensive. That “honor” belongs to rhodium, whose price far exceeds that of its weight in $100 bills. There’s an interesting coincidence in this price range: Cocaine is about $50/gram, while a fifty dollar bill weighs about a gram. Even exchange? Platinum is also in the same price range, so you could say that $50 bills are worth their weight in platinum.
If we look at good-quality 1 carat diamonds, we find that they are quite expensive compared to the industrial diamonds we saw earlier. The diamond monopoly hasn't kept prices quite as high as LSD's, but they are doing a very impressive job of trying. LSD doses are measured in micrograms, which makes the per-pound “street value” of the stuff astronomically high.
In the table that follows, we list our data with references, organized from cheapest to most expensive. A few of the items in the table didn’t make it into our graphs, including the last (and most expensive) item. Antimatter – presently made one subatomic particle at a time – would be unfathomably expensive in bulk, some $26 quadrillion per pound.
Democrats have no idea where to turn.
They simply weren’t prepared for Hillary Clinton to lose, and now they’re struggling to find a path. Do they reach out to the blue collar white voters they lost to Donald Trump? Do they double down on the Obama coalition, hoping that changing demographics sweep them to victory? Do they choose new leadership in the hopes of drawing more young enthusiasm? Or do they go with tried and true leadership, hoping that trends will carry them forward in any case?
One thing is clear: they won’t be moderating their positions any time soon. And that’s an enormous mistake. Here are some of the angles the DNC is considering:
More Racial Politics. Rep. Keith Ellison (D-MN) was recently called an unacceptable pick by the leftists at the Anti-Defamation League for his anti-Semitic comments about the role of Israel in foreign policy. But that’s just the tip of the iceberg. Ellison has a long history with the anti-Semitic Nation of Islam, has spoken of 9/11 as a Nazi-esque Reichstag fire, and has defended the worst sort of Jew-haters, including Khalid Abdul Mohammed and Louis Farrakhan. The Democratic Party has spent the last several years moving in a significantly anti-Israel direction; in 2012, the DNC infamously overruled its own members to avoid removing Jerusalem from their platform as the capital of Israel. The Obama administration has been the most anti-Israel administration in history, and Ellison is an extension of that.
But at least he’s a minority! He’s black and Muslim, and according to some Democrats, that’s the important thing. He’s got the support of Bernie Sanders, Elizabeth Warren, and Harry Reid. As former Sanders spokeswoman Symone Sanders said, “We don’t need white people leading the Democratic Party right now.” No wonder Trump won the white vote in historic fashion.
More Big Government. Dean is losing, but his entire pitch is fundraising and doubling down on Obamacare. Obamacare has now lost Democrats the 2010, 2014, and 2016 elections. South Carolina Democratic Party chairman Jaime Harrison worked for the Podesta Group, the Clinton-dominated outlet involved in typical pay-for-play Clintonian politics.
Leftist Social Issues. One of the possible new DNC chairs, Ilyse Hogue, is president of NARAL, a pro-abortion interest group. Hillary pushed issues ranging from transgender bathrooms to same-sex marriage extraordinarily hard during this election cycle, and voters in swing states simply didn’t care – they wanted to hear how she’d make their lives better. Democrats are obviously most passionate about social issues, but that’s not true of the rest of the country.
The Democrats are, thank God, in disarray. And they don’t understand that their contempt for regular Americans has a great deal to do with it. They can’t present a face that isn’t invested in identity politics or big government shilling. No wonder they’re becoming a regional party.
But their aim is to devise a new generation of fast and flexible computers that can work out for themselves how to solve a problem, rather than having to be told exactly what to do.
Professor Bill Ditto, at the Georgia Institute of Technology, is leading the project and says he is amazed that today's computers are still so dumb.
Bill Ditto views his computer wetware
Well connected
The device the team has built can "think for itself" because the leech neurons are able to form their own connections from one to another. Normal silicon computers only make the connections they are told to by the programmer.
This flexibility means the biological computer works out its own way of solving the problem. "With the neurons, we only have to direct them towards the answer and they get it themselves," says Professor Ditto.
This approach to computing is particularly suited to pattern recognition tasks like reading handwriting, which would take enormous amounts of power to do well on a conventional computer.
Each neuron's electrical activity corresponds to a number
These features can be used to make each neuron represent a number. Calculations are then performed by linking up the individual neurons.
Leech neurons are used because they have been extensively studied and are well understood.
Though much simpler, the neuron computer works in a similar way to the human brain. Professor Ditto says a robot brain is his long-term aim, noting that conventional supercomputers are far too big for a robot to carry around.
However, in the immediate future, the team from Georgia Tech and Emory University are working on enabling their computer to do multiplication.
The biological computer is featured on BBC One's Tomorrow's World at 1930 BST on Wednesday 2 June 1999.
Building CouchApps
Create web applications stored in an Apache CouchDB database
Before you start
This tutorial is for web application developers interested in creating database-driven applications using nothing but HTML, CSS, and JavaScript. You should know how to write JavaScript and how to manipulate the Document Object Model (DOM) of an HTML page using JavaScript. You should also have some experience using a library tool, such as jQuery or Dojo.
About this tutorial
Apache CouchDB is an open source document-oriented database management system that stores data as JSON objects. Traditional database systems allow you to perform data retrieval and update functions using a series of SQL statements that are executed through some form of proprietary client software or API. Apache CouchDB is different — you send your queries or updates using a RESTful HTTP API, making it simple to communicate with Apache CouchDB in virtually any modern programming language.
Because of the architecture Apache CouchDB is built on, it is actually possible to build entire web applications that reside inside an Apache CouchDB database. We call these applications CouchApps. CouchApps allow you to create full database-driven applications using nothing but HTML, CSS and JavaScript. The beauty of these apps is that they allow you to take full advantage of Apache CouchDB's powerful replication features to replicate your CouchApp across Apache CouchDB instances. This allows you to keep your CouchApp on several devices, and synchronize them, with automated incremental replication keeping your data up-to-date on each device.
In this tutorial, you will learn how to create your own CouchApp using HTML, CSS, and JavaScript. Your application will perform database operations using Ajax powered by the jQuery framework. The application you will build is a contact manager that allows you to view, create, edit, and delete your contacts. Finally, you will learn how to replicate this application between two Apache CouchDB instances.
Prerequisites
You will need the following tools to follow along with this tutorial:
An Apache CouchDB database instance, v1.0.1 or higher
The CouchApp tool, version 0.7.0 or higher
See Related topics for download information and Downloadable resources for the source code of our sample application.
Introducing Apache CouchDB and CouchApps
In this section we'll look at the advantages of using Apache CouchDB over a traditional database solution.
Apache CouchDB versus traditional database solutions
Apache CouchDB is a database system that works differently compared to traditional database solutions such as IBM DB2, Oracle, or MySQL. In place of the structured format of databases, tables, and columns offered by these solutions, Apache CouchDB instead works by storing documents. Documents are free-form structures, which means that you can have any combination of fields and field structures, and these can differ for every document in the database.
For example, with a traditional relational database you might store the basic information about a contact by defining a contact table using the statement shown in Listing 1.
Listing 1. Storing basic contact information in a traditional relational database
CREATE TABLE contact (
    id int(11) NOT NULL AUTO_INCREMENT,
    title char(20) DEFAULT NULL,
    firstname char(20) DEFAULT NULL,
    lastname char(20) DEFAULT NULL,
    PRIMARY KEY (id)
)
This places a rigid structure on your data, which can be both useful and restricting at the same time. For example, what happens if you need to add a middle name to the table? You would need to add a new field to accommodate the new data. Another element to consider is that additional information — for example, the telephone numbers used to contact an individual — would probably exist in another table. To get the phone number for an individual you would need to perform a query with a join (or subquery) to match the base contact table to the phone number table. You would have to do the same with other data points, such as addresses and email accounts, or with more flexible data, such as important dates, spouses, and other links.
Storing information in Apache CouchDB
With Apache CouchDB, information is instead stored in documents. The documents are freeform and written using JavaScript Object Notation (JSON), allowing you to combine lists, hashes, and traditional fields in a single document. For example, Listing 2 shows a contact in JSON format.
Listing 2. Contact information in JSON format
{
    "firstname" : "Martin",
    "address" : {
        "home" : [ "Some road", "Some town", "Postcode", "Country" ]
    },
    "title" : "Mr",
    "lastname" : "Brown",
    "phones" : {
        "home" : "09874978",
        "mobile" : "0892374908"
    }
}
In Listing 2, all of the information about the contact is in a single document. However, the document structure is not fixed in any way. Listing 3 shows a different contact from the same database.
Listing 3. Different contact
{
    "email" : {
        "work" : "sample@example.com",
        "home" : "other@example.com"
    },
    "firstname" : "Paulie"
}
Don't think that the use of such a freeform structure for your database content means that you lose the ability to enforce a structure. You can use a validation routine that can check not only the structure of the document, but also the contents of those fields.
Documents within Apache CouchDB are stored using a document ID. You can use any string as a document ID, so your contact could be stored with the document ID 'MartinBrown', or you can allow Apache CouchDB to create a UUID (Universally Unique ID).
One final element of the Apache CouchDB system is that unlike traditional databases, you do not need a special library or interface system to access or update the data. Instead, the entire interface is built around a REST-like interface accessible through anything that can access a web page over HTTP. Thus, we can access the document MartinBrown, stored within the database 'contacts' on the local machine, by opening a web browser and accessing http://127.0.0.1:5984/contacts/MartinBrown. The contact 'Paulie' would be stored at the URL http://127.0.0.1:5984/contacts/Paulie.
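That URL scheme is regular enough to build mechanically. As a small sketch (the helper name docUrl is ours, not part of any CouchDB library), here is how a client might assemble the document URL from the host, database, and document ID used in the text:

```javascript
// Build the REST URL for a CouchDB document: http://host/database/docId
// encodeURIComponent also handles document IDs that contain characters
// such as "/" which must be percent-encoded in the URL path.
function docUrl(host, database, docId) {
  return "http://" + host + "/" + encodeURIComponent(database) +
         "/" + encodeURIComponent(docId);
}

console.log(docUrl("127.0.0.1:5984", "contacts", "MartinBrown"));
// → "http://127.0.0.1:5984/contacts/MartinBrown"
```

Fetching that URL with any HTTP client (a browser, curl, or XMLHttpRequest) returns the document as JSON.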
What is a CouchApp?
This simple web interface to the database also provides the basis of the CouchApp. A CouchApp is an HTML5 and JavaScript-based interface to the documents stored within an Apache CouchDB database. The documents and code that make up the interface and application are also stored within Apache CouchDB as design documents. The result is an application (including display elements) that can be entirely self-contained within the database that provides the data, making the entire process of building and interacting with your application focused on the information that you want to present.
The use of JavaScript as a core part of CouchApps also extends to the server, where JavaScript is used to construct views of your database. In a traditional database, like Oracle, the SQL and the database structure provide the ability to pull out information. This makes it easy to pull out a list of all the records where the firstname field is 'Martin'. However, with Apache CouchDB, the data is stored in documents, not tables. To achieve the same result you would need to open every document in the database, work out if it contained the specified field, and then add the document to a list if the field matched what you wanted. This is a time (and CPU) expensive process, especially if your database contains thousands or even millions of documents.
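To see why that document-by-document search is expensive, here is a rough sketch in plain JavaScript of what the brute-force approach amounts to (the sample documents are made up for illustration):

```javascript
// Brute-force equivalent of "find all docs where firstname is 'Martin'":
// without an index, every document must be examined on every query.
const docs = [
  { _id: "MartinBrown", firstname: "Martin", lastname: "Brown" },
  { _id: "Paulie",      firstname: "Paulie" },
  { _id: "MartinSmith", firstname: "Martin", lastname: "Smith" }
];

function findByFirstname(allDocs, name) {
  const matches = [];
  for (const doc of allDocs) {          // O(n) over the whole database
    if (doc.firstname === name) {
      matches.push(doc);
    }
  }
  return matches;
}

console.log(findByFirstname(docs, "Martin").map(d => d._id));
// → ["MartinBrown", "MartinSmith"]
```

A view does this iteration once on the server and saves the result as an index, so subsequent queries avoid the full scan.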
To improve the performance for these types of operation, Apache CouchDB uses views. Views perform the operation of iterating over every document and building a list of the documents with specific fields. The views are built on the server, and the resulting view is stored on disk as an index to the underlying documents. This improves the performance when retrieving lists of documents in this fashion, and makes it easy to pull out records according to whether they match a specific field value.
To make the entire process of building a CouchApp easier, there is a command-line tool called CouchApp that can create stub and template code for your Apache CouchDB application, while creating files on the local filesystem that you can then edit and 'push' to your Apache CouchDB server using the CouchApp command line tool. This simplifies the entire process and means that you can concentrate on building the application without worrying about uploading the application to Apache CouchDB.
There are some additional differences that we won't describe in detail here, but that we will cover in part as we go through the rest of the tutorial. For example, documents can also include attachments (one or more files associated with the document), and all document revisions are stored, with each update to a document forming a new 'revision'. The CouchApp tools hide this complexity to make the entire system easier to use.
Installing CouchApp
In this section, we will install and configure CouchApp.
Installing Apache CouchDB
Before installing CouchApp, you need to install Apache CouchDB. You can download Apache CouchDB in a variety of formats, including as source, which you can build yourself, or as a standalone application for running on Windows™, Mac OS X and Linux®. For example, the Mac OS X application, Apache CouchDBX, runs as a standard application.
On Linux and UNIX®, you will get a binary, couchdb, which you can run from the command-line (see Listing 4).
Listing 4. Running the couchdb binary from the command-line
$ couchdb
Apache CouchDB 1.0.1 (LogLevel=info) is starting.
Apache CouchDB has started. Time to relax.
[info] [<0.33.0>] Apache CouchDB has started on http://127.0.0.1:5984/
You are now ready to go! You can check that the database is running by accessing the given URL. If you want to open up your server so that it can be accessed over the network, change the bind parameter within the local.ini configuration file to match the IP address (not hostname) of your Apache CouchDB server.
Installing CouchApp
For the CouchApp command line tool, you need to download the CouchApp tar or Zip package from GitHub (see Related topics). You will need an installation of Python on your machine, but the CouchApp installer will handle all of the additional library dependencies for you.
After you have downloaded the package, extract it, and then change into the CouchApp directory: $ cd couchapp .
Now run the Python setup tool to download and install any dependencies and install the couchapp tool: $ python setup.py install .
You can test the installation by running CouchApp, which should return the help information for the tool: $ couchapp .
Note: Normally you would not create a database that allows anybody to access and update it. Apache CouchDB does support authentication and different levels of security and authority for performing different operations, but we do not cover them in this tutorial.
Configuring your database and creating your CouchApp
As a good way of understanding the simplicity of Apache CouchDB, you can create a new database within your Apache CouchDB instance from the command line using the curl tool. To create a database, you issue a PUT HTTP command to the URL of the database you want to create. For example, to create a contacts database, you could use the command line: $ curl -X PUT http://127.0.0.1:5984/contacts
The JSON response from the command should indicate that the operation completed successfully: {"ok":true}.
Alternatively, go into Futon, using http://127.0.0.1:5984/_utils . You can create a new database using the Futon administration interface.
You now have an empty database. To create a stub application, you can use CouchApp to generate all of the basic files that you need on your file system ready to be uploaded to your Apache CouchDB database. You can do this by running: $ couchapp generate app contacts .
This creates a directory, contacts, which contains an array of files and contents that we can use to build our contacts application. You can see a top-level file list in Listing 5.
Listing 5. Top-level file list
$ ls -al contacts/
total 605
drwxrwxrwx 9 mcco mcco 4096 Dec  1 14:49 ./
drwxrwxrwx 3 mcco mcco 4096 Dec  1 14:49 ../
-rw-rw-rw- 1 mcco mcco  174 Dec  1 14:49 .couchappignore
-rw-rw-rw- 1 mcco mcco    2 Dec  1 14:49 .couchapprc
-rw-rw-rw- 1 mcco mcco 1660 Dec  1 11:51 README.md
drwxrwxrwx 3 mcco mcco 4096 Dec  1 14:49 _attachments/
-rw-rw-rw- 1 mcco mcco   16 Dec  1 14:49 _id
-rw-rw-rw- 1 mcco mcco   70 Dec  1 11:51 couchapp.json
drwxrwxrwx 4 mcco mcco 4096 Dec  1 14:49 evently/
-rw-rw-rw- 1 mcco mcco   10 Dec  1 11:51 language
drwxrwxrwx 2 mcco mcco 4096 Dec  1 14:49 lists/
drwxrwxrwx 2 mcco mcco 4096 Dec  1 14:49 shows/
drwxrwxrwx 2 mcco mcco 4096 Dec  1 14:49 updates/
drwxrwxrwx 3 mcco mcco 4096 Dec  1 14:49 vendor/
drwxrwxrwx 3 mcco mcco 4096 Dec  1 14:49 views/
Some of the major elements of this are:
views — contain the views on the database. These are JavaScript functions that build a list of keys and data that you want returned, the equivalent of a typical database query.
lists — contain lists that are used to build formatted versions of the view output. Lists are JavaScript functions that take the information from a view and format it (usually as HTML) for display.
shows — display a single document, rather than a list of documents as provided by a view. Like lists, a show is defined as a JavaScript function.
attachments — contain attachments for the application, including index.html and JavaScript files.
Views are critical to the way you access information from the database when you don't know the document ID of the document you want to load. Lists and shows provide built-in methods for displaying information. However, you can also use other methods, such as the jQuery library: the view returns the list of documents, which is then processed client-side by the jQuery Apache CouchDB library.
Editing index.html
The default document, index.html, is contained within the database attachments directory, contacts/_attachments/index.html. You should edit this to contain some default links to the database.
To make the best use of the environment offered by jQuery and Apache CouchDB, we will define the entire interface for the Contacts application dynamically. This will use JavaScript to provide the different elements, including displaying the results of the view and the forms used to edit the contact information.
For that to work, you need to edit the index.html file to that shown in Listing 6.
Listing 6. Editing the index.html file
<!DOCTYPE html>
<html>
<head>
<title>Contacts</title>
<link rel="stylesheet" href="style/main.css" type="text/css">
<script src="vendor/couchapp/loader.js"></script>
<script src="recordedit.js"></script>
</head>
<body>
<div id="account"></div>
<h1>Contacts</h1>
<div id="items">
<div id="add"><a href="#" class="add">Add Contact</a></div>
<div id="contacts"></div>
<div id="contactform"></div>
</div>
</body>
</html>
The key elements of structure are:
The loading of the vendor/couchapp/loader.js script. This in turn loads the jQuery and jQuery Couch libraries, among others.
The loading of the recordedit.js script. This is the script we will populate with the JavaScript functions used to build the application.
A button that will be used to trigger the Add form for creating a new contact.
A div element, with the id contacts, that will be used to display the contact list.
A div element, with the id contactform, that will be used to display the contact form.
Once you have edited the file, you need to push the application to your Apache CouchDB database using the CouchApp command-line tool: $ couchapp push contacts http://127.0.0.1:5984/contacts .
The first argument is the instruction to push (publish) the application, the second is the local directory, contacts, where the application is stored, and the third is the URL of the Apache CouchDB database where you want to upload the application. Once the push has completed successfully, you can view the uploaded application at the URL: http://127.0.0.1:5984/contacts/_design/contacts/index.html
It is worth dissecting this URL:
127.0.0.1:5984 is the hostname, and port number, of the Apache CouchDB server. By default, servers run on port 5984.
contacts is the name of the database.
_design is a special identifier to Apache CouchDB that indicates you want to access the design document. Design documents contain the view, list, and show definitions. You can have more than one design document for a given database.
contacts is the name of the design document. The CouchApp tool creates a design document with the same name as your application by default.
index.html is the name of the attachment for the contacts design document.
CouchDB also includes a rewriting module that will simplify these URLs to something more friendly. See Related topics for an article on this topic.
With the basic document in place, you can start to build the rest of the application.
Displaying a list of contacts
In this section, we will create and display a list of contacts.
Creating a view
To construct a list of contacts, you first need to create a view. This is a JavaScript function that accepts a document as the only argument. The function is executed by Apache CouchDB on every document in the database, and it should output keys (which are used for displaying and filtering information) and corresponding values that you want to output for each document in this view. This process is called the map, as you are mapping the document contents to the information that you want to extract. There is another step, called reduce, which can be used to summarize or simplify the information, but we will not need that for a contacts application.
For example, to output the name from the contact document as the key, and the entire contact record as the value, you would need to write the view shown in Listing 7.
Listing 7. Writing the view
function(doc) {
    if (doc.name) {
        emit(doc.name, doc);
    }
}
The anonymous function accepts a single argument, the document. The function then checks to ensure the record has a field called name, and if true, the emit() function then returns two values: The first is the key, and the second is the value, in this case, a copy of the document. Both keys and values can be any valid JSON structure. Keys are used by Apache CouchDB during searching and paging. The values merely contain the information that you want to expose in this view when it is accessed.
Within CouchApp, you can create a new view from the command line using the generate command: $ couchapp generate view contacts byname
This creates the view byname in the directory contacts/views/byname, with two files, map.js and reduce.js. Edit the map.js file and change it to the function shown in Listing 7.
You can now push the application to your Apache CouchDB database again. Views are accessible through the browser interface. You can access the view byname on the contacts design document by accessing the URL http://127.0.0.1:5984/contacts/_design/contacts/_view/byname. Again, we are using the design document contacts, this time requesting the output of a view (identified by the _view in the path), with the view name of byname.
At this stage, the view will be empty: {"total_rows":0,"offset":0,"rows":[]}.
Displaying the view within the application
To display the view within our application, we can use the jQuery Couch library to access the view, iterate over each record returned by the view, and then print the record information.
A function for this is shown in Listing 8.
Listing 8. Displaying the view within our application
db = $.couch.db("contacts");

function updatecontacts() {
    $("#contacts").empty();
    db.view("contacts/byname", {
        success: function(data) {
            for (i in data.rows) {
                $("#contacts").append(
                    '<div id="' + data.rows[i].value._id + '" class="contactrow"><span>' +
                    data.rows[i].value.name + "</span><span>" +
                    data.rows[i].value.phone + "</span><span>" +
                    '<a href="#" id="' + data.rows[i].value._id + '" class="edit">Edit Contact</a>' +
                    "</span><span>" +
                    '<a href="#" id="' + data.rows[i].value._id + '" class="remove">Remove Contact</a>' +
                    "</span></div>"
                );
            }
        }
    });
}
The first line in Listing 8 sets a variable used to access the database. The updatecontacts() function first empties the div element that will be used to display the contacts list, then it accesses the results of the view that was just created. If the view access was successful, an anonymous function is called with the returned view data as a JSON structure. The function then iterates over the content and builds a contact row that outputs the contact name and phone number.
The view result is represented as an array (rows), with each element of the array being a JSON structure containing the key returned by the view and the corresponding value. Hence, we can access the phone number of a contact record returned by the view through the value field of the array element (for example, value.phone).
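For instance, given a result shaped like the data handed to the success callback, pulling the phone numbers out of the rows looks like this (the contact documents here are illustrative):

```javascript
// Illustrative view result, shaped like the structure the success
// callback receives: one row per emitted key/value pair.
var data = {
  total_rows: 2,
  offset: 0,
  rows: [
    { id: "a1", key: "Alice", value: { _id: "a1", name: "Alice", phone: "555-0100" } },
    { id: "b2", key: "Bob",   value: { _id: "b2", name: "Bob",   phone: "555-0101" } }
  ]
};

// Document fields live under value, so value.phone reaches the phone number.
var phones = data.rows.map(function (row) {
  return row.value.phone;
});

console.log(phones); // [ '555-0100', '555-0101' ]
```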
The output produces an Edit Contact and a Remove Contact link, which also includes the ID of the underlying Apache CouchDB document. This will be used to provide the information when updating and deleting a contact.
To add this function to the contacts application, create a new file, contacts/_attachments/recordedit.js, and add the function to the file.
The second step is to ensure that the document is loaded, and the updatecontacts() function is called to display the current contact list. jQuery makes this easy by providing a ready() function on the document. Everything within the function you assign to this operation will be executed when the document has finished loading. You can see the definition in Listing 9.
Listing 9. ready() function
$(document).ready(function() {
    updatecontacts();
});
Push the application again, and reload the index page. There shouldn't be any changes to what is displayed, but it is good practice to ensure that the application has not been broken when adding in new components.
Of course, the list will still be empty, so let's create a form that can be used to view some contacts.
Creating, editing, and deleting contacts
In this section, we will show you how to create, edit, and delete contacts.
Creating new contacts
As a web application, creating a new contact involves presenting a form to the user that can be filled in, and then writing the content of the form to the Apache CouchDB database. The first step is a new function that dynamically generates the HTML for a form, and then attaches it to the contactform div element defined in index.html. The function for this is shown in Listing 10.
Listing 10. Dynamically generating the HTML as a form
function contactform(doctoedit) {
    var formhtml;
    formhtml = '<form name="update" id="update" action="">';
    if (doctoedit) {
        formhtml = formhtml + '<input name="docid" id="docid" type="hidden" value="' +
            doctoedit._id + '"/>';
    }
    formhtml = formhtml + '<table>';
    formhtml = formhtml + '<tr><td>Name</td>' +
        '<td><input name="name" type="text" id="name" value="' +
        (doctoedit ? doctoedit.name : '') + '"/></td></tr>';
    formhtml = formhtml + '<tr><td>Phone</td>' +
        '<td><input name="phone" type="text" id="phone" value="' +
        (doctoedit ? doctoedit.phone : '') + '"/></td></tr>';
    formhtml = formhtml + '<tr><td>Email</td>' +
        '<td><input name="email" type="text" id="email" value="' +
        (doctoedit ? doctoedit.email : '') + '"/></td></tr>';
    formhtml = formhtml + '</table>' +
        '<input type="submit" name="submit" class="update" value="' +
        (doctoedit ? 'Update' : 'Add') + '"/>' +
        '</form>';
    $("#contactform").empty();
    $("#contactform").append(formhtml);
}
The last line in Listing 10 is the key. The $() construct is shorthand for using jQuery functions, and here we are using it to append a form to an existing element. jQuery makes it easy to access the DOM elements of an HTML page. In this case, the # prefix looks for an element within the page DOM with the specified id attribute. This follows the same format used for CSS selectors, which makes it easy to look up different elements. The period prefix looks for items with the specified class; we will see an example of that later.
The function itself accepts a single argument, a document to be edited. We'll use this when we look at editing an existing contact. It's used in the function when building the form first to introduce the document ID for the contact (which will be needed when we update the contact record), and when setting the value of each field in the form.
The basics of the form, however, are straightforward; we generate text input elements with the name and ID of the field (name, phone, email, and so on).
For the form to be activated, you need to enable the Add Contact link in the index.html file so that it calls the function. You can do this by adding the operation to the ready() function. This ensures that the button is not active until the document has loaded. See Listing 11 for the jQuery code to update the operation when the link is clicked.
Listing 11. Showing the contact form
$("a.add").live('click', function(event) { contactform(); });
Push the application again to Apache CouchDB. You should now find that the Add Contact button populates the form with empty values. You can see a sample of this in Figure 1.
Figure 1. Empty Contact Form
The second part of creating a contact is actually saving the document to the database when the Submit button is pressed. The function to handle this is shown in Listing 12.
Listing 12. Saving the document to the database when the Submit button is pressed
$("input.update").live('click', function(event) { var form = $(event.target).parents("form#update"); db.saveDoc(builddocfromform(null,form), { success: function() { $("form#update").remove(); updatecontacts(); }, }); return false; });
This JavaScript fragment should be added to the existing ready() function. The fragment adds the function that will be called when the Update button is clicked. The first line of the inline function creates a variable through which we can access the fields of the form.
The saveDoc() function will save a JSON structure as a document to the Apache CouchDB. The first argument should be the document data, and the second a JavaScript object, which defines what happens if saving the document was a success. Remember that JavaScript operations that go out to access information are asynchronous, that is, the request is sent to the host (Apache CouchDB), and you have to wait for the response to come back before operating on the information.
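Because of that asynchrony, any code placed directly after the saveDoc() call runs before the save has completed; only the success callback sees the stored document. The sketch below uses a hypothetical saveDocStub() that merely mimics this callback contract (the revision string is made up, not a real CouchDB response):

```javascript
// Hypothetical stand-in for db.saveDoc(): it completes asynchronously,
// calling options.success with the saved document once the "server" replies.
function saveDocStub(doc, options) {
  setTimeout(function () {
    doc._rev = "1-abc";  // CouchDB assigns a revision on save
    options.success(doc);
  }, 0);
}

var order = [];
saveDocStub({ name: "Alice" }, {
  success: function (saved) {
    order.push("saved " + saved._rev);
  }
});
order.push("after the call");  // runs first: the save has not finished yet

setTimeout(function () {
  console.log(order); // [ 'after the call', 'saved 1-abc' ]
}, 10);
```

This is why the tutorial's code does all of its follow-up work (removing the form, refreshing the list) inside the success callback rather than after the call.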
The first argument to this function is the return value of another function, builddocfromform() . This is used to simplify the construction of the document from the form data, whether you are creating a new document, or editing an existing one. The code for this is shown in Listing 13.
Listing 13. builddocfromform() function
function builddocfromform(doc, form) {
    if (!doc) {
        doc = new Object();
    }
    doc.name = form.find("input#name").val();
    doc.phone = form.find("input#phone").val();
    doc.email = form.find("input#email").val();
    return (doc);
}
The function accepts an existing document object, initializing it to an empty JavaScript object if document is not defined. Then, it uses the supplied form jQuery object to access each field in the form and populate the document object before returning it. You could add more fields to the function here (providing you also added the field definitions to the form HTML).
The anonymous function attached to the success field will be called if the document was written to the database successfully. If this occurs, the contact form is removed from the page, and the updatecontacts() function is called to refresh the displayed list of contacts.
You can now push the application to your Apache CouchDB instance again, and try adding a contact to the system. You should end up with one or more contacts in Apache CouchDB, as shown here in Figure 2.
Figure 2. Some contacts
Editing existing contacts
We have already laid much of the groundwork for editing an existing contact. The JavaScript function for outputting the form already accepts an existing document, and the form is populated with the Apache CouchDB document ID and existing values from that object.
Two changes are required. The first is to activate the Edit Contact link output against each contact in the list. Wiring up each link individually would be a nightmare, but jQuery can identify when any link has been clicked by identifying the target DOM object. That information can be used to access the ID of the document, which was embedded into the link, and then to load the record from Apache CouchDB and call the form function. You can see this in Listing 14.
Listing 14. Identifying when a link has been clicked by identifying the target DOM object
$("#contacts").click(function(event) { var target = $(event.target); if (target.is('a')) { id = target.attr("id"); if (target.hasClass("edit")) { db.openDoc(id, { success: function(doc) { contactform(doc); }}); } } });
This should be added to the ready() function. Following the lines of the code in order:
The first line attaches a click handler to the #contacts DOM element.
The second line identifies the target that was clicked.
The third line checks that the clicked element is an 'a' (anchor) element.
The fourth line reads the id attribute. In the contacts list, the id attribute of each link contains the document ID of the corresponding contact.
The remaining lines check the class of the link that was clicked. The Edit Contact links have a class of edit, and the remove links a class of remove. If the link is an edit link, the document is loaded from Apache CouchDB (with openDoc()), and when it has successfully been loaded, the contactform() function is called with the document data. This presents a contact form populated with the existing contact information to be edited.
The result is that when you click on the Edit Contact link against a contact, a form with the contact details is displayed.
You might wonder why the contact information that is already displayed is not used to populate the form directly. The reason is that, in a potentially multi-user application, you want to ensure that the document has not been updated (or deleted) by another user before you edit it. Loading the record first confirms that the document still exists and that you have the latest version as stored in the database.
The other half of the process is to change the function that is called when the Submit button on the form is pressed. Here you need to identify whether an existing record is being updated. Since the contactform() function only includes the document ID if we are updating an existing document, you can use this to determine the operation type. If we are updating an existing document, then that document should be loaded from the database, and then the form values updated, before the document is saved back to the database. The resulting code is shown in Listing 15.
Listing 15. Changing the function that is called when the Submit button on the form is pressed
$("input.update").live('click', function(event) { var form = $(event.target).parents("form#update"); var id = form.find("input#docid").val(); if (id) { db.openDoc(id, { success: function(doc) { db.saveDoc(builddocfromform(doc,form), { success: function() { $("form#update").remove(); updatecontacts(); }}); }, }); } else { db.saveDoc(builddocfromform(null,form), { success: function() { $("form#update").remove(); updatecontacts(); }, }); } return false; });
Again, we load the existing document and update its contents using the builddocfromform() function. This ensures that we are updating the latest version of the document, which matters because Apache CouchDB records a revision for every document and uses the revision number as a check that you are updating the current version.
There is another reason why we load the document before updating the fields and saving it back. The form, as it stands, only supports name, phone, and email fields of the document. But what if the document contains other fields that this form does not yet support? By loading the entire existing document, and updating only the fields that were on the form, you won't lose any of the fields that the form does not know about.
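The effect is easy to demonstrate with plain objects. This sketch mirrors what builddocfromform() does, without jQuery; the document and the extra birthday field are illustrative:

```javascript
// Document as it exists in the database, including a field the form
// does not handle (and the _rev CouchDB uses for its revision check).
var stored = {
  _id: "a1",
  _rev: "3-def",
  name: "Alice",
  phone: "555-0100",
  email: "alice@example.com",
  birthday: "1980-04-01"  // unknown to the form
};

// Values read back from the form fields.
var formValues = { name: "Alice B.", phone: "555-0199", email: "alice@example.com" };

// Update only the form-backed fields on the loaded document.
function applyForm(doc, values) {
  doc.name = values.name;
  doc.phone = values.phone;
  doc.email = values.email;
  return doc;
}

var updated = applyForm(stored, formValues);

console.log(updated.birthday); // "1980-04-01" -- preserved
console.log(updated._rev);     // "3-def" -- sent back so CouchDB accepts the save
```

Had we built a fresh document from the form alone, the birthday field (and the revision) would have been lost.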
Of course, there are times when you want to delete a record.
Deleting existing contacts
Deleting an existing contact should be straightforward. You can add another hook to the Remove Contact link, like the Edit Contact link. However, you don't want the Remove Contact link to be accidentally clicked, so you can provide a confirmation process to ensure that the deletion is required.
You can use some of the principles already demonstrated to output a new set of links, and then use the click events to make these new links either confirm, or cancel, the remove request.
The code should be added after the edit function in Listing 14. The code is shown in Listing 16.
Listing 16. Deleting an existing contact
if (target.hasClass("remove")) { html = '<span class="confirm">Really Delete? ' + '<a href="#" id="' + id + '" class="actuallydel">Delete</a>' + '<a href="#" id="' + id + '" class="canceldel">Cancel</a></span>'; target.parent().append(html); } if (target.hasClass("actuallydel")) { db.openDoc(id, { success: function(doc) { db.removeDoc(doc, { success: function() { target.parents("div.contactrow").remove(); } }); } } ); } if (target.hasClass("canceldel")) { target.parents("span.confirm").remove(); }
When the user clicks the Remove Contact link, two further links are added next to the contact. If the Delete link is clicked, the document is loaded (to confirm it still exists), and the removeDoc() function is called to delete it. If this is successful, you remove the entire contact row, which you identify by looking for the parent DOM element. If the user clicks Cancel, you just remove the confirmation links.
You can see a contact awaiting confirmation for deletion in Figure 3.
Figure 3. Contact awaiting confirmation for deletion
The final application
With all of the different elements to the process, it can be difficult to see the entire application. Listing 17 shows the recordedit.js file, which contains all of the JavaScript for the application.
Listing 17. recordedit.js file
db = $.couch.db("contacts"); function updatecontacts() { $("#contacts").empty(); db.view("contacts/byname", { success: function(data) { for (i in data.rows) { $("#contacts").append('<div id="' + data.rows[i].value._id + '" class="contactrow"><span>' + data.rows[i].value.name + "</span><span>" + data.rows[i].value.phone + "</span><span>" + '<a href="#" id="' + data.rows[i].value._id + '" class="edit">Edit Contact</a>' + "</span><span>" + '<a href="#" id="' + data.rows[i].value._id + '" class="remove">Remove Contact</a>' + "</span></div>" ); } } }); } function contactform(doctoedit) { var formhtml; formhtml = '<form name="update" id="update" action="">'; if (doctoedit) { formhtml = formhtml + '<input name="docid" id="docid" type="hidden" value="' + doctoedit._id + '"/>'; } formhtml = formhtml + '<table>'; formhtml = formhtml + '<tr><td>Name</td>' + '<td><input name="name" type="text" id="name" value="' + (doctoedit ? doctoedit.name : '') + '"/></td></tr>'; formhtml = formhtml + '<tr><td>Phone</td>' + '<td><input name="phone" type="text" id="phone" value="' + (doctoedit ? doctoedit.phone : '') + '"/></td></tr>'; formhtml = formhtml + '<tr><td>Email</td>' + '<td><input name="email" type="text" id="email" value="' + (doctoedit ? doctoedit.email : '') + '"/></td></tr>'; formhtml = formhtml + '</table>' + '<input type="submit" name="submit" class="update" value="' + (doctoedit ? 
'Update' : 'Add') + '"/>' + '</form>'; $("#contactform").empty(); $("#contactform").append(formhtml); } function builddocfromform(doc,form) { if (!doc) { doc = new Object; } doc.name = form.find("input#name").val(); doc.phone = form.find("input#phone").val(); doc.email = form.find("input#email").val(); return(doc); } $(document).ready(function() { updatecontacts(); $("#contacts").click(function(event) { var target = $(event.target); if (target.is('a')) { id = target.attr("id"); if (target.hasClass("edit")) { db.openDoc(id, { success: function(doc) { contactform(doc); }}); } if (target.hasClass("remove")) { html = '<span class="confirm">Really Delete? ' + '<a href="#" id="' + id + '" class="actuallydel">Delete</a>' + '<a href="#" id="' + id + '" class="canceldel">Cancel</a> </span>'; target.parent().append(html); } if (target.hasClass("actuallydel")) { db.openDoc(id, { success: function(doc) { db.removeDoc(doc, { success: function() { target.parents("div.contactrow").remove(); } }); } } ); } if (target.hasClass("canceldel")) { target.parents("span.confirm").remove(); } } }); $("a.add").live('click', function(event) { contactform(); }); $("input.update").live('click', function(event) { var form = $(event.target).parents("form#update"); var id = form.find("input#docid").val(); if (id) { db.openDoc(id, { success: function(doc) { db.saveDoc(builddocfromform(doc,form), { success: function() { $("form#update").remove(); updatecontacts(); }}); }, }); } else { $db.saveDoc(builddocfromform(null,$form), { success: function() { $("form#update").remove(); updatecontacts(); }, }); } return false; }); });
After you have the file updated, push the application using CouchApp up to your Apache CouchDB instance and try it out.
One more thing — replicating your CouchApp
One of the main features of Apache CouchDB is that you can replicate the documents in your database to another database, whether that database exists on the same Apache CouchDB instance or a remote one. The synchronization occurs in both directions, which means that you can replicate your contacts database from your desktop machine to your laptop, make changes on your laptop while away, and then synchronize those changes back to your desktop so that the two databases are kept in synchronization.
As an added bonus, CouchApps, which are stored in the Apache CouchDB database as documents, are also synchronized. This means that when you replicate the contacts database, you are also replicating the application code that makes up the CouchApp. In more traditional environments, this would be difficult to achieve; with CouchApps, it comes as part of the core Apache CouchDB functionality.
You can set up replication by sending a request to the Apache CouchDB server using a command-line tool such as curl, but an alternative is to use Futon, a CouchApp built into every Apache CouchDB instance that provides a complete management and editing interface for the server and the documents stored in its databases.
You can access Futon by visiting http://127.0.0.1:5984/_utils. This shows the Futon interface, as seen here in Figure 4.
Figure 4. Futon interface
Click on the Replicator link on the right hand side, and you will be presented with the form as seen here in Figure 5.
Figure 5. Replicator form
Replication can occur either as a push (from the current Apache CouchDB instance to a remote database) or as a pull (from a remote Apache CouchDB instance to the local one). Replication can be performed once, or it can be made continuous, in which case changes on one database are automatically replicated to the other whenever it is available.
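If you prefer the command line to Futon, replication is driven by a JSON document POSTed to the server's _replicate endpoint (with curl, for example). The body below sketches a continuous push of the local contacts database to a second machine; the target hostname is a placeholder you would replace with your own:

```json
{
  "source": "contacts",
  "target": "http://laptop.example.com:5984/contacts",
  "continuous": true
}
```

Dropping the continuous flag gives a one-off replication instead.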
For example, if you start an Apache CouchDB instance on your laptop, you can replicate the contacts database from the CouchDB server to the local contacts database. You can see the filled-in form, and the result of starting the replication process, in Figure 6.
Figure 6. Successful replication
After the application is on the laptop instance of Apache CouchDB, you can edit and update the database using exactly the same interface because you have replicated the entire application. What's more, you can even replicate the changes back to your desktop when you get home.
Suggested improvements
The screenshot samples in this tutorial may look different from your application, because the CSS has been updated to improve the layout slightly. Fortunately, because we have used classes and IDs on all of the different components (including the form, contact list, and links), changing the formatting should be straightforward. The CSS is defined within the contacts/_attachments/style/main.css file on the local file system, and is included when you push the application with couchapp. Changing the CSS is probably the easiest improvement you can make to the application.
After that, you may want to improve the data that is captured (which can be fixed by updating the form and the structure for storing that data). For example, as outlined in the introduction, you could add support for adding multiple phone numbers by storing the phone numbers within a separate structure in the document.
After you have extended the information displayed, you may want to start improving the contacts list so that you can display it in pages. Paging functionality in Apache CouchDB uses the key returned as part of the view. You may also want to construct different views, and provide search functionality, all of which are possible by building and constructing additional views that build different representations of the contact list.
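As a hint of what paging looks like in practice, views accept query parameters such as limit (cap the number of rows returned) and startkey (begin at a given key, which must be JSON-encoded, hence the quoted value below). An illustrative request against the byname view:

```
http://127.0.0.1:5984/contacts/_design/contacts/_view/byname?limit=10&startkey=%22Smith%22
```

Requesting the next page then means passing the last key of the previous page as the new startkey.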
Summary
CouchApps and Apache CouchDB provide a rich environment for building web applications. The entire process of constructing the forms, saving the data, and reporting on the database content is stored within the Apache CouchDB database itself. Using JavaScript and the jQuery library removes much of the complexity of constructing the application. Meanwhile, Apache CouchDB eliminates the need to define tables or write complex queries to get at the information you have stored. Finally, with Apache CouchDB you can easily replicate your entire application, including all of its data, to another Apache CouchDB instance, such as your laptop or mobile phone.
Downloadable resources
Related topics
TUCSON - The Arizona attorney general says the city of Tucson may be violating state law by destroying guns that have been turned over to police.
In a response to a complaint filed by state Rep. Mark Finchem, Attorney General Mark Brnovich did not indicate whether he will pursue legal action to stop the city from destroying most guns taken in by police, The Arizona Daily Star reported.
Instead, the AG is giving Tucson time to respond to the opinion; the city contends the law that gives the state sway over local matters is unconstitutional.
“The Ordinance, which requires the Tucson Police Department to destroy forfeited firearms, conflicts with state law. The Office recognizes, however, that while the prior case law is most likely distinguishable, there is a question as to whether this matter is of purely local concern and thus the Ordinance might not violate state law,” Brnovich concluded in his report, issued Monday. “The Office therefore concludes … that the Ordinance may violate state law.”
Finchem in October filed a complaint stating that Tucson is violating a 2013 Arizona law that requires the sale of otherwise legal guns obtained by law enforcement agencies. He filed his complaint under a new law that says local governments that violate state statutes lose their state-shared revenue if they don’t stop.
This is the second time a lawmaker has challenged local control under the law, which passed earlier this year with strong GOP support and was a key part of Gov. Doug Ducey's legislative agenda. In an earlier action, the state challenged the town of Snowflake's enactment of a special tax to benefit a marijuana-cultivation facility. Snowflake council members, in response, repealed the part of their ordinance that Brnovich's office found clashed with state law.
The law has riled municipal officials, who complain the state is using the threat of lost shared revenue to supersede local control. State-shared revenue is a substantial portion of local budgets.
"It is the state meddling in local decisions," said Ken Strobeck, executive director of the League of Arizona Cities and Towns. "In this case, there is an honest dispute about charter government. Instead of litigation, however, the state chooses to use this 800-pound hammer," he said, referring to the threat of losing millions of dollars.
Tucson was among many police agencies in Arizona that had to adapt policies and procedures to comply with the 2013 law requiring the sale of confiscated weapons.
Police in Phoenix held a gun buy-back before the law took effect, collecting thousands of weapons through the events that officers destroyed in September 2013.
Since then, many agencies have stockpiled the weapons they used to scrap as they awaited more clarity on the law and sought to establish agreements with federally licensed firearms dealers.
Tucson Mayor Jonathan Rothschild said he believes more action against the city will occur related to the law but that he believes the law challenges a city’s sovereign status.
“The city’s position is that (the law) is unconstitutional,” he said.
Tucson officials argue destroying firearms is legal because it is a local issue involving municipal property. Brnovich’s response refutes that claim.
City Attorney Mike Rankin said the city will have to wait and see whether Brnovich pursues legal action against the city.
City records show that the Tucson Police Department has destroyed 4,820 guns since the beginning of 2013.
Councilman Steve Kozachik said Brnovich avoided the issue in his response.
“He absolutely punted the fundamental questions put before him,” Kozachik said.
Reporter Mary Jo Pitzl contributed to this story.
The System Isn’t ‘Rigged’ Against Sanders
Clinton’s winning because more Democrats want her to be the nominee.
A week ago, New York Daily News columnist and Bernie Sanders supporter Shaun King tweeted the following about the Democratic caucuses in Washington, which took place in late March:
Washington State has 7.2 million people. @BernieSanders won 71% of the votes. NONE of those votes count in the "popular vote totals". — Shaun King (@ShaunKing) May 19, 2016
Whether King intended it or not, he implied that caucuses — which often require hours of participation and mean lower turnout — are representative of what would happen if a larger electorate had its say. Well, a funny thing happened in Washington on Tuesday: The state held a mail-in, beauty-contest primary — so voting was easy, but no delegates were at stake. (The Associated Press has declared Hillary Clinton the winner.) The results are still being finalized, but Clinton leads by about 6 percentage points with more than 700,000 votes counted. Sanders won the Washington caucuses, which had 230,000 participants, by 46 percentage points.
So, turnout was much higher in the Washington primary than in the caucuses, and Clinton did much better. Something similar happened in Nebraska, where Clinton lost the early March caucuses by 14 percentage points and won the early May primary, in which no delegates were awarded, by 7 points.
Nebraska and Washington are part of a pattern. As Sanders fans claim that the Democratic primary system is rigged against their candidate and that Sanders wins when turnout is higher, they fail to point out that Sanders has benefited tremendously from low-turnout caucuses. Indeed, if all the caucuses were primaries, Clinton would be winning the Democratic nomination by an even wider margin than she is now.
Let’s start out with the real-world numbers. Here are the delegate and vote totals by contest, including caucuses and primaries, so far:
STATE | CAUCUS | CLOSED | WINNER | CLINTON VOTES (000s) | SANDERS VOTES (000s) | CLINTON DELEGATES | SANDERS DELEGATES
Iowa | ✓ | | Clinton +0 | 85 | 85 | 23 | 21
N.H. | | | Sanders +22 | 95 | 152 | 9 | 15
Nevada | ✓ | ✓ | Clinton +5 | 44 | 40 | 20 | 15
South Carolina | | | Clinton +47 | 272 | 96 | 39 | 14
Alabama | | | Clinton +59 | 309 | 76 | 44 | 9
Am. Samoa | ✓ | | Clinton +43 | <1 | <1 | 4 | 2
Arkansas | | | Clinton +36 | 146 | 66 | 22 | 10
Georgia | | | Clinton +43 | 546 | 216 | 73 | 29
Massachusetts | | | Clinton +1 | 607 | 590 | 46 | 45
Oklahoma | | | Sanders +10 | 139 | 174 | 17 | 21
Tennessee | | | Clinton +34 | 246 | 121 | 44 | 23
Texas | | | Clinton +32 | 936 | 477 | 147 | 75
Vermont | | | Sanders +72 | 18 | 116 | 0 | 16
Virginia | | | Clinton +29 | 505 | 276 | 62 | 33
Colorado | ✓ | ✓ | Sanders +19 | 50 | 73 | 25 | 41
Minnesota | ✓ | | Sanders +23 | 78 | 126 | 31 | 46
Louisiana | | ✓ | Clinton +48 | 222 | 72 | 37 | 14
Nebraska | ✓ | ✓ | Sanders +14 | 14 | 19 | 10 | 15
Kansas | ✓ | ✓ | Sanders +35 | 13 | 26 | 10 | 23
Maine | ✓ | ✓ | Sanders +29 | 16 | 30 | 8 | 17
Michigan | | | Sanders +1 | 582 | 599 | 63 | 67
Mississippi | | | Clinton +66 | 187 | 38 | 31 | 5
N. Marianas | ✓ | ✓ | Clinton +20 | <1 | <1 | 4 | 2
Florida | | ✓ | Clinton +31 | 1,101 | 569 | 141 | 73
Illinois | | | Clinton +2 | 1,040 | 999 | 79 | 77
Missouri | | | Clinton +0 | 312 | 311 | 36 | 35
North Carolina | | | Clinton +14 | 623 | 467 | 60 | 47
Ohio | | | Clinton +13 | 697 | 535 | 81 | 62
Dems abroad | | | Sanders +38 | 11 | 24 | 4 | 9
Arizona | | ✓ | Clinton +15 | 262 | 193 | 42 | 33
Utah | ✓ | | Sanders +59 | 15 | 60 | 6 | 27
Idaho | ✓ | | Sanders +57 | 5 | 19 | 5 | 18
Hawaii | ✓ | | Sanders +40 | 10 | 24 | 8 | 17
Washington | ✓ | | Sanders +46 | 62 | 167 | 27 | 74
Alaska | ✓ | ✓ | Sanders +59 | 2 | 8 | 3 | 13
Wisconsin | | | Sanders +14 | 434 | 570 | 38 | 48
Wyoming | ✓ | ✓ | Sanders +11 | 3 | 4 | 7 | 7
New York | | ✓ | Clinton +16 | 1,134 | 820 | 139 | 108
Pennsylvania | | ✓ | Clinton +12 | 922 | 722 | 106 | 83
Rhode Island | | | Sanders +12 | 53 | 67 | 11 | 13
Connecticut | | ✓ | Clinton +5 | 170 | 152 | 28 | 27
Delaware | | ✓ | Clinton +21 | 56 | 37 | 12 | 9
Maryland | | ✓ | Clinton +29 | 573 | 310 | 61 | 34
Indiana | | | Sanders +5 | 303 | 335 | 39 | 44
Guam | ✓ | ✓ | Clinton +19 | 1 | 1 | 4 | 3
West Virginia | | | Sanders +16 | 86 | 123 | 11 | 18
Kentucky | | ✓ | Clinton +0 | 213 | 211 | 28 | 27
Oregon | | ✓ | Sanders +13 | 264 | 347 | 26 | 35
Total | | | Clinton +12 | 13,463 | 10,544 | 1,771 | 1,499
Democratic votes and delegates based on actual results. Popular vote in Iowa, Nevada, Maine, Washington and Wyoming is estimated based on overall turnout.
Sources: Dave Leip’s Atlas of U.S. Presidential Elections, The Green Papers, U.S. Elections Project
Counting only caucuses, Sanders has won 63 percent of the vote, 64 percent of the delegates and 11 of the 16 contests. In doing so, he has earned 341 elected delegates, compared with Clinton’s 195 delegates, for a margin of 146 delegates. These caucuses have had approximately 1.1 million participants. As a point of comparison, turnout in the caucuses has been only about 13 percent of the total number of votes President Obama got in the 2012 presidential election in these states.
Sanders has done far worse in the states that have held primaries. Counting just primaries, including Tuesday’s in Washington, Sanders has won only 42 percent of the vote, 42 percent of delegates and 10 of the 34 statewide contests. Clinton earned 1,576 elected delegates, compared with Sanders’s 1,158, for a margin of 418. The turnout in these contests has been far higher than in the caucuses, with a little more than 24 million votes cast. That’s about 49 percent of the total number of votes Obama got in the 2012 election in these states.
Now, it is fair to point out that the caucuses have taken place in states that are demographically different than the primary states. Caucus states in 2016 are overwhelmingly white and overwhelmingly rural compared with primary states. Still, these differences don’t come close to explaining the differences in results between the caucuses and primaries so far. We can look to Nebraska and Washington as two examples of the disparity. Of course, one could argue that because no delegates were up for grabs in those states’ primaries, the campaigns didn’t really compete for residents’ votes and therefore those contests aren’t representative of what a truly competitive primary would look like there. Fortunately, because the vote in the Democratic primary has largely broken down along demographic lines, we can use statistical models to approximate what would happen if states that held caucuses had held primaries instead.
At various times, we’ve tried using demographics to model the vote in the Democratic nomination contest so far. The model considers each 2016 contest and controls for (i) the black and Hispanic share of the Democratic vote in that state in the 2008 general election, (ii) whether that primary or caucus is “open” to independent voters unaffiliated with a political party, and (iii) the margin in national primary polls at the time the contest is held. This model estimates that holding caucuses instead of primaries is a massive advantage for Sanders. In fact, Clinton would do about 20 to 25 percentage points better relative to Sanders if a state changed from a caucus to a primary, the model estimates.
Here’s how we project each caucus would have gone if a primary had been held instead:
POPULAR VOTE (THOUSANDS) PLEDGED DELEGATES STATE CLOSED WINNER CLINTON SANDERS CLINTON SANDERS Iowa Clinton +24 301 182 27 17 Nevada ✓ Clinton +29 185 101 23 12 Am. Samoa Clinton +60 4 1 5 1 Colorado ✓ Clinton +6 331 295 35 31 Minnesota Clinton +1 402 394 39 38 Nebraska ✓ Clinton +7 79 70 13 12 Kansas ✓ Sanders +12 91 116 15 18 Maine ✓ Sanders +5 92 102 12 13 N. Marianas ✓ Clinton +39 3 1 4 2 Utah Sanders +39 52 120 10 23 Idaho Sanders +37 33 72 7 16 Hawaii Sanders +17 63 88 10 15 Washington Clinton +6 471 418 53 48 Alaska ✓ Sanders +40 17 40 5 11 Wyoming ✓ Clinton +13 19 15 8 6 Guam ✓ Clinton +42 10 4 5 2 Current primary states Clinton +14 13,064 9,861 1,576 1,158 Total Clinton +12 15,216 11,880 1,847 1,423 Projected Democratic results if caucus states had held primaries
Sanders fans have claimed that because caucuses have lower turnout the current national caucus and primary vote underrates how well Sanders is doing. In fact, the opposite is true. When we switch all caucuses over to primaries, Sanders actually does worse. Clinton’s lead in the popular vote would grow from 2.9 to 3.3 million votes. Moreover, her edge in elected delegates would expand significantly. Instead of her current lead of 272 elected delegates, Clinton would be ahead by 424. Some states that were won by Sanders in caucuses, including Colorado and Minnesota, would be won by Clinton in primaries, according to our calculations.
In fact, counting the 537 superdelegates The Associated Press currently gives Clinton, she would likely have 2,384 total delegates if every state had held a primary. That’s one more than necessary to clinch the nomination.
But what would happen if every state held a primary that was open to independent voters? Independent voters, after all, have been among Sanders’s strongest groups, and Sanders supporters have consistently cited closed contests as evidence the game is rigged. We can rerun the same regression as above but estimate what would happen if all the primaries are open to unaffiliated voters.
POPULAR VOTE (THOUSANDS) PLEDGED DELEGATES STATE WINNER CLINTON SANDERS CLINTON SANDERS Iowa Clinton +24 301 182 27 17 Nevada Clinton +18 188 130 21 14 Am. Samoa Clinton +60 4 1 5 1 Colorado Sanders +6 331 373 31 35 Minnesota Clinton +1 402 394 39 38 Louisiana Clinton +39 240 100 36 15 Nebraska Sanders +5 79 88 12 13 Kansas Sanders +23 90 144 13 20 Maine Sanders +16 91 127 10 15 N. Marianas Clinton +30 3 2 4 2 Florida Clinton +20 1,159 760 129 85 Arizona Clinton +4 267 248 39 36 Utah Sanders +39 52 120 10 23 Idaho Sanders +37 33 72 7 16 Hawaii Sanders +17 63 88 10 15 Washington Clinton +6 471 418 53 48 Alaska Sanders +50 16 48 4 12 Wyoming Clinton +2 19 19 7 7 New York Clinton +4 1,146 1,049 129 118 Pennsylvania Clinton +0 915 907 95 94 Connecticut Sanders +6 176 200 26 29 Delaware Clinton +9 58 49 11 10 Maryland Clinton +18 581 399 56 39 Guam Clinton +31 11 6 5 2 Kentucky Sanders +10 205 256 24 31 Oregon Sanders +25 251 418 23 38 Current open primary states Clinton +12 8,146 6,429 956 715 Total Clinton +8 15,298 13,024 1,782 1,488 Projected results if every state had held an open primary An “open” primary allows the participation of voters not registered with either major political party.
Clinton’s margin in the national popular vote shrinks to about 8 percentage points (from 12). That’s because opening a primary to independent voters shrinks Clinton’s margin in a state by about 10 percentage points on average, according to the model. Sanders would also project to win Connecticut and Kentucky, which he lost in the real world when they held closed primaries.
Still, this wouldn’t make all that much difference. Just 11 states held closed primaries, so the national vote is mostly reflective of a process open to unaffiliated voters. Indeed, Clinton has won 14 primaries open to independent voters, while Sanders has won nine.
In fact, if all states held primaries open to independents — instead of closed primaries, or caucuses of any kind — Clinton might have a larger lead in elected delegates than she does now. The model indicates that Clinton would have a lead of 294 elected delegates, compared with the 272 she holds now. That’s not a huge difference, but it means that Clinton has been hurt at least as much by caucuses as Sanders has been hurt by closed primaries.
Listen to the latest episode of the FiveThirtyEight politics podcast.
What would happen if the primary system conformed to each candidate’s best-case scenario? (All closed primaries for Clinton and all caucuses open to independent voters for Sanders.) If every state held a closed primary, Clinton would beat Sanders by 19 percentage points and have a 654 elected delegate advantage, we estimate. If, however, each state held an open caucus, Sanders would beat Clinton by 22 percentage points nationwide and have a 496 elected delegate lead. Of course, neither of those scenarios would happen.
Realistically, if you throw everything together, the math suggests that Sanders doesn’t have much to complain about. If the Democratic nomination were open to as many Democrats as possible — through closed primaries — Clinton would be dominating Sanders. And if the nomination were open to as many voters as possible — through open primaries — she’d still be winning. |
Check out this prototype jersey the Padres almost wore in the 1980s
The Padres' uniform history is one of gorgeous, unexpected delights. From the bright yellows of their early days, to the mix of brown, yellow and orange which inspired this year's All-Star Game uniforms, the Padres are wholly unique in baseball uniform history.
In 1985, they swapped out that style of uniform for this brown-and-orange pinstripe number.
But there was one other option that never made the field. In a Hall of Fame exhibit featuring a number of rare uniforms at this year's All-Star FanFest, a 1985 Padres uniform prototype was on display for the first time ever. With a unique wordmark and orange (instead of brown) pinstripes, this is a flashy pullover model.
Not only that, but this uniform would have featured a brand new Friar logo on the sleeve.
While the wordmark is a little bland, the rest of the uniform looks fantastic. In an alternate reality somewhere, Tony Gwynn was winning batting titles in this and looking fresh.
Michael Clair writes about baseball for Cut4. He believes stirrup socks are an integral part of every formal outfit and Adam Dunn's pitching performance was baseball's greatest moment. |
Jake Rudock (Photo: Elizabeth Conley, Detroit News)
Ann Arbor -- Donning glasses and a headset, Michigan coach Jim Harbaugh peered out at the 400 or so fans who came out to the students-only practice and welcomed them to the submarine.
First and foremost, he had rules.
"We don't want anyone recording our practice," Harbaugh said. "Nobody will be recording Utah's practice tonight, and a video of us would give them an advantage."
Even with 200 students gathered on the north end-zone track and the other half in the stands, fans were being constantly monitored by ushers strategically placed throughout the stadium. Per Harbaugh's orders, cell phones weren't allowed and ushers escorted out students who flashed their phones.
He also made it clear that he wanted it to be loud. He urged the crowd and band to make as much ruckus as possible, and for about an hour and a half, cheerleaders, marching band members, piped-in crowd noise and music blasted throughout the Big House to emulate the noise expected in 12 days in Salt Lake City, Utah.
During stretches, Motown joyfully bounced off of the stadium walls, but as soon as the scrimmage started, the genre quickly changed to KISS and AC/DC.
Regardless of the cell phone ban, news that Iowa transfer Jake Rudock would be quarterbacking the first team managed to leak out quickly. Rudock threw both touchdown passes, one to sophomore wide receiver Drake Harris and the other to junior tight end Jake Butt.
As expected, Butt was a frequent target and had just as many catches as any receiver. Harbaugh has a reputation for developing tight ends, and if Rudock and Butt continue to mesh as well as they did Saturday under Harbaugh's direction, the pairing will be well known this fall.
Though the quarterback battle has been hot through fall practice and junior Shane Morris said that it's his job to lose, Morris didn't have much of a field presence and couldn't manage to put together long drives.
Morris still has a rocket for an arm, and he overthrew senior wide receiver Jehu Chesson past the end zone and into the crowd. Sophomore quarterback Wilton Speight was the last quarterback to come on the field and connected well with third-team junior wide receiver Jack Wangler for a 10-yard pass.
To many students' dismay, freshman tight end Tyrone Wheatley Jr. and junior offensive lineman Patrick Kugler were both on crutches, and senior wide receiver Amara Darboh — who recorded 36 receptions for 473 yards and two touchdowns last season — had his left pinkie in a splint. Sophomore wide receiver Freddy Canteen's right arm was also in a sling.
The injuries allowed freshman wide receiver Grant Perry and sophomore wide receiver Drake Harris to get a lot of attention from Rudock on the first team. Perry took advantage of his situation with multiple catches of his own, and Harris' most notable catch came halfway through the scrimmage on a 15-yard touchdown pass to the north end zone.
The pair helped prove that the Wolverines may need to use a more pass-oriented attack. Harbaugh previously said that he wants a balanced offense, but the rushing game appeared inconsistent, with junior running backs De'Veon Smith, Ty Isaac and Derrick Green struggling to make an impact.
As the scrimmage ended, AC/DC faded out and practice came to a close. Harbaugh took off his headset, thanked the students for coming out and walked back up the tunnel. The submarine was closed again.
Kelly Hall is a freelance writer |
Auricular hematoma/ “Cauliflower ear”.
If you are a grappler, there is always a risk of getting Cauliflower ear or Auricular hematoma. This is an injury to the ear that results from trauma to the outer ear.
Cauliflower ear is an irreversible condition and as a result, the outer ear becomes permanently swollen and deformed, resembling a cauliflower.
The condition is common in martial arts such as boxing, mixed martial arts or wrestling, and in full-contact sports such as rugby union football.
Ineffective traditional treatment
Some grapplers proudly call it a ‘badge of honour’ but for many it is a real problem. The old school way of treating it consists of draining the ear with a syringe and extracting the fluid and then to continue training while wearing some form of headgear. The problem with this method is that not only does the ear swell up with the slightest touch but the outer ear may wrinkle, and can become slightly pale due to reduced blood flow; hence the common term “cauliflower ear”.
The only viable solution
Dr. Bryan Ales, BJJ student of Omar Sabha, MHA is the inventor of CauliCure™ Advanced Compression System, which is a permanent solution for cauliflower ear.
The Caulicure kit contains 4 compression discs in various sizes to fit any location on the ear. Each disc comes in a sealed container to take with you wherever you go. In addition, the kit provides a headband to ensure proper position while you sleep.
How to use Caulicure
Preventive phase: By compressing the damaged ear tissue “before” it has a chance to fill with blood/fluid you will avoid the formation of cauliflower ear.
By compressing the ear, you will prevent the ear from refilling with fluid and the constant draining that comes with it.
Why is this product “the better option”, comparing it to the other options available to athletes at risk?
The other available options have one of the following challenges:
They don’t work
You could need oral or systemic antibiotics
You could be out $150 minimum, each time you need it drained (2 or 3 times per week).
You could spend $300 plus having a medical doctor stitch a button to your ear.
You could be stuck wearing headgear “all the time” or risk getting the condition.
You off the mats much longer than with the CauliCure™
Is it Re-Usable?
The product is made so that the discs can be used for each and every traumatic incident requiring draining and effective compression.
How can I find out more about the CauliCure™ system?
You can visit www.caulicure.com and read about the product details, watch the instructional video detailing out the product usage or email at [email protected].
Furthermore, the product is available for sale on the www.caulicure.com |
SINGAPORE - If the taxi industry wants private-hire operators to compete on a completely level playing field, then they might have to contend with Uber and Grab drivers being allowed to pick up street hails.
Senior Minister of State for Transport Ng Chee Meng said in Parliament on Monday (July 11) that 80 per cent of taxi-driver trips are street hails.
"If you completely level the playing field, it means that the private-hire cars will have street hail privileges as well," he said. "This will not bring necessarily the outcomes we desire."
Mr Ng said this in response to questions from Nee Soon GRC MP Lee Bee Wah on whether there has been "any measurement to show that actually Uber and GrabCar bring positive impact to our commuters".
"If not, then why should we have them in Singapore, and create so much unhappiness among the taxi drivers?" she asked.
Mr Ng said: "We follow industry practices as best as possible."
"For commuters' interests, if you level completely the playing field, and Uber, Grab and taxi companies become homogeneous, then we will not be able to have the innovative disruption that private-hire car brings to the industry," he noted.
"So in the morning peak hours where we have inadequate supply of full-time taxi-drivers, many of the commuters' interests are served because there's supplementary drivers that come in the form of Uber and Grab, and these are mostly part-time drivers."
To a call by Ang Mo Kio GRC MP Ang Hin Kee for three separate licences for taxi-drivers, private-hire drivers and limousine drivers, Mr Ng said: "This was actually considered in our initial original design of the system. So it is possible."
But the minister said the decision to have two categories was taken to prevent the system becoming too unwieldy.
"This is part of the whole review to make it not too cumbersome, with too many regulations, for the drivers to jump through too many hoops," Mr Ng noted.
"But it's early days yet.... It will take a year to implement, so we will take your views into consideration." |
Short Profile Name: John Vincent Hurt
DOB: 22 January 1940 (d. 25 January 2017)
Place of Birth: Chesterfield, Derbyshire, England
Occupation: Actor
Mr. Hurt, do you know how many times you’ve died on screen?
I think I’ve got the record.
There is a video on YouTube that shows you dying in at least 40 films.
Yes, I know the video! It got to a point where my children wouldn’t ask me if I died, but rather how do you die? (Laughs)
What’s the hardest way to die on camera?
[Long pause] That’s very interesting, I’m not quite sure.
John Hurt dying 40 times.
An alien coming out of your chest?
That’s certainly one of the most unusual! Trying to produce the death gurgle is never easy…
That reminds me of your cameo in Spaceballs where you parody that famous scene from Alien.
I first got involved with Mel Brooks through The Elephant Man. Everybody knows now, but they didn’t know at the time that he was the producer.
Because his name was only associated with comedies?
Yes, so he didn’t put his name on it, but what a fantastic producer he was. He was amazing. After that film he kept ringing me up saying, “I want you to do History of the World.” He said, [with accent] “C’mon over John, c’mon! We’ll put you in a nice hotel, give you a couple of grand.” You suddenly find that you’re involved in a scene which is worth millions and you’re getting 2,000 dollars and a nice hotel – that’s it! Very clever, Mel. And then I worked with him again on Spaceballs.
"I don’t talk about my ‘work’ – there’s not much work in it is there? I say you play a part, you don’t work one."
It sounds like the industry was a lot different back in the ’70s and ’80s.
It was. I have to say there was more fun in the business. We didn’t talk about work so much. I’m somewhat old-fashioned and I still talk about playing a part. I don’t talk about my work – “I’ve seen some of your work” – there’s not much work in it is there? (Laughs) So I say you play a part, you don’t work one.
Aren’t there some aspects that are a bit of work, like memorizing your lines?
That’s the only work side of it really. But, you know, you don’t sit down and learn the lines. At least I don’t.
Really?
You keep reading it and going over the scenes and looking at it and thinking about it and eventually you know the lines. I supposed science fiction is a bit like work… They are not much fun to make.
Some of your most famous work has been in the sci-fi genre. You really don’t care for them?
Well, I mean, I’m open to any genre – that is who I am. Essentially I am an actor for hire. I am not a rarified creature. I do all these different things and they all interest me.
All of them?
Well there were moments when I did need to do something because the family coffers were getting a bit low. But Gary Oldman is a very good friend of mine and we talked about this and he works and operates in the same way – except that he is much more commercial than me. “You make more money Oldman!” But, I mean, I don’t mind. But the things that I’ve enjoyed most are not really science fiction. They are not much fun to make because there are so many toys involved. They are fun for directors who like toys, like Ridley Scott, but they are not a lot of fun to make. A lot of hanging around, changing this and that. And then when you play the scenes they are not really that interesting.
That’s probably true of most blockbusters …
An American studio film and the amount which is shot on that, particularly with digital because there is no limit to the amount that you can shoot, is endless. They do shots from your point of view, from his point of view, from her point of view, from under your legs, from that corner, from the other corner, from different points of view, three sizes - until you are just bored stiff with the whole thing.
They have to do something to make up for the lack of an interesting story.
I loved making Midnight Express, for example. You see, we were making commercial films then that really did have cracking scenes in them, as well as plenty to say, you know? You’d find it very hard to get something like Midnight Express off the ground today. I don’t know who would make it. No studio would make it, and it was made by Columbia.
And if it did get made, Harvey Weinstein would probably try to cut it down in order to distribute it more widely.
He is a bully, you know. I know that he can do that. The films do get more widely distributed, but then that isn’t necessarily better for the film. You can only have a big release for a film that is purely entertaining.
Are you pessimistic about the future or do you think the world is getting better?
I think one should be very careful of making things cut and dried or simplistic in that sense. For everything that you find dreadful, there’s usually something that is rather marvelous as well. And where humanity is going to find itself in say 20, 30, 40, or 50 years would be very difficult to predict, I think. There are moments of course when you think that it’s going from bad to worse, but there are other moments when you think that human efforts are really flowering into something really fantastic. It’s not something that can be either dismissed or accepted with any kind of simplicity. |
Today is so-called Pulpit Freedom Sunday, which sounds a little odd on its face. Political rhetoric has been flying along with the more conventional religious variety. With religion and politics already too closely tied for many of us, it seems anachronistic to have pastors preaching which candidates deserve your vote. Separation of church and state dates to our founders. Yet separation of church and political speech is more recent.
The 1954 version of the Internal Revenue Code added restrictions on 501(c)(3) organizations such as churches so they couldn't participate in political campaigns. For many of us who grew up knowing that rule, it seemed to make sense. The tax advantages churches and other charities receive are considerable.
Yet constitutional lawyers and scholars can and do debate whether this limitation on the speech and activities of churches is unconstitutional. The stick the IRS has is considerable, including the threat the tax exemption of churches could be revoked. The IRS does the best it can to fairly enforce the most complex tax system in the world. On the whole, it does a pretty good job.
But the IRS is in a pickle. For as you'll see in The Political Pulpit, the IRS is criticized if it enforces the rules and if it doesn't. Perhaps the IRS, the churches and their free-speaking figureheads will find a way to compromise. However, this now annual foray into (more) religion-infused politics (or is that politics-infused religion?) may be more about whose pulpit is bigger.
Regardless of which side of the pulpit you're on, this not-so-little-pulpit-pounding-I-dare-you-to-tax-me debate isn't over. There will be another sort of coming. Or perhaps There Will Be Blood.
For much more:
The Pulpit Initiative: Executive Summary
The Pulpit Initiative: FAQ
The Pulpit Initiative: White Paper
The Pulpit Initiative: What It Is -- What It's Not
Pulpit Perversion Sunday: The Religious Right’s Partisan Scheme To Politicize Churches
Pastors Who Play Politics from the Pulpit
Pastors to IRS: Come and Get Us
Christian Crusade For Tax Benefits |
Fingers are pointing in every direction as politicians and pundits assign blame for the automatic spending cuts that are scheduled to kick in tomorrow night. But in truth, it was a real team effort. And something this stupid didn’t just happen overnight; it took a few years of hard work and dedication. These high-stakes games of chicken have become a fixture of American politics during the Obama presidency. In the past, one side or the other has always blinked at the last minute. But the latest iteration looks like it will end in a head-on collision, and while the resulting wreck will be grisly, it might provide the shock to the system we need to steer our political debate back on course.
In this year’s State of the Union address, President Obama declared, “The greatest nation on Earth cannot keep conducting its business by drifting from one manufactured crisis to the next.” The key word there is “manufactured.” Facing mass unemployment, widening inequality, rising health care costs, the threat of climate change, and instability in the Middle East, just to name a few concerns, one would think our lawmakers had more than enough legitimate problems to worry about. But congressional Republicans have proven themselves to be entrepreneurial problem-makers since the night of Obama’s first inauguration, when they gathered to plot his downfall.
Advertisement:
From the beginning, the Republican strategy has been one of total opposition, but that backfired once they regained control of the House of Representatives and were actually expected to govern. As a result, writes E.J. Dionne, “The country has been put through a series of destructive showdowns over budget issues we once resolved through the normal give-and-take of negotiations.” The situation reached a boiling point in summer 2011, when Republicans threatened to let the federal government hit the debt ceiling. (No, not that time. The time before that.) Although there’s been a lot of back and forth about whether the White House deserves some or all of the blame for creating the sequester in the first place, it’s worth remembering that the debt ceiling debacle basically forced Obama’s hand. The result was the Budget Control Act, which established a bipartisan and famously useless “Super Committee” to hammer out a long-term deficit reduction plan. The Sword of Damocles hanging over the committee’s heads was sequestration, a mixture of automatic budget cuts designed to be so unpalatable to both parties that they would be forced to find an alternative solution – until they didn’t. Whoops.
Aiding and abetting Republicans throughout this misadventure were the deficit hawks, who grew tired of hearing about the economic crisis almost as soon as it began. They wanted to get back to more serious topics of discussion, like why the Obama administration was suddenly spending so much money. (Could it be… the economic crisis?) Twelve million people unemployed? Meh. One in five children living below the poverty line? Boring. Debt-to-GDP ratio approaching 90 percent? Sweet Rogoff, it’s time to declare a state of emergency! This relentless elite-level concern trolling drove the political debate to the far right while supposedly giving voice to the moderate middle, enabling the GOP’s worst policy instincts.
Now that things are once again down to the wire, Congress is scrambling to find a last-minute fix, but this time it looks like they’ll come up short. A Republican proposal that would have given President Obama more discretion over how to implement the cuts failed after Obama rightly dismissed it as an attempt to keep all the cuts in place while shifting all the blame onto him. A Democratic proposal to replace the sequester with a more balanced package of cuts and revenue was dead on arrival. And no one seems willing or able to simply cancel the cuts and call the whole thing off. As Adam West once said, some days you just can’t get rid of a bomb.
The consequences of sequestration will almost certainly be dire. In a survey of top economists conducted by The New Republic, most predicted that it would slow our already anemic economic growth, while even the most positive assessment cast it as some sort of punishment that America has had coming for a long time due to our failure to don the hair shirt of austerity along with our European allies. The indiscriminate cuts will take a heavy toll on the poor, women and children in general, domestic violence victims in particular, people who eat food… you get the picture. And the fact that this pain is being inflicted by fiat only makes the sting worse.
On the other hand, while sequestration was entirely unnecessary and unwise, something like this was bound to happen once Republicans chose to throw caution and responsibility to the wind. You can win a game of Russian Roulette once, but you’re not likely to have a long reign as champion. Likewise, if you keep inventing fake crises to help you get your way, one of them is eventually going to become real. It’s tempting to hope that this is what it looks like when Congress hits bottom, although it seems to break through to previously unexplored depths each time. But if this is what it takes to wake more Americans up to how distorted our policy debate has become so that we can start rethinking our national priorities, the pain may just barely be worth it after all. |
AT&T Has Fooled The Press And Public Into Believing It's Building A Massive Fiber Network That Barely Exists
from the fiber-to-the-press-release dept
"AT&T announced today it is planning to expand the availability of ultra-fast speeds through AT&T GigaPower to homes, apartments and small businesses in parts of 38 additional metros across the United States – which will total at least 56 metros served. With the launch of our ultra-fast Internet service in parts of 2 of these metros today – Los Angeles and West Palm Beach – AT&T GigaPower is now available in 20 of the nation’s largest metros.
A few years ago, AT&T realized something amazing: you don't have to build a cutting edge, fiber to the home broadband network, when it's relatively easy to fool the press and public intoyou're building a cutting edge, fiber to the home network. So as AT&T was actually busy reducing its fixed-line broadband spending and quietly walking away from DSL users it didn't want to upgrade, it launched a service it calls "U-Verse with Gigapower." Basically, AT&T's delivering gigabit speeds to high-end housing developments, then pretending the upgrades are much, much larger than they actually are.Case in point: AT&T this week breathlessly announced that the company was deploying gigabit fiber to 38 more markets , bringing the grand total of its gigabit fiber deployment to an56 total metro markets:Note a few things about the announcement, however. Nowhere does the company statethese connections will be delivered. Similarly nowhere does the company make clear that it's targeting mostly high-end housing developments where fiber is already in the ground, making costs negligible (the only way you could technically accomplish a deployment of this kind and magically have your CAPEX consistently drop). And while AT&T claims these improvements will reach 14 million residential and commercial locations, AT&T gives no timeline for this accomplishment. That means it could cherry pick a few hundred thousand University condos and housing developments per year and be wrapping up this not-so-epic fiber deployment by 2040 or so.Nowhere -- now or ever -- will you see AT&T specify precisely how many users have, or will be able to get gigabit speeds from AT&T. That's because, in reality, users in these "launched" markets will almost always find it difficult if not impossible to sign up for this gigabit service. 
And, in some cases, by a "launched" market AT&T actually means a few dozen homes sitting on a hill in a single housing development. Now take a minute and look at the press coverage of AT&T's announcement, and try to find a news outlet that could be bothered to note the limited nature of these launches. Whether it's the Shreveport Times or the pages of the Milwaukee Journal Sentinel, AT&T has convinced the entire country that it's on the cusp of getting gigabit fiber that -- for the vast majority of them -- isn't coming. Even technology news outlets that should know better (if they'd spent five minutes studying AT&T's history on this front) are busy bandying about quotes about how AT&T is "outpacing every other competitor." To be clear, some AT&T customers will certainly get fiber. If you live in one of the few areas where AT&T has to actually compete thanks to Google Fiber, or in locations where it's possible to upgrade to fiber with the least amount of effort and cost possible, you may be upgraded -- eventually. Granted it won't be cheap, and you'll have to pay a steep premium if you don't want AT&T to spy on you, but you'll get fiber. More likely than not, however, you live in a DSL or U-Verse (FTTN) market that AT&T not only won't upgrade, but may be walking away from completely in order to focus on more profitable (read: usage-capped) wireless. The press and public aren't the only ones being conned. AT&T has consistently used its phantom fiber deployment as a carrot on a stick with regulators, at one point threatening to stop making these barely-there investments unless regulators walked back net neutrality. AT&T backed off the claim when the FCC asked for hard data, but this kind of telecom theater works exceptionally well in state legislatures.
Last week AT&T claimed net neutrality prevented them from innovating, and this week they're portraying themselves as the innovator of the century (even though the only actual innovation here is in misleading PR). And AT&T's not alone when it comes to bogus gigabit bravado. Most of the major phone and cable companies have similarly responded to Google Fiber by cherry-picking the nation's most affluent housing developments for gigabit deployment, then pretending they're keeping pace with the nation's broadband needs. Even Google Fiber has made a habit lately of getting oodles of press attention for fiber deployments that may or may not actually happen. In reality, however, two-thirds of homes lack the choice of more than one ISP at speeds of 25 Mbps or greater. And as AT&T and Verizon walk away from unwanted DSL markets, cable's monopoly power is going to grow, making broadband less competitive than ever in many markets. None of this is to pooh-pooh the actual gigabit fiber deployments that are occurring. While it only has an estimated 100,000 subscribers now, there's every indication Google Fiber's going to eventually have a major disruptive impact. There's a lot of interesting stuff going on at the grass-roots level, whether it's municipal broadband, or companies like Tucows taking the reins and upgrading small towns, one at a time. But on the meta scale, an uncritical press is contributing to an epic case of delusion when it comes to the pace of broadband progress. That's why we're not living in the age of fiber to the home so much as we're living in the age of fiber to the press release.
Filed Under: competition, fiber, fiber to the press release, gigapower, journalism, net neutrality
Companies: at&t
Abstract It has recently become possible to study the dynamics of information diffusion in techno-social systems at scale, due to the emergence of online platforms, such as Twitter, with millions of users. One question that systematically recurs is whether information spreads according to simple or complex dynamics: does each exposure to a piece of information have an independent probability of a user adopting it (simple contagion), or does this probability depend instead on the number of sources of exposure, increasing above some threshold (complex contagion)? Most studies to date are observational and, therefore, unable to disentangle the effects of confounding factors such as social reinforcement, homophily, limited attention, or network community structure. Here we describe a novel controlled experiment that we performed on Twitter using ‘social bots’ deployed to carry out coordinated attempts at spreading information. We propose two Bayesian statistical models describing simple and complex contagion dynamics, and test the competing hypotheses. We provide experimental evidence that the complex contagion model describes the observed information diffusion behavior more accurately than simple contagion. Future applications of our results include more effective defenses against malicious propaganda campaigns on social media, improved marketing and advertisement strategies, and design of effective network intervention techniques.
Citation: Mønsted B, Sapieżyński P, Ferrara E, Lehmann S (2017) Evidence of complex contagion of information in social media: An experiment using Twitter bots. PLoS ONE 12(9): e0184148. https://doi.org/10.1371/journal.pone.0184148 Editor: Renaud Lambiotte, University of Oxford, UNITED KINGDOM Received: May 3, 2017; Accepted: August 18, 2017; Published: September 22, 2017 Copyright: © 2017 Mønsted et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Data Availability: Data cannot be made publicly available due to privacy concerns. For data access please contact Copenhagen Center for Social Data Science (sodas.ku.dk), or the corresponding author: Sune Lehmann. Funding: This work was funded by the Danish Council for Independent Research (http://ufm.dk/forskning-og-innovation/rad-og-udvalg/det-frie-forskningsrad), grant number 4184-00556a. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing interests: The authors have declared that no competing interests exist.
Introduction The diffusion of information and ideas in complex social systems has fascinated the research community for decades [1]. The first proposal to use epidemiological models for the analysis of the spreading of ideas was put forth more than fifty years ago [2]. Such models, where each exposure results in the same adoption probability, are referred to as simple contagion models. It was subsequently suggested, however, that more complex effects might come into play when considering the spread of ideas rather than diseases. For example, some people tend to stop sharing information they consider “old news”, while others refuse to engage in discussions or sharing certain opinions they do not agree with [3–5]. Such models, in which adoption probabilities instead depend strongly on the number of adopters in a person’s social vicinity in a way where exposure attempts cannot be viewed as independent, are referred to as complex contagion [6] models. Concretely, we use a threshold complex contagion model, in which the adoption probability is assumed to increase slowly for a low number of unique exposure sources, then increase relatively quickly when the number of sources approaches some threshold level (see ‘Models’ for full details). The role of contagion in the spreading of information and behaviors in (techno-)social networks is now widely studied in computational social science [7–19], with applications ranging from public health [20] to national security [21]. The vast majority of these studies are, however, either observational, and therefore prone to biases introduced by confounding factors (network effects, cognitive limits, etc.), or entail controlled experiments conducted only on small populations of a few dozen individuals [6, 7]. To date, these limitations have prevented the research community from reaching a conclusive answer as to the role of simple and complex information contagion dynamics at scale.
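To make the distinction concrete, here is a minimal numerical sketch. It is an illustration only: the functional forms and parameter values below are assumptions chosen for clarity, not the paper's fitted models (those are defined in its 'Models' section). Under simple contagion, each of k exposures is an independent trial with the same success probability; under threshold complex contagion, the adoption probability depends on the number of unique sources and rises sharply near a threshold.

```python
import math

def p_adopt_simple(k, q=0.1):
    """Simple contagion: k independent exposures, each succeeding with
    probability q, so P(adopt) = 1 - (1 - q)^k."""
    return 1.0 - (1.0 - q) ** k

def p_adopt_complex(k, threshold=3.0, steepness=2.0, p_max=0.5):
    """Threshold complex contagion (illustrative logistic form): adoption
    probability grows slowly below the threshold number of unique
    sources, then quickly as k approaches and passes it."""
    return p_max / (1.0 + math.exp(-steepness * (k - threshold)))

# Compare the two curves for 1..5 exposures / unique sources
for k in range(1, 6):
    print(k, round(p_adopt_simple(k), 3), round(p_adopt_complex(k), 3))
```

Note how the simple-contagion curve bends downward (diminishing returns per exposure), while the threshold curve is convex below the threshold - this qualitative difference is what the experiment is designed to detect.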
In this paper we shed new light on the nature of information diffusion using a large-scale experiment on Twitter, in which we study the spreading of hashtags within a controlled environment. Creating a controlled environment for experiments within online platforms is especially challenging for researchers that do not have access to the system’s design itself, as traditional techniques such as A/B testing cannot be employed. Even for service providers like Facebook, ethical concerns emerged when random control trials were carried out without review board approval [15]. For this experiment, we leveraged algorithm-driven Twitter accounts (social bots) [22]. We had previously shown that a coordinated network of Twitter bots can be effective in influencing trending topics on Twitter [23]. This study is a follow-up experiment designed to quantitatively investigate how users react to information stimuli presented by single or multiple sources. In particular, for this experiment, teams of students from the Technical University of Denmark (DTU) worked together to create a network of Twitter bots (a botnet) designed to attract a large number of human followers. We programmed the bots to spread Twitter hashtags (see Table 1) in a synchronized manner among a set of real Twitter users from a selected geographical area. A large number of users in our target dataset followed one or multiple bots (See Fig 1B), which allowed us to study the effect of multiple exposures from distinct sources on information contagion.
Table 1. List of interventions. https://doi.org/10.1371/journal.pone.0184148.t001
Fig 1. Illustration of the status of our botnet at the time of the interventions. The bots had accumulated a large number (∼25000) of followers (A) at the time of the interventions (shaded region), and many of the target users followed several distinct bots (B). https://doi.org/10.1371/journal.pone.0184148.g001

The decision to use Twitter bots to perform coordinated interventions has several advantages: first, we are able to ensure that the hashtags we introduce are new to Twitter, and therefore that they are seen by the target users for the first time when we perform experiments. Second, it enables the bots to work together to expose users to each intervention multiple times. Finally, the Twitter botnet mitigates the confounding effects of homophily [24–26]. For example, when conducting a purely observational study, it is a fundamental problem to distinguish whether a user is more likely to adopt information shared by many of their friends because they are influenced by their friends sharing the content, or simply because friends tend to be similar, so anything shared by the user’s friends is more likely to be of interest to the user. In the remainder of the paper we will discuss the experimental framework design in detail, then present two statistical models for simple and complex contagion, developed in order to evaluate the two competing hypotheses, and finally show the results of the experimental evaluation.
Methods Data. All data was collected in accordance with the Danish regulations for personal data; additionally the study has been subject to Institutional Review Board (IRB) approval. The IRB grantee is Indiana University (protocol number 1410501891), which was the hosting institution of the only U.S.-based author (Emilio Ferrara) at the time when this experiment was performed. All co-authors aligned to the requirements imposed by Indiana University’s approved protocol. For data access please contact Copenhagen Center for Social Data Science (http://sodas.ku.dk/contact/), or the corresponding author: Sune Lehmann. Botnet creation. We designed the Twitter bots as part of a graduate course on social networks. The goal was to create bots which appear, at a cursory glance, to be human-operated Twitter accounts, but in reality are algorithmically driven (by means of Python scripts). The bot creation was divided into two phases: first, the goal was to build convincing accounts that real users might want to follow. Second, we worked to infiltrate a set of geographically co-located real users and spread new hashtags among them. In phase 1, each group of 2-4 students manually created 1-3 personas (with interests, music taste, favorite sports team, etc.) and corresponding Twitter profiles, each with a profile picture, profile description, background picture, etc., resulting in a total of 39 bots. Each group also manually posted a number of initial tweets for each bot. One of the key objectives was to achieve a large follower base while maintaining a low following/follower ratio. A low following/follower ratio is unusual among bots [21] and signals popularity on Twitter. Our bots achieved a low ratio by capitalizing on the fact that many new users with relatively few followers (and other Twitter bots) tend to reciprocate the link when they gain a new follower. 
Therefore, we used the following strategy: Every day, each bot automatically followed approximately 100-200 randomly selected accounts with a low follower count or the string ‘followback’ in the description. After 24 hours, the bots unfollowed the accounts that failed to reciprocate their follow. This routine was repeated every subsequent day. Using this strategy, the bots were able to maintain a following/follower ratio close to 1, while gaining large numbers of followers. The bots avoided automatic detection by limiting the churn among their followers, since performing too many (un)follow operations in a day leads to a suspension of the account. As a whole, the botnet was successful in gaining a large group of followers which grew steadily throughout the duration of the experiment, as shown in Fig 1A. While attracting followers, the bots gradually assumed a number of behaviors designed to emulate human behavior: Geographical patterns. All bots’ self-reported location in their Twitter profile was set to the San Francisco Bay Area. In addition, all bots tweeted with geo-tagged tweets, set to originate from a random location within the Bay Area bounding box. This allowed our bots to target a geographically-confined region. Temporal patterns. Bots also timed their tweets to match typical diurnal patterns corresponding to the Pacific time zone, and produced content that reflected circadian patterns of activity commonly observed online [32]. Content. Finally, based on simple natural language processing rules, the bots automated tweeting and re-tweeting of content that matched the persona developed above. As a final step of phase 1, the bots unfollowed users which were obviously spam/bot accounts in order to decrease their following/follower ratio. To investigate the quality of each bot, we routinely used the online service Bot or Not API [33] (http://truthy.indiana.edu/botornot/) to ensure that the bots appeared human to state-of-the-art bot-detection software.
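The daily follow-and-prune routine described above can be sketched as two steps run roughly a day apart. This is a simplified illustration using plain dictionaries; the account fields, the follower-count cutoff, and the helper names are assumptions for the sketch, not the students' actual implementation:

```python
import random

def pick_targets(candidates, n_follow=150):
    """Day 1: choose likely reciprocators to follow -- accounts with a
    low follower count or 'followback' in their profile description."""
    likely = [u for u in candidates
              if u["followers"] < 100 or "followback" in u["bio"].lower()]
    random.shuffle(likely)  # randomly select among the likely reciprocators
    return likely[:n_follow]

def prune_non_reciprocators(targets, current_follower_ids):
    """Day 2 (~24h later): return the accounts to unfollow, i.e. those
    that did not follow back, keeping the following/follower ratio near 1."""
    return [u for u in targets if u["id"] not in current_follower_ids]
```

Repeating this cycle daily, while capping the number of (un)follow operations per day, matches the churn-limiting behavior described above.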
In phase 2, the bots began following non-bot Twitter accounts within the target area (San Francisco/Bay Area), leveraging the information users self-reported in their Twitter profiles (location string). To achieve the goal of having individuals in the target area following multiple bots, the bots maintained a shared list of Twitter accounts that followed back any of the bots—and all bots followed those real accounts over the following days. As a result, many Twitter users in the target set ended up following multiple bots by the time the interventions occurred, during the period between November 15th and December 2nd, 2014. The distribution of the number of bots followed by other Twitter users during the intervention period is shown in Fig 1B. Statistics of observed data. The following shows how the observations, including the error bars, in Fig 3 were obtained. For both SC and CC, we investigate how P(RT) changes as a function of k, then iterate over each of the interventions and for each target user we compute the distribution of exposure numbers, according to Eq (2) for SC, and according to the Poisson binomial distribution shown in Eq (8) for CC. These distributions allow us to estimate the number of retweets after k exposures in the following way: Consider a series of events S_1, S_2, …, S_n, each representing a user retweeting an intervention-related tweet. For an event S_i, we have probabilities p_{i,1}, p_{i,2}, …, p_{i,n} of the event representing k = 1, k = 2, …, k = n true exposures. Hence, considering a discrete value k = j, the event can belong to bin j with probability p_{i,j}, and it can belong in another bin with probability 1 − p_{i,j}; i.e., it is drawn from a Bernoulli distribution with p_i = p_{i,j}. Similarly, the following event is drawn from another Bernoulli distribution independent of the first, and so the distribution of each bin follows another Poisson binomial distribution with μ = ∑_i p_i and σ² = ∑_i p_i(1 − p_i).
This process approaches the normal distribution N(μ, σ²) when the number of Bernoulli draws becomes large, due to the central limit theorem (see SI Appendix for details). Thus, we can obtain an approximate distribution for the number of observed retweets for each value of k. Bayesian information criterion. The Bayesian information criterion (BIC) score is defined as BIC = k ln(n) − 2 ln(L), (13) where L is the likelihood of the data given the model, k is the number of model parameters, and n is the number of data points. We compute the likelihood based on the fits to the number of retweets, i.e. fits like those shown in Fig 3B: For each exposure number k, we have (from our previous analysis) an estimate of the number of times, N_k, a user has experienced k exposures. To ensure a discrete number of retweets, we run a series of simulations, computing P(k|A) for each retweeting user and adding 1 to a bin k, which is selected using that probability distribution. We denote the number of retweets in bin k by n_k, and discard bins in which n_k < 5. As our models provide the probability P(RT|k) of each exposure succeeding in eliciting a response from the exposed user, the likelihood of each bin in one such simulation is given by a binomial distribution, and the total likelihood is simply the product of those, i.e. L = ∏_k C(N_k, n_k) P(RT|k)^{n_k} (1 − P(RT|k))^{N_k − n_k}. (14) We repeat this simulation 10³ times for both SC and CC for the full range of values of q.
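The BIC comparison described above can be sketched numerically. In this illustration the binned counts are invented, and the two P(RT|k) curves are toy stand-ins for the fitted simple- and complex-contagion models (the real fits come from the paper's Eqs (2) and (8)):

```python
import math

def binom_loglik(n_k, N_k, p_k):
    """Log-likelihood of observing n_k retweets from N_k users exposed
    k times, each responding with probability p_k = P(RT|k)."""
    return (math.log(math.comb(N_k, n_k))
            + n_k * math.log(p_k)
            + (N_k - n_k) * math.log(1.0 - p_k))

def bic(loglik, n_params, n_points):
    """Bayesian information criterion: BIC = k*ln(n) - 2*ln(L); lower is better."""
    return n_params * math.log(n_points) - 2.0 * loglik

# Invented binned data: (exposure number k, retweets n_k, exposed users N_k)
bins = [(1, 12, 400), (2, 11, 180), (3, 14, 90), (4, 13, 40)]

# Toy response curves: constant per-exposure hazard vs. threshold-like
p_simple = lambda k: 1.0 - (1.0 - 0.03) ** k
p_complex = lambda k: 0.35 / (1.0 + math.exp(-2.0 * (k - 3.0)))

for name, p, n_params in [("simple", p_simple, 1), ("complex", p_complex, 3)]:
    ll = sum(binom_loglik(n_k, N_k, p(k)) for k, n_k, N_k in bins)
    print(name, "BIC =", round(bic(ll, n_params, len(bins)), 2))
```

The model with the lower BIC is preferred; the extra parameters of the more flexible model are penalized by the k ln(n) term, so complex contagion only wins when it fits the binned counts substantially better.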
Conclusion Diffusion phenomena in social and techno-social systems have attracted much attention due to the importance of understanding dynamics such as disease propagation, adoption of behaviors, emergence of consensus and influence, and information spreading [1, 6–8]. In contrast to modeling epidemics, for which clear laws have been mathematically formulated and empirically validated [2, 4], modeling and understanding information diffusion has proved challenging, in part due to the inability to perform controlled experiments at scale and due to the abundance of confounding factors that bias observational studies [24–26]. Two competing hypotheses have been debated, namely that information spreads according to simple or complex contagion. In this work we test the two hypotheses by creating a controlled experimental framework on Twitter: we deployed 39 coordinated social bots [22] that interacted with a selected cohort of participants (our target population), and carried out a variety of interventions, in the form of attempts to spread new positive messages (i.e., memes for social good). The bots recorded the behavior of the target users and all their interactions with the bots and with other users, while tracking the number of exposures to each message over a period of more than one month. The data we collected allowed us to test two Bayesian models that we derived to capture the diffusion dynamics of simple and complex information contagion. Specifically, in our complex contagion model, we assume that the probability of adoption depends on the number of unique sources of information, rather than the number of exposures. The statistical evidence clearly shows that the complex contagion model is a better explanation for the observed data than the simple contagion model. This implies that exposures from multiple sources impact the probability of spreading a given piece of information.
This threshold mechanism differs significantly from, say, the spreading of a virus, where many exposures from a single source are sufficient to increase the probability of infection. A variety of explanations for the complex contagion hypothesis have been proposed in social theory, including social reinforcement and social influence, echo chambers, human cognitive limits, etc. [1, 3, 9–11, 13, 19]. While our work identifies the type of mechanism according to which information spreads from person to person, much work is still needed to discriminate which factors drive this phenomenon. We expect that future work will explore these factors and further disentangle and explain the dynamics of human communication in social networks.
LZ 37
Artist's impression of the destruction of German Zeppelin LZ 37 by Sub-Lieutenant Reginald Warneford on 7 June 1915.
Role: Reconnaissance and bombing
National origin: German Empire
Type: M-Class Zeppelin
Manufacturer: Luftschiffbau Zeppelin at Friedrichshafen
Construction number: LZ 37
First flight: 4 March 1915
Owners and operators: Imperial German Navy
In service: 4 March 1915 – 7 June 1915
Flights: 14
Fate: Shot down, 7 June 1915
The airship LZ 37 was a World War I Zeppelin of the German Kaiserliche Marine (Imperial Navy). It was the first Zeppelin to be brought down during the war by an enemy plane on the night of 6–7 June 1915.
History
In 1915 Zeppelins were first used by Germany for strategic bombing of the United Kingdom and France.
LZ 37 was part of a raid with Zeppelins LZ 38 and LZ 39. While returning, she was intercepted in the air by Reginald Warneford in his Morane Parasol during its first raid on Calais on 7 June 1915. Warneford dropped six 20-pound (9.1 kg) Hales bombs on the Zeppelin, which caught fire and crashed into the convent school of Sint-Amandsberg, next to Ghent, Belgium, killing two nuns. The commander of LZ 37, Oberleutnant van der Haegen, and seven members of the crew were killed. One crew member, Steuermann Alfred Mühler, miraculously survived with only superficial burns and bruises when he was thrown from the forward gondola, landing in a bed.[2] It was the first victory of a heavier-than-air aircraft over a lighter-than-air dirigible. Warneford was awarded the Victoria Cross for his achievement.
The LZ 37 was based in Gontrode, Belgium.
Specifications
Data from "The Zeppelin Airships - Part Two: Zeppelins of the Great War 1914–1918", Puget Sound Airship Society.
General characteristics
Crew: 28
Length: 163.37 m (536 ft)
Diameter: 18.7 m (61 ft 4 in)
Volume: 33,780 m³ (1,126,000 cu ft)
Empty weight: 17,588 kg (38,775 lb)
Useful lift: 8,520 kg
Powerplant: 4 × Maybach MC-X, 155 kW (210 hp) each
Performance
Maximum speed: 96 km/h (60 mph)
Armament
Four machine-guns
Notes
^ History of the First World War, vol. 3, p. 986.
The family of an Irish man who has been missing for more than a month while backpacking in India have appealed for assistance in tracing him.
Jonathan Spollen (28), a freelance writer and journalist from Ranelagh, has not been seen or heard from since February 3rd.
Mr Spollen, who has been based in Hong Kong working for the International Herald Tribune until recently, was last known to be in Rishikesh, in the state of Uttarakhand in northern India.
It is thought he may have been planning to go on a trek at the time of his disappearance, having changed his mind about a trip to Delhi to meet up with a friend.
Mr Spollen had arrived in India from Nepal in late November and was planning to leave the country by February 21st, when his visa was due to expire.
His father David Green has travelled to India to join in the search for him.
Mr Spollen's mother, who last spoke to him on the day he went missing, said it was completely out of character for Jonathan to be out of contact with her for so long.
"Jonathan has lived overseas for a number of years and has always been good at keeping in touch," said Lynda Spollen.
"We are 75 per cent confident that he may still be in the Rishikesh area and we don't think he has left the country."
Ms Spollen said that friends who had seen Jonathan shortly before he went missing had commented on the fact that he had lost weight.
"Jonathan was planning to come to Ireland before his visa expired and when I was last speaking to him had said he might take a trek before he left. Our concern is that he may have picked up a virus of some kind while trekking because he maybe wasn't as fit as he thought he was. We are worried that something may have befallen him."
Ms Spollen said her son would have been determined to leave India before his visa expired.
"Originally, Jonathan wanted to stay in the country for a while and travel from the south to the north but was only able to obtain a three-month visa. However, as a journalist the last thing he would have wanted to do was to overstay in the country because he wouldn't have wanted blots on his copybook. His intention would always have been to try and get out in time rather than overstay," she said.
"Jonathan hadn't been home since April last and so the idea was to head home and be here in Ireland for a while and then see what he would do next," she added.
The Department of Foreign Affairs is providing consular assistance to the family.
Anyone with possible information regarding Jonathan is asked to contact Ms Spollen at lyndaspollen@eircom.net
When it was first released, the ESP8266 was a marvel; a complete WiFi solution for any project that cost about $5. A few weeks later, and people were hard at work putting code on the tiny little microcontroller in the ESP8266 and it was clear that this module would be the future of WiFi-enabled Things for the Internet.
Now it’s a Kickstarter Project. It’s called the Digistump Oak, and it’s exactly what anyone following the ESP8266 development scene would expect: WiFi, a few GPIOs, and cheap – just $13 for a shipped, fully functional dev board.
The guy behind the Oak, [Erik Kettenburg], has seen a lot of success with his crowdfunded dev boards. He created the Digispark, a tiny, USB-enabled development board that’s hardly larger than a USB plug itself. The Digispark Pro followed, getting even more extremely small AVR dev boards out in the wild.
The Digistump Oak moves away from the AVR platform and puts everything on an ESP8266. Actually, this isn’t exactly the ESP8266 you can buy from hundreds of unnamed Chinese retailers; while it still uses the ESP8266 chip, there’s a larger SPI Flash, and the Oak is FCC certified.
Yes, if you’re thinking about building a product with the ESP8266, you’ll want to watch [Erik]’s campaign closely. He’s doing the legwork to repackage the ESP into something the FCC can certify. Until someone else does it, it’s a license to print money.
The FCC-certified ESP8266 derived module, cleverly called the Acorn, will be available in large quantities, packaged in JEDEC trays sometime after the campaign is finished. It’s an interesting board, and we’re sure more than one teardown of the Acorn will hit YouTube when these things start shipping.
Yesterday we dug deep into West Virginia's collapse on the road at Kansas State and its origins with the refusal to rely on a pair of productive running backs. I'll pick up where we left off:
All this pretty quickly leads to the question of why a coach would make the decisions that Dana Holgorsen did for his team in the second half. Pride? Stubbornness? I don't know, but the answer to that question very likely informs how you think WVU should handle the position of head coach moving forward. ...that WVU loss had nothing to do with (Bill Snyder's) special brand of wizarding magic. Nor did it have anything to do with blown special teams assignments or kick coverage demons. It had to do with a coach stubbornly refusing to adhere to a plan that had yielded victory in 4 previous games and all but certainly would have done so again. It had to do with the failure to follow through with a plan of attack that was painfully obvious to anyone who had watched this team at any point in the last month.
That's about as critical as I've ever been with coach Dana Holgorsen in writing and I know a lot of people who think I'm way too forgiving. After a month of good feelings, the Mountaineer fanbase is as angry about its coaching situation as it's ever been. For the record I'm still against making a change (I'll get into the reasons later) but the questions are very valid.
With all that, I figured it would be a good idea to focus on just this with a separate Retweet from the Kansas State postgame. I've got a lot of thoughts on a complex topic and 140 characters wasn't cutting it.
"You can say that on your couch or at your computer but it's not that simple. You just don't understand." https://t.co/Tv6XsAYzXl — Chris Anderson (@CMAnderson247) December 6, 2015
(author's note: this is not a real quote but boy it sure sounds like one. Sorry for being unclear there and suggesting Holgorsen said something that he did not. But the sentiment stands - Holgo doesn't like to have his playcalling questioned but there's ample reason to do so.)
Well technically coach I was at my dining room table at my computer, does that change things? In all seriousness, I get the gist of what he's saying here. Playcalling is born from a complex collection of information and game conditions, many of which escape us as we sit watching at home. It's a perfectly valid point. I get that.
But the game Saturday was notable in that, more so than any game I can remember, the statistics laid bare the coach's inability to take the simple path to a win. We saw in the first half the template that had proven successful over the four-game winning streak, and the inexplicable shift in strategy in the second half begged a lot of questions.
Is it really unreasonable to wonder why you run your quarterback with his injured foot on the biggest play of the game as WVU did on their final 4th and 2 (it went for no gain)? Is it truly that complicated to point out that when you ran your running backs a combined 13 times on first down in the first half (they averaged 7.2 YPC on those plays) you scored on 3 drives and took a 10 point lead but when you gave those same two tailbacks just 8 combined carries in the second half (5.7 YPC) you scored only twice and were outscored by 11? What if I pointed out that the 6 times you elected to air it out on that first down in the second half netted 4 drive-crippling incompletions?
Dana Holgorsen has a huge perception problem: his entire fanbase has no idea what lies behind what, on the surface, looks like perplexing decision-making. And a tone like the one he took with the above quote doesn't help - there's a tinge of arrogance there that essentially rejects the premise of any question. If he has any chance of rebuilding some goodwill with the fans, he needs to walk us all through what went into his second-half decisions. I have no doubt that it would be frustrating for him, but he frankly needs to explain himself. I'd think his next coach's show would be a perfect time to do that.
If William Crest can't play over one legged Skyler in an offense that doesn't want to pass, he's probably never going to be the guy. — Smoking Musket (@SmokingMusket) December 5, 2015
@abpriddy @SmokingMusket Holgo better pray that Chuggs lives up to the billing. — MGraves (@WVUfanMG) December 6, 2015
On the field, the biggest problem Holgorsen has right now and moving forward is at quarterback. I like Skyler Howard and he's done the best he can do (without a lot of help from his receivers) but I think it's fair to say he puts a ceiling on this team. Holgorsen needs someone with the arm strength to make the throws his system demands and the command, timing and touch to capitalize on the home run attempts that an opposing defense invariably yields. Skyler Howard simply hasn't been as consistent on either of these fronts as Holgorsen needs.
The biggest fear is that it won't be possible to have anyone else prepared to run the show by the beginning of next season. The difference between Clint Trickett in year one and Trickett in year two suggests that it takes a year of live fire before you can really understand things in this system. The difference in Geno Smith's production between 2011 and 2012 (the first half of the season at least) would seem to support that. It doesn't paint a hopeful picture for a first-year starter next season.
Then there's the William Crest question. Given all we've seen (or not seen), I'd have to give 3-1 odds against him ever starting a game at quarterback for WVU. He's been around long enough that he should understand things and he certainly has the physical ability to run the read-option that Holgorsen has become so enamored with, so what gives? Where is he? The Musket tweet above made the case perfectly. If Crest can't make a case for playing time now, will he ever?
It's an enigma that an offensive guru who has put over a half-dozen wide receivers in the NFL can't find a quarterback, and it might very well be Holgo's undoing. It's probably the single biggest factor in whether or not WVU can put together a successful 2016 and I just don't see an easy solution. If we agree Skyler is what he is (good, not great), Crest isn't and won't be ready for prime time and Sills' future is at receiver then we're left with Chuganov. Seems like a lot to put on a guy who's never taken a snap.
If he's not calling in plays, theoretically he should have more time for clock management/situational decisions. https://t.co/q5OAl1TBce — Randy Gyorko (@GooseGyorko) December 6, 2015
I'm in favor of the Head Coach being involved in game planning all week, getting my the team ready to play, and in game decisions. — Randy Gyorko (@GooseGyorko) December 6, 2015
Having someone you trust to call plays (with input if needed from Head Coach) isn't a negative. Should free him up to think big picture. — Randy Gyorko (@GooseGyorko) December 6, 2015
What Randy is outlining here is the job description for virtually every head coach in college. There are a million things to do for a head coach and play-calling is a huge task to add to that list. And even if he doesn't want to relinquish play-calling duties, it's fair to wonder if some of the sloppiness we saw on offense resulted from not having a single person charged with the day-to-day operations associated with that side of the ball this season. This was the first year of his 5 in Morgantown that Holgorsen didn't have an OC and also probably the worst the passing game has ever looked. I wonder if those two are related.
But now let's get to the meat of the matter. There are two big reasons that I don't think WVU should fire Holgorsen.
@kin_kinsley @GooseGyorko @abpriddy @SmokingMusket what part of the hiring a new coach sets us back 5 years do people not understand? — Afton (@akw304) December 6, 2015
I've always been a proponent of the "don't fire a guy until you have a new guy you really like" philosophy. And not some half-assed list, but a GUY. Someone you have an understanding with that they are absolutely going to be your new coach and more importantly a guy you want to be your new coach because he's really good, not just because you want to fire the guy you've got now.
Ask Tennessee what happens when you fire a good coach without a plan, as they did with Phil Fulmer in 2008. They're 7 years (and a small Brinks truck of buyout payments) removed from that decision and only now getting back to the point they were at when they fired Fulmer. And that's at one of the marquee programs in the sport.
Want to see WVU's real worst-case scenario? Cast your gaze up I-79 to the Pittsburgh Panthers. They fired Dave Wannstedt just a year after his team had put together a 9-win season and was within a point of winning the conference championship. Then they hired 4 coaches in 5 years and the job became a revolving door for middling coaches looking to simply hang around long enough to further their careers.
The discussion must start with a realization that West Virginia is a different place. It's a good job and a place you can win, but it's not a glamour job that inspires awe among the general coaching population. The stature that it does have has come from the hiring and retaining of coaches who placed a higher value on the job than many of their brethren. They wanted to be here for reasons deeper than simply taking the next step. You can't assume that anyone else we'd bring on board would share that viewpoint. It's entirely possible that WVU could fire Holgorsen and enter a cycle similar to what Pitt has seen.
One of the things I've always liked about Holgorsen is he's felt like a good fit here. He's laid-back off the field and values the opportunity that WVU offers him - good enough to win at a high level, but not so high-profile as to constrain his personal life. He's a simple guy who just wants to coach football and then go do his thing, and WVU gives him that opportunity. Oliver Luck hired him in 2011 as a long-term solution at head coach. Not everyone WVU could bring on board would fit that description.
My point is if you're WVU you've got to give a little more rope than other places. You're not Michigan or USC with the ability to cycle through coaches until you find the one you want. So you can't be so quick to hire and fire. You need to be sure, and as much as people don't want to hear it, there's been enough success since 2011 to warrant more time. A little, not a lot.
This was a disaster. Unmitigated disaster. Holgo isn't getting fired. But it may be more because all the best candidates have been hired. — Smoking Musket (@SmokingMusket) December 6, 2015
We've been through this before #WVU. You can't fire Dana, especially with so many big programs that have openings. We won't improve. — William Hirsch (@WillHirschNHL) December 6, 2015
Not only are a lot of the good candidates gone, but there are plenty more jobs out there left to fill and it's a seller's market. In the event that they needed to hit that market, WVU would quite likely overpay for a coach that isn't as good. And within the context of what I said above, it's an unnecessary chance to take. This year.
Especially when there are a lot of reasons to be optimistic about WVU next season. You've got a young offense where just about everyone is returning (with the biggest question mark being Wendell Smallwood's potential exit for the NFL). Didn't a lot of the rust we saw this season remind you of 2013? Well, those growing pains paid dividends in 2014. Tony Gibson has given ample reason to believe he'll be able to take care of things on the defensive side. With a schedule boasting 5 conference home games and non-conference games against Missouri and BYU squads that will be breaking in new staffs early in the season, there would seem to be ample reason to keep things in place for another year and see where they land.
@EERSNATION @jtoler20 @wvufaninpratt @MitchVingle if he gets the time he needs 2years from now we play NT but nobody want to wait for that — Robert Rutter (@lazyb81) December 7, 2015
OK maybe not THAT optimistic. But hey, now we know Dana's dad is on Twitter!
We can disagree on many things but if you are satisfied with the mediocre product then I have to question your expectations. — Sports Dude™ (@ImTheSportsDude) December 6, 2015
This sentiment is pretty prevalent, and I guess I'd argue that there's a little more nuance there than people give credit for. Some of these teams have been pretty damn good. Last year's was about one play away from being 7-2 with back-to-back wins over top-15 teams and a great look at a conference title before Clint Trickett went down by the facemask and effectively ended the season.
This year 8-4 was within grasp and with a lucky bounce against Oklahoma State maybe 9-3. Up until this last game they were handily beating teams they should beat. That's not to be taken for granted and it's certainly not mediocre. But that was the most damaging thing to Holgorsen about what we saw against Kansas State. So many times he's been undone by bad luck as much as bad decisions, but there was no hiding from the reason for the loss on Saturday.
I'm confused. Dana says Howard had hurt ankle, so he couldn't get the edge on the 4th down. If so, why call the play? — Jonathan Martin (@JonathanKMartin) December 6, 2015
OK maybe this was the most damaging thing to Dana to come out of Saturday night. If things do deteriorate and he does get fired either this year or next, remember that play. It's the play with no explanation and no excuses. It's the play that cost Dana the game and maybe more. It's a play that many will forget, but might have changed the trajectory of the program.
The good news in an ASU match up is if there's one thing Holgo hates more than giving the ball to Wendell, it's losing to Todd Graham. — Smoking Musket (@SmokingMusket) December 6, 2015
Life's all about motivations, right? I'll take it!
And here is where the proverbial rubber hits the road. All our high-minded talk of stats and coaching scenarios takes a back seat to things that directly affect revenue. Ticket sales to the bowl game will matter a little (they will be pretty small) but season ticket sales for next year will matter a lot. A bowl win (Vegas has installed WVU as a 2-point favorite, for what it's worth) could do quite a bit to fuel fan sentiment and boost ticket sales, but a(nother) bowl loss could drive morale down even further. This is the part where I point out that WVU is 1-7 since 2000 in bowl games that Pat White didn't start.
Also not to be forgotten is the buyout payment that would be owed to Holgorsen and his staff, somewhere in the neighborhood of $6 million to the head coach and $3 million to his assistants. That number drops after next year (although I can't find the exact figure), so it would seem to be another argument in favor of another year.
So there we are. The conversation is sure to last all offseason but this is where I stand. I think it only makes sense to give Holgorsen another year but with the shortest of leashes. Athletic Director Shane Lyons needs to have a list in his back pocket and if things look bad heading into November he needs to make calls and have a plan in place. Don't pull a half-assed LSU and make it up as you go along, have a plan and have a guy.
I still think Dana can work here and I still want him to. His reign has been characterized by some thrilling moments and we've gotten glimpses of what this program could be, but glimpses are no longer enough. It's time to deliver.
I hope he can.
Last week I wrote about my upcoming congressional testimony and wow - you guys are awesome! Seriously, the feedback there was absolutely sensational and it's helped shape what I'll be saying to the US Congress, including lifting specific wording and phrases provided by some of you. Thank you!
As I explained in that first blog post, I'm required to submit a written testimony 48 hours in advance of the event. That testimony is now publicly accessible and reproduced below.
Do keep in mind that the context here is the impact on identity verification in "a post-breach world".
My task is to ensure that the folks at the hearing understand how prevalent breaches are, how broadly they're distributed and the resultant impact on identity verification via knowledge-based authentication. I've had some great suggestions around tackling the root cause of data breaches and I'd love to have another opportunity in the future to talk about that, but it goes beyond the specific focus of this hearing. That said, who knows what I'll be asked by congressmen and congresswomen on the day and they may well question what can be done to combat the alarming rise in these incidents. I've now got a lot of great references on hand to go to should that happen so once again, thank you!
Below is the written testimony which has now been submitted and cannot be changed. (Incidentally, there were some formal requirements such as the 1-page summary in the opening of the document.) On Thursday morning at 10:15 DC time, I'll read my oral presentation which is a 5-minute distilled version of the testimony below. I'll reproduce that in another blog post after the hearing as well as linking through to a recording of the event. In writing both of these, I've spent quite a bit of time watching previous hearings including Securing Consumers' Credit Data in the Age of Digital Commerce which featured Bruce Schneier (his written testimony is also worth a read). This has helped me pitch it at what I believe is the right level and I've reflected many of the terms and phrases I've heard from the folks on the committee. As such, you'll find this somewhat different to a lot of my usual writing as it's intended for a very different audience with a very different purpose. That said, I'm certainly not selling out on the things that are important to me!
Finalising my congressional testimony on data breaches. Big on facts, small on buzzwords 😎 pic.twitter.com/oMPyZw39pl — Troy Hunt (@troyhunt) November 26, 2017
(Because someone will point it out if I don't mention it, yes, there is one "cyber" below which reflects Pluralsight's wording around their audience but does not appear in my oral presentation!)
Before you read this testimony, let me share one thing that's particularly noteworthy one week on: I wrote about the prevalence of old data breaches before the Uber news broke and before I was passed the imgur data. I wrote about "unknown unknowns" before those breaches became "knowns", and whilst I'm not naming names in the testimony, I'm sure you can see the significance of the timing, especially given the way the Uber situation was handled.
If you're around DC and want to come along, the notice of the hearing has all the info you should need (please come say hi if you do). If you can't be there but would like to tune in on the day, the hearing notice states that it will be available via webcast at energycommerce.house.gov. At the time of writing, there's a YouTube video sitting on the hearing page stating it will go live at the scheduled hearing start time. I've also embedded it below for convenience's sake:
Here's my written testimony in full:
Statement of Troy Hunt
For the House Committee on Energy and Commerce
“Identity Verification in a Post-Breach World”
30 November 2017
Summary
Data breaches occur via a variety of different “vectors” including malicious activity by attackers exploiting vulnerabilities, misconfiguration on behalf of system owners and software products intentionally exposing data by design.
There is frequently a long lead-time (sometimes many years) between a data breach and the service owner (and those in the breach) learning of the incident. We have no idea of how many incidents have already occurred but are yet to come to light.
The industry has created a “perfect storm” for data exposure. The rapid emergence of cheap, easily accessible cloud services has accelerated the growth of other online services collecting data. Further to that, the rapidly emerging “Internet of Things” is enabling us to digitise all new classes of information, thus exposing them to the risk of a data breach.
An attitude of “data maximisation” is causing services to request extensive personal information well beyond the scope of what is needed to provide that service. That data is usually then retained in perpetuity, thus adding to an individual’s overall risk.
Lack of accountability means that even in the wake of serious breaches, very little changes in the industry and we continually see other organisations repeat the same mistakes as their peers.
Data breaches are redistributed extensively. There’s an active trading scene exchanging data both for monetary gain and simply as a hobby; people collect (and thus replicate) breaches.
Many of the personal data attributes exposed in breaches cannot be changed once in the public domain, nor can these breaches be “scrubbed” from the internet once circulating.
Even without data breaches, we’re willingly exposing a huge amount of personal information publicly via platforms such as social media.
The prevalence with which our personal data is exposed has a fundamental impact on the viability of knowledge-based authentication.
Knowledge which was once personal, and could be relied upon to verify an individual’s identity, is now frequently public knowledge.
Opening
Vice Chairman Griffith, Ranking Member DeGette, and distinguished Members of the House Energy and Commerce Committee, thank you for the opportunity to testify.
My name is Troy Hunt. I’m an independent Australian Information Security Author and Instructor for Pluralsight, an online learning platform for technology and cybersecurity professionals. I’m commissioned on a course-by-course basis to create training material that has been viewed by hundreds of thousands of students over the last 5 years. I’m also a Microsoft Regional Director (RD) and Most Valuable Professional (MVP), both titles of recognition rather than permanent roles. I’ve been building software for the web since 1995 and specialising in online security since 2010.
Of particular relevance to this testimony is my experience running the data breach notification service known as Have I Been Pwned (HIBP). As a security researcher, in my analysis of data breaches I found that few people were aware of their total exposure via these incidents. More specifically, I found that many people were unaware of their exposure across multiple incidents (one person appearing in more than 1 data breach) and indeed many people were unaware of any exposure whatsoever. In December of 2013, I launched HIBP as a freely accessible service to help people understand their exposure. Over the last 4 years, the volume of data in the service has grown to cover more than 250 separate incidents and over 4.8 billion records. What follows are insights drawn largely from running this service including the interactions I’ve had with companies that have been breached, those who have had their personal data exposed (myself included) and law enforcement in various jurisdictions around the world.
Data Breach Vectors
Data breaches have become a fact of modern digital life. Our desire to convert every aspect of our beings into electronic records has delivered both wonderful societal advances and unprecedented privacy risks. It’s an unfortunate yet unavoidable reality that the two are inextricably linked and what follows describes the risks we are now facing as a result.
The term “data breach” is used broadly to refer to many different discrete vectors by which data is exposed to unauthorised parties. Some are as a result of malicious intent, some occur due to unintentional errors and yet others are inadvertent by-products of software design; they’re “features”, if you will.
Malicious incidents are the events we immediately associate with the term “data breach”. In this case, a “threat actor” has deliberately set out to gain unauthorised access to a protected system, often with the intention of causing harm to the organisation and their subscribers. We frequently see successful attacks mounted through exploitation of very well-known vulnerabilities with equally well-known defences. They exploit flaws in our software design, our security measures and indeed our human processes. They may be as sophisticated as leveraging previously unknown flaws or “zero days”, yet they’re frequently as simple as exploiting basic human shortcomings such as our propensity to choose poor passwords (and then to regularly reuse them across multiple services).
Especially in recent years with the growing ubiquity of easily accessible cloud services, data breaches often take the form of unintentionally exposed data. The ease today with which a publicly facing service can be provisioned and large volumes of data published to it is unprecedented – it can take mere minutes. Equally unprecedented is the simplicity with which an otherwise secure environment can be exposed to the masses; a single firewall setting or a simple access control change performed in mere seconds is all it takes.
The very design of some online services predisposes them to revealing large volumes of data about their subscriber base. Particularly in systems intended to make people discoverable such as social media or dating sites, we’ve seen many precedents of large volumes of publicly accessible information collated in an automated fashion in order to build a rich dataset. Some may be reluctant to even call this a “data breach”, yet the end result is largely consistent with the previous two examples of malicious intent and unintentionally disclosed data.
We Often Don’t Know Until Years Later
We simply have no idea of the scale of data that has been breached. We can measure what we know and conclude that an alarmingly large amount of personal information has been exposed, but it’s the extent of the “unknown unknowns” that is particularly worrying.
Increasingly, we’re realising the significance of the problem. During 2016 and 2017 in particular, we saw many incidents where large data sets belonging to well-known brands appeared after having been originally obtained years earlier. These incidents were frequently of a scale numbering in the millions, tens of millions or even hundreds of millions of customers. In some cases, the organisations involved were aware of a successful attack yet consciously elected not to disclose the incident. Many of the recent large breaches involved companies that were aware of unauthorised access to their systems, yet the scope of the intrusion was not known until years later when large volumes of data appeared in the public domain. In other cases, intrusions were entirely unknown until the organisation’s data appeared publicly.
I’ve been personally involved in the disclosure of multiple incidents of this nature directly to the organisations involved. They’re consistently shocked – shocked – to learn that a breach has taken place, having seen no prior indicators that their data may have fallen into unauthorised hands. The passage of time frequently means that root cause analysis isn’t feasible and indeed many of these systems have been fundamentally rearchitected since the original event.
This raises the questions: how much more data is out there, and what are we yet to see from events that have already occurred? We simply don’t know, nor is there any feasible way of measuring it. The only thing I can say with any certainty is that there is still a significant amount of data out there that we’re yet to learn of.
A Perfect Storm of Data Exposure
Data breaches have been increasing in regularity and the incidents themselves have been increasing in terms of the volume of records impacted. There are a variety of factors contributing to what can only be described as a “perfect storm” of data exposure:
Firstly, as mentioned above, the rapid emergence of cloud services has enabled organisations and individuals alike to publish data publicly with unprecedented ease, speed and cost efficiency. The low barrier to entry has meant that it’s never been easier to collect and store huge volumes of information and very little technical expertise is required to do so.
Then we have the ever-increasing array of online services collecting data; social media sites, e-commerce, education, even cooking – every conceivable area of human interest has an expanding array of online services. In turn, these services request personal information in order to subscribe or comment or interact with others. As a result, the number of pools of user data on the internet grows dramatically and so too does the total attack surface of information.
The more recent emergence of the class of device we refer to as the “Internet of Things” or IoT is another factor. We’re now seeing data breaches that expose information we simply never had in digital format until recently. In recent times, we’ve seen security vulnerabilities that have exposed data in cars, household appliances and even toys (both those targeted at children and those designed for consenting adults to use in the bedroom). All internet connected and all leaking data that didn’t even exist in digital form a few years ago.
Data Maximisation as a Feature
Exacerbating both the prevalence and impact of data breaches is a prevailing attitude of “data maximisation”, that is, the practice of collecting and retaining as much data as possible. We constantly see this when signing up for services: requests for information that is entirely unnecessary for the function of the service itself, for example personal attributes such as date of birth and physical address, both data points that frequently provide no functional benefit to the service.
Further compounding the data maximisation problem is the fact that the retention period of the data usually extends well beyond the period in which the service is used by the owners of the data. (Indeed, even that term – “data ownership” – can be interpreted to mean either the service retaining it or the individuals to whom the data relates.) For example, signing up to an online forum merely to comment on a post means the subscriber’s personal data will usually prevail for the life of the service. There are many precedents of data breaches occurring on sites where those who’ve had their personal data exposed haven’t used the service for many years.
Individuals’ personal data is also frequently collected without their informed consent, that is, it’s obtained without them consciously opting in to the service and the purpose for which it’s being used. Our data is aggregated, “enriched” and sold (often entirely legally) as a commodity; the people themselves have become the product and alarmingly, we’re seeing the aggregation services themselves suffering data breaches both in the US and abroad. In this environment, it’s the organisations holding personal data that control it, not the people to whom that data rightfully belongs.
I frequently hear from subscribers of HIBP that they have no recollection of using a service that’s suffered a data breach. The alert they receive after the data is exposed is often the first they’ve heard of the service in many years. In fact, so much time has often passed that they frequently reject the notion that they were members of the site until they discover the welcome email in their archives or perform a password reset and logon to the service. The site was providing zero ongoing value to them yet it still retained their data and subsequently exposed it in a breach.
Data maximisation prevails as a practice for a variety of reasons. One is that it’s increasingly cost effective to simply retain everything possible, once again due to the emergence of cloud services as well as rapidly declining storage costs. Another is that purging old data comes at a cost; this is a feature that has to be coded and supported. It also creates other challenges around technical constraints such as referential integrity; what happens to records such as comments on a forum when the creator of that comment has their record purged? Organisations view data on their customers as an asset, yet fail to recognise that it may also become a liability.
Attempts by individuals to reduce their data footprint often lead to frustration. There’s frequently no automated way of purging their own personal information and in some cases, organisations have even imposed a financial barrier in a “user-pays to delete” model. Even then, the purging of data from a live system is unlikely to purge that same data from backups that may stretch back years and we’ve seen many cases of the backups themselves being exposed in breaches.
We need to move beyond an attitude of data maximisation and instead embrace the mantra of “you cannot lose what you do not have”.
There’s a Lack of Accountability and a Propensity to Repeat Mistakes
Time and time again, we see serious data breaches that impact people’s lives around the world and we ask “Is this the watershed moment?” “Is this the one where we start taking things more seriously?” Yet clearly, nothing fundamental has changed and we merely repeat the same discussion after the next major incident.
There’s a lack of accountability across many of the organisations that suffer breaches as they’re not held strictly liable for the consequences. Despite the near-daily headline news about major security incidents, there remain fundamental shortcomings in the security posture of most organisations. They trade off the cost of implementing security controls against the likelihood of a data breach occurring and inevitably, often decide that there’s not a sufficient return on investment in further infosec spend. This attitude contributes to both the frequency and severity of serious security incidents and without greater accountability on behalf of the organisations involved, it’s hard to see the status quo changing. There’s not enough incentive to do things right and not enough disincentive to do them wrong therefore the pattern repeats.
Data Breach Redistribution is Rampant
An important factor exacerbating the impact of data breaches is the prevalence with which the data is redistributed once exposed. Data breaches often spread well beyond the party that originally obtained them, and the ease with which huge volumes of digital information can be replicated across the globe means that once exposed, data spreads rapidly.
There are multiple factors driving the spread of data that has been breached from a system. One is commercial incentives; data breaches are often placed for sale in marketplaces and forums where they may be sold many times over. The personal information contained within these breaches holds value for purchasers, ranging from the ability to compromise other accounts belonging to the victims (frequently due to the prevalence of password reuse unlocking other unrelated services) to value contained within the accounts themselves (such as the ability to acquire goods at the victims’ expense) through to outright identity theft (the accounts contain data attributes that help attackers impersonate the victim). In short, there is a return on investment for those who pay for data breaches, which has created a thriving marketplace.
More worrying though in terms of the spread of data breaches is the prevalence with which they’re redistributed amongst individuals. Data breach trading is rampant and I often liken it to the sharing of baseball cards; two people have assets they’d like to exchange so they make a swap. However, unlike a physical commodity, the trading of data breaches replicates the asset as each party retains their original version, just like making a perfectly reproduced photocopy. Most of those involved in the redistribution of this data are either children or young adults, doing so as a hobby. Often, they’ll explain it away as a curiosity; they wanted to see if any of their friends (or sometimes, enemies) were involved. Other times they’re experimenting with “hash cracking”, the exercise of determining the original passwords when a system stores them as cryptographic hashes. They rarely believe there are any adverse consequences as a result of redistributing the data.
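The “hash cracking” pastime described above is simple enough to sketch in a few lines. The example below is a toy illustration with invented usernames, passwords and a tiny wordlist (none of it from any real breach); it shows why unsalted, fast hashes such as MD5 offer almost no protection once the hashes circulate:

```python
import hashlib

def md5_hex(password: str) -> str:
    """Unsalted MD5, the kind of fast hash many breached sites used."""
    return hashlib.md5(password.encode("utf-8")).hexdigest()

# What an attacker holds after a breach: usernames and unsalted hashes.
# These accounts and passwords are invented for illustration.
breached = {
    "alice": md5_hex("sunshine1"),
    "bob": md5_hex("correct horse"),
}

def crack(hashes: dict, wordlist: list) -> dict:
    """Recover plaintext passwords by hashing every wordlist candidate
    and matching the digests against the breached hashes."""
    lookup = {md5_hex(word): word for word in wordlist}
    return {user: lookup[h] for user, h in hashes.items() if h in lookup}

recovered = crack(breached, ["password", "sunshine1", "letmein", "correct horse"])
print(recovered)  # both toy passwords fall to a four-word dictionary
```

Real cracking tools work the same way at vastly larger scale, testing billions of candidates per second against fast hashes, which is why slow, salted password hashing schemes exist.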
The exchange of data breaches is enormously prevalent. Sites hosting hundreds or even thousands of separate incidents are easily discoverable on the internet; there are often terabytes of data simply sitting there, available for anyone to download. Forums dedicated to the discussion of data breaches frequently post links to new breaches or to old data which may have finally surfaced. These are not hidden, dark web sites; they are easily discoverable mainstream websites.
Exposed Data is (Often) Immutable and (Usually) Irrevocable
Many of the data classes exposed in breaches are immutable, that is they cannot be changed. For example, people’s names, their birth dates, security questions such as their mother’s maiden name or even the IP address they were using at the time (which can be used to geographically locate them and potentially tie them to other exposed accounts). Other data attributes may be mutable albeit with a high degree of friction; an email address or a physical address, for example. They may both change over time but the effort of doing so is high and it’s unlikely to happen merely because that data has been exposed in a breach.
Paradoxically, the data that is most easily changed is frequently the data people are most concerned about. Credit cards, for example, are often referenced in disclosure statements as not having been impacted by a breach yet a combination of fraud protection by banks and the ability to cancel and refund fraudulent transactions whilst issuing a new card means the real-world impact on card holders is frequently limited and short lived.
Exposed passwords are also easily changed and the impact of them falling into unauthorised hands can be minimal, albeit with one major caveat: The prevalence of password reuse means that the exposure of one system can result in the compromise of accounts on totally unrelated systems. But the password itself is readily changed and unlike immutable personal attributes, doing so immediately invalidates its usefulness.
Frequently, I’m asked how someone’s data can be removed from the web; they’re a victim of a data breach, now how do they retrieve that data and ensure it’s no longer in unauthorised hands? In reality, that’s a near impossible objective, exacerbated by the aforementioned redistribution of data breaches. Digital information replicates so quickly and is so difficult to trace once exposed, there’s no putting the data breach genie back in the bottle.
The Emerging Prevalence of OSINT Data and the Power of Aggregation
Data available within the public domain is often referred to as “Open Source Intelligence” or OSINT data. OSINT data can be collated from a range of sources including social media, public forums, education facilities and even public government records to name but a few. It’s data we either willingly expose ourselves or is made publicly available by design. Often, the owner of the data is not aware of its publicly available presence; they inadvertently published it publicly on a social media platform or had it put on public display without their knowledge by a workplace or school. In isolation, these data points may appear benign yet once aggregated from multiple sources they can expose a huge amount of valuable information about individuals.
Data aggregation – whether it be from OSINT sources alone or combined with data breaches – is enormously powerful as it can result in a very comprehensive personal profile being built. One system may leak an email address and a name in the user interface, another has a data breach and exposes their home address then that’s combined with an OSINT source that lists their profile photo and date of birth. Suddenly, many of the ingredients required to identify and indeed impersonate the individual are now readily available.
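Mechanically, the aggregation described above amounts to little more than joining partial records on a shared key. A minimal sketch, with entirely fictitious names and data:

```python
# Three fragments about the same (fictitious) person, as they might surface
# from a leaky user interface, a breach dump, and an OSINT source.
ui_leak = {"email": "jane@example.com", "name": "Jane Citizen"}
breach = {"email": "jane@example.com", "home_address": "1 Sample St"}
osint = {"email": "jane@example.com", "date_of_birth": "1985-03-12",
         "photo_url": "https://social.example/jane.jpg"}

def aggregate(*sources):
    """Merge partial records that share a common key (here, the email).

    Each source contributes the attributes it has; later sources fill in
    what earlier ones were missing.
    """
    profile = {}
    for source in sources:
        profile.update(source)
    return profile

profile = aggregate(ui_leak, breach, osint)
```

Each fragment is benign on its own; the merged `profile` is what makes impersonation feasible, which is the point the paragraph above is making.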
The Impact on Knowledge-Based Authentication
Knowledge-based authentication (KBA) is predicated on the assumption that an individual holds certain knowledge that can be used to prove their identity. It’s assumed that this knowledge is either private or not broadly known thus if the individual can correctly relay it then, with a high degree of confidence, they can prove their identity. KBA is typically dependent on either static or dynamic “secrets” with the former being the immutable data attributes mentioned earlier (date of birth, mother’s maiden name, etc.) and the latter being mutable such as a password.
The risks associated with static KBA have changed dramatically in an era of data breaches and an extensive array of OSINT sources. Further to that is the frequency and effectiveness of phishing attacks which provide nefarious parties with yet another avenue of obtaining personal data from unsuspecting victims. In years gone by, personal data attributes used for verification processes had very limited exposure. For example, one’s date of birth or mother’s maiden name would normally only be known within social circles which in the past, meant people you physically interacted with. A government issued ID was typically only provided to professional services that had limited exposure.
Now, however, the availability of static KBA data has fundamentally changed yet its use for identity verification prevails. The threat landscape has progressed much more rapidly than the authentication controls yet we’re still regularly using the same static KBA approaches we did before the extensive array of OSINT sources we have available today and before the age of the data breach.
Closing
Data breaches will continue to grow in both prevalence and size for the foreseeable future. The rate at which we willingly share personal data will also continue to grow, particularly with an increasing proportion of the population being "internet natives" who've not known a time when we didn't willingly share information online. Increasingly, the assumption has to be that everything we digitise may one day end up in unauthorised hands, and the way we authenticate ourselves must adapt to be resilient to this.
Venezuela's leader blames right-wing saboteurs for power outage
The blackout leaves Caracas and 17 states without power for hours. President Nicolas Maduro calls it an 'electricity coup'; a consultant suspects human error.
CARACAS, Venezuela — With parts of Venezuela still dark after a mysterious blackout that left the capital and 17 states without electricity, President Nicolas Maduro laid the blame on opposition sabotage as his government scrambled to respond to the power failure.
In a Twitter message Tuesday night, Maduro said the failure was due to an "electricity coup" engineered by the "extreme right." Claiming he had authorized several new power projects to address shortages, Maduro maintained that his opponents were conspiring to "destabilize" the country.
The power shutdown began midday Tuesday after an apparent failure in high-voltage transmission lines in Aragua and Guarico states, which led to a total outage in several of the country's most populous areas.
Power was restored by early Wednesday to most of metropolitan Caracas, the capital, and a dozen states, according to the government. But by midday, officials in five other states said they were still without power.
Maduro did not offer details on how such sabotage might have been accomplished. He also blamed unnamed opposition figures for an outage in February, and last month declared a state of emergency, sending police and soldiers to occupy power installations.
Jose Aguilar, a Chicago-based international power systems consultant, said Wednesday that he suspects that the failure resulted from human error combined with the fact that the country's largest power plant, Planta Centro, was operating at only 82% capacity in the hours before the crash.
"The knowledge that the national grid had been unstable for 48 hours leading up to the collapse invites the suspicion that there was some imprudent action taken by the government operators," Aguilar said. He called on the government to conduct a thorough investigation to correct the problems and refrain from "frivolous" political accusations.
Aguilar and other critics say independent analyses of the grid's problems are difficult to carry out because the government has kept power-use data secret since 2010.
Interior Minister Miguel Rodriguez said members of the Sebin domestic intelligence service had been "deployed across the nation to protect the population." Energy Minister Jesse Chacon said Tuesday that an investigation has been launched.
Venezuela's power system is plagued by problems caused by a lack of investment and maintenance, critics have charged. But Maduro has countered that the outages are a result of a "low intensity campaign" that is leading up to a "final assault on the revolutionary base."
Maduro took power in January as his leftist mentor, President Hugo Chavez, was suffering from cancer and near death. Maduro won election in April to fill out Chavez's term, but has never displayed his predecessor's charisma or enjoyed as much certainty that he can hold together the disparate groups that make up the ruling coalition.
Special correspondents Mogollon reported from Caracas and Kraul from Bogota, Colombia.
Each month, hundreds of thousands of Couchsurfers have real-world experiences with one another. References are a powerful snapshot of these experiences, and our community writes more than 1 million references each year. People write references to express gratitude or provide helpful feedback, and they read references to learn more about people they haven’t met yet. Because we believe references should be as timely, honest, and relevant as possible, we’re happy to announce that we’re rolling out some enhancements to the way references work on Couchsurfing.
1. If you use Couchsurfing to host someone or find a host, we’ll now ask you directly to submit feedback about your experience, rather than hoping you’ll remember! You’ll only see these prompts and be able to leave a “Host” or “Guest” reference if you’ve planned and communicated via Couchrequest or Public Trip.
“Host” and “Guest” references should accurately reflect real-world experiences. We can’t prompt you to write a reference if we don’t know that you planned your stay on Couchsurfing. When you’re planning to meet someone, use Couchrequests to discuss and finalize your dates.
If you don’t plan your stay via Couchrequest, you’ll be unable to leave references as a Host or Guest. You’ll still be able to leave a Personal reference.
2. You have 14 days to write a Host/Guest reference. References are published when a host and guest have both written a reference, or after 14 days have passed.
References should be honest and timely, so we want to allow you to write your own reference about your experience, without influence from what your Host or Guest has to say. We also don’t want to let too much time pass so you can write your reference while your stay is still fresh in your mind!
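For the curious, the publishing rule described in point 2 boils down to a simple predicate: a reference goes live when both sides have written one, or once the 14-day window has closed. A sketch of that logic (illustrative only, not Couchsurfing's actual implementation):

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=14)  # the double-blind writing window

def is_published(host_ref, guest_ref, stay_ended, now):
    """A reference becomes visible when both sides have written one,
    or when 14 days have passed since the stay, whichever comes first."""
    both_written = host_ref is not None and guest_ref is not None
    window_closed = now - stay_ended >= WINDOW
    return both_written or window_closed
```

The double-blind design means neither party can read the other's reference before committing to their own, which is what keeps the feedback honest.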
3. You’ll no longer be able to edit or delete a reference.
All references, including Personal references, should be a snapshot of a past experience, rather than a continually evolving post with updates. We want to eliminate the back and forth nature of references so other Couchsurfers can trust that what they see more accurately reflects how each of you felt about your stay.
If someone stays with you or you stay with someone more than once, you can now write them more than one Host/Guest reference. You can still only leave one Personal reference.
4. You can now leave private feedback.
There are some situations where you may want to provide additional feedback about your stay for our team to review. This confidential feedback option allows you to tell our team anything about your stay that you may prefer not to post publicly.
We’re excited about improving the accuracy and timeliness of references. We look forward to your feedback.
A Michigan teen pleaded guilty to charges of murder and attempted criminal sexual conduct in the attempted rape and slaying of his cousin last year.
In court Monday, Joshua Keyzer, now 16, admitted to killing his 21-year-old cousin, Kassandra Keyzer, at his grandmother's home in Wayland Township, Michigan, on June 21, 2014.
Kassandra, the mother of a 2-year-old boy and a student at Aquinas College, was engaged to be married when her cousin attacked her, tried to rape her, and cut her throat with a knife. Although he was only 15 at the time, Joshua Keyzer was charged as an adult in her murder.
Joshua Keyzer pleaded guilty to multiple charges on May 11.
Keyzer appeared via video link from Allegan County Juvenile Detention Center on Monday. As Judge Margaret Zuzich-Bakker asked him questions about the gruesome murder, Keyzer responded in a halting voice, his leg twitching on camera.
"I, uh, killed her, ma'am," he said.
"Did that involve some sort of sexual penetration?" the judge asked.
"Yes, your honor," he responded.
"And did you do that while you were involved in killing Kassandra?" the judge continued.
"Yes, your honor," Keyzer said.
As part of a plea deal, Keyzer won't face a life sentence without the possibility of parole, which is the mandatory sentence in Michigan for adults convicted of first-degree or felony murder. Prosecutors will instead seek a sentence of 40 to 60 years in prison when Keyzer is sentenced June 15, according to MLive.com.
State troopers arrived at Sharon Keyzer's house last June after the 67-year-old woman called 911 and said her grandson had attacked her and slashed her neck.
WOOD-TV, a local station, obtained chilling audio of the 911 call:
Police arrived to find Sharon Keyzer clutching a blood-soaked towel to her neck, according to WOOD-TV. But what the cops found downstairs was truly horrific.
Kassandra Keyzer's body was sprawled on an ottoman in the basement soaked in blood, according to a police report. The report described a "deep laceration starting from her chin to the middle of her right cheek," and a stab wound in her neck. They noted that "the words 'My Bad' were written in apparent blood on the north wall above the counter."
Sharon Keyzer (left) and granddaughter Kassandra Keyzer (right)
Police said that the young woman's clothes were torn and her underwear was cut off. They found Joshua Keyzer, who had been living with his grandmother part-time, on the roof of the house holding a hunting knife. The boy, who was covered in blood, jumped off the roof in an apparent escape attempt, but state troopers subdued him with a taser.
Keyzer received two psychiatric evaluations and was declared competent to stand trial, according to the New York Daily News. The confessed killer offered no explanation for the murder in court Monday.
Two earthquakes that struck Italy this week were retribution for the country's support of the UNESCO resolution disregarding the Jewish connection to Jerusalem, Israeli Deputy Minister for Regional Cooperation Ayoob Kara said.
"I'm sure that the earthquake happened because of the UNESCO decision," Kara, a member of the ruling Likud Party, wrote in a memo, the Ynetnews website reported.
Ironically, the Israeli politician was on a state visit to the Vatican when the quakes hit central Italy on Wednesday, killing one and injuring 10 people.
Earlier the same day, UNESCO (United Nations Educational, Scientific and Cultural Organization), passed a resolution criticizing Israel for its handling of the holy site in Jerusalem called Temple Mount by Jews, and Haram al-Sharif by Muslims.
The document was adopted after heated debate over its wording, and particularly the Arabic names used in the document. Italy was among the nations voting in favor of the resolution.
Israel blasted UNESCO and its Arab members for trying to undermine Jewish connections to the holy site.
Kara arrived in the Vatican in a fruitless effort to avert the resolution, but still managed to have a small chat with the leader of the Catholic Church.
According to Kara, Pope Francis strongly disagreed with the resolution.
As for surviving the natural disaster, the Israeli politician said that going through the earthquake "was not the most comfortable of experiences, but we trusted that the Holy See would keep us safe."
"He (the Pope) even said publicly that the holy land is connected to the Nation of Israel," the deputy minister stressed.
And one for the secretary. A long line of Model S in front of Tesla’s China headquarters.
Reports in Chinese media say Tesla China has a record inventory of 2,301 Model S cars. The number is based on a comparison of the 2014 sales number and the 2014 import number: in 2014 Tesla sold 2,499 Model S cars in China, but the company imported 4,800 cars, leaving a gap of exactly 2,301.
It is the first time that the exact 2014 sales number for Tesla in China has been revealed; 2,499 cars over twelve months works out to an average of 208.25 a month. The numbers have been confirmed by various Tesla employees and ‘insiders’, speaking anonymously to said media.
Why did Tesla ship so many cars to China, and where are they now?
An anonymous source, quoted by Chinese media, says the huge inventory is partly caused by the many cancellations of orders by Chinese buyers. The cancellations went up in October last year after Tesla announced the new P85D, with many customers cancelling their original order and going for the D instead. But in many cases, still according to the same source, their original-ordered cars were already shipped and underway.
Normally, a deposit prevents most cancellations. But not in China. Officially, Tesla China used a two-phase deposit system: 15,000 yuan ($2,394) when the car was ordered and 250,000 yuan ($39,920) when the car was ready to be shipped to China. In reality, however, Tesla China only collected the first deposit, with another anonymous ‘Tesla insider’ saying that collecting the second deposit was “too difficult”. This lenient policy caused a lot of cancellations.
Price for a Model S in China starts at 648,000 yuan ($103,428) for the S 60.
Late last year Tesla’s U.S. headquarters found out about the cancellations and decided to raise the order deposit to 50,000 yuan, while leaving the second deposit unchanged. The decision to raise the deposit was made in the U.S. without any consultation with Tesla China, according to sources from within the company, who also say Tesla China warned against the raise.
And Tesla China appears to have been right. The higher deposit caused an immediate drop in sales in late 2014 as potential buyers considered 50,000 yuan far too much. The drop continued in 2015, with only 120 cars sold in January.
It is unknown exactly how many orders were cancelled. Still, considering the 2014 monthly average of 208.25 cars sold and the fact that the P85D was announced only in October, it seems unlikely that cancellations account for the whole 2,301-car inventory. Another apparent reason was a too-optimistic forecast of fleet sales, along with cars that were ordered for stores that were scheduled to open, but didn’t.
Tesla China suggested two ways to deal with the unsold cars: 1) sell the inventory cars with a 20% discount. 2) use third-party sellers, such as large dealer groups, to get rid of the inventory, as Tesla still has very few stores in China.
Both suggestions were rejected out of hand by Tesla’s headquarters in the United States, again according to Chinese sources from within Tesla China. This led to unease and anger, with sources saying staff in China feels that they were not being taken seriously by Tesla in the U.S., with one employee describing the U.S. side as ‘stubborn’.
These feelings surfaced again earlier this month after Tesla CEO Elon Musk announced Tesla’s intention to cut one third of its staff in China. Anonymous employees called Musk ”too authoritarian and too impatient” for the complicated Chinese market.
Back to the inventory. Where are the cars now?
Most of the cars are stored at a facility at the Port of Tianjin, which is the port of entry for almost all vehicles coming from the United States. A large number of cars are parked on a lot near the Tesla store in Pudong District in Shanghai.
About 170 to 200 (sources vary) cars are stored in a ‘warehouse’ in Beijing. The warehouse belongs to Lei Xingxing Automobile Corporation, the largest Mercedes-Benz dealer in the capital with eight large facilities all over town.
The whereabouts of the other cars are currently unknown.
InMotion invests $5 million to establish first U.S. manufacturing in Blacksburg
-Italy-based manufacturer of electric motors and drives brings 80 jobs to Montgomery County-
FOR IMMEDIATE RELEASE
October 7, 2014
Montgomery County, Virginia
Economic Development Department
Montgomery County, Virginia – InMotion, a leading manufacturer of electric motors and drives for electric and hybrid vehicles, today announced it will invest over $5 million to establish its first U.S. manufacturing operations in Montgomery County. The company, a subsidiary of Italy-based Zapi S.p.A., will locate in the Technology Manufacturing Building in Blacksburg. InMotion anticipates growing to 80 employees in Montgomery County over the next three years.
Speaking about today’s announcement, Governor McAuliffe said, “We are thrilled that InMotion will join the corporate roster of manufacturing companies in Virginia and continue to expand in the New River Valley. Increasing advanced manufacturing jobs is vital to building a New Virginia Economy. We look forward to InMotion’s future success in Montgomery County and in the Commonwealth for years to come.”
“InMotion’s choice of Virginia for the location of its first U.S. manufacturing operation is great news for Montgomery County and the Commonwealth,” said Virginia Secretary of Commerce and Trade Maurice Jones. “The County offers the infrastructure to meet the company’s current and future growth needs, research and development capabilities, and a robust workforce. We are grateful to InMotion for contributing to the economic health of the New River Valley and the Commonwealth.”
InMotion is a global supplier of advanced software, power electronics, controls, motors and generators for electric vehicles with over 300 employees in offices in Europe, Virginia, Japan and China. With annual sales topping $105 million, InMotion’s products are used by leading manufacturers of electric lift trucks, mobile off-highway vehicles, personal mobility devices, elevators, buses, and refrigeration systems. InMotion is a subsidiary of Zapi S.p.A., a leader in advanced controller technology based in Italy with annual sales of over $350 million.
“Montgomery County has an outstanding combination of local talent and cost-effective facilities as well as improved access to Virginia Tech and its world-class electric vehicle research,” said Mike Jellen, General Manager of InMotion US. “Our Montgomery County location, with its attractive standard of living, coupled with InMotion’s ground-breaking product development will continue to attract and retain the industry’s best people.”
Montgomery County worked with the New River Valley Economic Development Alliance, Virginia Economic Development Partnership, MBC Development Corporation, and the Town of Blacksburg to support the project. Through a lease agreement with the Economic Development Authority (EDA) of Montgomery County, InMotion will lease approximately 60,000-square-feet of office and manufacturing space at the Technology Manufacturing Building in Blacksburg. Built in 2001, the Technology Manufacturing Building is a 109,000-square-foot facility owned and managed by EDA.
“We are excited to welcome InMotion to Montgomery County,” said Bill Brown, Chairman of the Montgomery County Board of Supervisors. “Our selection by this great Italian company for its first U.S. manufacturing operations speaks volumes to the innovative business climate in Montgomery County and the positive reputation our highly-skilled workforce enjoys around the world.”
“We’re honored to welcome InMotion to Blacksburg,” said Mayor Ron Rordam. “The fact that this international company has decided to locate in our community speaks to the strong work force and incomparable quality of life we offer in Blacksburg and the surrounding region. We look forward to working with InMotion as they expand their business opportunities in the Town.”
Contacts:
Ruth Richey
Montgomery County, Va.
Public Information Office
Phone: (540) 381-6887
E-mail: richeyrl@montgomerycountyva.gov
Mike Jellen
InMotion U.S.
Phone: 408-717-2157
E-mail: mike.jellen@evs-inmotion.com
###
Original design goes to thebigunit3000, Jonas Wilson, I merely helped him tweak and tune the deck through many iterations.
Think it lacks money? Afraid of parasite? Don't be concerned, this deck is undefeated by any and all Silhouettes and Exiles in the jnet casual room. Just install MCH, start searching for assets, and laugh as your opponent complains about no longer being able to read the cards.
The real concern comes in the fear of bodily harm from other tournament goers. If you are taking this to a live event, you should be prepared to defend yourself. No, that isn't really Ben Affleck in the dark alley behind the store, don't believe the lies. Also, keep an eye on your bag and wallet. Play this deck at your own risk. I assume no responsibility for any injuries that may occur as a result of playing this deck.
Hello once again Star Wars: Battlecry fans, it’s time for another update!
2015 has been a big year for the Battlecry team and we thank all of you for your great support from the start. This is such a fantastic project that we all enjoy working on in our free time and can’t wait to share it with everyone.
What have we been up to?
This year we made a lot of progress on every part of the game and released our first video showing off in-engine gameplay on our Cloud City map. We were incredibly excited to show everyone that what we've been working on is real. We decided a video was the perfect way to demonstrate that.
Since the last update we have made massive changes to the layout of portions of Cloud City to improve how they play. Specifically, the outside of the map seen in the video above is no longer wide open and somewhat reflects the design of the Bespin: Platforms map from Pandemic’s 2004 Star Wars: Battlefront game.
The HUD effects have been tweaked to look more holographic and improve visibility. We have been working on improving all of the existing character animations and creating new ones. Bugs and other critical issues that needed to be fixed in order to implement new features took up a considerable amount of our time.
We have been experimenting with new technologies such as virtual reality and we would like to announce basic VR implementation in Battlecry along with 4k support. Examples of which can be seen below:
Image captured from an Oculus Rift Development Kit 1
On December 23 we launched a brand new design for our community forums. Every single part of the forum has been retouched to have a consistent design and fix all known issues. Usability was improved on both desktop and mobile by making the design familiar to those who have used other popular forum software in the past. On smartphones the forum can also function like an app on both iOS (Safari) and Android (Chrome) by adding it to the home screen.
http://community.swbattlecry.com/
Indie of the Year Awards
We can't forget to mention the 2015 Indie of the Year awards. Thanks to all of your support, we made it into the top 100 games on all of IndieDB! It was an honor for all of us to be among so many great indie titles, but made us realize we have a long road ahead of us.
What are our goals for 2016 and beyond?
We can’t go without some new year’s resolutions of course. First of all we are going to continue to make progress on the game to get it into a fully playable state.
This past year we have not communicated as well as we did in the past. After the video we released showing some very early in-engine gameplay, we may have set the bar a bit too high for the amount of content needed to publish a new development update. We are aiming to communicate more with our community and have less of a gap between updates.
We need more developers!
We are looking for more people to help us continue to make this game a reality. If you or anyone you know would like to help, please apply at http://swbattlecry.com/apply.
We are always looking for:
Programmers (C++)
Environment Artists
Hard-Surface Artists (Weapons, Vehicles)
2D Artists
Animators
Even if you do something different please feel free to contact us! We may not always respond to everyone, but we always make sure to read everything. We apologize if it seems like we have been ignoring anyone.
The New Year
What a year 2015 has been not just for us, but for the Star Wars franchise as a whole. A new Battlefront that is a blast to play despite some minor issues and the long awaited Star Wars: The Force Awakens! We can't wait to see what will happen over the course of the next year…
Happy New Year and may the force be with you,
The Battlecry team
Now Mark Sanford says he is "thankful" for the experience afforded him by his extra-marital affair.
Seriously.
In an op-ed written for South Carolina newspapers, Sanford writes:
It is true that I did wrong and failed at the largest of levels, but equally true is the fact that God can make good of our respective wrongs in life. In this vein, while none of us has the chance to attend our own funeral, in many ways I feel like I was at my own in the past weeks, and surprisingly I am thankful for the perspective it has afforded.
Look, Mark, I don't know if you've ever been to a funeral, but what you've been through over the past few weeks -- a five-day vacation with your mistress in Argentina and another five-day vacation with your wife in an undisclosed location (all the while pulling down a fat paycheck) -- isn't anything like a funeral.
If you had resigned as governor, maybe you could claim this was like a funeral of sorts, but judging by the fact that you are keeping yourself in the public eye, you're holding out hopes for your political career, aren't you? In fact, you are trying to nurse it back to life, right? A sort of a political reincarnation, no?
Truth is, you're trying to seduce the Republican electorate by spinning this whole sordid affair as a story of God -- with yourself as the hero, struggling to return from the beyond, a story of your own rebirth. To wit:
I’ve been humbled and broken as never before in my life, and as a consequence have given up areas of control in a way that I never have before. And it is my belief that this will make me a better father, husband, friend and advocate. It’s in the spirit of making good from bad that I am committing to you and the larger family of South Carolinians to use this experience both to trust God in his larger work of changing me and, from my end, to work to becoming a better and more effective leader.
Listen, Mark, you might as well have gone all the way and started talking about what it will be like at your own resurrection, because that's what you're going for here.
Jebus. Talk about a complex. You've got the fever something fierce.
Research Article
1Department of Orthodontics, School of Medicine with the Division of Dentistry in Zabrze, Medical University of Silesia in Katowice, Traugutta Square 2, 41-800 Zabrze, Poland
2Department of Microbiology and Immunology, School of Medicine with the Division of Dentistry in Zabrze, Medical University of Silesia in Katowice, Jordana 19, 41-808 Zabrze, Poland
3Department of Conservative Dentistry with Endodontics, School of Medicine with the Division of Dentistry in Zabrze, Medical University of Silesia in Katowice, Akademicki Square 17, 41-902 Bytom, Poland
4Department of Dental Surgery, School of Medicine with the Division of Dentistry in Zabrze, Medical University of Silesia in Katowice, Akademicki Square 17, 41-902 Bytom, Poland
Correspondence should be addressed to Agnieszka Machorowska-Pieniążek; agamach@onet.pl
Received 5 December 2016; Revised 12 February 2017; Accepted 7 March 2017; Published 14 March 2017
Academic Editor: Koichiro Wada
Copyright © 2017 Agnieszka Machorowska-Pieniążek et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Few reports have been published on the early oral microbiota in infants with various types of cleft palate. We assessed the formation of the oral microbiota in infants with complete cleft lip and palate (CLP) or cleft soft palate (CSP) in the neonatal period (time T1) and again in the gum pad stage (time T2). Culture swabs were taken from the tongue, palate, and/or cleft margin at T1 and T2. We analysed the prevalence (percentage) of each bacterial species and the proportions in which the palate and tongue were colonised by each microorganism. At T1, Streptococcus mitis (S. mitis) was the most frequently detected species in subjects with CLP or CSP (63% and 60%, resp.). A significantly higher frequency of methicillin-sensitive Staphylococcus aureus (S. aureus MSSA) was observed in the CLP group than in the CSP group. At T2, significantly higher percentages of S. mitis, S. aureus MSSA, Staphylococcus epidermidis, and members of the Enterobacteriaceae family were noted in CLP infants than in CSP infants. S. mitis and Streptococcus salivarius appeared with the greatest frequency on the tongue, whereas Streptococcus sanguinis was predominant on the palate. The development of the microbiota in CLP subjects was characterised by a significant increase in the prevalence of pathogenic bacteria.
This paper is dedicated to the memory of colleague and friend Professor Wojciech Król, who recently died
1. Introduction
The oral cavity, which remains sterile throughout prenatal development, becomes a diverse ecosystem colonised by numerous microorganisms during the first hours following delivery. The skin and mucous membranes of neonates are colonised by microbiota as a result of contact with the external environment. A significant part of the oral microbiota in the early neonatal period originates from the mother and is a transient population of microorganisms consisting of intestinal bacteria (in neonates born naturally) [1]. The resident microbiota in this period depends mainly on external factors, including gestational age, mode of delivery, type of feeding, the length of hospital stay following delivery, and general condition [1–10]. The complex structure of the oral cavity, with its numerous recesses, the mucosal folds of the palate, and the invaginations of the cheeks and tongue, creates niches with different pH values, local oxygen concentrations, redox states, ionic compositions, buffer capacities, hydration, access to saliva, and mechanical interactions. These conditions are favourable for the development of a diverse ecosystem based on the interactions between bacteria and the host environment [11, 12]. The early oral microbiota occurring within several hours following delivery is composed of viridans streptococci and Streptococcus salivarius (S. salivarius), which are commensals permanently colonising the oral cavity [2]. Along with other bacteria, they participate in the formation of a “colonisation cascade” that determines the future indigenous microbiota [2, 5, 6].
Congenital orofacial malformation affects the structure and functions of the oral cavity, thereby significantly modifying its characteristics [13]. As a result, such malformations may exert influence on the microbiota of the environment. Orofacial clefts are the most common congenital developmental malformation of the oral cavity [14]. Neonates with complete cleft lip and palate (CLP) are characterised by the existence of communication between oral and nasal cavities extending from the upper lip and nasal vestibule to the end of the soft palate. This condition adversely affects natural sucking or even impairs the ability to swallow food [15]. Moreover, neonates and infants with orofacial cleft require specialised care to maintain proper hygiene of the incisive bone, nasal passages, and the oral cavity with special attention paid to preparation for future surgical procedures [14]. Cleft soft palate (CSP) is a less severe form of orofacial cleft with the continuity of the lips and hard palate maintained. Dysmorphia of the oral cavity in patients with this malformation affects the dorsal part of the oral and nasal cavities, which are characterised by significantly reduced communication compared to CLP [16].
Previous studies have confirmed that patients with orofacial cleft are at increased risk for the development of caries and periodontal diseases compared to noncleft children [13, 14]. Furthermore, changes in the amount and composition of oral microbiota have been reported in subjects with different types of cleft palate during deciduous or permanent dentition [17] and as the result of surgical or orthodontic treatment [18–20].
Both abnormal morphology and improper function of the oral cavity in newborns with cleft palate create a different environment from that of healthy neonates. Therefore, these abnormalities may affect oral microbiota [21]. Few reports have been published on the early microbiota in neonates and infants with various types of cleft palate.
The primary aim of the study was to compare the oral microbiota of infants with CLP and infants with CSP. The second aim was to assess the development of the oral microbiota in subjects with complete CLP and age-matched subjects with CSP during the neonatal period and then in the gum pad stage of the infancy period, before surgery.
2. Materials and Methods
2.1. Design and Participants
This study was conducted from May 2012 to December 2014 in the Developmental Anomaly Outpatient Clinic at the Centre of Dentistry and Specialist Medicine, Medical University of Silesia in Zabrze. The study was approved by the Bioethics Committee of the Medical University of Silesia in Katowice, Poland (KNW/0022/KB1/54/12). All legal guardians of the subjects enrolled in the study provided written consent for their participation.
The study materials consisted of microbiological smears from the oral cavity mucosa collected from neonates and infants with cleft malformation who were consulted and treated at the Developmental Anomaly Outpatient Clinic of the University Centre of Dentistry and Specialised Medicine in Zabrze, Poland.
The inclusion criteria for newborns were as follows: (1) complete CLP or CSP, (2) gestational age over 37 weeks, (3) birth weight of 2,500–4,000 g, and (4) Apgar score of 9-10 at 1 min and of 10 at 5 min. The exclusion criteria were (1) the coexistence of orofacial cleft with other developmental abnormalities, (2) antibiotic therapy, (3) respiratory tract infections, (4) tube feeding, (5) treatment with a palatal plate, (6) natal or neonatal teeth, (7) deciduous teeth at T2, (8) past surgical repair of cleft lip and/or palate, and (9) failure to appear for the follow-up visit between the eighth and eighteenth week of life. Figure 1 shows a flowchart of the process used to screen and select the participants.
Figure 1: Number of subjects recruited and flow of patients within the study.
At the first visit, all parents were provided with feeding instructions adapted individually to the needs of each patient. All mothers were encouraged to put the child to the breast. All patients with CLP were bottle-fed with a broad or standard nipple. Eight patients with CLP were fed partially with modified milk and partially with breast milk from the bottle; the remaining neonates were given modified milk only. Two patients with CSP were breast-fed (for 3 and 6 weeks, respectively); after this period they were additionally fed with modified milk. Four neonates with CSP were bottle-fed (Haberman Feeder) with modified milk. The other patients were fed with a regular nipple, partially with modified milk and partially with breast milk from the bottle.
Feeding problems occurred in 16 patients (10 with CLP and 6 with CSP). The problems were related to long feeding times (>40 min), choking, coughing, crying during feeding, and regurgitation. In these patients, weight gain within the first month of life was lower by ~90–110 g per week.
The subjects were divided into two groups (Table 1). The first group consisted of 30 infants with unilateral or bilateral complete CLP. In this group, smears were obtained from palatal mucosa on the cleft margin (sample A1) and from the dorsum of the tongue (sample A2). The second group comprised 25 subjects with isolated CSP, and smears were obtained from the palatal mucosa (sample B1) and the dorsum of the tongue (sample B2). The samples were collected by rubbing the mucous membrane with a sterile cotton swab.
Table 1: Descriptive statistics and statistical comparisons between CLP group and CSP group.
The smears were collected twice from the subjects of both groups. The first smear was obtained within the first or the second week of life (time T1), and the second was taken between the eighth and the eighteenth week of life (time T2), prior to cleft lip and palate repair. All samples were obtained using a sterile EUROTUBO® collection swab with Amies transport medium (DELTALAB, Rubi, Spain) and were delivered to the Department of Microbiology and Immunology in Zabrze within 1 h, where the material underwent analysis.
2.2. Microbiological Examination
The samples collected for microbiological investigation were smears from the palatal mucosa and smears from the dorsum of the tongue. All the studied samples were inoculated on solid culture media from Biomerieux (Marcy l’Etoile, France): Columbia agar with 5% sheep blood, MacConkey agar, Mannitol salt/Chapman agar, and Sabouraud agar. The bacteria were grown on suitable media at 37°C in aerobic conditions. Yeast fungi of the Candida species were cultured on the selective solid medium Sabouraud agar at a temperature of 35°C in aerobic conditions. Before identification, the studied microorganisms were cultured and isolated on the solid nonselective medium Columbia agar with 5% sheep blood in order to evaluate the morphology of the pure culture, haemolytic activity, and pigment production. We also used related selective solid media, such as MacConkey agar for rods and Chapman agar for cocci. After isolation and further culture of each microorganism, their species were identified using the following sets of reagents: Slidex Staph Plus, ID Color Catalase, Oxidase Reagent, and Api Candida (Biomerieux, Marcy l’Etoile, France), as well as STAPHYtest 24, STREPTOtest 24, ENTEROtest 24N, and NEFERMtest 24N (Erba-Lachema, Brno, Czech Republic). In the case of Gram-positive, catalase-negative beta-haemolytic cocci, we analysed the presence of group antigens using the Slidex Strepto Plus kit, and a sensitivity test to optochin was performed to clearly differentiate pneumococci from other streptococci. Species of microorganisms were identified by conventional methods using the commercial MIKROLATEST identification kits (STAPHYtest 24, STREPTOtest 24, ENTEROtest 24N, and NEFERMtest 24N) manufactured by Erba-Lachema. The MIKROLATEST kits are a standardised micromethod system for the rapid, reliable routine identification of the most clinically important bacteria and yeasts, in each case on the basis of 24 biochemical tests placed in microwells.
For evaluation of identification results, we used TNW LITE 6.5 software as recommended by Erba-Lachema. Identification of the microorganism species using these reagents was performed according to the vendors’ protocols.
2.3. Data Collection
At T1 and T2, we assessed the prevalence (percentage) of each bacterial species found in the oral cavities of subjects in the CLP or CSP group. We also analysed the proportions in which the palate (A1, B1) and tongue (A2, B2) were colonised by each microorganism.
The intensity of bacterial growth was considered using the following scale: (1) scant growth, (2) medium growth, and (3) abundant growth. The evolution of the microbiota between T1 and T2 was assessed separately for both groups by analysing the number of patients for whom a given microorganism was detected at both timepoints or only at T1 or T2.
2.4. Statistics
Descriptive statistics are expressed as number and percentage and as median and interquartile range, as appropriate. The distributions of continuous variables were compared with the Mann–Whitney U test and proportions with the chi-square test.
The differences in the frequency of occurrence of each bacterial species between the CLP and CSP groups at T1 and T2 were assessed using the chi-square test.
The McNemar test was used to compare within-group differences in the frequency of detection of single bacterial species between T1 and T2. Odds ratios (ORs) with 95% confidence intervals (CIs) were computed. All statistics were two-tailed, and the significance level was defined as p < 0.05. Statistical analyses were performed using Statistica v.10.
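The between-group comparison and odds ratio computation described above can be sketched in a few lines of Python. This is an illustrative sketch only: the counts are hypothetical, not the study's data, and it uses SciPy rather than the Statistica software the authors report. The Wald method for the OR confidence interval is a standard choice, though the paper does not specify which method was used.

```python
# Sketch: chi-square comparison of detection frequencies between two groups,
# plus an odds ratio with a Wald 95% confidence interval.
# Counts below are hypothetical examples, not data from the study.
import math
from scipy.stats import chi2_contingency

# 2x2 table: rows = group (CLP, CSP), columns = species detected (yes, no)
table = [[12, 18],   # CLP: 12 of 30 subjects positive
         [3, 22]]    # CSP: 3 of 25 subjects positive

chi2, p, dof, expected = chi2_contingency(table)

# Odds ratio and Wald 95% CI on the log-odds scale
a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
print(f"OR = {odds_ratio:.2f} [95% CI, {ci_low:.2f}-{ci_high:.2f}]")
```

For the paired within-group comparison between T1 and T2, `statsmodels.stats.contingency_tables.mcnemar` would apply to the corresponding paired 2x2 table.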
3. Results
3.1. Gum Pad Stage of the Neonatal Period (T1)
The subjects with CLP were delivered by caesarean section significantly more frequently than subjects in the CSP group (p = 0.038). The study groups were age- and birth weight-matched (Table 1).
The genus Streptococcus was found most frequently in both the CLP and CSP groups in the neonatal period (63%), whereas Streptococcus mitis (S. mitis) was the most frequently observed species (63.3% and 60.0%, resp.) (Table 2). Moreover, the frequency of methicillin-sensitive Staphylococcus aureus (S. aureus MSSA) was significantly higher in the CLP group (p = 0.020) than in the CSP group (Table 2).
Table 2: Statistical comparison of microorganism frequency (prevalence), colonisation, and growth intensity between CLP group and CSP group at T1.
The majority of Streptococcus species showed abundant growth in subjects of both the CLP and CSP groups (Table 2).
A difference in microbiota colonisation between the palate (A1, B1) and tongue (A2, B2) was observed in both groups. This difference was related to the bacteria that appeared with frequencies higher than 20%. S. mitis and S. salivarius were dominant on the tongue (A2, B2), whereas Streptococcus sanguinis (S. sanguinis) prevailed on the palate (A1, B1). The remaining bacteria did not demonstrate significant differences in their colonisation of the palate and tongue (Table 2).
3.2. Gum Pad Stage of the Infancy Period (T2)
S. salivarius was the most frequently isolated bacterial species in both CLP and CSP patients (100% and 84%, resp.; Table 3). Furthermore, compared to the CSP group, subjects from the CLP group presented a significantly higher percentage of the following bacterial species: S. mitis (p = 0.002), S. salivarius (p = 0.022), S. aureus MSSA (p < 0.001), Staphylococcus epidermidis (S. epidermidis) (p < 0.001), and the members of the Enterobacteriaceae family, that is, Enterobacter cloacae (E. cloacae) (p = 0.007), Klebsiella pneumoniae (K. pneumoniae) (p < 0.001), and Klebsiella oxytoca (K. oxytoca) (p < 0.001) (Table 3).
Table 3: Statistical comparison of microorganism frequency (prevalence), colonisation, and growth intensity between CLP group and CSP group at T2.
The proportions in which the palate (A1, B1) and the tongue (A2, B2) were colonised by each microorganism were similar to those observed in the neonatal period. S. mitis and S. salivarius were dominant on the tongue (A2, B2), whereas S. sanguinis prevailed on the palate (A1, B1) (Table 3).
Moreover, infants from the CLP group and the CSP group presented with Streptococcus agalactiae (S. agalactiae) (6.6% and 16%, resp.), and infants with CLP also presented with Streptococcus pyogenes (S. pyogenes) (13.3%) (Table 3).
3.3. Development of the Microbiota (between T1 and T2)
3.3.1. CLP Group
In the CLP group, between T1 and T2, a statistically significant increase was observed in the prevalence of 9 bacterial species: S. mitis (p = 0.006), S. sanguinis (p = 0.012), S. salivarius (p < 0.001), S. aureus MSSA (p < 0.001), S. epidermidis (p < 0.001), Neisseria spp. (p = 0.007), E. cloacae (p = 0.021), K. pneumoniae (p = 0.006), and K. oxytoca (p < 0.001). Moreover, a statistically significant decrease in the percentage of Gemella morbillorum (p = 0.041) was revealed at T2. The odds of detecting S. salivarius in the CLP group were 22 times higher at T2 than at T1, OR = 22 [95% CI, 2.96–16.21]. Likewise, the odds for S. aureus MSSA, OR = 16 [95% CI, 2.12–12.65], and K. oxytoca, OR = 18 [95% CI, 2.40–13.83], were 16 and 18 times higher, respectively, at T2 than at T1 (Table 4).
Table 4: Development of the oral microbiota between T1 and T2 in CLP subjects.
3.3.2. CSP Group
A statistically significant increase in the frequency of S. salivarius was observed (p = 0.022) at T2, OR = 5.5 [95% CI, 1.219–24.814]. The frequency of the occurrence of the remaining bacteria changed insignificantly (Table 5).
Table 5: Development of the oral microbiota between T1 and T2 in CSP subjects.
4. Discussion
The study presents the prevalence of oral microbiota in several-day-old newborns with CLP or CSP and the changes in the microbial population during the infant predental period prior to surgery, which have not previously been described in the literature.
The prevalence of nonpathogenic commensal oral bacteria (i.e., S. mitis and S. salivarius) was revealed at T1 in subjects from both the CLP and CSP groups. Long and Swenson confirmed the ability of S. mitis and S. salivarius to adhere to the oral epithelial cells of 1-day-old newborns [22]. The early colonisation of the oral cavity by streptococci facilitates further colonisation by other strains and plays a crucial role in maintaining a healthy oral cavity throughout life [5, 22]. Thus, mechanisms exist to enable physiological colonisation of the mucous membrane by nonpathogenic microbiota in both CLP and CSP patients during the neonatal period, despite the different local conditions related to the occurrence of cleft. An interesting finding of this study is the demonstration of the occurrence of S. sanguinis in tongue and palate swabs of toothless infants with CSP and CLP at T1 and T2. Arief et al., when analysing the saliva of 3–39-month-old patients with CLP, did not find S. sanguinis in either the preoperative or the postoperative period [21]. On the other hand, Caufield et al., in their long-term studies of saliva samples and dental plaque from infants, demonstrated that S. sanguinis colonisation precedes that of S. mutans and that both compete for niches on the tooth surface [23]. Other studies also show that colonisation by both species depends on tooth emergence [24, 25]. However, some authors have shown that S. sanguinis [26] and S. mutans [27] may colonise the oral mucosa of predental infants and the dorsum of the tongue, which is an important ecological niche [28, 29]. S. sanguinis is considered to be the antagonist of S. mutans, and early colonisation with S. sanguinis delays colonisation by S. mutans, which is considered to be a significant factor in the development of caries [23, 30]. Caufield et al. raise the question of whether this phenomenon should be used in the prevention of caries, inducing early colonisation by S. sanguinis and thus delaying colonisation by S. mutans [23].
The distribution of Streptococcus species in the oral cavities of both CLP and CSP subjects demonstrated differences between the tongue and palate. The majority of S. salivarius and S. mitis strains were cultured from samples collected from the tongue (A2, B2), whereas S. sanguinis mainly derived from palate samples (A1, B1). This observation is consistent with the reports of other authors who confirmed a selective ability in the adherence of streptococci to oral epithelial cells [12, 31]. The majority of streptococci collected from the palate and the tongue showed abundant growth in both the CLP and CSP groups.
Group A β-haemolytic streptococci were not found in CLP or CSP subjects in the neonatal period. However, other potentially virulent pyogenic streptococci were observed, including Streptococcus pneumoniae (S. pneumoniae), Streptococcus dysgalactiae (S. dysgalactiae), and Streptococcus intermedius (S. intermedius). Subjects with CLP presented with all members of the Streptococcus anginosus group (i.e., S. anginosus, S. constellatus, and S. intermedius). These strains can cause acute infections, particularly in immunodeficient individuals. These infections include brain, mouth, or liver abscesses and endocarditis, whereas S. agalactiae can cause bacteraemia in neonates, acute pulmonary insufficiency, and cerebrospinal meningitis [20]. In a study of the microbiota in patients with lip cleft prior to surgical intervention, Cocco et al. did not detect the presence of group A β-haemolytic streptococci in any of the patients. However, the researchers observed the species in only 2.3% of patients with cleft palate [32]. In contrast, Chuo and Timmons detected β-haemolytic streptococci in 11% of positive smears taken from patients with cleft lip and/or palate prior to surgical repair [18]. The presence of β-haemolytic streptococci in the oral cavity of subjects with CLP is related to postoperative complications, such as slow wound healing or the development of abscesses and fistulae [32].
S. aureus was detected in 40% of CLP subjects and in only 12% of CSP subjects in the neonatal period. The difference between the study groups was statistically significant. The variation in the frequency of S. aureus between the two groups may be related to the type of cleft and different local environmental conditions. In patients with CSP, the oral and nasal cavities form two nearly separate environments, as only the distal part of the soft palate is cleft. A different morphology is found in CLP patients, in whom the oral and nasal cavities are connected, which facilitates communication, including the transmission of mucus, food, saliva, air, and microbiota. In a study of CLP with oronasal fistulae, Tuna et al. emphasised the significance of S. aureus transmission from the oral to the nasal cavity in the risk of infection after the surgical treatment of patients with this malformation [19]. A slightly different opinion was expressed by Cocco et al., who questioned the pathogenicity of S. aureus in infants under 1 year of age [32]. Similarly, Jolleys and Savage did not observe an increased number of postoperative complications in patients with S. aureus detected in the preoperative period [33].
In both groups of patients, nonpathogenic streptococci were the most prevalent at T2, in the gum pad stage of the infancy period. Among nonpathogenic streptococci, S. salivarius and S. mitis were the most frequent. S. aureus was the most frequently detected potentially pathogenic strain and was observed in 93.3% of subjects with CLP and significantly less frequently in subjects with CSP (20%).
The formation of the microbiota in subjects with CLP proceeded differently than in CSP subjects. An increased frequency of potential pathogens, mainly S. aureus and S. epidermidis, was observed in the CLP group. The odds ratio for these bacteria increased 16 times with development. These results may be explained by the altered anatomical conditions of the oral cavity, which disturb self-cleaning and the flow of saliva, facilitating the retention of food in the recesses of the cleft and the nasal cavity and changing the physiological exposure of this area to oxygen and carbon dioxide [34].
The significant increase in the members of the family Enterobacteriaceae observed in subjects with CLP (i.e., E. cloacae, K. pneumoniae, and K. oxytoca) may be related to the transmission of this microbiota from the external environment to the oral cavity. The frequent contact of parents’ hands with the mucous membranes of the cleft lip, alveolar ridge, and incisive bone during hygiene procedures in the area of the cleft likely plays an important role in this process. The lip massage recommended by orthodontists as part of preoperative preparations may also significantly contribute to the observed changes in microbiota. Similarly, Cocco et al. indicated a high percentage of CLP subjects with Gram-negative organisms isolated preoperatively [32].
The development of the microbiota in CSP subjects was characterised by a statistically significant increase in the prevalence of S. salivarius, whereas the frequency of bacteria from the Enterobacteriaceae family decreased insignificantly. Therefore, the formation of the oral microbiota in subjects with CSP shows a tendency similar to that of healthy infants, in whom the number of the environmental Gram-negative rods decreases with age [5].
In conclusion, this study shows that (1) the development of the microbiota in subjects with CLP is accompanied by a significant increase in commensal and potentially pathogenic organisms (S. aureus, S. epidermidis, Neisseria spp., K. pneumoniae, and K. oxytoca) and (2) S. aureus was detected in neonates with CLP significantly more frequently than in subjects with CSP. The prevalence of S. aureus increases significantly with the development of the child, and the odds ratio increases 16-fold. Patients with CLP are potentially at an increased risk of developing oral infectious diseases. Early oral health maintenance programs for patients with CLP should be reinforced.
Further research on early oral microbiota of patients with oral clefts and its effect on later infectious diseases of the oral cavity, especially dental caries and periodontal diseases, is needed.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
The authors are grateful to Professor Wojciech Król, who initiated the project, enabled conducting microbiological tests, and had substantial contributions to the conception of the study. This work was supported by Medical University of Silesia in Katowice, academic project (KNW-1-106/K/4/0).
A Crunchie split in half.
Crunchie is a brand of chocolate bar with a honeycomb toffee centre (known as "sponge toffee" in Canada and as "honeycomb" or "cinder toffee" in the UK). It is made by Cadbury and was originally launched in the UK by J. S. Fry & Sons in 1929.[1]
Size and variations
The Crunchie is sold in several sizes, ranging from "snack size" – a small rectangle – to "king size". The most common portion is a single-serve bar, about 1 inch wide, 7 inches long, and ¾ inch deep[2] (2.5 cm × 18 cm × 2 cm).
In the late 1990s there was a range of limited edition Crunchies on sale in the UK. These included a lemonade bar and a Tango Orange bar, in which the chocolate contained the different flavourings. A champagne-flavoured bar was launched for New Year's Eve 1999.[3] In South Africa, Cadbury sold a white chocolate version in a blue wrapper until recently.
In 2003, a short-lived bourbon Crunchie was launched in test markets across the Nashville, Tennessee area in partnership with 7-Eleven. The bourbon Crunchie was not well received because of a boycott initiated by western factions of the Southern Baptist Coalition, and production was subsequently discontinued.[2]
Like other chocolate brands, Crunchie brand ice cream bars and cheesecake are also sold in some countries. Such products contain nuggets of the honeycomb.
In 2006, a "Crunchie Blast" variety of the product was launched, which featured "popping candy" inside the bar. It was soon discontinued, but an ice cream of the same name – Magnum-shaped honeycomb ice cream with popping candy, covered in milk chocolate – is sold in the UK and Ireland.[4]
In 2010, Cadbury launched Crunchie Rocks, a mixture of chocolate, cornflakes and Crunchie.[5]
Until September 2010, Crunchie was produced in the Somerdale, Keynsham plant in Somerset, UK; however, production has now transferred to Cadbury's new plant in Skarbimierz, Poland.[6] Labels for these products do not state a country of origin, instead stating "Made in the EU under licence from Cadbury UK Ltd".
In some countries a competing product called Violet Crumble is available.
Availability
The Crunchie bar is widely available in the United Kingdom, Ireland, Canada, Australia, New Zealand, South Africa and India. It is imported into other countries, including Cyprus, Hong Kong, Malta, Nigeria, Panama, Lebanon, Saudi Arabia, Malaysia, the Philippines, Portugal, Singapore, Sri Lanka, Nepal, Tahiti and, less widely, the United States (more widely in New York City than anywhere else across the continental U.S.). A similar product, with or without a chocolate coating, is sold as sponge candy in the United States, although honeycomb in these forms is also available outside the USA.
Manufacture
During manufacture, the honeycomb toffee is produced in large slabs, and is cut up using a highly focused jet of oil. The use of a blade would lead to fragmentation, while water would dissolve the honeycomb toffee. Oil prevents both of these happening, and produces uniform sharp-edged portions. The honeycomb toffee is then covered with chocolate, cooled, and packaged.[3]
Crunchie Tango and other limited editions
In 2000, a short-lived (but successful) sister chocolate bar was launched, called Crunchie Tango.[7] It was co-produced by Cadbury and Britvic and featured Tango Orange flavouring. Other limited edition flavours included Lemonade, Fiesta Burrito, Champagne and Mint. In the 1960s a Crunchie Peppermint was also available. An "Endless Crunchie" was released in 2013 for Christmas and contained 40 Crunchie bars.
Nutrition information
Average values (UK): Per 100 g / Per 40 g bar
Energy: 2020 kJ (465 kcal) / 775 kJ (185 kcal)
Protein: 3.0 g / 1.6 g
Carbohydrate: 73.5 g / 27.8 g
Fat: 18.4 g / 7.6 g
Advertising
In Australia and New Zealand, Crunchie bars are widely known for having New Zealand's longest-running television advertisement, the "Crunchie Train Robbery" which won many awards[8][9][10] and ran in unchanged form for over 20 years from the late 1970s.[11]
In both Ireland and the United Kingdom, the Crunchie has been advertised since the 1980s with the slogan "Get that Friday feeling". Prior to the 1980s Crunchie was advertised as "Crunchie makes exciting biting".
Literary references
The Crunchie bar is mentioned in Enid Bagnold's 1935 novel National Velvet, as the Brown sisters' sweet of choice for the year.
Stuart buys Bertie a mint Crunchie bar in the 44 Scotland Street book "The Importance Of Being Seven" by Alexander McCall Smith.
Back when I taught at Yale, I used to give a quiz about Plato’s Republic in the first class meeting for one of my upper-level seminars. The students were all supposed to have taken at least one prior course in which the Republic was read, and I wanted to see how well they remembered it. (I also wanted to show them that I meant business.) The quiz had two questions: What is the Republic’s ‘noble lie’? What is its political purpose?
Nearly everyone got the first question right, at least in broad outline. The noble lie is one of the things that’s remembered by everyone who’s ever read The Republic — that, and the notion that none but philosophers are fit to be kings.
The noble lie is the public deception that Plato has his Socrates propose at the end of the Republic’s Book III. It is a myth about human origins — in the most literal immediate sense: a myth about where babies come from. According to this myth, the city’s babies are not born from the bodies of human mothers, but from out of the earth, after having been formed in an underground mineral womb. All share a common maternity — the ground under their feet — yet their souls and their destinies differ, depending on their mineral composition. Those with gold in their souls are destined for the ruling elite. Those whose souls are made up of silver are to be denied that distinction, but are fit to bear arms. Everyone else — the general populace — are left to their private occupations, befitting the baser metals of which they are made.
From their earliest childhood, the citizens of Socrates' city are to be taught that they share the same geological parentage. Yet for some unexplained reason, the specific type of metal in their soul forbids them to come into contact with certain substances or practice certain activities. The commoner sorts — their souls made of iron and bronze — are not to bear arms, nor take part in government. That means they are left to money-making occupations — agriculture, crafts, and commerce. Their gold- and silver-souled brethren, on the other hand, are not to touch money, or possess anything bought with it. Their lodgings and sustenance are to be provided to them by the state, on a communal basis.
So much for the first question. Everyone who’s read The Republic remembers this, more or less. How about the next one? If that’s the noble lie, what’s its political purpose? What rationale does Plato put in Socrates’ mouth, when proposing this myth’s propagation?
I'd get the same answer from almost every student. It didn't matter if they'd read the Republic in a philosophy course or in a course on political theory. It didn't matter if they'd read it in a course taught by me. In all likelihood, it's the answer that comes to mind for anyone reading this: the political purpose of the noble lie is to persuade the common citizens to accept the rule of the guardians (the philosopher kings). It's propaganda designed to inculcate submission and subordination, an ideological prop for a regime of benevolent despots.
There are some things in certain books that we almost always misread, or misremember. Fortunately for us, in this case, it’s a book that’s always worth a re-reading.
Plato isn’t interested in any of that. For better or worse, he takes it for granted that in a well-ordered state — or in a disorderly one, for that matter — some few must rule, and the others obey. He takes it for granted, too, that most people are satisfied with that arrangement, most of the time, so long as they aren’t threatened, or despoiled of their possessions — or would be, anyway, if not over-excited by self-serving demagogues. Be that as it may — the noble lie isn’t intended for them. Plato has Socrates introduce the noble lie at a specific juncture of his argument, to address a different problem entirely.
The problem has nothing to do with the people resenting or chafing against the guardians' rule. The noble lie isn't directed at the common people at all, except incidentally. It's directed at the elite. In particular, it's directed at the elite's silver class, the warrior caste. The stated purpose of the noble lie is to ensure that this class is kept from taking advantage of its armed strength to despoil the city, by teaching its members to shun all contact with money and the things that money can buy. The myth appeals to this caste's love of honor — prestige — while also slyly insinuating the belief that the essence of prestige is to dine in a mess hall and live in a barracks.
The bit about the noble lie comes at the end of a long discussion (taking up much of book II, and all of book III) concerning the selection and training of those who are to guard the good city. (At this point in The Republic, nothing has been said about ruling — “guarding” is treated as a matter of defending against enemies.) When Socrates first raises this issue, he compares guardians to well-bred sheepdogs. Like shepherds’ dogs, the city’s guardians must possess sufficient “spiritedness” (thumos) as to be ready and able to fight off the enemies of the city, and yet they must show perfect gentleness to its friends (that is, the law-abiding citizens).
Socrates goes on in great detail about the proper training for these guardians, which — true to his sheepdog analogy — is essentially a matter of properly disciplined habituation. (Only later in the Republic does it become fully clear that this is all just the first phase of the much longer training of those who are to serve as the city's true, ruling guardians — the philosopher-kings.) Unexpectedly, at the end of that long discussion, it emerges that Socrates himself has little confidence in this educational program's results. Out of every cohort selected to receive this rigorous training, only a fraction will prove worthy and reliable guardians. The would-be guardians are to be subjected to various (unspecified) trials and tests, so as to identify those "who believe throughout their lives that they must eagerly pursue what is advantageous to the city [i.e., the city's good] and be wholly unwilling to do the opposite." Those who are to guard the city must be counted upon, above all else, to "guard" their belief that what is best for the city is best for themselves; they must prove themselves able to withstand such seductions of desire or fear as might becloud or confuse this belief.
Socrates and his companions seem to take it for granted that only a few of the would-be guardians will satisfy this criterion; when put to the test, the rest will prove not to be so reliable after all. And yet Socrates also assumes that these unreliable ones, too — or some of them, anyway — must be kept in the service of the city. For reasons not yet explained, the true guardians must count on retaining the help and support of the others.
(A part of the equation that emerges only later in the Republic: those who bear arms for the city are younger than the true guardians — too young to have been fully tested. Socrates cannot explain this properly at this point in the dialogue, for it is only later, in Book VII, that we are given the reason why the true guardians will have to be much older men and women – it takes that long to be educated as a philosopher, to show oneself worthy of that education.)
The device is needed, Socrates implies, precisely because the armed cohort otherwise cannot be counted upon to identify their own best interest with that of the city; the myth of the metals is to fortify that identification by appealing to their sense of honor — that same ‘spiritedness’ which is the outstanding trait of this caste.
Although Socrates recommends having the noble lie propagated everywhere in the city, this is chiefly in order to increase the likelihood that it will take hold among the warrior caste. It’s a drastic solution, and it isn’t so clear that Socrates himself thinks it’s likely to work. What’s interesting is that he’s so acutely aware of the peril in the problem.
“The most terrible and most shameful thing of all is for a shepherd to rear dogs to help him with his flocks in such a way that… they do evil to the sheep and become like wolves instead of dogs.”
δεινότατον γάρ που πάντων καὶ αἴσχιστον ποιμέσι τοιούτους γε καὶ οὕτω τρέφειν κύνας ἐπικούρους ποιμνίων, ὥστε ὑπὸ ἀκολασίας ἢ λιμοῦ ἤ τινος ἄλλου κακοῦ ἔθους αὐτοὺς τοὺς κύνας ἐπιχειρῆσαι τοῖς προβάτοις κακουργεῖν καὶ ἀντὶ κυνῶν λύκοις ὁμοιωθῆναι.
Republic 416a
False
Claim under review: "There has been a real and very significant advance in just two years. An advance which shows that the intentional homicides that were being committed (in Ciudad Juárez) have fallen by more than 40%."
Author: Enrique Peña Nieto, President of the Republic.
Place and date: Lunch with civil society. Ciudad Juárez, Chihuahua, January 14, 2015.
El Sabueso's verdict:
Can Peña Nieto claim credit for a drop in homicides in Juárez?
Last January, President Enrique Peña Nieto boasted of a "real and very significant advance" in reducing intentional homicides in Ciudad Juárez, Chihuahua. He also gave dates: he said the advance corresponds to the last two years. The years of his government.
After reviewing the numbers, however, El Sabueso concludes that the claim is false.
On the one hand, it is true that if we compare the preliminary homicide investigations (averiguaciones previas) opened in Ciudad Juárez in 2012 with those of 2014, there is a significant drop. The problem is this:
When the data are broken down month by month, they reveal that the decline Peña Nieto spoke of actually occurred during the previous administration. Over the 24 months of the current government, the number has held steady. In other words, there is no drop attributable to this administration.
There is also a further, relevant point: the President spoke of homicides when he was actually referring to preliminary investigations. A single investigation may cover more than one homicide.
The presidency's version
The Office of the President explained to El Sabueso that the President based his statement on a comparison of the 2012 and 2014 totals of preliminary investigations for intentional homicide, figures compiled by the Executive Secretariat of the National Public Security System.
And indeed, comparing the total events recorded in 2012 with those of 2014 (647 versus 389), the decrease is 39.9%.
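That 39.9% figure is easy to verify (a trivial sketch; the function and variable names are our own):

```python
def percent_change(old, new):
    """Relative change from old to new, expressed as a percentage."""
    return (new - old) / old * 100

# Preliminary homicide investigations in Ciudad Juárez, per the
# Executive Secretariat's annual totals.
investigations_2012, investigations_2014 = 647, 389
drop = percent_change(investigations_2012, investigations_2014)
print(round(drop, 1))  # -39.9, i.e. a 39.9% decrease
```

The sign convention makes a decrease negative; the magnitude matches the 39.9% cited above.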
The month-by-month data
Reviewing those same data broken down by month, the figures show that the downward trend in preliminary investigations took place during the previous administration. Indeed, the decline leveled off in June 2012, six months before Peña Nieto took office.
From then until now (the two years the President spoke of), the number of homicide investigations has been constant: an average of 34 per month.
Whose achievement is it? Is it true that "there has been a real and very significant advance in just two years"?
"It is true that homicides have fallen since 2012, but it is a misleading statement: when this government began, in December of that year, the figure was already averaging 30–35. In reality, nothing has changed under Peña," says security analyst Alejandro Hope, after reviewing the figures from the Executive Secretariat of the National Public Security System.
"What the data say is that the government's security policy has had no effect, since for two years we have averaged between 30 and 35 preliminary investigations per month," Hope adds.
For its part, Data4, a firm dedicated to generating, processing, analyzing and visualizing data, concludes that "the decline has been observed since 2010, the peak year of violence in the municipality, which reached a rate of 265 homicides per 100,000 inhabitants."
The pace of the decline, it adds, has slowed over time and now appears to be stabilizing.
Between 2010 and 2011 the rate fell 39 percentage points (INEGI data).
Between 2011 and 2012 the rate fell 63 percentage points (INEGI data).
Between 2012 and 2013 the rate fell 24 percentage points (INEGI data).
Between 2013 and 2014 the rate fell 14 percentage points (data from the Executive Secretariat, the only figures available for the most recent year; unlike INEGI, which counts homicide victims, the Secretariat counts preliminary investigations).
"The decrease the President refers to corresponds to the drop in the rate of preliminary investigations for intentional homicide between 2012 and 2014. In 2012 the rate was 43 per 100,000 inhabitants, and 2014 closed with a rate of 27 per 100,000 inhabitants."
Data4 also makes two further points. First, we still do not know how many homicides occurred in Ciudad Juárez in 2014, and we will not know for another year. Second, "it is incorrect to speak of a decrease in intentional homicides, since the decrease refers to preliminary investigations. And in Ciudad Juárez there are fewer preliminary investigations than there are victims of intentional homicide."
Judging by previous years, "one could expect INEGI to put Ciudad Juárez at a rate close to 37 homicides per 100,000 inhabitants in 2014. If so, the expected drop in the intentional homicide rate (given the projection) would amount to 36 percentage points."
Homicide victims
Finally, the Ciudad Juárez Security and Justice Roundtable (Mesa de Seguridad y Justicia), created under the Todos Somos Juárez security program launched during the Calderón administration to give citizens oversight of progress on justice and security, agrees with this assessment of the homicide peak.
The Roundtable keeps its own count of homicides in the municipality.
"Our highest peak in homicides came in 2010, when there were 359 in a single month. The number dropped dramatically from 2010 to 2012, and since then the trend has continued but the decline has not been as sharp. In May 2012 we had 48 homicides, and it has held steady since," said Mario Dena, coordinator of the Roundtable's indicators committee.
According to the month-by-month data from both the National Public Security System and the Roundtable's citizen registry, the incidence of homicide in Ciudad Juárez has been constant since Peña Nieto took office, so the President's statement of January 14, 2015 is false.
Comments? Suggestions? Want El Sabueso to fact-check a statement? Write to [email protected]
Video-game play (particularly “action” video-games) holds exciting promise as an activity that may provide generalized enhancement to a wide range of perceptual and cognitive abilities (for review see Latham et al., 2013a). However, in this article we make the case that to assess accurately the effects of video-game play researchers must better characterize video-game experience and expertise. This requires a more precise and objective assessment of an individual's video-game history and skill level, and making finer distinctions between video-games that fall under the umbrella of “action” games. Failure to consider these factors may partly be responsible for mixed findings (see Boot et al., 2011).
Assessing Video-Game Experience and Expertise
Current cross-sectional research investigating video-game play has relied on self-reports to distinguish expert video-game players (VGPs) from non-VGPs. Participants who report playing “action” video-games (e.g., Bialystok, 2006; Dye et al., 2009; Dye and Bavelier, 2010) for multiple hours per week for 6 months to a year prior to testing (e.g., Green and Bavelier, 2003; West et al., 2008; Hubert-Wallander et al., 2011) are classified as expert VGPs. Those who report no video-game play in the same period are classified as non-VGPs. Current criteria, however, fail to appreciate the significant difference between VGPs who have played for 5 h per week over the past 6 months and those who have played for 20+ h per week over the past 10 years (who, moreover, would be classified as non-VGPs if currently abstaining from video-game play).
The purpose of cross-sectional research is to test the limits to which perceptual and cognitive processes may or may not be impacted by video-game play, while training studies using appropriate controls establish causal relationships between those differences and video-game play (see Boot et al., 2013). Unfortunately, the assumption that recent video-game experience reflects expertise is mistaken. There is no guarantee that VGP participants used in most current research papers possess either the experience or expertise necessary to be classified as expert VGPs. Similarly, there is no guarantee that individuals classified as non-VGPs, in their past, do not possess the relevant experience or expertise that would qualify them as expert VGPs. The misclassification of expert VGPs, non-VGPs or both, may be the basis of null results in the video-game literature (e.g., Murphy and Spencer, 2009; Irons et al., 2011), and other studies that have not been published.
A few early studies classified participants as expert VGPs and non-VGPs based on performance in a screening video-game (Greenfield et al., 1994; Sims and Mayer, 2002). As long as experimenters are able to set an appropriate performance threshold, this is a valid method of classification. There is, however, a simpler method, used in other areas of expertise research (e.g., musical performance), that assigns expertise on the basis of professional attainment (e.g., highest instrument grade attained) and some objective assessment of skill (e.g., achievements, awards, or rankings). Similar measures of expertise are often freely available to video-game researchers on the internet and in VGPs' in-game statistics. Level of professional attainment in video-game play can be assessed through placings in open tournaments and leagues, and through qualifying, or being invited, to compete in closed tournaments and leagues. Like other competitions, video-game contests occur at local, regional, national and international levels, with each subsequent level representing a higher degree of attainment.
Objective measures of video-game expertise are commonly available in the form of skill ratings and ladder rankings (based on the Elo system used in chess) found in-game or online. For example, Guild Wars 2 and World of Warcraft maintain ratings and rankings of individual players and teams. Some video-games do not assign exact ratings or rankings, but instead assign a token which represents skill level. For example, Counter Strike: Global Offensive assigns players one of 18 emblems ranging from Silver I to The Global Elite. Meanwhile, in Starcraft II, players are divided into different competitive tiers. The top 200 players on a server are in the Grand Master League, followed by the next 2% in the Master League and next 18% in the Platinum League. This is followed by Diamond, Gold, Silver, and Bronze, respectively. Finally, in some video-games, such as Defense of the Ancients II, ratings and rankings are maintained openly by online communities (e.g., joinDota, GosuGamers).
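For readers unfamiliar with how such ratings behave, the standard Elo update is compact enough to sketch (a generic illustration of the chess formula, not any particular game's implementation; the K-factor of 32 is just a common default):

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    """Return player A's new rating after one match against player B.

    score_a is 1.0 for a win, 0.5 for a draw, 0.0 for a loss.
    A player's gain is proportional to how unexpected the result was.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))
    return rating_a + k * (score_a - expected_a)

# Two equally rated players: the winner gains half the K-factor.
print(elo_update(1500, 1500, 1.0))  # 1516.0
# An underdog win pays out more than an even-odds win.
print(elo_update(1400, 1600, 1.0))  # ~1424.3
```

This self-correcting property is why ladder rankings built on Elo-style systems give researchers a usable objective proxy for skill.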
While video-game experience is not well suited to assigning expertise, it can highlight the qualitative and quantitative features of video-game engagement that may underlie expertise and its development. For example, Ericsson et al. (1993) used a diary study with musicians and found expert musicians engaged in more “deliberate practice” than non-experts. Deliberate practice refers to structured task rehearsal for the sake of improving performance, and is contrasted with “play” which is task immersion for the sole purpose of enjoyment. While many people play video-games, very few deliberately practice them. Engaging in deliberate practice is almost certainly also a characteristic feature of video-game expertise, however, video-games' success may come from an ability to blur the lines between deliberate practice and play.
Other relevant features of video-game experience include length of experience and the age at which play began (e.g., Latham et al., 2013b). Unfortunately, the potential variability in video-gaming histories increases the complexity of both variables. As a result, length of experience and age of onset might also be better understood in terms of play and deliberate practice. For example, a VGP may begin regular play during childhood, play more regularly and begin deliberate practice during adolescence, and then cut back to irregular play during tertiary study. Expertise-related changes are likely to reflect not just the accumulation of video-gaming experience but also the nature of that experience, especially during formative years. The human brain is most malleable during childhood and adolescence (Freitas et al., 2011), so perceptual, cognitive and neural changes resulting from intensive training (be it video-gaming, music or some other expertise) may be more likely during this period.
Teasing Apart Major “Action” Video-Game Genres
Video-game researchers have largely restricted their interest to the link between “action” video-game play and perceptual and cognitive performance. The term “action,” however, actually refers to a vast array of different video-game genres. Early video-game researchers noted the significance of video-game type, showing that while spatially-orientated video-games enhanced visual cognition, non-spatially orientated games did not (e.g., Subrahmanyam and Greenfield, 1994; De Lisi and Wolford, 2002). Surprisingly, the importance of video-game genre has only recently been made apparent, with real-time strategy (RTS) games shown to extend beyond the traditional results of enhanced visual cognition to improve higher-order cognitive abilities, such as working memory and cognitive flexibility (e.g., Basak et al., 2008; Glass et al., 2013).
Briefly we highlight four major sub-genres that support international competition. These genres are: first-person shooters (FPSs), RTS, action RTS, and massively multiplayer online role-playing games (MMORPG). It is important to note that the complexity of these genres is greater than can be highlighted here (i.e., team roles, play-styles, meta-game), which may help shape specific perceptual and cognitive demands. In addition, there are many other “action” video-game sub-genres (i.e., driving, sport) with unique demands and potential to provide different sets of enhancements to players.
In RTS games players take control of a race, continually create and deploy worker units to obtain resources, create and expand a base, and create and improve combat units. Using combat units, players must destroy opponents or force them to concede. Countless combinations of build orders and unit compositions exist, which must be executed, controlled and adjusted in real time against opponents. Success relies on the ability to assess, update and plan the most efficient series of mechanical responses. During professional Starcraft II play, players commonly execute up to 250 actions per minute, increasing to over 300 during combat. Other RTS games add further layers of complexity through alternative victory conditions. For example, in Civilization V players can actively pursue victory through science, culture and diplomacy. Given these demands, it is unsurprising that RTS games may emphasize and enhance executive processes.
The term “action,” when used by researchers, however, has typically referred to FPSs. Players aim a targeting reticule at opponents and click in order to eliminate them. Success depends on the ability to make rapid visual judgments and responses. Although the executive demands are lower in FPSs than in RTS games, the demands on the speed and accuracy of visual abilities are far higher. Many FPS games are objective-based and team-based (e.g., Counter Strike: Global Offensive) and include vehicles (e.g., Battlefield 3). Even with these additions, however, success remains highly dependent on the speed and accuracy of basic visual and motor processes.
Action real-time strategy (ARTS) games arose from RTS games whereby players control a single unit with a handful of unique abilities called a “hero.” Often there are hundreds of unique heroes to choose from (e.g., League of Legends has 115 heroes and Defense of the Ancients II has 102). In a game, two teams of five players fight alongside waves of computer-controlled units in order to destroy the opponent base. Eliminating enemy heroes and units confers experience and currency. Experience allows heroes to gain levels which make them more powerful and grant skill points which are used to learn and improve skills. Currency is spent on items that either make a hero more powerful and provide additional skills, or supports the team by granting map vision, temporary invisibility, or revealing hidden units.
The competitive player-vs.-player element of many MMORPGs shares some similarities with ARTS games. Players control a hero who has a whole pool of unique abilities to choose from rather than only a handful. Furthermore, “talent systems” allow players to customize their hero to their specifications. Unlike ARTS games, however, skills, talents, and items are selected prior to competing. With large pools of heroes, items, and abilities, the numerous possible combinations make each game played potentially unique. Success in ARTS games and MMORPGs relies on the ability to rapidly assess opponent hero roles and actions from visual cues. The specific perceptual and cognitive demands are roughly intermediate between those of the RTS and FPS genres.
While there is undoubtedly a large overlap between the skills required to succeed across video-game genres (e.g., the ability to perform precisely timed bi-manual movements in response to complex visual cues), each genre typically has unique perceptual and cognitive demands necessary for success. Specific enhancements may result from these demands. Distinctions between genres are, therefore, of particular importance to researchers conducting training studies and those who wish to target specific abilities.
Researchers investigating expert VGPs typically provide lists indicating the “action” video-games participants report playing, with little appreciation given to the breadth of genres shown. Breadth itself may be another characteristic trait of experts, as the unique capabilities trained by specific tasks in a domain are likely to be advantageous to general performance within the domain as a whole. For expert VGPs, the unique capabilities trained by specific genres are likely to benefit video-game performance in general, and those with greater breadth may also tend to show greater expertise. As a result, the genre of video-games played needs to be considered in conjunction with video-game experience (see Assessing video-game experience and expertise).
Understanding the extent to which video-game play can shape perceptual and cognitive abilities requires testing expert VGPs. Current research, however, mistakenly classifies participants as expert VGPs using only a limited assessment of recent video-game experience, hindering progress in the field. While knowledge of a participant's video-game experience is incredibly useful, it cannot be used to definitively assign expertise. Proper classification of expertise requires the use of professional attainment, objective performance measures, or both. Once expertise has been correctly assigned, differences in experience between experts and non-experts may highlight factors, or combinations of factors, that promote the development and maintenance of expertise. Perhaps more significantly, it may reveal the key(s) to shaping perceptual and cognitive processes.
References
Boot, W. R., Simons, D. J., Stothart, C., and Stutts, C. (2013). The pervasive problem with placebos in psychology: why active control groups are not sufficient to rule out placebo effects. Perspect. Psychol. Sci. 8, 445–454. doi: 10.1177/1745691613491271
Ericsson, K. A., Krampe, R. T., and Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychol. Rev. 100, 363–406. doi: 10.1037/0033-295X.100.3.363
Greenfield, P. M., de Winstanley, P., Kilpatrick, H., and Kaye, D. (1994). Action video games and informal education: effects on strategies for dividing visual attention. J. Appl. Dev. Psychol. 15, 105–123. doi: 10.1016/0193-3973(94)90008-6
Hubert-Wallander, B., Green, C. S., Sugarman, M., and Bavelier, D. (2011). Changes in search rate but not in dynamics of exogenous attention in action video game players. Atten. Percept. Psychophys. 73, 2399–2412. doi: 10.3758/s13414-011-0194-7
Irons, J. L., Remington, R. W., and McLean, J. P. (2011). Not so fast: rethinking the effects of action video games on attentional capacity. Aust. J. Psychol. 63, 224–231. doi: 10.1111/j.1742-9536.2011.00001.x
Murphy, K., and Spencer, A. (2009). Playing video-games does not make for better visual attention skills. J. Articles Support Null Hypothesis 6, 1–20.
Sims, V. K., and Mayer, R. E. (2002). Domain specificity of spatial expertise: the case of video game players. Appl. Cogn. Psychol. 16, 97–115. doi: 10.1002/acp.759
A Ugandan survey found that the population has risen from 786 in 2010 to 880 today, due to conservation efforts
The world's population of mountain gorillas has increased by more than 10% in two years, new census figures show.
A survey carried out in Uganda's Bwindi Impenetrable national park and released by the Ugandan Wildlife Authority has found that numbers of the critically endangered species, Gorilla beringei beringei, have risen from an estimated 786 in 2010 to 880 today.
Threats to the mountain gorilla – including war, habitat destruction and disease – were once thought to be so severe that the species could become extinct by the end of the 20th century, but the population has increased significantly in the last 30 years.
Drew McVey, species programme manager at WWF-UK, who supported the census as part of the International Gorilla Conservation Programme, said he believed the latest increase was due to conservation efforts that had successfully engaged the local community.
"Mountain gorillas have only survived because of conservation. Protected areas are better managed and resourced than they have ever been, and our work is a lot more cross-cutting to address threats - we don't just work with the animals in the national parks, but also with the people."
McVey said conservation now balanced species survival against the needs of an incredibly poor area with high population pressures, for example, tackling the loss of gorilla habitat due to the illegal collection of firewood by providing the community with access to alternative energy sources.
Mountain gorillas, a subspecies of the eastern gorilla, live in mountain forests in only two locations in the world – Bwindi in south-west Uganda and the Virunga Massif, a range of extinct volcanoes that border the Democratic Republic of the Congo, Uganda and Rwanda.
According to the census report, there are more than 400 mountain gorillas in Bwindi, living in 36 distinct social groups, with 16 solitary males. Ten of these social groups are accustomed to human presence for either tourism or research. A 2010 survey counted 480 individuals in Virunga Massif.
"Gorillas are slow breeders," McVey said. "And we're quite impressed with how much the population has increased."
But McVey said this should not be read as a sign that the fight to save the species is over. "Mountain gorillas are only found in protected areas, and outside these areas there are more than 600 people per square kilometre, so there is immense pressure to secure their habitat and pay their way. We haven't got everything right yet, but it's vital we continue to keep working and build on this success."
The number of mountain gorillas has increased from the 2010 estimate of 786 after a count in Uganda’s Bwindi Impenetrable national park. Photograph: Anna Behm Masozera/WWF
The greatest current threats to mountain gorillas are entanglement in hunting snares, disease transfer from humans, and habitat loss for agriculture and livestock.
"Gorillas have almost the same DNA as us, and humans can transmit anything from a common cold to ebola. Gorilla populations are incredibly fragile and sensitive to environmental change. There are only two populations, so disease could easily wipe out an entire population," said McVey.
The prospect of oil exploration in Democratic Republic of the Congo's Virunga national park by petroleum companies has also become a cause for concern.
"More people in Virunga would likely lead to an increase in deforestation, illegal hunting and more snares in the forest," said David Greer, WWF's African great ape programme manager. "At least seven Virunga mountain gorillas have been caught in snares this year and two did not survive. The gorilla population remains fragile and could easily slip into decline if conservation management was to be disregarded in the pursuit of oil money by elites."
The number of mountain gorillas declined dramatically during the 1960s, stabilised during the 1970s and started to increase in the 1980s. Political instability and war prevented a complete census until 1989, when it was revealed that there were 620 individuals.
The war in Rwanda in the early 1990s and years of civil unrest in the DRC led to poaching and destruction of gorilla habitat and made survey and conservation work difficult and dangerous. Since 1996, 140 Virunga rangers have been killed in the line of duty, including one in May.
Many mountain gorillas have become accustomed to human presence and are a major tourist draw. In 2009 Virunga national park – home to the largest mountain gorilla population – received 550 visitors. This year visitors were projected to reach 6,000.
"The amount of revenue and jobs that gorillas generate is so important for these areas that are so desperately poor," McVey said. "People really see gorillas as important for the national and local economies, and a portion of this goes back to conservation efforts and the local community."
But park authorities have been forced to suspend tourism again after fighting, and last month a Congolese rebel group accused of killings, mass rapes and other atrocities was found to be using the proceeds of gorilla treks to fund its insurgency.
From the moment a revert to Islam utters the Shahadah (declaration of faith), his/her life as a Muslim begins.
Some of the top priorities for a new Muslim are learning how to worship properly, perform other acts of worship and live a life in accordance with the Quran as well as the Sunnah of Prophet Muhammad (peace be upon him).
There are, however, some worldly issues that need attention as well. One of them is telling your friends and family that you have embraced the Islamic faith. For many new Muslims, uncertainty about how their families will react to their conversion is very stressful and causes intense anxiety. For others, keeping their Islam a “secret” is debilitating and cripples aspects of their daily life.
Telling your family that you are Muslim can be a difficult undertaking especially if you know that members of your family already have a negative image of Islam. There is a very real possibility that one or all of your family members may shun you as a result of your new faith.
By contrast, there is a much more promising possibility: some or all of your family members may accept that you’ve chosen the Islamic faith and continue the relationship. A new Muslim has no way of knowing how family members and friends will react. Either way, revealing your new state of Islam is both energizing and humbling. This is what I learned firsthand when I told my family, years ago, that I had embraced the Islamic faith.
Revelation
I never wanted to be a Muslim.
What I had heard about Islam was not only negative, but also downright frightening. Women were supposedly treated like “dogs” in Islam and were allegedly not allowed to even set foot in a mosque. I didn’t even want to know about Islam.
Even after marrying my Muslim husband, with the stipulation that I would never become a Muslim, I still had no interest in Islam. That was until my life changed overnight with the news that my grandmother had been brutally murdered in her home. A caretaker, who had worked for her for years, strangled her to death. I did not know how to cope with the loss.
Nothing that I had learned in the Christian faith could help me as I struggled with the grief. I turned to an English translation of the Quran with a skeptical heart so full of rage that I accidentally ripped the index page as I opened it.
As I began to read, God Almighty removed the cloak from my eyes. And as I read, my tears began to soak the pages as I tried to dab them away as quickly as they fell. I soon learned that I had been Muslim for years, unbeknownst to me. It wasn’t long before I became a full-fledged Muslim.
Initial Hesitation
While I was ecstatic to be a Muslim and had an intense urge to acquire as much Islamic knowledge as I could get my hands on, something was holding me back. No one in my Christian family knew that I had become a Muslim and I was reluctant to tell anyone. Growing up, I witnessed countless instances of negative words and imagery concerning racial minorities from some members of my family. I was convinced I would be shunned by my family and, for some reason, felt that I needed their approval to be a Muslim.
Keeping the “secret” of my conversion to Islam caused me to lose sleep and have nightmares, and I often had difficulty catching my breath because I was riddled with anxiety. I would lie awake at night picturing how my family would react and how they would ridicule or humiliate me for being a Muslim. It was almost too much to bear.
During this time period, and by the grace of God Almighty, I happened to buy a biography of the life of Prophet Muhammad (peace be upon him). I learned about how the new converts to Islam were forced to keep their Islam a secret in the beginning for fear of reprisals from the unbelievers. I also read how many new Muslims professed their faith, regardless of the consequences, out of sheer adoration for God Almighty and Prophet Muhammad. I was completely ashamed of myself for not proclaiming my new faith from the tallest rooftop I could find.
Spilling the Beans
My heart was overtaken, and I felt an urgent need to tell everyone not only that I am a Muslim, but also that Islam is the greatest religion in the world.
Given that I was living an entire ocean away, I had to tell my family by phone that I was a new Muslim. The first call I placed was to my mother, who stuttered into the phone that I must be “joking” and that I had better be lying to her. I wasn’t. And I let her know that this is my life and that, if she wanted to be in it, Islam is the new fabric of my being.
The family grapevine apparently worked faster than I could place another international call. By the time I reached my sister on the phone, everyone in my family knew. And just as I had feared, I had created a scandal of epic proportions.
The Fall Out
The best part of telling my family that I am a Muslim is that it brought inner peace and soothed my heart. Over the weeks and months following my announcement, however, my relationship with my mother became so strained that we no longer speak today. Her deplorable speech about Muslims in general, and her pure hatred of Islam, is enough for me to stay away.
Other family members have claimed that they accept my new faith, but still send me verses from the Bible in my email and Christmas cards by post. And a few take great pleasure in making derogatory statements about Islam or Muslims under the cloak of jest, when clearly it isn’t.
Regardless of the fallout, I have not once regretted my decision to tell my family about my Islam. What I regret immensely is that I did not have the courage to tell them on the day I took the shahadah.
If Only I Could
I wish there were a way I could turn back time and reveal my Islam to my non-Muslim family from the beginning. I’ve always felt blessed and humbled to have been shown the light of Islam. I have always felt that God Almighty plucked me out of a place of degradation and set me in a place of dignity.
However, it’s important to note that everyone’s situation is different. I was an adult and was able to reveal my new life as a Muslim from the comfort of my home. There are many new Muslims who may choose to keep their state of Islam a secret for good reason, such as being physically abused or harmed by members of their family.
When revealing your Islam to family members, always weigh the consequences carefully. Envision a worst-case scenario and ask yourself whether you could accept it before you reveal your state of Islam.
Ever wanted an "else" statement in *ngIf? It's finally here, together with some other nice improvements for dealing with Observables in templates. Let's explore them here.
Contents are based on Angular version >=4
If..Then..Else
The ngIf directive gets a nice improvement in Angular version 4.0.0. It has been the target of many critiques, even back in AngularJS (v1.x), because of its lack of an “else” clause. To simulate if-then-else blocks in Angular templates, we had to use two ngIf directives with opposing boolean conditions.
```html
<div *ngIf="isLoggedIn()">
  Hi, {%raw%}{{ user.name }}{%endraw%}
</div>
<div *ngIf="!isLoggedIn()">
  You're not logged in.
</div>
```
In Angular version 4 we now get an “else” instruction as part of the ngIf directive. We can thus transform the above template to the following:
```html
<div *ngIf="isLoggedIn(); else notLoggedIn">
  Hi, {%raw%}{{ user.name }}!{%endraw%}
</div>
<ng-template #notLoggedIn>
  You're not logged in.
</ng-template>
```
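NgIf in version 4 also accepts an explicit then clause, which lets us move the "true" branch into an ng-template of its own as well. A minimal sketch of the same example (the loggedIn template reference name is our own, not from the snippet above):

```html
<div *ngIf="isLoggedIn(); then loggedIn; else notLoggedIn"></div>
<ng-template #loggedIn>
  Hi, {%raw%}{{ user.name }}!{%endraw%}
</ng-template>
<ng-template #notLoggedIn>
  You're not logged in.
</ng-template>
```

Because the then reference is just a binding, it can even be swapped at runtime to switch which template is shown.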
Better Observables support in templates
RxJS and Observables are already used heavily within Angular, even when it comes to rendering async data into a template. When we execute an HTTP call with Angular, we get an Observable in return.
```typescript
import { Component } from '@angular/core';
import { Http } from '@angular/http';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/operator/map';

@Component({
  selector: 'users-list',
  template: `
    <ul>
      <li *ngFor="let user of users">
        {%raw%}{{ user.username }}{%endraw%}
      </li>
    </ul>
  `
})
export class UsersListComponent {
  users;

  constructor(private http: Http) {}

  ngOnInit() {
    this.http
      .get('/api/users')
      .map(res => res.json())
      .subscribe(data => {
        this.users = data;
      });
  }
}
```
In this example, we use the Observable's subscribe function to register a callback for when the data is retrieved from our backend API. Once we have the data, we assign it to a local variable of our component, users, which in turn is bound to the ngFor in our template. The async pipe, already present in Angular version 2, allows us to write this in a more elegant way:
```typescript
import { Component } from '@angular/core';
import { Http } from '@angular/http';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/operator/map';

@Component({
  selector: 'users-list',
  template: `
    <ul>
      <li *ngFor="let user of users$ | async">
        {%raw%}{{ user.username }}{%endraw%}
      </li>
    </ul>
  `
})
export class UsersListComponent {
  users$;

  constructor(private http: Http) {}

  ngOnInit() {
    this.users$ = this.http
      .get('/api/users')
      .map(res => res.json());
  }
}
```
We can directly assign the returned Observable to our users$ variable and bind it in our template. The async pipe (users$ | async) handles the subscription and unsubscription on the Observable for us, which makes it really convenient for binding asynchronous data directly in our templates.
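To appreciate what the pipe does for us, here is roughly the manual bookkeeping it replaces. This is a sketch rather than code from the article: it assumes the same users$ data flow and imports as the previous listing, and shows the Subscription handling we would otherwise have to write (and remember to clean up) ourselves:

```typescript
import { Subscription } from 'rxjs/Subscription';

export class UsersListComponent {
  users;
  private sub: Subscription;

  constructor(private http: Http) {}

  ngOnInit() {
    // Manual subscription instead of `users$ | async` in the template.
    this.sub = this.http
      .get('/api/users')
      .map(res => res.json())
      .subscribe(data => (this.users = data));
  }

  ngOnDestroy() {
    // Forgetting this would leak the subscription when the
    // component is destroyed; the async pipe does it for us.
    this.sub.unsubscribe();
  }
}
```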
Note, the “$” suffix in our variable name is simply a naming convention to communicate this variable holds an Observable.
Related: Safe Navigation Operator, RxJS and Async Pipe tinkering – learn how to use the async pipe to write elegant, RxJS-powered async code: /blog/2016/11/safe-nav-operator-and-async-pipe/
There’s one caveat, though: we cannot access the resolved collection itself within our template. Consider, for instance, if we wanted to display each rendered user entry's position in the collection along with the total number of entries.
Enumerate *ngFor loops using as and async pipes
While in version 2 we had to fall back to subscribing to the Observable in the component class, Angular version 4 now lets us handle such a scenario by assigning the async result from the Observable to a template variable: .. of users$ | async as users . A template variable is a variable declared in our template, just like the user in our ngFor loop statement.
```typescript
import { Component } from '@angular/core';
import { Http } from '@angular/http';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/operator/map';

@Component({
  selector: 'users-list',
  template: `
    <ul>
      <li *ngFor="let user of users$ | async as users; index as i">
        {%raw%}{{ user.username }} ({{ i }} of {{ users.length }}){%endraw%}
      </li>
    </ul>
  `
})
export class UsersListComponent {
  users$;

  constructor(private http: Http) {}

  ngOnInit() {
    this.users$ = this.http
      .get('/api/users')
      .map(res => res.json());
  }
}
```
Leveraging the as keyword with *ngIf
Using the as keyword also works with ngIf. Within the ngIf expression we can again use the Observable directly and assign its result to a template variable. Once the asynchronous call resolves, we can render that data in our template.
```typescript
import { Component } from '@angular/core';
import { Http } from '@angular/http';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/operator/map';

@Component({
  selector: 'users-list',
  template: `
    <div *ngIf="user$ | async as user">
      Hi, {%raw%}{{ user.name }}!{%endraw%}
    </div>
  `
})
export class UserLoginComponent {
  user$;

  constructor(private http: Http) {}

  ngOnInit() {
    this.user$ = this.http
      .get('/api/auth/currentuser')
      .map(res => res.json());
  }
}
```
Check out my Egghead video lesson on how to leverage the new else clause in the *ngIf together with the async pipe for creating a loading indicator.
{% assign video_title = “Show a loading indicator in Angular using *ngIf/else, the as keyword and the async pipe” %} {% assign video_url = “https://egghead.io/lessons/show-a-loading-indicator-in-angular-using-ngif-else-the-as-keyword-and-the-async-pipe" %} {% assign affiliate_client = “eggheadio” %} {% assign affiliate_uid = “lessons/show-a-loading-indicator-in-angular-using-ngif-else-the-as-keyword-and-the-async-pipe” %} {% include video-banner.html %}
Conclusion
These two additions to the ngIf and ngFor directives make working with Observables directly within templates a lot easier and more convenient.
Try them out yourself in this runnable Plunker.
I find that this is the album I listen to most, hands down. Like so few other bands, they have managed to create an album that is not only thematically cohesive, but listenable beginning to end. One of those rare 'perfect' albums. I still can't stop listening to "Sick, Sick, Sick" because of its hard-driving guitar bridge. I find it pumps me up, gives me energy to get through the day. Same for the punk-tinged "Battery Acid". The guitar riffs on "3's & 7's" always grab me, along with the bitter, biting lyrics. I never get tired of listening to this album because it pairs dark, deeply felt emotion with serious musicianship. My brain gets bored easily, and I never tire of counting out the rhythms for each instrument. Not one note is a throwaway. Everything is precise, except for Josh Homme's voice. Funny how the precision of the instrumentation is what makes it fascinating, yet Homme's voice is deliberately blurred, stretched, muttered, altered. By making his voice hard to interpret, he makes you listen closer. It even lets you interpret what he's saying for yourself, since you might not be sure. Listen closely and you will find that Homme's insights and observations are perceptive and deeply felt. I still listen to it often because it speaks for me and yet still challenges me. I find new things in it all the time. I guess you could say it engages me deeply on all levels: emotional, spiritual, intellectual, and musical. One of the best albums ever made.
14th March 2013 – Step into Luigi’s quaking shoes, strap on the Poltergust 5000 – the ghost-catching and puzzle-solving vacuum cleaner – and clear out all the ghosts in Luigi’s Mansion 2, launching exclusively for the Nintendo 3DS family of systems on 28th March.
Luigi returns to help his mad-scientist friend, Professor E. Gadd, by sucking up the ghosts in each of the mysterious mansions filled with puzzling contraptions that must be manipulated to proceed. Armed with a torch and the Poltergust 5000, which can be upgraded with new features such as the Strobulb light to stun the wacky wraiths and the Dark-Light Device to reveal invisible clues and objects, Luigi is fully equipped to tackle any paranormal activity!
As you explore, keep a close eye on your surroundings and investigate every nook and cranny to bring the secrets of the mansions to light. Experimenting with the Poltergust’s functions to either suck things up or blow them away will often be key to making your way: rolling up a carpet may reveal a concealed switch, while peeling off loose wallpaper can expose a secret doorway. As you come up with different ways to use your tools, you’ll also uncover fiendishly hidden gems for your collection and gather stacks of cash to gain equipment upgrades.
The adventure is brought to life with rich, cartoon-like graphics in immersive auto-stereoscopic 3D. Each mansion has its own characteristics and equally the variety of ghosts roaming the mansions’ halls and rooms are full of personality, fun and mischief, lending a light-hearted touch to the exploration of the haunted houses.
In addition to the solo adventure, up to four players can get together locally or online to venture into the Thrill Tower in the multiplayer mode, where each controls a differently-coloured Luigi to tackle three varied challenges awaiting on its floors. In Hunter mode, work together to hunt down ghouls and clear each floor before the clock strikes zero. In Polterpup mode, chase the adorable ghost dog with your Dark-Light Device in tow to catch him. In Rush mode, frantically hunt for the escape hatch that allows your team Luigi to climb up to the next floor.
Nintendo of Europe reveals that the European retail package of Luigi’s Mansion 2 will have an exclusive glow-in-the-dark cover while stocks last. Players who pre-order the game may also receive a limited edition Boo Anti-Stress Ball to settle their nerves when the going gets too spooky!
Luigi’s Mansion 2 will make its ghostly appearance across Europe on 28th March 2013 exclusively for Nintendo 3DS, both as a packaged game with glow-in-the-dark cover at retail and as a digital download from Nintendo eShop.
The head of Russia's main security agency says Caucasus rebels are believed to have carried out two suicide bombings on Moscow's subway system that killed 36 people.
Officials say two female suicide bombers blew themselves up on trains as the subway was packed with rush-hour passengers Monday morning.
In a televised meeting with President Dmitry Medvedev, the head of the Federal Security Service said preliminary investigation points to terrorists connected to the restive Caucasus region that includes Chechnya.
Alexander Bortnikov said the assessment was based on fragments of the bombers' bodies. He did not elaborate.
Emergency Ministry spokeswoman Svetlana Chumikova said 23 people were killed in an explosion shortly before 8 a.m. at the Lubyanka station in central Moscow. The station is underneath the building that houses the main offices of the Federal Security Service, or FSB, the KGB's main successor agency.
A second explosion hit the Park Kultury station about 45 minutes later. Chumikova said at least 12 were dead there. The ministry later said 38 people were injured.
"I heard a bang, turned my head and smoke was everywhere. People ran for the exits screaming," said 24-year-old Alexander Vakulov, who said he was waiting on the platform opposite the targeted train at Park Kultury.
"I saw a dead person for the first time in my life," said 19-year-old Valtin Popov, who also was standing on the opposite platform.
Moscow Mayor Yuri Luzhkov said both explosions were believed to have been set off on the trains.
"The first data that the FSB has given us is that there were two female suicide bombers," Luzhkov told reporters at the Park Kultury site.
The blasts practically paralyzed movement in the city center as emergency vehicles sped to the stations.
In the Park Kultury blast, the bomber was wearing a belt packed with plastic explosive and set it off as the train's doors opened, said Vladimir Markin, a spokesman for Russia's top investigative body. The woman has not been identified, he told reporters.
A woman who sells newspapers outside the Lubyanka station, Ludmila Famokatova, said there appeared to be no panic, but that many of the people who streamed out were distraught.
"One man was weeping, crossing himself, saying 'thank God I survived'," she said.
The last confirmed terrorist attack in Moscow was in August 2004, when a suicide bomber blew herself up outside a city subway station, killing 10 people.
Responsibility for that blast was claimed by Chechen rebels and suspicion in Monday's explosions is likely to focus on them and other separatist groups in the restive North Caucasus region.
Russian police have killed several Islamic militant leaders in the North Caucasus recently, including one last week in the Kabardino-Balkariya region. The killing of Anzor Astemirov was mourned by contributors to two al-Qaida-affiliated Web sites.
The killings have raised fears of retaliatory strikes by the militants.
In February, Chechen rebel leader Doku Umarov warned in an interview on a rebel-affiliated Website that "the zone of military operations will be extended to the territory of Russia ... the war is coming to their cities."
Umarov also claimed his fighters were responsible for the November bombing of the Nevsky Express passenger train that killed 26 people en route from Moscow to St. Petersburg.
The Moscow subway system is one of the world's busiest, carrying around 7 million passengers on an average workday, and is a key element in running the sprawling and traffic-choked city.
Helicopters hovered over the Park Kultury station area, which is near the renowned Gorky Park.
——
Associated Press Writers Jim Heintz and Mansur Mirovalev in Moscow contributed to this report.
Weekly Epidemiological Record, 22 December 2017, vol. 92, 51/52 (pp. 781–788)
The International Health Regulations (IHR) – 10 years of global public health security
Index of countries/areas
Index, Volume 92, 2017, Nos. 1–52
Weekly Epidemiological Record, 15 December 2017, vol. 92, 50 (pp. 761–780)
Review of global influenza activity, October 2016– October 2017
Monthly report on dracunculiasis cases, January-October 2017
Weekly Epidemiological Record, 8 December 2017, vol. 92, 49 (pp. 749–760)
Schistosomiasis and soil-transmitted helminthiases: number of people treated in 2016
Weekly Epidemiological Record, 1 December 2017, vol. 92, 48 (pp. 729–748)
Meeting of the Strategic Advisory Group of Experts on immunization, October 2017 – conclusions and recommendations
Weekly Epidemiological Record, 24 November 2017, vol. 92, 47 (pp. 717–728)
Progress towards poliomyelitis eradication: Pakistan, January 2016–September 2017
Performance of acute flaccid paralysis (AFP) surveillance and incidence of poliomyelitis, 2017
Weekly Epidemiological Record, 17 November 2017, vol. 92, 46 (pp. 701–716)
Global routine vaccination coverage, 2016
Progress in rubella and congenital rubella syndrome control and elimination – worldwide, 2000–2016
Weekly Epidemiological Record, 10 November 2017, vol. 92, 45 (pp. 681–700)
Progress report on the elimination of human onchocerciasis, 2016–2017
Country Immunization Information System Assessments (IISAs) in Kenya (2015) and Ghana (2016)
Weekly Epidemiological Record, 3 November 2017, vol. 92, 44 (pp. 661–680)
Update on vaccine-derived polioviruses worldwide, January 2016–June 2017
Progress with the implementation of rotavirus surveillance and vaccines in countries of the WHO African Region, 2007–2016
Weekly Epidemiological Record, 27 October 2017, vol. 92, 43 (pp. 649–660)
Progress towards regional measles elimination – worldwide, 2000–2016
Monthly report on dracunculiasis cases, January-September 2017
Weekly Epidemiological Record, 20 October 2017, vol. 92, 42 (pp. 625–648)
Recommended composition of influenza virus vaccines for use in the 2018 southern hemisphere influenza season
Zoonotic influenza viruses: antigenic and genetic characteristics and development of candidate vaccine viruses for pandemic preparedness
Weekly Epidemiological Record, 13 October 2017, vol. 92, 41 (pp. 609–624)
Executive summary of the 9th meeting of the WHO working group RT-PCR for the detection and subtyping of influenza viruses
Executive summary of the 6th meeting of the WHO Expert Working Group of the GISRS for Surveillance of Antiviral Susceptibility
Continuing risk of meningitis due to Neisseria meningitidis serogroup C in Africa: revised recommendations from a WHO expert consultation
Progress towards eliminating onchocerciasis in the WHO Region of the Americas: elimination of transmission in the north-east focus of the Bolivarian Republic of Venezuela
Weekly Epidemiological Record, 6 October 2017, vol. 92, 40 (pp. 589–608)
Summary of global update on preventive chemotherapy implementation in 2016: crossing the billion
Global programme to eliminate lymphatic filariasis: progress report, 2016
Weekly Epidemiological Record, 29 September 2017, vol. 92, 39 (pp. 573–588)
Armenia, Maldives, Sri Lanka and Kyrgyzstan certified malaria-free
Malaria elimination: report from the inaugural global forum of countries with potential to eliminate malaria by 2020
Monthly report on dracunculiasis cases, January-August 2017
Weekly Epidemiological Record, 22 September 2017, vol. 92, 38 (pp. 557–572)
Global leishmaniasis update, 2006–2015: a turning point in leishmaniasis surveillance
Control of visceral leishmaniasis in Somalia: achievements in a challenging scenario, 2013–2015
Weekly Epidemiological Record, 15 September 2017, vol. 92, 37 (pp. 537–556)
Meeting of the International Task Force for Disease Eradication, June 2017
Weekly Epidemiological Record, 8 September 2017, vol. 92, 36 (pp. 521–536)
Cholera, 2016
Performance of acute flaccid paralysis (AFP) surveillance and incidence of poliomyelitis, 2017
The International Health Regulations (IHR) – 10 years of global public health security
Weekly Epidemiological Record, 1 September 2017, vol. 92, 35 (pp. 501–520)
Global leprosy update, 2016: accelerating reduction of disease burden
Weekly Epidemiological Record, 25 August 2017, vol. 92, 34 (pp. 477–500)
Cholera vaccines: WHO position paper – August 2017
Monthly report on dracunculiasis cases, January-June 2017
Weekly Epidemiological Record, 18 August 2017, vol. 92, 33 (pp. 453–476)
Progress towards poliomyelitis eradication: Afghanistan, January 2016–June 2017
Human cases of influenza at the human-animal interface, January 2015–April 2017
Health conditions for travellers to Saudi Arabia for the pilgrimage to Mecca (Hajj), 2017
Weekly Epidemiological Record, 11 August 2017, vol. 92, 32 (pp. 437–452)
Deployments from the oral cholera vaccine stockpile, 2013–2017
Yellow fever in Africa and the Americas, 2016
Weekly Epidemiological Record, 4 August 2017, vol. 92, 31 (pp. 417–436)
Diphtheria vaccine: WHO position paper – August 2017
WHO African Region Immunization Technical Advisory Group: Call for nominations
Weekly Epidemiological Record, 21 July 2017, vol. 92, 29/30 (pp. 405–416)
Progress towards measles elimination in Bangladesh, 2000–2016
Performance of acute flaccid paralysis (AFP) surveillance and incidence of poliomyelitis, 2017
Weekly Epidemiological Record, 14 July 2017, vol. 92, 28 (pp. 393–404)
Meeting of the Global Advisory Committee on Vaccine Safety, 7–8 June 2017
Monthly report on dracunculiasis cases, January– May 2017
Weekly Epidemiological Record, 7 July 2017, vol. 92, 27 (pp. 369–392)
Hepatitis B vaccines: WHO position paper – July 2017
Weekly Epidemiological Record, 30 June 2017, vol. 92, 26 (pp. 357–368)
Index of countries/areas
Index, Volume 92, 2017, Nos. 1–26
WHO Alliance for the Global Elimination of Trachoma by 2020: progress report on elimination of trachoma, 2014–2016
Weekly Epidemiological Record, 23 June 2017, vol. 92, 25 (pp. 345–356)
Yellow fever vaccine: WHO position on the use of fractional doses – June 2017
Global polio eradication: progress towards containment of poliovirus type 2, worldwide 2017
Weekly Epidemiological Record, 16 June 2017, vol. 92, 24 (pp. 333–344)
Validation of maternal and neonatal tetanus elimination in Equatorial Guinea, 2016
Weekly Epidemiological Record, 9 June 2017, vol. 92, 23 (pp. 321–332)
The International Health Regulations (IHR) – 10 years of global public health security
Japanese encephalitis: surveillance and immunization in Asia and the Western Pacific, 2016
Monthly report on dracunculiasis cases, January– April 2017
Weekly Epidemiological Record, 2 June 2017, vol. 92, 22 (pp. 301–320)
Meeting of the Strategic Advisory Group of Experts on immunization, April 2017 – conclusions and recommendations
Weekly Epidemiological Record, 26 May 2017, vol. 92, 21 (pp. 293–300)
Virologic monitoring of poliovirus type 2 after OPV2 withdrawal in April 2016: an important advance in eradicating poliomyelitis and eliminating live oral poliovirus vaccines worldwide, 2016–2017
Weekly Epidemiological Record, 19 May 2017, vol. 92, 20 (pp. 269–292)
Dracunculiasis eradication: global surveillance summary, 2016
Fact sheet on Ebola virus disease (updated May 2017)
Weekly Epidemiological Record, 5 May 2017, vol. 92, 18 (pp. 229–240)
Progress towards measles elimination – African Region, 2013–2016
Monthly report on dracunculiasis cases, January– March 2017
Weekly Epidemiological Record, 14 April 2017, vol. 92, 15 (pp. 181–192)
Immunization and Vaccine-related Implementation Research Advisory Committee (IVIR-AC): summary of conclusions and recommendations, 1–2 February 2017 meeting
Zika virus: an epidemiological update
Weekly Epidemiological Record, 7 April 2017, vol. 92, 14 (pp. 165–180)
Surveillance systems to track progress towards polio eradication worldwide, 2015–2016
Performance of acute flaccid paralysis (AFP) surveillance and incidence of poliomyelitis, 2017
Monthly report on dracunculiasis cases, January– December 2016
Weekly Epidemiological Record, 31 March 2017, vol. 92, 13 (pp. 145–164)
Epidemic meningitis control in countries of the African meningitis belt, 2016
Ensuring the timely supply and management of medicines for preventive chemotherapy against neglected tropical diseases
Weekly Epidemiological Record, 24 March 2017, vol. 92, 12 (pp. 129–144)
Zoonotic influenza viruses: antigenic and genetic characteristics and development of candidate vaccine viruses for pandemic preparedness
Weekly Epidemiological Record, 17 March 2017, vol. 92, 11 (pp. 117–128)
Recommended composition of influenza virus vaccines for use in the 2017–2018 northern hemisphere influenza season
Weekly Epidemiological Record, 3 March 2017, vol. 92, 9/10 (pp. 97–116)
Roadmap to elimination standard measles and rubella surveillance
Meeting of the International Task Force for Disease Eradication, November 2016
Weekly Epidemiological Record, 24 February 2017, vol. 92, 8 (pp. 89–96)
Continued endemic wild poliovirus transmission in security-compromised areas – Nigeria, 2016
Weekly Epidemiological Record, 17 February 2017, vol. 92, 7 (pp. 77–88)
Human rabies: 2016 updates and call for data
Weekly Epidemiological Record, 3 February 2017, vol. 92, 5 (pp. 45–52)
Early warning, alert and response system in emergencies: a field experience of a novel WHO project in north-east Nigeria
Fact sheet on Guillain-Barré syndrome (updated October 2016)
Weekly Epidemiological Record, 27 January 2017, vol. 92, 4 (pp. 37–44)
Detection of influenza virus subtype A by polymerase chain reaction: WHO external quality assessment programme summary analysis, 2016
Weekly Epidemiological Record, 20 January 2017, vol. 92, 3 (pp. 21–36)
Maternal and neonatal tetanus elimination: validation in Punjab Province, Pakistan, November 2016
Monthly report on dracunculiasis cases, January– November 2016
Weekly Epidemiological Record, 13 January 2017, vol. 92, 2 (pp. 13–20)
Global Advisory Committee on Vaccine Safety, 30 November – 1 December 2016
NEW YORK: 29 Sept. 2016 — Overall satisfaction with alternative video services such as streaming and transactional-based OTT (over the top) offerings is considerably higher than it is with traditional pay television service, spurring an increase in cord-cutting from 2015, according to three J.D. Power studies released today.
The related studies are the J.D. Power 2016 U.S. Residential Television Service Provider Satisfaction StudySM; the J.D. Power 2016 U.S. Residential Internet Service Provider Satisfaction StudySM; and the J.D. Power 2016 U.S. Residential Telephone Service Provider Satisfaction StudySM.
The annual wireline studies, now in their 15th year, evaluate residential customers’ experiences with TV, internet and phone services in four geographical regions: East, South, North Central and West. The ISP and telephone studies measure customer satisfaction across five factors: network performance and reliability; cost of service; billing; communication; and customer service. The TV study measures satisfaction in those same five factors plus a sixth: programming. Satisfaction is calculated on a 1,000-point scale.
The TV study finds that, compared with pay TV service providers, satisfaction is significantly higher with paid streaming video services like Netflix, Amazon and Hulu; skinny bundle offerings like SlingTV and PlayStation Vue; and programming apps like HBO Go.
For example, customers rate their primary alternative video service higher than their TV service for the overall experience (7.92 vs. 7.18, respectively, on a 10-point scale), which is largely driven by much higher ratings for the overall cost of service experience (7.99 vs. 6.42). Customers also rate their primary alternative video service higher than their TV service for the overall performance and reliability (7.98 vs. 7.82), programming (7.87 vs. 7.76) and billing (8.04 vs. 7.54) experiences.
Consequently, with relatively low prices and increasing rates of adoption, alternative video services are helping drive the cord-cutting trend. Nearly two-thirds (63%) of customers have used an alternative video service in the previous year, up from 58% in 2015. Additionally, 73% of customers who plan to cut the cord on TV service in the next year indicate they will use an alternative video service.
“This finding partly reflects age demographics since younger customers are more likely to use alternative video services than older customers, and younger customers are more satisfied with alternative TV service than older customers,” said Kirk Parsons, senior director and technology, media & telecom practice leader at J.D. Power. “Despite their higher satisfaction, customers who have used an alternative video service in the previous year are much more likely than those who haven’t used one—14% vs. 4%—to cut the cord on TV in the next year.”
However, customers who cut the cord on TV are not necessarily lost for TV providers, and increasing their satisfaction raises the likelihood that they will reactivate TV service or upgrade their internet service in the future. Among customers who plan to drop TV service during the next 12 months, 44% say they expect to reactivate it during certain times of year. Overall satisfaction among customers in this group is 845, compared with 575 among those who do not plan to reactivate TV service and 561 among those who don’t know if they will reactivate it.
Key Study Findings
Residential Television Service Provider Satisfaction Study
AT&T U-verse/DIRECTV ranks highest in TV customer satisfaction in the East (782) and South (764) regions; Verizon FiOS (776) ranks highest in the West region; and DISH Network (747) ranks highest in the North Central region.
Among customers who switched providers in the previous 12 months, the most commonly cited reasons for switching are “cost was too high” (25%); “moved locations/previous provider not available at new location” (24%); “competitor offered a better deal” (17%); and “customer service was poor” (10%).
Residential Internet Service Provider Satisfaction Study
Verizon ranks highest in ISP customer satisfaction in the East (735), South (755) and West (755) regions; AT&T (717) ranks highest in the North Central region.
Slightly more than one-third (34%) of customers purchase premium speed internet. This compares to 37% in 2015 and 27% in 2014. Performance and reliability satisfaction among customers with premium speed internet is 763, while satisfaction among those without premium speed internet is 694.
Over a typical three-month period, 42% of customers experienced a website connection error or failure to load (vs. 45% in 2015); 48% experienced a website that was excessively slow to load (vs. 51%); and 36% experienced a general service outage (vs. 37%). The incidence of customers experiencing an email connection error—28%—is unchanged from 2015.
Residential Telephone Service Provider Satisfaction Study
Cincinnati Bell ranks highest for the first time in telephone customer satisfaction in the North Central region (765); Verizon ranks highest in the East (757), South (766) and West (770) regions.
Nearly half (46%) of highly satisfied residential telephone customers (overall satisfaction scores of 900 or higher) say they “definitely will not” switch providers in the next 12 months, compared with only 12% of dissatisfied customers (scores below 550) who say the same.
The 2016 U.S. wireline studies are based on responses from 31,072 customers nationwide who evaluated their cable/satellite TV, high-speed internet and telephone service providers. The studies were fielded in four waves: November 2015, February 2016, April 2016 and July 2016.
For more information about these three J.D. Power studies, visit:
https://www.jdpower.com/resource/us-residential-television-customer-satisfaction-study
https://www.jdpower.com/resource/us-residential-internet-service-provider-customer-satisfaction-study
https://www.jdpower.com/resource/jd-power-residential-telephone-customer-satisfaction-study
Media Relations Contact
Geno Effler; Costa Mesa, Calif.; 714-621-6224; [email protected]
About J.D. Power and Advertising/Promotional Rules www.jdpower.com/about-us/press-release-info |
Alex Tuch puts on a team sweater after being selected as the number eighteen overall pick to the Minnesota Wild in the first round of the 2014 NHL Draft at Wells Fargo Center. Bill Streicher-USA TODAY Sports
The Minnesota Wild’s 2014 first round pick Alex Tuch was a part of today’s announcement of the U.S. National Junior Team’s preliminary roster. USA Hockey announced the complete roster today.
Through 14 games with Boston College this season Tuch has five goals, five assists, an average of 2.8 shots per game for 39 total, eight penalty minutes, and a plus-2 rating. Tuch also scored a pair of goals in the team’s lone exhibition game at the beginning of the season.
Also among the 30 players making the prelim team was top 2015 NHL Entry Draft prospect Jack Eichel as well as a number of Minnesotans.
Here’s the full lineup via the USA Hockey website:
Seven camp invitees were members of the 2014 U.S. National Junior Team, including forwards Jack Eichel (North Chelmsford, Mass./Boston University), Adam Erne (New Haven, Conn./Quebec Remparts) and Hudson Fasching (Apple Valley, Minn./University of Minnesota); defensemen Will Butcher (Sun Prairie, Wis./University of Denver), Ian McCoshen (Hudson, Wis./Boston College) and Steve Santini (Mahopac, N.Y./Boston College); and goaltender Thatcher Demko (San Diego, Calif./Boston College). Additionally, nine invitees were members of the U.S. Men’s National Under-18 Team that won gold at the 2014 IIHF Under-18 Men’s World Championship. The group includes forwards Eichel, Dylan Larkin (Waterford, Mich./University of Michigan), Auston Matthews (Scottsdale, Ariz./U.S. National Under-18 Team), Sonny Milano (Massapequa, N.Y./Plymouth Whalers) and Alex Tuch (Baldwinsville, N.Y./Boston College); defensemen Ryan Collins (Bloomfield, Minn./University of Minnesota), Jack Dougherty (Cottage Grove, Minn./University of Wisconsin) and Noah Hanifin (Norwood, Mass./Boston College); and goaltender Alex Nedeljkovic (Parma, Ohio/Plymouth Whalers).
That makes three players from Minnesota, two of whom play college hockey in-state, one Wild draft pick, and two players from Wisconsin. (#OneOfUS, right?)
Some thought that, after a strong start to the season and having gone through the development program, the Wild’s second-round draft pick Louie Belpedio might make the roster as well, but he was not on today’s announcement. He’s having a solid start to his college career, though. Lots of promise there.
There was certainly going to be a fair amount of Juniors coverage here at GPW, but with Tuch on the team, we’ll be updating regularly once the tournament starts in Toronto on December 26. |
Rek’Sai’s ultimate now launches her at a target she’s recently damaged. Tunnels can be re-entered more quickly. Base damages down, scaling up.
Rek’Sai is a ferocious predator who strikes fear into the hearts of her prey as she hunts them. The way Rek’Sai is currently played—a vanguard who sets up plays for her team—just doesn’t sync up with that promised fantasy.
We see a lot of potential for Rek’Sai, and this update is an opportunity to double down on Rek’Sai’s predatory instincts. Rek’Sai should be feared for her individual threat, not the followup of her team. With these changes, we’re reimagining a Rek’Sai who has better backline access in teamfights and does more damage when she gets in there, but is less consistent when behind or built tanky.
For more information, check out our post on Rek’Sai’s update.
Base stats
HEALTH GROWTH STAT 90 ⇒ 85
Passive - Fury of the Xer'Sai
Fury generation faster. Burrowed health regeneration ticks more quickly but max regeneration is down.
FURY GENERATION 5 for basic attacks, 10 for unburrowed abilities, 2.5 for additional units hit beyond the first by abilities ⇒ 25 for all attacks and unburrowed abilities
MAX BURROWED HEALTH REGENERATION 25-450 (at levels 1-18) over 5 seconds ⇒ 20-190 (at levels 1-18) over 3 seconds
Q - Queen's Wrath
Base damage down at later ranks. Ratio doubled.
DAMAGE 15/25/35/45/55 (+0.2 bonus attack damage) ⇒ 15/20/25/30/35 (+0.4 bonus attack damage)
Burrowed Q - Prey Seeker
Now has a bonus AD ratio. Does physical damage instead of magic.
new RATIO 0.4 bonus attack damage (in addition to the existing 0.7 ability power ratio)
DAMAGE TYPE Magic ⇒ Physical
Burrowed W - Un-burrow
Primary target is still knocked up, others are now knocked back.
BASE DAMAGE 40/80/120/160/200 ⇒ 50/65/80/95/110
DISPLACEMENT 0.5-1 second knockup on all nearby enemies based on proximity to Rek'Sai ⇒ 1 second knockup on the primary target; 250 range knockback to other nearby enemies
IMMUNITY Only the knocked up target is granted immunity from further knock-ups
IMMUNITY DURATION 10/9.5/9/8.5/8 seconds ⇒ 10 seconds at all ranks
new STOP AND SMELL THE ROSE ROOTS Rek’Sai can interact with Plants by attacking them while burrowed
E - Furious Bite
Now has base damage, and scales off bonus AD instead of total AD. Damage no longer scales linearly with Fury, but full-Fury casts still deal double damage as true damage.
DAMAGE 80/90/100/110/120% total attack damage ⇒ 50/60/70/80/90 (+ 0.85 bonus attack damage)
removed RAMP-UP Damage no longer scales linearly with Rek’Sai’s current Fury
THRESHOLD Damage still doubled and dealt as True Damage at max Fury
Burrowed E - Tunnel
Tunnel creation and re-entry cooldowns both reduced at later ranks.
COOLDOWN 26/24/22/20/18 seconds ⇒ 26/23/20/17/14 seconds
RE-ENTRY COOLDOWN 10/9/8/7/6 seconds ⇒ 10/8/6/4/2 seconds
new R - Void Rush
Rek’Sai marks champions she damages and can dash to a marked target, dealing damage based off their missing health.
PREY Rek’Sai passively marks enemy champions she damages as Prey for 5 seconds
VOID RUSH Rek’Sai targets a Prey-marked enemy, burrowing after a 0.35 second cast time before emerging from underground and leaping at her target after an additional 0.75 seconds.
UNSTOPPABLE Rek’Sai is unstoppable during both the cast and leap
DAMAGE 100/250/400 (+1.6 bonus attack damage) (+20/25/30% target’s missing health)
COOLDOWN 100/80/60 seconds |
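To make the new Void Rush damage line concrete, here is a minimal sketch under the reading that its three components simply add together (the rank, bonus AD, and missing-health figures below are hypothetical, and actual in-game damage would still be reduced by the target's armor):

```python
def void_rush_damage(rank: int, bonus_ad: float, target_missing_hp: float) -> float:
    """Pre-mitigation Void Rush damage: base + 1.6 * bonus AD + % of target's missing health."""
    base = {1: 100, 2: 250, 3: 400}[rank]          # 100/250/400 by rank
    missing_pct = {1: 0.20, 2: 0.25, 3: 0.30}[rank]  # 20/25/30% by rank
    return base + 1.6 * bonus_ad + missing_pct * target_missing_hp

# Rank 3 with 200 bonus AD against a target missing 1000 health:
print(void_rush_damage(3, 200, 1000))  # 400 + 320 + 300 = 1020.0
```

The missing-health component is what pushes the ability toward an execute: the lower the marked target, the harder the leap hits.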
Here’s how to judge the pragmatic case for gun control: if the pro-control lobby managed to have each of its favorite restrictions enacted, could we as individuals be more casual about our safety than we are today? The answer clearly is no. So what’s the point of the restrictions beyond letting their advocates feel good about themselves?
A false sense of security is worse than no sense of security at all.
Mass shooters have obtained their guns legally, having had no disqualifiers in their records; used guns legally obtained by someone else; or obtained them despite existing laws. Therefore, the controls most commonly called for would not have prevented those massacres. In the latest massacre, the shooter had a disqualifier — a less-than-honorable discharge from the Air Force after a year in the brig for domestic abuse — but the Air Force failed to report that disqualifier to the FBI, so it never got into the database that was checked when the shooter bought guns from licensed dealers. New controls, such as expanded background checks, would not have prevented the shooting because the Air Force was already required to report the shooter’s conviction to the FBI. Even a ban on rifles with certain features, misleadingly called “assault weapons,” would not have prevented the shooting because equally powerful rifles would have been available.
Thus the victims of the latest shooter, like the victims of the previous mass shootings, would have been no safer under the sought-after gun-control regime than they were at the time they were murdered.
But this is not the end of the story. Even if those shooters had been unable to obtain their guns as they did, it does not follow that they would have been prevented from committing their monstrous offenses. How many times must it be pointed out that someone who is bent on murder is not likely to be deterred by legal restrictions on the purchase of guns? The gun-control advocates pretend that legal methods are the only way to obtain firearms, but we know that is not true. People have always been able to obtain guns through illegal channels. Gun-running — firearms smuggling and trafficking — is probably as old as the earliest gun restrictions. Guns can be stolen and sold. (There are 300 million of them.) Guns can be made in garages. Guns will eventually be made routinely on 3D printers. Supply responds to demand. Black markets thrive whenever products are prohibited.
But the black market — by definition — is already illegal. So what are gun-controllers to do, make the black market doubly illegal? I don’t think that’s a solution.
Even more drastic forms of control won’t change this story. The Australian tax-financed eminent-domain approach, in which the government ordered people to sell their guns to the government, failed to remove all guns from society. “That policy … removed up to one million weapons from Australians’ hands and homes,” Varad Mehta writes. “This was, depending on the estimate, a fifth to a third of Australia’s gun stock.” How many bad people do you imagine surrendered their guns?
How would an Australia-style program — which some Democrats, including Hillary Clinton and Barack Obama, favor — do here? Not very well, I’d guess. Think what would happen if the government tried to confiscate people’s guns with a heavier hand than that used by the Australian politicians. Individual rights aside, would that be an acceptable outcome for the sake of reducing gun violence? (Do be aware that most gun fatalities are suicides.)
If people with bad intent would continue to obtain guns no matter what gun controls were on the books, it follows that people’s responsibility for their own safety remains the same in all circumstances. Even beefing up police forces won’t deliver greater safety: the cops are always too far away, and besides, they have no legal obligation to save you.
The worst outcome would be for people to believe they were safe simply because Congress passed some piece of magic legislation favored by gun-controllers. That would be a cruel joke. |
It seems that Trojan infections are starting to perform new actions when installing a fake anti-spyware application. In many of our articles and removal guides, we have explained that rogue anti-spyware programs are sometimes installed or downloaded through a Trojan infection. Now, with Internet users wising up to the Trojan tricks, Trojan makers have tightened their grip on users’ computers by running files that prevent users from opening other programs or that hide the C: drive.
If Trojan infections continue performing these actions, it will be very difficult for any spyware removal program or tool to identify the issue, let alone remove the infection. This is big news and a very serious threat to all computers on the Internet. If this type of infection becomes widespread, it will force many spyware and security vendors to revamp their detection and removal procedures. Below is a list of the identified symptoms of this Trojan’s behavior.
Identified Symptoms of Trojan behavior:
You are not able to open any programs on your computer, including spyware removal tools, and your C: drive is hidden or not accessible.
You have more than one user logged on to your computer, and the second user performs unknown actions in the background without your permission.
What can you do if you encounter this type of Trojan infection?
Continually updating your spyware or security software is always recommended. If you feel the need to download a new spyware removal program, you may do so after performing proper research.
You may use removal guides for current Trojan infections to ward off any other malicious files that may aid this Trojan’s new behavior. Once we get more information, we should be in a position to provide additional assistance in removing Trojans of this type. |
Colin Busby is Associate Director, Research, and Ramya Muthukumaran is a Researcher at the C.D. Howe Institute. They are authors of "Precarious Positions: Policy Option to Mitigate Risks in Non-standard Employment."
The government of Ontario is currently looking at ways to address precarious work through a sweeping review of labour legislation and enforcement in the province – other provinces are planning to follow suit. Employment risks are also featured in federal Finance Minister Bill Morneau's statements about "job churn" and the persistence of short-term employment. But how bad is the problem and what should be done about it?
In a recent C.D. Howe Institute publication, we look at the common meanings of precarious work and assess the policy levers available to address it. We consider three types of precarious employment that we refer to as "non-standard" jobs – part-time, temporary and unincorporated self-employment.
We find that non-standard work as a share of total employment surged in the early 1990s, from 28 per cent to 34 per cent, but has been relatively stable since. Within the categories of non-standard work, however, temporary work has seen the most growth since 1997, particularly in services sectors such as health and education. Part-time employment has grown in line with total employment and seen an increase of 30 per cent between 1997 and 2015. Unincorporated self-employment has remained relatively stable since the late 90s.
There are multiple reasons why non-standard employment persists in making up around one-third of Canadian jobs – they include globalization, technological change and shrinking union density. But another reason is shifting worker preferences – the majority of non-standard workers, after all, do it voluntarily.
In many cases, temporary positions are stepping stones to more permanent positions. But not all workers are so fortunate. OECD data suggest that a little more than a quarter of Canadian part-time workers in 2014 would have preferred full-time employment. Likewise, another OECD study found that, in 2013, a quarter of temporary workers in Canada would have preferred permanent positions. Even though the nationwide picture isn't as bleak as it is made out to be, we urge policy makers to focus on the segment of the labour force involuntarily engaged in temporary and part-time employment.
Policy makers can approach this problem in two ways: 1) with more rigid labour laws that try to shape employment arrangements; or 2) with policies that strengthen the safety net under workers with precarious jobs. Other countries have a head-start on Canada on this score and their experiences hold important lessons.
The Netherlands, for example, found that shortening the maximum term of a temporary job under labour regulations had the unintended consequence of higher rates of job dismissals and more temporary workers. The Danish experience focused more on improving access to social programs and less on legislative restrictions for employers; this was largely successful in reducing unemployment while facilitating a rewarding work environment.
Governments looking to reduce the incidence of non-standard work should be wary of heavy-handed legislative interventions and focus instead on bolstering social policy frameworks. Federal and provincial governments, acting in concert or independently, should reduce the uncertainties of a volatile labour market for newcomers and incumbents.
They could start by:
1) Ensuring appropriate access to Employment Insurance benefits for the most vulnerable workers, including part-time and temporary non-seasonal workers, many of whom don't qualify because of hours-worked eligibility criteria. Instead, weeks-worked entrance requirements and claimant categories that recognize non-standard workers should be strongly considered;
2) Ensuring obstacles to EI access are not hindering access to other critical social programs under the EI umbrella, such as maternity/paternity leave benefits and courses for skills upgrading;
3) Doing a better job filling the gaps in health coverage experienced by workers in precarious jobs for services such as prescription drugs, mental health, vision and dental care. Provinces, such as Ontario, should take direct responsibility for this and improve coverage rather than wait for unlikely federal interventions;
4) Protecting the value of expanded CPP benefits for low-income workers by exempting them from punitive, income-tested guaranteed income supplement clawbacks.
Understandably, provinces want to look at labour legislation and improved enforcement for solutions. But there are ways the government can ease the burden for workers in precarious jobs, without too much tampering with the way new jobs are created. |
Climate models usually end up in the news because of projections of future climate, but many researchers use the models to study other planets or the Earth's past. They can help test hypotheses about past climate events by comparing model simulations to estimates of past climates obtained from things like ice and sediment cores.
One climatic event that looms large in Earth's history is the end-Permian mass extinction about 252 million years ago—the worst mass extinction event on record. A volcanic event seems to have been at least partly to blame. Tremendously vast eruptions in Siberia coughed up lava flows and ash that may have covered an area nearly as large as Australia—a feature known as the Siberian Traps. During this event, some 90 percent of marine species disappeared, and species on land didn’t fare well, either.
Apart from the warming caused by all the carbon dioxide emitted by the eruptions, many researchers have explored the problems that volcanic gases might have caused for land-dwelling organisms. Currently, the pH of the ocean is dropping as we increase atmospheric CO2, but at high enough levels of carbon dioxide, acidification of rain can become a problem as well. Add in volcanic emissions of sulfur dioxide—the same compound that we control in coal emissions to prevent acid rain—and the atmosphere would get even worse.
Among the Siberian rocks in the vicinity of the eruptions were salts deposited by evaporating seawater. The heat of the magma feeding such eruptions can metamorphose surrounding rocks, causing those salts to release chloride-containing gas. Along with some methane cooked out of organic compounds in the rocks, that could have depleted the ozone layer, allowing more dangerous UV radiation through to the surface.
In order to investigate how these effects could have played out during the eruption of the Siberian Traps, a group led by MIT’s Benjamin Black used a global climate model. To simplify things, atmospheric CO2 was held constant at a little less than 10 times today’s concentration—roughly the level it likely reached during that time. A number of plausible Siberian eruptions were then simulated using what we know about the timing and size of individual events during that time. Other than CO2, the gases released by the eruptions only last a few years in the atmosphere before breaking down, so rainfall acidity and ozone depletion worsened in pulses that abated soon after each eruption died down.
The average pH of rain water today is about 5.0 to 5.5, on the acidic side of a neutral pH of 7.0. The higher CO2 in the model alone lowered the average pH to about four—a significant increase in acidity, since pH is measured on a logarithmic scale. During the simulated eruptions, the pH dropped even more. Because the Siberian Traps were almost as far north then as they are now, the effects were more intense in the Northern Hemisphere. From the equator to about 50 degrees north latitude, rainwater pH in the model was as low as two—about as acidic as lemon juice. That’s acidic enough to harm many types of organisms.
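Because pH is the negative base-10 logarithm of hydrogen-ion concentration, each one-unit drop means a tenfold increase in acidity. A quick sketch of what that implies for the pH figures quoted above:

```python
def acidity_ratio(ph_before: float, ph_after: float) -> float:
    """How many times more acidic (higher [H+]) ph_after is than ph_before.

    pH = -log10([H+]), so the concentration ratio is 10**(ph_before - ph_after).
    """
    return 10 ** (ph_before - ph_after)

# Rain at roughly pH 5 today vs. pH 4 under the model's elevated CO2:
print(acidity_ratio(5.0, 4.0))  # 10.0  -> ten times more acidic
# The pH-4 baseline vs. pH 2 during a simulated eruption pulse:
print(acidity_ratio(4.0, 2.0))  # 100.0 -> a hundred times more acidic
```

So the drop from today's rain to the model's eruption-pulse minimum of pH 2 spans roughly three orders of magnitude in hydrogen-ion concentration.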
Simulations of ozone loss covered a wider range due to uncertainty about the amount of ozone-depleting gases that might have been released. Still, the impact was significant, with average global ozone concentration decreasing by as much as 70 percent. Near the poles, where ozone depletion would be strongest, the amount of harmful UV radiation reaching the surface would increase by a factor of 49.
These climate model simulations can now be compared to records from the mass extinction to see if evidence consistent with the impacts of acid rain and ozone depletion matches the patterns in the model. The simulations describe a terrestrial environment swinging into and out of relative extremes of nastiness as eruptions thrashed and subsided. They don’t call it “the Great Dying” for nothing.
Geology, 2014. DOI: 10.1130/G34875.1 (About DOIs). |
The Buffalo Bills kept their playoff hopes alive for one more week, in large part thanks to a dominant defensive effort. The defense allowed only 110 net yards in the second half, keeping the team in the game. It was a gritty effort, and it was led by one of the longest-tenured players on the team, defensive tackle Kyle Williams.
Flash back to mid-August: when it was announced that $100 million defensive tackle Marcell Dareus was going to be suspended for four games, fans and the organization were extremely agitated.
“We are very disappointed Marcell chose to put himself first, before his teammates, coaches, the rest of the organization, and fans through his recent actions,” said general manager Doug Whaley.
Selfish is an antonym for Dareus’ fellow lineman, Williams. Although the defense struggled at times on Sunday, it’s hard to imagine just how much worse it would have performed without the leadership and blue-collar effort put in by Williams.
Williams finished the game with four combined tackles, one tackle for loss and one quarterback hit, while his relentless effort and selflessness created plays for others.
Inside linebacker Zach Brown has been kept clean because of Williams, and it has allowed him to lead the league in tackles (101). How about 33-year-old Lorenzo Alexander and his league-leading 10 sacks and 33 total pressures, per Pro Football Focus?
Alexander has seen a lot of one-on-one matchups simply due to Williams’ presence. There is no doubt that Williams has single-handedly helped those two players have career years thus far.
That is what Williams does – he makes his teammates better – and he did that repeatedly throughout Buffalo’s 16-12 win on Sunday.
Kyle Williams’ disruption
On this first-down play, the Bills have four down linemen. Due to the pre-snap motion by Cincinnati, Marcell Dareus and Williams adjust their alignment. Dareus settles in at the three-technique (outside shoulder of the guard) and Williams at the ‘2i’ technique (inside shoulder of the guard).
Hughes is in a wide-nine alignment outside the shoulder of the tight end, while Alexander is in a six-technique, head up on the tight end, because of the motion.
On the snap, Williams reads the play and attacks the center which forces the Bengals to double-team him, giving Alexander a one-on-one matchup due to the interior pressure.
Dareus easily beats his one-on-one matchup with left guard Clint Boling.
The four-man pressure worked and prevented Dalton from connecting with tight end Tyler Eifert down the sideline:
Buffalo held the Bengals to 93 rushing yards on 27 attempts, a 3.4-yards-per-carry average. Having Williams and Dareus paired up alongside each other played a huge role in limiting Cincinnati’s run game and forced them to be one-dimensional.
Burst
On this second-and-goal, defensive coordinator Dennis Thurman put Williams at the five-technique with defensive lineman Jerel Worthy at a tilted shaded one-technique between the Bengals’ center and guard. Dareus aligned in 2i technique, just inside the right guard.
Cincinnati’s attempt to run power to the right of their offensive formation was blown up by Williams, who exploded off of the snap.
This image in slow motion shows just how quickly Williams got off the ball compared to his fellow defensive linemen.
As Williams fires out of his stance, he is also reading the opposing offensive linemen. He sees Boling pulling and instinctively knifes his way through the vacated gap.
His speed, technique, strength and low pad level proved too much for left tackle Andrew Whitworth, an NFL All-Pro and former Pro Bowler, to handle.
Bengals running back Jeremy Hill manages to avoid Williams, but Dareus, who is 6-foot-3 and 331 pounds, has the middle clogged. Williams doesn’t give up on the play, somehow managing to wrap up Hill’s legs from the ground just as Dareus swallows the running back for a huge stop.
Williams is a player who is highly respected around the league on and off the field. The 33-year-old is the team’s nominee for the NFL’s Art Rooney Award, which is presented to the player ‘who best demonstrates the qualities of on-field sportsmanship, including fair play, respect for the game and opponents, and integrity in competition.’
The veteran is playing under his fourth head coach and sixth defensive coordinator since being selected in the fifth round of the 2006 NFL Draft. Each coach has brought a different defensive scheme, and Williams’ experience and versatility give Thurman lots of flexibility in how he can utilize the 6-foot-1, 305-pound defensive tackle.
On the Bengals first play of the second half, Williams disrupts yet another play, this time coming against a zone run. Buffalo is in an odd front, with Williams at defensive end.
At the snap, he fires upfield and puts Whitworth in a precarious position, so the tackle uses a ‘turn out’. This technique is often used by a lineman who is past the point of attack, which is the case on this inside zone.
Using a ‘turn out’ technique is especially effective when an offensive lineman is matched up with a penetrating defender like Williams. If the defender shoots upfield, he is easily ‘turned out’.
Williams does just that, but he uses his low center of gravity and lower-body strength so well that he is able to recover.
As Hill cuts inside, Williams works through the hold by Whitworth and manages to bring down Hill. Here is the play in full.
Williams is an ageless wonder and possesses all of the traits you look for in a defensive tackle.
Traits of an All-Pro defensive tackle
His quickness and burst off the snap are simply ridiculous. Williams got his hands into the pads of Cincinnati’s offensive linemen before the ball was even halfway to the quarterback.
Williams’ technique is flawless. He initiates contact and gets his hands inside so he can control the lineman while simultaneously keeping his eyes on the running back.
Boling then works to get his left hand inside on Williams, but the wily veteran immediately counters, bringing his right hand over the top and swiping it away as he closes in on the back.
Hill spins out of traffic but runs right into Alexander and safety Corey Graham. Williams didn’t receive credit on the stat sheet for the play, but it’s clear that he was the reason this play was shut down.
Quickness, power, leverage, hand placement, and hand fighting. Just watch his hands. This guy is ridiculous. #Bills pic.twitter.com/3cz2OZEA63 — Cover 1 (@Cover1Bills) November 22, 2016
According to Pro Football Focus, Williams was the Bills’ highest-graded defender against the run on Sunday, and when watching the following play, it’s clear to see why. He is nothing short of incredible.
Once again, the Bengals try to run inside zone to Williams’ side.
Knowing that Williams has been smoking linemen off the snap, Boling fires out of his stance quicker in an attempt to reach block Williams.
While Boling does a great job of getting in a position to execute the reach, Williams quickly reads it and executes a gap exchange. Williams then flawlessly executes a swim move en route to Hill, leaving Boling grasping at thin air.
As Williams shoots the ‘A’ gap, inside linebacker Zach Brown becomes responsible for the ‘B’ gap.
It’s easy to see why Williams is one of the most productive and dominant defensive linemen in the NFL. Through the Bills’ first 10 games, he’s notched four sacks, seven quarterback hits, and 14 hurries.
According to PFF, his 25 total pressures are the 10th-most among 3-4 defensive ends. Here he converts the speed rush to a bull rush and mauls Boling once again, walking him right into the lap of Andy Dalton.
Against the run, Williams is PFF’s second-best run defender, recording a ‘stop’ on 10.2 percent of his snaps. Of his 37 total tackles, 27 came against the run and 20 resulted in a stop, both second-most in the NFL, and he has added eight tackles for loss.
Selfless
He is the epitome of a team player. Here he lines up as a one-technique, or ‘shade’, defensive tackle and takes on the double team, allowing Dareus to go one-on-one with the guard. Dareus disengages and makes the tackle.
Williams may not always show up in the box score, but you can bet that once you turn on the film, you will see him executing at a high level.
For more Bills-related X’s and O’s analysis, follow Erik Turner on Twitter @Cover1Bills.
FOXBOROUGH, Mass. -- Sunday's AFC Championship Game was won days before the raucous Gillette Stadium crowd sang along to Bon Jovi during a kickoff, before Patriots broadcaster Scott Zolak sent them into a frenzy by holding up a "Where is Roger?" sign on the jumbotron.
The game was won by Tuesday, when coach Bill Belichick and his staff put the finishing touches on yet another Patriots game plan that Steelers coach Mike Tomlin had no answer for. Home-field advantage is nice, but there is no advantage greater than being more prepared than your opponent. After seven conference titles in the last 16 years, Belichick keeps his group in line, one step ahead of the rest of the AFC.
"I'm cattle. I just go with the herd," wide receiver Julian Edelman said after a convincing 36-17 victory over Pittsburgh, unknowingly coining another Patriots catchphrase to replace "Do your job."
***
From the very first drive Sunday, Belichick and Tom Brady set out to make Pittsburgh's young secondary uncomfortable. New England opened the game in hyperspeed, often snapping the ball with more than 25 seconds left on the play clock, forcing the youngsters in the Steelers' secondary to make quick decisions under pressure unlike any they've ever experienced -- and they crumbled.
"It's something we'd seen on film that was going to work against them," offensive tackle Nate Solder explained of the up-tempo approach, which carried over into the second half. "But it only works if you're making completions."
No matter how many Steelers dropped back in coverage, Brady found receivers wide open. Tomlin told Ross Tucker on Westwood One at halftime that his young guys were "scatterbrained." The Patriots were 9 of 12 on third down through the first three quarters because Brady knew the Steelers' zone defense better than the Steelers did. Brady focused on Chris Hogan and Edelman because they were the open ones.
"No one cares who gets the ball," tight end Martellus Bennett said. "You may not get any passes, I don't give a s---. I just want to win."
Tomlin and his staff were outclassed, appearing to coach against the Patriots team they saw in Week 7. The Steelers did a great job shutting down New England's running game, unlike in that October contest. But no coach is better at adjusting on the fly than Belichick, so the Patriots gave up on the run early.
The Steelers, on the other hand, stubbornly stayed in their soft zone pass defense all night. The Steelers coaches didn't ask their cornerbacks to press the line of scrimmage to disrupt New England's receivers early in their routes. Giving Brady the same looks repeatedly over 60 minutes is asking for a slow death. The only constant in a Belichick game plan is constant change.
The last few times the Steelers faced New England, Belichick left his best cornerback, Malcolm Butler, to contend with Antonio Brown in man coverage. This time, Belichick sent his two best defensive backs at Brown for much of the night, with safety Devin McCourty over the top. During one goal-line sequence, the Patriots had three defenders on Brown's side of the field with no other receiver in the area. The Patriots were determined to make young receivers Cobi Hamilton and Sammie Coates beat them on the outside, and they couldn't oblige.
"It's an explosive team," Patriots safety Duron Harmon said when discussing the strategy. "We knew we would try to keep them in front of us, no shots, and really make them drive the whole field."
Brown didn't have a play over 20 yards, again proving the cliche that Belichick takes away an opponent's best weapon. Then again, the Patriots didn't have to worry about running back Le'Veon Bell after he left the game in the first quarter with a groin injury. Bell and backup DeAngelo Williams wound up with only 54 rushing yards on 20 total carries.
"We didn't do a good enough job or a quick enough job adapting to the circumstance," Tomlin said.
Failing to adapt is not a criticism you hear often of Belichick, who had his cake and ate up the Steelers' offensive game plan. The Patriots often kept five defenders on the line of scrimmage to stop the run, but they also kept five or six defensive backs on the field nearly the entire game to prevent big plays.
While Tomlin failed to understand the game flow, choosing to punt early in the second half rather than going for it on fourth down, Brady and Belichick were in attack mode all night.
"When you get in a game with [Brady], there's no better way to say it: He's a stone-cold killer. He's all business. Very confident. It's fun to be around," Solder said.
Brady had so much fun throwing for 384 yards and three touchdowns that he had to be reminded after the game of his second touchdown toss to Hogan, a one-time afterthought in Buffalo who finished this season tied for the NFL lead in yards per catch.
"Oh, the flea flicker," Brady said. "How could I forget that? It was just a great call. They were a little winded, I thought."
That 34-yard touchdown -- which came in response to Pittsburgh's first touchdown -- neatly summarizes the difference in these coaching staffs. Offensive coordinator Josh McDaniels knew Steelers safety Mike Mitchell gets overaggressive in run defense, and he bit hard on the play. He knew the Pittsburgh defense was sucking wind due to New England's hurry-up approach.
Mitchell, meanwhile, told reporters that the flea flicker wasn't part of the team's film study this week. That's despite the Patriots using the same play against Baltimore in December. That's despite the Patriots famously using a similar flea flicker to expose Tomlin's secondary way back in 2007, during his first season as Steelers head coach. It's hard to imagine the Patriots' staff failing to coach their players up to that possibility.
Tomlin noted earlier in the week that the Patriots "haven't had to go through us" during his tenure as Steelers head coach. But that's only true because the Steelers didn't get far enough to make it happen. While the Patriots have made six straight AFC Championship Games, this was Tomlin's first trip to the doorstep of the Super Bowl since the 2010 season. The Patriots are 5-2 against Tomlin's teams, making him one of many coaches who have won an occasional battle against Belichick while losing the war.
This game shouldn't have been such a blowout. The teams appeared evenly matched in the first half, and Steelers quarterback Ben Roethlisberger was moving the ball well. Pittsburgh's offensive line dominated the Patriots' pass rush up front. But the Steelers struggled in situational football, like their goal-line offense. The Steelers had two separate trips inside the 5-yard line that didn't result in a touchdown. The Patriots have the No. 1 scoring defense because they bend but don't break.
"It was really 11 points there that we saved, so yeah, it was huge," Belichick noted.
While the Steelers were moving backward on the goal line late in the first half, Belichick was smartly taking timeouts to give his offense another chance to score before halftime. Brady didn't capitalize on that opportunity, but it was another example of the Patriots being given every possible chance to succeed. With one more brilliant game plan against the high-flying Falcons, Belichick could put to rest the debate about who's the greatest NFL coach of all-time. But that excellence starts with a level of attention to small details that defines his organization.
"Every coach prepares and has their scouting reports," Bennett said. "The difference is, every guy here is taking notes and making sure they get all the coaching points, because that's what they expect. The expectations are so high. If you aren't in the right spot, it makes such a big difference. They hold those high standards for everybody. If Bill calls a bad play, coach will say, 'That was a bad play, but you guys made up for it.' Every guy is held accountable. That's the biggest difference."
This victory, coming at the end of a season that started with Brady suspended, felt like it had special resonance for the Patriots organization. Belichick was effusive in his praise of the team, liberal with his smiles. Brady and friends talked about "finishing the job" in Super Bowl LI but took time to enjoy the tail end of a run that has gone on longer than anyone could have dreamed, longer than the rest of the AFC can stomach.
"I mean, we're going to the Super Bowl, man. S---. You've got to be happy now," Brady said.
Follow Gregg Rosenthal on Twitter @greggrosenthal.
The U.S. Department of Energy (DOE) has awarded a total of 80 million processor hours on the fastest supercomputer in the nation to an astrophysical project based at the DOE’s Princeton Plasma Physics Laboratory (PPPL). The grants will enable researchers led by Amitava Bhattacharjee, head of the Theory Department at PPPL, and physicist Will Fox to study the dynamics of magnetic fields in the high-energy density plasmas that lasers create. Such plasmas can closely approximate those that occur in some astrophysical objects.
The awards consist of 35 million hours from the INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program and 45 million hours from the ALCC (Advanced Scientific Computing Research Leadership Computing Challenge). Both will be carried out on the Titan Cray XK7 supercomputer at Oak Ridge National Laboratory. This work is supported by the DOE Office of Science.
The combined research will shed light on large-scale magnetic behavior in space and will help design three days of experiments in 2016 and 2017 on the world’s most powerful high-intensity lasers at the National Ignition Facility (NIF) at the DOE’s Lawrence Livermore National Laboratory. “This will enable us to do experiments in a regime not yet accessible with any other laboratory plasma device,” Bhattacharjee said.
The supercomputer modeling, which is already under way, will investigate puzzles including:
Magnetic field formation. The research will study “Weibel instabilities,” the process by which non-magnetic plasmas merge in space to produce magnetic fields. Understanding this phenomenon, which takes place throughout the universe but has proven difficult to observe, can provide insight into the creation of magnetic fields in stars and galaxies.
Magnetic field growth. Another mystery is how small-scale fields can evolve into large ones. The team will model a process called the “Biermann battery,” which amplifies the small fields through an unknown mechanism, and will attempt to decipher it.
Explosive magnetic reconnection. The simulations will study still another process called “plasmoid instabilities” that have been widely theorized. These instabilities are believed to play an important role in producing super high-energy plasma particles when magnetic field lines that have separated violently reconnect.
The NIF experiments will test these models and build upon the team’s work at the Laboratory for Laser Energetics at the University of Rochester. Researchers there have used high-intensity lasers at the university’s OMEGA EP facility to produce high-energy density plasmas and their magnetic fields.
At NIF, the lasers will have 100 times the power of the Rochester facility and will produce plasmas that more closely match those that occur in space. The PPPL experiments will therefore focus on how reconnection proceeds in such large regimes.
Joining Bhattacharjee and Fox on the INCITE award will be astrophysicists Kai Germaschewski of the University of New Hampshire and Yi-Min Huang of PPPL. The same team is conducting the ALCC research with the addition of Jonathan Ng of Princeton University. Researchers on the NIF experiments, for which Fox is principal investigator, will include Bhattacharjee and collaborators from PPPL, Princeton, the universities of Rochester, Michigan and Colorado-Boulder, and NIF and the Lawrence Livermore National Laboratory.
PPPL, on Princeton University's Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
Pep Guardiola has underlined Manchester City’s focus on youth in their immediate transfer planning.
City have revitalised their attacking options by signing the likes of Raheem Sterling, Leroy Sane and Gabriel Jesus over the past two years.
But the squad as a whole still has an ageing profile, most notably in defence, and there is a recognition younger blood is required.
Manager Guardiola told City TV: “We are buying for the long term. That’s why Leroy is here, why Raz is here and why Gabriel is here. All of them are (around) 20 years old.
“Except in one or two cases when we need experience, the age is so important. Maybe we are going to buy a guy who is 28 or 29 years old because we are short in that position and he is going to give three or four important years – but of course, as young as possible is much better.”
City have invested heavily in youth football in recent years, and Guardiola is hopeful some Academy graduates will break into his first-team squad in the near future. He is planning to take some of the better prospects on City’s pre-season tour this summer.
He said: “I think Manchester City has to be so proud. I am really impressed with the work of the English people, who’ve worked for a long time here at the Academy.
“There are four or five players – of course they are young, only 16, 17 and 18 years old – that we would like to put on the field and play them, because I know how important it is for the fans to have people who grew up in the Academy.
“We cannot put them in without the talent, or if they are not able or willing to take on that challenge, but if they are good enough and they have a passion and they want to become something in Europe and world football, all the players in the Academy have to know we are there (watching).
“In the summer, four or five guys will come with us, but it is up to them.”
Guardiola recently met with owner Sheikh Mansour and chairman Khaldoon Al Mubarak during a trip to Abu Dhabi and was impressed by their long-term vision.
He said: “The people cannot expect we just spend money and the results are immediate. To achieve in football we need time. Money can help to advance that but it is not immediate.
“When I spoke with Khaldoon and Sheikh Mansour, I felt this is a commitment for a long time. It doesn’t matter who the players or managers will be, that will continue.
“They want to help the community and Manchester to be recognised, not just with one strong team like United, with another one.”
No, you're not seeing things. Out of our love for giving you options (okay, and a desperate attempt to be rich), we've added a little something extra for today. Come on in and check it out!
First, yes, that is a tattoo of a Toshiba Quad-Core Laptop, and second, I refuse to let you shame me for it.
In fact, it's just one of many tributes to its 8GB memory, 750GB hard drive, and USB 3.0 connectivity inked onto my body. And oh, the stories each one could tell:
On my left forearm: a picture of my Toshiba surrounded by roses and a banner that says "Mom".
On my right forearm: a picture of my Mom surrounded by roses and a banner that says "Toshiba".
Across the front of my neck: QUAD-CORE in Old English script. Lot of bleeding when I got this one. A lot of bleeding.
On my chest: two majestic bluebirds lifting my Toshiba into Heaven, surrounded by the gleaming rays of a sun whose radiant, smiling visage looks just like Beau Bridges (the truly talented one in that family). I admit, I got this one when I was drunk.
On my back: exactly like the one on my chest except the Toshiba is wearing a sombrero. I got this one when I was drunk in Tijuana.
On my right bicep: four open Toshiba laptops laid on their sides and arranged to form an ancient Sanskrit symbol of life and good fortune called a "swastika".
Right above my left ankle: a little bitty Toshiba with a purple-and-blue butterfly on its screen. You never forget your first Toshiba tat. I got this one when I pledged my sorority.
Babies looked longer and paid more attention to a scenario that violated transitive inference, trying to figure out why it differed from what they predicted
Infants are capable of making transitive inferences about social hierarchies of dominance as early as 10 months old
Researchers used puppets to see how babies respond to social situations
Don't let the googly eyes or bewildered look fool you, babies are smarter than they seem.
New research has found that babies are capable of working out social hierarchies as early as 10 months old.
In a non-verbal experiment with puppets, they responded to different scenarios in ways that suggested they knew which characters were more dominant than others.
Emory University found babies are capable of making transitive inferences about social hierarchy of dominance as early as 10 months, in a study that used puppets playing out different scenarios.
'We found that within the first year of life, children can engage in this type of logical reasoning, which was previously thought to be beyond their reach until the age of about four or five years,' says Stella Lourenco, a psychologist at Emory University who led the study.
In the first experiment, babies were shown a video of three puppets, an elephant, a bear and a hippopotamus, arranged in a row.
Each one was similar in size but positioned left to right according to social hierarchy.
In the first scenario, the elephant is holding a toy and the bear reaches over and grabs it, which suggests the bear is more dominant than the elephant. A similar scenario establishes that the hippo is more dominant than the bear.
The babies were then shown a scenario where the elephant takes the toy from the hippo, which held the infants’ attention longer than any of the other scenarios.
'Dominance by the elephant violates the expected transitive-inference relationship, since the bear took the toy from the elephant and the hippo took the toy from the bear,' Lourenco explains.
BABIES CAN SEPARATE SPEECH FROM GIBBERISH Athena Vouloumanos of New York University played a series of recordings for a group of nine-month-old babies. The recordings included a wide array of noises, all of which could be divided into four sections. First, babies heard a female voice saying words like 'truck' and 'dinner.' Second, they heard a parrot mimicking human speech. Third and fourth, the kids heard human non-speech (throat clearing, whistling) and parrot sounds. While they focused on this eclectic mix tape, the babies were shown pictures of checkerboards, human faces, and a cup (basically, this was an avant-garde art show). By noting how long babies stared at the images, scientists could tell if the kids comprehended what they were hearing. For example, when babies heard words spoken by a human, they stared at the pictures for a long time. They didn't have a problem identifying the sound of a real person. As for the human sound effects, when the babies heard coughs and hacks, they didn't pay any attention to the images on screen. They could easily tell the difference between language and gibberish.
'The babies look longer and pay greater attention to the scenario that violates the transitive inference as they try to figure out why it is different from what they would have predicted.'
In the second part of the experiment, researchers introduced a giraffe as the fourth character.
However, the infants did not pay more attention to the scenarios with this new character because it was not interacting with the other puppets during the familiarization phase, even if it displayed more dominance.
The data revealed that most of the infants who were shown unexpected dominance behaviors (23 out of 32) engaged in transitive inference, paying more attention to those scenarios than to the others.
The researchers concluded that transitive inference for social dominance is evolutionarily important, so the mechanisms to support this type of logical reasoning are part of our early development.
'It's remarkable that the infants could make these inferences about social dominance with minimal presentation,' said Regina Paxton Gazes, a psychologist from Bucknell University who designed the non-verbal experiment for infants and assisted in the study.
'It suggests an early-emerging, and perhaps evolutionarily ancient, ability that is shared with other animals.'
Not only is this data significant in learning about how the mind develops, it could also help determine where infants are on track in the learning process.
'Since a majority of babies show the ability to engage in this kind of logical problem solving, our paradigm could certainly become an important tool for assessing normative cognitive development,' Lourenco says.
Puppets were also used in an experiment conducted earlier this year to determine if infants are able to comprehend complex situations.
A CORONER has found a direct link between a young mother's suicide and her addiction to poker machines.
Katherine Natt, 24, died in August 2006 after taking an overdose of paracetamol.
South Australian Coroner Mark Johns said a suicide note left by the Adelaide woman indicated she had been addicted to pokies for some time, losing thousands of dollars.
But independent senator Nick Xenophon and the dead woman's mother have criticised Mr Johns for not going far enough.
Before her death, Ms Natt worked at the Adelaide Casino and Senator Xenophon said evidence had been presented to the inquiry to show such workers were 10 times more likely to develop a gambling addiction.
"I was hopeful that there would be further findings made as to the responsibility of gambling establishments - of the casino to its employees and poker machine venues to their employees," he said.
"Clearly there is an issue of shifting the culture.
"And the evidence is clear, if you work for a casino, the risk of developing a gambling problem is much greater and therefore, the duty of care should be much greater."
Ms Natt's mother, Kristine Mathews, said she felt let down by the state government.
"The government is prepared to take money from gamblers, but is not prepared to do anything to protect them from themselves," she said.
"Sometimes people need to be protected from themselves.
"It was so obvious to so many people that Kat had a gambling problem and yet nothing was ever done to assist her.
"I feel like everybody has just failed her and now the coroner has failed me in trying to help other people."
Ms Mathews said the coroner should have recommended stronger regulations to control how much money gamblers could withdraw from ATMs at pokie venues after evidence showed her daughter repeatedly took out large amounts, often on the same day.
In his findings, Mr Johns said from Ms Natt's suicide note and other evidence it was clear she was addicted to gambling on poker machines.
He said she suffered heavy financial losses and became concerned that she would lose the custody of one or both of her children.
"In consequence of these matters, she took an overdose of paracetamol in what was a clear act of suicide," he said.
But Mr Johns said there was no basis to draw a particular link between Ms Natt's employment at the Adelaide casino and her gambling addiction.
He said the woman did not seek assistance from a counselling service provided by the casino and did not raise her problems with management.
Ms Mathews said her daughter did not go to management because she feared she would lose her job.
Casino general manager David Christian said staff regularly received information regarding its problem gambling assistance programs and how to seek help from its trained staff.
"Adelaide Casino goes well above and beyond current regulations as a gaming venue employer and ensures that it provides a safe workplace for our employees, as well as customers," he said in a statement.
"Katherine Natt was a popular and valued employee of Adelaide Casino and is greatly missed."
The coroner has recommended a copy of his findings go directly to Prime Minister Julia Gillard and Tasmanian independent MP Andrew Wilkie following recent indications the federal government may consider new measures to deal with problem gamblers and poker machines.
Readers seeking support and information about suicide prevention can contact Lifeline on 13 11 14 or SANE Helpline on 1800 18 SANE (7263) or visit www.beyondblue.org.au.
Originally published as Coroner links mother's suicide to pokies
Four South Korean men were taken into police custody Monday on suspicion of stealing a Buddha statue from a temple in Tsushima, Nagasaki Prefecture, local police said.
The four men, whose ages range from 47 to 70, allegedly committed several thefts of local cultural assets as a team, the police said. Their immediate charge is the theft Monday morning of a Buddha statue from a temple in the town of Mitsushima on the island, which lies in the Sea of Japan midway between Japan and South Korea.
The 11-cm-tall statue is designated as the city’s cultural asset.
The theft was initially reported by a monk of the temple, the police said. Then the police spotted the four men at a nearby port and arrested them after finding the stolen statue among their belongings.
In recent years, Tsushima’s Buddha statues have become the targets of South Korean thieves. In February 2013, a South Korean court ordered the return of two stolen statues to Tsushima after they were taken from a shrine and a temple on the island.
Part of the water may have poured into the sea through a drainage ditch, Osamu Yokokura, a spokesman for the utility, said by phone. The company known as Tepco stopped the leak from a pipe connecting a desalination unit and a tank today, he said.
“There will be similar leaks until Tepco improves equipment,” said Kazuhiko Kudo, a research professor of nuclear engineering at Kyushu University, who visited the plant twice last year as a member of a panel under the Nuclear and Industry Safety Agency. “The site had plastic pipes to transfer radioactive water, which Tepco officials said are durable and for industrial use, but it’s not something normally used at nuclear plants,” he said. “Tepco must replace it with metal equipment, such as steel.”
Tepco has about 100,000 tons of highly radioactive water accumulated in basements at the Fukushima Dai-Ichi nuclear station nearly 13 months after the March 11 quake and tsunami caused meltdowns and the worst radiation leaks since Chernobyl. The tsunami knocked out all power at the station, causing cooling systems for reactors to fail. The utility was forced to set up makeshift pumps to get cooling water to the reactors, with most of it then draining into basements.
More Leaks
Tepco has been criticized before over its handling of the radioactive water following several leaks into the sea, including the one reported on March 26.
Last year, the environment group Greenpeace International said it found seaweed and fish contaminated to more than 50 times the 2,000 becquerel per kilogram legal limit for radioactive iodine-131 off the coast of Fukushima during a survey between May 3 and 9.
Mol, Belgium-based Nuclear Research Centre and Herouville-Saint-Clair, France-based Association pour le Controle de la Radioactivite dans l’Ouest confirmed at the time that they conducted analysis of the samples supplied by Greenpeace.
The radioactive material discharged into the sea from the Fukushima plant is the largest in history, according to a study by the Institute for Radiological Protection and Nuclear Safety. The institute, which is funded by the French government, made the estimate in October last year and said it was 20 times the amount calculated by Tepco. Tepco declined to comment on the report at the time.
Strontium Risk
The latest leak contains about 16.7 becquerels per cubic centimeter of radioactive cesium 134 and 137 combined, Tepco said in a statement today. It’s still investigating how much strontium and other types of radioactive particles are contained in the water, Yokokura said.
Strontium can be absorbed in the body through eating tainted seaweed or fish. It then accumulates in bone and can cause cancer, said Tetsuo Ito, the head of Kinki University’s Atomic Energy Research Institute, in a December interview.
On March 26, about 120 tons of radioactive water may have leaked from a pipeline connected to the desalination unit, Yokokura said. Of the leaked water, Tepco believes about 80 liters poured into the sea, he said.
To contact the reporter on this story: Tsuyoshi Inajima in Tokyo at tinajima@bloomberg.net
Lawyers representing the victims of the Israeli military attack on the Mavi Marmara have affirmed that they are pursuing legal action against the perpetrators, including individual Israeli soldiers. The statement signed by Attorney Ramazan Aritürk says:
We have the ID information of some of the Israeli soldiers who carried out the May 31 attacks and we have been filing criminal complaints against them in local and international courts including the ICC [International Criminal Court] and we will exert utmost effort to ensure that they receive the necessary punishment.
Nine civilians were killed, many execution-style, by Israeli forces who attacked the Gaza Freedom Flotilla, of which the Mavi Marmara was part, in international waters.
A copy of the statement was received by The Electronic Intifada from IHH, the Turkish charity that operated the ship and participated in the Gaza Freedom Flotilla.
Palmer report “null and void”
The Mavi Marmara lawyers also rejected the so-called Palmer report, which was leaked to The New York Times last week. The panel, appointed by the UN Secretary General, had offered legal justification for Israel's siege on Gaza, a position Turkey has rejected.
Turkey imposed severe diplomatic and military sanctions on Israel following the report and Israel’s refusal to apologize for the attack.
The lawyers’ statement affirmed that the Palmer report was “politically-motivated,” biased due to the presence of notorious human rights abuser and Israel ally, former Colombian President Alvaro Uribe, and was legally “null and void” because it had not been adopted by all members.
Here is the full statement:
Press release from Mavi Marmara lawyers, 03.09.2011

Israel's attack on the Gaza Freedom Flotilla (Mavi Marmara and other ships in the flotilla) on May 31, 2010 is being followed by our law office in the international arena. As the lawyers of the Mavi Marmara victims, we have felt the need to make this statement following the leakage of the UN Palmer Report on the flotilla attack and the developments that followed.

As everyone knows, the mission and ultimate goal of the Palmer Commission assigned by the UN Secretary General is to "positively influence the relations between Turkey and Israel and the general situation in the Middle East." In this regard, the Palmer report is significantly different from the UN Human Rights Council's report on the Mavi Marmara incident, and it is politically motivated. The goal of the commission was to prepare a report which would help Turkey and Israel reconcile.

The commission is comprised of four members: former Prime Minister of New Zealand Geoffrey Palmer, the head of the commission; former Colombian President Alvaro Uribe, the deputy head of the commission; Israeli representative Joseph Ciechanove; and Turkish representative Oezdem Sanberk, a retired ambassador. For the commission to release its report, it was necessary for all four members to reach a consensus on the report and approve of it. If no consensus is reached among the members of the commission, the report can never become an official report; it becomes null and void. In this respect, since the Palmer report has been rejected by Turkey, it has no legal validity, meaning that it is null and void.

In addition, we would like to remind everyone once again that as the lawyers of the Mavi Marmara victims, we conveyed our reservations about the impartiality of Alvaro Uribe, a member of the commission, to UN Secretary-General Ban Ki-Moon. Uribe is very well known for his pro-Israeli stance.
In this regard, his impartiality and independence are highly controversial. Uribe received the Light Unto the Nations Award on May 4, 2007, given by the American Jewish Committee (AJC), one of the strongest Jewish lobbying institutions in the United States. The AJC president introduced Uribe as a good friend of Israel and the Jewish community while presenting the award to him. Moreover, an investigation was launched by the Hague-based International Criminal Court (ICC) against Uribe due to the mass crimes he committed while serving as the Colombian president, and the investigation is still in progress.

The illegitimate and unjust actions of those perpetrating the May 31 attacks are so obvious that even the Palmer report, despite being legally null and void and despite the lack of impartiality of its members, admits that Israeli soldiers used excessive and disproportionate force against the flotilla passengers.

We have the ID information of some of the Israeli soldiers who carried out the May 31 attacks and we have been filing criminal complaints against them in local and international courts, including the ICC, and we will exert utmost effort to ensure that they receive the necessary punishment. We would like to remind everyone once again that on behalf of our clients, we filed criminal complaints last October at the ICC against Israeli officials for the war crimes and crimes against humanity they committed.

Finally, first as humans and then as the lawyers of the Mavi Marmara victims, we regret the leakage of the UN's Palmer Report on Sept. 1, the International Day of Peace.

Best regards,
Atty. Ramazan ARITÜRK
Nearly 90 per cent of the city's top-performing schools (shown in dark blue) are above the "latte line". "If you are south of that line, you are largely a 'have-not'." Fairfax Media mapped nearly 330 metropolitan schools according to the share of exam results that scored 90 or above in this year's HSC. Schools in blue performed in the top half of the list; schools in red performed in the bottom half. Nearly 90 per cent of Sydney's 55 highest-performing schools (shown in dark blue) are located either on or above the line, which stretches diagonally from Rouse Hill in the north-west to Tempe in the south-east. Fewer than 25 per cent of the schools below the line performed in the top half of the honour roll, according to the analysis. Above the line, nearly 85 per cent of schools were ranked in the top half.
Malek Fahd students pose for a picture with principal Aiyub Ahmed and teachers Tulin Bragg and Houda Kabbr. Credit: Daniel Munoz

The rift becomes even more stark when selective schools are excluded. In that case, none of the top-performing schools, in dark blue, are below the line. But the reality could be worse. The map excludes dozens of schools that failed to achieve a single Band 6 score of 90 or above. Many of these would be located below the line. "This clearly shows there is a direct correlation between results and postcodes and is the very reason needs-based funding is so important," said NSW Labor education spokesman Jihad Dib. Government schools struggled hardest to buck the trend, with only six of those below the line finishing in the top half of honour roll schools.
A separate Fairfax Media analysis of public schools, grouped by their local government area prior to amalgamation, shows educational disadvantage is being cemented in some of the areas tipped to have the state's largest population booms. In Camden, for example, which will see the biggest growth in school-aged population of any Sydney local government area over the next decade, students achieved Band 6 scores in only 3 per cent of exams. In this area, student numbers are set to surge by up to 55 per cent by 2026, according to NSW Department of Planning projections. The percentage of Band 6 scores was just as low among students in Blacktown and Liverpool, where the school population is projected to boom by roughly 25 per cent over the next decade.
All three local government areas are below the line. It's a far cry from the results of students north of the line in the City of Sydney, The Hills Shire and Hornsby LGA. Here, students achieved Band 6 scores in up to 28 per cent of exams, according to Fairfax Media's analysis, which does not include results from students who sat the International Baccalaureate. The data also reveals some of the state's hottest properties are found in neighbourhoods straddling the line, suggesting parents hoping to give their children the best shot at HSC success are driving up house prices in Strathfield, Ryde, Cherrybrook and Castle Hill. Eight out of the 10 public school catchment areas with the highest growth are along the border, according to 2015 figures from the Fairfax Media-owned Domain Group. The median house price in neighbourhoods surrounding Strathfield Girls, for example, one of the highest-performing public schools, surged to $1.7 million last year.
Pete Goss, the school education program director at the Grattan Institute, said a range of different ways of looking at education outcomes confirmed student background made a difference. "We have known about this for a long time but we are not finding any ways of making it go away," he said. NSW Education Minister Adrian Piccoli said the government's new resource allocation model "addresses exactly this issue". "Student results should not be determined based on where a student lives or the school they go to," he said.
"Hundreds of millions of dollars in additional needs-based funding have gone to schools in south-western and western Sydney, targeted to the students who need the most support." But state budget figures show that despite record total levels of school funding, the share directed to education has shrunk over the past decade. Education now makes up 21 per cent of total spending compared with 26 per cent in 2003. The Fairfax Media analysis comes after the latest PISA results, which tested thousands of teenagers around the world and highlighted widening educational disparity between Australian students. Australia's PISA results found 15-year-olds in the most socio-economically disadvantaged 25 per cent were a full three years' schooling behind the top 25 per cent. However, the map also reveals dozens of schools in lower socio-economic areas - including Canley Vale High, Bonnyrigg, Fairvale, St Johns Park, Birrong Girls and Liverpool Girls - that achieved remarkable HSC outcomes despite an increasingly unequal playing field.
"Our school does have really high expectations of our kids," Fairvale principal Kathleen Seto told Fairfax Media last week. "Our community is very multicultural and very poor but very aspirational." Greenacre's Malek Fahd Islamic school is another institution that defied its postcode and the socio-economic status of its many migrant families. Earlier this year, the 2100-student school was threatened with losing its federal government funding after an investigation found the private school was operating for profit, following allegations of six-figure loans to some board members. Teachers, desperate to reassure students they would get through their HSC, started planning to set up classrooms in their living rooms and garages.
"I think the future of the school, and whether there was going to be a school to actually sit the exam in was definitely a difficult thing to handle, but it actually turned out for the best in the end," said student Majed Kheir. This year's students achieved scores of 90 or above in 18 per cent of their exams. The school's future remains in the balance pending an appeal that will decide whether the school continues to receive funding next year. In April, the school was granted a last-minute stay by the Administrative Appeals Tribunal as the federal government moved to withdraw $19 million in funding. For now, teachers aren't taking the results for granted.
"It just shows how resilient the kids can be. We gave them extra hours, extra tuition time, Saturday classes," said Tulin Bragg, the school's curriculum co-ordinator. "It gave them comfort that the doors were always going to be open for them, whether it's our school doors or garage doors or our house doors. It brought out the best in us."
As a B.C. Transit bus swings open its doors in front of the Pine Centre Mall in Prince George, a small crowd of people carrying duffel bags and backpacks file on.
They each stuff $5 into the fare box to take the three-hour ride from Prince George to Burns Lake, a community to the west along B.C.'s Highway 16, a remote 720-kilometre stretch of road also known as the Highway of Tears.
"They should have put this bus up years ago before people started going missing," said Roger Joseph, 61, who was seated by a window.
These images are of 18 women and girls whose deaths and disappearances are part of the RCMP's investigation of the Highway of Tears in British Columbia. The women were either found or last seen near Highway 16 or near Highways 97 and 5. (Individual photos from Highwayoftears.ca)
This bus trip is possible because of two new subsidized routes that run along part of Highway 16 between Prince George and Smithers that officials hope will reduce hitchhiking and provide safe, reliable and affordable transit.
Since 1969, police officials have confirmed 18 women and girls have gone missing or have been murdered along this notorious corridor, but Indigenous leaders believe the tally is likely much greater, perhaps as high as 50.
Along this northern highway, communities and First Nations reserves are few and far between. At times traffic is sparse, and it can feel hopelessly remote if you need to go somewhere and have no ride to get there.
Roger Joseph is using the new bus service, because his daughter felt it was too dangerous for him to hitchhike. (Briar Stewart/CBC)
After Joseph lost his driver's licence eight years ago, he turned to hitchhiking as a way to make the 230-kilometre trip to Prince George to visit his three daughters. It would take him all day to find someone to pick him up along the highway, and it left his family anxious.
"My daughter, she said, 'Daddy, I don't want you hitchhiking. You're always hitchhiking and we're afraid you might catch a ride with the wrong person.'"
A fear many share
That's a fear shared by many who have walked the highway, or who have driven past others alone on the shoulder.
"Family members will not let family members get on the highway anymore," said Renata Heathcliff, 49.
In her younger days, she was an avid hitchhiker, recalling a scary incident she and a girlfriend had one night when a two-seater car pulled up while they were walking toward Prince George.
The driver opened the door and motioned to the passenger seat saying he had room for just one of them. They both slammed the door and walked away.
A sign near the community of Moricetown, B.C., reminds women about the dangers of hitchhiking on Highway 16 also known as the 'Highway of Tears.' (Dillon Hodgin/CBC)
Today, she can't imagine hitchhiking let alone hitchhiking on the Highway of Tears.
"That's just not possible. It's just not safe," Heathcliff said.
Without a driver's licence or a car, she had been paying people to drive her from her home on the Nadleh Whut'en First Nation. And with no job, those trips proved too expensive.
Since BC Transit launched the new service at the end of June, Heathcliff has taken the bus four times to make the 140-kilometre trip to Prince George to visit her grandchildren.
Renata Heathcliff has been using the new bus service regularly to visit her family in Prince George. (Chris Corday/CBC)
'Safe, reliable and affordable'
The two new bus routes cover a 400-kilometre section of Highway 16.
One runs between Prince George and Burns Lake, and the other begins in Burns Lake and goes farther west to Smithers. Each trip costs $5, which is about a tenth of the price currently charged by Greyhound to make a similar trip.
The company has said that it wants to withdraw from the area because of a lack of ridership, but BC Transit says its customer base is growing.
The bus service travels along a long stretch of highway spanning B.C.'s remote northwest region. (Dillon Hodgin/CBC)
One route between Prince George and Burns Lake averages 20 riders a day, while the other, 12.
"This initiative was about introducing a transit service that was safe, reliable and affordable," said Chris Fudge, a senior regional transit manager at BC Transit.
BC Transit is able to offer the $5 fare because two-thirds of the operating costs are covered by the province, with the remaining portion, just over $129,000, paid for by the communities and First Nations groups along the route.
Many of them had been lobbying for this type of service for years, and some of the most vocal were driven by their own loss.
The death of Mary Teegee's cousin Ramona Wilson was part of the reason she and many others pushed for the new bus service. (Chris Corday/CBC)
'Precious jewel'
"It's definitely something I have been personally committed to," said Mary Teegee, executive director of Child and Family Services at Carrier Sekani Family Services.
In 1994, her 16-year-old cousin Ramona Wilson went missing after telling her family she was headed out to a party.
Her body was found in a wooded area near Smithers the following year.
"We lost one of our precious, precious jewels, and because of that, you look at all of the need for safety," Teegee said.
Richard Skin says he's pleased the new bus service is available. (Briar Stewart/CBC)
While the motivation for the bus service was to make it safer for people to travel along Highway 16, Teegee said there was also a desire to make it easier for people to get around, whether to shop or go to the doctor.
On a Wednesday morning in September, Richard Skin, 60, was taking a Smithers-bound bus heading for the community of Houston to go to a medical clinic.
"This is pretty nice to have a transit system like this," he said.
Richard Skin felt he couldn't wait a number of hours for the bus to return, so he tried to hitchhike back to his community. (Chris Corday/CBC)
A few hours later, he was seen on the side of the highway with his thumb out, trying to catch a ride home after his doctor's appointment. He didn't know the bus made a return trip.
As it wouldn't be there for a few more hours, he decided to keep hitchhiking even though he admitted it was going to be difficult to get a ride by "a small community in the middle of nowhere."
The new bus routes connect many small, remote communities in northwestern B.C. (Chris Corday/CBC)
'A dangerous highway'
Currently the bus routes run on alternating days, and they don't connect the entire highway, although BC Transit hopes to add another section later this fall that will link the city of Terrace with New Hazelton.
Officials say they are working to educate the public and increase awareness about the new service and are reviewing the routes.
Heathcliff believes people in her First Nations community know about the new transit line, but she is frustrated that more haven't been using it. She still sees people hitchhiking along the road.
"I see women out there and I don't understand why they're not getting it," she said.
"That's a dangerous highway." |
ICBC could reduce its overall costs by abandoning a culture of fighting claimants in court, says a personal injury lawyer who has been in the business for two decades.
On Tuesday, the government announced the insurance company would seek approval of a 6.4 per cent increase to basic premiums and increase overall optional premiums by 9.6 per cent to address the company's financial woes.
"They're not coming to the table with dollars at an early stage that would resolve the claim. They tend to hold it back until another year goes by," said Dairn Shane, a lawyer with Preszler Law.
"Essentially every claim is fought to a certain degree."
Shift in decision-making power
ICBC pays out 24 per cent of its expenses not to repairing damaged bumpers or soft tissue injuries, but to legal fees, according to a July report by Ernst & Young.
Shane said that B.C.'s adversarial approach to claims pushes legal costs up, results in larger settlements and angers customers who spend long periods of time waiting for settlements.
He said he's noticed the issue has become acute in the last five years and remembers the days when he used to deal with ICBC adjusters face-to-face.
"The claim would settle, everybody would shake hands, everybody would walk away," he recalled.
He believes the shift came when decision-making power at the corporation was centralized in the hands of management.
"They took it away from the adjusters that we would typically be dealing with," he said. "Some of the adjusters who previously had $200,000 to pay, suddenly had $20,000."
That shift meant far more settlements ended up in court, where the long legal process would drive up associated legal fees, he said.
'Deep and profound issues'
In the last six months, Shane said he noticed ICBC shift toward the old style of settling out of court.
"That's going to be much more beneficial than some sort of no fault system which at the end of the day just punishes victims of car accidents to the benefit of those that are causing them," said Shane.
Attorney General David Eby said the government needs to address "deep and profound issues" to keep insurance rates from rising even further.
In addition to the application for a rate increase, Eby said the government will increase the use of red light cameras, focus on reducing distracted driving, improve problem intersections and commission an "immediate and comprehensive" business audit of ICBC management.
While the new attorney general is quick to blame the previous Liberal government, Liberal MLA Andrew Wilkinson has fired back in a statement saying the NDP are offering no new solutions to the problem.
"Many of the actions the NDP are proposing today were contained in the Ernst and Young report commissioned earlier this year. Instead of offering new ideas on how to control the cost issue, Eby is attempting to lay blame for the challenges ICBC is currently facing rather than provide his own plan," the statement said.
The Ernst & Young report was commissioned by the former Liberal government and warned insurance rates may need to climb 30 per cent by 2019.
An internal report, released last November, said rates could even climb as high as 42 per cent by 2020.
Here is full release from Government on @icbc. #bcpoli pic.twitter.com/PCF4NoQxSl —@richardzussman
With files from Liam Britten and CBC Radio One's On the Coast
The Rand Paul filibuster is turning out to be a huge moment. The Kentucky senator hit a populist nerve, even on the left. For instance, Chris Matthews ignored the filibuster the day it happened, and then embraced it the following night, partly at the urging of his own son. I should think the Nation will also honor Paul’s arguments against assassination-by-drone.
On the right, the Rand Paul surge is further evidence of the purge of the neoconservatives. Rush Limbaugh has been all but endorsing Paul as a corrective to the neocons and their Senate amigos, Lindsey Graham and John McCain. The Atlantic's Conor Friedersdorf quotes long passages from Limbaugh's radio show, in which he says the neocons are justly "paranoid" about what is befalling them.
Here’s the substance of this. There is a fear among McCain, Lindsey Graham, and others who favor an interventionist foreign policy. Think of the neocons. Think of going into Iraq and not just securing Iraq, but building a democracy. Nation building, if you will. Think of the outbreak of the Arab Spring and the people on our side who thought, “Wow, this is wonderful. This is the outbreak of American democracy,” when it wasn’t. It was the exact opposite. Rand Paul, they’re asking themselves, is he his father’s son or is he on his own here? They’re worried that he’s his father’s son. They’re worried that Rand Paul is an isolationist. They’re worried that Rand Paul’s diatribe on drones really means that Rand Paul wants to bring the military home and not use it unless we’re attacked. He doesn’t like it being used in an intervention. This is what they fear. And as he succeeds in making a connection with the American people, they are worried, the neocons are worried that they are being undermined by this.
[Friedersdorf: “The talk radio host seems to think the neocons are right to be scared”]
I’ll tell you why. Rand Paul made a connection with the American people. These other people do not. He made a connection. Therefore, he has the ability to influence and motivate people. I’m telling you what their fears are. They thought that Ron Paul was absolute nutcase, wacko. That’s why they’re calling Rand Paul a wacko, ’cause that’s what they thought of Ron Paul. Libertarian, fruitcake, nutcase, isolationist, shut down the US military, speak positively about Islamists, all this kind of stuff. They are afraid that’s who Rand Paul is, and they’re afraid that what Rand Paul was doing with this filibuster was not just speaking out against the use of drones on American citizens on American soil. They’re afraid that Rand Paul is actually setting the stage for building up public support to stop the interventionist usage of American military might and foreign policy all over the world. It’s a fear that they’ve got.
After a months-long standoff in Venezuela that included the Socialist-controlled Supreme Court overturning many of the opposition-controlled National Assembly's decisions, the court has explicitly ruled it will now act as the legislative branch, Reuters reports. "As long as the situation of contempt in the National Assembly continues," the court ruled, "this constitutional chamber guarantees congressional functions will be exercised by this chamber or another chosen organ."
The secretary general of the Organization of American States, Luis Almagro, called the action a "self-inflicted coup d'état perpetrated by the Venezuelan regime against the National Assembly." Several Latin American countries, including Brazil, Colombia, and Mexico, expressed concerns and Peru withdrew its envoy, which Venezuela's foreign minister called "rude support for the violent and extremist sectors in Venezuela."
The opposition won control of the National Assembly in late 2015 as the long socialist project in Venezuela was coming to a brutal and inevitable head. Since then, the government has doubled-down on the kind of centrally planned and redistributionist policies that brought Venezuela to where it is in the first place. Instead of changing course, the government has found more and more scapegoats and "enemies" to blame for the economic crisis. Earlier this month, the socialist government accused bakers of waging "economic war" against the country and started arresting them for making bread rolls.
The United States and the European Union also chimed in on the latest developments in Venezuela. A spokesperson for the State Department said the U.S. condemned the "decision to usurp the power of the democratically elected National Assembly" and called it "a serious setback for democracy," while the E.U. called for a "clear electoral calendar." The opposition has called for early presidential elections as the popularity of President Nicolas Maduro continues to scrape new lows. The government responded by accusing a "right-wing regional pact" of plotting against it. State-controlled Telesur TV called the characterization of the court's decision as a coup "fake news," insisting the court's ruling was because the occupants of 3 of the 167 seats in the legislature were accused of voting irregularities. The opposition controls 112 seats. Maduro tried to dissolve the legislature last year after it attempted to launch a recall effort against him.
U.S. responses to the crisis in Venezuela in recent years have largely been profoundly unproductive. A few months after the opposition party wrested control of the legislature, President Obama renewed the U.S. declaration of Venezuela as a "national security risk," providing Maduro and the socialists new ammunition to smear the opposition as foreign stooges. Yesterday, Sen. Bob Menendez (D-N.J.) joined Sen. Marco Rubio (R-Fla.) in condemning the court's power grab, calling it "an attack on what remained of democratic institutions in Venezuela" and Maduro "an unhinged dictator who has systematically dismantled democracy in this country." The two also met with various opposition lawmakers. Menendez, who has called for an "independent" investigation of Russia's alleged interference in the U.S. election, should be keenly aware of how his words and actions could be weaponized by the ruling party in Venezuela and used against the opposition.
For Venezuelans, liberation from socialism won't come from the U.S. or the OAS or any foreign actor. Instead it will come from within, with the help of the kind of decentralized technology that is challenging state power around the world, like Bitcoin.
The LSPIRG Team not only includes the above staff members, but also our Board of Directors, partners and volunteers. We are grateful to everyone in our community for making all that we do possible.
LSPIRG Mission, Vision and Values
Mission: To provide support and advocacy by and for campus and community members marginalized by systemic violence by cultivating meaningful relationships, political education, and anti-oppressive organizing
Vision: To act as a reliable support to student and community members building and sustaining movements that tear down systems of violence and replace them with equitable and just communities
Values:
Anti-Oppression - LSPIRG challenges manifestations of violence with attention to the colonial systems and ideologies that enact and perpetuate this violence. We acknowledge that our anti-oppressive organizing must dismantle the ongoing colonization of Turtle Island.
Movement Building - LSPIRG strives to meet people and their communities where they are to provide support, knowledge, and resources that build their capacity for political engagement. We engage in this difficult work because we believe in the ends we seek to achieve.
Prefigurative Politics - LSPIRG is committed to forms of organizing and personal relationship that reflect the future society we are hoping to create through our work.
Community Care - LSPIRG is responsible for responding to instances of harm from a holistic perspective that supports the individual and community while working to dismantle the conditions that precipitate them.
Self-Care - LSPIRG encourages its members to engage in behaviours and politics that validate and honour their unique subjectivity, needs, and emotional well-being.
Environmental Justice - LSPIRG aims to address the ways in which capitalism, resource extraction and colonization impact the land, air, and water. We stand for climate justice and supporting Indigenous communities and land defenders who are speaking out against the ongoing destruction.
Activism - LSPIRG provides its members with the skills to self-organize and engage in actions that advance their political objectives.
Education - LSPIRG facilitates the building of skills and political analysis among students and community members through inclusive and anti-oppressive pedagogies.
This week we publish surprising and, on the face of it, disturbing findings. According to Christopher Murray and colleagues at the Institute for Health Metrics and Evaluation (IHME) at the University of Washington in Seattle, there were 1·24 million deaths (95% uncertainty interval 0·93–1·69 million) from malaria worldwide in 2010—around twice the figure of 655 000 estimated by WHO for the same year. How should the malaria community interpret this finding? Before we answer that question, we need to look beneath the surface of this striking overall mortality figure.
First, annual malaria mortality peaked in 2004 at 1·82 million. Since then, there has been a 32% reduction in malaria deaths, driven mainly by “accelerated decreases” in sub-Saharan Africa. Second, although there has also been a substantial decrease in the number of deaths outside sub-Saharan Africa, adults now make up the major burden in these regions. In Asia and the Americas, the median proportion of deaths in those older than 15 years was 76% and 69%, respectively. Overall, the IHME data show that malaria deaths in 2010 in those aged 5 years and older were much higher than previously thought—524 000 deaths compared with 91 000 as estimated by WHO. Third, malaria accounts for many more child deaths in sub-Saharan Africa than previously estimated—24% of total child deaths, compared with the 16% estimated for 2008 (https://doi.org/10.1016/S0140-6736(10)60549-1).
The reliability of these findings will certainly be the subject of much debate, as were the similarly revised estimates for India (https://doi.org/10.1016/S0140-6736(10)60831-8), obtained by different methods and reported in 2010. Murray and colleagues used inputs from vital registration systems, published and unpublished verbal autopsy reports, and estimates of malaria transmission intensity to construct an array of models, which were then assessed for predictive validity. The authors will need to make their data and assumptions fully available to others who will surely wish to reproduce their calculations.
One aspect of the findings that is unlikely to raise objections is the implication that interventions scaled up since 2004 have been phenomenally successful in reducing the number of malaria deaths. Much of this success can be attributed to the work of the Global Fund To Fight AIDS, Tuberculosis and Malaria, now celebrating its tenth anniversary. The Global Fund contributes about two-thirds of the world's funding for malaria programmes, and since its inception in 2002 has distributed 230 million insecticide-treated bednets and a similar number of doses of artemisinin-based drugs. Coverage of indoor residual insecticide spraying now stands at around 70% for the countries with the highest disease burden. With the recent and untimely resignation of its Executive Director, Michel Kazatchkine, the Global Fund is facing an unprecedented emergency. The results we report today show how essential it is for donors to recommit to the Global Fund, as they did last summer for the Global Alliance for Vaccines and Immunisation. We therefore welcome the US$750 million promissory note announced last week by the Bill & Melinda Gates Foundation. This commitment for 2011–16 is a legally binding agreement for future payment, but also counts as cash in the bank and can thus be used to cover all grants the Global Fund has already signed off. It has thrown the Global Fund a lifeline at a time when donor support is in desperately short supply. Others should follow this lead.
We must also conclude from today's study that malaria might be a far more important cause of childhood mortality than previously thought. If correct, this finding has substantial implications for child survival programmes. It also seems clear that malaria is a greater long-term threat to adult health than we had previously imagined. Again, if correct, this finding means that malaria control and elimination programmes should be paying far greater attention to adults than is currently the case. Finally, although we can be grateful for these new estimates of malaria mortality, one important lesson from the science of estimation is that the urgency to revitalise health information systems has never been greater. We need reliable primary cause of death data to ensure that trends in malaria mortality are readily and reliably monitored—and acted upon.
What should happen now? WHO's new independent advisory body, the Malaria Policy Advisory Committee (MPAC), held its first meeting this week. But MPAC only has 15 members. We believe urgent technical and policy analyses must be initiated by WHO—involving a broader group of experts (eg, including those in child survival) and country representatives—to review these new data and their implications for malaria control programmes. This opportunity needs to be grasped with urgency and optimism.
DOI: https://doi.org/10.1016/S0140-6736(12)60169-X. Copyright © 2012 Elsevier Ltd. All rights reserved.
Remember the line when Hans starts to sing “We finish each other's…” and Anna finishes it by comically interjecting “Sandwiches!” By projecting an artificial image of himself, Hans is able to fool Anna into thinking that he agrees with her, even though he is only pretending.
Even though he had to sing the next line quickly, you'd think that Hans would still be confused by Anna's change to the line. Well, here is his hidden look of bewilderment as she sings the word.
It comes so fast that you can't see it well between the frames. But once you see it here, you can clearly interpret Hans's thoughts about Anna. Mine would be something like “What did she just say? Oh man, she's the one who is crazy!”
But then afterwards, he is able to return to the song by continuing to reel her in when he pretends to agree, “That’s what I was gonna say!”
And even with a short exchange like this, it still makes Hans appear to be too good to be true…and so Anna only continues to fall for him.
This week we continue our look at the Top Ten Villains with the last of the “Big Three” from the Avengers. We've finished Captain America and Iron Man, and now it's time to bring the thunder. Thor is, for all intents and purposes, immortal, or at least close enough compared to us mortals. With all that time and only so much mead to drink, there are a ton of villains to choose from in his rogues' gallery. We narrowed it down to the customary ten and chose who we felt are Thor's definitive villains. Of course, let us know your thoughts and your own top ten in the comments below!
10. Hela (First Appearance Journey into Mystery #102, 1964)
Who or what is Hela, you might ask? Hela is the daughter of Loki from an alternate universe. So she is Loki's daughter, but not quite; the whole Ragnarok thing in Asgard is all kinds of nuts. Odin bestowed upon Hela rulership of both Hel and Niffelheim and granted her the title of Goddess of Death. Hel is basically hell for the gods, Valhalla being the heaven version. You would think that after being given all this she would be satisfied, but no. Hela strove to take over Valhalla and consistently battled Thor for power. She even cursed Thor with the inability to die while making his bones so brittle that armor had to be constructed to keep his frame together. Hela had to make a deal with Death herself when Hel was destroyed during the Siege of Asgard.
9. The Absorbing Man (First Appearance Journey into Mystery #114, 1965)
Crusher Creel, the Absorbing Man, has been a bad guy in the Marvel Universe for a really long time. He has fought just about every hero in the Marvel Universe. The thing that makes him so special to Goldilocks is that Loki gave good old Creel his powers and his trademark ball and chain. Crusher started out as a thug and a low-rent boxer (he was the fighter who got his butt handed to him the last time he faced Battling Murdock). What makes the Absorbing Man so awesome is his power to absorb the properties of whatever he touches. This has made battling old Creel somewhat troubling for Thor, since absorbing Mjolnir from time to time has made the Crusher nearly invincible. It has always been Thor's superior fighting skills that made him victorious. A version of the Absorbing Man appeared in the recent season of Marvel's Agents of S.H.I.E.L.D. and even in Daredevil's first season.
8. The Destroyer (First Appearance Journey into Mystery #118, 1965)
The Destroyer was a device created by Odin to protect Midgard and Asgard from impending doom from the stars. It was built from enchanted Uru metal, making it nearly invulnerable. The armor has been possessed by Loki on several occasions, though, and he has used the suit to nearly kill Thor many times. During a standstill battle with Galactus, Thor had to trade the Destroyer to Galactus for the freedom of his herald Firelord. While the Destroyer isn't necessarily a villain, it is still feared throughout the nine realms as a power to be approached with caution. Of course, the Destroyer also appeared in the first Thor film.
7. Malekith the Accursed (First Appearance Thor #344, 1984)
Malekith the Accursed is one of the Dark Elves from Svartalfheim who has been particularly nasty to Thor. Malekith teamed up with Loki and the demon Surtur to gain control of Asgard. During this plot, Malekith had to gain control of the Casket of Ancient Winters, which, if opened, would release an untold winter storm upon Midgard. Malekith succeeded in opening the Casket at the cost of the lives of its human guardians. Thor and the Avengers ended up stopping the evil plot and resealed the Casket. Malekith attempted to impersonate Baldur the Brave on many occasions. He was thought to have been killed by Kurse during that particular storyline, but he has returned time and time again to burden Thor. Malekith the Accursed made his big-screen appearance in Thor: The Dark World.
6. The Wrecking Crew (First Appearance Defenders vol. 1 #17, 1974)
The Wrecker, Piledriver, Bulldozer, and Thunderball collectively make up the Wrecking Crew. Originally, the Wrecker (Dirk Garthwaite) and Thunderball (Dr. Eliot Franklin) were in prison together, where they plotted to build a gamma bomb and hold New York ransom for millions of dollars. The Wrecker was a violent criminal who used his wrecking bar to destroy crime scenes; during this plot, he was accidentally given super strength and an indestructible wrecking bar. When the bar was struck by lightning, the whole Wrecking Crew gained their power. As the Wrecking Crew they have battled just about every Marvel hero, only to be defeated every time. You would think teamwork could overcome even the Mighty Thor, but Thor has won every single encounter.
5. Ulik (First Appearance Thor #137, 1967)
There’s something to be said about Rock Trolls, and nobody knows this more than Thor. Ulik’s father was among the trolls banished to the underworld of Nornheim. Ulik was created to be the physical equal of Thor. Ulik’s primary goal was to steal Mjolnir so that an invasion of Asgard would seem more feasible. Ulik has incredible strength and knuckle-dusters made of uru, the same metal from which Mjolnir was forged. What makes Ulik such a bad ass is his unyielding hatred of Thor and his quest for “rock troll” justice, though Thor continues to make his life hard. It would be nice to see old Ulik in the next Thor film.
4. The Midgard Serpent (First Appearance Marvel Tales #105, 1952)
Jormungand the World Serpent is based on the serpent of Norse myth. The mythology works this way: Thor is fated to die during Ragnarok in a battle with Jormungand. Through several encounters with the Serpent, though, Thor has come out on top, eventually killing the serpent. Strangely, the Serpent is resurrected each and every time. One of the most memorable battles with the Serpent is in Thor #379–380, where Walt Simonson uses full splash pages to depict the sheer size of the epic battle that Thor and Jormungand unleash upon the world. The recent Secret Wars saga has seen the Midgard Serpent’s return.
3. Enchantress (First Appearance Journey into Mystery #103, 1964)
Amora the Enchantress was a student of Karnilla, Queen of the Norns, but her apprenticeship was cut short when Karnilla banished her for being power-hungry and overzealous. In her first appearance, Odin himself sent Amora to Midgard to destroy Thor’s relationship with Jane Foster. The Enchantress also craved the attentions of Thor himself. In her earlier encounters things never really panned out so well, even when she teamed up with Skurge the Executioner, but she continued to use her sexuality to enchant Thor. She eventually shacked up with Thor in New York, until the Heroes Reborn saga happened and Thor went missing for a year. Amora met her end during the latest Ragnarok against Loki. The next Thor film is currently filling the role of the Enchantress, so let’s hope it’s a good one.
2. Surtur (First Appearance Journey into Mystery #97, 1963)
Surtur is a fire demon from the realm of Muspelheim. This nasty beast has been hell-bent on the destruction of Asgard ever since Odin banished him for teaming up with the trolls. Ironically, most of Thor’s burdens have come from Odin’s overbearing past. Surtur earns his place on this list most notably for Walter Simonson’s run in the ’80s, in which Surtur teamed up with Loki and Malekith to overthrow Odin. During the battle, Surtur defeats both Odin and Thor, but Loki has a change of heart and creates an illusion that tricks Surtur into believing that he has won the day. This gives Odin the upper hand; he throws himself upon Surtur and they fall into the pits below Asgard. Odin eventually returned, but not without Surtur’s essence buried within his soul. Surtur returns to create more trouble for Thor later on.
1. Loki (First Appearance Journey into Mystery #85, 1962)
Without any doubt, Loki is bound to make number one on any list for which he might be considered, and a list of Thor’s villains is no exception. Loki is Thor’s adopted brother, which has tended to cause issues for eons. Loki has been jealous of Thor, and their history is a long and sordid one. He has consistently been a burden, but has occasionally had a change of heart in the heat of the moment. The character has seen some changes over the years, but I suppose with 50 years of history a little flux is needed to keep the character fresh. Fortunately, for those not well versed in the comics, we have had Tom Hiddleston in the driver’s seat during both Thor films and The Avengers to show everyone just how amazing Loki can be. His performance stands out above most of the performances in the films.
Hendrik Antoon Lorentz (18 July 1853 – 4 February 1928) was a Dutch physicist who shared the 1902 Nobel Prize in Physics with Pieter Zeeman for the discovery and theoretical explanation of the Zeeman effect. He also derived the transformation equations underpinning Albert Einstein's theory of special relativity.
According to the biography published by the Nobel Foundation, "It may well be said that Lorentz was regarded by all theoretical physicists as the world's leading spirit, who completed what was left unfinished by his predecessors and prepared the ground for the fruitful reception of the new ideas based on the quantum theory."[2] He received many honours and distinctions, including a term as chairman of the International Committee on Intellectual Cooperation,[3] the forerunner of UNESCO, between 1925 and 1928.
Biography
Early life
Hendrik Lorentz was born in Arnhem, Gelderland, Netherlands, the son of Gerrit Frederik Lorentz (1822–1893), a well-off nurseryman, and Geertruida van Ginkel (1826–1861). In 1862, after his mother's death, his father married Luberta Hupkes. Despite being raised as a Protestant, he was a freethinker in religious matters.[B 1] From 1866 to 1869, he attended the "Hogere Burger School" in Arnhem, a new type of public high school recently established by Johan Rudolph Thorbecke. His results in school were exemplary; not only did he excel in the physical sciences and mathematics, but also in English, French, and German. In 1870, he passed the exams in classical languages which were then required for admission to University.[B 2]
Lorentz studied physics and mathematics at the Leiden University, where he was strongly influenced by the teaching of astronomy professor Frederik Kaiser; it was his influence that led him to become a physicist. After earning a bachelor's degree, he returned to Arnhem in 1871 to teach night school classes in mathematics, but he continued his studies in Leiden in addition to his teaching position. In 1875, Lorentz earned a doctoral degree under Pieter Rijke on a thesis entitled "Over de theorie der terugkaatsing en breking van het licht" (On the theory of reflection and refraction of light), in which he refined the electromagnetic theory of James Clerk Maxwell.[B 2][4]
Career
Professor in Leiden
On 17 November 1877, only 24 years of age, Hendrik Antoon Lorentz was appointed to the newly established chair in theoretical physics at the University of Leiden. The position had initially been offered to Johan van der Waals, but he accepted a position at the Universiteit van Amsterdam.[B 2] On 25 January 1878, Lorentz delivered his inaugural lecture on "De moleculaire theoriën in de natuurkunde" (The molecular theories in physics). In 1881, he became a member of the Royal Netherlands Academy of Arts and Sciences.[5]
During the first twenty years in Leiden, Lorentz was primarily interested in the electromagnetic theory of electricity, magnetism, and light. After that, he extended his research to a much wider area while still focusing on theoretical physics. Lorentz made significant contributions to fields ranging from hydrodynamics to general relativity. His most important contributions were in the area of electromagnetism, the electron theory, and relativity.[B 2]
Lorentz theorized that atoms might consist of charged particles and suggested that the oscillations of these charged particles were the source of light. When a colleague and former student of Lorentz's, Pieter Zeeman, discovered the Zeeman effect in 1896, Lorentz supplied its theoretical interpretation. The experimental and theoretical work was honored with the Nobel prize in physics in 1902. Lorentz' name is now associated with the Lorentz-Lorenz formula, the Lorentz force, the Lorentzian distribution, and the Lorentz transformation.
Electrodynamics and relativity
In 1892 and 1895, Lorentz worked on describing electromagnetic phenomena (the propagation of light) in reference frames that move relative to the postulated luminiferous aether.[6][7] He discovered that the transition from one to another reference frame could be simplified by using a new time variable that he called local time and which depended on universal time and the location under consideration. Although Lorentz did not give a detailed interpretation of the physical significance of local time, with it, he could explain the aberration of light and the result of the Fizeau experiment. In 1900 and 1904, Henri Poincaré called local time Lorentz's "most ingenious idea" and illustrated it by showing that clocks in moving frames are synchronized by exchanging light signals that are assumed to travel at the same speed against and with the motion of the frame[8][9] (see Einstein synchronisation and Relativity of simultaneity). In 1892, with the attempt to explain the Michelson-Morley experiment, Lorentz also proposed that moving bodies contract in the direction of motion (see length contraction; George FitzGerald had already arrived at this conclusion in 1889).[10]
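In modern notation (which differs from Lorentz's own symbols), the "local time" described above and the FitzGerald–Lorentz contraction hypothesis can be written as:

```latex
% Lorentz's first-order local time, for a frame moving with speed v along x:
t' = t - \frac{vx}{c^{2}}
% Contraction of a rod of rest length L_0 in the direction of motion:
L = L_{0}\,\sqrt{1 - \frac{v^{2}}{c^{2}}}
```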
In 1899 and again in 1904, Lorentz added time dilation to his transformations and published what Poincaré in 1905 named Lorentz transformations.[11][12] It was apparently unknown to Lorentz that Joseph Larmor had used identical transformations to describe orbiting electrons in 1897. Larmor's and Lorentz's equations look somewhat dissimilar, but they are algebraically equivalent to those presented by Poincaré and Einstein in 1905.[B 3] Lorentz's 1904 paper includes the covariant formulation of electrodynamics, in which electrodynamic phenomena in different reference frames are described by identical equations with well defined transformation properties. The paper clearly recognizes the significance of this formulation, namely that the outcomes of electrodynamic experiments do not depend on the relative motion of the reference frame. The 1904 paper also includes a detailed discussion of the increase of the inertial mass of rapidly moving objects, an attempt to retain the Newtonian form of the equations of motion for the electron.
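In their modern form, the transformations that Poincaré named after Lorentz read, for relative velocity v along the shared x-axis:

```latex
x' = \gamma\,(x - vt), \qquad
y' = y, \qquad
z' = z, \qquad
t' = \gamma\left(t - \frac{vx}{c^{2}}\right),
\qquad \text{where } \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
```

Time dilation enters through the factor γ multiplying the time coordinate; for v ≪ c the equations reduce to the Galilean transformation.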
Lorentz and special relativity
In 1905, Einstein would use many of the concepts, mathematical tools and results Lorentz discussed to write his paper entitled "On the Electrodynamics of Moving Bodies",[13] known today as the theory of special relativity. Because Lorentz laid the fundamentals for the work by Einstein, this theory was originally called the Lorentz-Einstein theory.[B 4]
In 1906, Lorentz's electron theory received a full-fledged treatment in his lectures at Columbia University, published under the title The Theory of Electrons.
The increase of mass was the first prediction of Lorentz and Einstein to be tested, but some experiments by Kaufmann appeared to show a slightly different mass increase; this led Lorentz to the famous remark that he was "au bout de mon latin" ("at the end of my [knowledge of] Latin", i.e. at his wit's end).[14] The confirmation of his prediction had to wait until 1908 and later (see Kaufmann–Bucherer–Neumann experiments).
Lorentz published a series of papers dealing with what he called "Einstein's principle of relativity", for instance in 1909,[15] 1910,[16][17] and 1914.[18] In his 1906 lectures, published with additions in 1909 in the book The Theory of Electrons (updated in 1915), he spoke affirmatively of Einstein's theory:[15]
It will be clear by what has been said that the impressions received by the two observers A0 and A would be alike in all respects. It would be impossible to decide which of them moves or stands still with respect to the ether, and there would be no reason for preferring the times and lengths measured by the one to those determined by the other, nor for saying that either of them is in possession of the "true" times or the "true" lengths. This is a point which Einstein has laid particular stress on, in a theory in which he starts from what he calls the principle of relativity, [...] I cannot speak here of the many highly interesting applications which Einstein has made of this principle. His results concerning electromagnetic and optical phenomena ... agree in the main with those which we have obtained in the preceding pages, the chief difference being that Einstein simply postulates what we have deduced, with some difficulty and not altogether satisfactorily, from the fundamental equations of the electromagnetic field. By doing so, he may certainly take credit for making us see in the negative result of experiments like those of Michelson, Rayleigh and Brace, not a fortuitous compensation of opposing effects, but the manifestation of a general and fundamental principle. [...] It would be unjust not to add that, besides the fascinating boldness of its starting point, Einstein's theory has another marked advantage over mine. Whereas I have not been able to obtain for the equations referred to moving axes exactly the same form as for those which apply to a stationary system, Einstein has accomplished this by means of a system of new variables slightly different from those which I have introduced.
Though Lorentz still maintained that there is an (undetectable) aether in which resting clocks indicate the "true time":
1909: Yet, I think, something may also be claimed in favour of the form in which I have presented the theory. I cannot but regard the ether, which can be the seat of an electromagnetic field with its energy and its vibrations, as endowed with a certain degree of substantiality, however different it may be from all ordinary matter.[15]
1910: Provided that there is an aether, then under all systems x, y, z, t, one is preferred by the fact, that the coordinate axes as well as the clocks are resting in the aether. If one connects with this the idea (which I would abandon only reluctantly) that space and time are completely different things, and that there is a "true time" (simultaneity thus would be independent of the location, in agreement with the circumstance that we can have the idea of infinitely great velocities), then it can be easily seen that this true time should be indicated by clocks at rest in the aether. However, if the relativity principle had general validity in nature, one wouldn't be in the position to determine, whether the reference system just used is the preferred one. Then one comes to the same results, as if one (following Einstein and Minkowski) deny the existence of the aether and of true time, and to see all reference systems as equally valid. Which of these two ways of thinking one is following, can surely be left to the individual.[16]
Lorentz also gave credit to Poincaré's contributions to relativity.[19]
Indeed, for some of the physical quantities which enter the formulas, I did not indicate the transformation which suits best. That was done by Poincaré and then by Mr. Einstein and Minkowski [...] I did not succeed in obtaining the exact invariance of the equations [...] Poincaré, on the contrary, obtained a perfect invariance of the equations of electrodynamics, and he formulated the "postulate of relativity", terms which he was the first to employ. [...] Let us add that by correcting the imperfections of my work he never reproached me for them.
Lorentz and general relativity
Lorentz was one of few scientists who supported Einstein's search for general relativity from the beginning – he wrote several research papers and discussed with Einstein personally and by letter.[B 5] For instance, he attempted to combine Einstein's formalism with Hamilton's principle (1915),[20] and to reformulate it in a coordinate-free way (1916).[21][B 6] Lorentz wrote in 1919:[22]
The total eclipse of the sun of May 29, resulted in a striking confirmation of the new theory of the universal attractive power of gravitation developed by Albert Einstein, and thus reinforced the conviction that the defining of this theory is one of the most important steps ever taken in the domain of natural science.
Lorentz and quantum mechanics
Lorentz gave a series of lectures in the Fall of 1926 at Cornell University on the new quantum mechanics; in these he presented Erwin Schrödinger's wave mechanics.[23]
Assessments
Einstein wrote of Lorentz:
1928: The enormous significance of his work consisted therein, that it forms the basis for the theory of atoms and for the general and special theories of relativity. The special theory was a more detailed expose of those concepts which are found in Lorentz's research of 1895.[B 7]
1953: For me personally he meant more than all the others I have met on my life's journey.[B 8]
Poincaré (1902) said of Lorentz's theory of electrodynamics:[24]
The most satisfactory theory is that of Lorentz; it is unquestionably the theory that best explains the known facts, the one that throws into relief the greatest number of known relations ... it is due to Lorentz that the results of Fizeau on the optics of moving bodies, the laws of normal and abnormal dispersion and of absorption are connected with each other ... Look at the ease with which the new Zeeman phenomenon found its place, and even aided the classification of Faraday's magnetic rotation, which had defied all Maxwell's efforts.
Paul Langevin (1911) said of Lorentz:[B 9]
It will be Lorentz's main claim to fame that he demonstrated that the fundamental equations of electromagnetism also allow of a group of transformations that enables them to resume the same form when a transition is made from one reference system to another. This group differs fundamentally from the above group as regards transformations of space and time.
Lorentz and Emil Wiechert had an interesting correspondence on the topics of electromagnetism and the theory of relativity, and Lorentz explained his ideas in letters to Wiechert.[B 10]
Lorentz was chairman of the first Solvay Conference held in Brussels in the autumn of 1911. Shortly after the conference, Poincaré wrote an essay on quantum physics which gives an indication of Lorentz's status at the time:[25]
... at every moment [the twenty physicists from different countries] could be heard talking of the [quantum mechanics] which they contrasted with the old mechanics. Now what was the old mechanics? Was it that of Newton, the one which still reigned uncontested at the close of the nineteenth century? No, it was the mechanics of Lorentz, the one dealing with the principle of relativity; the one which, hardly five years ago, seemed to be the height of boldness.
Change of priorities
In 1910, Lorentz decided to reorganize his life. His teaching and management duties at Leiden University were taking up too much of his time, leaving him little time for research. In 1912, he resigned from his chair of theoretical physics to become curator of the "Physics Cabinet" at Teylers Museum in Haarlem. He remained connected to Leiden University as an external professor, and his "Monday morning lectures" on new developments in theoretical physics soon became legendary.[B 2]
Lorentz initially asked Einstein to succeed him as professor of theoretical physics at Leiden. However, Einstein could not accept because he had just taken a position at ETH Zurich. Einstein had no regrets in this matter, since the prospect of having to fill Lorentz's shoes made him shiver. Instead, Lorentz appointed Paul Ehrenfest as his successor in the chair of theoretical physics at Leiden University; Ehrenfest would go on to found the Institute for Theoretical Physics, which would become known as the Lorentz Institute.[B 2]
Civil work
After World War I, Lorentz was one of the driving forces behind the founding of the "Wetenschappelijke Commissie van Advies en Onderzoek in het Belang van Volkswelvaart en Weerbaarheid", a committee which was to harness the scientific potential united in the Royal Netherlands Academy of Arts and Sciences (KNAW) for solving civil problems, such as the food shortage, which had resulted from the war. Lorentz was appointed chair of the committee. However, despite the best efforts of many of the participants, the committee achieved little success. The one exception was that it ultimately resulted in the founding of TNO, the Netherlands Organisation for Applied Scientific Research.[B 2]
Lorentz was also asked by the Dutch government to chair a committee to calculate some of the effects of the proposed Afsluitdijk (Enclosure Dam) flood control dam on water levels in the Waddenzee. Hydraulic engineering was mainly an empirical science at that time, but the disturbance of the tidal flow caused by the Afsluitdijk was so unprecedented that the empirical rules could not be trusted. Originally Lorentz was only supposed to have a coordinating role in the committee, but it quickly became apparent that he was the only physicist with any fundamental traction on the problem. In the period from 1918 to 1926, Lorentz invested a large portion of his time in the problem. He proposed to start from the basic hydrodynamic equations of motion and solve the problem numerically. This was feasible for a "human computer" because of the quasi-one-dimensional nature of the water flow in the Waddenzee. The Afsluitdijk was completed in 1932, and the predictions of Lorentz and his committee turned out to be remarkably accurate.[B 11][B 2] One of the two sets of locks in the Afsluitdijk was named after him.
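To illustrate the kind of calculation involved, the sketch below steps a one-dimensional linearized shallow-water model forward with a standard forward-backward finite-difference scheme. This is a minimal modern illustration only: the depth, channel length, grid, and tidal forcing are hypothetical placeholders, not the committee's actual data, and the scheme is a textbook method rather than Lorentz's own procedure.

```python
import numpy as np

# Hypothetical parameters (illustrative only, not the Waddenzee data):
g = 9.81            # gravitational acceleration (m/s^2)
H = 20.0            # assumed mean water depth (m)
L = 100_000.0       # channel length (m)
N = 200             # number of grid cells
dx = L / N
dt = 0.5 * dx / np.sqrt(g * H)   # time step, half the CFL limit for stability

eta = np.zeros(N)        # surface elevation at cell centres (m)
u = np.zeros(N + 1)      # depth-averaged velocity at cell faces (staggered grid)

def step(eta, u, t):
    """One forward-backward step of the linearized shallow-water equations."""
    # Momentum equation: du/dt = -g * d(eta)/dx  (using the old elevations)
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
    u[0] = u[-1] = 0.0                               # closed ends (simplification)
    # Continuity equation: d(eta)/dt = -H * du/dx  (using the new velocities)
    eta[:] -= dt * H * (u[1:] - u[:-1]) / dx
    # Impose a ~12.4 h, 1 m amplitude tide at the seaward boundary
    eta[0] = 1.0 * np.sin(2 * np.pi * t / 44_700.0)
    return eta, u

t = 0.0
for _ in range(2000):
    eta, u = step(eta, u, t)
    t += dt
```

Even this crude model conveys why the problem was tractable by hand: one spatial dimension, linear equations, and a periodic forcing, so a "human computer" could tabulate the tidal wave cell by cell and time step by time step.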
Family life
In 1881, Lorentz married Aletta Catharina Kaiser. Her father was J.W. Kaiser, a professor at the Academy of Fine Arts. He was the Director of the museum which later became the well-known Rijksmuseum (National Gallery). He also was the designer of the first postage stamps of The Netherlands.
There were two daughters and one son from this marriage.
Dr. Geertruida Luberta Lorentz, the eldest daughter, was a physicist. She married Professor W.J. de Haas, who was the Director of the Cryogenic Laboratory at the University of Leiden.[26]
Death [ edit ]
In January 1928, Lorentz became seriously ill, and died shortly after on February 4.[B 2] The respect in which he was held in the Netherlands is apparent from Owen Willans Richardson's description of his funeral:
The funeral took place at Haarlem at noon on Friday, February 10. At the stroke of twelve the State telegraph and telephone services of Holland were suspended for three minutes as a revered tribute to the greatest man the Netherlands has produced in our time. It was attended by many colleagues and distinguished physicists from foreign countries. The President, Sir Ernest Rutherford, represented the Royal Society and made an appreciative oration by the graveside. O. W. Richardson[B 12]
Unique 1928 film footage of the funeral procession has been digitized on YouTube.[B 13] It shows a lead carriage followed by ten mourners, then a carriage with the coffin, followed in turn by at least four more carriages, passing a crowd at the Grote Markt, Haarlem, from the Zijlstraat to the Smedestraat, and then back through the Grote Houtstraat towards the Barteljorisstraat, on the way to the "Algemene Begraafplaats" at the Kleverlaan (northern Haarlem cemetery). Einstein gave a eulogy at a memorial service at Leiden University.
Legacy [ edit ]
Lorentz is considered one of the prime representatives of the "Second Dutch Golden Age", a period of several decades surrounding 1900 in which the natural sciences flourished in the Netherlands.[B 2]
Richardson describes Lorentz as:
[A] man of remarkable intellectual powers ... Although steeped in his own investigation of the moment, he always seemed to have in his immediate grasp its ramifications into every corner of the universe. ... The singular clearness of his writings provides a striking reflection of his wonderful powers in this respect. ... He possessed and successfully employed the mental vivacity which is necessary to follow the interplay of discussion, the insight which is required to extract those statements which illuminate the real difficulties, and the wisdom to lead the discussion among fruitful channels, and he did this so skillfully that the process was hardly perceptible.[B 12]
M. J. Klein (1967) wrote of Lorentz's reputation in the 1920s:
For many years physicists had always been eager "to hear what Lorentz will say about it" when a new theory was advanced, and, even at seventy-two, he did not disappoint them.[B 14]
In addition to the Nobel prize, Lorentz received a great many honours for his outstanding work. He was elected a Foreign Member of the Royal Society (ForMemRS) in 1905.[1] The Society awarded him their Rumford Medal in 1908 and their Copley Medal in 1918.
See also [ edit ]
References [ edit ]
Primary sources [ edit ]
Many papers by Lorentz (mostly in English) are available for online viewing in the Proceedings of the Royal Netherlands Academy of Arts and Sciences, Amsterdam.
Lorentz, Hendrik Antoon (1900), "Considerations on Gravitation", Proc. Acad. Science Amsterdam , 2 : 559–574
Lorentz, Hendrik Antoon (1927–1931), Lectures on Theoretical Physics (vol. I–III), New York, NY: Macmillan & Co. (Vol. I online)
In a typical year, the number of children killed by fathers is more or less equal to the number of children killed by mothers.
In 2004, 36 children were killed by a parent, 17 parents were killed by their child, 11 victims were killed by a sibling, and there were 22 other family-related homicides.
The man, who died after jumping, led police on a frantic chase yesterday afternoon to a bridge on Don Mills Rd., over Highway 401. They were called there by the girl's mother at 4:30 p.m. after she found a suicide note at her nearby apartment.
It's a miracle: A 5-year-old girl taken by her father, who was bent on killing her and himself, was thrown 15 metres from an overpass into oncoming traffic and survived without a broken bone.
A Toronto father pronounced himself "sick" after the salsa-dancing mother of his toddler daughter was given a three-year sentence in connection with the child's death by abandonment.
"I'm sick to my stomach," Mark Yetman told reporters outside a Toronto courthouse on Monday. "You go out and beat up a guy on the street corner, you get five or 10 years. You kill my kid, it's totally fine."
Clara DaSilva, his former girlfriend, went salsa dancing in September 2002. She left her two-year-old daughter Adrianna alone in a sweltering apartment for 33 hours.
The temperature in the apartment rose to at least 35 degrees Celsius, and she had no food or water. DaSilva had lied to people, saying someone was at her apartment taking care of Adrianna.
Infanticide -Criminal Code of Canada
Mothers Getting Away With Murder
Canadian Children's Rights Council Position Statement on Eliminating the Criminal Code of Canada Offence of Infanticide
June 1, 2009
It is the position of the Canadian Children's Rights Council that a baby's life is worth as much as that of an adult and that the Criminal Code offence of Infanticide should be eliminated from the Criminal Code of Canada.
Most of the relevant social issues have changed over time, and the situation for new mothers has greatly improved, including, but not limited to: the fading of the historic stigma of single motherhood out of wedlock; the implementation of universal health care; the availability of abortion for unwilling mothers; and modern birth control methods. These changes, together with the equality of women, justify eliminating both the special consideration for women and the maximum five-year sentence for women who murder their babies. The same homicide charges should apply whether the victim is a baby, a child, or an adult.
Historically, women were treated as inferior and less responsible for their own actions than men. The infanticide offence discriminates based on sex, a violation of the Canadian Charter of Rights and Freedoms.
The Criminal Code of Canada includes a provision for a defence of diminished responsibility based on mental illness for all people charged with criminal offences. Postpartum depression, also called postnatal depression, suffered by some women, and even some men, after childbirth, is an issue at trial and a defence.
When the police and Crown attorneys charge a mother with infanticide, there is an element of predetermination of her mental state at the time of the murder. No equivalent charge exists with regard to the murder of an adult; if it did, it would be murder by a mentally ill person. Justice is better served by leaving such a judgement up to the court, which considers all of the expert evidence in an impartial way, without influence or predetermination.
This position is consistent with that of the United Nations.
From the UNICEF U.N. Digest 2 Children and Violence
Infanticide and homicide of children
"An analysis of 285 homicides committed in the United Kingdom from 1989 to 1991 involving victims under the age of 18 years found just 13 per cent had been killed by strangers; 60 per cent were killed by parents. Similar results have been reported in the United States and in Australia. In countries where homicide statistics are analysed according to age of victim, infants and very young children are often found to be the age group most at risk. In the United Kingdom, under-one-year-olds are four times as likely to be victims of homicide as any other age group, almost all killed by their parents.
Infanticide remains defined in many legal systems as a lesser crime than murder, although it involves the intentional killing of a baby. The rationale is to provide a special defence for mothers suffering psychological trauma as a result of birth. However, in many of the same legal systems, there are generally recognized defences of diminished responsibility to charges of murder which could be applied in special cases. It therefore seems clear that the roots of the special status of this crime lie in regarding an infant's life as of less worth than that of an older person.
Contrary to the usual assumption that infanticide is an Eastern rather than a Western problem, Lloyd deMause in his classic History of Childhood documents that infanticide of legitimate as well as illegitimate children
was a regular practice of antiquity, that the killing of legitimate children was only slowly reduced during the Middle Ages (hence the grossly unequal ratios of men to women in many societies) and that illegitimate children continued to be regularly killed right up into the nineteenth century. . . . Even though Thomas Coram opened his foundling hospital in 1741 because he couldn't bear to see the dying babies lying in the gutters and rotting in the dung-heaps of London, by the 1890s dead babies were still a common sight in London streets . . .
Infanticide has been practised as a brutal method of family planning..."
The digest from the U.N. also states:
"In the case of many categories of violence to children, greater sensitivity is leading to greater visibility, a prelude, it is hoped, to effective prevention. Available research from different countries suggests that, at least outside active war zones, children are most at risk of violence, including sexual violence, within their own homes and from the adults closest to them. But generally, attempts to document the overall extent of violence to children are in their infancy, a reflection of the low status of children and the low political priority accorded to them, and perhaps more immediately a reflection of the individual and collective guilt of adult perpetrators of violence to children..."
"...It is a sad reflection on human civilizations that the smallest and most vulnerable of people should have had to wait until last for consistent social and legal recognition of their equal right to physical and personal integrity, to protection from all forms of interpersonal violence. Only a handful of countries have as yet adopted laws to give children the same protection that adults enjoy from physical assault. In most states violent punishments, including beatings with tools, remain common and sanctioned by the law.
Nevertheless, there is now growing recognition that asserting children's right to protection from routine physical violence in the home and in institutions is as vital to improving their status as it has been to women's status to assert their equal right to protection from routine violence in the home and the community.
Leading this trend is the Committee on the Rights of the Child, the international monitoring body for the Convention, which has consistently challenged laws that permit any physical punishment of children, recommending clear legal reform and educational programmes."
Former George W. Bush administration aide Tony Fratto said Wednesday that if Republicans allow Alabama Senate candidate Roy Moore to serve despite the serious sexual assault allegations against him, then the GOP is "over."
"We can scream our heads off all we want, Alabama voters will do whatever they do," Fratto tweeted. "The real test for the GOP is whether Moore is allowed to serve, should he win. Fail that test and the party's over."
Fratto, formerly the deputy assistant and deputy press secretary for Bush, is a longstanding critic of President Trump.
Trump all but gave his endorsement of Moore on Tuesday despite the stories of nine women accusing him of misconduct.
"We don't need a liberal person in there, a Democrat,” Trump told reporters on the South Lawn of the White House.
Moore has denied all the allegations and has indicated he has no intention to step down ahead of the Dec. 12 special election for Attorney General Jeff Sessions's former seat.
The former aide's comments come after a new poll found that a majority of voters surveyed believe Moore should be expelled from the Senate if he is elected in December.
MIRPUR: Sri Lanka's T20 captain and strike bowler Lasith Malinga has hinted that he might call time on his international career after the ICC World T20 in India, as he is finding it difficult to manage a grave knee injury.
At the post-match media conference, Malinga was asked if he would like to quit after the upcoming edition of the World T20, just as Mahela Jayawardene and Kumar Sangakkara did after winning the last edition. He replied: "Might be."
The Lankan slinger elaborated that his current injury is so bad that complete recuperation might take close to two years, which would effectively end his career now that he is nearly 33.
"I have played 12 years for the national team. I am now 32 and will soon be 33. I have had a bad injury and if at this stage I have to take one or one and half years of rest, I would rather have to finish my career. If I need to play tough cricket for my country, I don't think I can then totally recover from the injury. That's why I am saying I don't know how many months or years are left in me. Whatever little I play, I want to serve the national team and my IPL team (Mumbai Indians) well," Malinga said.
Malinga is ready to play through pain for the national team, as the defending champions need him in the big event.
"This is not the right time to rest. We have the T20 World Cup and I am the most experienced bowler for Sri Lanka in this format. Whatever painkillers and injections that needs to be used, I will use as this is the end of my career. If I can do something for the team, I will do in these last few months may be. I am still only 60-70 per cent of where I want to be. But I am happy with the result against UAE though," said Malinga, who led from the front with a four-wicket haul in a low-scoring game.
'I feel like no-one': Girl, 12, dies in father's arms from mystery condition after being tormented by school bullies
Parents discover trove of heart-breaking letters revealing their daughter's suffering
Father hands police the names of 13 children accused of tormenting girl
A distraught father has told how his 12-year-old daughter collapsed and died in his arms after being tormented by bullies at school.
Holly Stuckey’s sudden death has not been explained by doctors and is being investigated by police and education officials.
Holly, who was an only child, was described as innocent, quiet and timid by her grieving family.
Bullied: Father Clive Stuckey holds his daughter Holly
Her father Clive, 42, said: ‘People made fun of her because she did not know much about sex education.
‘She was a beautifully innocent young girl but the kids turned on her and started to call her a lesbian because she didn’t know as much as them.
‘In the weeks before Holly died she wouldn’t go anywhere on her own.
‘She wanted me to take her everywhere.’
Mr Stuckey said he found letters in her bedroom which described how she was being bullied by other children – which he believes put a strain on her heart.
Holly had only just started Year 8 at secondary school when she complained at home of chest pains and being unable to breathe.
Her family called an ambulance but moments later she stopped breathing and could not be revived.
Note: Holly Stuckey's letter of despair
Mr Stuckey, who is trained in first aid and works as a carer for the elderly, said yesterday: ‘She died in my arms – it was the worst thing any parents could go through.
‘It wasn’t until afterwards that we discovered the torment she’d been going through.
‘Bullies had been putting her through hell.’
Mr Stuckey says he found letters in Holly’s bedroom which described her experiences at Maesteg Comprehensive School, near Bridgend in South Wales.
A note written in pink biro read: ‘I hate you for what you have done to me. I feel like no one.’
He added: ‘Holly was withdrawn when she came home from school – she only had three friends, the rest turned on her.
‘But her confidence would come rushing back when she came into the house. She was always singing and dancing around, or playing on her Wii.’
Mr Stuckey has given the names of 13 children to police who are investigating his daughter’s death for the coroner.
Although Holly suffered from asthma, the condition was thought to be under control, and an initial post-mortem examination was inconclusive.
An inquest has been opened but her parents have been told it could be three months before they know what caused her death.
Mr Stuckey, who lives in Maesteg with his wife Lee, 47, said: ‘We just don’t know but it could have been the emotional strain of what she was going through which brought on a heart attack.
‘I have been contacted by several other parents who have told me that their children are being bullied.
‘I want other parents to stand up for their children. I want to protect them.’
The Stuckeys asked mourners to wear pink at Holly’s funeral at the Church of St Michael and All Angels, in Maesteg.
Distraught: Holly's parents have handed the names of 13 of her alleged tormentors to police
Her body was carried in a Hannah Montana coffin to the sound of Whitney Houston’s song I Will Always Love You.
During the service her parents paid tribute to their daughter, saying: ‘You brought out the sunlight in our lives. The angels will look after you now.’
A school spokesman said: ‘We employ a zero-tolerance approach to bullying and offer quick action to support victims, deal with bullies and resolve any conflicts.
‘If any pupil is found to have been involved in bullying, their parents are informed and are asked to attend a meeting along with their child, where the school’s position is made very clear.
‘Appropriate action is then taken as required.’
Inquiry: Staff at Maesteg Comprehensive School, where Holly was a pupil, said there was a 'zero-tolerance' approach to bullying
If you were planning on attending noodle mass (Nudelmesse) in Templin, Germany this year, you might have to ask a local for directions. A Brandenburg court has ruled that the Church of the Flying Spaghetti Monster (FSM) is no longer permitted to display placards advertising its Nudelmesse services at any of the town’s four entrances, where they once hung alongside similar signs for the local Lutheran, Catholic, and Evangelical churches. This is a major setback for the young church’s ongoing fight—in the birthplace of Protestant Reformation, no less—for public legitimacy.
The Church of the Flying Spaghetti Monster began in 2005 with a satiric open letter written by then-25-year-old Bobby Henderson in response to the Kansas Board of Education’s decision to teach the theory of intelligent design alongside evolution in public schools. Henderson argued that schools ought also to devote class time to teaching the theory that a flying spaghetti monster had created the universe. This, he reasoned, was as probable a version of intelligent design as any other. The letter inspired a biblical flood’s worth of memes and launched a religious group that now claims a global membership. As this so-called Pastafarianism has grown, some branches of the FSM church have started demanding the rights and privileges enjoyed by more established religious organizations. What started as a fake religion is now angling to be an authentic one.
One German “Pastafarian” on his American counterparts: “It is only about partying and pasta recipes. There is only a marginal social concern there.”
In his article “Fake Religion: Ordeals of Authenticity in the Study of Religion,” David Chidester argues that the concept of religion has always been leveraged as the authentic alternative to chicanery and superstition, and accusations of fake religion have long been a strategy to undercut the claims of religious upstarts. Mohammed, for instance, was labeled a fraudster, and more recently Joseph Smith, a charlatan. Since the 1960s, however, avowedly fake religions designed to repudiate conventional notions of religious authenticity have thrived.
Invented religions tend to embrace irony over piety and satire over sincerity, preferring to critique existing institutions rather than to displace—or, worse—to join them. In the 1990s and 2000s, such groups exploded across the internet. Virtual religions like Discordianism and the First Church of the Last Laugh challenged “any preconceptions we might have about religious authenticity.” But when an invented religion exits the virtual space of play and enters the public sphere, its own conceptions of religious authenticity inevitably clash with those of the state.
When the Brandenburg court decided on August 2nd, 2017 to deny the FSM church recognition as a religious group, it did so because “the critique of beliefs expressed in it is not a comprehensive system of thought.” For the court, Pastafarianism’s satiric origins, along with the fact that its iconography and rituals are so clearly intended to hold an absurdist mirror to Christianity, render it inauthentic. This is a hard charge to deny. At the same time, the court implicitly devalued the humor and play at the heart of Pastafarianism as valid elements of authentic religion, privileging instead history, solemnity, and intellectual coherence.
Spearheading the church’s quixotic fight with the German judiciary has been Rüdiger Weida (aka Brother Spaghettus), who, upon hearing the ruling, accused the court of judging the FSM church not by its German branch (or is it tentacle?) alone, which Weida insists is a secular humanist organization merely adorned in cookware and eye patches, but by its American counterpart, which he has described as “relatively ridiculous. It is only about partying and pasta recipes. There is only a marginal social concern there.” Perhaps Brother Spaghettus’s frustration with his colander-clad brethren on the other side of the Atlantic will soon propel him into the role of a Pastafarian Luther. This might even help him when he takes his case next to the European Court of Justice, since nothing is more authentically religious than a good old-fashioned schism.
Watch filmmaker Michael John Evans as he follows Sean Walling (owner of Soulcraft bikes) through the artful process of bringing a bike to life in his Petaluma, California shop.
Evans sets out to visually portray "the zone" which one enters when their craft is honed. Walling, owner of Soulcraft, builds top-notch custom steel bicycle frames. This short film documents Sean's fabrication methods: a well-choreographed dance of experience and muscle memory producing a seemingly effortless ode to process.
From Steel invites the viewer into Sean's machine shop for an up-close and personal look at the work that results in yet another awesome Soulcraft.
Musical score provided by the internationally praised duo, the Mattson 2. Courtesy of Galaxia Records.
Directed by: Michael John Evans: michaeljohnevans.com
Starring: Sean Walling: soulcraftbikes.com
Music by: The Mattson 2: mattson2.com
Carter's lawyer now trying to get statements she made to cops thrown out
But prosecution said Carter engaged in a 'systematic campaign of coercion' that targeted Roy's insecurities
Her attorneys had argued the texts were free speech protected by the First Amendment and didn't cause Roy to kill himself
Michelle Carter, then 17, sent Conrad Roy III, 18, text messages instructing and encouraging him to take his own life in 2014, prosecutors say
A teenage girl who sent her boyfriend text messages encouraging him to kill himself asked a judge Friday to keep statements she made to police out of her involuntary manslaughter trial.
The request was among almost two dozen motions filed by lawyers for Michelle Carter, now 19, in Taunton Juvenile Court, The Boston Globe reported.
The Plainville woman is charged with involuntary manslaughter in the 2014 death of 18-year-old Conrad Roy III, of Mattapoisett.
The judge did not rule on the motions but said the trial could start in December
Roy (pictured) hadn't seen Carter in more than a year when he died, even though they lived only about 50 miles apart in Massachusetts, Carter in Plainville, and Roy in Mattapoisett
At the hearing, the judge did not rule on the motions but said the trial could start in December. He plans to hear the motion to suppress Carter's interview on October 14.
Roy's body was found in his pickup in Fairhaven on July 13, 2014. He died of carbon monoxide poisoning and police found a gasoline-operated water pump in the back seat. Prosecutors say Carter and Roy communicated as he sat in the truck.
'I thought you wanted to do this. The time is right and you're ready, you just need to do it!' Carter wrote in one message.
Carter's lawyer, Joseph Cataldo, has said that the texts are protected free speech and that Roy was depressed and had previously tried to take his own life.
The state's highest court ruled earlier this month that a grand jury had probable cause to indict Carter based on evidence suggesting she engaged in a 'systematic campaign of coercion.'
Carter was 17 at the time, but is charged as a youthful offender, making her subject to adult sentences.
Carter's lawyer had argued that her texts were free speech protected by the First Amendment and didn't cause Roy to kill himself.
But the court said Carter engaged in a 'systematic campaign of coercion' that targeted Roy's insecurities and that her instruction to 'get back in' his truck in the final moments of his life was a 'direct, causal link' to his death.
Michelle Carter (pictured right in August last year) was 17 when she told Conrad Roy III (left), then 18, to 'get back in' a truck filled with carbon monoxide fumes, prosecutors say
Roy and Carter (pictured) had met in Florida two years earlier while visiting relatives. Their relationship largely consisted of text messages and emails
'In sum, we conclude that there was probable cause to show that the coercive quality of the defendant's verbal conduct overwhelmed whatever willpower the eighteen year old victim had to cope with his depression, and that but for the defendant's admonishments, pressure, and instructions, the victim would not have gotten back into the truck and poisoned himself to death,' Justice Robert Cordy wrote for the court in the unanimous ruling.
The case drew national attention after transcripts of text messages Carter sent to Roy according to the indictment were released publicly, showing her urging him to follow through on his plan to kill himself and chastising him when he expressed doubts.
'I thought you wanted to do this. The time is right and you're ready, you just need to do it!' Carter wrote in one message according to prosecutors.
'You can't think about it. You just have to do it. You said you were gonna do it. Like I don't get why you aren't,' authorities say she wrote in another message.
Carter and Roy had met in Florida two years earlier while visiting relatives. Their relationship largely consisted of text messages and emails. They hadn't seen each other in more than a year when Roy died, even though they lived only about 50 miles apart in Massachusetts, Carter in Plainville, and Roy in Mattapoisett.
Roy's grandmother Janice Roy said the family is happy Carter can be put on trial.
'He was very vulnerable at that stage,' she said.
'IT'S NOW OR NEVER': MICHELLE CARTER'S MESSAGES TO CONRAD ROY
Prosecutors say Michelle Carter sent her boyfriend, Conrad Roy III, dozens of text messages urging him to take his own life. Carter's lawyer argues that she tried repeatedly to talk him out of it and only began to support the plan when it became clear he would not change his mind. Here are excerpts from their text exchanges, with messages cited by her lawyer first, followed by those cited by prosecutors:
June 29, 2014:
Carter: 'But the mental hospital would help you. I know you don't think it would but I'm telling you, if you give them a chance, they can save your life'
Carter: 'Part of me wants you to try something and fail just so you can get help'
Roy: 'It doesn't help. trust me'
Carter: 'So what are you gonna do then? Keep being all talk and no action and everyday go thru saying how badly you wanna kill yourself? Or are you gonna try to get better?'
Roy: 'I can't get better I already made my decision.'
July 7, 2014:
Roy: 'if you were in my position. honestly what would you do'
Carter: 'I would get help. That's just me tho. When I have a serious problem like that, my first instinct is to get help because I know I can't do it on my own'
Roy: 'Well it's too late I already gave up.'
Between July 6, 2014 and July 12, 2014:
Carter: 'Always smile, and yeah, you have to just do it. You have everything you need. There is no way you can fail. Tonight is the night. It's now or never.'
Carter: '(D)on't be scared. You already made this decision and if you don't do it tonight you're gonna be thinking about it all the time and stuff all the rest of your life and be miserable. You're finally going to be happy in heaven. No more pain. No more bad thoughts and worries. You'll be free.'
Carter: 'I just want to make sure you're being serious. Like I know you are, but I don't know. You always say you're gonna do it, but you never do. I just want to make sure tonight is the real thing.'
Carter: 'When are you gonna do it? Stop ignoring the question'
Carter: 'You can't keep living this way. You just need to do it like you did the last time and not think about it and just do it, babe. You can't keep doing this every day.'
Roy: 'I do want to but I'm like freaking for my family I guess. I don't know.'
Carter: 'Conrad, I told you I'll take care of them. Everyone will take care of them to make sure they won't be alone and people will help them get through it. We talked about this and they will be okay and accept it. People who commit suicide don't think this much. They just could do it.'
Carter's lawyer, Joseph Cataldo, argued that Roy was a depressed teenager who had previously tried to take his own life and was determined to finish the job this time. He also argued that Carter shouldn't have been charged with manslaughter because Massachusetts doesn't have a law against encouraging or assisting suicide.
Cataldo said earlier in July that he was surprised and disappointed by the court's ruling. But he noted that the court didn't weigh in on Carter's guilt or innocence, but merely found there was enough evidence for the case to proceed to trial.
'At trial, it's proof beyond a reasonable doubt, which is a much higher standard, and I'm confident that ultimately, after trial, Michelle Carter will be acquitted,' he said.
In addition to the text messages, prosecutors focused on a phone conversation Carter had with Roy while he was in his truck inhaling carbon monoxide fumes. Prosecutors said Carter sent a text to one of her friends after Roy's death and told her that while she was on the phone with Roy, he got out of his truck because he became frightened. She said she told him to 'get back in'.
A state judge refused Thursday to delay the start of same-sex marriage in New Jersey until a legal appeal can be settled, denying efforts by Governor Chris Christie's administration to put off gay weddings.
As it stands, the state must grant marriage licenses for same-sex couples starting 21 October. But the administration of Christie, a Republican who is considered a possible presidential candidate for 2016, was expected to appeal the denial of the stay to a higher court.
"Granting a stay would simply allow the state to continue to violate the equal protection rights of New Jersey same-sex couples, which can hardly be considered a public interest," Judge Mary Jacobson wrote.
The Christie administration had already asked the state supreme court to weigh in on Jacobson's initial decision last month that the state had to allow gay marriage.
In her order Thursday, she ruled that the start of nuptials did not have to be delayed, finding the state was not likely to win its appeal and that it would not hurt the state if same-sex marriage licenses are issued.
Gay couples who want to wed "would suffer many hardships of constitutional magnitude if the stay were to be issued, but the state has not demonstrated how it would suffer in any meaningful way if the order is enforced," she wrote.
Thirteen states, including most in the northeast, already allow gay couples to marry. New Jersey offers gay couples civil unions but not marriage.
The matter has been fought in New Jersey's courts and legislature for a decade.
"The court's decision once again confirms that the hardships of not being able to marry are real and immediate. Every day does count," said Hayley Gorenberg of Lambda Legal, which had filed a brief in support of same-sex couples seeking the right to marry.
Jacobson's ruling in September concluded that it's unconstitutional for New Jersey to block gay marriage now that the federal government is giving married gay couples legal benefits.
New Jersey's attorney general had argued unsuccessfully that a state law cannot be found unconstitutional because of a change in federal policy.
Gay rights groups are also pushing lawmakers to override Christie's veto last year of a law that would have allowed gay marriage.
Oscar Pareja was a bright-eyed 21-year-old when the tragic news filtered through. It was the evening of 15 November 1989, and he was heading home after Deportivo Independiente Medellín’s league clash with America de Cali. A man he had shared a field with earlier that day had been murdered in cold blood on the streets of Pareja’s home town.
Pareja, part of the Medellín midfield engine room that day, was startled. The victim, Alvaro Ortega, had refereed the encounter, one which Medellín needed to win.
The murder was no random act. The Colombia of the 1980s and 90s was dominated by drug cartels. There were few areas their tentacles failed to reach. They built houses for the poor. They were involved in politics, gambling and bribery. And football. But when things didn’t go their way, bloodlust often ensued.
The king of them all was notorious cocaine lord Pablo Escobar, whose empire at one point controlled 80% of the global cocaine market while simultaneously ripping apart Colombian society. But there was a turf war. His Medellín cartel had a chief rival in the Cali cartel to the south, headed up by prominent foes Gilberto and Miguel Rodriguez Orejuela. And the rival cartel bosses shared more than a lust for control of the drugs trade. They were soccer fanatics with stakes in at least three top clubs between them.
For referee Ortega, that proved fatal. Medellín was said to be owned by or linked to Escobar, who was more commonly associated with city rival Atletico Nacional. America belonged to the Cali brothers, a pair often euphemistically referred to as “the gentlemen” after their preference for bribery over violence.
Reports vary, but it seems Escobar ordered Ortega’s death when decisions had gone against Medellin in a previous game against America.
It’s an all too raw memory for Pareja. “We needed to win,” he tells the Guardian. “I thought the referees had had a bad performance. When I was going back home after the game we heard that one of the referees was killed. I remember we were numb.”
It was a fraught time. Yet Pareja remembers he and his team-mates were unable to fully grasp the gravity of the events going on around them.
“That problem was there. It was part of our society. I can tell you now after time has gone by, we can see now clear how those days were. I thought then they were involved there somehow. We could feel it. We were numb. We didn’t know much better. We knew that we needed to play for the team. We knew we had to defend the colors of the club. And we needed to fight for our fans.
“But we knew there was a lot going on with the owners of the club, that they were very shady and [there were] things that we didn’t know. When you are a soccer player you don’t know much about those things.”
It’s a far cry from the tranquillity of life now. Pareja is a prized asset of the FC Dallas homegrown-driven juggernaut. As head coach, he is known for developing young talent that may otherwise go astray – an ingredient that perhaps draws from his own blighted youth in the pressure cooker of Colombia in the 1980s and 90s. And at Dallas, he has placed a small corner of Colombia amid his ranks, components of the burgeoning talent pot that makes up the current generation of the country’s footballers.
MLS all-star and FC Dallas left winger Fabian Castillo is one. Fellow winger Michael Barrios, an even more diminutive presence on the right, has been showing promise recently, having been acquired by Pareja from the Colombian second division.
Pareja’s own generation put Colombian football on the map. After making their first World Cup appearance in 28 years at Italia 90, they returned again for the 1994 tournament in the United States as one of the favorites. The upsurge in the Colombian domestic game may have owed much to the investment of tainted money, but the talent of emerging players like Faustino Asprilla and peaking veterans like Carlos Valderrama was genuine. Pareja was among the Colombian game’s top players when the national team finished fourth at the 1991 Copa America, but he was not selected for the World Cup three years later. He watched on in horror, though, as his country and friends crashed out in the group stages under the weight of unwanted pressure from home.
FC Dallas coach Oscar Pareja. Photograph: Matthew Visinsky/Icon Sportswire/Corbis
“The national team, when they came to the United States World Cup, they received a lot of pressure from many people who didn’t belong to the game,” Pareja recalls. “And it put the players in a place where they were not in the right state of mind. Players being threatened by people who were betting for games. Players who were feeling the pressuring of problems that was there, and empowered by people who were crazy. That was very difficult to overcome. And that magical generation just surrendered under the pressure of the shady people who don’t do right.”
It was a bewildering time. And Pareja had a front row seat. Literally.
In the years to come, before he left his homeland for MLS in 1998, the now 46-year-old would live through many more incidents of intrigue and skulduggery, some tragic like the death of Ortega. There were attempts on the lives of presidential contenders, and Luis Carlos Galan was assassinated the same year as Ortega. There were bombings. But others simply entered the realms of the absurd.
Perhaps the most bizarre: taking his own personal orders from Escobar. Along with a group of Medellin teammates, Pareja was summoned to the infamous drug czar’s self-built, one-man prison in the foothills overlooking the Andean city. Pareja is a stoic figure, uncompromising as a football man on the FC Dallas touchline. But as he recounts those strange days, Pareja’s gaze narrows briefly, fixes on his midriff, the merest hint of nervousness as he remembers that time in his life. It is not one he likes to recall.
“I was demanded, we were demanded – not in a bad way, you know,” he says. “As I said, those people were looked at in this time as Robin Hoods. So, back in that time when you receive a call and said you needed to go and play with Robin Hood and his people, you see it as an appearance that is going to bring joy to someone in the jail.”
Robin Hood seems a peculiar choice of moniker. But Escobar and other cartel leaders were known to build houses for the poor and other community amenities, however twisted their motives may have been.
But joy? “I got to tell you something, Colombia in those days with soccer and all what was happening, there was a lot of joy,” says Pareja. “And the game just embraced everybody in a certain way. So when we were playing we were just having fun with people who, many of them, you see growing in your community. Now the people see, it gets bigger and bigger, but back in that time we were people who just loved the game and shared it.
“Now I can see back and see different. But back there when we were doing it, we were just, with policemen, with the guy who sells you the milk, with the guy who is a professional like you, with Robin Hood, with everybody. We were invited to go everywhere. With the kids that were in the hospital, with the kids that were less fortunate, with people who were in the jail, we made a lot of appearances.”
In Escobar’s prison, named La Catedral, Pareja and his team-mates played the drug lord and his henchmen, taking care to ensure the score remained close and avoiding any rash tackles. In person, Pareja is immensely pleasant, and Escobar, too, was enamored. He called Pareja by a popular old nickname, El Guapo, or “the handsome one”. But Escobar also noticed Pareja’s habit of arguing decisions. Escobar apparently informed Pareja that his protestations were futile, the decisions preordained by the drug lord.
It was all a long time ago. Pareja left his homeland as it was still struggling to cope with the aftermath of the big cartels. A civil war, too, continued to rage. Today, things are calmer, a pleasing aspect for a man who has previously spoken of a desire to one day return home to the family ranch.
The nation’s game is also riding a wave of optimism unknown since the generation to which Pareja belonged. Colombia’s 1994 World Cup squad was the pinnacle of a journey which began at the 1990 tournament in Italy. It was festooned with talent. Names such as Valderama and Asprilla, as well as Antony de Avila, Freddy Rincon, Gabriel Gomez. And the one seared into Colombia’s – and football’s – conscience: Andres Escobar. Andres was another man gunned down on a Medellin street, his untimely demise occurring after the 1994 World Cup, and after Andres, a teammate of Pareja at the 1991 Copa America, had scored a fateful own goal in Colombia’s second group match against the United States. By this time Pablo Escobar had been killed by the authorities, but the lawlessness of the country prevailed.
The nation’s game is also riding a wave of optimism unknown since the generation to which Pareja belonged. Colombia’s 1994 World Cup squad was the pinnacle of a journey which began at the 1990 tournament in Italy. It was festooned with talent: names such as Valderrama and Asprilla, as well as Antony de Avila, Freddy Rincon and Gabriel Gomez. And one seared into Colombia’s, and football’s, conscience: Andres Escobar. Andres was another man gunned down on a Medellín street, his untimely death coming after the 1994 World Cup, where he, a teammate of Pareja at the 1991 Copa America, had scored a fateful own goal in Colombia’s second group match against the United States. By this time Pablo Escobar had been killed by the authorities, but the lawlessness of the country prevailed.

Pareja is a passionate follower and proponent of Colombian soccer and the product it churns out. Perhaps he has to be. These days, that output is not inconsiderable. The current generation of Colombians, probably the most stylish at last year’s World Cup in Brazil, might be the natural successor to the one which featured Pareja the player. That squad may have been Colombia’s best ever at the time, a golden generation robbed of a true shot at glory.
The question might be: can this generation, with all of its bubbling talent and its social and political advantages, do what their predecessors of the early-to-mid 1990s couldn’t? A win in Russia in 2018 seems somewhat hazy right now, particularly after Colombia’s lackluster excursion at the recent Copa America in Chile. But for the core of this promising Colombian collective, largely born in the aftermath of the era darkened by the major cartels, a fairytale triumph would rank as a fitting tribute to the generation who had their best chances effectively ripped from their grasp.
“Are we going to be able to win a World Cup? I think we will have a chance. With this generation we probably do. We have a lot of talent there. But it is not easy to win a World Cup,” says Pareja. “But I think we are on a great path. I think our country financially, economically, politically, my society is striding toward a good place. Our ingredients are there. I think we have a better country, we have better people around.”
Greenville’s Bonafide Kayaks, a company that makes fishing kayaks, is launching a new manufacturing operation that’s expected to create 76 new jobs over the next three years.
According to a press release, the company plans to invest $2 million in the operation, which will be located at 10 Quest Lane in Greenville. Bonafide has already started constructing the facility and completed the installation of production equipment. The company’s operations, including rotational molding and assembly, are expected to come online by December.
“We looked long and hard for the best place to set up our manufacturing operations, and Greenville, S.C. offers everything we were looking for. It’s one of the best places in the world to live; there’s a thriving workforce available; it’s a great shipping point; and, every resource and utility we need are readily available. We couldn’t be happier to call Greenville our new home,” said Bonafide Kayaks’ CEO Luther Cifers, who also established YakAttack, a kayak fishing accessory brand.
Bonafide Kayaks, which formed in 2016, has spent the past several months developing its first models of kayaks. It recently debuted at the ICAST trade show in Arlington, Va.
“In Greenville County, we cherish up-and-coming small businesses. They are the foundation and the future of our strong, diverse economy. We could not be more proud to have Bonafide Kayaks join our great community,” said Greenville County Council Chairman Butch Kirven.
Hiring is expected later this year. Interested applicants are encouraged to email buff@bonafidekayaks.com. Learn more at bonafidekayaks.com.
Indian employers appear to be upbeat about government initiatives like digitalisation and smart cities as they plan to increase hiring activities by up to 15 per cent in specific sectors such as IT, manufacturing and construction, says a survey.
About 51 per cent respondents surveyed believe there will be a hiring growth of up to 15 per cent this fiscal, according to the Genius Hiring-Attrition & Compensation Trend Survey: 2015-16.
"There is an overall positive environment, which has given rise to a fair amount of movement to the job scenario. The government's emphasis on smart cities, digitalisation, infrastructure among others have added to this positivity. However, the growth in hiring is on the expected lines and not as what people anticipated will happen after the general elections", Genius Consultants Chairman and Managing Director Rajendra Prasad Yadav told PTI in Mumbai.
Kolkata-based Genius Consultants' 'Hiring Trend Survey' is an online survey conducted annually to gauge the hiring trends across sectors. Around 22 per cent of the 714 companies surveyed across India said new jobs will be created in FY 2015-16.
The survey further revealed the growth in hiring will mainly be in sectors like IT, ITES, BPO, manufacturing, construction and engineering. The jobs will be created in cities, including New Delhi, Kolkata and Mumbai and there will be both new vacancies as well as replacement hiring, it said.
"The sectors where both new vacancies and replacement hiring will take place are IT, ITES, BPO, manufacturing, banking and finance in cities like New Delhi, Kolkata and Chennai", the survey said.
According to the report, only 2 per cent of the companies surveyed believe that there will be layoffs this fiscal and only 5 per cent said there will be no hiring. When companies were asked what experience band they would expect for hiring, about 41 per cent said 4-8 years and 30 per cent said 1-3 years.
About 14 per cent felt it would be in the experience band of 8-15 years and 14 per cent felt jobs would be created for freshers. The survey further said 33 per cent companies feel that the maximum talent crunch was witnessed in the 4-8 year experience bands.
Sectors like construction and engineering, manufacturing, and pharmacy and medical industries will face difficulties while hiring this year. On attrition, long an area of concern for organisations, 38 per cent said the anticipated rate will range between 5 and 10 per cent, in view of the optimistic job market scenario in the country during the current financial year.
It's well after the lunchtime rush at Mamaleh's Delicatessen in Kendall Square and the place is still buzzing. Waiters dish out knishes, pastrami, lox, bagels — and of course, chopped liver ad infinitum.
"Ours is never as good as their grandmother's, or their great aunt's, or their mother's," says Rachel Miller Munzer, one of Mamaleh’s seven owners. "Or they go, 'My grandmother's were never like this. Oh my God, this is so much better.' "
Of course, it isn't your grandmother making the chopped liver here. It's line cooks like 22-year-old Marvin Bonilla, who came to the U.S. three years ago from El Salvador.
"Welcome to Mamaleh's in Cambridge, Massachusetts," Bonilla announces in a voice suited for commercial radio. "If you want to have good food just try our matzo ball soup. You can also try our pastrami and the house lox salmon. You will love it."
Bonilla loves his job — but there's a but. He knows those who work in the front of the house make about twice what those in the kitchen make. It's because of the tipping system, which rewards servers and bartenders for hard work (and increased prices), while the wages of dishwashers and cooks stay flat.
Mamaleh's, in a 2016 file photo (Jesse Costa/WBUR)
“If we get busy or we are slow we make the same, but for these people up front, if they get busy they make more money," Bonilla says. "And then you see who really does the hard job. The back kitchen is the part of the kitchen that's making the whole food."
Some say there's an advantage to flat wages — you always know what you're going to earn — but the bottom line is clear to anyone paying attention: Tipped workers make more.
Mamaleh's and its sister restaurant, State Park, are the latest of about a dozen establishments in Boston and Cambridge to adopt "revenue sharing" programs. The details vary from restaurant to restaurant, but the basics are the same: Charge customers a few cents on the dollar and funnel the money to cooks and dishwashers.
"In the industry [the wage gap] has been our No. 1 concern for the last five to seven years," says restaurateur Keith Harmon, who co-owns three restaurants in Jamaica Plain.
In 2015 he adopted revenue sharing at Tres Gatos before putting it into place at Centre Street Cafe. When they opened Casa Verde in the spring of 2016, revenue sharing was part of the business model from the get-go. Harmon says people often ask: 'Why not just increase prices and raise kitchen wages?' But that would automatically increase tips, he says, and thus perpetuate the wage gap.
"Now what you're doing is you're converting the idea that the busier the restaurant is, the better it is for everyone who is working in back of house," Harmon says.
Keith Harmon is part owner of three restaurants in Jamaica Plain. In 2015, he established revenue sharing by way of a 3 percent fee that goes straight to kitchen employees. Harmon helped convince the owners of Mamaleh’s and State Park in Cambridge to implement a similar program. (Simón Rios/WBUR)
Revenue sharing has already taken off in California’s big cities, to the extent that a spokesperson for the California Restaurant Association calls it "the emerging new norm." In the Bay Area, the association says revenue sharing is more common than not.
Another solution to the restaurant wage gap is to do away with tipping entirely, a model some restaurateurs in New York are experimenting with. Danny Meyer is leading that charge.
"I believe that hospitality is a team sport," Meyer said on "Freakonomics Radio" a year ago. "And same way as if you went to a soccer game, the ticket you bought would include the seat, but it would only include the strikers ... and expect you to pay the goalie separate, based on what you as a fan thought of the goalie's performance."
The tipping model is a relic of the Civil War, Meyer says, and its time has come.
Harmon doesn't think the market he operates in — casual dining in Jamaica Plain — is ready for the elimination of tips.
"We didn't want to alienate the tipped staff to take care of the non-tipped staff, and so we came up with this pennies-on-the-dollar approach," he says.
Harmon admits revenue sharing is only a partial solution, but he calls it a highly effective form of "duct tape." Before it, Harmon says tipped employees earned about 2.5 times what back of the house staff earned. Now, that gap has been cut by a third, and Harmon says the entire staff is happier as a result.
A receipt from Centre Street Cafe includes a 3 percent “hospitality administration fee,” which comes out to 39 cents on $13 of food sales. The entire fee goes to non-tipped employees in the kitchen. (Simón Rios/WBUR)
Harmon is spreading the revenue-sharing gospel among his colleagues in the industry. Different restaurants have different names for the fees — living wage fee, administrative hospitality fee, kitchen appreciation fee — and each model is a bit different.
At Harmon's three restaurants, a 3 percent fee on all sales goes directly to the kitchen.
At Mamaleh's, 5 percent of food sales (not beverages) goes to kitchen workers at the end of the month. And because the fees are distributed equally, the lowest earners, the dishwashers, stand to benefit most relative to their base pay.
That's not the case at every restaurant doing revenue sharing. At Puritan & Company in Cambridge, co-owner and general manager Chris Yorty says management will decide what to do with the 3 percent "kitchen staff fee."
Yorty says he increased kitchen wages $1, and the rest of the money collected will be distributed as managers see fit, whether that means a raise for one worker or a signing bonus for someone the company wants to hire.
“Selfishly, it’ll help attract more talent,” Yorty says.
Augusto Lino, a tip-earning bartender at Area Four Boston, has told people, "You never talk about your money in front of the kitchen, because these guys are working harder and making the same amount of money." Area Four's owner is considering a revenue sharing program. (Jesse Costa/WBUR)
At Mamaleh's, which implemented revenue sharing in early March, owner Miller Munzer is nervous about raising prices on customers.
“It's frightening," she says. "I mean, our stomachs are turning right now because we don't know what effect it will have."
Mamaleh's regular Dan Meyers interrupts Munzer to say he supports better pay for hard work. "Let people earn a good wage," he says. "I'm happy to pay another 20 percent."
He adds: "It's a great thing, and that shows that [for] the people running the place, it's not just lip service. They actually care about their people."
Mamaleh's also cares about keeping its kitchen staffed. With a booming economy, restaurateurs across the state say a labor shortage is the top problem facing the industry. Munzer says Mamaleh’s is constantly hiring, and they want revenue sharing to release some of the pressure.
“Ultimately we hope it will help with staffing, which means less turnover," she says. "I mean, there's a tremendous expense in hiring people, training them, and then turning them over."
The wage hike hasn't taken effect yet, but Mamaleh's cook Bonilla is beaming at the idea that his pay could go up as much as $3 an hour.
"I'm going to make another 5 percent of everything," he says. "I feel this is going to be a great thing in my life.” He calls Mamaleh's "one of the best places" he's worked.
And the fringe benefits almost go without saying: all the matzo balls and chopped liver a line cook can dream of.
Correction: An earlier version of this story said Marvin Bonilla is from Honduras, not El Salvador. We regret the error.
10 Things Every New Parent Goes Through in Their First Year
The arrival of a newborn is a life changing experience for new parents and over the first year of your baby’s life, you and your baby will go through an exciting period of enormous change. This first year will be full of surprises, a combination of incredibly special moments and some tough times as you adjust to this brand new chapter of your life together.
The Car Seat Challenge
Having purchased a car seat, every new parent will discover that installing this seemingly straightforward piece of equipment can be completely baffling at times.
Choosing the correct car seat for your baby and fitting it properly will protect your child from harm should something go wrong, so it is essential to make sure that the seat is fitted correctly and the baby is securely strapped in.
Have an expert show you how to fit the seat correctly at the point of purchase and keep the manufacturer’s instructions close by in case you need to refer back to them. This task is not always as easy as it looks, particularly when you are strapping in an older baby who is keen on wriggling or protesting!
Sleep Debt
A newborn may sleep for as much as 18 hours each day, which sounds like an awful lot, but with a small tummy, your baby will only sleep for three to four hours at a time in between feeds, so interrupted nights are inevitable in the first few months and many hours of those precious ZZZ’s will be lost.
The first few weeks are the toughest, but by around three months, your baby will begin to nap at times that are more predictable and at six months, your baby may begin to sleep through the night, giving you the opportunity to recharge your batteries!
An Overwhelming Love
For the majority of parents this is an instantaneous feeling, whereas for others it may take a little time, but the incredible bond and unconditional love a parent feels for their child is often both surprising and overwhelming. Parents often describe feeling a depth of emotion that they had never previously known prior to the birth of their child.
The author Elizabeth Stone summed up the depth of a parent’s love concisely when she said, “Making the decision to have a child is momentous. It is to decide forever to have your heart walking around outside your body.”
Copious Amounts Of Conflicting Advice
Everyone from your friends, parents and in-laws to complete strangers will bestow their best advice and tips on you as a new parent, with the best of intentions. The trouble is, the majority of it will be conflicting, leaving many first-time parents feeling confused and unsure.
The thing to remember is that your baby is not like any other, she is unique and through trial and error, you will get to know your baby, learning precisely what works for you and your baby and gradually establishing what doesn’t. Take what works and discard the rest; it is a learning process and one size does not fit all.
Life Can Get A Little Messy
Any new parent will expect to change a large quantity of wet and soiled nappies when entering the world of parenthood, and new parents soon master the art of changing a nappy after experiencing the odd, inevitable disaster. Excellent preparation and speed can help you to succeed, and carrying a spare change of clothes can save you in an embarrassing situation when it has all gone terribly wrong.
Babies will frequently bring small amounts of milk back up following a feed; this is called possetting and is quite normal for the first six months, or sometimes longer until the lower oesophagus valve matures and tightens. Along with possetting, babies will also vomit occasionally, which can be incredibly inconvenient at the best of times. Providing your baby is content, comfortable and growing well, there is no need to worry; it is simply a case of riding the storm with an armful of fresh muslin cloths and bibs at the ready.
An Altered Sense Of Time
Tasks like popping out for milk that would once have taken a mere five minutes can turn into mission impossible when baby becomes hungry or needs a nappy change the moment you are ready to leave. Longer trips can be even more complicated due to the fact that you need to pack and prepare for every eventuality, before walking out of the door.
New parents often find that they must temporarily forfeit their previously stellar punctuality record and adopt a slightly more flexible approach to time in the short term.
Your Relationships Change
The arrival of your newborn immediately changes the dynamics in your household, there is an additional person to care for and interact with which leaves less time for you and your partner to focus on one another and your relationship. It is important to set time aside for the two of you, even if it is just a ten-minute catch up over coffee.
Outside of your home, you are likely to experience many other relationship changes: some friendships may drift apart slightly due to your lifestyle changes, whereas others will develop a stronger connection. You are likely to meet new people, and many new parents find that having a child of their own adds a completely new depth and dimension to their relationship with their own parents.
New Parents Often Worry About Their Baby
Common anxieties for new parents will be associated with bowel movements, or concerns over whether or not baby is getting enough to eat. New parents may also worry that their baby is crying too much or feel anxious about how much or how little their baby is sleeping.
First time tasks like bathing your baby alone or cutting their microscopic fingernails can be incredibly daunting, new parents will worry about making all kinds of mistakes, but mistakes are inevitable so it is important to be gentle with yourself!
There is large variability from one baby to the next in everything from feeding and bowel movements to developmental milestones, so comparing your child with others can make you worry unnecessarily. Discuss any persistent concerns with your child’s paediatrician for reassurance.
Leaving Your baby In The Care Of Someone Else For The First Time
It is important to take advantage of the care and support that is available to you as it is crucial to have a break and rest when you need to. Nonetheless, leaving your baby in the care of someone else for the first time can be something of an emotional rollercoaster for any new parent.
Building up to any longer periods of separation by leaving your baby for short periods at first can help to alleviate much of your anxiety. This process allows baby to get used to being away from you and helps to reassure you that your baby is content in your absence when left with an attentive and gentle carer.
A Complete Change Of Lifestyle
In the first twelve months of your baby’s life, many aspects of your lifestyle will change. Alone time becomes precious, free time is suddenly a scarce commodity, your alarm clock becomes redundant and all of a sudden, going to bed at 9pm on a Friday night feels like a luxury!
Your priorities change when you become a new parent and you develop a brand new perspective on life. The sleepless nights, the occasional worry and the mucky bits may take a bit of time to adjust to, but the abundance of joy and pure love that you experience as you watch your precious baby grow will more than compensate for any of the trickier challenges you face in the first year as a new parent.
The Phillies' win total slipped by five, to 66, in 2017, but there were signs that the rebuild is moving in the right direction. Fifteen players made their big-league debut and several, such as Nick Williams, Rhys Hoskins and J.P. Crawford, have the look of potential difference-makers. The Phillies played .500 ball over the final 76 games and their runs per game jumped from 3.8 to 4.7 after the All-Star break. For the season, their run differential was minus-92. In 2016, it was a majors-worst minus-186.
The bullpen took a solid step forward in the second half, as well. Over the final 33 games, it recorded a 2.54 ERA, second-best in the majors to Cleveland (2.41) over that span. Hector Neris, Luis Garcia, Adam Morgan, Edubray Ramos and Hoby Milner fueled the improvement. In the offseason, GM Matt Klentak fortified the promising unit by signing veteran setup men Pat Neshek and Tommy Hunter to two-year deals. Klentak hopes a deeper bullpen will keep the Phils in games on nights when a young starter's pitch count swells and he is out of the game after five innings.
While most of us were celebrating Independence Day last Monday and enjoying the long weekend, the struggle in Marawi did not let up. Over the past few days, distressing reports of our marines being killed, of trapped civilians dying of hunger, and of a young boy shot dead by a sniper ran in the news, underscoring the gravity of the situation in the southern city under siege. The death toll, as of Monday’s noontime salute to the “Heroes of Marawi,” includes 58 servicemen, all killed in action from May 23 to June 9. And while we in other parts of the country might easily feel like everything is fine, given our comfortable distance from it all, reality paints a different picture.
Fortunately, a few brave men and women are on the ground to paint that picture for us, and give us an intimate, if difficult, look at the situation. Have a look at these dispatches from photojournalists in Marawi—whose images are poignant reflections of what’s really happening in the most embattled corner of our country right now.
Carlo Gabuco
Luis Liwanag
Jess Aznar
Linus Escandor
Ted Aljibe
Romeo Ranoco
Quite a few news reporters in Marawi have also been posting their views from the ground:
Jeff Canoy
Raffy Tima
Chiara Zambrano
Fox News guest J. Christian Adams, a right-wing blogger and author who previously worked for the Department of Justice and contributes to Pajamas Media and Breitbart, claimed on Monday morning that the New Black Panthers, and not the body of a teen killed by a gunshot wound, were secretly the real “spark” that kicked off George Zimmerman’s murder trial.
Adams told Congress in April (PDF) that the Obama administration relied upon racial biases when considering a voter intimidation charge against a member of the New Black Panthers who was filmed standing outside a Philadelphia polling place in 2008. Nothing ever came of the incident — even top Republicans began mocking the notion of Obama protecting the New Black Panthers — but Fox News keeps trying to revive it for some reason.
Appearing Monday, Adams continued pushing that narrative, telling the hosts of “Fox & Friends” that the National Association for the Advancement of Colored People (NAACP) has “teamed up with the New Black Panthers” to lobby for civil rights charges against Zimmerman. Adams added that he’s certain the Zimmerman murder trial would not have taken place if it weren’t for the little-known fringe political group.
“I have a piece at Breitbart today that shows how the New Black Panthers were the spark behind the whole Zimmerman investigation,” he insisted. Amazingly, Fox News host Steve Doocy agreed, seeming riveted by the claim and pressing for more information. “Mr. Adams, why does it seem the Department of Justice is taking their marching orders from the New Black Panther Party?” Doocy asked.
“This is part of a radical, racial agenda that Eric Holder has implemented since he took office,” Adams said, citing the 2008 video of a black activist standing in a public place and the attorney general’s attempts to enforce provisions of The Voting Rights Act against historically racist jurisdictions, among others.
“This is just one, a couple of, many, many examples of this radical racialism that this Justice Department has pursued,” Adams said. “I think the FBI will tell the lawyers that, no, there’s no evidence of racial intent. I think the criminal section of the civil rights division will be reluctant to pursue charges.”
“The big question is, will Eric Holder overrule them as he has in other instances because of a political agenda, a racial agenda,” he continued. “Will he listen to the Black Panthers? Or will he listen to the FBI? In the past he’s listened to the radicals. This one might be so incendiary that he won’t touch it. We’ll just have to wait and see.”
This video is from “Fox & Friends,” aired Monday, July 15, 2013, clipped by liberal watchdog group Media Matters.
Activision has made much of the fact that Call of Duty: WW2 is designed to take the venerable FPS franchise back to its roots. However, there’s now confirmation that a particular mechanic introduced in recent years is being removed, which should help make gameplay more closely resemble older installments.
Last week, a fan asked Sledgehammer Games co-founder Michael Condrey whether Call of Duty: WW2 would feature the unlimited sprint mechanic. Condrey was more up-front with his answer than you might expect, responding with a succinct tweet that simply read, ‘nope.’
Nope — Michael Condrey (@MichaelCondrey) May 24, 2017
Recent Call of Duty games have put a huge emphasis on giving the player advanced traversal capabilities, and infinite sprint was a big part of that evolution. These changes allowed multiplayer action to be more fast-paced and frenetic, but traditionalists have maintained that always being able to sprint removes a degree of strategy and tactical play from the game.
Of course, it makes complete sense that Sledgehammer Games would choose to remove this mechanic from a game based around World War II. This year’s installment is a marked departure from the futuristic setting of last year’s Infinite Warfare, and it’s good to see gameplay reflecting that change.
Much like Battlefield 1 before it, Sledgehammer Games is hoping to achieve a degree of historical accuracy as Call of Duty: WW2 returns the franchise to its roots in real-world conflict. Having soldiers sprint around without ever getting tired would perhaps detract from these efforts.
It’s clear that Activision wants to do something very different with this year’s Call of Duty game, and early response from fans has been overwhelmingly positive. It seems likely that the franchise will split off into more different strands than we’ve ever seen before — but, of course, this all depends on how well WW2 performs critically and commercially when it launches in a few months’ time.
Call of Duty: WW2 is scheduled to release for PlayStation 4, Xbox One, and PC on November 3, 2017.
Source: Dexerto
Large sedans are not my favorite thing in this world to drive; add the fact that Acura’s RLX Sport Hybrid sedan is – obviously – a hybrid and I’m instantly looking at both dull and mostly boring. I don’t really have anything against large sedans, as they serve a very real market and are comfortable for day-to-day driving. But if it were up to me, happiness is a small sports car and a large SUV. If I were spending the money myself, I’d rather have a large, boring SUV than a large, boring sedan. Or at least that’s my rationale going into this review of the RLX.
If you haven’t heard of the RLX, well, you’re probably not alone. This isn’t a car that Acura sells a lot of, but it is their flagship sedan, replacing the RL back in 2013. Our review car is dubbed the Sport Hybrid SH-AWD, which becomes more important later on, as it’s basically the top of the line of the top of the line.
Exterior – Looking around the outside the RLX has all the right lines and details to make it known this is an Acura, but it also closely resembles the proportions of a Lexus LS, although competing more directly with the smaller GS. The hood is long and the body lines carry that long hood in straight lines back to the trunk, making the visuals of this car both very big and very long. Up front you get the now-recognizable Acura beak and some stunning “Jewel Eye” LED headlights. It’s a very appealing design and definitely gives an a(c)ura of luxury and class.
Interior – And when speaking of luxury and class, know the interior is right on par with the exterior, and also full of enough technology to make any techy excited. The Acura brand is targeted at a more tech-minded, younger audience looking to move into a more luxurious vehicle. While you may not be able to tell from the exterior styling – sans the jeweled headlights – the interior conveys that message very well. It all starts with the way you turn on and off the vehicle and put it into gear. You get a push button start standard on all trims and the gear shifter is fully electronic, making you push or pull buttons like you’re preparing for a rocket launch.
The gauges are pretty standard, but you do get a large LCD display between the standard gauges, allowing you to scroll through different menus and information about the vehicle. You also get a heads-up display which can be configured to show things like your current speed or how the RLX is putting power to the ground and utilizing the hybrid system. You get a plethora of steering wheel controls that allow you to configure the LCD or interface with the infotainment system, or set features like the lane keep assist and adaptive cruise control.
Moving over to the center dash you get a two-tiered screen setup with a display screen – up top and pushed back – to display information like the navigation system and hybrid driving monitor. Down below that is a multi-use touchscreen that can be used for tuning the radio, setting the climate, entering navigation information and much more. While I like the idea of this two-tiered setup it can be a bit confusing; personally I think the user interface looks outdated and cheap for such a tech-minded vehicle.
Powertrain – You can buy an RLX with a 3.5-liter V6 engine that delivers 310 horsepower and is matched with a 6-speed automatic transmission. That’s all well and good, but our review vehicle is the Sport Hybrid model – note there is not a non-sport hybrid option – which uses the same 3.5-liter V6 engine but adds one electric motor in the front and two electric motors, one at each rear wheel, giving it Acura’s Super Handling All-Wheel Drive system, or SH-AWD. This setup nets you 377 combined horsepower and 377 lb-ft of torque, pushed through a 7-speed dual-clutch transmission.
The Drive – The hybrid system is what brings this car alive. It’s basically the same setup you get in the new NSX supercar, and it really works well in the RLX Sport Hybrid. The Sport Hybrid designation is important because you won’t see an eco button in the car; while the system may help with your economy, it’s built to make the car go faster and handle better. Power can be allocated to each of the rear electric motors individually, allowing the RLX to transfer power and push you through corners, maintaining grip like no other big sedan I’ve ever driven. Off-the-line acceleration also benefits from this system, as all the motors working together can put that 377 hp to the ground quickly and without fuss. You’re not going to be lighting the tires on fire, but you’ll be propelled forward really quickly. I’ve seen 0-60 times of around 5 seconds flat; for a big sedan this is pretty impressive.
Everything that I don’t enjoy about driving a big sedan is thrown out the window with the RLX Sport Hybrid. Not only is it a good looking and comfortable vehicle, it’s also incredibly quick and light on its feet. You’re able to feel and carve corners decently, and you’ll be surprised every time you mash your foot to the floor.
Competition – When looking at other luxury cars this size you’re comparing brands like the BMW 5-Series, Audi A6, or the Lexus GS. Base MSRP for the RLX with Technology package is $54,450, and you can move up to the Advance package for $60,450. The Sport Hybrid SH-AWD with Technology package is $59,950, and the vehicle we drove was the Sport Hybrid SH-AWD with Advance package, which bases out at $65,950. With destination and handling ($920) our total vehicle price would be $66,870. The base price is on par with the competition, but as soon as you begin equipping something like the BMW 5-Series comparably, you’ll be looking at an extra $4k. Both the Lexus and BMW offer hybrid versions, but they’re more traditional hybrid setups, unlike the one offered by this Acura.
Overall, I really like the Lexus GS and have always loved BMW and Audi, so pulling anyone away from these brands and convincing them that the Acura is a better deal – without sacrificing luxury – is a hard sell. I think that the Sport Hybrid system could be a game changer in this segment and the more people getting behind the wheel of the Acura the better. If you’re thinking about buying in this market be sure to at least give the RLX a test drive and see what you think. It has definitely changed my mind on the desirability of driving a large luxury sedan, although for my $67K I’d probably still go with a bigger SUV, even if looking at BMW or Lexus.
Miss Peru 2018, the Peruvian beauty pageant, switched things up this year.
During a typical pageant, beauty contestants are asked to state their bust, hip, and waist measurements for the judges. However, the Peruvian contestants didn’t state their size, but instead gave statistics on ongoing issues happening to women throughout Peru.
Sexual assault, trafficking, harassment, and female-hate crimes were just a few of the topics discussed during the beauty pageant.
Here are a few of the statements given, via Mirror:
“My name is Camila Canicoba and I represent the department of Lima. My measurements are 2,202 cases of femicide reported in the last nine years in my country.”
“My name is Juana Acevedo and my measurements are: more than 70% of women in our country are victims of street harassment.”
“My name is Luciana Fernandez and I represent the city of Guanacu. My measurements are 13,000 girls suffer from sexual harassment in our country.”
“Greetings. Almendra Marroquin. I represent Lima. My measurements are more than 90 percent of teenagers are abused in their educational centers.”
“My name is Bélgica Guerra and I represent Chincha. My measurements are: 65% of university women are assaulted by their partners.”
“My name is Romina Lozano and I represent the constitutional province of Callao, and my measurements are 3,114 women victims of trafficking up until 2014.”
You can peep the whole contest below. Contestant speeches start at about 3:45.
Romina Lozano, who represented the constitutional province of Callao and cited the 3,114 women victims of trafficking, was the winner of the pageant.
Former beauty queen Jessica Newton had organized the Miss Peru 2018 event, telling BuzzFeed that the decision to dedicate the event to gender issues was based on the idea of empowering women and all those involved.
“Everyone who does not denounce and everyone who does not do something to stop this is an accomplice. Women can walk out naked if they want to. Naked. It’s a personal decision. If I walk out in a bathing suit I am just as decent as a woman who walks out in an evening dress.”
This was a very cool way of using the typically problematic setting of a beauty pageant to shed light on issues affecting the women in Peru.
New Discovery: Cas A Supernova Explosion–Seeding the Universe with Building Blocks of Life Posted on Oct 31, 2011 Heavy elements may be no more than rare cosmic pollutants, but they are exceedingly important to us. Without them, solid, rocky planets would be impossible, and the prospects for Earth-like life would be correspondingly dim. The iron that the Chandra X-ray Observatory recently imaged in Cas A might one day flow as hemoglobin in the blood of some future alien species. Fast-moving knots of silicon from the Cas A supernova could provide the raw material for sand on otherworldly shores, where crashing waves of H2O send thunderous sound waves through a nitrogen-rich atmosphere. A team of astronomers led by Dr. John Hughes of Rutgers University used observations from NASA's orbital Chandra X-ray Observatory to make an important new discovery that sheds light on how silicon, iron, and other elements were produced in supernova explosions. An X-ray image of Cassiopeia A (Cas A), the remnant of an exploded star, reveals gaseous clumps of silicon, sulfur, and iron expelled from deep in the interior of the star.
Elusive Preon Stars –Do They Exist? Posted on Oct 31, 2011 A preon star is a proposed type of compact star made of preons, a group of hypothetical subatomic particles that could originate from supernova explosions or the Big Bang. Preons were originally proposed as quark constituents over three decades ago, but in 2005, Fredrik Sandin and Johan Hansson of the Luleå University of Technology in Sweden came up with the concept of preon "stars" or "nuggets" in space. These objects would be somewhere between the size of a pea and a football, with a mass comparable to the Moon and a density somewhere between that of a neutron star–the densest ordinary form of matter–and a black hole.
Pulsar’s Superluminal Speeds: Really Faster than Speed of Light? Posted on Oct 31, 2011 We learned in our intro to science courses that information cannot be transmitted faster than the speed of light. Yet laboratory experiments done over the last 30 years clearly show that some things appear to break this speed limit without abrogating Einstein's special theory of relativity. Now astrophysicists in the US have observed such superluminal speeds in space, in the form of radio pulses from a pulsar.
Update: CERN’s “Faster-than-Speed-of-Light” Claim Gets a Second Look Posted on Oct 31, 2011 Scientists who challenged current models of physics by reporting particles that broke the Universe's speed-of-light limit said on Friday they were taking a second look at their hotly debated experiment.
Fox has picked up two animated series eyed for next season, Napoleon Dynamite, an adaptation of the hit 2004 movie, and Allen Gregory, co-written and executive produced by Jonah Hill. Both series are produced by 20th Century Fox TV; Allen Gregory is a co-production with studio-based Chernin Entertainment. The order to Napoleon Dynamite is for 6 episodes, while Allen Gregory has been picked up for 7 episodes. Both projects originated as presentations.
The original cast of Napoleon Dynamite led by Jon Heder is back to voice the animated series, which follows the misadventures of an awkward high school teenager and his quirky friends as they struggle to navigate life in rural Idaho. The film’s writers Jared Hess, who also directed it, and Jerusha Hess wrote the adaptation with The Simpsons veteran Mike Scully. All 3 are executive producing.
Allen Gregory‘s title character is the world’s most celebrated seven-year-old who embarks on the greatest challenge of his life: attending elementary school with regular kids. Hill co-wrote the script with Yes Man co-writers Andrew Mogel and Jarrad Paul. The three are executive producing with Peter Chernin and Katherine Pope.
The sizes of the orders to Napoleon Dynamite (6 eps) and Allen Gregory (7 eps) are unusual as they represent one regular 13-episode order split in two. I hear the shows’ producers are not particularly thrilled because animated series with such short orders are more difficult to staff and more expensive to make. Also, in success, the small pickups will lead to a pretty large production gap between the initial and the back orders. Fox has a new animated series on deck for midseason, Bob’s Burgers.
Story highlights "Gods of Egypt" stars European and Australian actors playing Egyptians
"It is clear that our casting choices should have been more diverse," director Alex Proyas said
(CNN) The makers of the forthcoming film "Gods of Egypt" apologized for showcasing a predominantly white cast amid criticism over lack of diversity in a film based on Egyptian mythology.
The fantasy epic, slated for release in February, stars Scotsman Gerard Butler of "300" fame and Danish actor Nikolaj Coster-Waldau, best known as Jaime Lannister on "Game of Thrones," as warring Egyptian gods. The cast also includes Australian actors Geoffrey Rush, Brenton Thwaites and Courtney Eaton, along with African-American actor Chadwick Boseman and French-Cambodian actress Elodie Yung.
The mostly white cast came under scrutiny as soon as shooting started in 2014. "And so, the time-honored tradition of Hollywood whitewashing continues," Australian writer Ruby Hamad wrote at the time.
There was new attention in November, when production company Lionsgate released the first look at the film in character posters and a trailer.
Actress Bette Midler was among those to call out the filmmakers for casting white men in a film based on Egyptian mythology.
According to Mychal “Trihex” Jefferson, there really isn’t much of a difference between speedrunning video games and lifting weights. “You have to be goal-oriented,” he recently said as he beamed live and uncut from a private Discord call. “It’s very easy to get into a speed game and feel intimidated, insecure and inadequate about how bad your times are, and you can get overwhelmed and salty.
“Very similarly, when you start working out you get into the gym, you feel like everyone is looking at you, you’re feeling judged, and you can’t do things that support your own body weight. You’re looking at someone like, ‘Oh my god, look at that guy, he’s got abs on his abs.’ You have to be addicted to the tangible progress. … Once I started crushing those plateau points in working out, I applied that mindset to speedrunning and I became way happier with it.”
Trihex’s specialty is the intricate and infinitely frustrating Super Mario World 2: Yoshi’s Island—which is widely considered to be one of the most demanding games in the scene. His 100 percent performance at Awesome Games Done Quick 2014 is legendary. For three hours and 20 minutes, Trihex is stone cold; his fingers violently ricochet off the candy-colored Super Famicom controller as he launches pixel-perfect eggs at offscreen walls, pipes, and enemies to maximize his efficiency. On his knee, he drapes a white kitchen towel to keep his hands free of any sweat that might muck up an input. He uses it sparingly, usually during the split-second cartridge-derived loading screens.
That run made Jefferson a star, and it highlighted who he is: a big man throwing himself against hard-as-nails Nintendo games. In the years that followed his breakout run, his likeness would become immortalized as one of the most widely used emotes on Twitch, and spark an ongoing culture war in the streaming community.
Trihex started speedrunning Yoshi’s Island in 2004, long before the scene was made mainstream by the democratization of internet video. His introduction to the community was an accident. In the early 2000s, gaming magazines on newsstands were occasionally packed with a DVD. In a pre-YouTube world, those DVDs served as a way for publishers to deliver game trailers and gameplay footage to those without a G4-enabled cable box. Trihex was an avid Electronic Gaming Monthly reader, but his subscription ran out, so he made the trek to a local convenience store to buy the latest issue straight from the rack. “The copy I bought had a bundled DVD with one of SpeedDemosArchives’ attempts to reach out and explain what speedrunning was,” said Trihex. “It had a little five-minute demo of a bunch of runs ... and that was it. I put that disc into my PS2 and was like ‘Oh my god. Speedrunning is a thing! This is what I already wanted to do.’”
Those early days were magical. Imagine trying to parse frame-perfect tricks through murky forum jargon. “There was a lot of compromise and limitations,” Trihex recalled. “Like, ‘hey I found this really cool skip! ...it’s kind of tough to describe, there’s this one corner of this one section of this level. I don’t have capture, it’s pretty cool though!’” The community instead relied on an archaic, analog form of tape-trading, in which hardcore speedrunners recorded their best runs on VHS and sent them off to a staff member at Speed Demos Archive (which remains the governing body of the scene to this day), who would host them on the site through an analog-digital converter for the official world record tally. It was obscene and convoluted, but that was the point.
The pre-Twitch era of speedrunning required a pathological obsession with a specific game, and the idea of ever getting famous by beating Yoshi’s Island quickly was laughable. “Everyone was poor, everyone was broke, there was no revenue to be had,” he said. “You did it for the glory. It was all a giant love letter for the game you enjoyed. When beating the game wasn’t enough, and you wanted more out of it. Speedrunning gave the game an infinite purpose.”
Trihex learned about livestreaming from fellow speedrunner Narcissa Wright, whose mind-boggling 20-minute Ocarina of Time run helped propel the community into video game ubiquity. It completely changed his perception of the culture. For years, speedrunning was presented only in its most seamless, perfected form, but Trihex discovered that there was a significant audience willing to watch him try (and fail) a tough sequence over and over again. They would watch the unglamorous laboratory work that adds up to a refined clear. “When I saw that people would want to watch that, I was like ‘Oh, speedrunning can be social? I’m in!’” he said. Trihex broadcast his first stream in February of 2011, and moved his channel to Twitch the same day the platform launched, later that year.
Jefferson was a natural. He can complete transcendent speedruns, but he’s also affable, intelligent, and gleefully, joyously geeky. Tune in as Trihex stares at the screen, mesmerized by a Beyblades tactics video, only to be interrupted by his incredulous girlfriend, understandably asking “what the fuck are you watching?” He desperately tries to switch tabs to find something less incriminating, only to bring up reams and reams of more Beyblade videos. So many people struggle in the transition from competitive gamer to public personality, but Trihex stuck the landing. Watching his stream feels like being his friend.
Like most things on Twitch, the origin of the Trihard emote, which turned Trihex from celebrity to ubiquitous meme, is fairly trollish. As Jefferson remembered it, he was at an anime convention in the summer of 2012, and took a photo with a cute girl. When he looked back at the picture, he discovered that the look on his face was priceless: sweaty, mouth slightly open, eyes wide and full of fear. The eternal panic smile, as Trihex described it. He posted the photo to his Twitter, and was immediately inundated by playful photoshops re-appropriating his soon-to-be-famous visage into high-pressure situations. “They made it into an in-chat meme,” he said. “It went viral in my own chat, and I was happy with it. I agree, that was a very cringe smile I made. That girl was cute and I was freaking out and lost all of my spaghetti, and it’s forever documented in that face being super-awkward.”
This was still in the early days of Twitch, when there were only 20 global emotes available to chat. (You may be aware of the most famous ones: Kappa, Pogchamp, 4Head.) The corporate brass wanted to add more, so the staff stickied a thread in the (long-gone) Twitch forums petitioning the burgeoning community for options. It was spammed constantly; hundreds of streamers begging, pleading to be baked into the DNA of the world’s most prominent live-streaming platform. Trihex never advocated for himself, but his followers made it a personal mission to deify that manic moment. They flyered the thread with Trihex’s face like they were campaigning for a crucial state primary, and a week later, a Twitch employee entered his chat.
“I saw that sweet, totally eye-fetching wrench symbol next to their name, and I was like, ‘Oh my god, there’s staff here!’” said Trihex. “I stop my run of Yoshi’s Island and go to the hardest level with the most swag in it, and start pounding my controller at full capacity. While doing that I put on Beauty and the Beast’s ‘Be Our Guest’ at full volume. And the staff was like ‘Hey, why is this guy trying so hard?’ And that was it. [The emote was] Trihard. The staff member came up with the name for it.”
Five years later, Trihard is one of the most prominent emotes in the Twitch community. Like Kappa or ResidentSleeper, it’s become a crucial fixture for the subculture’s linguistics, and Trihex has embraced his face’s canonization with open arms. The Trihard is his calling card; you can find it all over his YouTube channel thumbnails, and he takes giddy pride in its top-three placement amongst all Twitch emotes.
Recently, the semantic definition of Trihard has changed. Trihex is black, and if you spend enough time on Twitch, you’ll see Trihards used to denigrate people of color. From my own anecdotal experience, I’ve seen the emote fill chat feeds when a personality uses an innocuous word like “steal” on stream. In Hearthstone broadcasts, Trihards are sometimes posted when players play a spell called “Burgle” or summon King Mukla, an in-universe stand-in for King Kong. Sometimes, Trihards are used to target and harass specific people. Last year Terrence “TerrenceM” Miller, a black Hearthstone pro, made an impressive second-place run at Dreamhack Austin, but was bombarded by racist messages, many punctuated with Trihards. At TwitchCon last October, a diversity panel’s chatbox was hijacked in a similar way. The video game community’s racism problem is nothing new, but now, Jefferson’s face has emerged as a de facto slur.
Trihex knows that if he wanted to, he could pen an open letter to Twitch complaining about the emote’s newfound contextual racism and have Trihard purged from the database the next morning. But that’s not something he’s interested in doing. “It’s not the emote’s fault that it’s being used for racist things,” he said, in a video addressing the controversy. “Nothing in the emote is racist at all. It’s not like there’s exaggerated lips, or anything to make it look offensive. It’s a picture of me very, very happy.” His position is understandable: Trihex isn’t interested in letting anyone tell him what his face does and doesn’t mean.
“If for any reason I got rid of the Trihard emote, or if it was deemed derogatory speech or whatever, all chat would do is migrate to ‘cmonbruh,’ ‘punchtrees,’ and ‘kevinturtle’—the other black guy emotes,” he said. “If anything you’ve empowered them, you’ve told them, ‘Oh, if we cause enough ruckus, we caused them to react.’ If you give the shitters attention they’ll say ‘Let’s just rock and roll further.’ ... It goes back to the accountability of the individual with the context, not the emote.”
Trihex also doesn’t want to obscure the fact that the emote is associated with more innocent definitions. “It is used for racism a lot, but it’s also used for good a lot,” he said. “One of the ways the emote is being used for good that I’m really proud of is there’s a Dota 2 streamer named Arteezy, and when he’s winning really hard, he’ll put Trihard on the screen, and cover his minimap. He’ll finish the match without using his minimap and call it ‘Trihard mode.’ His chat is spamming Trihard all day. Clearly it’s a play on the face—because I look hyped in the face—and it’s a play on the pun of the emote itself. If you’re ‘trying hard’ ... you gotta turn off the minimap to win in style.”
Essentially, Trihex is fighting for the purity of Trihard, to make sure that its translation remains firmly in his grasp. He also holds a surprisingly unsympathetic position toward those who want to see it removed. In that same video where he lays out his love for the emote, he wraps things up by echoing some of the diction used in alt-right circles. “It’s a great emote that itself does not [promote] racism, so for those who want to get rid of it: dude, your mods suck, your mods are cucklords, you gotta talk to this community and clean that shit up, it can be done,” he said. “Or change your snowflake values of what you get offended by, because it’s all over the internet.”
When I asked him if he’s ever bothered when Trihard is used to hound people of color on Twitch, he adopted a slightly softer tone, but still put the onus on the streamer to govern their chat, rather than blaming the audience.
“If [streamers] were to ask me about [getting harassed], I’d say, ‘You need to set your culture better, you probably have crap mods that aren’t doing a good job, and you have to set a tone,’” he replied. “If you don’t curate your culture and your environment as early as possible, then you’re going to let your viewers remain toxic. You have to call out what you want to call out. You are building your castle, and if you allow those things to happen, people are going to get the mindset that you’re okay with that. I personally don’t take racism very seriously. To me it’s just people trying to provoke you.”
Other people of color on Twitch don’t hold Trihard in the same regard, and they’re happy to purge it from their chat when things get bad. Deejay Knight, a Star Citizen streamer and military veteran, told me there have been several times when he’s put the emote on a one-second delay because of its racist usage. (He does this when his broadcast is featured on the Twitch homepage, or when his stream is raided by trolls.) When I asked him what he thought about Trihex’s thesis, that streamers can snuff out the bad actors by enforcing a strong code of laws, he told me he can see both sides.
“I, for one, know the importance of not allowing people to say whatever they like. He’s correct there. If you let people say whatever they like, they’ll walk all over you, and I have no intention of giving power to racists,” replied Knight. “There is also a penchant for some Twitch communities to be very racist. That exists everywhere in life, it would be no different within Twitch.”
Knight, like Trihex, doesn’t want to see Trihard banned. “That’d be like saying, ‘A select group of racist people use sheets; should we ban sheets?’” He also believes that Twitch could improve some of its moderation tools to make weeding out racism easier (though he does say that overall, the platform does a good job). The one thing Knight does seem sure about is how unlikely it is that the tone of our online discourse will change anytime soon. “Good old mods and ban tools help plenty for me, but not everyone can shrug it off. Racism is a part of the world, and by association, the internet,” he says. “Being rid of it is impossible, so the trick is navigating around it. At least for me.”
That’s what makes Trihex different. He has a deep, slightly naive faith in Twitch. Despite his occasional bristliness, Trihex is not toxic, nor does he root for toxicity. He told me he believes that the gaming community is a “diverse, happy place, where everyone feels welcome.” The hate and misogyny that corrodes online culture? “The shitstains of the past, but it will get better.” It’s a bold stance to take when his face is being used in place of the n-word, but Trihex still believes that gamers are affectionate, considerate, and empathetic. He made friends through clever Yoshi’s Island skips—the true unity of the games community on its best days—and he sincerely hopes that with a little more education and some better moderation, we’ll be on the path to utopia. In the meantime, the fate of Trihard rests in his hands. It’s not going anywhere, because he doesn’t want to lose.
“All I’d be doing is saying, ‘Hey racist cucks, great job getting empowered enough to rid Twitch of one of their best emotes. One of our best emotes. Fuck, of me,” he says in that video. “We end the story because it’s abused for the wrong reason.”
Trihex has curtailed his gym schedule for November to focus solely on perfecting Super Mario Odyssey’s embryonic speedrun path. To compensate for his temporarily “sedentary lifestyle,” he’s drawn up a series of macros for his caloric intake. This week he set a new personal best of one hour, 26 minutes, and he just discovered a way to skip the ground-pounds behind the waterfall.
Image caption: The Kyushu e-mail scandal sparked protests last week
Dozens of workers at Japan's Kyushu Electric Company posed as citizens and lobbied for a power plant to be reopened, an internal inquiry says.
A whistleblower last week revealed that some 50 workers had sent e-mails to a televised debate backing a plan to restart Kyushu's Genkai plant.
But the firm's internal inquiry has found more than 100 employees may have been involved.
Two-thirds of Japan's 54 reactors have been idle since the 11 March quake.
The 9.0-magnitude tremor, and the massive tsunami it triggered, wrecked the Fukushima Daiichi plant and sparked a review of the country's nuclear industry.
All the nuclear plants that were closed for routine inspections were ordered to stay closed until their safety could be guaranteed.
The plant at Genkai, in the south, was one of the first plants scheduled to be reopened.
But the government's announcement last week of more rigorous tests across the board scuppered the firm's attempts to have the reactors restarted.
Popularity slump
The e-mail scandal has dealt a further blow to Kyushu Electric, and the firm's boss made a public apology last week.
Nuclear crisis
11 Mar: Fukushima Daiichi nuclear plant struck by huge earthquake and tsunami
16 Mar: 20km (11-mile) evacuation zone declared around plant
17 Apr: Plant owner Tepco says crisis will be under control by end of the year
20 May: Tepco President Masataka Shimizu resigns as firm posts losses of 1.25tn yen (£9.4bn; $15.3bn) for the past financial year
2 Jun: Naoto Kan survives no-confidence vote over his handling of quake and nuclear crises
A Kyushu employee told Japanese media how senior officials asked about 50 subordinates to send supportive messages to a televised meeting hosted by the government.
But on Tuesday, sources at the firm revealed that more Kyushu offices had been involved in the lobbying.
National public broadcaster NHK reported that the messages from Kyushu employees accounted for more than 30% of all messages sent in support of the Genkai plant being reopened.
Meanwhile, Prime Minister Naoto Kan has announced that Japan needs to rethink its commitment to nuclear energy.
Before the Fukushima crisis, the country had targeted 53% of its electricity supply to be nuclear by 2030.
But Mr Kan said this commitment should be scrapped, and the reliance on nuclear power must be reduced.
The prime minister, who has been under immense pressure to resign, has slumped to his lowest level of popularity since he took office just over a year ago.
According to the latest opinion polls, just 16% of the population believe he is doing a good job.
Otoy makes tools that artists can use to create stunning 3D art that looks as real as anything captured on film. The company recently launched its X.IO App Streaming service so that developers can create cloud graphic services such as next-generation cloud games, streaming virtual reality media, and workstation applications that can run on low-end devices.
The service is the latest cloud-based innovation from Los Angeles-based Otoy, which has also created cloud-based tools for filmmakers via Octane Render and for gamemakers with its Brigade tools. Those tools enable artists to create photorealistic images for games or movies using cloud-based computing resources, said Jules Urbach, Otoy’s chief executive, in an interview with GamesBeat.
Otoy has funding from Russian investor Yuri Milner, former Morgan Stanley boss John Mack, and Autodesk. Its advisers include Google chairman Eric Schmidt, talent agent Ari Emanuel, former IBM CEO Sam Palmisano, and former IBM exec Irving Wladawsky-Berger.
Here’s an edited transcript of our interview.
Image Credit: Dean Takahashi
GamesBeat: Tell us what you’ve been doing.
Jules Urbach: X.IO has launched. That means developers can upload any application in a zip file. It could be a game. It could be a Windows app. It’s a little bit like YouTube. You just get a link back that allows you to play back the stream from a URL. There’s no App Store here. This is just running Unreal Engine 4 from my iPhone, and it works.
We’ve been doing this for a while, getting it to run in the browser in pure JavaScript. You now have no barrier to entry for deploying high-end content. You can use one graphics processing unit (GPU) in the cloud, four GPUs, scale way beyond anything that consoles or PCs have. A lot of the promise of gaming is finally realized.
Going forward, we’re going to port to devices that are different from just a 2D screen – things like Gear VR, the Samsung device, which is launching shortly. You have things like Project Tango, Google’s play on VR, where you can take your tablet and move it through space. We’re working toward streaming not just a video of what’s happening but almost a hologram. We announced this technology at [graphics conference] Siggraph. The idea is that when you have a holographic video stream, it allows you to look around, like through a window. You have a portal into that world. It’s super low latency.
This is something we’re doing specifically for VR streaming. You can have the ability, whether it’s with Oculus’s Development Kit 2 (DK2) or the Samsung Gear, to connect with this perfectly rendered virtual world and get a stream. It solves a huge number of problems, especially with Gear VR, where you have very low rendering power and you don’t have a lot of storage. That’s the plan. We’ll migrate from streaming existing 2D applications in the browser to doing full VR immersive streams.
Then you have some of these interesting new device categories, like Magic Leap, which is working on AR. The closer you get to the wearable glasses or contact lenses, the more you need the cloud to stream this stuff. That’s what we’re working toward. A big part of that is the developer backend we now have, and improvements in the codec that allow you to stream things like depth and multiple layers to give this holographic effect.
GamesBeat: I wonder about virtual reality. When you’re streaming VR to something, do you have to have everything there, everything sent to you? If you’re looking this way on a screen, it can just deliver what is visible in that direction. You don’t have to render what’s over here or there.
Urbach: We created a solution to get around that. This is an example of rendering on Octane, where I’m rendering everything. This is the entire 360. With ray tracing, it becomes pretty easy. This is one of the things we’re watching at Amazon. You don’t have to render one view. You’re rendering everything that’s around you, so the latency doesn’t matter, because as you look around, it’s all there.
It’s hard to do that with traditional rasterization, but with Octane and Brigade, we can render this stuff in 360 with multiple layers. That makes the whole effect in VR work much better. Even without that, we’re able to stream — this is streaming on the Samsung Note 4. This is already the speed we’re getting – better than the browser – with our native VR app. It’s pretty low latency and 120 frames per second.
We have two modes. We have the ability to stream what’s in the view at 120Hz, and we also have what you were seeing before, where we use a second stream to send the entire ray traced panorama as well. If you look around or the connection drops, you still don’t see any missing pieces. That’s the plan for VR. As we go forward, we also can send out more complex information that allows you to even, with one chunk, move around and see without having to re-download any new information from the server.
First step is just getting the 360 parts down. Step two is rendering the layers behind that so you can navigate through the scene to a certain point. All of those things are already part of the codec that we’ve built.
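The 360 streaming Urbach describes works because the client can sample a pre-rendered panorama locally as the head turns, instead of waiting on the server. A minimal sketch of the core lookup, mapping a view direction to pixel coordinates in an equirectangular panorama (the function name and layout are illustrative, not Otoy’s actual codec API):

```python
import math

def equirect_pixel(yaw, pitch, width, height):
    """Map a view direction (radians) to pixel coordinates in an
    equirectangular panorama. yaw in [-pi, pi), pitch in [-pi/2, pi/2]."""
    u = (yaw + math.pi) / (2 * math.pi)   # 0..1 left-to-right around the horizon
    v = (math.pi / 2 - pitch) / math.pi   # 0 at zenith, 1 at nadir
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return x, y
```

Because every direction already exists in the panorama, a head turn is just a different lookup into the same frame, which is why network latency stops mattering for look-around.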
In the Galaxy Gear VR, this is what you see in your view port as we’re streaming it down, from Amazon and X.IO. If you look too quickly around, you’ll see a black edge, but I’m over LTE and there’s no black edge. It connects right into the time warping that Oculus’s John Carmack created, which re-projects the view very quickly so you don’t get nausea. We use that to send the server predictive information to send down the next image based on where you’re going to look. We never had that in traditional cloud gaming. It helps.
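The predictive step Urbach alludes to can be approximated very simply: extrapolate the head pose forward by the network round trip, so the server renders the frame for where the head will be, not where it was. A hedged sketch using linear extrapolation of yaw (real systems like Carmack’s time warp use richer pose models; this helper is purely illustrative):

```python
import math

def predict_yaw(current_yaw, angular_velocity, latency_s):
    """Linearly extrapolate head yaw (radians) over the round-trip latency,
    then wrap the result back into [-pi, pi)."""
    predicted = current_yaw + angular_velocity * latency_s
    return (predicted + math.pi) % (2 * math.pi) - math.pi
```

For example, a head turning at 2 rad/s over a 50 ms round trip should be rendered about 0.1 rad ahead of its last reported yaw.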
Image Credit: Otoy
GamesBeat: I wonder what the quality of the Samsung’s Gear VR is like.
Urbach: It’s way better than the desktop VR. It’s a higher-res screen. This is the device. It’s 2560 by 1440. DK2 is 1920 by 1080, so this is double the number of pixels. It’s a higher quality screen. My assumption is that when Oculus launches the final version of a consumer desktop system, it’ll be this quality or better. But the better VR experience is on mobile. Mobile is the bleeding edge.
We’re working toward getting mobile VR to be a replacement for desktop VR. I think Carmack believes in that. He devoted his last nine months to getting Gear VR to work. That’s his big thing. We’re the software complement to that. We want to make that happen. The cloud and streaming and all these other tricks we’re doing with panoramic streaming and depth streaming and layers are the ways to make it work. He’s been very supportive. He got us in the development program for Gear VR very early on.
Our business model with X.IO is pretty straightforward. Normally at Amazon, you need to buy a server for an hour. At scale, we’ve figured out how to slice that into per-minute costs. We announced our pricing today. If you want a quick one-minute game experience on Gear VR, we can deliver that for five cents. It’s five cents a minute. Costs will go down as we get more users.
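The per-minute billing Urbach quotes is easy to sanity-check. A tiny sketch using the five-cents-a-minute figure from the interview (the helper itself is illustrative, not X.IO’s billing code):

```python
def session_cost_cents(minutes, cents_per_minute=5):
    """Cost of a streamed session under per-minute billing.
    The 5-cent default comes from the quoted X.IO pricing."""
    return minutes * cents_per_minute
```

So a one-minute Gear VR demo costs 5 cents, and a half-hour session would come to $1.50, versus paying for a full server hour up front.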
Activists: Cash from pharmaceutical companies killed medical marijuana bill
ATLANTA - Some grassroots political activists say campaign cash from pharmaceutical companies swayed two state lawmakers responsible for killing the medical marijuana bill.
Channel Two’s Lori Geary has been poring over the campaign disclosures and reached out to the two lawmakers, who vehemently deny the accusation.
“Money is behind everything in politics,” said political grassroots activist Chris Wall.
He said you need to look no further than the contributions to state Sen. Renee Unterman, R-Gwinnett, and state Rep. Sharon Cooper, R-Cobb County.
He and others accuse the two lawmakers, who chair their chambers’ Health and Human Services committees, of killing the bill that would have granted immunity to parents of kids with severe seizure disorders who brought non-FDA-approved cannabis oil back from states where it’s legal.
Wall points out both women receive campaign cash from pharmaceutical companies that have nothing to gain from natural treatments.
Both women pushed for more clinical trials involving a drug called Epidiolex. Its manufacturer, GW Pharmaceuticals, recently partnered with Novartis on its medical marijuana drug.
Geary found big pharmaceutical companies gave more than $60,000 to Cooper over the past decade; Novartis was among them. Over eight years, Unterman brought in $33,000 from drug companies, including Novartis.
“When you are getting contributions, considerable contributions for years and years and years by companies who are clearly vested in not seeing this thing passed, it’s worth a look,” Wall told Geary.
Both lawmakers denied money influenced their decisions.
Unterman released a statement saying:
“When HB 885 was being heard during the committee process, big pharmaceutical companies were nowhere to be found and seemed to show no interest in the bill. Some would suggest that the failure of HB 885 was directly related to campaign contributions that I receive from big pharmaceutical companies – and this could not be further from the truth.
“The debate about the bill would not have gone until the final hours of session if campaign contributions were influencing key decision makers. I am grateful Gov. Deal intervened and came up with a solution. At no time did I desire the defeat of this important piece of legislation for our state.”
“That's totally ridiculous. I'm really offended that they would make that accusation against us. Both of us are nurses. Our first concern is the safety of patients, especially children," Cooper told Geary by phone.
Wall said he doesn’t believe the denials: “Safe? Ready? When the alternative is to die? What is there to lose? What is the risk factor when your biggest risk factor is possible death from seizures?”
Both lawmakers are running unopposed in the May primary and general election.