Over-the-counter packages of Nexium, Prevacid and Prilosec tell you to take the pills—known to doctors as proton-pump inhibitors, or PPIs—for just two weeks at a time unless otherwise directed by a physician. Yet drugs of this best-selling class prevent heartburn and ease related ailments so well that patients—particularly those who suffer from a condition called GERD (gastroesophageal reflux disease)—are often advised to take the medications for years. By decreasing acid production in the stomach, the agents prevent the caustic liquid from backing up—or refluxing—into the esophagus, where it can cause pain and can damage the food tube's delicate lining. In recent years, though, safety questions have been raised about prolonged use of the blockbuster drugs. (The medications appear to be safe when taken for a short period, as directed.) Some studies, for example, have linked continuous treatment with proton-pump inhibitors to serious infections caused by the bacterium Clostridium difficile. Presumably something about lowering the acid environment of the stomach allows the pathogens to survive when they otherwise might not. Other investigations suggest long-term changes in the stomach's acid content can lead to improper absorption of several vitamins—such as B12—and minerals, triggering bone loss, among other ill effects. Perhaps the biggest surprise came last year when two studies linked the regular use of proton-pump inhibitors to conditions that were seemingly unrelated to the acid levels of the stomach. One of the studies, published in JAMA Neurology, found that the drugs increased the risk of developing dementia, including Alzheimer's disease; the other, published in JAMA Internal Medicine, suggested a greater risk of kidney problems. The papers did not prove that PPIs cause the problems. But some researchers have nonetheless suggested possible mechanisms by which long-term use of the drugs could trigger dementia or kidney problems. 
A reduction in vitamin B12, for example, might leave the brain more vulnerable to damage, says Britta Haenisch, an author of the JAMA Neurology study and a neuropharmacologist at the Bonn campus of the German Center for Neurodegenerative Diseases. Last spring clinicians at the Houston Methodist Research Institute reported another plausible explanation for how PPIs might lead to these unexpected health issues: they picked up signs that the drugs act not only in the stomach but elsewhere in the body, too. These discoveries leave patients and doctors alike wondering who should and should not use proton-pump inhibitors long term. “At this point, we don't have enough data to weigh one risk versus the other,” says Kyle Staller, a leading gastroenterologist at Massachusetts General Hospital. But he and others are feeling their way forward.

Proton Pumps

Some amount of acid is, of course, crucial for the stomach to break down food. Specialized cells that dot the stomach's inner lining pump out hydrogen ions, or protons, which, from a chemical point of view, are what make the stomach's juices so acidic. As the name implies, proton-pump inhibitors reduce acid in the stomach—and thus reflux into the esophagus—by shutting down many of these cellular pumps. The shutdown is permanent, but the drugs are not cures, because the cells replace lost pumps. Another popular class of drugs, known as H2 blockers (Tagamet among them), also limits acid production but in a different, less powerful way. Antacids, such as Tums, neutralize stomach acids but are even less potent, useful only for occasional, mild discomfort. The effectiveness of PPIs has fueled a huge surge in their use since their release in the 1980s. Today they are available both over the counter and by prescription, and Nexium remains one of the most prescribed medications in the world. The studies reported in 2016 grew out of earlier hints that such chronic use could affect the brain and kidneys.
One 2013 study in PLOS ONE, for instance, found that proton-pump inhibitors can enhance the production of beta-amyloid proteins, a hallmark of Alzheimer's. Three years later the JAMA Neurology study, which included 74,000 Germans older than 75, found that regular PPI users had a 44 percent higher risk of dementia than those not taking PPIs. Similarly, worries about kidneys emerged from evidence that people with sudden renal damage were more likely to be taking PPIs. In one 2013 study in BMC Nephrology, for example, patients with a diagnosis of kidney disease were found to be twice as likely as the general population to have been prescribed a PPI. The 2016 study of PPIs and kidney disease, which followed 10,482 participants from the 1990s through 2011, showed that those who took the drug suffered a 20 to 50 percent higher risk of chronic kidney disease than those who did not. And anyone who took a double dose of PPIs every day had a much higher risk than study subjects who took a single dose. The 2016 Houston Methodist study that suggests a new explanation for a link between PPIs and Alzheimer's or kidney problems looked at cells that were grown in culture. It showed that besides acting on cells in the stomach, the drugs also affect certain cells that normally line blood vessels. As with many other cells in the body, those in blood vessel walls need to make acid so that they can break down and get rid of abnormal or damaged proteins. The cells safely store the acid in special internal compartments, which essentially serve as molecular garbage dumps. If, however, a cell's internal trash is not broken down—as occurs if acid levels are too low—bits of microscopic detritus start to pile up. A cell overflowing with its own garbage cannot function properly and quickly becomes damaged. “We actually showed these rubbish piles accumulating in the cells,” says John Cooke, a cardiovascular researcher at Houston Methodist and one of the study authors. 
The resulting problems can become particularly severe wherever many blood vessels are found—as is the case in the brain and kidneys. Indeed, some recent studies have also hinted at a possible connection between long-term use of PPIs and damage to another organ with lots of blood vessels: the heart. Though reasonable, Cooke's conclusion cannot be considered proved. Proof would require more study of the effect of proton-pump inhibitors on the vasculature in animals or humans, as opposed to cell cultures. Researchers also need to explore other factors that could account for the link between PPIs and dementia, heart disease or kidney problems. After all, some of the best-known risk factors for these conditions are smoking, obesity and a high-fat diet, which, as it happens, also increase the likelihood of acid reflux. In that case, use of the drugs could be a marker for certain unhealthy habits rather than a new, additional cause of these conditions.

Decisions, Decisions

Without conclusive data, physicians and patients have to balance the need to prevent the ill effects of excess stomach acid and reflux with the desire to avoid potentially serious—if theoretical—side effects from long-term use of PPIs. Many doctors worry that reports of potential side effects will scare away patients who have a real need for the medication. Some people with GERD, for example, suffer from such miserable heartburn without PPIs that they struggle with daily life. Untreated acid reflux also carries risks besides acute pain. Studies have shown that it may, over time, alter the lining of the esophagus in a way that increases the risk for a condition called Barrett's esophagus, which can, in turn, be a precursor to cancer. Reducing acid is thought to help reduce the risk. (It is also possible to get Barrett's esophagus or cancer without having had any reflux symptoms, however.)
Whenever one of Staller's patients at Mass General says he or she wants to stop taking a PPI, he likes to perform a simple test. He has the person stop taking the medication for a week and substitutes Tagamet or another H2 blocker. (Stopping a PPI cold turkey, without adding another drug, typically causes a rebound effect, pushing the stomach to produce even more acid than it otherwise would.) He also recommends cutting back on acidic and spicy food for the length of the test. Then he sees if the patient is still bothered by heartburn at the end of a week, especially during the day, when gravity should help prevent acid from rising up into the throat. The persistence of heartburn indicates the presence of a more severe problem, Staller says. And thus, the benefit of taking a daily PPI outweighs the risks in such cases. The calculus, obviously, is different for everyone. For Vicki Scott Burns, a children's book author in Bolton, Mass., PPIs are “the lesser of two evils.” She says her quality of life is vastly better on the drugs. Others might reach an alternative conclusion. In the end, Staller and other health experts advise patients and their physicians to gather and evaluate as much information as possible before making a decision—and to be prepared to change course if new evidence comes to light.
PREVIOUS WINNERS Fleet-footed maestro Eden Hazard took the plaudits for his starring role in Chelsea’s title charge – and rightly so. The Belgian completed 179 dribbles, streets ahead of nearest rival Alexis Sanchez. Hazard created more chances than any other Premier League player this season (101). He was Chelsea’s chief architect, and of the Premier League’s attacking midfielders, only Arsenal's Chilean star Sanchez scored more. But it’s a fair question to ask if he or any of Chelsea’s attacking talents would have had the freedom to cause chaos in the attacking third if you’d removed the roadblock behind them. The answer, of course, is no. That impenetrable barricade was Nemanja Matic. His team-mates rely on him – not only to do their dirty work, but to supply pinpoint passes into feet. After signing a 21-year-old Matic from Kosice for £1.5 million, the Blues used him as a makeweight in the deal to bring David Luiz to Stamford Bridge from Benfica. The Brazilian has since departed for PSG and Matic returned to west London in a deal worth £22m after excelling at the Estádio da Luz. And what a coup he’s proved to be. He made more tackles than any other player in the top division (129) this season, won the third-most duels overall (278) and was second to creator-in-chief Cesc Fabregas for passes made in the opposition half. He’s revered for his efficient defensive work, but he’s more than just a water-carrier. He can play, too. His impact at both ends of the pitch was highlighted in Chelsea’s 2-1 win at Aston Villa in February, where Matic made more passes than any other player on the pitch (74, with a 91% completion rate), two of which led to goalscoring opportunities. Defensively he was just as influential, making 5 tackles, 6 clearances and 9 ball recoveries – only Villa’s Carles Gil managed more (11). 
Again, it was Hazard and Branislav Ivanovic who hogged the headlines with goals, but it was Matic’s discipline and tactical acumen that let the mavericks off their leash. It came on the same day Manchester City drew 1-1 at home with Hull City, giving Chelsea a seven-point lead at the top of the table. The Serbian showed his worth again during Chelsea’s 1-0 win over Manchester United at Stamford Bridge in April. Refusing to break rank, the Blues sat back and allowed United to dominate possession, which bore little reward for Louis van Gaal’s side as Matic stepped in before any openings could present themselves. The 6ft 4in sentinel made 10 ball recoveries, 5 tackles and 4 headed clearances to keep the red wave at bay. Chelsea may not have shown much of their attacking prowess that day, but the Blues went on to win their first title for five years and a fourth in 11 campaigns. “He’s [Matic] a giant. Not for his size but for the way he plays. The man is a giant.” Who’s going to argue with Jose?
const { note } = require('aztec.js');

const utils = {};

/**
 * Make the first letter of a given single-word input string capitalised
 *
 * @method capitaliseFirstChar
 * @param {string} stringToCapitalise - input for which the first letter should be capitalised
 * @returns {string} - input string with first letter capitalised
 */
utils.capitaliseFirstChar = (stringToCapitalise) => {
    return stringToCapitalise[0].toUpperCase() + stringToCapitalise.slice(1);
};

/**
 * Generate a set of notes, given the desired note values and account of the owner
 *
 * @method getNotesForAccount
 * @param {Object} aztecAccount - Ethereum account that owns the notes to be created
 * @param {Number[]} noteValues - array of note values, for which notes will be created
 * @returns {Promise<Note[]>} - array of notes
 */
utils.getNotesForAccount = async (aztecAccount, noteValues) => {
    return Promise.all(noteValues.map((noteValue) => note.create(aztecAccount.publicKey, noteValue)));
};

/**
 * Generate a factory ID based on three supplied uint8's: epoch, cryptoSystem and assetType
 *
 * @method generateFactoryId
 * @param {Number} epoch - uint8 representing factory version control
 * @param {Number} cryptoSystem - uint8 representing the cryptosystem the factory is associated with
 * @param {Number} assetType - uint8 representing the type of the asset, i.e. is it convertible, adjustable
 * @returns {Number} - the packed factory ID
 */
utils.generateFactoryId = (epoch, cryptoSystem, assetType) => {
    return epoch * 256 ** 2 + cryptoSystem * 256 ** 1 + assetType * 256 ** 0;
};

module.exports = utils;
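The factory ID above is plain base-256 packing of three uint8 values into one integer. The same arithmetic, sketched in Python for illustration (the `unpack_factory_id` helper is hypothetical, added here only to show that the encoding is reversible):

```python
def generate_factory_id(epoch, crypto_system, asset_type):
    """Pack three uint8 values (0-255) into one integer, big-endian."""
    for v in (epoch, crypto_system, asset_type):
        if not 0 <= v <= 255:
            raise ValueError("each component must fit in a uint8")
    return epoch * 256 ** 2 + crypto_system * 256 + asset_type

def unpack_factory_id(factory_id):
    """Hypothetical inverse: recover (epoch, cryptoSystem, assetType)."""
    return (factory_id >> 16 & 0xFF, factory_id >> 8 & 0xFF, factory_id & 0xFF)

fid = generate_factory_id(1, 2, 3)   # 1*65536 + 2*256 + 3 = 66051
assert unpack_factory_id(fid) == (1, 2, 3)
```

Because each component occupies its own byte, two factories collide on an ID only if all three components match.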
Pursuing a career in nursing: differences between men and women qualifying as registered general nurses. Much interest currently focuses on differences in the career intentions and career pathways of men and women nurses. This study seeks to add to existing knowledge on the subject with findings from a survey of newly qualified registered general nurses. Questionnaires were sent to a cohort of 1164 nurses, 87% of whom responded. Data from the 936 women and 79 men were compared in relation to educational and employment background, routes into nursing and career intentions at qualification. Procedures for modelling categorical data were applied within the constraints of the study design. Findings showed that men were less likely than women to have entered nursing as a first choice and less likely to intend to work in the community after qualification. Men were more likely than women to plan to move out of clinical practice and to plan to pursue a postgraduate qualification. Other differences between men and women were suggested, but limitations of the study design meant that conclusions had to be drawn more tentatively. Consequently, further research on this subject is warranted.
# The following patch enables conversion webhook for CRD
# CRD conversion requires k8s 1.13 or later.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: eventhubnamespaces.azure.microsoft.com
spec:
  conversion:
    strategy: Webhook
    webhookClientConfig:
      service:
        namespace: system
        name: webhook-service
        path: /convert
Melinda Gates goes public (reflection) The latest issue of Fortune magazine features Melinda Gates (yes, the wife of the richest American businessman) on its cover. The article is a very interesting read, as it uncovers not only how she became his wife, but also how and why the Bill and Melinda Gates Foundation emerged. It is available here, and it made me ponder a lot of things. For example, how such decisions are made. It isn't easy for a person who still remembers what it's like to live from paycheck to paycheck to understand how people can give away billions of dollars, or spend them on some process whose results they may never see. Generally speaking, the article lets you see (just a little, though) how rich people feel about their wealth. And that is indeed a good read.
__author__ = "Max Dippel, Michael Burkart and Matthias Urban"
__version__ = "0.0.1"
__license__ = "BSD"

import unittest
import numpy as np
import time
import torch

import ConfigSpace as CS
import ConfigSpace.hyperparameters as CSH

from autoPyTorch.utils.configspace_wrapper import ConfigWrapper
from autoPyTorch.pipeline.nodes.normalization_strategy_selector import NormalizationStrategySelector
from autoPyTorch.pipeline.nodes.create_dataset_info import DataSetInfo

from numpy.testing import assert_array_almost_equal
from sklearn.preprocessing import MinMaxScaler


class TestNormalizationStrategySelector(unittest.TestCase):

    def test_normalization_strategy_selector(self):
        X = np.array([[1, 2, 3],
                      [4, 5, 6],
                      [7, 8, 9],
                      [3, 2, 1],
                      [6, 5, 4],
                      [9, 8, 7]])
        train_indices = np.array([0, 1, 2])
        valid_indices = np.array([3, 4, 5])

        dataset_info = DataSetInfo()
        dataset_info.categorical_features = [False, True, False]

        hyperparameter_config = {
            NormalizationStrategySelector.get_name() + ConfigWrapper.delimiter + "normalization_strategy": "minmax"
        }

        normalizer_node = NormalizationStrategySelector()
        normalizer_node.add_normalization_strategy("minmax", MinMaxScaler)

        fit_result = normalizer_node.fit(hyperparameter_config=hyperparameter_config,
                                         X=X,
                                         train_indices=train_indices,
                                         dataset_info=dataset_info)

        assert_array_almost_equal(fit_result['X'][train_indices],
                                  np.array([[0, 0, 2], [0.5, 0.5, 5], [1, 1, 8]]))
        assert_array_almost_equal(fit_result['X'][valid_indices],
                                  np.array([[2/6, -2/6, 2], [5/6, 1/6, 5], [8/6, 4/6, 8]]))
        assert_array_almost_equal(fit_result['dataset_info'].categorical_features,
                                  [False, False, True])

        X_test = np.array([[1, 2, 3]])
        predict_result = normalizer_node.predict(X=X_test, normalizer=fit_result["normalizer"])
        assert_array_almost_equal(predict_result['X'], np.array([[0, 0, 2]]))
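The expected values in the test above follow directly from min-max scaling: the scaler is fit on the training rows only, so validation values outside the training range map outside [0, 1] (e.g. 9 maps to 8/6 when the training column spans 1 to 7). A minimal pure-Python sketch of that behaviour, without sklearn:

```python
def minmax_fit(train_col):
    """Learn the per-column range from training data only."""
    return min(train_col), max(train_col)

def minmax_transform(col, lo, hi):
    """Scale values so the training range maps to [0, 1]."""
    return [(v - lo) / (hi - lo) for v in col]

train = [1, 4, 7]   # training values of one feature column
valid = [3, 6, 9]   # validation values of the same column

lo, hi = minmax_fit(train)               # lo=1, hi=7, fit on train only
print(minmax_transform(train, lo, hi))   # [0.0, 0.5, 1.0]
print(minmax_transform(valid, lo, hi))   # [2/6, 5/6, 8/6] -- can exceed 1
```

Fitting on the full dataset instead would leak validation statistics into the preprocessing step, which is exactly what the `train_indices` split in the test guards against.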
San Diego Airport Car Rental | Rent a car San Diego For those who are looking for an artistic, cultural and historical haven with an abundant sea coastline and lovely forests for scenic drives, San Diego, in Southern California, is your best bet. Known as a family entertainment capital, San Diego is a holidaymaker's dream with plenty of places to visit, sights to see and things to do. With our car rental services, you can tour the city taking in the sights, all at your own pace. Built to commemorate the arrival of the explorer Juan Rodríguez Cabrillo in the 16th century, the Cabrillo National Monument is one of the most spectacular sights in San Diego. Located at Point Loma, you can climb the tower and brace yourself for great views from the monument. Sometimes grey whales and other animals are visible in the ocean below. The site is a splendid picnic and camping spot and a great place to spend quality time with the family. At Black's Beach, powerful waves dashing the shore make the water dangerous for swimmers but ideal for surfers; in fact, surfing is a major sport there. The Torrey Pines Gliderport has gliding and paragliding facilities: climb the cliffs to the vantage point and you can fly over the beach in your glider. You can rely on our car hire services to take you to many other beaches where you can swim, sunbathe (even nude) and spend time in peace. Located around 20 miles from San Diego, Knott's Soak City water park has plenty of water rides, rivers, slides and gigantic surf pools with indoor wave splashes. There is no need to visit the beach, with the facilities in Soak City resembling the great Mediterranean beaches and the Caribbean all in one place! The lazy river, the splash pool and surfers' paradise are big attractions in Soak City. With our car rentals in hand, you can visit several other water parks as well and enjoy the variety of entertainment in San Diego.
Erected in the early 20th century, the Botanical Building now houses more than 2,000 varieties of tropical and sub-tropical plants and many more species of flowering plants, imported from all around the world. You can spend a quiet day in the Botanical Garden, view its horticulture display and soak in the sights of the man-made fountains, streams and waterfalls inside. A San Diego rental car is perfect for exploring and offers a convenience that you will never find with public transport. Car rental tips for San Diego Airport Freedom to move where you want with a rental car A rental car gives you the freedom to go wherever you want and make sightseeing stops whenever you like, all at your own convenience, without depending on buses or trains. Please keep in mind that traffic rules must be respected at all times. Park your rental car at San Diego Airport Parking your rental car at San Diego Airport is best done in a garage; it's safer for you and for the car. You may also be able to park on the outskirts and get around by public transportation, which is not too expensive and will offset the parking fee you would otherwise pay in the city center. Keep in mind that many big cities have a city center accessible only to locals or for delivery purposes. Car Rentals at San Diego Airport Rentalcargroup.com compares car rental prices for San Diego Airport. We work with many car rental companies, so the quality of the cars is guaranteed. This way you can be sure to get a rental car that suits you best, at a competitive price. Car rental travel with babies or kids Most car rental companies offer a baby or child seat/booster when you are traveling with babies or kids. You can find out whether they do, and how much the extra fee is, in the detailed rental terms. Be advised that baby or child seats/boosters are subject to availability at the car rental company.
Drive your rental car around San Diego Airport Driving around San Diego with a rental car can be a hassle, but it is far from impossible. It's a good way to get around, but we advise having a look at the local traffic rules, as they may differ from what you are used to. Keep in mind that in rush hour traffic can be pretty dense, and it may take longer to reach your destination. Rentalcargroup.com is part of Ecommerce Group NV, which offers a range of ecommerce services. Our holding company is based in the Netherlands Antilles with support offices in several countries. Registered Place of Business: Ecommerce Group NV, Mahaaiweg 6, Willemstad, Curacao, Netherlands Antilles
A simplified surgical technique for the therapy of stress incontinence. After a review of the literature on stress incontinence the importance of selecting the appropriate surgical procedure is emphasized. The long-term results of a modified and simplified method of retropubic urethropexy which has been used by the authors for 5 years are presented. The technique has been found equal in efficiency to other, more complicated, surgical methods.
Q: Installing a rigid fork on a commuter MTB Motivation: This answer says that installing a rigid fork on a frame designed for a suspension fork changes the geometry. Most answers to my question about a rusted-out suspension fork recommend installing a rigid fork for riding in the rain and snow. The question: What changes to the feel of riding should I expect when swapping a low-to-medium-travel (80-100mm) suspension fork for a rigid fork? My bike is this one: A: Unless the correct fork is chosen, the bike's geometry will change. A rigid fork built for a bike designed for rigid forks has a smaller axle-to-crown measurement than a suspension fork. If your bike is designed for 100mm-travel suspension and you put "any old" rigid fork on, the front of your bike will be 100mm lower than it is now. Even if you correct this in the steerer/stem/handlebars, it will be enough to upset the rake and therefore alter the handling of the bike. (Refer to @Benzo for the answer to this problem.) The bike will (or should) be significantly lighter, with the benefits that go with that. If you are riding smooth pavement, the changes will be that you feel more bumps and need to use your arms to absorb and control the front of the bike if you hit small bumps in corners. Tyre choice and pressure become more important, as does attention to the surface ahead of you. The big gain is that your bike is more efficient: there is no soft, squashy front end absorbing energy (even locked-out suspension moves), so you will go faster. Off-road riding is a completely different ball game. If you are used to suspension and riding bumpy ground hard, you will need to change your style. Expect a few prangs along the way. When riding suspension, you weight it in corners and let the shocks hold the front wheel on the ground over the bumps. Without suspension, the same technique will lead to the front wheel bouncing and losing traction, with predictable consequences...
You need to learn to let your arms become the suspension, and your arms need to control the front wheel not only in direction, but in "height" and "pressure on the ground". The term "loose" takes on a whole new meaning. It requires more skill and concentration, and far more attention to detail, than riding a suspension setup does, as well as being physically harder; but it is also, in some ways, more rewarding. A: Surly has suspension-corrected forks for mountain bikes; try the 1x1 fork for 26in wheels, or the Ogre or Karate Monkey fork for 29ers. Check it out. http://surlybikes.com/parts/category/forks
UMD scientists have discovered a mechanism for transgenerational gene silencing in the roundworm Caenorhabditis elegans. Special fluorescent dyes help to visualize neurons (magenta) and germ cells (green) in the roundworm's body. Credit: Sindhuja Devanapally For more than a century, scientists have understood the basics of inheritance: if good genes help parents survive and reproduce, the parents pass those genes along to their offspring. And yet, recent research has shown that reality is much more complex: genes can be switched off, or silenced, in response to the environment or other factors, and sometimes these changes can be passed from one generation to the next. The phenomenon has been called epigenetic inheritance, but it is not well understood. Now, UMD geneticist Antony Jose and two of his graduate students are the first to figure out a specific mechanism by which a parent can pass silenced genes to its offspring. Importantly, the team found that this silencing could persist for multiple generations—more than 25, in the case of this study. The research, which was published in the Feb. 2, 2015 online early edition of the Proceedings of the National Academy of Sciences, could transform our understanding of animal evolution. Further, it might one day help in the design of treatments for a broad range of genetic diseases. "For a long time, biologists have wanted to know how information from the environment sometimes gets transmitted to the next generation," said Jose, an assistant professor in the UMD Department of Cell Biology and Molecular Genetics. "This is the first mechanistic demonstration of how this could happen. It's a level of organization that we didn't know existed in animals before." Jose and graduate students Sindhuja Devanapally and Snusha Ravikumar worked with the roundworm Caenorhabditis elegans, a species commonly used in lab experiments. 
They made the worms' nerve cells produce molecules of double-stranded RNA (dsRNA) that match a specific gene. (RNA is a close relative of DNA, and has many different varieties, including dsRNA.) Molecules of dsRNA are known to travel between body cells (any cell in the body except germ cells, which make egg or sperm cells) and can silence genes when their sequence matches up with the corresponding section of a cell's DNA. This schematic illustrates how the gene silencing mechanism works in C. elegans. Neurons (magenta) can export double-stranded RNA (orange arrow) that match a gene (green) in germ cells. Import of RNA into germ cells results in silencing of the gene (black) within germ cells. This silencing can persist for more than 25 generations. Credit: Antony Jose The team's biggest finding was that dsRNA can travel from body cells into germ cells and silence genes within the germ cells. Even more surprising, the silencing can stick around for more than 25 generations. If this same mechanism exists in other animals—possibly including humans—it could mean that there is a completely different way for a species to evolve in response to its environment. "This mechanism gives an animal a tool to evolve much faster," Jose said. "We still need to figure out whether this tool is actually used in this way, but it is at least possible. If animals use this RNA transport to adapt, it would mean a new understanding of how evolution happens." The long-term stability of the silencing effect could prove critical in developing treatments for genetic diseases. The key is a process known as RNA interference, more commonly referred to as RNAi. This process is how dsRNA silences genes in a cell. The same process has been studied as a potential genetic therapy for more than a decade, because you can target any disease gene with matching dsRNA. But a main obstacle has been achieving stable silencing, so that the patient does not need to take repeated high doses of dsRNA. 
The roundworm C. elegans, seen here, is commonly used in laboratory studies because it reproduces quickly and has a simple body. Credit: Photo: Hai Le "RNAi is very promising as a therapy, but the efficacy of the treatment declines over time with each new cell division," Jose said. "This particular dsRNA, from C. elegans nerve cells, might have some chemical modifications that allow stable silencing to persist for many generations. Further study of this molecule could help solve the efficacy problem in RNAi therapy." Jose acknowledges the large gap between roundworms and humans. Unlike simpler animals, mammals have known mechanisms that reprogram silenced genes every generation. On the surface, it would seem as though this would prevent epigenetic inheritance from happening. And yet, previous evidence suggests that the environment may be able to cause some sort of transgenerational effect in mammals as well. Jose believes that his team's work provides a promising lead in the search for how this happens. "This is a fertile research field that will keep us busy for 10 years or more into the future," Jose said. "The goal is to achieve a very clear understanding—in simple terms—of all the tools an animal can use to evolve." More information: "Double-stranded RNA made in C. elegans neurons can enter the germline and cause transgenerational gene silencing," Sindhuja Devanapally, Snusha Ravikumar and Antony M. Jose, Proceedings of the National Academy of Sciences, www.pnas.org/cgi/doi/10.1073/pnas.1423333112 Journal information: Proceedings of the National Academy of Sciences
Question: As we know, (1) the macroscopic spatial dimension of our universe is 3 dimensions, and (2) gravity attracts massive objects together, and the gravitational force is isotropic, without directional preferences. Why do we have spiral, 2D plane-like galaxies instead of spherical or elliptical ones? Input: Gravity is (at least, seems to be) isotropic from its force law (Newtonian gravity). It should show no directional preferences from the form of the force vector $\vec{F}=\frac{GM(\vec{r}_1)m(\vec{r}_2)}{|\vec{r}_1-\vec{r}_2|^2} \hat{r}_{12}$. Einstein gravity also does not show directional dependence, at least microscopically. If gravity attracts massive objects together isotropically, and the macroscopic space dimension is 3-dimensional, it seems natural for massive objects to gather together in a spherical shape. Globular clusters (GCs), for example, are roughly spherical star clusters, as shown in the Wiki picture: However, my impression is that, even if we have observed some more spherical, ball-like elliptical galaxies, it is more common to find more planar spiral galaxies such as our Milky Way? (Is this statement correct? Let me know if I am wrong.) Take, for example, this galaxy, NGC 4414: Is there some physics or math theory that explains why galaxies turn out to be planar-like (or spiral-like) instead of spherical-like? I don't think I'd say that spirals are more common than ellipticals. It depends on where in space and in cosmic history you look, but ellipticals certainly aren't rare (though they don't get as much publicity with spectacular photographs as spirals because they're well... kind of boring). – Kyle, Jan 15 '14 at 19:57 2 Answers To understand this, let us as a starting point look at Wikipedia's sketch of the structure of a spiral galaxy: A spiral galaxy consists of a disk embedded in a spheroidal halo. The galaxy rotates around an axis through the centre, parallel to the GNP$\leftrightarrow$GSP axis in the image.
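The isotropy the question appeals to can be illustrated numerically: the magnitude of the Newtonian force between two point masses depends only on the separation distance, not on the direction of the separation vector. A small sketch, with unit masses and G set to 1 purely for illustration:

```python
import math

G = 1.0  # gravitational constant, set to 1 for illustration

def newton_force(m1, r1, m2, r2):
    """Force on mass 1 from mass 2: magnitude G*m1*m2/|r2-r1|^2, along r2-r1."""
    d = [b - a for a, b in zip(r1, r2)]          # separation vector r2 - r1
    dist = math.sqrt(sum(c * c for c in d))      # |r2 - r1|
    mag = G * m1 * m2 / dist ** 2                # inverse-square magnitude
    return [mag * c / dist for c in d]           # unit vector times magnitude

# Same separation distance along two different axes -> same force magnitude
f_x = newton_force(1.0, (0, 0, 0), 1.0, (2, 0, 0))
f_y = newton_force(1.0, (0, 0, 0), 1.0, (0, 2, 0))
mag = lambda f: math.sqrt(sum(c * c for c in f))
assert abs(mag(f_x) - mag(f_y)) < 1e-12   # isotropic: 0.25 either way
```

So, as the answers below argue, the disk shape cannot come from the force law itself; it has to come from the dynamics of the matter, in particular the dissipative, collisional behaviour of gas.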
The spheroidal halo consists mostly of Dark Matter (DM), and the DM makes up $\sim90\%$ of the mass of the Milky Way. Dynamically, it is the DM that, ehrm, matters. And DM will always arrange itself in an ellipsoidal configuration. So the question should rather be: why is there even a disk, why isn't the galaxy just an elliptical? The key to answering this lies in the gas content of a galaxy. Both stars and Dark Matter particles - whatever they are - are collisionless; they only interact with each other through gravity. Collisionless systems tend to form spheroidal or ellipsoidal systems, like we are used to from elliptical galaxies, globular clusters etc.; all of which share the characteristic that they are very gas-poor. With gas it is different: gas molecules can collide, and do so all the time. These collisions can transfer energy and angular momentum. The energy can be turned into other kinds of energy, which can escape through radiation, galactic winds etc., and as energy escapes, the gas cools and settles down into a lower-energy configuration. The gas' angular momentum, however, is harder to transfer out of the galaxy, so this is more or less conserved. The result - a collisional system with low energy but a relatively high angular momentum - is the typical thin disk of a spiral galaxy. (Something similar, but not perfectly analogous, happens in the formation of protoplanetary disks). Stars also do not collide, so they should in theory also make up an ellipsoidal shape. And some do in fact: the halo stars, including but not limited to the globular clusters. These are all very old stars, formed when the gas of the galaxy hadn't settled into the disk yet (or, for a few, formed in the disk but later ejected due to gravitational disturbances). But the large majority of stars are formed in the gas after it has settled into the disk, and so the large majority of stars will be found in the same disk. Elliptical galaxies So why are there even elliptical galaxies? 
Elliptical galaxies are typically very gas-poor, so gas dynamics is not important in these; they are rather a classical gravitational many-body system like a DM halo. The gas is depleted from these galaxies due to many different processes such as star formation, collisions with other galaxies (which are quite common), gas ejection due to radiation pressure from strongly star-forming regions, supernovae or quasars, etc. etc. - many are the ways for a galaxy to lose its gas. If colliding galaxies are sufficiently gas-depleted (and the collision results in a merger), then the resulting galaxy will not have any gas which can settle into a disk, and the kinetic energy of the stars in the new galaxy will tend to be distributed randomly due to the chaotic nature of the interaction. (This picture is simplified, as the whole business of galactic dynamics is quite hairy, but I hope it gets the fundamentals right and more or less understandable). I do not think the assumption of a random 3D environment is justified; galaxies are not separated from their environment. – Thriveth, Oct 6 '14 at 12:17 Besides @HelderVelez, I don't see what the answers in the question you link to have to do with my answer or, for that matter, your own comment...? – Thriveth, Oct 6 '14 at 12:25 "Stars also do not collide"? How so? Even if they don't collide head-on, but pass each other somewhat closely, they will both be deflected by each other's gravitational field. – endolith, Nov 25 '14 at 23:28 @endolith Yes, but gravitational tugging is a very different process from a collision and has different consequences - for example, equipartition of energy does not happen, thermodynamic equilibrium does not happen. – Thriveth, Nov 26 '14 at 2:19 A less popular version of physics.stackexchange.com/a/148423/56960, but needs downvotes as well. 
Not all galaxies have (prominent) nuclei, not all galaxies have a noticeable total angular momentum (although most of them indeed have), and survival of a huge stellar system for billions of years isn't necessarily determined by its rotation. – Incnis Mrsi, Nov 24 '14 at 17:32
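The answer's central mechanism (gas that radiates away energy while keeping its angular momentum ends up on circular, coplanar orbits) can be illustrated with a toy effective-potential calculation. This is only a sketch with illustrative units, not a galaxy simulation: for a Kepler potential at fixed specific angular momentum $L$, the lowest-energy orbit sits at the minimum of $V_\mathrm{eff}(r) = L^2/(2r^2) - GM/r$, which is the circular orbit at $r = L^2/GM$.

```python
import numpy as np

# Toy model: one gas parcel orbiting a point mass, in units where GM = 1.
# At fixed specific angular momentum L, the lowest-energy orbit is circular,
# sitting at the minimum of the effective potential
#   V_eff(r) = L^2 / (2 r^2) - GM / r.
GM = 1.0
L = 1.0

r = np.linspace(0.1, 10.0, 100_000)
v_eff = L**2 / (2.0 * r**2) - GM / r

r_min = r[np.argmin(v_eff)]   # numerical location of the minimum
r_circ = L**2 / GM            # analytic circular-orbit radius

print(f"numerical minimum: {r_min:.3f}, analytic circular orbit: {r_circ:.3f}")
```

A parcel that cools (loses energy) at fixed $L$ cannot sink below this radius, so dissipative gas piles up on circular orbits in a common plane, while collisionless stars and DM keep their spheroidal distribution.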
<!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <link rel="stylesheet" type="text/css" href="../../common.css"> <script src="../../t.js"></script> </head> <body> <pre> 1. Visit <a data-path="/viewer-archive-in-frame/frame.html">frame.html</a>. 2. A page should be seen, and archive files in its frames should be viewed normally. </pre> </body> </html>
Kiwis are seeking to travel more and further than ever before. Corporate warrior by day, Amazon warrior on holiday: it's a description many of us can relate to if the continuing trend towards adventurous holidays is anything to go by. "Nowhere is out of reach for Kiwis as they seek to travel more and further to a destination that will give them adventure and an authentic experience," Flight Centre NZ general manager product Sean Berenson says. "Exploring the sprawling jungles of the Peruvian Andes is worth putting on the list, as is hiking the W-trek in Patagonia, back-country skiing in Iceland or trekking to Mt Everest base camp." Strenuous physical activity isn't everyone's idea of a holiday, of course, but that doesn't mean you can't make the most of the continuing boom in "experiential travel". Train and event travel are also set to grow in 2018, as are cruises to private islands and pilgrimages to find the best vege food in the world. Here are 10 trends to watch out for in the New Year. National Geographic Traveller Expedition cruises are ideal for lovers of nature, wildlife and adventure. Expedition cruising Purpose-built expedition trips are giving so-called ordinary folk the chance to become modern-day adventurers. "Smaller, more exploratory cruising to far-flung destinations has piqued the interest of the more adventurous traveller," Lonely Planet's Chris Zeiher says. "These small-ship expeditions offer a more immersive experience for the traveller where smaller vessels can access more precious or protected areas." Alaska, the Galapagos Islands, Arctic Norway, Antarctica, the Amazon and the Kimberley are among the destinations to choose from. 123RF Why stay in a bland hotel room when you could rest your head in a yurt? 
Typically, such cruises will allow passengers to explore by foot, bike, horse, kayak or paddleboard. House of Travel's Dave Fordyce recommends checking out Lindblad Expeditions' National Geographic Quest. One of the newest ships, it's designed especially to navigate the wildlife-rich inlets and passageways of Alaska, carrying exploration tools such as a SplashCam and hydrophone which allow passengers to look at and listen to what's going on underwater. "Rustic" accommodation Who wants to stay somewhere that looks like a sterile version of their own bedroom? Fewer and fewer of us, it seems. Rocky Mountaineer The Rocky Mountaineer takes passengers through otherwise inaccessible terrain. Airbnb witnessed a surge in "non-traditional home" bookings in 2017, with nature lodges and ryokans (traditional Japanese inns) receiving 700 and 600 per cent more bookings respectively. Stays in yurts and RVs/campervans were also popular, with bookings up 155 and 133 per cent. "Travellers are increasingly drawn to homes that are rustic and unique, rather than simply comfortable," the accommodation site says. Train travel Exploring the wilds from the comfort of a luxury train carriage is gaining traction as Kiwis are keen to venture into untouched landscapes without having to rough it or navigate treacherous roads. Frazer Harrison Coachella should be on any music lover's bucket list. The Rocky Mountaineer, which takes passengers through otherwise inaccessible terrain in Western Canada, fits the bill perfectly, House of Travel product and channel director Dave Fordyce says. Other top-rated, top-class services include the Maharajas' Express, which travels through north and west India, the Andean Explorer, South Africa's Blue Train, and the Belmond Hiram Bingham, which operates return day trips to Machu Picchu. International music festivals Life's too short to waste it at mediocre music festivals. If you're a true music fiend you need to experience the world's best. 
Glastonbury in the UK and Coachella in the US are the ones to choose if you want to give your mates "the ultimate FOMO", Berenson says. Robert Prezioso Catching the Aussie Open in Melbourne is an easy treat. Whether you prefer to party to a backdrop of rolling pastures, golden beaches or 24/7 sunshine, there's a festival to suit. Exit Festival, set in a medieval Serbian fortress, and Fuji Rock in Japan's Niigata Prefecture, which requires a cable car trip up a mountain, offer truly one-of-a-kind experiences. Event travel It's not just the musos who are prepared to roam far and wide to indulge their passions – festival and event tourism has taken off in a big way. "A decade ago this type of travel was an alien term, but as Kiwis seek to experience more from their travel we've seen a surge in Kiwis roaming widely for sports, festivals and carnivals," says Berenson. For sports fans, the HSBC Sydney Sevens and Australian Open in Melbourne are well worth the quick trip across the ditch, he suggests. CHRIS SKELTON/STUFF More cruises are incorporating time on private islands. Visiting a place during a major cultural event can be a great way to immerse yourself in that culture and find out more about the local history. Weird and wonderful festivals are celebrated around the world year-round. The Cheung Chau Bun Festival in Hong Kong, which sees festival goers race to reach the top of towers of sweet buns up to 20 metres high, and the baby-jumping festival in Castrillo de Murcia, Spain, are two of the more out-there. Cruises to private islands Rockstars and Richard Branson aren't the only ones to holiday on private islands these days. A growing number of cruise lines offer port excursions to tropical isles inaccessible to the hoi polloi. Norwegian Cruise Line's Harvest Caye, for one, transports guests to the 31-hectare Harvest Caye island in Belize. 
Featuring a 2.8ha beach lined with loungers, an enormous pool with swim-up bar, a wide array of watersports and eating and shopping precincts, it's not exactly Robinson Crusoe territory but, for many, it's paradise nonetheless. THE DOMINION POST More Kiwis, particularly women, are waking up to the benefits of solo travel. Fordyce recommends keeping an eye out for MSC Cruises' Ocean Cay Marine Reserve in the Bahamas, set to open at the end of 2018. "Equipped with several sandy beaches, snorkel-ready coral reefs, and various dining, spa, and entertainment options, it'll be one to add to the bucket list." Solo travel Solo travel is hardly considered out-there these days but it appears many are just waking up to its benefits. "Feedback from a lot of customers is that travelling solo gives them the chance to indulge themselves fully. Many who have previously never travelled alone often describe their first solo trip as invigorating and character building," Berenson says. CHRISTEL YARDLEY/STUFF Vegos are willing to travel far and wide to find the perfect veggie meal. More women are choosing to travel alone, with many opting for active trips which allow them to meet new people and push themselves in the great outdoors. REI Adventures has designed a series of adventurous trips targeted at women. Options include hiking tours through the Southern Alps, mountain biking in the Grand Canyon and scaling California's Mt Shasta. Vege travel No longer are vegan and vegetarian travellers content to skip the meat and have a double serve of salad and fries when they hit the road: they want to go places that embrace and excel at meat-free eating, Zeiher says. Amsterdam should be high on any vegetarian traveller's list – Holidu recently named it the most vegan-friendly city in Europe for its plethora of plant-based cafes and restaurants. Italy is another top European spot for the meat-free, with vegetables playing the starring role in many traditional dishes. 
Think plates heaped with grilled and sauteed vegetable concoctions, risottos, salads, pasta dishes and Neapolitan pizza. 123RF More grown-up "children" are choosing to holiday with their parents. Home to about 500 million vegetarians, India is another vego paradise, and Gujarat, South India and Mumbai provide some of the tastiest options. Closer to home, you can typically find plenty of tasty vego treats among the street food stalls of Southeast Asia. Even barbecue-mad Australia boasts some amazing dining experiences – Zeiher recommends The Beet Retreat in Victoria's Yarra Valley, which offers vegan accommodation in the heart of wine country. Cross-generational travel Banish all memories of excruciatingly long car rides en route to a basic bach – cross-generational travel has come of age. Zeiher puts the rising number of grown-up "children" taking trips with their parents down to our ageing population and increasing lifespans. MATTHEW CATTIN/FAIRFAX NZ Staycations never go out of style. "The rise of booking platforms such as Airbnb has also made intergenerational travel far easier on an international scale, providing access to a wider variety of accommodation options that can house greater numbers," he says. Staycations For all the wealth of travel options out there, many Kiwis prefer to holiday in their own backyard, some because they're looking to recreate summer vacations of old and others because they like supporting the local economy. "Within New Zealand there are so many places to take the kids and we're seeing Rotorua, Taupo and Napier as some of the top family destinations," says Finch. A recent Wotif report found that 55 per cent of travellers feel it is important to holiday domestically to support farmers, producers, artists, crafts people and the local economy. Wherever we head though, it seems the main aim for most of us on holiday is to disconnect from our everyday lives. 
"Most Kiwi travellers (87 per cent) look at holidays as an opportunity to disconnect and 75 per cent see holidays as a time to recharge," Finch says.
import * as React from 'react'; import createSvgIcon from './utils/createSvgIcon'; export default createSvgIcon( <React.Fragment><path d="M7 11h10v2H7z" opacity=".3" /><path d="M5 15h14V9H5v6zm2-4h10v2H7v-2zm4-10h2v3h-2zm6.25 4.39l1.41 1.41 1.8-1.79-1.42-1.41zM11 20h2v3h-2zm6.24-1.29l1.79 1.8 1.42-1.42-1.8-1.79zM5.34 6.805l-1.788-1.79L4.96 3.61l1.788 1.788zM3.55 19.08l1.41 1.42 1.79-1.8-1.41-1.41z" /></React.Fragment> , 'WbIridescentTwoTone');
Q: Setting template or templateUrl in Angular Directive based on user input I have a simple directive like this: app.directive('sample',function(){ return{ restrict:'E', template:'<a href="#">Hello sample</a>', templateUrl:'' } }); I want to use templateUrl when the user declares it on the tag, like this: <sample template="some url"></sample> but fall back to the default template in the directive if nothing is set. A: template and templateUrl can be specified as functions taking two arguments - tElement and tAttrs. An easy way is to move your default template into a file and perform your logic in the templateUrl function: app.directive("sample", [ function() { var defaultTemplate = 'default.html'; return { restrict: 'E', templateUrl: function (tElement, tAttrs) { return tAttrs.template || defaultTemplate; } } } ]); Demo: http://plnkr.co/edit/rrPicuzzb6YF4Z6yh3Rn?p=preview
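The fallback in the accepted answer is plain JavaScript, so it can be sanity-checked outside Angular. A minimal sketch follows; `resolveTemplateUrl` is a hypothetical standalone helper mirroring the directive's `templateUrl` function, not part of any Angular API:

```javascript
// Hypothetical helper mirroring the answer's templateUrl function:
// use the tag's template attribute when present, else fall back.
const defaultTemplate = 'default.html';

function resolveTemplateUrl(tAttrs) {
  return tAttrs.template || defaultTemplate;
}

console.log(resolveTemplateUrl({ template: 'custom.html' })); // custom.html
console.log(resolveTemplateUrl({}));                          // default.html
```

One caveat worth noting: because the check uses `||`, an empty `template=""` attribute also falls back to the default rather than requesting an empty URL, which is usually what you want here.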
Tag Archives: Pharmovisa Inc. A co-owner and operator of three Miami discount pharmacies was sentenced today to 168 months in prison for his role in a health care fraud scheme that submitted more than $23 million in false claims to Medicare. The sentence was announced by Assistant Attorney General Lanny A. Breuer of the Justice Department’s Criminal Division; U.S. Attorney Wifredo A. Ferrer of the Southern District of Florida; Michael B. Steinbach, Special Agent in Charge of the FBI’s Miami Field Office; and Special Agent in Charge Christopher B. Dennis of the U.S. Department of Health and Human Services Office of Inspector General (HHS-OIG), Office of Investigations Miami office. Jose Carlos Morales, 55, of Miami, was sentenced by U.S. District Judge Joan A. Lenard in the Southern District of Florida. In addition to his prison term, Morales was sentenced to serve three years of supervised release and to pay a $100,000 fine. A hearing to determine the amount of restitution Morales will pay has been scheduled for April 29, 2013. On Dec. 6, 2012, Morales pleaded guilty in the Southern District of Florida to one count of conspiracy to commit health care fraud and one count of conspiracy to defraud the United States and pay illegal health care kickbacks. According to court documents, Morales was the co-owner of Pharmovisa Inc. and PharmovisaMD Inc., which operated a total of three pharmacies in Miami. Morales paid illegal health care kickbacks to co-conspirators in return for a stream of beneficiary information to be used to submit claims to Medicare and Medicaid. The beneficiaries who were referred to the pharmacies in exchange for kickback payments resided at assisted living facilities (ALFs) located in Miami. Morales and his alleged co-conspirators also paid illegal health care kickbacks to physicians in exchange for prescription referrals, which the pharmacies ultimately billed to Medicare. 
Court documents also reveal that beginning in approximately 2007, drivers working for Morales' pharmacies, at his direction, delivered "bingo cards" containing pop-out medications to ALFs located throughout the Southern District of Florida. Morales instructed the drivers to pick up any unused "bingo cards" so that Morales' pharmacy personnel could put the medications back into pill bottles. Unused and partially used medications were eventually re-billed to Medicare and Medicaid, and a majority of the previously submitted claims to Medicare and Medicaid were never reversed. Morales also instructed Morales' pharmacy personnel to place unused and partially used medications into bottles to be sold directly to the general public from the "community" pharmacy shelves. Morales and his alleged co-conspirators also engaged in sham financial transactions to facilitate and conceal the fraud schemes and the flow of fraud proceeds, according to court documents. In most instances, the sham transactions involved shell entities owned and/or controlled by Morales or his alleged co-conspirators. According to court documents, Morales and his co-conspirators submitted and caused to be submitted approximately $23,367,755 in false and fraudulent claims to the Medicare and Florida Medicaid programs. The case is being prosecuted by Trial Attorney Allan J. Medina and Special Trial Attorney William Parente of the Criminal Division’s Fraud Section. This case was investigated by the FBI and HHS-OIG and was brought as part of the Medicare Fraud Strike Force, supervised by the Criminal Division’s Fraud Section and the U.S. Attorney’s Office for the Southern District of Florida. Since its inception in March 2007, the Medicare Fraud Strike Force, now operating in nine cities across the country, has charged more than 1,480 defendants who have collectively billed the Medicare program for more than $4.8 billion. 
In addition, HHS’s Centers for Medicare and Medicaid Services, working in conjunction with HHS-OIG, are taking steps to increase accountability and decrease the presence of fraudulent providers. To learn more about the Health Care Fraud Prevention and Enforcement Action Team (HEAT), go to: www.stopmedicarefraud.gov.
[Vaginal hydrocele. Report of 55 surgically treated cases]. The authors retrospectively analyze the files of patients operated on for hydrocele between 6 January 2000 and 27 November 2001 (23 months) in Talangaï hospital at Brazzaville. The overall operation rate for that pathology was 4.44%. Prevalence according to age group was as follows: 14 infants (25.45%); three adolescents (5.45%); 14 adults (25.45%); 24 elderly patients (43.64%). The hydrocele was right-sided in 54.55% of cases, left-sided in 27.27% and bilateral in 18.18% of cases. In all patients, the hydrocele was idiopathic. Surgical cure was performed more frequently through the homolateral scrotal route than the inguinal route, and post-therapeutic healing occurred in all cases without recurrence. The authors comment on these results and review current data on the disease.
// RUN: %target-swift-frontend -Xllvm -sil-full-demangle -O -sil-inline-threshold 0 -emit-sil -primary-file %s | %FileCheck %s // // This is a .swift test because the SIL parser does not support Self. class C { required init() {} } class SubC : C {} var g: AnyObject = SubC() func gen<R>() -> R { return g as! R } extension C { class func factory(_ z: Int) -> Self { return gen() } } // The integer argument is truly dead, but the C.Type metadata argument may not be removed. // function signature specialization <Arg[0] = Dead> of static functionsigopts_self.C.factory (functionsigopts_self.C.Type)(Swift.Int) -> Self // CHECK-LABEL: sil shared @_T020functionsigopts_self1CC7factory{{[_0-9a-zA-Z]*}}FZTf4dn_n : $@convention(method) (@thick C.Type) -> @owned C // CHECK: bb0(%0 : $@thick C.Type): // CHECK: function_ref functionsigopts_self.gen<A>() -> A // CHECK: apply %{{[0-9]+}}<@dynamic_self C> // Call the function so the specialization is not dead. var x = C() var x2 = C.factory(1)
Effects of neuronal magnetic fields on MRI: numerical analysis with axon and dendrite models. Whether neuronal magnetic fields (NMFs) could cause measurable MRI signal changes in the human brain still seems controversial. In this study, we have numerically investigated the NMF effects on the MRI signal using two separate current source models for axons and dendrites. Since intracellular current distributions are different in axons and dendrites, the NMFs emanating from axons and dendrites are also very different from each other. Due to the quadrupole configuration of the intracellular current flowing through an axon, the axonal magnetic field is bipolar, causing virtually no changes in the MRI signal. On the contrary, the dendritic magnetic field is unipolar, so its effects can be accumulated during the echo time. The dendritic magnetic field has measurable effects on the MRI signal, but it is necessary to differentiate the NMF effects from the much bigger background BOLD effects to utilize the NMF effects for fMRI.
Q: Is there a methodology to assign integer values to factors in R I am quite new to R, but was wondering if there is a specific way to group/analyze integer values from my data frame i.e., Sample X : int 1 2 3 4 5 Sample Y : int 6 7 8 9 10 Sample Z : int 11 12 13 14 15 and assign these to my factor variable which has the corresponding number of levels (5 in this example) which are called in this example lvl 1, lvl 2, lvl 3, lvl 4, lvl 5. The goal is to be able to graph the observations at each level, for example lvl 1 had the observations 1, 6, and 11, lvl 2 had 2, 7, and 12, etc. I've found no clean way to do this. Other attempts have included individually typing out the name of each sample and manually linking this to the factor levels, but that has not gone well. Any advice would be appreciated! A: If I understood correctly, you want to have each x, y and z observation associated with a level and plot by level. library(ggplot2) library(reshape2) df = data.frame(x = 1:5, y = 6:10, z = 11:15) df$level = factor(paste0("lvl",1:5)) df # x y z level # 1 1 6 11 lvl1 # 2 2 7 12 lvl2 # 3 3 8 13 lvl3 # 4 4 9 14 lvl4 # 5 5 10 15 lvl5 It's easier to use long formatted data for plot (with ggplot2 package). I use reshape2::melt here but you could find an equivalent solution with tidyr::pivot_longer df <- reshape2::melt(df, id.vars = "level") df level variable value 1 lvl1 x 1 2 lvl2 x 2 3 lvl3 x 3 4 lvl4 x 4 5 lvl5 x 5 6 lvl1 y 6 7 lvl2 y 7 8 lvl3 y 8 9 lvl4 y 9 10 lvl5 y 10 11 lvl1 z 11 12 lvl2 z 12 13 lvl3 z 13 14 lvl4 z 14 15 lvl5 z 15 Finally, you can plot. Let's say you want points for each level: ggplot(df, aes(x = level, y = value)) + geom_point()
Membrane-induced folding of the cAMP-regulated phosphoprotein endosulfine-alpha. Endosulfine-alpha (ENSA) is a 121-residue cAMP-regulated phosphoprotein, originally identified as an endogenous regulator of ATP-sensitive potassium channels. ENSA has been implicated in the regulation of insulin secretion, and expression of ENSA is decreased in brains of both Alzheimer's disease (AD) and Down's syndrome patients. We recently described membrane-dependent interactions between ENSA and the Parkinson's disease associated protein alpha-synuclein. Here we characterize the conformational change in ENSA that occurs upon binding to membranes. Secondary chemical shift analysis demonstrates formation of four helices in the lipid-bound state that are not present in the absence of lipid. The helical structure is maintained in several different lipid mimetics (sodium dodecyl sulfate, dodecyl phosphocholine, lyso 1-palmitoyl phosphatidylglycerol, and phospholipid vesicles). Introduction of a mutation (S109E) to mimic PKA phosphorylation of ENSA leads to a perturbation of the fourth helix and disrupts the interaction with alpha-synuclein. These data establish ENSA as an intrinsically unstructured protein that adopts a stable structure upon membrane binding, properties it shares with its binding partner alpha-synuclein.
Cloud computing services, in which users run applications on virtual machines hosted on a distributed network of servers, are available from a number of different service providers. The cloud computing services may be hosted on a public cloud, such as a remote datacenter that hosts numerous tenant users. Cloud computing services may also be hosted on a private cloud, such as an enterprise datacenter that is available to a limited pool of users associated with the enterprise. Each cloud computing service provides its own proprietary user interface (UI) and application programming interfaces (APIs) that must be used to access services on a particular public or private cloud.
If I ever host a tournament I hope I can do as good a job. Much appreciation. It's a tournament not to miss, especially on Friday; the parade would be a good event to bring your family to see while you play. Food for the players really gives the tournament a nice touch. Mr. Alex called me last week and asked if I was going? I had a major conflict this past week-end. I am the Board President for our local Habitat For Humanity Chapter and this past week-end we had our yearly auction. This past Saturday night our auction, through donations and auctioned items, raised $13,000.00. Our first item was a beautiful patchwork that contained an American flag, etc., and our guest auctioneer for that item was former POW and Wirt County, W.Va. native Ms. Jessica Lynch. Not only did she auction off the item, she also signed it. I have read all the festival postings and note the "B" group was kinda heavy with entrants? I have a lack of p.p. knowledge, and don't possess the best crossboard skills, but still would have entered the Masters. Rest assured if you come to our ACF Nationals next month we will use the ACF ratings and have a 'seeding' committee to make sure you are properly placed as an entrant. Mr. Burton called my home and left me two messages inviting me to come. I am sorry I couldn't make it. This coming week is the N.C. Tourney and I most likely cannot make it either. A Thursday and Friday makes it difficult for many players still working to miss two business days during the week to attend. I plan on going to the Pa. Open as a 'tune-up' for our National Tournament. Mr. Ron King called me Sunday morning and indicates he will be coming to Medina, Ohio to enter our Nationals. So if any of you want a chance to play the top two rated players in the world, plan on coming to Medina, Ohio and enter the Masters! Rest assured there will be plenty of stiff competition in all three groups.
Former TOWIE Star Maria Fowler Attacked Maria Fowler was left bloody and injured after allegedly being attacked on a night out in Derby. The former TOWIE star was reportedly punched in the face, leaving her with a bloody nose and cuts strewn across her body. The incident is now being investigated by police. The photos show that Maria is clearly shaken by whatever has just happened, with blood running down her legs and chin. After numerous messages of support Maria tweeted her thanks, and she later posted a quote from the Dalai Lama. A Derby police spokesperson told the Daily Star: ‘We are investigating an assault on a 28-year-old woman. ‘The incident occurred at 2.30am on Sunday in Becketwell Lane, Derby.’ This isn’t the first time Maria, who left TOWIE almost 3 years ago, has been the victim of an attack. She was reportedly glassed in the face last February. It was that incident which resulted in her quitting alcohol.
Q: F# continuations goes on StackOverflowException Hi guys, I'm implementing an F# function that takes two lists of type (int*float) list. These two lists have different lengths. The int element of the couple is an increasing code. What I want to do is create a new list that will contain a couple (int*float) for each pair of elements of the two lists that have the same code. It's important to note that the codes in the lists are in increasing order. These lists are probably a little long, like 2-3000 elements, so I tried to implement this function using continuation passing style in order to avoid StackOverflowExceptions, but sadly I failed. This is the function; I hope you can give me some hints! let identifiedDifference list1 list2 = let rec produceResult (l1, l2) k = match l1,l2 with | [],[] | _,[] | [],_ -> k [] | (code,rate:float)::xs, (code2,rate2)::ys -> if code = code2 then produceResult (xs, ys) (fun c -> (code,Math.Abs(rate-rate2))::(k c)) elif code > code2 then produceResult (l1, ys) k else produceResult (xs, l2) k produceResult (list1, list2) id Have I done something wrong? A: (fun c -> (code,Math.Abs(rate-rate2))::(k c)) should be (fun c -> k ((code,Math.Abs(rate-rate2))::c)) to make it tail-recursive: let identifiedDifference list1 list2 = let rec produceResult (l1, l2) k = match l1,l2 with | [],[] | _,[] | [],_ -> k [] | (code,rate:float)::xs, (code2,rate2)::ys -> if code = code2 then produceResult (xs, ys) (fun c -> k ((code,Math.Abs(rate-rate2))::c)) elif code > code2 then produceResult (l1, ys) k else produceResult (xs, l2) k produceResult (list1, list2) id This will also fix your results being returned in reverse order.
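Stripped of the CPS machinery (which is only there to keep the F# recursion from growing the stack), the underlying algorithm is a standard merge over two code-sorted lists: pair up equal codes, and advance whichever side currently has the smaller code. A minimal iterative sketch of the same logic in Python (illustrative only, not the asker's F#):

```python
def identified_difference(list1, list2):
    """Pair equal codes from two code-sorted (code, rate) lists,
    emitting (code, |rate1 - rate2|); unmatched codes are skipped."""
    result = []
    i = j = 0
    while i < len(list1) and j < len(list2):
        code1, rate1 = list1[i]
        code2, rate2 = list2[j]
        if code1 == code2:
            result.append((code1, abs(rate1 - rate2)))
            i += 1
            j += 1
        elif code1 > code2:
            j += 1   # list2 is behind: advance it
        else:
            i += 1   # list1 is behind: advance it
    return result

print(identified_difference([(1, 1.0), (3, 2.0), (5, 4.0)],
                            [(1, 0.5), (4, 9.0), (5, 1.0)]))
# [(1, 0.5), (5, 3.0)]
```

Because each step advances at least one index, the loop runs in O(n + m) with constant stack depth, which is exactly the property the accepted answer's tail-recursive continuation achieves in F#.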
Cairo — One protester died and dozens were injured as secular, liberal Egyptians marched again in cities across the country after prayers Friday to denounce the Islamist government of President Mohammed Morsi. The man was killed when he was struck by two bullets outside the presidential palace in Cairo as rioters pelted elite troops from the Republican Guard guarding the building with Molotov cocktails and stones. Troops replied for the umpteenth time with water cannons, tear gas and warning shots fired into the air. The protests denouncing the rule of Morsi and his Muslim Brotherhood have become so common that they are not particularly interesting, despite the bloody fireworks. The narrow range of anti-government slogans that are chanted has become as familiar today to those who utter them as the lines of a play are to an actor. The guerrilla street-theatre, for that is what it is, always draws a fleet of orange ambulances and lots of cameras. It starts quietly and builds during the day. The violence that follows is often gratuitous, but it regularly provides dramatic visuals for Arab and Western news networks, whose narrow focus on the street often makes the protests appear to be far more important than they are. But the demonstrations, which are organized by opposition factions who don’t like each other much, have hardly galvanized the nation. Nor have they acted as a catalyst for change since the dramatic initial protests succeeded in overthrowing Hosni Mubarak’s dictatorship. This is not to denigrate the protesters. Most of them are sincere in their desire for Egypt to have something that looks and feels a lot more like a Western-style democracy. Their bravery has been proven by the fact that scores of young men have been killed and many more injured in street fights with government supporters or security forces over the past few months. 
Still, by the most charitable estimates, less than one per cent of the population of Cairo has turned out at any one time for the many rallies opposing Morsi. This Friday evening only a few thousand Cairenes marched to the presidential palace, although the grainy night images broadcast over and over again by the BBC and others made it hard not to think that far more people were involved. Despite the almost daily protests against Morsi's regime, these events have not caused him to change his policies a bit. Nor are they likely to. However, there is an outside party that may have more success at making Morsi grant concessions. The International Monetary Fund has been pressing Morsi to show that he has a firm grip on power, is slashing spending and is willing to be more inclusive of minorities and women, or his government will lose nearly $5 billion in urgently needed loans that are currently in limbo. Without that money to keep Egypt's sputtering economy going, it is difficult to see how the president will be able to go ahead with parliamentary elections tentatively rescheduled for April. Morsi has so far been able to dodge or ignore his political rivals, but the IMF poses a much bigger challenge. To achieve the stability the fund has demanded, he has to demonstrate that he can peacefully quell the protests. There is a real trick to doing this that has so far eluded him. He has been unable to do so largely because the security forces, who were the Brotherhood's implacable foes for half a century, are not much interested in helping him or his party, although it must be said that they have until now been equally uninterested in helping his rivals.
The upshot of all this is that while Egyptians' latest attempts at revolution do not amount to nearly as much as it sometimes seems when viewed through a tear-gas-filled night-vision camera lens, the Islamist government and its secular opponents stumble along in lock step with no end to the country's misery in sight.
Mohammad Omar (musician)

Ustad Mohammad Omar (1905–1980) was a musician from Afghanistan who played the rubab.

Early life and career

Mohammad Omar began music lessons under his father, Ibrahim, who taught him singing, sarod, rubab and dutar. In the mid-20th century, he was Director of the National Orchestra of Radio Afghanistan, which brought together folk musicians from the different regions and distinct ethnic communities of Afghanistan. In 1974, Mohammad Omar received a Fulbright-Hays Foreign Scholar Fellowship to teach at the University of Washington, making him the first Afghan musician to teach at a major university in the United States. On November 18, 1974, Mohammad Omar gave a public concert at the university, his first rubab performance in front of a Western audience; he was accompanied on tabla by Zakir Hussain. In 1978 he met the German jazz-rock group Embryo at the Goethe-Institut in Kabul. The concert was filmed for the movie Vagabundenkarawane by Werner Penzel.

Discography
Embryo's Reise 1980 (Schneeball 20)
Virtuoso from Afghanistan 2002 (SFW)

Notes

External links
Smithsonian Folkways Page on Ustad Mohammad Omar
Review of one of his albums on RootsWorld
World Music Central - Ustad Mohammad Omar

Category:1905 births Category:1980 deaths Category:Afghan musicians Category:Classical music in Afghanistan
Easter service this week was titled "Rescued" because God sent His Son to rescue us from eternal death. We also had the privilege to hear from one of the miners who had been trapped in the Chilean mine a few months back. He was buried along with 32 other miners for 58 days. All were rescued physically, and 22 were rescued from eternal death. It was an amazing story.

RESCUE (verb)
1. to free from confinement, danger, or evil
2. save, deliver
3. to take (as a prisoner) forcibly from custody
4. to recover (as a prize) by force
5. to deliver (as a place under siege) by armed force
— res·cu·able (adjective) — rescue (noun) — res·cu·er (noun)

EXAMPLE of RESCUE: The survivors were rescued by the Coast Guard.

ORIGIN: Middle English rescouen, rescuen, from Anglo-French rescure, from re- + escure to shake off, from Latin excutere, from ex- + quatere to shake. First known use: 14th century.

There are many ways we use the word rescue. In the US we have rescue missions and pet rescue centers. Write down as many common phrases as you can that include the word rescue. Next to each of them write down what is being rescued and from what.

DAY TWO
Let's consider animals today. There are many who believe that animals need to be rescued or saved. Just today while shopping at our local grocery store, I was asked to join a group raising money to rescue the whales. Star Trek IV: The Voyage Home was about rescuing whales. Journal today about your views on animals and how they might need to be rescued, and from what.

DAY THREE
The second definition is about saving or delivering. Can you think of a movie about someone being rescued or delivered? Write about the movie plot. How did it make you feel? Is there someone you know that you want to save?

DAY FOUR
When I think of recovering something by force, I am reminded of our servicemen. They are rescuing so many things for the United States, and I am forever grateful. Have you ever recovered something by force? What was it?
Share with a serviceman today your grateful heart for their service to you.

DAY FIVE
Consider today a time when you needed to be rescued. It does not have to be anything major. It could be a broken-down car you were rescued from. It could be a friend who stood by you in a time of need. How did being rescued change your life?

DAY SIX
Let us spend some time today journaling about those in Japan. When disaster hits, many rush to rescue those who cannot rescue themselves. Journal today about how you can assist in the rescue mission for those in Japan.

DAY SEVEN
As you go through your day today, watch for a time to rescue someone. Sitting here at McDonald's, I have seen many children that need help filling their drinks, reaching for napkins or drying their hands. It may not seem like a rescue, but it really is. When we reach out to help one another we are rescuing them in a real way.
/* [Linked List][Easy]
   Given a sorted linked list, delete all duplicates such that each element
   appears only once.

   Example 1:  Input: 1->1->2        Output: 1->2
   Example 2:  Input: 1->1->2->3->3  Output: 1->2->3
*/
/* Personal takeaway: pointer manipulation deserves careful thought. */
/* Approach 1: Since the input list is already sorted, we can decide whether a
   node is a duplicate by comparing its value with that of the node after it.
   If it is a duplicate, we change the current node's next pointer so that it
   skips the next node and points directly to the node after the next one. */
/**
 * Definition for singly-linked list.
 * struct ListNode {
 *     int val;
 *     ListNode *next;
 *     ListNode(int x) : val(x), next(NULL) {}
 * };
 */
class Solution {
public:
    ListNode* deleteDuplicates(ListNode* head) {
        ListNode* current = head;
        while (current && current->next) {
            if (current->val == current->next->val) {
                current->next = current->next->next;  // unlink the duplicate node
            } else {
                current = current->next;
            }
        }
        return head;
    }
};
Bad Guy Boss boss at 4:00 p.m.: "i need this ready for tomorrow morning." boss, the following day: "what do you mean by claiming overtime for that work? i never approved overtime!"
Boulder Real Estate News and Blog

Vogue Covers Where to Eat and Sleep in America's Happiest City
Writer Todd Plummer rides the wave created by National Geographic's recent study that declared Boulder the "Happiest City in the United States."

National Geographic Awards Boulder #1 Happiest City in the US
National Geographic, bestselling author Dan Buettner, and Gallup's social scientists teamed up to develop an index that assesses measurable expressions of happiness and identifies where Americans are living their best lives.

Foodies Know: Boulder Has Become a Hub for New Producers
The New York Times explores how Boulder, perhaps best known for craft beer and bicycles and for being home to Mork and Mindy, is known among foodies as the place where new companies are challenging the old guard in the food business.

We Are #1 Again
The Boulder Luxury Group and its partners, Leyla Steele and Zach Zeldner, are pleased to announce we continue to lead Boulder's luxury market as the #1 sales producers, producing more sales of $1M+ single-family homes in Boulder than any other agent or team.

The Boulder housing market is considered one of the best in the country in terms of stability, sound fundamentals, and appreciation. In 2016, Boulder experienced double-digit average appreciation of 14% according to the Federal Housing Finance Agency. With inventory low and demand high, choosing a listing agent who understands the unique value of your property, who has their finger on the pulse of the market, and who can effectively market your property to get you top dollar is now more important than ever.
#!/bin/ksh -p # # CDDL HEADER START # # This file and its contents are supplied under the terms of the # Common Development and Distribution License ("CDDL"), version 1.0. # You may only use this file in accordance with the terms of version # 1.0 of the CDDL. # # A full copy of the text of the CDDL should have accompanied this # source. A copy of the CDDL is also available via the Internet at # http://www.illumos.org/license/CDDL. # # CDDL HEADER END # # # Copyright (c) 2017 by Lawrence Livermore National Security, LLC. # . $STF_SUITE/include/libtest.shlib . $STF_SUITE/tests/functional/mmp/mmp.cfg verify_runnable "global" if [ -e $HOSTID_FILE ]; then log_unsupported "System has existing $HOSTID_FILE file" fi log_must set_tunable64 MULTIHOST_HISTORY $MMP_HISTORY log_must set_tunable64 MULTIHOST_INTERVAL $MMP_INTERVAL_DEFAULT log_must set_tunable64 MULTIHOST_FAIL_INTERVALS $MMP_FAIL_INTERVALS_DEFAULT log_pass "mmp setup pass"
/* Copyright (c) 2005-2019 Intel Corporation Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ #ifndef __TBB__flow_graph_node_impl_H #define __TBB__flow_graph_node_impl_H #ifndef __TBB_flow_graph_H #error Do not #include this internal file directly; use public TBB headers instead. #endif #include "_flow_graph_item_buffer_impl.h" //! @cond INTERNAL namespace internal { using tbb::internal::aggregated_operation; using tbb::internal::aggregating_functor; using tbb::internal::aggregator; template< typename T, typename A > class function_input_queue : public item_buffer<T,A> { public: bool empty() const { return this->buffer_empty(); } const T& front() const { return this->item_buffer<T, A>::front(); } bool pop( T& t ) { return this->pop_front( t ); } void pop() { this->destroy_front(); } bool push( T& t ) { return this->push_back( t ); } }; //! Input and scheduling for a function node that takes a type Input as input // The only up-ref is apply_body_impl, which should implement the function // call and any handling of the result. template< typename Input, typename Policy, typename A, typename ImplType > class function_input_base : public receiver<Input>, tbb::internal::no_assign { enum op_type {reg_pred, rem_pred, try_fwd, tryput_bypass, app_body_bypass, occupy_concurrency #if TBB_DEPRECATED_FLOW_NODE_EXTRACTION , add_blt_pred, del_blt_pred, blt_pred_cnt, blt_pred_cpy // create vector copies of preds and succs #endif }; typedef function_input_base<Input, Policy, A, ImplType> class_type; public: //! 
The input type of this receiver typedef Input input_type; typedef typename receiver<input_type>::predecessor_type predecessor_type; typedef predecessor_cache<input_type, null_mutex > predecessor_cache_type; typedef function_input_queue<input_type, A> input_queue_type; typedef typename A::template rebind< input_queue_type >::other queue_allocator_type; __TBB_STATIC_ASSERT(!((internal::has_policy<queueing, Policy>::value) && (internal::has_policy<rejecting, Policy>::value)), "queueing and rejecting policies can't be specified simultaneously"); #if TBB_DEPRECATED_FLOW_NODE_EXTRACTION typedef typename predecessor_cache_type::built_predecessors_type built_predecessors_type; typedef typename receiver<input_type>::predecessor_list_type predecessor_list_type; #endif //! Constructor for function_input_base function_input_base( graph &g, __TBB_FLOW_GRAPH_PRIORITY_ARG1(size_t max_concurrency, node_priority_t priority) ) : my_graph_ref(g), my_max_concurrency(max_concurrency) , __TBB_FLOW_GRAPH_PRIORITY_ARG1(my_concurrency(0), my_priority(priority)) , my_queue(!internal::has_policy<rejecting, Policy>::value ? new input_queue_type() : NULL) , forwarder_busy(false) { my_predecessors.set_owner(this); my_aggregator.initialize_handler(handler_type(this)); } //! Copy constructor function_input_base( const function_input_base& src) : receiver<Input>(), tbb::internal::no_assign() , my_graph_ref(src.my_graph_ref), my_max_concurrency(src.my_max_concurrency) , __TBB_FLOW_GRAPH_PRIORITY_ARG1(my_concurrency(0), my_priority(src.my_priority)) , my_queue(src.my_queue ? new input_queue_type() : NULL), forwarder_busy(false) { my_predecessors.set_owner(this); my_aggregator.initialize_handler(handler_type(this)); } //! Destructor // The queue is allocated by the constructor for {multi}function_node. // TODO: pass the graph_buffer_policy to the base so it can allocate the queue instead. // This would be an interface-breaking change. 
virtual ~function_input_base() { if ( my_queue ) delete my_queue; } task* try_put_task( const input_type& t) __TBB_override { return try_put_task_impl(t, internal::has_policy<lightweight, Policy>()); } //! Adds src to the list of cached predecessors. bool register_predecessor( predecessor_type &src ) __TBB_override { operation_type op_data(reg_pred); op_data.r = &src; my_aggregator.execute(&op_data); return true; } //! Removes src from the list of cached predecessors. bool remove_predecessor( predecessor_type &src ) __TBB_override { operation_type op_data(rem_pred); op_data.r = &src; my_aggregator.execute(&op_data); return true; } #if TBB_DEPRECATED_FLOW_NODE_EXTRACTION //! Adds to list of predecessors added by make_edge void internal_add_built_predecessor( predecessor_type &src) __TBB_override { operation_type op_data(add_blt_pred); op_data.r = &src; my_aggregator.execute(&op_data); } //! removes from to list of predecessors (used by remove_edge) void internal_delete_built_predecessor( predecessor_type &src) __TBB_override { operation_type op_data(del_blt_pred); op_data.r = &src; my_aggregator.execute(&op_data); } size_t predecessor_count() __TBB_override { operation_type op_data(blt_pred_cnt); my_aggregator.execute(&op_data); return op_data.cnt_val; } void copy_predecessors(predecessor_list_type &v) __TBB_override { operation_type op_data(blt_pred_cpy); op_data.predv = &v; my_aggregator.execute(&op_data); } built_predecessors_type &built_predecessors() __TBB_override { return my_predecessors.built_predecessors(); } #endif /* TBB_DEPRECATED_FLOW_NODE_EXTRACTION */ protected: void reset_function_input_base( reset_flags f) { my_concurrency = 0; if(my_queue) { my_queue->reset(); } reset_receiver(f); forwarder_busy = false; } graph& my_graph_ref; const size_t my_max_concurrency; size_t my_concurrency; __TBB_FLOW_GRAPH_PRIORITY_EXPR( node_priority_t my_priority; ) input_queue_type *my_queue; predecessor_cache<input_type, null_mutex > my_predecessors; void 
reset_receiver( reset_flags f) __TBB_override { if( f & rf_clear_edges) my_predecessors.clear(); else my_predecessors.reset(); __TBB_ASSERT(!(f & rf_clear_edges) || my_predecessors.empty(), "function_input_base reset failed"); } graph& graph_reference() __TBB_override { return my_graph_ref; } task* try_get_postponed_task(const input_type& i) { operation_type op_data(i, app_body_bypass); // tries to pop an item or get_item my_aggregator.execute(&op_data); return op_data.bypass_t; } private: friend class apply_body_task_bypass< class_type, input_type >; friend class forward_task_bypass< class_type >; class operation_type : public aggregated_operation< operation_type > { public: char type; union { input_type *elem; predecessor_type *r; #if TBB_DEPRECATED_FLOW_NODE_EXTRACTION size_t cnt_val; predecessor_list_type *predv; #endif /* TBB_DEPRECATED_FLOW_NODE_EXTRACTION */ }; tbb::task *bypass_t; operation_type(const input_type& e, op_type t) : type(char(t)), elem(const_cast<input_type*>(&e)) {} operation_type(op_type t) : type(char(t)), r(NULL) {} }; bool forwarder_busy; typedef internal::aggregating_functor<class_type, operation_type> handler_type; friend class internal::aggregating_functor<class_type, operation_type>; aggregator< handler_type, operation_type > my_aggregator; task* perform_queued_requests() { task* new_task = NULL; if(my_queue) { if(!my_queue->empty()) { ++my_concurrency; new_task = create_body_task(my_queue->front()); my_queue->pop(); } } else { input_type i; if(my_predecessors.get_item(i)) { ++my_concurrency; new_task = create_body_task(i); } } return new_task; } void handle_operations(operation_type *op_list) { operation_type *tmp; while (op_list) { tmp = op_list; op_list = op_list->next; switch (tmp->type) { case reg_pred: my_predecessors.add(*(tmp->r)); __TBB_store_with_release(tmp->status, SUCCEEDED); if (!forwarder_busy) { forwarder_busy = true; spawn_forward_task(); } break; case rem_pred: my_predecessors.remove(*(tmp->r)); 
__TBB_store_with_release(tmp->status, SUCCEEDED); break; case app_body_bypass: { tmp->bypass_t = NULL; __TBB_ASSERT(my_max_concurrency != 0, NULL); --my_concurrency; if(my_concurrency<my_max_concurrency) tmp->bypass_t = perform_queued_requests(); __TBB_store_with_release(tmp->status, SUCCEEDED); } break; case tryput_bypass: internal_try_put_task(tmp); break; case try_fwd: internal_forward(tmp); break; case occupy_concurrency: if (my_concurrency < my_max_concurrency) { ++my_concurrency; __TBB_store_with_release(tmp->status, SUCCEEDED); } else { __TBB_store_with_release(tmp->status, FAILED); } break; #if TBB_DEPRECATED_FLOW_NODE_EXTRACTION case add_blt_pred: { my_predecessors.internal_add_built_predecessor(*(tmp->r)); __TBB_store_with_release(tmp->status, SUCCEEDED); } break; case del_blt_pred: my_predecessors.internal_delete_built_predecessor(*(tmp->r)); __TBB_store_with_release(tmp->status, SUCCEEDED); break; case blt_pred_cnt: tmp->cnt_val = my_predecessors.predecessor_count(); __TBB_store_with_release(tmp->status, SUCCEEDED); break; case blt_pred_cpy: my_predecessors.copy_predecessors( *(tmp->predv) ); __TBB_store_with_release(tmp->status, SUCCEEDED); break; #endif /* TBB_DEPRECATED_FLOW_NODE_EXTRACTION */ } } } //! Put to the node, but return the task instead of enqueueing it void internal_try_put_task(operation_type *op) { __TBB_ASSERT(my_max_concurrency != 0, NULL); if (my_concurrency < my_max_concurrency) { ++my_concurrency; task * new_task = create_body_task(*(op->elem)); op->bypass_t = new_task; __TBB_store_with_release(op->status, SUCCEEDED); } else if ( my_queue && my_queue->push(*(op->elem)) ) { op->bypass_t = SUCCESSFULLY_ENQUEUED; __TBB_store_with_release(op->status, SUCCEEDED); } else { op->bypass_t = NULL; __TBB_store_with_release(op->status, FAILED); } } //! 
Creates tasks for postponed messages if available and if concurrency allows void internal_forward(operation_type *op) { op->bypass_t = NULL; if (my_concurrency < my_max_concurrency || !my_max_concurrency) op->bypass_t = perform_queued_requests(); if(op->bypass_t) __TBB_store_with_release(op->status, SUCCEEDED); else { forwarder_busy = false; __TBB_store_with_release(op->status, FAILED); } } task* internal_try_put_bypass( const input_type& t ) { operation_type op_data(t, tryput_bypass); my_aggregator.execute(&op_data); if( op_data.status == internal::SUCCEEDED ) { return op_data.bypass_t; } return NULL; } task* try_put_task_impl( const input_type& t, /*lightweight=*/tbb::internal::true_type ) { if( my_max_concurrency == 0 ) { return apply_body_bypass(t); } else { operation_type check_op(t, occupy_concurrency); my_aggregator.execute(&check_op); if( check_op.status == internal::SUCCEEDED ) { return apply_body_bypass(t); } return internal_try_put_bypass(t); } } task* try_put_task_impl( const input_type& t, /*lightweight=*/tbb::internal::false_type ) { if( my_max_concurrency == 0 ) { return create_body_task(t); } else { return internal_try_put_bypass(t); } } //! Applies the body to the provided input // then decides if more work is available task * apply_body_bypass( const input_type &i ) { return static_cast<ImplType *>(this)->apply_body_impl_bypass(i); } //! allocates a task to apply a body inline task * create_body_task( const input_type &input ) { return (internal::is_graph_active(my_graph_ref)) ? new( task::allocate_additional_child_of(*(my_graph_ref.root_task())) ) apply_body_task_bypass < class_type, input_type >( *this, __TBB_FLOW_GRAPH_PRIORITY_ARG1(input, my_priority)) : NULL; } //! 
This is executed by an enqueued task, the "forwarder" task* forward_task() { operation_type op_data(try_fwd); task* rval = NULL; do { op_data.status = WAIT; my_aggregator.execute(&op_data); if(op_data.status == SUCCEEDED) { task* ttask = op_data.bypass_t; __TBB_ASSERT( ttask && ttask != SUCCESSFULLY_ENQUEUED, NULL ); rval = combine_tasks(my_graph_ref, rval, ttask); } } while (op_data.status == SUCCEEDED); return rval; } inline task *create_forward_task() { return (internal::is_graph_active(my_graph_ref)) ? new( task::allocate_additional_child_of(*(my_graph_ref.root_task())) ) forward_task_bypass< class_type >( __TBB_FLOW_GRAPH_PRIORITY_ARG1(*this, my_priority) ) : NULL; } //! Spawns a task that calls forward() inline void spawn_forward_task() { task* tp = create_forward_task(); if(tp) { internal::spawn_in_graph_arena(graph_reference(), *tp); } } }; // function_input_base //! Implements methods for a function node that takes a type Input as input and sends // a type Output to its successors. template< typename Input, typename Output, typename Policy, typename A> class function_input : public function_input_base<Input, Policy, A, function_input<Input,Output,Policy,A> > { public: typedef Input input_type; typedef Output output_type; typedef function_body<input_type, output_type> function_body_type; typedef function_input<Input, Output, Policy,A> my_class; typedef function_input_base<Input, Policy, A, my_class> base_type; typedef function_input_queue<input_type, A> input_queue_type; // constructor template<typename Body> function_input( graph &g, size_t max_concurrency, __TBB_FLOW_GRAPH_PRIORITY_ARG1(Body& body, node_priority_t priority) ) : base_type(g, __TBB_FLOW_GRAPH_PRIORITY_ARG1(max_concurrency, priority)) , my_body( new internal::function_body_leaf< input_type, output_type, Body>(body) ) , my_init_body( new internal::function_body_leaf< input_type, output_type, Body>(body) ) { } //! 
Copy constructor function_input( const function_input& src ) : base_type(src), my_body( src.my_init_body->clone() ), my_init_body(src.my_init_body->clone() ) { } ~function_input() { delete my_body; delete my_init_body; } template< typename Body > Body copy_function_object() { function_body_type &body_ref = *this->my_body; return dynamic_cast< internal::function_body_leaf<input_type, output_type, Body> & >(body_ref).get_body(); } output_type apply_body_impl( const input_type& i) { // There is an extra copied needed to capture the // body execution without the try_put tbb::internal::fgt_begin_body( my_body ); output_type v = (*my_body)(i); tbb::internal::fgt_end_body( my_body ); return v; } //TODO: consider moving into the base class task * apply_body_impl_bypass( const input_type &i) { output_type v = apply_body_impl(i); #if TBB_DEPRECATED_MESSAGE_FLOW_ORDER task* successor_task = successors().try_put_task(v); #endif task* postponed_task = NULL; if( base_type::my_max_concurrency != 0 ) { postponed_task = base_type::try_get_postponed_task(i); __TBB_ASSERT( !postponed_task || postponed_task != SUCCESSFULLY_ENQUEUED, NULL ); } #if TBB_DEPRECATED_MESSAGE_FLOW_ORDER graph& g = base_type::my_graph_ref; return combine_tasks(g, successor_task, postponed_task); #else if( postponed_task ) { // make the task available for other workers since we do not know successors' // execution policy internal::spawn_in_graph_arena(base_type::graph_reference(), *postponed_task); } task* successor_task = successors().try_put_task(v); #if _MSC_VER && !__INTEL_COMPILER #pragma warning (push) #pragma warning (disable: 4127) /* suppress conditional expression is constant */ #endif if(internal::has_policy<lightweight, Policy>::value) { #if _MSC_VER && !__INTEL_COMPILER #pragma warning (pop) #endif if(!successor_task) { // Return confirmative status since current // node's body has been executed anyway successor_task = SUCCESSFULLY_ENQUEUED; } } return successor_task; #endif /* 
TBB_DEPRECATED_MESSAGE_FLOW_ORDER */ } protected: void reset_function_input(reset_flags f) { base_type::reset_function_input_base(f); if(f & rf_reset_bodies) { function_body_type *tmp = my_init_body->clone(); delete my_body; my_body = tmp; } } function_body_type *my_body; function_body_type *my_init_body; virtual broadcast_cache<output_type > &successors() = 0; }; // function_input // helper templates to clear the successor edges of the output ports of an multifunction_node template<int N> struct clear_element { template<typename P> static void clear_this(P &p) { (void)tbb::flow::get<N-1>(p).successors().clear(); clear_element<N-1>::clear_this(p); } template<typename P> static bool this_empty(P &p) { if(tbb::flow::get<N-1>(p).successors().empty()) return clear_element<N-1>::this_empty(p); return false; } }; template<> struct clear_element<1> { template<typename P> static void clear_this(P &p) { (void)tbb::flow::get<0>(p).successors().clear(); } template<typename P> static bool this_empty(P &p) { return tbb::flow::get<0>(p).successors().empty(); } }; #if TBB_DEPRECATED_FLOW_NODE_EXTRACTION // helper templates to extract the output ports of an multifunction_node from graph template<int N> struct extract_element { template<typename P> static void extract_this(P &p) { (void)tbb::flow::get<N-1>(p).successors().built_successors().sender_extract(tbb::flow::get<N-1>(p)); extract_element<N-1>::extract_this(p); } }; template<> struct extract_element<1> { template<typename P> static void extract_this(P &p) { (void)tbb::flow::get<0>(p).successors().built_successors().sender_extract(tbb::flow::get<0>(p)); } }; #endif //! Implements methods for a function node that takes a type Input as input // and has a tuple of output ports specified. 
template< typename Input, typename OutputPortSet, typename Policy, typename A>
class multifunction_input : public function_input_base<Input, Policy, A, multifunction_input<Input,OutputPortSet,Policy,A> > {
public:
    static const int N = tbb::flow::tuple_size<OutputPortSet>::value;
    typedef Input input_type;
    typedef OutputPortSet output_ports_type;
    typedef multifunction_body<input_type, output_ports_type> multifunction_body_type;
    typedef multifunction_input<Input, OutputPortSet, Policy, A> my_class;
    typedef function_input_base<Input, Policy, A, my_class> base_type;
    typedef function_input_queue<input_type, A> input_queue_type;

    // constructor
    template<typename Body>
    multifunction_input(graph &g, size_t max_concurrency,
                        __TBB_FLOW_GRAPH_PRIORITY_ARG1(Body& body, node_priority_t priority) )
        : base_type(g, __TBB_FLOW_GRAPH_PRIORITY_ARG1(max_concurrency, priority))
        , my_body( new internal::multifunction_body_leaf<input_type, output_ports_type, Body>(body) )
        , my_init_body( new internal::multifunction_body_leaf<input_type, output_ports_type, Body>(body) ) {
    }

    //! Copy constructor
    multifunction_input( const multifunction_input& src ) :
        base_type(src),
        my_body( src.my_init_body->clone() ),
        my_init_body( src.my_init_body->clone() ) {
    }

    ~multifunction_input() {
        delete my_body;
        delete my_init_body;
    }

    template< typename Body >
    Body copy_function_object() {
        multifunction_body_type &body_ref = *this->my_body;
        return *static_cast<Body*>(dynamic_cast< internal::multifunction_body_leaf<input_type, output_ports_type, Body> & >(body_ref).get_body_ptr());
    }

    // for multifunction nodes we do not have a single successor as such.  So we just tell
    // the task we were successful.
    //TODO: consider moving common parts with implementation in function_input into separate function
    task * apply_body_impl_bypass( const input_type &i) {
        tbb::internal::fgt_begin_body( my_body );
        (*my_body)(i, my_output_ports);
        tbb::internal::fgt_end_body( my_body );
        task* ttask = NULL;
        if(base_type::my_max_concurrency != 0) {
            ttask = base_type::try_get_postponed_task(i);
        }
        return ttask ? ttask : SUCCESSFULLY_ENQUEUED;
    }

    output_ports_type &output_ports(){ return my_output_ports; }

protected:
#if TBB_DEPRECATED_FLOW_NODE_EXTRACTION
    void extract() {
        extract_element<N>::extract_this(my_output_ports);
    }
#endif

    void reset(reset_flags f) {
        base_type::reset_function_input_base(f);
        if(f & rf_clear_edges) clear_element<N>::clear_this(my_output_ports);
        if(f & rf_reset_bodies) {
            multifunction_body_type *tmp = my_init_body->clone();
            delete my_body;
            my_body = tmp;
        }
        __TBB_ASSERT(!(f & rf_clear_edges) || clear_element<N>::this_empty(my_output_ports), "multifunction_node reset failed");
    }

    multifunction_body_type *my_body;
    multifunction_body_type *my_init_body;
    output_ports_type my_output_ports;

};  // multifunction_input

// template to refer to an output port of a multifunction_node
template<size_t N, typename MOP>
typename tbb::flow::tuple_element<N, typename MOP::output_ports_type>::type &output_port(MOP &op) {
    return tbb::flow::get<N>(op.output_ports());
}

inline void check_task_and_spawn(graph& g, task* t) {
    if (t && t != SUCCESSFULLY_ENQUEUED) {
        internal::spawn_in_graph_arena(g, *t);
    }
}

// helper structs for split_node
template<int N>
struct emit_element {
    template<typename T, typename P>
    static task* emit_this(graph& g, const T &t, P &p) {
        // TODO: consider to collect all the tasks in task_list and spawn them all at once
        task* last_task = tbb::flow::get<N-1>(p).try_put_task(tbb::flow::get<N-1>(t));
        check_task_and_spawn(g, last_task);
        return emit_element<N-1>::emit_this(g,t,p);
    }
};

template<>
struct emit_element<1> {
    template<typename T, typename P>
    static task* emit_this(graph& g, const T &t, P &p) {
        task* last_task = tbb::flow::get<0>(p).try_put_task(tbb::flow::get<0>(t));
        check_task_and_spawn(g, last_task);
        return SUCCESSFULLY_ENQUEUED;
    }
};

//! Implements methods for an executable node that takes continue_msg as input
template< typename Output, typename Policy>
class continue_input : public continue_receiver {
public:

    //! The input type of this receiver
    typedef continue_msg input_type;

    //! The output type of this receiver
    typedef Output output_type;
    typedef function_body<input_type, output_type> function_body_type;
    typedef continue_input<output_type, Policy> class_type;

    template< typename Body >
    continue_input( graph &g, __TBB_FLOW_GRAPH_PRIORITY_ARG1(Body& body, node_priority_t priority) )
        : continue_receiver(__TBB_FLOW_GRAPH_PRIORITY_ARG1(/*number_of_predecessors=*/0, priority))
        , my_graph_ref(g)
        , my_body( new internal::function_body_leaf< input_type, output_type, Body>(body) )
        , my_init_body( new internal::function_body_leaf< input_type, output_type, Body>(body) )
    { }

    template< typename Body >
    continue_input( graph &g, int number_of_predecessors,
                    __TBB_FLOW_GRAPH_PRIORITY_ARG1(Body& body, node_priority_t priority) )
        : continue_receiver( __TBB_FLOW_GRAPH_PRIORITY_ARG1(number_of_predecessors, priority) )
        , my_graph_ref(g)
        , my_body( new internal::function_body_leaf< input_type, output_type, Body>(body) )
        , my_init_body( new internal::function_body_leaf< input_type, output_type, Body>(body) )
    { }

    continue_input( const continue_input& src ) : continue_receiver(src),
        my_graph_ref(src.my_graph_ref),
        my_body( src.my_init_body->clone() ),
        my_init_body( src.my_init_body->clone() ) {}

    ~continue_input() {
        delete my_body;
        delete my_init_body;
    }

    template< typename Body >
    Body copy_function_object() {
        function_body_type &body_ref = *my_body;
        return dynamic_cast< internal::function_body_leaf<input_type, output_type, Body> & >(body_ref).get_body();
    }

    void reset_receiver( reset_flags f) __TBB_override {
        continue_receiver::reset_receiver(f);
        if(f & rf_reset_bodies) {
            function_body_type *tmp = my_init_body->clone();
            delete my_body;
            my_body = tmp;
        }
    }

protected:

    graph& my_graph_ref;
    function_body_type *my_body;
    function_body_type *my_init_body;

    virtual broadcast_cache<output_type > &successors() = 0;

    friend class apply_body_task_bypass< class_type, continue_msg >;

    //! Applies the body to the provided input
    task *apply_body_bypass( input_type ) {
        // There is an extra copied needed to capture the
        // body execution without the try_put
        tbb::internal::fgt_begin_body( my_body );
        output_type v = (*my_body)( continue_msg() );
        tbb::internal::fgt_end_body( my_body );
        return successors().try_put_task( v );
    }

    task* execute() __TBB_override {
        if(!internal::is_graph_active(my_graph_ref)) {
            return NULL;
        }
#if _MSC_VER && !__INTEL_COMPILER
#pragma warning (push)
#pragma warning (disable: 4127)  /* suppress conditional expression is constant */
#endif
        if(internal::has_policy<lightweight, Policy>::value) {
#if _MSC_VER && !__INTEL_COMPILER
#pragma warning (pop)
#endif
            return apply_body_bypass( continue_msg() );
        }
        else {
            return new ( task::allocate_additional_child_of( *(my_graph_ref.root_task()) ) )
                apply_body_task_bypass< class_type, continue_msg >(
                    *this, __TBB_FLOW_GRAPH_PRIORITY_ARG1(continue_msg(), my_priority) );
        }
    }

    graph& graph_reference() __TBB_override {
        return my_graph_ref;
    }

};  // continue_input

//! Implements methods for both executable and function nodes that puts Output to its successors
template< typename Output >
class function_output : public sender<Output> {
public:

    template<int N> friend struct clear_element;
    typedef Output output_type;
    typedef typename sender<output_type>::successor_type successor_type;
    typedef broadcast_cache<output_type> broadcast_cache_type;
#if TBB_DEPRECATED_FLOW_NODE_EXTRACTION
    typedef typename sender<output_type>::built_successors_type built_successors_type;
    typedef typename sender<output_type>::successor_list_type successor_list_type;
#endif

    function_output() { my_successors.set_owner(this); }
    function_output(const function_output & /*other*/) : sender<output_type>() {
        my_successors.set_owner(this);
    }

    //! Adds a new successor to this node
    bool register_successor( successor_type &r ) __TBB_override {
        successors().register_successor( r );
        return true;
    }

    //! Removes a successor from this node
    bool remove_successor( successor_type &r ) __TBB_override {
        successors().remove_successor( r );
        return true;
    }

#if TBB_DEPRECATED_FLOW_NODE_EXTRACTION
    built_successors_type &built_successors() __TBB_override { return successors().built_successors(); }

    void internal_add_built_successor( successor_type &r) __TBB_override {
        successors().internal_add_built_successor( r );
    }

    void internal_delete_built_successor( successor_type &r) __TBB_override {
        successors().internal_delete_built_successor( r );
    }

    size_t successor_count() __TBB_override { return successors().successor_count(); }

    void copy_successors( successor_list_type &v) __TBB_override {
        successors().copy_successors(v);
    }
#endif /* TBB_DEPRECATED_FLOW_NODE_EXTRACTION */

    // for multifunction_node.  The function_body that implements
    // the node will have an input and an output tuple of ports.  To put
    // an item to a successor, the body should
    //
    //     get<I>(output_ports).try_put(output_value);
    //
    // if task pointer is returned will always spawn and return true, else
    // return value will be bool returned from successors.try_put.
    task *try_put_task(const output_type &i) { // not a virtual method in this class
        return my_successors.try_put_task(i);
    }

    broadcast_cache_type &successors() { return my_successors; }

protected:
    broadcast_cache_type my_successors;

};  // function_output

template< typename Output >
class multifunction_output : public function_output<Output> {
public:
    typedef Output output_type;
    typedef function_output<output_type> base_type;
    using base_type::my_successors;

    multifunction_output() : base_type() { my_successors.set_owner(this); }
    multifunction_output( const multifunction_output &/*other*/) : base_type() { my_successors.set_owner(this); }

    bool try_put(const output_type &i) {
        task *res = try_put_task(i);
        if(!res) return false;
        if(res != SUCCESSFULLY_ENQUEUED) {
            FLOW_SPAWN(*res); // TODO: Spawn task inside arena
        }
        return true;
    }

protected:

    task* try_put_task(const output_type &i) {
        return my_successors.try_put_task(i);
    }

    template <int N> friend struct emit_element;

};  // multifunction_output

//composite_node
#if __TBB_FLOW_GRAPH_CPP11_FEATURES
template<typename CompositeType>
void add_nodes_impl(CompositeType*, bool) {}

template< typename CompositeType, typename NodeType1, typename... NodeTypes >
void add_nodes_impl(CompositeType *c_node, bool visible, const NodeType1& n1, const NodeTypes&... n) {
    void *addr = const_cast<NodeType1 *>(&n1);
    fgt_alias_port(c_node, addr, visible);
    add_nodes_impl(c_node, visible, n...);
}
#endif

}  // internal

#endif // __TBB__flow_graph_node_impl_H
Q: How to make a netctl profile for a TAP device?

Seeking to make a netctl profile for a tap device. Here is the info I was given about the connection:

    GATEWAY=192.168.117.1
    DNS=192.168.117.1
    BROADCAST=255.255.255.255 **or** 192.168.117.255 (*I was given both of these different values*)
    PREFIX=31
    STATIC IP ADDRESS=192.168.117.2/24
    TYPE=TAP

Netctl includes some examples. I used the one I found in examples/tuntap:

    Description='Example tuntap connection'
    Interface=tun0
    Connection=tuntap
    Mode='tun'
    User='nobody'
    Group='nobody'

    ## Example IP configuration
    #IP=static
    #Address='10.10.1.2/16'

Here is the profile I came up with:

    Description='My tap connection'
    Interface=tap0
    Connection=tuntap
    Mode='tap'
    User='nobody'
    Group='nobody'
    IP=static
    Address='192.168.117.2/24'
    UsePeerDNS=true
    DefaultRoute=true
    SkipDAD=yes
    DHCPReleaseOnStop=yes

Questions

- Do I need to specify the broadcast address or gateway?
- Is a prefix needed (and what is prefix 31)?
- Is there anything else I have overlooked?

A: Do I need to specify the broadcast address or gateway?

From the looks of the article/thread titled [SOLVED] Static IP wired connection doesn't work with netctl, the broadcast address can be incorporated into the static IP's definition. For example, they provided you with this:

    BROADCAST=255.255.255.255 or 192.168.117.255 (I was given both of these different values)

I'd assume that the 2nd one, 192.168.117.255, is in fact correct, which would be a /24 mask, hence your Address= already has it:

    Address='192.168.117.2/24'

Is a prefix needed (and what is prefix 31)?

Prefixes, or prefix lengths, are described in these two articles titled:

- How do prefix-lists work?
- Working with IP Addresses - The Internet Protocol Journal - Volume 9, Number 1

excerpt

    The prefix length is just a shorthand way of expressing the subnet mask.
    The prefix length is the number of bits set in the subnet mask; for instance, if the subnet mask is 255.255.255.0, there are 24 bits set, so the prefix length is 24 bits (/24).

This table shows how they're calculated:

    Prefix length   Subnet mask       Usable hosts
    /8              255.0.0.0         16,777,214
    /16             255.255.0.0       65,534
    /24             255.255.255.0     254
    /30             255.255.255.252   2
    /31             255.255.255.254   2 (point-to-point links only, RFC 3021)
    /32             255.255.255.255   1 (single host route)

So in your case, this information is a bit confusing. Your network address appears to be /24, but your prefix length is 31 bits. A /31 is normally only used on point-to-point links (RFC 3021), so it may have been quoted with the tunnel link in mind, but it conflicts with the /24 in the address you were given. In either case, I'd ignore the 31 for the time being and go with the /24.

Is there anything else I have overlooked?

Everything else in your example profile appears to check out, though note that the DHCP-related options (UsePeerDNS, DHCPReleaseOnStop) only apply to IP=dhcp and can be dropped for a static configuration. You should be good to go.

References

- netctl-profile man page
- netctl wiki page - ArchLinux
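If it helps, here is a trimmed static variant of the profile. This is an untested sketch: the DHCP-only options are dropped, and the gateway and DNS values you were given are added using the array syntax documented in the netctl.profile(5) man page:

```ini
Description='My tap connection'
Interface=tap0
Connection=tuntap
Mode='tap'
User='nobody'
Group='nobody'

IP=static
Address=('192.168.117.2/24')
Gateway='192.168.117.1'
DNS=('192.168.117.1')
```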
(1) Field of the Invention The present invention relates to a method and apparatus for feedback control of an air-fuel ratio in an internal combustion engine having at least one air-fuel ratio sensor downstream of a catalyst converter disposed within an exhaust gas passage. (2) Description of the Related Art Generally, in a feedback control of the air-fuel ratio sensor (O.sub.2 sensor) system, a base fuel amount TAUP is calculated in accordance with the detected intake air amount and detected engine speed, and the base fuel amount TAUP is corrected by an air-fuel ratio correction coefficient FAF which is calculated in accordance with the output of an air-fuel ratio sensor (for example, an O.sub.2 sensor) for detecting the concentration of a specific component such as the oxygen component in the exhaust gas. Thus, an actual fuel amount is controlled in accordance with the corrected fuel amount. The above-mentioned process is repeated so that the air-fuel ratio of the engine is brought close to a stoichiometric air-fuel ratio. According to this feedback control, the center of the controlled air-fuel ratio can be within a very small range of air-fuel ratios around the stoichiometric ratio required for three-way reducing and oxidizing catalysts (catalyst converter) which can remove three pollutants CO, HC, and NO.sub.X simultaneously from the exhaust gas. In the above-mentioned O.sub.2 sensor system where the O.sub.2 sensor is disposed at a location near the concentration portion of an exhaust manifold, i.e., upstream of the catalyst converter, the accuracy of the controlled air-fuel ratio is affected by individual differences in the characteristics of the parts of the engine, such as the O.sub.2 sensor, the fuel injection valves, the exhaust gas recirculation (EGR) valve, the valve lifters, individual changes due to the aging of these parts, environmental changes, and the like. 
That is, if the characteristics of the O.sub.2 sensor fluctuate, or if the uniformity of the exhaust gas fluctuates, the accuracy of the air-fuel ratio feedback correction amount FAF also fluctuates, thereby causing fluctuations in the controlled air-fuel ratio. To compensate for the fluctuation of the controlled air-fuel ratio, double O.sub.2 sensor systems have been suggested (see: U.S. Pat. Nos. 3,939,654, 4,027,477, 4,130,095, 4,235,204). In a double O.sub.2 sensor system, another O.sub.2 sensor is provided downstream of the catalyst converter, and thus an air-fuel ratio control operation is carried out by the downstream-side O.sub.2 sensor in addition to an air-fuel ratio control operation carried out by the upstream-side O.sub.2 sensor. In the double O.sub.2 sensor system, although the downstream-side O.sub.2 sensor has lower response speed characteristics when compared with the upstream-side O.sub.2 sensor, the downstream-side O.sub.2 sensor has an advantage in that its output fluctuation characteristics are small when compared with those of the upstream-side O.sub.2 sensor, for the following reasons: (1) On the downstream side of the catalyst converter, the temperature of the exhaust gas is low, so that the downstream-side O.sub.2 sensor is not affected by a high temperature exhaust gas. (2) On the downstream side of the catalyst converter, although various kinds of pollutants are trapped in the catalyst converter, these pollutants have little effect on the downstream-side O.sub.2 sensor. (3) On the downstream side of the catalyst converter, the exhaust gas is mixed so that the concentration of oxygen in the exhaust gas is approximately in an equilibrium state. Therefore, according to the double O.sub.2 sensor system, the fluctuation of the output of the upstream-side O.sub.2 sensor is compensated for by a feedback control using the output of the downstream-side O.sub.2 sensor. Actually, as illustrated in FIG.
1, in the worst case, the deterioration of the output characteristics of the O.sub.2 sensor in a single O.sub.2 sensor system directly causes a deterioration in the emission characteristics. On the other hand, in a double O.sub.2 sensor system, even when the output characteristics of the upstream-side O.sub.2 sensor are deteriorated, the emission characteristics are not deteriorated. That is, in a double O.sub.2 sensor system, good emission characteristics are still obtained as long as the output characteristics of the downstream-side O.sub.2 sensor remain stable. In the above-mentioned double O.sub.2 sensor system, for example, an air-fuel ratio feedback control parameter such as a rich skip amount RSR and/or a lean skip amount RSL is calculated in accordance with the output of the downstream-side O.sub.2 sensor, and an air-fuel ratio correction amount FAF is calculated in accordance with the output of the upstream-side O.sub.2 sensor and the air-fuel ratio feedback control parameter (see: U.S. Pat. No. 4,693,076). In this case, the air-fuel ratio feedback control parameter is stored in a backup random access memory (RAM). Therefore, when the downstream-side O.sub.2 sensor is brought to a non-activation state, such as a fuel cut-off state, to stop the calculation of the air-fuel ratio feedback control parameter by the downstream-side O.sub.2 sensor, the air-fuel ratio correction amount FAF is calculated in accordance with the output of the upstream-side O.sub.2 sensor and the air-fuel ratio feedback control parameter which was calculated in an activation state of the downstream-side O.sub.2 sensor (i.e., an air-fuel ratio feedback control mode for the downstream-side O.sub.2 sensor) and was stored in the backup RAM. Note that, in a fuel cut-off state, an air-fuel ratio feedback control for the upstream-side O.sub.2 sensor is also prohibited.
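The control scheme described above (a base amount TAUP corrected by FAF, whose skip amounts RSR/RSL are in turn trimmed by the downstream-side sensor) can be sketched as follows. This is an illustrative Python sketch only; the function names, gains, and limits are assumptions for the example, not values taken from the patent.

```python
# Illustrative sketch of the double O2-sensor scheme described above.
# All names, gains, and limits are invented for illustration.

def base_fuel_amount(intake_air, engine_speed, k=1.0):
    """Base fuel amount TAUP from detected intake air amount and engine speed."""
    return k * intake_air / engine_speed

def fuel_injection_amount(taup, faf):
    """Actual fuel amount: base amount corrected by the correction coefficient FAF."""
    return taup * faf

def update_skip_amounts(rsr, rsl, downstream_rich, step=0.001,
                        rs_min=0.02, rs_max=0.10):
    """Downstream-side sensor slowly trims the skip amounts RSR/RSL."""
    if downstream_rich:   # after-catalyst exhaust rich -> bias control toward lean
        rsr = max(rs_min, rsr - step)
        rsl = min(rs_max, rsl + step)
    else:                 # after-catalyst exhaust lean -> bias control toward rich
        rsr = min(rs_max, rsr + step)
        rsl = max(rs_min, rsl - step)
    return rsr, rsl

def update_faf(faf, upstream_rich, prev_upstream_rich, rsr, rsl, ki=0.0002):
    """Upstream-side sensor drives FAF: skip on a rich/lean switch, integrate otherwise."""
    if upstream_rich and not prev_upstream_rich:
        faf -= rsl                            # just switched lean -> rich: skip lean-ward
    elif not upstream_rich and prev_upstream_rich:
        faf += rsr                            # just switched rich -> lean: skip rich-ward
    else:
        faf += -ki if upstream_rich else ki   # gradual integration between switches
    return faf
```

Storing RSR/RSL across fuel cut-off states, as the patent describes with the backup RAM, would amount to simply retaining these two values between calls instead of recomputing them.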
In the above-mentioned double O.sub.2 sensor system, the air-fuel ratio feedback control conditions for the downstream-side O.sub.2 sensor are as follows: the coolant temperature is higher than a predetermined value; the engine is not in an idling state; the engine is not in a fuel cut-off state; a secondary air suction system is not driven for forcibly causing the air-fuel ratio upstream of the catalyst converter to be lean; and the downstream-side O.sub.2 sensor is in an activation state. Other conditions may be introduced. Therefore, even when all the air-fuel ratio feedback conditions for the downstream-side O.sub.2 sensor are satisfied, the downstream-side O.sub.2 sensor may not be completely in an activation state, or the O.sub.2 storage effect of the three-way catalysts may remain. For example, when the engine is in a fuel cut-off state, or in a lean driving state for forcibly causing the engine to be at a lean air-fuel ratio regardless of the output of the O.sub.2 sensors, the three-way catalysts absorb O.sub.2 molecules; therefore, immediately after the engine returns to a driving state of the stoichiometric air-fuel ratio, the three-way catalysts expel the stored O.sub.2 molecules. This is the so-called O.sub.2 storage effect. Particularly, in a descending driving mode, if racing occurs too frequently, this invites fuel cut-off operations, and the O.sub.2 storage effect is remarkably exhibited. As a result, even when the air-fuel ratio upstream of the catalyst converter is actually rich, the air-fuel ratio downstream of the catalyst converter remains lean for a long time, so that the output of the downstream-side O.sub.2 sensor indicates a lean state.
Therefore, if an air-fuel ratio feedback control for the downstream-side O.sub.2 sensor is carried out immediately after the engine is switched to a driving state of the stoichiometric air-fuel ratio, the air-fuel ratio feedback control parameter may be so large or so small that an air-fuel ratio feedback control by the upstream-side O.sub.2 sensor using the air-fuel ratio feedback control parameter produces an overrich air-fuel ratio, thus increasing the HC and CO emissions and raising the fuel consumption. Particularly, in a system where the air-fuel ratio feedback control parameter is stored in the backup RAM in a fuel cut-off state or the like, as explained above, if frequent switching from a fuel cut-off state to a fuel cut-off recovery state and vice versa occurs, the controlled air-fuel ratio becomes even more overrich, which means that an air-fuel ratio feedback control for the downstream-side O.sub.2 sensor is meaningless. The above-mentioned overrich air-fuel ratio problem also arises in a single O.sub.2 sensor system having only one O.sub.2 sensor downstream of the catalyst converter.
The Saracens winger, nicknamed “Mr Arrogant” by opposition fans due to his theatrical try celebrations, went over twice to rub the Gunners’ noses in the Vicarage Road snow. And Scotland will be praying his efforts are not a sign of things to come at Twickenham in the Six Nations opener a week on Saturday. Ashton was delighted with his second try which caught the Gunners completely cold. He said: “Hardly any of their guys knew we had scored, it worked an absolute treat. “Owen made it look as if he was going to kick for goal but he had given me the nod out wide. Before they realised what was happening he fired the ball out in my direction, his weight and direction were perfect.” Edinburgh began well and Greg Tonks set up a raid with a 30-metre surge then Ben Cairns almost latched on to a lob by skipper Greig Laidlaw. But the muscle of the Sarries pack was clear to see as they demolished the Edinburgh front row at the first scrum. And when the Scots were shoved back in a rolling maul Owen Farrell hit his 25th successful pot at goal in a row after Stuart McInally was caught offside. More one-way traffic at a set-piece paved the way for Farrell to strike again in 15 minutes and his tally went to three soon after. Saracens then upped the pace to create a simple try. A chip by Richard Wigglesworth left the Edinburgh defence rooted – giving Ashton plenty of time and space to flop on the ball. Farrell’s amazing feat of marksmanship ended, however, as he saw his conversion effort crash back off the near post. The Gunners then grabbed a touchdown out of nothing. Wigglesworth made a hash of dealing with a kick from Tonks who followed up by hacking the ball towards the line before diving on it and sliding over. Laidlaw slotted the extras. A lack of concentration by the visitors then allowed Ashton to complete his try double. 
They assumed Farrell was about to tee up another shot at goal but instead he launched a pinpoint cross kick into the path of the winger who was left with a clear path to glory. Prop Matt Stevens rumbled over for touchdown No.3, goaled by Farrell. And veteran Charlie Hodgson darted over to ensure the precious bonus point with Farrell converting again. Chris Wyles completed the rout in stoppage time, Farrell adding his third conversion.
Some of our lambs resting in the shade. Our lambs are not organic because we give them some medication for parasites once or twice a season. However, they are grass-fed and receive a small amount of grain and kelp. They receive no hormones or artificial growth stimulants.
1. Field of the Invention The present invention relates to a three-dimensional image display device capable of displaying a three-dimensional image over its entire periphery, a method of manufacturing the same, and a three-dimensional image display method. 2. Description of the Related Art Various proposals have been made regarding a multi-directional three-dimensional image display device based on a light reproduction method which images a subject over its entire periphery or reproduces a three-dimensional image over the entire periphery of the subject on the basis of two-dimensional image information for three-dimensional image display and the like created by a computer. For example, a three-dimensional image display device which is observable from all directions is disclosed in “Three-dimensional image display device observable from all directions”, URL:http://hhil.hitachi.co.jp/products/transpost.html. This three-dimensional image display device includes a viewing angle restricted screen, a rotation mechanism, an upper mirror, a lower mirror group, a projector, and a personal computer, and displays a three-dimensional image using binocular parallax. The personal computer controls the projector and the rotation mechanism. The projector projects an image for three-dimensional image display onto the upper mirror. The image for three-dimensional image display projected onto the upper mirror is reflected by the lower mirror group and is then projected onto the viewing angle restricted screen. The viewing angle restricted screen rotates at high speed by the rotation mechanism. If the three-dimensional image display device is configured as described above, a three-dimensional image can be viewed from any angle of 360° because the background is transparent. A 3D video display which is observable from all directions is disclosed in “Cylindrical 3D Video Display Observable from All Directions”, URL:http://www.yendo.org/seelinder/. 
This 3D video display includes a cylindrical rotary body for three-dimensional image display and a motor. A plurality of vertical lines which allow light to be transmitted therethrough are provided on the peripheral surface of the rotary body. A timing controller, a ROM, an LED array, an LED driver, and an address counter are provided in the rotary body. The timing controller is connected to the address counter, the ROM, and the LED driver and controls outputs thereof. The image data for three-dimensional image display is stored in the ROM. On the other hand, a slip ring is provided at the rotary shaft of the rotary body. Electric power is supplied to components in the rotary body through the slip ring. The address counter generates an address on the basis of a set/reset signal from the timing controller. The ROM is connected to the address counter. The ROM receives a read control signal from the timing controller and an address from the address counter, reads the image data for three-dimensional image display, and outputs it to the LED driver. The LED driver receives the image data from the ROM and the emission control signal from the timing controller and drives the LED array. The LED array emits light by control of the LED driver. The motor rotates the rotary body. If the 3D video display is configured as described above, a three-dimensional image can be displayed over the range of the entire periphery of 360°. Accordingly, a three-dimensional image can be observed without wearing the glasses for binocular parallax. In relation to this kind of multi-directional three-dimensional image display device, JP-A-2004-177709 (page 8, FIG. 7) discloses a three-dimensional image display device. This three-dimensional image display device includes a light allocation means and a cylindrical two-dimensional pattern display means. The light allocation means is provided on the front or back surface of a display screen which has a convex curved shape when seen by a viewer. 
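As a rough illustration of the rotational scanning just described, the address counter maps the rotor's angular position to a ROM address, the ROM lookup returns the image column for that angle, and the LED driver lights the vertical LED array accordingly. The Python sketch below is illustrative only; the column count, LED count, and stored pattern are invented for the example:

```python
NUM_COLUMNS = 360   # one stored image column per degree of rotation (assumption)
NUM_LEDS = 64       # LEDs in the vertical array (assumption)

# Stand-in for the ROM holding the image data for three-dimensional display.
rom = [[(col + row) % 2 for row in range(NUM_LEDS)] for col in range(NUM_COLUMNS)]

def column_address(angle_deg):
    """Address counter: map the current rotor angle to a ROM address."""
    return int(angle_deg) % NUM_COLUMNS

def drive_leds(angle_deg):
    """Timing controller + LED driver: fetch the column and return the LED states."""
    return rom[column_address(angle_deg)]
```

As the rotary body spins, calling drive_leds for each successive angle reproduces one full image per revolution, which is the essence of the persistence-of-vision display described.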
The light allocation means has a curved surface on which a plurality of openings are formed or lenses are formed in the array shape, so that light beams from a plurality of pixels on the display screen are allocated to the openings or the lenses. The two-dimensional pattern display means displays a two-dimensional pattern on the display screen. If the three-dimensional image display device is configured as described above, it is possible to efficiently execute image mapping of a three-dimensional image which makes full-motion moving image display easy. Accordingly, even if the viewing position is changed, a three-dimensional image can be displayed with high resolution without having an adverse effect on the three-dimensional image. Moreover, JP-A-2005-114771 (page 8, FIG. 3) discloses a light reproduction type display device. This display device includes one light emitting unit and a cylindrical screen. The light emitting unit has a structure capable of rotating around the rotary shaft. The screen is disposed around the light emitting unit and forms a part of a rotary body which is axisymmetric with respect to the rotary shaft. A plurality of light emitting sections are arrayed on a side of the light emitting unit facing the screen. Two or more different directions are emission directions of light beams of the light emitting sections, and the emission angle of light is restricted to a predetermined range. The light emitting unit rotates around the rotary shaft for rotation scanning of the light emitting sections and the amount of emitted light of the light emitting section is modulated according to the given information so that an image is displayed on a screen. If the display device is configured as described above, a three-dimensional image can be displayed over the range of the entire periphery of 360°. Accordingly, many people can observe the three-dimensional image without wearing the glasses for binocular parallax. 
Moreover, JP-T-2002-503831 discloses a display device which presents the same image to all viewers present around the device by displaying an image in a curved state within a cylindrical device while rotating the entire device. JP-A-10-97013 discloses a three-dimensional display device which performs three-dimensional display by making a display unit, among a plurality of display units corresponding to the number of parallax views, irradiate light within a unit angle of predetermined parallax toward a viewer while rotating it.
"Meth mouth": rampant caries in methamphetamine abusers. Rampant dental caries is a characteristic finding in methamphetamine abusers. The popularity of methamphetamine, particularly among the gay community where it is linked to the spread of HIV, its ready availability, and rapid spread across the nation have placed methamphetamine use in an epidemic status in many communities unaccustomed to dealing with drug abuse. We present a case of a 25-year-old male "meth" abuser of unknown HIV, hepatitis B virus (HBV), and hepatitis C virus (HCV) status to promote recognition by the health care team of the association of rampant dental caries with methamphetamine abuse for appropriate intervention to ensure successful treatment and prevention of disease progression.
Center for Neurobiology and Behavior, College of Physicians and Surgeons of Columbia University, 1051 Riverside Drive, New York, NY 10032, USA.

Abstract

Restricted and regulated expression in mice of VP16-CREB, a constitutively active form of CREB, in hippocampal CA1 neurons lowers the threshold for eliciting a persistent late phase of long-term potentiation (L-LTP) in the Schaffer collateral pathway. This L-LTP has unusual properties in that its induction is not dependent on transcription. Pharmacological and two-pathway experiments suggest a model in which VP16-CREB activates the transcription of CRE-driven genes and leads to a cell-wide distribution of proteins that prime the synapses for subsequent synapse-specific capture of L-LTP by a weak stimulus. Our analysis indicates that synaptic capture of CRE-driven gene products may be sufficient for consolidation of LTP and provides insight into the molecular mechanisms of synaptic tagging and synapse-specific potentiation.
This invention relates to a sport device of the type used in striking a sport object and propelling it a distance from the user and, in particular, an improved golf club wherein player control is enhanced. A number of sporting activities require the participant to grasp and move a sport device, be it a golf club or a croquet or polo mallet, in a controlled manner to strike and propel an object, typically a ball, toward a defined goal. In the case of the game of golf, the goal is very much reduced in size so that very precise strokes are needed to ultimately place the sport object in the hole. The overall weight of the club determines whether or not the user has sufficient strength to handle the club. Adjustments to the overall weight are typically made by adding material at the handle and/or in the region where the clubhead or striking surface is joined to the end of the club shaft. In the case of golf clubs, this region proximate to the head is termed the hosel. While overall weight is significant for the user, performance is affected as well by the swing weight of the club. In the athletic goods industry, the swing weight of a club refers to the relationship of the clubhead weight to the overall weight of the club. The swing weight scale has sixty gradations, each of which signifies a certain ratio of weights apart from the overall club weight. As the clubhead weight increases with an increasing swing weight, the shaft bends more during the swing and the club swings heavier and slower. The traditional method of altering swing weight in a golf club is to disassemble the club and add or subtract lead in the hosel region where the head meets the shaft. However, the swing weight can also be altered during manufacture by adding weight to the handle region if the clubhead can no longer tolerate the removal of additional material.
The combination of simultaneously changing clubhead and handle-region weights to accommodate a particular player's preference without altering the overall weight of the club has been suggested in the past. The adjustments needed to move several gradations on the swing weight scale are slight, since one gradation is approximately equal to the weight of a dollar bill. An experienced golfer can typically sense a variation of three gradations in swing weight. While swing weight is important for the feel and performance of clubs used to propel the ball large distances, its significance decreases when the distances are shorter. Stability of the club when within the player's grasp is increasingly important as the accuracy demanded of the club increases, the most sensitive club being the putter, with which the ball is taken to the hole. The need for enhanced stability has generated a family of golf clubs wherein weight distribution within the clubhead has been altered without changing the overall club weight or the swing weight, primarily through concentration of the head weight in regions on either side of the striking surface by making the surface area smaller or reducing its thickness. These efforts to provide clubs with variable swing weights or altered clubheads have concentrated on keeping the overall weight characteristics constant. They have relied upon techniques acting within the length of the club and have not directed attention to providing stabilization by intentionally moving the center of gravity of the club closer to the hand-grip region of the club. Accordingly, it is an object of the present invention to provide sport clubs with improved stability, especially those clubs wherein the need for accuracy is paramount.
This invention enables the user to alter the distance by which the center of gravity of the club is moved along the shaft without requiring assistance from the manufacturer or a technician. Furthermore, this invention can be installed on existing clubs and still permits modification of the effect by the user according to his perception of the stabilizing effect required for his game.
Electrophoretic analysis of ITS from Piscirickettsia salmonis Chilean isolates. Piscirickettsia salmonis is the most important pathogen in salmonid mariculture in Chile. Since it was first reported, numerous piscirickettsiosis outbreaks differing in virulence and mortality have occurred. Genetic variability among P. salmonis isolates has been suggested as one factor to explain this; however, until now, isolates obtained from outbreaks have not been analyzed. Knowledge of the genetic variability of P. salmonis is very limited, and no useful method is available for screening isolates for genetic variation without sequencing. Here we report an electrophoretic analysis of the internal transcribed spacer region (ITS) of eleven P. salmonis isolates obtained from different salmon species and locations in southern Chile. When the PCR products were subjected to polyacrylamide gel electrophoresis (PAGE), a characteristic electrophoretic pattern was observed, distinguishable from the ITS of other bacteria, including fish pathogens. Although this pattern is conserved in all isolates, a difference in ITS electrophoretic mobility was observed, clearly defining two groups: ITS with higher or with lower electrophoretic mobility, including the LF-89 and EM-90 isolates, respectively. A higher ITS sequence homology within each group was shown by heteroduplex mobility assay (HMA). Our results show that the genetic variability between Chilean P. salmonis isolates allows the differentiation of two groups, in agreement with the behavior observed previously when six P. salmonis isolates from three geographic origins were analyzed by 16S, 23S and ITS sequencing. PAGE analysis of ITS together with HMA could form the basis of an assay for screening genetic variability between P. salmonis isolates.
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/1999/REC-html401-19991224/strict.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=0" /> <title>Message from the shop {shop_name}</title> <style> @media only screen and (max-width: 300px){ body { width:218px !important; margin:auto !important; } .table {width:195px !important;margin:auto !important;} .logo, .titleblock, .linkbelow, .box, .footer, .space_footer{width:auto !important;display: block !important;} span.title{font-size:20px !important;line-height: 23px !important} span.subtitle{font-size: 14px !important;line-height: 18px !important;padding-top:10px !important;display:block !important;} td.box p{font-size: 12px !important;font-weight: bold !important;} .table-recap table, .table-recap thead, .table-recap tbody, .table-recap th, .table-recap td, .table-recap tr { display: block !important; } .table-recap{width: 200px!important;} .table-recap tr td, .conf_body td{text-align:center !important;} .address{display: block !important;margin-bottom: 10px !important;} .space_address{display: none !important;} } @media only screen and (min-width: 301px) and (max-width: 500px) { body {width:308px!important;margin:auto!important;} .table {width:285px!important;margin:auto!important;} .logo, .titleblock, .linkbelow, .box, .footer, .space_footer{width:auto!important;display: block!important;} .table-recap table, .table-recap thead, .table-recap tbody, .table-recap th, .table-recap td, .table-recap tr { display: block !important; } .table-recap{width: 295px !important;} .table-recap tr td, .conf_body td{text-align:center !important;} } @media only screen and (min-width: 501px) and (max-width: 768px) { body {width:478px!important;margin:auto!important;} .table {width:450px!important;margin:auto!important;} .logo, .titleblock, .linkbelow, .box, .footer, 
.space_footer{width:auto!important;display: block!important;} } @media only screen and (max-device-width: 480px) { body {width:308px!important;margin:auto!important;} .table {width:285px;margin:auto!important;} .logo, .titleblock, .linkbelow, .box, .footer, .space_footer{width:auto!important;display: block!important;} .table-recap{width: 295px!important;} .table-recap tr td, .conf_body td{text-align:center!important;} .address{display: block !important;margin-bottom: 10px !important;} .space_address{display: none !important;} } </style> </head> <body style="-webkit-text-size-adjust:none;background-color:#fff;width:650px;font-family:Open-sans, sans-serif;color:#555454;font-size:13px;line-height:18px;margin:auto"> <table class="table table-mail" style="width:100%;margin-top:10px;-moz-box-shadow:0 0 5px #afafaf;-webkit-box-shadow:0 0 5px #afafaf;-o-box-shadow:0 0 5px #afafaf;box-shadow:0 0 5px #afafaf;filter:progid:DXImageTransform.Microsoft.Shadow(color=#afafaf,Direction=134,Strength=5)"> <tr> <td class="space" style="width:20px;padding:7px 0">&nbsp;</td> <td align="center" style="padding:7px 0"> <table class="table" bgcolor="#ffffff" style="width:100%"> <tr> <td align="center" class="logo" style="border-bottom:4px solid #333333;padding:7px 0"> <a title="{shop_name}" href="{shop_url}" style="color:#337ff1"> <img src="{shop_logo}" alt="{shop_name}" /> </a> </td> </tr> <tr> <td align="center" class="titleblock" style="padding:7px 0"> <font size="2" face="Open-sans, sans-serif" color="#555454"> <span class="title" style="font-weight:500;font-size:28px;text-transform:uppercase;line-height:33px">Hello,</span> </font> </td> </tr> <tr> <td class="space_footer" style="padding:0!important">&nbsp;</td> </tr> <tr> <td class="box" style="border:1px solid #D6D4D4;background-color:#f8f8f8;padding:7px 0"> <table class="table" style="width:100%"> <tr> <td width="10" style="padding:7px 0">&nbsp;</td> <td style="padding:7px 0"> <font size="2" face="Open-sans, sans-serif" 
color="#555454"> <p data-html-only="1" style="border-bottom:1px solid #D6D4D4;margin:3px 0 7px;text-transform:uppercase;font-weight:500;font-size:18px;padding-bottom:10px"> Newsletter subscription </p> <span style="color:#777"> For signing up for our newsletter, we are pleased to offer you the following discount coupon: <span style="color:#333"><strong>{discount}</strong></span> </span> </font> </td> <td width="10" style="padding:7px 0">&nbsp;</td> </tr> </table> </td> </tr> <tr> <td class="space_footer" style="padding:0!important">&nbsp;</td> </tr> <tr> <td class="footer" style="border-top:4px solid #333333;padding:7px 0"> <span><a href="{shop_url}" style="color:#337ff1">{shop_name}</a> powered by <a href="https://webkul.com" style="color:#337ff1">Webkul&trade;</a></span> </td> </tr> </table> </td> <td class="space" style="width:20px;padding:7px 0">&nbsp;</td> </tr> </table> </body> </html>
Osteolytic Paget's bone disease in a young man. Rapid healing with human calcitonin therapy. Paget's bone disease is rare in young adults. Severe osteolytic Paget's bone disease in a 28-year-old man was found to respond, clinically, biochemically, and radiographically, within one month to daily subcutaneous injections of 0.5 mg of synthetic human calcitonin. After two years of therapy, he remains asymptomatic and has no biochemical evidence of Paget's bone disease while receiving injections three times a week. Despite aggressive disease, young patients may rapidly demonstrate the same beneficial response to synthetic human calcitonin therapy as has been observed in middle-aged or elderly patients with Paget's bone disease.
Q: Header image alignment coming out wrong in NavigationView (Android) I have created a NavigationView in my app. For the header, I have added an image and a TextView, but their alignment is not coming out properly: the header content always starts above the top of the screen. Thanks in advance. Here is my code: <android.support.v4.widget.DrawerLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:id="@+id/drawer_layout" android:layout_width="match_parent" android:layout_height="match_parent" android:fitsSystemWindows="true" tools:openDrawer="start"> <include layout="@layout/app_bar_main" /> <android.support.design.widget.NavigationView android:id="@+id/nav_view" android:layout_width="wrap_content" android:layout_height="match_parent" android:layout_gravity="start" android:fitsSystemWindows="true" app:headerLayout="@layout/nav_header_main" app:menu="@menu/activity_main_drawer" > </android.support.design.widget.NavigationView> HeaderView: <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:id="@+id/view_container" android:layout_width="match_parent" android:layout_height="wrap_content" android:theme="@style/ThemeOverlay.AppCompat.Dark"> <RelativeLayout android:id="@+id/relativeLayout2" android:layout_width="match_parent" android:layout_height="250dp" android:background="@color/colorPrimary"> <!-- <ImageView android:id="@+id/img_header_bg" android:layout_width="match_parent" android:layout_height="match_parent" android:background="@color/colorPrimary" android:contentDescription="@string/app_name" android:scaleType="fitXY" />--> <de.hdodenhof.circleimageview.CircleImageView android:id="@+id/img_logo" android:layout_width="130dp" android:layout_height="130dp" android:scaleType="centerCrop" /> <!-- <ImageView android:id="@+id/img_logo" android:layout_width="match_parent" android:layout_height="match_parent" 
android:layout_centerInParent="true" android:layout_marginLeft="45dp" android:layout_marginStart="45dp" android:contentDescription="@string/app_name" android:scaleType="fitXY" />--> <TextView android:id="@+id/name" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="" android:gravity="center" android:layout_below="@+id/img_logo" android:textAppearance="@style/TextAppearance.AppCompat.Body1" /> <TextView android:id="@+id/user_name" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_below="@+id/name" android:layout_centerInParent="true" android:gravity="center" android:textAppearance="@style/TextAppearance.AppCompat.Body1" /> </RelativeLayout> <LinearLayout android:id="@+id/linear_1" android:layout_width="match_parent" android:layout_height="40dp" android:layout_below="@+id/relativeLayout2" android:background="@color/colorSpinnernavihead" android:visibility="gone"> <Spinner android:layout_width="match_parent" android:layout_height="match_parent" android:spinnerMode="dropdown" android:popupBackground="@color/colorWhite" android:dropDownWidth="200dp" android:backgroundTint="@color/colorBlack" android:id="@+id/spinner_navigation"> </Spinner> </LinearLayout> <LinearLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_below="@+id/linear_1" android:layout_alignParentBottom="true" android:background="@color/colorWhite" android:orientation="vertical"> <View android:id="@+id/view_line" android:layout_width="match_parent" android:layout_height="1dp" android:background="@color/colorGrayNormal"/> </LinearLayout> A: replace your code with this <RelativeLayout android:id="@+id/relativeLayout2" android:layout_width="match_parent" android:layout_height="250dp" android:background="@color/colorPrimary"> <de.hdodenhof.circleimageview.CircleImageView android:id="@+id/img_logo" android:layout_width="130dp" android:layout_height="130dp" android:layout_centerHorizontal="true" 
android:layout_marginTop="10dp" android:scaleType="centerCrop" /> <TextView android:id="@+id/name" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_below="@+id/img_logo" android:gravity="center" android:text="" android:textAppearance="@style/TextAppearance.AppCompat.Body1" /> <TextView android:id="@+id/user_name" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_below="@+id/name" android:layout_centerInParent="true" android:gravity="center" android:textAppearance="@style/TextAppearance.AppCompat.Body1" /> </RelativeLayout>
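A further point beyond the answer above: because the NavigationView in the question sets android:fitsSystemWindows="true", the header layout is typically expected to consume the status-bar inset itself, otherwise its content can appear to start above the visible screen. A hedged sketch of that tweak, assuming only the attribute shown is added to the header's root element (all IDs and the theme are from the question's layout):

```xml
<!-- Sketch: root element of nav_header_main with fitsSystemWindows added
     so the header content is padded down below the status bar.
     Everything else is unchanged from the question's layout. -->
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/view_container"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:fitsSystemWindows="true"
    android:theme="@style/ThemeOverlay.AppCompat.Dark">

    <!-- header children (img_logo, name, user_name, spinner) as before -->

</RelativeLayout>
```

Whether this is needed depends on the theme's status-bar configuration, so treat it as something to try rather than a guaranteed fix.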
Q: Can't get values submitted using Thymeleaf I am new to Thymeleaf, and I have a question about how to retrieve a field from HTML (using Thymeleaf) in Java (Spring Boot). Here are the code and the error that I am getting: HTML (part with issue) <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1.0" /> <title>Entity Migration</title> <!-- CSS --> <link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet"> <link href="css/style.css" type="text/css" rel="stylesheet" media="screen,projection" /> <link href="css/materialize.css" type="text/css" rel="stylesheet" media="screen,projection" /> </head> <body> <nav class="light-blue lighten-1" role="navigation"> <div class="nav-wrapper container"> <a id="logo-container" href="#" class="brand-logo">Entity Migration</a> <ul class="right hide-on-med-and-down"> <li><a href="#">Logout</a></li> </ul> <ul id="nav-mobile" class="side-nav"> <li><a href="#">Entity Migration</a></li> </ul> <a href="#" data-activates="nav-mobile" class="button-collapse"><i class="material-icons">Logout</i></a> </div> </nav> <div class="section no-pad-bot" id="index-banner"> <div class="container"> <br> <br> <div class="row center"> <h5 class="header col s12 light">Fill the form below to generate your XML:</h5> </div> <br> <br> </div> </div> <form class="col s12" action="#" th:action="@{/prepareData}" th:object="${entity}" method="post"> <div class="row"> <div class="input-field col s4"> <input type="number" id="release" name="release" placeholder="Release" class="validate" th:value="*{release}"/> </div> <div class="input-field col s4"> <input placeholder="Version" id="version" name="version" type="number" class="validate" th:value="*{version}"/> </div> </div> <input type="submit" value="Generate XML" id="generate" 
class="btn-large waves-effect waves-light orange" /> </div> </form> <!-- Scripts--> <script src="https://code.jquery.com/jquery-2.1.1.min.js"></script> <script src="js/init.js"></script> <script src="js/materialize.js"></script> <script> $(document).ready(function() { $('select').material_select(); }); </script> </body> </html> Java (Spring Boot Controller) @PostMapping(value = "/prepareData") public String prepareData(@ModelAttribute(value="entity") EntityMigration entity) { TemplatePrepare tP = new TemplatePrepare(); tP.prepareMainTemplate(entity); return "results"; } EntityMigration (Java Model) public class EntityMigration { private String release; private String version; public String getRelease() { return release; } public void setRelease(String release) { this.release = release; } public String getVersion() { return version; } public void setVersion(String version) { this.version = version; } } Error 2017-11-16 14:01:02.445 ERROR 26932 --- [nio-8080-exec-1] org.thymeleaf.TemplateEngine : [THYMELEAF][http-nio-8080-exec-1] Exception processing template "mainForm": An error happened during template parsing (template: "class path resource [templates/mainForm.html]") org.thymeleaf.exceptions.TemplateInputException: An error happened during template parsing (template: "class path resource [templates/mainForm.html]") (...) Caused by: org.attoparser.ParseException: Exception evaluating SpringEL expression: "release" (template: "mainForm" - line 55, col 23) (...) org.springframework.expression.spel.SpelEvaluationException: EL1007E: Property or field 'release' cannot be found on null (...) What am I doing wrong? Thank you. A: The HTML parsing exception was caused by forgetting to close the input tags. 
Please replace: <input type="number" id="release" placeholder="Release" class="validate" th:value="*{release}"> <input placeholder="Version" id="version" type="number" class="validate"> with: <input type="number" id="release" placeholder="Release" class="validate" th:value="*{release}"/> <input placeholder="Version" id="version" type="number" class="validate"/> The latter error: org.springframework.expression.spel.SpelEvaluationException: EL1007E: Property or field 'release' cannot be found on null is caused by trying to access 'release' on 'entity' -> 'entity' is null, so Thymeleaf can't render it. You must add an 'entity' attribute to the model in order to render it. To avoid the SpelEvaluationException you can check for null in the controller: if (entityMigration != null) { model.addAttribute("entity", entityMigration); } else { model.addAttribute("entity", new EntityMigration()); }
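The guard itself is just a null check: hand the view a non-null backing object so `*{release}` always has something to resolve against. A minimal sketch of that pattern outside Spring (class and method names here are illustrative, not from the question; in the real controller the result would go into the model via `model.addAttribute("entity", ...)` in a GET handler before the form is first rendered):

```java
// Sketch of the null-guard pattern from the answer, stripped of Spring.
// The nested EntityMigration mirrors the question's model class.
public class EntityGuardDemo {
    public static class EntityMigration {
        private String release;
        private String version;
        public String getRelease() { return release; }
        public void setRelease(String release) { this.release = release; }
        public String getVersion() { return version; }
        public void setVersion(String version) { this.version = version; }
    }

    // Fall back to a fresh instance when no entity exists yet,
    // e.g. on the initial GET request for the empty form.
    public static EntityMigration entityOrDefault(EntityMigration existing) {
        return existing != null ? existing : new EntityMigration();
    }

    public static void main(String[] args) {
        EntityMigration fromRequest = null;              // first visit: nothing bound yet
        EntityMigration bound = entityOrDefault(fromRequest);
        System.out.println(bound != null);               // prints true

        bound.setRelease("12");
        System.out.println(entityOrDefault(bound).getRelease()); // prints 12
    }
}
```

With a non-null object in the model, Thymeleaf can evaluate `th:object="${entity}"` and the `*{release}` / `*{version}` selections without throwing EL1007E.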
The renderings for the new, smaller O’Gara’s Bar and Grill on Snelling and Selby avenues have been released, and that corner is going to look very different. The design for the space where the sprawling pub currently sits includes 163 apartment units, 205 parking stalls, an integrated co-working space and more than 4,000 square feet of space for O’Gara’s. The current bar space is more than four times that size. Dan O’Gara, the third-generation owner of the restaurant, said in a statement: “This is a perfect project for the next chapter of O’Gara’s. While we will retain lots of little touches that will remind of our past, our smaller footprint and outdoor patio area will reshape our business, making it feel like a great neighborhood pub and restaurant.” Ryan Cos. is the developer on the project. The company submitted plans to the city of St. Paul and the goal is to begin construction this fall.
Introduction {#Sec1} ============ In the past, it was customary to withhold analgesia, particularly opioids, in patients with acute abdominal pains prior to establishment of a definitive diagnosis. Analgesics were thought to interfere with diagnosis by masking the evolution of symptoms and signs with a subsequent delay in surgical treatment \[[@CR1]\]. The dogma was popularized by Cope's \"Early Diagnosis of the Acute Abdomen,\" first published in 1926, and this dogma may have been valid at that time in the absence of sophisticated diagnostic facilities and also because of the traditional use of large doses of morphine. This approach, which is not evidence based, has been challenged by numerous studies in the last decade \[[@CR2]--[@CR8]\]. These studies have demonstrated a significant reduction of pain at the time of initial assessment without interfering with diagnostic accuracy. A more recent edition of Cope's \"Early Diagnosis of the Acute Abdomen\" suggests that withholding analgesics is a cruel practice that should be condemned \[[@CR9]\]. Despite this growing evidence of the usefulness of analgesia in such clinical situations, there is still reluctance on the part of many physicians to prescribe analgesia in these cases \[[@CR10]--[@CR13]\]. Emergency medicine as a speciality is still in a rudimentary state of development in many countries in the developing world. Patients with acute abdominal symptoms are first seen mostly by general medical practitioners and casualty officers in the emergency rooms in these countries. In light of this, it is imperative to bring to attention issues that will enhance patient care by this group of physicians regarding the management of acute abdominal pain in the clinical setting in developing countries. For this reason, a survey was carried out to evaluate the current opinion and practice of Nigerian doctors regarding the use of analgesics in patients with acute abdominal pain during the initial evaluation. 
Methodology {#Sec3} =========== A one-page survey was distributed to Nigerian doctors by two of the authors on different occasions and at different locations in the country where there were assemblies of Nigerian doctors from different parts of the country, such as conferences, seminars and professional association meetings, in 2007. The surveys were collected on the spot. Demographic data such as age, sex, qualification, specialty, post-qualification experience in years, level of practice and analgesia policy of the respondents' institutions were requested. The survey included information regarding analgesic use in acute abdominal pain, such as the average number of patients with abdominal pain seen per year, the type of analgesic prescribed, when analgesics were prescribed and the reason to withhold analgesics, if any. Also the effect of analgesics on the evolution of signs, diagnostic accuracy and outcome were surveyed. The respondents were divided into two groups based on post-qualification experience: "less experienced" and "experienced." The respondents with less than 10 years' post-qualification experience were classified as "less experienced." Those with more than 10 years' post-qualification experience were classified as "experienced." The respondents were classed into another two groups based on whether they were in a surgical specialty. Those in a surgical specialty, including gynecology, were classed as "surgeons," and those who were in other specialties were classed as "non-surgeons." Doctors with less than 2 years' post-qualification experience and those not in clinical practice were excluded. Epi Info 2003 statistical software was used to analyze the results. Some of the results were expressed in percentages. Chi-square test was applied to the observed values in the "experienced" and "less experienced" groups, and also to the "surgeon" and "non-surgeon" groups. The significance level was set at p \< 0.05. 
Results {#Sec4}
=======

A total of 562 surveys were distributed, of which 23 were excluded from the analysis because of incomplete data. The respondents were 497 males (92.5%) and 42 females (7.5%). The age range of the respondents was 28--57 years, with a mean of 37.1 years and standard deviation of 7.15 years. Post-qualification experience in years ranged from 2 to 33, with a mean of 10.7 and standard deviation of 7.44. Of the respondents, 294 (54.5%) had less than 10 years' post-qualification experience; 168 respondents (31.2%) had postgraduate fellowship qualifications, 133 (24.7%) were specialists or specialists in training in surgery, and 245 (45.5%) were in a surgical specialty; 490 respondents (90.9%) practiced at the tertiary care level (Table [1](#Tab1){ref-type="table"}).

Table 1  Some characteristics of respondents

| | n | % |
|---|---|---|
| *Specialty* | | |
| Surgical specialty | 245 | 45.5 |
| Surgery | 133 | 24.7 |
| Gynecology and obstetrics | 122 | 20.8 |
| Others | 105 | 19.5 |
| None | 189 | 35.1 |
| *Level of practice* | | |
| Primary care | 14 | 2.6 |
| Secondary care | 35 | 6.5 |
| Tertiary care | 490 | 90.9 |

Ninety-one respondents (16.9%) practiced in institutions with an analgesic policy for use in abdominal pain; 336 respondents (62.8%) saw more than 40 patients per year (Fig. [1](#Fig1){ref-type="fig"}). Analgesics prescribed included antispasmodics by 168 respondents (31.2%) and simple analgesics by 154 respondents (28.6%) (Fig. [2](#Fig2){ref-type="fig"}).

Fig. 1  Number of patients with abdominal pain seen per year by respondents. Fig. 2  Analgesic type prescribed by respondents.

Four hundred fifty-five respondents (84.4%) thought analgesics interfere with the evolution of signs, 420 (77.9%) felt they cause impairment of diagnosis, and 294 (54.5%) thought they have adverse effects on outcome (Table [2](#Tab2){ref-type="table"}). 
Table 2  Respondents' views on analgesic interference with evolution of signs, effect on diagnosis and outcome

| | n | % |
|---|---|---|
| *Interference with evolution of signs* | | |
| Yes | 455 | 84.4 |
| No | 84 | 15.6 |
| *Effect on diagnosis* | | |
| Enhance | 28 | 5.2 |
| Impair | 420 | 77.9 |
| No effect | 91 | 16.9 |
| *Effect on outcome* | | |
| Adverse effect | 294 | 54.5 |
| Beneficial | 112 | 20.8 |
| None | 133 | 24.7 |

Two hundred seventy-two respondents (50.4%) would not administer analgesics if the diagnosis was unclear; 65 respondents (12%) would not if a surgical opinion was required. Other reasons for withholding analgesics are presented in Fig. [3](#Fig3){ref-type="fig"}.

Fig. 3  Reasons to withhold analgesics by respondents.

Two hundred sixty-six of 294 respondents in the "experienced" group thought analgesics interfere with the evolution of signs as compared with 189 of 245 respondents in the "less experienced" group. The findings regarding effects on diagnostic accuracy and outcome are shown in Table [3](#Tab3){ref-type="table"}.

Table 3  Views of respondents regarding effects of analgesics on the evolution of signs, diagnosis and patient outcome

Interference with evolution of signs:

| Group | Yes | No | Total |
|---|---|---|---|
| Experienced | 266 | 28 | 294 |
| Less experienced | 189 | 56 | 245 |
| Total | 455 | 84 | 539 |

X^2^ = 2.5798, p \> 0.1

Effect on diagnosis:

| Group | Enhance | Impair | No effect | Total |
|---|---|---|---|---|
| Experienced | 7 | 238 | 49 | 294 |
| Less experienced | 21 | 182 | 42 | 245 |
| Total | 28 | 420 | 91 | 539 |

X^2^ = 1.598, p \> 0.4

Effect on outcome:

| Group | Adverse | Beneficial | None | Total |
|---|---|---|---|---|
| Experienced | 147 | 70 | 77 | 294 |
| Less experienced | 147 | 42 | 56 | 245 |
| Total | 294 | 112 | 133 | 539 |

X^2^ = 0.8443, p \> 0.6
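As a quick arithmetic check, each percentage in Table 2 is simply its count over the 539 analyzed surveys (562 distributed minus 23 excluded for incomplete data). An illustrative sketch (the class name is ours, not from the paper):

```java
// Re-derive the Table 2 percentages from the raw counts over the
// 539 analyzed surveys (562 distributed minus 23 excluded).
public class Table2Check {
    public static void main(String[] args) {
        int total = 562 - 23; // 539 analyzed surveys
        String[] labels = {
            "Interference: yes", "Interference: no",
            "Diagnosis: enhance", "Diagnosis: impair", "Diagnosis: no effect",
            "Outcome: adverse", "Outcome: beneficial", "Outcome: none"
        };
        int[] counts = {455, 84, 28, 420, 91, 294, 112, 133};
        for (int i = 0; i < counts.length; i++) {
            // e.g. 455/539 = 84.4%, matching the published figure
            System.out.printf("%s: %d/%d = %.1f%%%n",
                    labels[i], counts[i], total, 100.0 * counts[i] / total);
        }
    }
}
```

Each printed value rounds to the percentage reported in Table 2, and the counts within each question sum to 539.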
Table 4  Views of respondents regarding the effects of analgesics on evolution of signs, diagnosis and patient outcome

Interference with evolution of signs:

| Group | Yes | No | Total |
|---|---|---|---|
| Surgeons | 189 | 56 | 245 |
| Non-surgeons | 266 | 28 | 294 |

X^2^ = 2.5798, p \> 0.1

Effect on diagnosis:

| Group | Enhance | Impair | No effect | Total |
|---|---|---|---|---|
| Surgeons | 21 | 175 | 49 | 245 |
| Non-surgeons | 7 | 245 | 42 | 294 |
| Total | 28 | 420 | 91 | 539 |

X^2^ = 2.1248, p \> 0.3

Effect on outcome:

| Group | Adverse | Beneficial | None | Total |
|---|---|---|---|---|
| Surgeons | 112 | 77 | 56 | 245 |
| Non-surgeons | 182 | 35 | 77 | 294 |
| Total | 294 | 112 | 133 | 539 |

X^2^ = 4.5055, p \> 0.1

Discussion {#Sec5}
==========

There are various impediments to the concept of analgesic use in patients with acute abdominal pain, and these include failure of the physician to appreciate the severity of pain as experienced by the patient \[[@CR14]\] and fear of masking clinical signs, causing a delay in diagnosis and definitive treatment. Other reasons put forward include alleged interference with the ability to give informed consent in the event of need for surgery, but this has been disputed by many studies \[[@CR15]--[@CR17]\]. In this study 90.9% of the respondents practiced at the tertiary care level. This may be due to the method of recruitment of respondents for this study; moreover, there seems to be a concentration of medical practitioners in these tertiary centers. Only 16.9% of the respondents practiced in institutions with a clear analgesic use policy for abdominal pain; this is similar to the findings of Zimmerman et al. in Israel \[[@CR18]\]. Simple analgesics and antispasmodics were the more commonly prescribed medications by respondents in this study (59.8%), and only 18.2% of respondents prescribed narcotics. Also, 50.4% of the respondents would not prescribe analgesics if the diagnosis was not clear, and 25% would not if a surgical consultation was required. 
This is not surprising since the majority of the respondents (84.5%) held the traditional view that analgesics interfere with the evolution of signs, therefore impairing diagnosis (77.9%), and that analgesics have an adverse effect on outcome (54.5%). These findings are essentially similar to the results of other surveys that show the reluctance of physicians to administer analgesics to patients with abdominal pain in the emergency situation \[[@CR10]--[@CR13]\]. This view was not substantially affected by the specialty of the respondents in this study or by their post-qualification experience, as shown in Tables [3](#Tab3){ref-type="table"} and [4](#Tab4){ref-type="table"}. Some studies have shown considerable reluctance on the part of surgeons regarding the administration of analgesics in patients with undifferentiated abdominal pain in the emergency situation \[[@CR19]\]. Others have shown that the more experienced surgeons or physicians are even less liberal in the use of analgesics, particularly narcotics, for these patients \[[@CR18]\].

Conclusion {#Sec6}
==========

This study shows that withholding analgesics from patients with acute abdominal pain is prevalent among the Nigerian doctors studied. This practice is not influenced significantly by specialty or length of post-qualification experience. The needed long-term measures are capacity building and training of doctors in the specialty of emergency medicine, with the ultimate aim of establishing this specialty in the country as it exists in the developed world, which would improve the overall management of all patients presenting acutely, including those with acute abdominal pain. In the interim, the organization of seminars and training programs on acute care is necessary to improve patient care. The views expressed in this paper are those of the author(s) and not those of the editors, editorial board or publisher.
About this document

This document sets out Ofcom's provisional decision not to advertise or re-advertise licences in 13 specific areas, or their substitutes, identified for the roll-out of local TV services but where no licence has been awarded to date. This will release Comux, the operator of the transmission infrastructure for local TV, from its current obligation to build the transmission infrastructure in these locations. We consider that continuing to require the extension of the local TV transmission network to these locations, or substitute areas, as previously planned would have an adverse impact on the economic viability of the local TV sector as a whole. The document explains in detail the full reasoning behind our provisional decision. We invite interested or affected parties to comment on Ofcom's provisional decision by 1 June.

1. Introduction

Local TV services were first licensed by Ofcom in 2013, and there are now local TV services licensed to broadcast in 34 different locations across the UK. The local TV broadcast licences ("L-DTPS licences") that we have granted to date followed the decisions we set out in Ofcom's Local TV Statement,[1] which we published in May 2012. This set out how we would exercise our powers and duties to implement the roll-out of local TV, including how we would license a single new local TV multiplex to carry all of the individual local TV services. We said that the services would be rolled out in two phases. In Phase 1, we said that the local multiplex licensee should develop the necessary infrastructure to carry a minimum of 21 local TV services in locations that Ofcom had identified as suitable for local TV services. In Phase 2, the multiplex licensee would be required to build out to the additional locations it had itself proposed in its multiplex licence application. 
There are currently 13 locations which were intended to be part of the phased roll-out of local TV, but for which (for various reasons) no local TV licence has been awarded. We have considered whether we should advertise or re-advertise local TV licences in these areas, or substitute them with licences for nearby areas, to complete the process anticipated in the Local TV Statement. We are minded not to do so for the reasons we set out in this document. Before taking a final decision, we are providing stakeholders with an opportunity to comment on our proposal, in particular to identify any matter(s) that they consider we should take into account which we have not covered in this consultation.

In the rest of this section, we set out by way of background: the obligations that we have imposed on the local multiplex licensee, Comux, and on the local TV licensees to secure the provision and availability of local TV services; the funding structure of Comux and local TV services; and how the licensing of local TV services has developed in practice since the Local TV Statement was published in May 2012.[2]

Relevant licence obligations of the local multiplex licensee and local TV licensees

Ofcom awarded the local multiplex licence to Comux. In its licence application, Comux set out its commitment to build a transmission network based on Ofcom's Coverage Note dated 10 May 2012 for all 21 Phase 1 sites: "We also confirm that we will expand to all of the 28 Phase 2 locations, subject to local TV companies obtaining licences to broadcast at those locations."[3] This commitment is incorporated into the conditions in Comux's multiplex licence, which require Comux to establish, operate and maintain a transmission network for the multiplex broadcast of the local television services at the relevant locations. Comux is also required by the conditions of its licence to make licensed local TV services available through reserved capacity on its multiplex, and there is an equivalent obligation on L-DTPS licence holders to make their services available to the local multiplex.

When Ofcom published the Local TV Statement, the general indications were that there would be competition to provide local TV services at the locations that had been identified. However, we recognised the possibility that we might not receive applications for all advertised local TV locations, and that some applications might not satisfactorily address the statutory criteria for a licence award. Therefore, we said that we would advertise licences for all locations proposed by the multiplex licensee, and that where we had advertised a licence but not made an award, e.g. because there were no applicants, we would re-advertise it. We also said that, in the event that no L-DTPS licence was awarded, we may instead seek to advertise a licence for the nearest equivalent-sized location that Ofcom could make available in its place.[4]

Funding arrangements for local TV transmission network and local TV services

As part of the government's local TV policy, funding from the BBC was made available to fund the building of the local television transmitter infrastructure.

[1] Local TV Statement: Licensing Local Television. How Ofcom will exercise its new powers and duties to license new local television services. 10 May 2012, available here: data/assets/pdf_file/0020/54236/local-tv-statement.pdf
[2] Local TV Statement: Licensing Local Television. How Ofcom will exercise its new powers and duties to license new local television services. 10 May 2012, available here: data/assets/pdf_file/0020/54236/local-tv-statement.pdf, paragraph 5
This funding has been used by Comux to build the transmission infrastructure required for the carriage of the 34 local TV services that have been awarded licences by Ofcom since 2013.[5] The BBC funding for the local TV infrastructure came to an end on 31 July. Consequently, the costs of building any further infrastructure will have to be funded by other means. In addition, there are the ongoing operational costs of running the multiplex, which Comux passes on to the local TV licensees. Comux is in fact owned by the collective of L-DTPS licensees, and any profits it makes are distributed to them to offset the operational costs of the network that the licensees would otherwise have to meet.

As shown in the Communications Market Report, local television services have been funded primarily through a mix of advertising, BBC funding (primarily protected funding through BBC purchase of local news items), other commercial and non-commercial income, and teleshopping. The protected BBC funding was intended to provide the new local television services with some funding certainty in their initial start-up phase. However, as the services mature, they need to secure alternative sources of revenue to become self-sustaining. As noted in paragraph 1.8, one source of income for the local TV services is rebates from Comux itself, to the extent it is able to generate profits.

[3] This was specified on page 8 of the Application from Comux for the licence.
[4] Local TV Statement, 10 May 2012, available here: data/assets/pdf_file/0020/54236/local-tv-statement.pdf, paragraph 2.62.
[5] The last of these locations started broadcasting on 31 July.
The analysis of financial information about the sector in our Communications Market Report 2017 demonstrates that the sector as a whole still faces challenges in diversifying income sources; figures continue to show an overall net debt, notwithstanding some improvement in performance.

Advertising and grant of local TV licences

The first local TV licences to be advertised were for each of the following 21 locations: Belfast, Birmingham, Brighton & Hove, Bristol, Cardiff, Edinburgh, Glasgow, Grimsby, Leeds, Liverpool, London, Manchester, Newcastle, Norwich, Nottingham, Oxford, Plymouth, Preston, Sheffield, Southampton and Swansea. All of these licences were awarded apart from the one for Plymouth (for which there were no applicants), and by 24 August 2015 all of the 20 licensed services had launched, with Comux taking responsibility for building the necessary transmitter infrastructure for each locality.

Licences for each of the Phase 2 locations which Comux had identified in its application, and to which it was therefore required to extend its transmission network once a local TV licence had been granted, were advertised in three rounds during 2013 and 2014. From the list of Phase 2 locations, we advertised a licence but did not make an award in respect of nine locations (Bangor, Barnstaple, Forth Valley, Gloucester, Inverness, Limavady, Derry/Londonderry, Luton and Stoke-on-Trent). By the end of 2014, this left three locations from the Phase 2 list still to be advertised: Kidderminster, Bromsgrove and Stratford-upon-Avon.

In 2014, Ofcom decided to make part of the spectrum used to broadcast digital television, including local TV, available for mobile data use.[8] Until this exercise was complete, the future coverage of local services could not be predicted with confidence, and we therefore put on hold any further advertisements of local TV licences.
We have now reached a point where the planning for 700 MHz clearance is almost complete, and we have a much better understanding of how the economic models of both Comux and the local TV services themselves are working in practice. We have therefore considered whether our original intention (as stated in our Local TV Statement in 2012) to advertise, re-advertise or substitute licences that have not been advertised or awarded is still appropriate. For the reasons set out in Section 2, we are minded not to advertise licences for these locations or any substitute locations. We have reached this provisional view by applying the same tests we used in 2012 (including the impact on citizens and consumers) for the selection of suitable locations for local TV services, and taking into account the information we have obtained about the operation of the local TV sector and its viability. We set out the basis for our provisional decision in the next section.

[8] See Decision to make the 700 MHz band available for mobile data - statement, available here: data/assets/pdf_file/0024/46923/700-mhz-statement.pdf

2. Assessment and provisional decision

When Ofcom selected the locations for Phase 1 and Phase 2 of the roll-out of local TV in 2012,[9] we applied the following criteria: technical feasibility; evidence of local demand; and size/economic viability.[10] We have applied the same criteria now to assess whether to advertise, re-advertise or substitute licences for the 13 locations where licence awards have yet to be made, using data we have gathered since the launch of local TV services to inform our assessment. The criteria involve consideration of the following elements:
- For technical feasibility, we have considered the household coverage provided by spectrum potentially available in the relevant area.
- For evidence of local demand, we have considered the current level of interest from potential applicants.
- For economic viability, we have considered two aspects: the likely economic viability of a service at each individual location; and the impact of licensing these new local TV services on the economic viability of the local TV multiplex licence holder, Comux, and of the existing local TV licensees.

We have made our assessment in light of our statutory duties, including:
- our principal duty under section 3 of the Communications Act 2003 to further the interests of citizens in relation to communications matters and the interests of consumers in relevant markets, where appropriate by promoting competition; and
- our duties under section 3 of the Wireless Telegraphy Act 2006, when carrying out our spectrum functions, to have regard to the demand for spectrum and the desirability of promoting the efficient management and use of spectrum.

As set out in more detail below, our assessment reveals that in the majority of the 13 locations there is likely to be suitable spectrum available, and there is some evidence of interest from potential applicants. However, we consider that local TV services in only five of the locations in question have a reasonable chance of being economically viable, taking account of their estimated household coverage and the low proportion of licence awards we have been able to make for areas covering 50,000 households or fewer. Furthermore, based on our understanding of the capital expenditure required and Comux's operating costs, we consider that the costs of extending the transmission network to any of these locations, or any substitute locations, would put at risk the viability of the local TV sector as a whole.

[9] As noted earlier, the Phase 1 locations were a baseline that anyone applying for the multiplex licence was expected to build. Applicants could choose which Phase 2 locations they would commit to building; Comux committed to all of them.
[10] Local TV Statement, 10 May 2012, available here: data/assets/pdf_file/0020/54236/local-tv-statement.pdf. The criteria for selecting Phase 1 locations are set out at paragraphs 3.5 (technical feasibility), 3.10 and 3.15 (economic viability for the multiplex) and 3.14 and 3.16 (local interest and size/economic viability of individual locations). Similar criteria were used for selecting Phase 2 locations, as set out in paragraph 3.35 (technical feasibility, local demand and economic viability).

Technical feasibility

We have assessed the household coverage provided by spectrum potentially available in each of the three locations for which licences have not been advertised to date (Kidderminster, Bromsgrove, Stratford-upon-Avon). We have not re-assessed spectrum availability and potential household coverage for each of the other locations, and so we have used the coverage figures which we modelled for these locations prior to the 700 MHz planning as the best available proxy. This is set out in Table 1. We recognise that in some of the locations the coverage is likely to have reduced as a result of that planning.
Table 1: Household coverage for locations for which licences have not been awarded,[11] based on 2012 or 2017 computer modelling

Location | Most recent estimate
Bangor | 16,000 or fewer (based on 2012 computer modelling)
Kidderminster | around 18,000 (based on 2017 computer modelling)
Bromsgrove | around 23,000 (based on 2017 computer modelling)
Stratford-upon-Avon | around 34,000 (based on 2017 computer modelling)
Barnstaple | 32,000 or fewer (based on 2012 computer modelling)
Limavady | 35,000 or fewer (based on 2012 computer modelling)
Derry/Londonderry | 36,000 or fewer (based on 2012 computer modelling)
Inverness | 49,000 or fewer (based on 2012 computer modelling)
Luton (Bedford + Luton) | 82,000 or fewer (based on 2012 computer modelling)
Plymouth | 100,000 or fewer (based on 2012 computer modelling)
Stoke-on-Trent | 100,000 or fewer (based on 2012 computer modelling)
Gloucester (Gloucester + Malvern + Hereford) | 220,000 or fewer (based on 2012 computer modelling)
Forth Valley | 380,000 or fewer (based on 2012 computer modelling)

[11] Some locations were merged following an Ofcom consultation process. They are indicated in brackets in the table.

Local demand

In Table 2, we set out the evidence of demand we have received from prospective operators of local TV services, based on unprompted expressions of interest.
This shows that there are three locations where we have received no evidence of local demand, and another four where no expression of interest has been received since 2016.

Table 2: Summary of expressions of interest

Location | At the time of advertisement | Most recent expression of interest
Bangor | Yes | 2014
Kidderminster | N/A | 2016
Bromsgrove | N/A | 2016
Stratford-upon-Avon | N/A | 2016
Barnstaple | None | None
Limavady | None | None
Derry/Londonderry | None | 2017
Inverness | Yes | None
Luton (Bedford + Luton) | None | 2015
Plymouth | None | 2015
Stoke-on-Trent | Yes | 2016
Gloucester (Gloucester + Malvern + Hereford) | None | 2014
Forth Valley | None | None

Economic viability

We consider that only five of the locations where there is evidence of local demand would be economically viable, based on our assessment of household coverage in 2012: Luton, Plymouth, Stoke-on-Trent, Gloucester and Forth Valley.[12] To assess economic viability we have considered whether, based on what we know about areas of an equivalent population size, we would expect anyone to apply for the licence, or, if they did apply, whether we would expect to be able to award the licence on the basis of a realistic business plan.[13] To summarise our findings to date:
- Most licences for locations with a household coverage of under 50,000 have not managed to attract viable applications.[14]
- Most licences for locations with a household coverage of between 50,000 and 100,000 have been awarded, and services have launched.[15]
- The vast majority of licences for locations with coverage of over 100,000 households have been awarded, and services have launched.[16]

However, when the costs to Comux of extending the transmission network are taken into account, we consider that licensing these 13 additional local TV services would have an adverse impact on the local TV sector as a whole. This is because the capital expenditure required to build the new transmitter infrastructure for these locations will no longer be covered by any BBC funding.
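The coverage-band findings summarised above amount to a simple screening rule. The sketch below is illustrative only, not Ofcom's own model: the band boundaries (50,000 and 100,000 households) and the outcome descriptions come from the findings above, while the function name and the example locations' use are our own.

```python
# Illustrative screen: band boundaries and outcome descriptions are taken
# from the summarised findings; everything else here is an assumption.

def award_track_record(households: int) -> str:
    """Describe the historical licensing outcome for a coverage band."""
    if households < 50_000:
        return "most licences did not attract viable applications"
    if households <= 100_000:
        return "most licences were awarded and services launched"
    return "the vast majority of licences were awarded and services launched"

# Applying the screen to three modelled coverage figures from Table 1:
for location, coverage in [("Kidderminster", 18_000),
                           ("Luton (Bedford + Luton)", 82_000),
                           ("Forth Valley", 380_000)]:
    print(f"{location}: {award_track_record(coverage)}")
```

On this screen, eight of the 13 locations in Table 1 fall below the 50,000-household threshold, which is consistent with the conclusion that only five have a reasonable chance of being economically viable.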
Further, because these locations are in general fairly small in terms of their household coverage, the incremental revenues Comux would be able to generate from these transmitters would be smaller than the sum of the incremental operating costs and the capital expenditure associated with them. Accordingly, our analysis is that the net effect of awarding licences for the remaining 13 locations (or suitable equivalents) would be a significant negative financial impact on Comux.[17]

[12] As noted at paragraph 2.6 above, it is possible that household coverage in these areas provided by the available spectrum is now lower than the levels modelled in 2012, as a result of the 700 MHz replanning.
[13] Given that the sector is quite new, it is too early to draw conclusions about the economic viability of local TV areas in a broader sense, i.e. by looking at the longer-term ability of individual services to generate profits over the period of the licence. We have, however, produced a sector-wide overview of developments to date; see our Communications Market Report 2017.
[14] Three locations did not attract applications (Barnstaple, Limavady and Derry/Londonderry), two were not awarded because of Ofcom concerns about the sustainability of the business models proposed (Bangor and Inverness), and three were awarded (Mold, Salisbury and Scarborough).
[15] Three locations did not attract applications (Luton, Plymouth, Stoke-on-Trent) and seven locations of this size have been awarded (Brighton, Basingstoke, Cambridge, Carlisle, Guildford, Oxford and Swansea). It is worth noting that for four of these locations, the licensees requested subsequent increases in coverage (Basingstoke, Cambridge, Oxford and Swansea).
[16] Two locations in this group did not attract applications (Gloucester and Forth Valley) and one location was not awarded. The remaining 24 locations in this group were awarded.
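The reasoning above reduces to a single comparison: a new transmitter is a net drain on Comux whenever its incremental revenue falls short of its incremental operating costs plus the (no longer BBC-funded) capital expenditure. The sketch below illustrates only the structure of that calculation; the actual inputs are commercially confidential, so all figures here are invented for illustration.

```python
# Illustrative only: the real revenue and cost figures supplied to Ofcom
# by Comux are commercially confidential; these numbers are invented.

def net_impact(incremental_revenue: float,
               incremental_opex: float,
               annualised_capex: float) -> float:
    """Net annual financial effect on the multiplex operator of one extra transmitter."""
    return incremental_revenue - (incremental_opex + annualised_capex)

# A hypothetical small-coverage location: modest advertising-driven revenue,
# but fixed build and running costs.
impact = net_impact(incremental_revenue=20_000.0,
                    incremental_opex=35_000.0,
                    annualised_capex=15_000.0)
print(impact)  # -30000.0: a net loss, reducing the profits Comux can rebate to licensees
```

A negative result for most of the 13 locations is what drives the sector-wide concern: lower Comux profits mean smaller rebates and a higher effective transmission cost for existing licensees.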
[17] Our analysis is based on commercially confidential financial information provided to Ofcom by Comux (management accounts for the 12 months ended 30 September 2016; national video stream revenue per household; and estimates of the annual incremental costs and the annual incremental income for each of the 13 locations).

The financial impact of launching additional services would not be limited to Comux. It could also be expected to affect the rest of the local TV sector more generally. Currently, the amounts local TV licensees pay to Comux for transmission and network services may be reduced by the distribution of profits which Comux generates. We consider that the costs to Comux of extending the network to further locations would reduce the likelihood that this could continue and, as a result, the effective cost of transmission to existing individual local TV licensees would rise. Given the financial challenges already faced by the sector,[18] this could put the sustainability of the services that are already broadcasting further at risk.

We do not think that the potential consumer and citizen benefits to be secured from licensing new local TV services in the identified locations outweigh the risk that licensing any such services is likely to have a significantly adverse financial impact on the existing local TV sector, which in turn may result in consumer and citizen harm through a possible reduction in programming range and quality, or the closure of some services.

Finally, it should be noted that we have assessed the financial impact on Comux by reference to Comux's licence obligation to extend its network to all the remaining Phase 1 locations and the Phase 2 locations identified in its application (or suitable alternatives), rather than by individual location.
For the reasons set out, we are proposing to release Comux from this obligation in relation to the 13 locations where an award has not been made, and not to proactively advertise or re-advertise local TV licences in these areas. Nonetheless, it remains open to potential licensees to ask Ofcom to advertise a licence in a new location, and to existing licensees wishing to extend their coverage areas to make a joint application with Comux for an extension, where they consider that there is scope for an economically viable service which would not adversely affect the wider local TV sector. Such proposals will be considered on their merits, in the light of our assessment criteria and our statutory duties.

Provisional decision and invitation to comment

Accordingly, in light of the considerations we have set out above, we are minded:
- not to advertise or re-advertise local TV licences for the 13 locations where awards have yet to be made, or for substitute locations; and
- to release Comux from its obligation to roll out its transmission network to the Phase 2 locations identified in its application.

Respondents are invited to comment on whether they agree with this proposal. If there is any matter that respondents consider Ofcom should take into account before reaching a final decision, this should be identified, together with a brief explanation of why it might outweigh the factors we have set out in this document.

[18] Set out in Ofcom's report The Communications Market 2016, 2. Television and Audio Visual, pages 80 to 86, available online: data/assets/pdf_file/0026/17495/uk_tv.pdf, and in The Communications Market 2017, Television and Audio Visual Content, pages 66 to 76, available online: data/assets/pdf_file/0016/105442/uk-television-audio-visual.pdf

A1. Responding to this consultation

How to respond

A1.1 Ofcom would like to receive views and comments on the issues raised in this document, by 5pm on 1 June.

A1.2 You can download a response form from our website. You can return this by email or post to the address provided in the response form.

A1.3 If your response is a large file, or has supporting charts, tables or other data, please email it as an attachment in Microsoft Word format, together with the cover sheet. This email address is for this consultation only, and will not be valid after June.

A1.4 Responses may alternatively be posted to the address below, marked with the title of the consultation:
Leen Petré
Ofcom
Riverside House
2A Southwark Bridge Road
London SE1 9HA

A1.5 We welcome responses in formats other than print, for example an audio recording or a British Sign Language video. To respond in BSL:
- Send us a recording of you signing your response. This should be no longer than 5 minutes. Suitable file formats are DVDs, wmv or QuickTime files; or
- Upload a video of you signing your response directly to YouTube (or another hosting site) and send us the link.

A1.6 We will publish a transcript of any audio or video responses we receive (unless your response is confidential).

A1.7 We do not need a paper copy of your response as well as an electronic version. We will acknowledge receipt if your response is submitted via the online web form, but not otherwise.

A1.8 You do not have to answer all the questions in the consultation if you do not have a view; a short response on just one point is fine. We also welcome joint responses.

A1.9 It would be helpful if your response could include direct answers to the questions asked in the consultation document. The questions are listed at paragraph 2.14. It would also help if you could explain why you hold your views, and what you think the effect of Ofcom's proposals would be.
A1.10 If you want to discuss the issues and questions raised in this consultation, please contact Leen Petré by phone or email.

Confidentiality

A1.11 Consultations are more effective if we publish the responses before the consultation period closes. In particular, this can help people and organisations with limited resources or familiarity with the issues to respond in a more informed way. So, in the interests of transparency and good regulatory practice, and because we believe it is important that everyone who is interested in an issue can see other respondents' views, we usually publish all responses on our website as soon as we receive them.

A1.12 If you think your response should be kept confidential, please specify which part(s) this applies to, and explain why. Please send any confidential sections as a separate annex. If you want your name, address, other contact details or job title to remain confidential, please provide them only in the cover sheet, so that we don't have to edit your response.

A1.13 If someone asks us to keep part or all of a response confidential, we will treat this request seriously and try to respect it. But sometimes we will need to publish all responses, including those that are marked as confidential, in order to meet legal obligations.

A1.14 Please also note that copyright and all other intellectual property in responses will be assumed to be licensed to Ofcom to use. Ofcom's intellectual property rights are explained further on our website.

Next steps

A1.15 Following this consultation period, Ofcom plans to publish a statement as soon as practicable.

A1.16 If you wish, you can register to receive email updates alerting you to new Ofcom publications.

Ofcom's consultation processes

A1.17 Ofcom aims to make responding to a consultation as easy as possible. For more information, please see our consultation principles in Annex 2.
A1.18 If you have any comments or suggestions on how we manage our consultations, please email us. We particularly welcome ideas on how Ofcom could more effectively seek the views of groups or individuals, such as small businesses and residential consumers, who are less likely to give their opinions through a formal consultation.

A1.19 If you would like to discuss these issues, or Ofcom's consultation processes more generally, please contact Steve Gettings, Ofcom's consultation champion.

A2. Ofcom's consultation principles

Ofcom has seven principles that it follows for every public written consultation:

Before the consultation

A2.1 Wherever possible, we will hold informal talks with people and organisations before announcing a big consultation, to find out whether we are thinking along the right lines. If we do not have enough time to do this, we will hold an open meeting to explain our proposals shortly after announcing the consultation.

During the consultation

A2.2 We will be clear about whom we are consulting, why, on what questions and for how long.

A2.3 We will make the consultation document as short and simple as possible, with a summary of no more than two pages. We will try to make it as easy as possible for people to give us a written response. If the consultation is complicated, we may provide a short Plain English / Cymraeg Clir guide, to help smaller organisations or individuals who would not otherwise be able to spare the time to share their views.

A2.4 We will consult for up to ten weeks, depending on the potential impact of our proposals.

A2.5 A person within Ofcom will be in charge of making sure we follow our own guidelines and aim to reach the largest possible number of people and organisations who may be interested in the outcome of our decisions. Ofcom's Consultation Champion is the main person to contact if you have views on the way we run our consultations.
A2.6 If we are not able to follow any of these seven principles, we will explain why.

After the consultation

A2.7 We think it is important that everyone who is interested in an issue can see other people's views, so we usually publish all the responses on our website as soon as we receive them. After the consultation we will make our decisions and publish a statement explaining what we are going to do, and why, showing how respondents' views helped to shape these decisions.

A3. Consultation coversheet

BASIC DETAILS
Consultation title:
To (Ofcom contact):
Name of respondent:
Representing (self or organisation/s):
Address (if not received by email):

CONFIDENTIALITY
Please tick below what part of your response you consider is confidential, giving your reasons why:
- Nothing
- Name/contact details/job title
- Whole response
- Organisation
- Part of the response (if there is no separate annex, which parts?)

If you want part of your response, your name or your organisation not to be published, can Ofcom still publish a reference to the contents of your response (including, for any confidential parts, a general summary that does not disclose the specific information or enable you to be identified)?

DECLARATION
I confirm that the correspondence supplied with this cover sheet is a formal consultation response that Ofcom can publish. However, in supplying this response, I understand that Ofcom may need to publish all responses, including those which are marked as confidential, in order to meet legal obligations. If I have sent my response by email, Ofcom can disregard any standard text about not disclosing contents and attachments.

Ofcom seeks to publish responses on receipt. If your response is non-confidential (in whole or in part), and you would prefer us to publish your response only once the consultation has ended, please tick here.
Name:
Signed (if hard copy):
How World War II figures in a new fight over Greek debt

Last week in the Greek village of Nafpolio, Germans Ludwig Zaccaro and Nina Lange handed 875 euro, about $940 at the current rate, to the local mayor. The money, which the mayor said would be given to a local charity, was what the couple figured was their share of Germany’s World War II debt to Greece. They’d always loved Greece, they said in an interview shown on Greek television, and felt bad about their country’s role in the current economic difficulties. “Our politicians pretend the Greeks owe debt to Germany, but the reality is that it is the other way around,” Lange said. Their point of view differs widely from the general German attitude about Greece – 80 percent, polls show, don’t want Germany to give any more aid to Greece and 50 percent want Greece gone from the eurozone – but it strikes at an argument that the new Greek government is pressing: Germany owes Greece money, not the other way around. Germany has never repaid money it forced Greece to lend it during World War II, says the Greek government. Now the Greeks would like it back, to help repay the $330 billion the country owes – $67 billion to Germany. The German government of Chancellor Angela Merkel bristles at the suggestion. It insists that any German debt from World War II was eliminated with the so-called Two-plus-Four Treaty that made possible the reunification of Germany in 1990. “Greece will not be able to cover their debts by constructing German responsibilities dating back to World War II,” German Finance Minister Wolfgang Schaeuble said recently. “Greece suffers not because of Berlin, or Brussels, but because its own elites have failed for decades.” “This has all been settled, there can be no more claims,” said Volker Kauder, a member of the German parliament. But the view is not unanimous. 
Norman Paech, a retired law professor at Hamburg University and one of Germany’s leading experts on war reparations, has argued for more than a decade that the Greeks have a case. “The Greek claims with regard to the loan and German war crimes are legitimate from a political, legal and even more so from a moral point of view,” he said in a telephone interview Wednesday. Paech argues that German officials are fighting against this obligation for a simple reason: There are other claims. He said the legal problem is that the 1953 London Treaty officially put all claims against Germany on hold until a lasting peace treaty could be reached. The 1990 treaty that unified Germany is that document, but it was signed only by the two Germanys and the United States, Great Britain, France and Russia. That means claims from any other countries are now active – for example, from Greece. And gaps are appearing among German politicians, too. Can Germans really claim anything from a nation they occupied and looted only 70 years ago? Not just the fringe parties are raising the question. Some members of the Social Democratic Party, partners in government with Merkel’s Christian Democrats, are leaning toward the position taken by Zaccaro and Lange in Nafpolio: Something is owed. “It would be good for us Germans to sweep up after ourselves in terms of our history,” Gesine Schwan, a Social Democratic member of parliament, told the magazine Der Spiegel. “Victims and descendants have longer memories than perpetrators and descendants.” The party’s vice chairman, Ralf Stegner, said it’s time to consider “compensation talks.” “We should not tie the reparations to the present euro crisis debate,” he said in widely quoted remarks. “But regardless of that, I think we need a discussion about compensation. Dealing with our history requires it.” Figuring out exactly how much Germany owes Greece would be no easy matter. 
Greek schoolchildren are taught that the Germans owe Greece $320 billion, about the total of the Greek debt in this current crisis. The most concrete amount, and one most likely to be mentioned by Germans, stems from a 1942 “no-interest loan” to Adolf Hitler’s Nazi regime from the Greek puppet government. The money, 568 million Reichsmarks, was to fund the occupation of Greece. But even that’s not straightforward: Italy shared the occupation, and therefore the money, at that time. And, many have noted, if it was a legitimate loan that needs to be repaid, it was one without interest. Germany would owe Greece exactly what was borrowed minus the 92 million Reichsmarks Hitler’s Germany repaid the Greeks, meaning 476 million Reichsmarks. The dollar value of a Reichsmark is much debated. In 1942, when there was no dollar-Reichsmark exchange, the official Allies-set rate was 10 Reichsmarks to the dollar, making the Reichsmark worth about 10 U.S. cents. But just the year before, when America and Germany were not at war, each Reichsmark was worth $2.50. Of course, after the war, a Reichsmark was virtually worthless. Germans who think their country should repay the loan tend to put the repayment value at about 10 billion euro, or $10.6 billion. That amount hardly scratches the surface of Greek debt. But Paech thinks that number might well be low. He points out that in 1997, the Greek village of Distomo, near Delphi, won a case in a Greek court ordering Germany to pay about $40 million for the Nazi revenge killing of 200 locals. While the amount isn’t in the billions, it’s only one village in only one nation in which the Nazis rampaged. “There were over 1,000 towns and villages plundered and/or burned down, 1 million people made homeless, 300,000 died of starvation under occupation in Greece alone,” he recently wrote for a German newspaper. As Paech notes, if the Greek claims stand, Germany will face many, many more.
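The gap between those two exchange rates is the crux of the valuation problem, and it can be made concrete with a quick calculation. This is a sketch using only the figures quoted above, in uncorrected 1940s dollars; the 10 billion euro figure cited by some Germans presumably layers decades of inflation or interest on top, which the article does not break down.

```python
# Back-of-the-envelope conversion of the disputed occupation loan,
# using only figures quoted in the article (1940s dollars; no
# inflation or interest applied).
loan_rm = 568_000_000        # 1942 forced loan, in Reichsmarks
repaid_rm = 92_000_000       # amount Hitler's Germany repaid

outstanding_rm = loan_rm - repaid_rm
print(outstanding_rm)        # 476000000

# The two exchange rates the article mentions:
allied_1942 = 0.10           # Allies-set 1942 rate: 10 RM per dollar
prewar_1941 = 2.50           # pre-war 1941 value: $2.50 per RM

print(outstanding_rm * allied_1942)   # 47600000.0  (~$48 million)
print(outstanding_rm * prewar_1941)   # 1190000000.0 (~$1.2 billion)
```

A factor-of-25 spread in the exchange rate alone swings the raw debt between roughly $48 million and $1.2 billion in wartime dollars, which is why the estimates diverge so widely before inflation even enters the picture.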
The Trump administration challenged China to do more to pull its ally North Korea back from the nuclear brink as Secretary of State Rex Tillerson bluntly declared Friday that the United States will do whatever is necessary to prevent a North Korean attack. “All options are on the table,” Tillerson said in Seoul, where he underscored U.S. commitment to Asian allies threatened by North Korea and said he would lean on China during a visit there Saturday. In Washington, President Trump goaded China, which has extensive economic and political ties to North Korea but has resisted choking off the flow of money and military materials to its ally. “North Korea is behaving very badly. They have been ‘playing’ the United States for years,” Trump wrote on Twitter. “China has done little to help!” China has repeatedly pledged to do more, but the Trump presidency, like the Obama and George W. Bush administrations before it, accuses Beijing of going easy on Pyongyang. [Photo: A North Korean soldier takes a picture of Secretary of State Rex Tillerson from outside the window during his visit Friday to Panmunjom, the truce village on the border between South and North Korea. Pool photo by Yonhap/via European Pressphoto Agency] U.N. Ambassador Nikki Haley went further, telling an interviewer Friday that the Trump administration is making a sharp pivot away from what she said was an ineffectual Obama strategy regarding China and North Korea. “There was a soft approach to China in the past presidency and what I can tell you now is we’re going to go harder on China,” Haley said on Fox News. “We’re going to say, ‘Look, if you really are wanting to partner with this, if you really are wanting to stop the nuclear testing that is going on in North Korea, prove it.’ ” At the least, the United States wants China to enforce existing sanctions on North Korea and police what U.S. 
officials have said are illicit Chinese business and banking deals that benefit the North Korean regime and its steadily improving missile-development program. “We are going to go through and ask them to push towards sanctions and push towards talks with North Korea,” Haley said. China says threats of military action by the United States or its allies South Korea and Japan, both within range of existing North Korean missiles, are unhelpful. Beijing favors further efforts to negotiate with North Korea, and hosted the last such international effort, which failed. North Korea is known for its exaggerated and bellicose rhetoric, but the combination of threats and missile launches, coinciding with Chinese anger at South Korea for deploying an American antimissile battery, has raised tensions in the region to a level seldom seen in recent years. Tillerson will be the first high-level Trump administration official to go to China, whose leaders were angered by Trump’s frequent bashing of Beijing over trade policies during the presidential campaign and his decision to speak with the elected leader of Taiwan in December. Trump has tried to smooth the waters by assuring Chinese President Xi Jinping that the United States does not want a trade war and will not upend the decades-old “one China” policy regarding Taiwan, which Beijing considers a province. Trump is expected to host Xi for a visit next month at Trump’s Florida estate. In contrast, the Trump administration has never let up on campaign-trail criticism of China over North Korea. China is also incensed by ongoing U.S.-South Korean military exercises this month and the installation of the U.S. missile defense system in South Korea. The decision to put in the system was made by the Obama administration, and U.S. officials have always insisted it is intended solely for protection against North Korea. But Chinese officials are expected to confront Tillerson with complaints that the system could be used to spy on China. 
The Chinese government is now banning many imports from South Korea and stopping Chinese tourist groups from traveling there to try to prompt Seoul to change its mind on the missile system. Against that backdrop, Tillerson’s meetings in China probably will be the most difficult and most important of his trip. “We will be discussing with them the serious threat that North Korea poses to peace and stability in the Korean Peninsula, but even beyond,” Tillerson said in Seoul. The United States and its allies still have options on the spectrum between diplomatic talks and military action for persuading the North Korean regime to give up its nuclear weapons, he said. North Korean leader Kim Jong Un said earlier this year that his country is working on an intercontinental ballistic missile capable of striking the U.S. mainland. Trump responded in a tweet: “It won’t happen!” Tillerson has used his three-country Asian tour to underscore that the new Trump administration is fed up with years of North Korea policies that it sees as all carrot and no stick. “Let me be very clear: The policy of strategic patience has ended,” Tillerson said at a news conference in Seoul with Yun Byung-se, the South Korean foreign minister. He was referring to the Obama administration policy of trying to wait North Korea out, hoping that sanctions would prove so crippling that Pyongyang would have no choice but to return to denuclearization negotiations. In recent months, North Korea has been making observable progress toward its goal of building a missile that could reach the U.S. mainland. In a surprise, Yun appeared to suggest that South Korea would support military options. “We have various policy methods available,” said Yun, who is unlikely to remain in his position for much longer, as elections for a new government will be held in early May. 
Yun likened the diplomatic effort to restrain North Korea to “a building” and said “military deterrence would be one of the pillars.” Sanctions and diplomatic engagement so far have failed to persuade North Korea to abandon its nuclear weapons program. But U.S. administrations have long considered military action as nearly impossible because North Korea has artillery aimed at Seoul, a metropolitan area of more than 20 million people just 30 miles south of the demilitarized zone that divides the two Koreas. Thousands of U.S. troops are also within range of potential North Korean shelling or chemical and biological attacks. Earlier Thursday, Tillerson toured the joint security area in the demilitarized zone, a spot President Bill Clinton once famously described as “the scariest place on Earth.” North Korean soldiers in helmets were taking photos of Tillerson from just a few feet away as the secretary stood at the line and inside the meeting hut. The Korean peninsula was divided along the 38th parallel at the end of World War II, a line that was arbitrarily drawn by one of Tillerson’s predecessors as secretary of state, Dean Rusk, who was an Army colonel at the time. A reporter asked Tillerson on Friday if being at the demilitarized zone brought home the threat of North Korea, but he did not respond. Correction: This story has been updated to correct the time when Dean Rusk drew the line across the Korean peninsula. It was at the end of World War II, not at the end of the Korean War. Fifield reported from Tokyo.
Grant MacDonald had to "learn how to run all over again" after his brain injury

Ever wanted to quit halfway through a workout? Is the motivation there but your body is telling you to stop? Six years ago, ultra-runner Grant MacDonald was training when a sudden "blinding" headache stopped him in his tracks. He decided to call it a day and head home, but a passer-by found him clutching a speed camera on the road and realised he needed urgent medical attention. "My head had burst and I was having a brain haemorrhage," MacDonald tells BBC Scotland. The biomedical scientist suffered a subarachnoid haemorrhage - an uncommon type of stroke caused by bleeding on the surface of the brain - while running in 2014. Approximately three in five people who have this type of haemorrhage die within two weeks, and half of those who survive are left with severe brain damage and disability. "I was lucky to make a complete recovery," MacDonald says. "So many people die with a condition like I had, so everything after is a bonus." Recovery was slow and steady, and it took him a year to get back to his previous standard. "I had my first run at day 37 after my brain haemorrhage. It was more of a walk than a run. It felt like going back to square one and learning how to run all over again. "I was off work for months and running was a way of staying focused."

'You run for 24 hours & eat on move'

MacDonald joined the Bellahouston Road Runners in Glasgow 11 years ago in an attempt to get fit. Starting with park runs and progressing to ultra marathons, the 41-year-old now runs for Garscube Harriers and has represented Great Britain. He found his passion in 24-hour ultra-running. "It's quite a niche sport, even among ultra-runners, a lot of them think 24-hour running is weird," he says. "It's usually run on a track or a 1km loop and you start at midday and continue running for 24 hours to try and lap up the most distance that you can in that time. 
"You can stop and leave the track then come back on, but the only way to really do well is to keep running for the entire 24 hours, eat on the move and never stop. "For me, it's an incredibly pure score. It's how far can you push yourself mentally and physically. You just go into your own little world and nothing else matters." Having completed multiple 24-hour races, MacDonald is aware that the extreme nature of them takes its toll and he will need to hang up his trainers someday. "I am convinced you have only got about seven or eight 24-hour races in you before your body just gives up. I have done seven now and I am still desperate for my next one."

'I'll be really fit with nothing to do'

MacDonald relishes the mental and physical challenge of ultra-running

With the temporary halt of everyday life due to the coronavirus outbreak, MacDonald has had to "get creative" with his training routine to ensure he stays in peak physical condition. "I do a lot of my training by running to and from work - 10 miles each way daily - which takes me about an hour and a half if I am taking my time. I also usually do a five-hour run on a Saturday. "We are only supposed to go out to exercise for one hour per day now. So I have adapted my routine with lots of yoga, strength and conditioning classes in the garden and one long run a day." His next 24-hour race is scheduled to take place in Verona, Italy - one of the countries most affected by coronavirus - in September. "I can't see it happening this year, but we have to train as if it is going ahead. If it doesn't then we'll just be really fit with nothing to do!"
Pope Leo II

Pope Leo II (611 – 28 June 683) was Bishop of Rome from 17 August 682 to 28 June 683. He is one of the popes of the Byzantine Papacy.

Background and early activity in the Church

He was a Sicilian by birth (the son of a man named Paulus). He may have been among the many Sicilian clergy in Rome at that time, driven there by the Islamic Caliphate's attacks on Sicily in the mid-7th century. Though elected pope a few days after the death of Pope Agatho on January 10, 681, he was not consecrated until after the lapse of a year and seven months (17 August 682). Leo was known as an eloquent preacher who was interested in music, and noted for his charity to the poor.

Papacy

Elected shortly after the death of Agatho, Leo was not consecrated for over a year and a half. The delay may have been due to negotiations regarding imperial control of papal elections, undertaken by Leo's predecessor Agatho with Emperor Constantine IV; they concerned the relations of the Byzantine Court to papal elections. Constantine IV had already promised Agatho to abolish or reduce the tax that the popes had been paying to the imperial treasury at the time of their consecration, an imperial policy that had been in force for about a century. Leo's short-lived pontificate did not allow him to accomplish much, but there was one achievement of major importance: he confirmed the acts of the Sixth Ecumenical Council (680–681). This council had been held in Constantinople against the Monothelite controversy, and had been presided over by the legates of Pope Agatho. After Leo had notified the Emperor that the decrees of the council had been confirmed, he made them known to the nations of the West. In letters written to the king, the bishops, and the nobles of Spain, he explained what the council had effected, and he called upon the bishops to subscribe to its decrees. 
During this council, Pope Honorius I was anathematized for tolerating Monothelism. Leo took great pains to make it clear that in condemning Honorius, he did so not because Honorius taught heresy, but because he was not active enough in opposing it. In accordance with the papal mandate, a synod was held at Toledo (684) in which the Third Council of Constantinople was accepted. Leo wrote repeatedly in approbation of the council's decision and in condemnation of Honorius, whom he regarded as one who profana proditione immaculatem fidem subvertare conatus est (roughly, "one who by betrayal has tried to overthrow the immaculate faith"). In the Greek text of the letter to the Emperor in which the phrase occurs, the milder expression subverti permisit ("allowed to be overthrown...") is used for subvertare conatus est. At this time, Leo put an end to the attempts of the Ravenna archbishops to get away from the control of the Bishop of Rome, but he also abolished the tax it had been customary for them to pay when they received the pallium. Also, in apparent response to Lombard raids, Leo transferred the relics of a number of martyrs from the catacombs to churches inside the walls of the city. He dedicated two churches, St. Paul's and Sts. Sebastian and George. Leo also reformed the Gregorian chant and composed several sacred hymns for the divine office.

Burial

Leo was originally buried in his own monument; however, some years after his death, his remains were put into a tomb that contained the first four of his papal namesakes.
How to operate a liver tumor you cannot see. As recent chemotherapy regimens for metastatic colorectal cancer become more and more effective in the neoadjuvant setting before liver surgery, a "complete" clinical response is sometimes documented on imaging. Without operation, though, metastatic recurrence is likely to commence within 12 months. Surgeons therefore face the problem of resecting lesions that can be neither visualized nor palpated. Computer-based virtual surgery planning can be used to fuse pre- and postchemotherapy computed tomography data to develop an operative strategy. This information is then transferred intraoperatively to the liver surface using an image-guided, stereotactically navigated ultrasound dissector, enabling the surgeon to perform a resection that would otherwise not be possible. In the case reported here, detection of the lesion through palpation or ultrasound was impossible during the operation. After the virtual operation plan was registered into the navigation system, the planned resection was performed without problems, and histopathologic workup showed vital tumor cells in the specimen. The new image-guided stereotactic navigation technique combined with virtual surgery planning can solve the surgeon's dilemma and yield a successful operation.
Q: Difference between DAG import in two ways?

I am trying to create a dynamic DAG but seem to be failing at the minute. I came across creating the DAG object in two different ways:

    from airflow.models import DAG

https://airflow.apache.org/concepts.html#latest-run-only

    from airflow import DAG

https://airflow.apache.org/tutorial.html

This really confused me because within the same documentation there are two ways of instantiating the DAG object.

A: Both are importing the same DAG class. It is just an artifact of how Python imports work. When you do

    from airflow.models import DAG

Python imports the models file and assigns the variable DAG to the DAG class defined in the models file. When you do

    from airflow import DAG

Python imports the variable DAG defined in __init__.py, which is in fact just from airflow.models import DAG. A minimal version being:

models.py

    class DAG():
        pass

__init__.py

    from airflow.models import DAG

dags/dag_file.py

    # import __init__.py, which imports models.py, which contains DAG
    from airflow import DAG
    # or this, which just imports models.py, which contains DAG
    from airflow.models import DAG

All that being said, if your dynamic DAG is failing, I doubt it's related to this import.
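The re-export mechanics described in the answer can be demonstrated without Airflow installed. The sketch below builds a throwaway package that mimics the layout (the package name _pkg_demo and the temporary directory are made up for illustration; they are not part of Airflow) and checks that both import paths hand back the identical class object:

```python
import os
import sys
import tempfile

# Build a throwaway package that mimics airflow's layout:
#   _pkg_demo/models.py    defines class DAG
#   _pkg_demo/__init__.py  re-exports it via `from _pkg_demo.models import DAG`
tmp = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp, "_pkg_demo")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "models.py"), "w") as f:
    f.write("class DAG:\n    pass\n")
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("from _pkg_demo.models import DAG\n")

sys.path.insert(0, tmp)

# Analogous to `from airflow import DAG`:
from _pkg_demo import DAG as top_level_dag
# Analogous to `from airflow.models import DAG`:
from _pkg_demo.models import DAG as models_dag

# Both names are bound to the very same class object.
print(top_level_dag is models_dag)  # True
```

Since the two imports resolve to one object, either spelling works in a DAG file; the choice is purely stylistic.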
Tuesday, April 01, 2008

The popular cable access television show and podcast, The Atheist Experience, will air a special fifteen-minute episode on Sunday, April 6, announcing the conclusion of the show's ten and a half year run. When pressed for an explanation, show host and president of the Atheist Community of Austin Matt Dillahunty said: "I've been doing this show for two years now... but then last weekend I just looked out the window and couldn't believe what I saw. Trees. Flowers. Blue sky. Even some birds and stuff. I had never noticed any of them before. And I thought to myself, 'Unbelievable. If all those things exist then surely someone must have made them.'" Dillahunty went on to explain that all four of the regular co-hosts were approached to take over the host spot in order to keep the show running, but all of them declined. "I already announced a while back that I would be leaving the show," said Ashley Perrien, another longtime contributor to The Atheist Experience. "I didn't feel comfortable explaining my reasons at the time, but I'm much more sure of myself now. I want to dedicate myself to my new religion of Scientology. In fact, I'm on my way to go get an e-meter reading right now. I hope to cleanse all my engrams in about five years, and then maybe I can talk about coming back." Tracie Harris remarked: "I just realized one day how much I missed the Catholic Church and all their rituals. They've always been like a family to me, and I take comfort and pride in my affiliation with them. I've been chatting with my former priest, and he got me to realize how improper it is for a woman to speak on camera, especially in the capacity of instructing men. I plan to spend the rest of my life atoning for the horrible things I wrote in that stupid Atheist Eve cartoon." Don Baker was struck by a similar epiphany. "I've been lying to myself all these years," he said. 
"I thought I was an atheist, but really it was just a childish rebellion against the God who wanted me to live a decent, moral life. Now that I have no further excuse to continue sinning, I plan to finally settle down with my new girlfriend and make an honest woman of her." Having said that, he turned to the adoring young lady by his side, who declined to be identified for this story, and gave her a chaste kiss on the hand. "It occurred to me that all this science stuff I'd been preaching was my false substitute for a religion," said Russell Glasser, who used to serve as the show's producer. "Once I realized that all these so-called 'scientists' were actually priests of Satan, the wool was removed from my eyes. It's so clear now that evolution is a lie, and that evidence and reason are a terrible way to understand the world. Faith is clearly a superior epistemological tool. I mean, after all, if you can't even believe something as obvious as the resurrection of Jesus, how can you believe anything? Like, how can you believe that New York exists, man? I can't believe I've been so blind. "Also, did you know that the World Trade Center was totally brought down by insiders in the Bush administration?" Glasser added. "It's true! Don't buy the official government story! There's a movie online that explains EVERYTHING!" In a scheduled press conference, producer Joe Rhodes announced that the show will remain off the air for three days, at which point it will be reborn as a new series entitled "Kickin' It With Jesus." In related news, audio podcast host Denis Loubet announced that the first episode of "The Prophets" will air in two weeks. I've decided that I'm not going to be supporting "Kickin' it" as the individuals responsible have latched on to the new, softer, Christianity which is clearly heresy. They've perverted the LORD's Word to fit their own views. I will, instead, begin a new program called "Kickin' it - old school!" 
where we'll focus on a strict literal interpretation of the Bible, using only the Authorized Version. But I'll pray for my former co-hosts and we'll still be able to work together (despite their clear refusal to fully follow the LORD) to vanquish the godless heathens who are trying to destroy the beautiful world that our loving creator has given us. You are such a tease, thanks for the smile and keep looking, you just may find what you are looking for. To keep my name's sake (Dan Marvin Apologetic Power Hour) I read something last night I would love to share. It was written by Mike Matthews, who earned a BA and an MEd in English education from Bob Jones University, so he is better equipped to convey the message than I. He wrote an article about Babel and their rebellion called "The World in Revolt" which reminded me of the conversations at AE. In his article he was discussing Genesis 11:1-9 and I want to focus on what he said about verse 4: "And they said, Go to, let us build us a city and a tower, whose top may reach unto heaven; and let us make us a name, lest we be scattered abroad upon the face of the whole earth." Mike said "This sentence reveals the arrogance of the people of Babel. They sought a name for themselves rather than to honor the name of their creator, who is above all and whose name is worthy of all praise." It reminded me of what Matt D said yesterday: "Your god, if he exists, isn't worthy of worship." Mike went on to say "Throughout history, humans have longed to share God's glory. The serpent tempted Eve with the promise that she and Adam could "be as gods." All man-made religions try to "honor" God by the works of our own hands. God, in contrast, is not impressed by our works. He desires obedience and humility (1 Samuel 15:22-23). Moses's account reminds us how we are all naturally stubborn and rebellious." Right Matt D and all? Mike finishes the paragraph with "Ironically, if we are humble and obedient, God will honor our name. 
Moses shows this by contrasting the events at Babel with the later faithfulness of Abraham. God promised to make Abraham's name great, and all he needed was humble faith (Genesis 12:2)." "God's gentleness in judging the rebels at Babel is a lesson for us today. God did not let man's rebellion run its full course, as He had before Noah's Flood. He nipped the rebellion in its early stages so that humans would not hurt themselves too much. By changing one language into many, He separated nations more effectively than any Wall of China. God stepped in to prevent the human race from falling under the sway of a single, absolute tyrant over all the earth. Only in His time would Christ gather together God's family from every nation and tongue (Revelation 7:9). Note God's ironic words. Just as the rebels said, 'Let us build a tower,' God said, 'Let us confound their language.' Man's counsels can't stand in the face of God's counsel. As the original creator of human speech, God could easily rewire speech so that the evil speakers could no longer speak to one another. Moses closes the account with a reminder that God will always accomplish His will. We may think we have found a way to circumvent His will, but that is just an appearance. As King Solomon later wrote, 'The king's heart is in the hand of the Lord, like the rivers of water; He turns it wherever He wishes' (Proverbs 21:1). By this simple act, God forced humanity to proceed down His chosen path: to resettle the earth by families. God's first judgment after the Flood proved that He would continue to superintend the events of human history. God wants us to turn to Him, rather than relying on ourselves. One day, whether they like it or not, all people will bow their knee before the name of Jesus Christ, the true bridge between heaven and earth (Philippians 2:9-11)." This is just one of a seven-part exploration of the Bible that I will present to all of you explaining the need for God and understanding His plan.
Thanks Martin for letting me be one of the authors/team members at AE's blog and I hope I will not disappoint your readers. Thanks for the invite as one of the speakers on the show, and I hope to discuss the topics presented from a Biblical perspective. I will be there for the April 6th show.

I don't know about you guys, but I feel blessed. And that's pretty blessed. I bet I'm more blessed than you. Or maybe not. But it's a pretty blessed feeling, all the same. It makes me feel totally humble to be so blessed.

Matt: But I'll pray for my former co-hosts and we'll still be able to work together (despite their clear refusal to fully follow the LORD)

Enjoy the day that we observe as "National Atheist's Day" and to help celebrate, a few quotes:

* It's better to keep your mouth shut and be thought a fool than to open it and leave no doubt. -- Mark Twain
* However big the fool, there is always a bigger fool to admire him. -- Nicolas Boileau-Despréaux
* [Politicians] never open their mouths without subtracting from the sum of human knowledge. -- Thomas Reed
* He who lives without folly isn't so wise as he thinks. -- François, Duc de La Rochefoucauld
* The ultimate result of shielding men from the effects of folly, is to fill the world with fools. -- Herbert Spencer
* Sometimes one likes foolish people for their folly, better than wise people for their wisdom. -- Elizabeth Gaskell
* Looking foolish does the spirit good. -- John Updike
* Let us be thankful for the fools. But for them the rest of us could not succeed. -- Mark Twain
* A fool sees not the same tree that a wise man sees. -- William Blake
* A fool must now and then be right by chance. -- Cowper
* It is better to be a fool than to be dead. -- Stevenson
* The first of April is the day we remember what we are the other 364 days of the year. -- Mark Twain

As the article points out, "...God by the works of our own hands. God, in contrast, is not impressed by our works." All these good works are an abomination to God.
Or as the Bible says in Luke 16:15, "And he said unto them, Ye are they which justify yourselves before men; but God knoweth your hearts: for that which is highly esteemed among men is abomination in the sight of God." There is none good. If a pedophile donates blood, does that make him good also? Logic would say of course not. And Ouini, "Dan. I figured you, of all people, would be a little afraid of hellfire?" Yes, I am very afraid of the truth of Hell; it keeps me up at night sometimes. I worry about the lost every day. "but whosoever shall say," as it says, because we, as humans, are subject to God's Law. Can I set wicked people on fire? Of course not, but God sure can and will. We are not to condemn each other, but God sure can. Matthew 10:28: "And fear not them which kill the body, but are not able to kill the soul: but rather fear him which is able to destroy both soul and body in hell."

Tommy, how can you be so blind to the wonderful, glorious, supercalifragilistic message of hope and love that Brother Dan is giving us here? Don't you get it, you doomed heathen? Who cares about actually being a good person? Certainly not God! That would take effort, and being honest, and who wants to waste time with that? See, God is so wicked cool that he's made the whole process easy! All you have to do is join the Official Jebus Fan Club (Bible sold separately), and you're SAVED! That's right, brother. You get to call yourself a good person, and not ever actually have to accomplish anything. You get to think you know more about everything than all those stupid liberal egghead scientists, and in fact, you can be a totally uneducated moron the whole time! I mean, look at Dan. It's working for him, ain't it?
See, Christianity just makes life totally easy, because you don't have to do anything, know anything, or care...and you get a VIP backstage pass to Heaven while all those horrible people who wasted their lives thinking and teaching and learning and advancing the human race end up stuck on Earth when the trumps sound, holding onto their balls and looking confused. How cool is that? Come on Tommy! What kind of person do you really want to be in life? A "good person"? Or a Christian? Can I hear an Ay-men, brutha!?

Martin, I am waiting to see if Richard Dawkins converts to Christianity first, because as you know, we atheists are incapable of thinking for ourselves and blindly follow Dawkins in whatever he says or does.

Dan Marvin belches, "If a pedophile donates blood, does that make him good also? Logic would say of course not." So, lack of belief in the existence of a deity is on a par with sexually abusing children? Real good logic you got there, Danny Boy.

I think I took that quiz a while back. Anyway, I took it again, and I got: "You scored as a Scientific Atheist. These guys rule. I'm not one of them myself, although I play one online. They know the rules of debate, the Laws of Thermodynamics, and can explain evolution in fifty words or less. More concerned with how things ARE than how they should be, these are the people who will bring us into the future." My "angry atheist" rating was actually the lowest of the options. I suppose it's because I'm not really angry. Fundies always think that when we mock them, it's because we're angry. Actually, it's because we're amused. That's why we call them fundies. They're just such fun!

If a pedophile donates blood, does that make him good also? Logic would say of course not. Well, if he accepts Jesus, he miraculously will be made good, without having to bother doing something like giving blood or some comparable act of charity, and should he lapse and molest again, he can once again apologize to Jesus and he'll forgive him.
Praise Jesus!

Martin can't even pretend to love Jesus; he is an atheist to the bone, presuppositions in place and all. Even if light from heaven came down right in front of him he would run scared and hide under a rock. Revelation 6:16: "And said to the mountains and rocks, Fall on us, and hide us from the face of him that sitteth on the throne, and from the wrath of the Lamb:" It was fun to pretend that all of you were actually headed to being saved, although my heart aches still. (If) That Bible is true, all these people that follow your same thought process, Martin, will be in a horrific place forever. For me, it sure doesn't get more frustrating than this. I too place myself before God and try to help people get saved, but I need to trust the Lord fully to change the hearts of the lost. At this point I just may be getting in the way.

Your time on earth is a time of mercy. Those that love Him know the truth about our eternal existence and, knowing this truth, we all endeavor to glorify God. We are taught to turn the other cheek by our Lord because our sacrifices, acts of faith & love are from His grace. Grace that may cause the conversion of the one that hurt us or the conversion of others. The greatest miracle of all is the turning of one's heart towards God (i.e., a conversion). A conversion is possible for everyone if they admit their guilt and accept the words of Jesus as being the truth. In the end, once we die, the concept of eternity will be obvious to us and we will all realize that this new dimension, the spiritual dimension, is the real dimension. Our choices in this life are gravely important as they will decide our eternal destination. To reject Jesus, God, your entire life will likely leave you in a place far removed from God. To be without God is to be without peace, hope and love. If we had a glimpse of what true darkness is like, we'd realize that we'd never ever wish Hell on anyone. Not even Hitler.
Your time of mercy is now, in this life, and your choice will be eternal. God entered time and space to turn the other cheek as a sacrifice that atones for your sins and my sins. God, in eternity, does not turn the other cheek again. To reject Christ until the end will mean that you are left with His justice. As great as His mercy is, so great is His justice. The beauty for those of us that love Jesus is that we trust Him completely and don't worry ourselves with mysteries that are contrary to our natural minds. Many of you will take the prideful stance to reject Him because of the doctrine of Hell. Which is so terribly sad, because you actually don't need to go to Hell if you simply accept Him. If you are not able to acknowledge your sins because your heart is filled with various emotions that block this capacity, I suggest you take a leap of faith. I suggest you call out to the Lord for help with sincerity and I assure you, you'll receive it. Call out His name, Jesus Christ, with your lips and never doubt. God, the source of Love, will reveal Himself to you very soon.

Dan Marvin wrote: In his article he was discussing Genesis 11:1-9 and I want to focus on what he said about verse 4: "And they said, Go to, let us build us a city and a tower, whose top may reach unto heaven; and let us make us a name, lest we be scattered abroad upon the face of the whole earth." Mike said, "This sentence reveals the arrogance of the people of Babel. They sought a name for themselves rather than to honor the name of their creator, who is above all and whose name is worthy of all praise."

Let's assume that these people really lived, and really tried to build a tower to Heaven. It would have gone up and up and up. It might have fallen down under its own weight, and the people would have started over, using better materials and techniques. Each time it falls over, they start again.
Let's give them every possible break, and assume that they have 21st century steel, concrete, trucks, unlimited resources, etc. They manage to build a tower so high that the air is too thin to breathe, and there's still no sign of anything solid for thousands of miles, and certainly nothing like a solid dome of Heaven to which one can build a tower. So if God's purpose was to prevent people from building a tower to Heaven, all God needed to do was... absolutely nothing. So why did he intervene, other than to show off? Oh, right: because the whole thing's a just-so story to explain why there are different languages. (BTW, I must recommend Ted Chiang's story Tower of Babylon, which deals with this.)

I believe you missed the point. It wasn't to stop the tower; he changed the languages to spread people around the earth instead of one place. By changing one language into many, He separated nations more effectively than any Wall of China. God stepped in to prevent the human race from falling under the sway of a single, absolute tyrant over all the earth.

By changing one language into many, He separated nations more effectively than any Wall of China. God stepped in to prevent the human race from falling under the sway of a single, absolute tyrant over all the earth. After all, God didn't want competition for the title. ;) By the way, I notice you perfectly cut and pasted those identical words in two places within this thread.

he changed the languages to spread people around the earth instead of one place. Because obviously, changing people's languages and causing confusion and chaos is so much easier than just letting people breed, and let them a) move elsewhere because it's getting too crowded and b) move because they're curious about what lies over yonder ridge.

God stepped in to prevent the human race from falling under the sway of a single, absolute tyrant over all the earth. Where are you getting this from?
In my Bible, it says, "The LORD said, 'If as one people speaking the same language they have begun to do this, then nothing they plan to do will be impossible for them.'" (Gen. 11:6) It seems clear that in this story, God wanted to prevent people from realizing their full potential (just as he kicked Adam and Eve from the garden to prevent them from living forever).

"And they said, Go to, let us build us a city and a tower, whose top may reach unto heaven; and let us make us a name, lest we be scattered abroad upon the face of the whole earth." Mike said, "This sentence reveals the arrogance of the people of Babel. They sought a name for themselves rather than to honor the name of their creator, who is above all and whose name is worthy of all praise."

So, the people were united, peaceful, and working toward a common goal that wasn't a sycophantic genuflect...and God calls this arrogant and decides to ensure that people have to struggle through thousands of years of divisiveness, learning these languages, fighting over the confusion, and can only hope that if they're ever united again, this God won't take offense. Great plan, God! You're #1!

"God's gentleness in judging the rebels at Babel is a lesson for us today." It certainly is. It demonstrates that even the "gentleness" of this god ultimately serves to promote divisiveness, servility and confrontation.

God did not let man's rebellion run its full course ...thereby countering the free will he'd like us to pretend we have. Much in the way that he directly violated Pharaoh's free will, hardening his heart and forcing him to refuse to release the Israelites (so he could show off some of his most terrifying magic tricks).

He nipped the rebellion in its early stages so that humans would not hurt themselves too much.
Hmm, so he confounded languages, sowed divisiveness and confrontation throughout the world, instead of letting the cooperative society continue to work together - and it was all so we wouldn't hurt ourselves too much?

God stepped in to prevent the human race from falling under the sway of a single, absolute tyrant over all the earth. Actually, that should be "under the sway of ANY OTHER absolute tyrant". Your God is a jealous god, and hates competition.

As the original creator of human speech, God could easily rewire speech so that the evil speakers could no longer speak to one another. And yet, we're perfectly capable of working out translations and speaking multiple languages, thereby thwarting this cosmic genius' plans again.

By this simple act, God forced humanity to proceed down His chosen path: to resettle the earth by families. Proving that we have no free will in the system you accept. God dictates, we do, and if we almost manage to thwart his plans while he's napping, he'll come down, fuck some people up and get everything back on track.

God wants us to turn to Him, rather than relying on ourselves. And he's doing such a bang-up job of demonstrating why we should.

This is just one of a seven-part exploration of the Bible that I will present to all of you explaining the need for God and understanding His plan. Just for the sake of irony, I'm praying that you don't post the other 6 parts.

I will be there for the April 6th show. I'm pretty sure this was a joke, but just in case: No, you won't. Get your own show; I'd rather not piss off our listeners by allowing your pathetic apologetics any more time than a single phone call. However, you're welcome to call - just like everyone else.

Lastly: It reminded me of what Matt D said yesterday, "Your god, if he exists, isn't worthy of worship." A statement I stand by - and your own posts only add support to this position.

>my heart aches still.
(If) That Bible is true, all these people that follow your same thought process, Martin, will be in a horrific place forever.

My heart aches when I think of how many people are living in fear (and infecting their children with it) needlessly, simply because they have been drilled to believe in something fearful, but for which there is no compelling, objectively verifiable evidence. Instilling children with false fear and paranoid distrust, telling them they are depraved through and through, and that they aren't capable of self-reliance, of making even the slightest successful move or decision in their lives without a mental security blanket wrapped around them to keep them safe, and then telling them it's all out of love for them, for their own good, that you're doing this to them, is nothing I could ever respect. It's no different than calling a child stupid and worthless every day. And who wouldn't consider that poor parenting, if not outright emotionally abusive?

And by the way, I don't see how this discussion goes on, since there was, some time ago, a question of "what would it take to get you to believe in god," and I think there was pretty fair consensus that existence was direct manifestation, achieved repeatably, through objectively verifiable means. Did I miss where that was provided? Or did we just drop that and go back to arguing over something that still has not been shown to exist? Why is this conversation continuing, I have to ask, while the criteria for showing the object under discussion exists have still not been met? Shouldn't that be the first order of business since Dan is back on the blog?

Let me ask you tracieh, are you one that believes that building up self-esteem in a child is better than teaching them about Hell? Do you really believe that you are better at teaching than the Creator of the Universe? Are you serious(?)
Again it comes down to trusting God that He knows how to teach/raise, or trusting in people like tracieh that she knows more than God. I will stick with God for now, thanks. Either people are fallible and God is infallible, or tracieh is right; you all can choose. I have made my choice.

Let me ask you tracieh, are you one that believes that building up self-esteem in a child is better than teaching them about Hell? Do you really believe that you are better at teaching than the Creator of the Universe? Are you serious(?)

What kind of question is that? Honestly Dan, every time you post, I'd swear your IQ drops another 80 points. You're beyond negative integers now, into the realm of imaginary numbers. Seriously, this is like asking, "Do you honestly believe that raising up a child to be loved and respected as a human being is better than to engage in a ruthless campaign of psychological terrorism designed to break down their emotional well-being and turn them into bipolar, maladjusted mental wrecks, frightened even to think a single thought that contradicts the received dogma for fear of eternal torture? Are you serious!?"

Yes, we do think building self-esteem in a child (as a thing that must be earned through achievements and character) is better than filling their heads with false fears of a nonexistent hell, which is fundamentally no different than child abuse. People are also better at teaching children than the "Creator of the Universe," because there is no such thing as this "Creator of the Universe" that you believe in. Furthermore, the doctrine of Hell singlehandedly condemns Christianity as a deeply immoral belief system, because it proves Christianity can only compel loyalty through fear, as there are no verifiable facts to support its grandiose claims. See Dan, this is the bit you are too unintelligent to understand.
We've been trying to explain this extremely basic point to you for nearly a full year now, and your incurable stupidity has never allowed it to sink in. To wit: You think you can come here and persuade us with non-arguments that amount to nothing more than "Hell! Boo scary!" and "God this, God that," without ever establishing that either of these two things exist. You're putting the cart before the horse...over and over and over again. I know you can neither read nor think well, but go and read Kazim's latest post on "the Star Trek Rule," so that your tiny, tiny, tiny mind might comprehend the concept of the necessity of establishing the existence of the thing you're arguing for, instead of simply taking it as a given, when you try to talk to those who don't take it as a given. I have made my choice. That's great, Dan. So how about taking a long walk down a short pier and leaving everyone else well alone? One of the things that drove me away from Christianity was the realization that it was ridiculous to believe that my life on this planet is just a test so that some supreme being can determine whether or not I will suffer for eternity in the after life, with the primary criterion being whether or not I believe that the aforesaid supreme being impregnated a virgin Jewish teenage girl in the Galilee some 2,000 years ago and that the superboy that resulted from the union died for me and rose from the dead. Am I missing anything? If it exists and you're representative of the product of his teaching - he gets an 'F'. If God exists, he must be shaking his head wondering why only the least intelligent and dysfunctional of his human creations tend to be the ones eager to believe in and slavishly worship him. Somehow, something's not working. :-) "If sinners are converted by the intellect (the wisdom of men), they will fall away by the intellect. 
If they are merely argued into the faith, they will just as easily be argued out of it whenever a respected scholar reports that 'the bones of Jesus' have been found. However, if sinners are converted by 'the power of God,' they will be kept by the power of God. No intellectual argument will cause them to waver because they will know the life-changing reality of their conversion." If you really need it, I can back it up with a tremendous amount of scripture. lol

BTW I knew you were a fan of the deceitful self-esteem movement and it answers a great number of questions I have had about you. Knowing that you don't have any children, I wonder how your future kids will turn out when you tell them they are great when actually they aren't. I tell my children the truth, good or bad; I don't fill their heads with things that will push them towards self-grandeur. That is where the self-esteem movement people such as yourself will fail their kids.

"Yes, we do think building self-esteem in a child (as a thing that must be earned through achievements and character) is better than filling their heads with false fears" You just contradicted the self-esteem movement. The article I linked to said, "It seems that a growing body of research indicates that the self-esteem movement, which argued for praising intelligence rather than effort, may be hurting the kids it claims to help." If you care to see the destructiveness of the self-esteem movement, look at the mother who tells her child, after an American Idol audition, "Don't listen to those judges, that have over 30 years' experience in the industry, when they said your singing was like scratching a chalkboard. You sing wonderfully and you are the best no matter what anyone says. You are so smart, now sing to mama, again." Now that is a form of child abuse, and I feel sorry for your (future) children, if you find someone that can love that little angry man inside of you. I hope you do, do you?
The article nailed even you, Martin, square on the head, if you can humble yourself to actually listen to good advice: "(Highly aggressive, violent people happen to think very highly of themselves, debunking the theory that people are aggressive to make up for low self-esteem.)" Does it really matter how intelligent a child rapist is if he is raping children? He will be punished no matter how many degrees he has on the wall. Your intellect is flawed and God made sure of that. You must submit to God to understand. For your sake I hope you do someday.

If they are merely argued into the faith, they will just as easily be argued out of it whenever a respected scholar reports that 'the bones of Jesus' have been found. Um... if the bones of Jesus actually are found, doesn't that mean that Jesus didn't rise from the dead, so isn't dropping Christianity the right thing to do? Or are you saying that skeptics and atheists can be swayed into believing anything by any smooth talker with a set of letters after his name? If so, allow me to present a few counterexamples: Francis Collins, Ken Miller, and C.S. Lewis, to name but a few. Heck, I'll throw in William Dembski, Michael Behe, and Jonathan Wells for free. They're all believers (in the past tense, in Lewis's case), and all have written about God. The reason they haven't convinced us is not that they lack degrees, but rather that they haven't presented any evidence. I've got $50 that says that when that respected scholar shows up and says he's found the bones of Jesus, the first question on Martin's lips will be "Oh yeah? What evidence does he have?"

You just contradicted the self-esteem movement. Let's say she has. So what? Do you think tracieh has pledged allegiance to "the self-esteem movement" and feels obligated to agree with it in everything, on pain of excommunication? From what little I know of her, she seems quite capable of thinking for herself.
If sinners are converted by the intellect (the wisdom of men), they will fall away by the intellect. If they are merely argued into the faith, they will just as easily be argued out of it whenever a respected scholar reports that 'the bones of Jesus' have been found.... Blah blah blah...all of which is merely an admission that your beliefs lack intellectual rigor (let alone any intellectual content), and are merely appeals to emotion: thus not worth anyone's time. Actually, I didn't forget the last time you trotted that one out, Dan. (And the fact that you think this point is some kind of respectable argument for Christianity borders on absurdist comedy.) Having dealt with you for nearly a year, all of us here know that your S.O.P. is to come here and parrot the same non-arguments time and again no matter how often we torpedo them. You're basically firing blanks, and have been for ages.

You just contradicted the self-esteem movement. The article I linked to said "It seems that a growing body of research indicates that the self-esteem movement, which argued for praising intelligence rather than effort, may be hurting the kids it claims to help." Well, doofus, if you had paused to think before typing away, obviously you should have reached the conclusion that, if my views contradict those of some established "self-esteem movement," then obviously, that means my views on the subject of self-esteem are more realistic and sensible than those of this silly "movement," and were not meant to support or reinforce said "movement" in the first place. Right? Like, duh. I mean, that should have been obvious, considering that I never once mentioned anywhere in my post that I was a proponent of any "self-esteem movement" that foolishly encourages the building of unearned self-esteem. Not sure why this confused you. Perhaps, since you never come up with your own arguments, it throws you off when I and other people here do.
Like so many fundies, you think in black-and-white, shallow terms. Either someone supports the "hellfire and brimstone" Christian authoritarian approach to parenting, or they subscribe to some mealy-mouthed, limp-wristed New Agey "self-esteem movement." That intelligent people might be freethinkers, who reach conclusions about things on their own through reason, doesn't even show up on your radar. As always, your intellectual limitations trip you up.

Knowing that you don't have any children, I wonder how your future kids will turn out when you tell them they are great when actually they aren't. Well, based on what I've just explained to you, obviously, if I had kids and they were being bad, I wouldn't tell them they were great, would I? So there you go. Seriously, Dan, anyone with 6th grade reading skills ought to have understood me. You'll have to try to do better and read what I and others write more carefully next time, so as to avoid building such embarrassing straw men.

PS: As much as I think unearned "self esteem" is a bogus thing to teach kids, it's still infinitely less abusive than psychologically terrorizing them with threats of eternal torture after death.

PLEASE NOTE: The Atheist Experience has moved to a new location, and this blog is now closed to comments. To participate in future discussions, please visit http://www.freethoughtblogs.com/axp.

This blog encourages believers who disagree with us to comment. However, anonymous comments are disallowed to weed out cowardly flamers who hide behind anonymity. Commenters will only be banned when they've demonstrated they're nothing more than trolls whose behavior is intentionally offensive to the blog's readership.

Email policy: All emails sent to the program at the tv[at]atheist-community[dot]org address become the property of the ACA, and the desire for a reply is assumed. Note that this reply could take the form of a public response on the show or here on the blog.
In those cases, we will never include the correspondent's address, but will include names unless we deem it inappropriate. If you absolutely do not wish for us to address your email publicly, please include a note to that effect (like "private response only" or "not for publication" or "if you post this on the blog please don't use my name") somewhere in the letter.

The Atheist Experience is a weekly live call-in television show sponsored by the Atheist Community of Austin. This independently-run blog (not sponsored by the ACA) features contributions from current and former hosts and co-hosts of the show.
Q: What is an example of an SVM kernel where one implicitly uses an infinite-dimensional space?

Reading the Wikipedia article about SVMs, I noticed

More formally, a support vector machine constructs a hyperplane or set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks.

I continued with "A Tutorial on Support Vector Machines for Pattern Recognition" by Christopher J.C. Burges and stumbled over the following (please note that $x \cdot y$ is the dot product):

Now suppose we first mapped the data to some other (possibly infinite dimensional) Euclidean space $\mathcal{H}$, using a mapping which we will call $\Phi$: $$\Phi : \mathbb{R}^d \rightarrow \mathcal{H}$$ Then of course the training algorithm would only depend on the data through dot products in $\mathcal{H}$, i.e. on functions of the form $\Phi(\mathbf{x}_i)\cdot \Phi(\mathbf{x}_j)$. Now if there were a “kernel function” $K$ such that $K(\mathbf{x}_i, \mathbf{x}_j) = \Phi(\mathbf{x}_i)\cdot\Phi(\mathbf{x}_j)$, we would only need to use $K$ in the training algorithm, and would never need to explicitly even know what $\Phi$ is. One example is $$K(\mathbf{x}_i, \mathbf{x}_j ) = e^{- \| \mathbf{x}_i - \mathbf{x}_j\|^2 / 2 \sigma^2}$$ In this particular example, $\mathcal{H}$ is infinite dimensional, so it would not be very easy to work with $\Phi$ explicitly.

I have three questions, which are closely related; I am happy with any answer that addresses any of them:

1. Why would $\mathcal{H}$ be infinite-dimensional in this case?
2. What is $\Phi$ in this case?
3. In other sources I read that the kernel function has to be positive definite. Why?
A: To understand the first two questions, let's consider $x, y \in \mathbb{R}^2, x=(x_1,x_2), y=(y_1, y_2)$ and examine the polynomial kernel of degree 2: $$K(x,y)=(x^Ty)^2$$ Which can be rewritten as: $$K(x,y) = (x_1y_1 + x_2y_2)^2 = x_1^2y_1^2 + 2x_1y_1x_2y_2 + x_2^2y_2^2$$ We know that the kernel function is $K(x,y)=\Phi(x)^T\Phi(y)$, therefore we try to find a feature map $\Phi$ that will be equivalent to the above. Let $$\Phi(x)=(x_1^2, \sqrt{2}x_1x_2, x_2^2)$$ From this, we can see that $\Phi(x)^T\Phi(y) = x_1^2y_1^2 + 2x_1y_1x_2y_2 + x_2^2y_2^2$, which is the kernel function! Notice that by using $\Phi$ we mapped the input vectors from $\mathbb{R}^2$ to $\mathbb{R}^3$, therefore when we compute $K(x,y)$, this mapping will be implicitly performed. Now, going back to your example (the RBF kernel). Let $\gamma = \frac{1}{2\sigma^2}$ and let's assume $x \in \mathbb{R}^1$: $$K(x_i, x_j) = e^{-\gamma||x_i - x_j||^2} = e^{-\gamma(x_i - x_j)^2} = e^{-\gamma x_i^2 + 2\gamma x_i x_j - \gamma x_j^2}$$ Using the Taylor expansion of the exponential function for $e^{2\gamma x_i x_j}$ we can rewrite the above as: $$ K(x_i, x_j) = e^{-\gamma x_i^2-\gamma x_j^2} \left(1 + \frac{2\gamma x_i x_j}{1!} + \frac{(2\gamma x_i x_j)^2}{2!} + \frac{(2\gamma x_i x_j)^3}{3!} + \ldots \right)$$ $$ = e^{-\gamma x_i^2-\gamma x_j^2} \left(1 \cdot 1 + \sqrt{\frac{2\gamma}{1!}}x_i \cdot \sqrt{\frac{2\gamma}{1!}}x_j + \sqrt{\frac{(2\gamma)^2}{2!}}x_i^2 \cdot \sqrt{\frac{(2\gamma)^2}{2!}}x_j^2 + \sqrt{\frac{(2\gamma)^3}{3!}}x_i^3 \cdot \sqrt{\frac{(2\gamma)^3}{3!}}x_j^3 + \ldots \right) = \Phi(x_i)^T \Phi(x_j)$$ And, explicitly the feature map will be: $$\Phi(x) = e^{-\gamma x^2} \left[1, \sqrt{\frac{2\gamma}{1!}}x,\sqrt{\frac{(2\gamma)^2}{2!}}x^2, \sqrt{\frac{(2\gamma)^3}{3!}}x^3, \ldots\right]$$ Which is an infinite-dimensional vector. As you said, in kernel methods we will compute inner products in feature space without explicitly having to define the mapping $\Phi$. 
Avoiding this explicit mapping is the famous "kernel trick".
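A short numerical check of both constructions above may help. This is a hypothetical illustration script (not from the original answer): it verifies that the degree-2 polynomial kernel equals the dot product under the explicit map $\Phi$, and that a truncated version of the infinite-dimensional RBF feature map already reproduces the kernel value to machine precision for scalar inputs.

```python
import numpy as np
from math import factorial

def poly_kernel(x, y):
    """Degree-2 polynomial kernel K(x, y) = (x . y)^2."""
    return np.dot(x, y) ** 2

def poly_feature_map(x):
    """Explicit map Phi(x) = (x1^2, sqrt(2) x1 x2, x2^2) from the answer above."""
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

def rbf_kernel(xi, xj, gamma):
    """RBF kernel for scalar inputs, K = exp(-gamma (xi - xj)^2)."""
    return np.exp(-gamma * (xi - xj) ** 2)

def rbf_feature_map(x, gamma, terms=30):
    """First `terms` coordinates of the infinite-dimensional map:
    Phi(x)_n = exp(-gamma x^2) * sqrt((2 gamma)^n / n!) * x^n."""
    n = np.arange(terms)
    coeffs = np.array([np.sqrt((2 * gamma) ** k / factorial(k)) for k in n])
    return np.exp(-gamma * x ** 2) * coeffs * x ** n

# Finite case: kernel value and explicit feature-space dot product agree.
x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
assert np.isclose(poly_kernel(x, y), poly_feature_map(x) @ poly_feature_map(y))

# "Infinite" case: truncating the series after 30 terms already matches.
gamma, xi, xj = 0.5, 0.7, -0.3
assert np.isclose(rbf_kernel(xi, xj, gamma),
                  rbf_feature_map(xi, gamma) @ rbf_feature_map(xj, gamma))
```

The rapid convergence of the truncation is exactly why the series representation is well defined: the coefficients $\sqrt{(2\gamma)^n/n!}$ decay factorially, so the infinite-dimensional vector has finite norm.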
{
  "name": "NETGEAR JNR3000",
  "author": "fofa",
  "version": "0.1.0",
  "matches": [
    { "search": "headers", "text": "NETGEAR JNR3000" }
  ]
}
nidaqmx.task.ai_channel ======================= .. automodule:: nidaqmx._task_modules.channels.ai_channel :members: :inherited-members: :show-inheritance:
PROTECT YOUR BUSINESS WITH FLAT ROOF REPAIR It doesn’t matter if you are the owner of a condominium, restaurant, office building, or shopping plaza: the last thing you want to worry about is roof leaks that cost you money and time. These are the types of headaches that no business owner wants. The simple solution is to protect your business with flat roof repair before trouble begins. There are a number of ways to accomplish this. Schedule a yearly inspection at the beginning of January. This inspection comes after hurricane season has passed and is the best time to assess any damage that may have happened since the last inspection. If any flat roof repair is required, you will know exactly what to expect and how much to budget for in the new year. Consider a roof maintenance program. These programs offer a number of options, ranging from yearly restoration of part of the roof to any repairs that are needed. Speaking with one of our roofing experts will enable you to choose the program that is right for your current needs. Seek out repairs the minute any sign of leakage occurs. A leaky roof can mean loss of business and loss of goods if inside damage occurs. Damaged flooring, furniture, electronics, and merchandise can add up rather quickly. As a business owner, you want to protect your property and save money at the same time. Finally, if flat roof repair will not solve the problem, replacing your current roof with a new roof that is under full warranty may be the best and most economical option. How can purchasing a new roof be economical? If you take into consideration the cost of continual repairs and possible damage bills, it may be the least expensive route in the long run. In many instances, insurance premiums may receive deductions when a roof is replaced. By speaking with the experts at SK Quality Roofing, all your questions about flat roof repair and maintenance for your business can be answered. Whatever questions you have, simply ASK SK. 
We provide the answers and service you desire.
Who Killed Dr Bogle and Mrs Chandler?

Who Killed Dr Bogle and Mrs Chandler? is an Australian documentary film about the mysterious deaths of Dr Gilbert Bogle and Mrs Margaret Chandler in Sydney, Australia in 1963. Although it was assumed the couple were murdered, police investigators could find no evidence that it was actually murder. The documentary, directed and written by Australian documentary film maker Peter Butt, presents unique evidence to suggest the couple died from hydrogen sulphide poisoning emanating from a river.

Summary

When the half-naked bodies of brilliant physicist Dr Gilbert Bogle and his lover, Mrs Margaret Chandler, were found in bizarre circumstances on a Sydney riverbank in 1963, it set into play an unprecedented forensic investigation. Autopsies offered little clue as to how the couple died, only that there were signs of a rapidly acting poison. Despite assistance from the FBI and Scotland Yard, the poison was never identified. At the end of a long and controversial coronial inquest, no cause of death, killer or motive could be identified. In the ensuing years, scores of tabloid theories have been put forward, from LSD to Cold War assassinations. But in the minds of many, including the police, Margaret Chandler’s husband, Geoffrey, was the likely culprit. Four decades later, this explosive documentary reveals startling new scientific evidence - evidence so powerful the police gave filmmaker Peter Butt unprecedented access to their forensic records.

Cast

Reception

Television

The film premiered on the Australian Broadcasting Corporation on 7 September 2006. 1.8 million people in the five major capitals tuned in, plus an estimated 700,000 viewers in the other cities and regional areas, making it the most-watched Australian documentary ever screened on the network, as well as the most-watched program in 2006 on the ABC. It was the number one program in Sydney, Melbourne, Adelaide, Perth and Brisbane. 
Awards and recognition

Who Killed Dr Bogle and Mrs Chandler? was well received by television critics, scientists, and politicians and won Most Outstanding Documentary at the 2007 TV Week Logies.

External links

Movie Trailer and Book Official Website
IMDb Entry

Category:Australian independent films Category:Australian television films Category:Australian documentary films Category:Australian films Category:English-language films Category:2006 television films Category:Documentary films about water and the environment Category:Films scored by Guy Gross
Diagnosis and morbidity of placenta accreta. To examine the diagnostic precision of ultrasound examination for placenta accreta in women with placenta previa and to compare the morbidity associated with accreta to that of previa alone. This was a retrospective cohort study of all women with previa with/without accreta examined at the University of California, San Francisco (UCSF) between 2002 and 2008. The sensitivity, specificity, negative predictive value (NPV) and positive predictive value (PPV) of ultrasound examination for the diagnosis of accreta were calculated and compared with results from similar studies in the literature. Univariable analysis was used to compare clinical outcomes. The PPV of an ultrasound diagnosis of accreta was 68% and NPV was 98%. Ultrasound had a sensitivity of 89.5%. Compared with previa alone, accreta had an odds ratio (OR) of 89.6 (95% CI, 19.44-412.95) for estimated blood loss > 2 L, an OR of 29.6 (95% CI, 8.20-107.00) for transfusion and an OR of 8.52 (95% CI, 2.58-28.11) for length of hospital stay > 4 days. Placenta accreta is associated with greater morbidity than is placenta previa alone. Ultrasound examination is a good diagnostic test for accreta in women with placenta previa. This is consistent with most other studies in the literature.
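As an illustration of how the reported diagnostic metrics relate to a 2 × 2 table, the sketch below uses hypothetical counts chosen only to be consistent with the reported PPV of 68%, NPV of 98%, and sensitivity of 89.5%; they are not the study's actual data.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 diagnostic-test metrics for an ultrasound diagnosis."""
    return {
        "sensitivity": tp / (tp + fn),  # accreta cases flagged by ultrasound
        "specificity": tn / (tn + fp),  # previa-only cases correctly cleared
        "ppv": tp / (tp + fp),          # accreta present given a positive scan
        "npv": tn / (tn + fn),          # accreta absent given a negative scan
    }

# Hypothetical counts (NOT the study's table), chosen to reproduce the
# reported PPV = 0.68, NPV = 0.98, and sensitivity = 0.895:
m = diagnostic_metrics(tp=17, fp=8, tn=98, fn=2)
```

The asymmetry between the high NPV and moderate PPV reflects that false positives (previa without accreta flagged as accreta) are more common than false negatives in such a cohort.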
Introduction {#sec1} ============ Retinal blood vessel attributes, such as the width, location, obscuration, integrity, and tortuosity, are commonly considered important features in assessments of optic disc swelling.[@bib1] For example, in the modified Frisén grading system, the number of obscured vessel segments leaving the optic disc is considered one of the key features for grading papilledema from mild to severe using fundus photographs.[@bib2]^--^[@bib5] Echegaray et al.[@bib6] have also shown that the measurement of vessel discontinuity can be helpful for a machine-learned Frisén grading system to achieve a substantial agreement between its output and the human expert\'s decision. [Figure 1](#fig1){ref-type="fig"}a shows three example fundus photographs with mild, moderate, and severe optic disc swelling (from the top to the bottom row in the figure). As indicated with yellow arrows, the vessel attributes change substantially on the swollen optic disc among these cases. ![Comparisons of the fundus photograph and OCT pairs with mild optic disc swelling (*top row*: a1, b1, c1), moderate swelling (*middle row*: a2, b2, c2), and severe swelling (*bottom row*: a3, b3, c3). *Left column* (a1, a2, a3) shows fundus photographs. *Middle column* (b1, b2, b3) shows the OCT central B-scans with automated layer segmentation. *Right column* (c1, c2, c3) shows the OCT RPE en-face images. Note: In cases of swelling, the *yellow arrows* indicate vessel attribute changes in (a), the *cyan lines* in (c) represent the location of the central B-scans, and the *green arrows* in (b) and (c) indicate the matched shadow regions.](tvst-9-2-17-f001){#fig1} Spectral-domain optical coherence tomography[@bib7]^--^[@bib10] (OCT) is another imaging modality that is regularly used for assessing optic disc swelling. 
To date, most OCT-based measurements that have been used in the clinic and in research studies in cases of optic disc swelling (such as the retinal nerve fiber layer as well as the total retinal layer thicknesses, the optic nerve head \[ONH\] volume, and Bruch\'s membrane shapes)[@bib5]^,^[@bib8]^,^[@bib11]^--^[@bib15] do not incorporate vessel information. However, especially for purposes of developing automated systems for assessing the severity and causes of optic disc swelling, having robust automated approaches for the OCT-based segmentation of retinal vessels is needed not only for computation of vessel-based features but also as one of the preprocessing steps for computation of other features. For example, removing retinal vessels is often involved as a part of preprocessing for further retinal texture analyses, such as retinal fold analysis.[@bib16]^,^[@bib17] Having an accurate vessel tree location map can substantially reduce the false-positive rate for an automated method to detect retinal folds in OCT.[@bib17] Furthermore, vessels are often an important structure used for the alignment of images (e.g., color fundus to OCT or OCT images over time), which can be used for multimodal analyses and regional longitudinal analyses. Thus, the motivation for an OCT-based vessel segmentation in cases of optic disc swelling includes the need for the direct computation of vessel-related features in OCT (especially in cases where fundus photography is not available) for direct measures of severity or for differentiation, the need for additional contextual information in the development of techniques for the automated segmentation and analysis of other structures (e.g., for fold/wrinkle detection) that may help in differentiation, and the need for an alignment technique for region-based longitudinal analyses. 
Although OCT has been widely used for capturing cross-sectional information of the retina, observing the vessels in the common B-scan orientation is not straightforward (as shown in [Fig. 1](#fig1){ref-type="fig"}b). A common method to display the vessels in OCT is to create an en-face view by projecting the pixel intensity values within the retinal pigment epithelium (RPE) complex along each A-scan.[@bib18]^--^[@bib20] In cases without optic disc swelling, projection at the level of the RPE works well given the high contrast between the bright RPE and inner-retinal vessel shadows. However, in cases of optic disc swelling, the presence of swelling can cause image shadows, making the task of vessel segmentation much more challenging. [Figure 1](#fig1){ref-type="fig"}c continues to show the RPE en-face images from the same three patients; it is noticeable that the challenge of delineating the vessels for both manual and automated approaches increases when the image shadow (from the swollen disc) grows. However, based on our prior preliminary experience with the segmentation of vessels in the OCT scans of mice whereby using multiple en-face images was advantageous over a single projection image[@bib21] as well as our observation that the vessels could sometimes be seen more prominently in layers other than the RPE layer in cases of optic disc swelling in humans, we hypothesized that simultaneously considering vessel information from various projected retinal layers in cases of optic disc swelling would substantially increase the vessel visibility and enable a better segmentation. Thus, instead of relying on a single projection image at the level of the RPE, we have developed a deep-learning approach (using a modification of a U-Net[@bib22] architecture) to simultaneously input three OCT en-face images from the RPE complex, inner retina, and total retina and to output an OCT vessel probability map ([Fig. 2](#fig2){ref-type="fig"}). 
Although deep neural networks are now among the most common implementations of vessel segmentation algorithms and have shown strong performance,[@bib23]^--^[@bib28] no study has specifically focused on OCT in cases of optic disc swelling. Both quantitative and qualitative comparisons between manual tracings in en-face images from various retinal layers and the automated segmentation results are performed. ![Architecture of the proposed deep-learning approach. Three image patches (32 × 32 pixels) are separately extracted from the OCT en-face images of the RPE complex, the inner retina, and the total retina. Next, these three patches are concatenated to each other at the first layer in the network. The numbers in *black* and in *gray* at each block represent the number of channels and dimensions at the current network layer, and the colors of the *arrows* represent different network operations.](tvst-9-2-17-f002){#fig2} Methods {#sec2} ======= Training/Testing Data {#sec2-1} --------------------- From 122 patients with various causes of optic disc swelling who had been recruited for research use of their clinical imaging data from the Neuro-Ophthalmology Clinic at the University of Iowa, we had previously analyzed the volumetric OCT imaging data of 22 of these patients for a preliminary fold detection image analysis approach.[@bib17] To best ensure a true separation of training and testing sets (whereby evaluation on the testing set is limited to untouched data), our training set was selected from the 22 previously analyzed images. More specifically, of the 22 patients previously analyzed, we selected the 18 patients who had (1) both volumetric ONH-centered OCT scans (Zeiss Cirrus, Carl Zeiss Meditec, Inc., Dublin, CA, USA) and the corresponding fundus photographs (Topcon Medical Systems, Inc.) available at the same visit and (2) an intact retinal structure in the OCT scans to allow the automated retinal layer segmentation[@bib12] to process correctly. 
The training data set was used for the purposes of designing the neural network architecture, deciding the hyperparameters, and training the neuron weights. For the independent testing data set, an additional 18 pairs of OCT scans and fundus photographs of 18 patients with optic disc swelling collected from the same set of 122 patients having optic disc swelling were included by matching the total retinal volume distribution with the training data set. The reason for this volume-matching process was to maintain a similar vessel visibility (which can be substantially affected by the degree of optic disc swelling, as shown in [Fig. 1](#fig1){ref-type="fig"}) in the OCT en-face images within different patients between the training and testing data sets. [Figure 3](#fig3){ref-type="fig"} shows the data distributions (by ONH volume) of the training and testing data sets. For the causes of optic disc swelling in the training data set, among the 18 training patients, 13 had papilledema, 1 had nonarteritic anterior ischemic optic neuropathy (NAION), and 4 had other causes of optic disc swelling. For the 18 testing participants, 15 had papilledema, 2 had NAION, and 1 had another cause of optic disc swelling. The training data set consisted of 16 women and 2 men with a mean ± standard deviation (SD) age of 36.4 ± 11.4 years while the volume-matched testing data set consisted of 17 women and 1 man with a mean ± SD age of 31.6 ± 12.9 years. In total, the 36 patients had a mean ± SD age of 34 ± 12.4 years. ![Data distribution (by ONH volume) of the training and testing data sets (shown in *pink* and *green bars*, respectively). There are 36 patients in total: 18 in the training data set and the other 18 in the testing data set. The severity of the disc swelling has been matched between both data sets based on the ONH volumes.](tvst-9-2-17-f003){#fig3} All the fundus photographs were obtained using a retinal camera (TRC-50DX; Topcon Medical Systems, Inc.) 
with 2392 × 2048 pixels. Each OCT scan (Cirrus; Carl Zeiss Meditec, Dublin, CA, USA) was centered at the ONH and had 200 × 200 × 1024 voxels covering (approximately) 6 × 6 × 2 mm^3^. Note that the fundus photographs in this study were only used for helping to create ground truth images (as discussed in the section on the manual tracing process) and were registered and cropped with respect to the OCT en-face images. The study protocol was approved by the University of Iowa\'s Institutional Review Board and adhered to the tenets of the Declaration of Helsinki. En-Face Images from Multiple Retinal Layers (Inputs to Deep-Learning Approach) {#sec2-2} ------------------------------------------------------------------------------ A customized three-dimensional graph-based algorithm[@bib12] was utilized to segment the swollen retinal layers ([Fig. 1](#fig1){ref-type="fig"}b) as a part of the preprocessing. Then, based on the segmentation results, en-face images were generated of the RPE complex (between cyan and green surfaces in [Fig. 1](#fig1){ref-type="fig"}b), the inner retina (between red and yellow surfaces in [Fig. 1](#fig1){ref-type="fig"}b), and the total retina (between red and green surfaces in [Fig. 1](#fig1){ref-type="fig"}b) by averaging the pixel intensities within the interested layers along each A-scan in the OCT. [Figure 4](#fig4){ref-type="fig"} demonstrates the vessel visibility of the en-face images from these three layers with mild, moderate, and severe optic disc swelling. As shown in [Figure 4](#fig4){ref-type="fig"}, retinal vessels away from the swollen optic disc appear more distinct in the RPE en-face image given the high contrast between the RPE and the vessel shadow, whereas the retinal vessels in the swollen regions appear more distinct in the en-face images of the inner retina and total retina. All three of these images were simultaneously used as inputs to the deep-learning network, as discussed in the next section. 
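The en-face projection described above (averaging intensities between two segmented surfaces along each A-scan) can be sketched as follows. This is an illustrative reimplementation, not the authors' code; it assumes the layer surfaces are given as per-A-scan depth indices, and a real implementation would likely vectorize the loops.

```python
import numpy as np

def en_face_image(volume, top_surface, bottom_surface):
    """Average the intensities along each A-scan between two segmented
    surfaces (given as per-A-scan depth indices) to form an en-face image."""
    X, Y, _ = volume.shape
    out = np.zeros((X, Y))
    for i in range(X):
        for j in range(Y):
            z0, z1 = int(top_surface[i, j]), int(bottom_surface[i, j])
            out[i, j] = volume[i, j, z0:z1 + 1].mean()
    return out

# Tiny synthetic volume for illustration; a real ONH-centered Cirrus scan
# would be 200 x 200 x 1024 voxels, with surfaces from layer segmentation.
vol = np.arange(2 * 2 * 4, dtype=float).reshape(2, 2, 4)
top = np.zeros((2, 2), dtype=int)          # upper bounding surface
bot = np.ones((2, 2), dtype=int)           # lower bounding surface
enface = en_face_image(vol, top, bot)      # mean of the first two depth samples
```

Calling the same function with the RPE, inner-retinal, or total-retinal surface pairs yields the three en-face views used as network inputs.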
Furthermore, all three of these images (as well as the registered fundus photograph) were used to create the reference "ground truth" image used for training and evaluation. ![Demonstrations of vessel visibility of en-face images from different retinal layers in mild (*top row*), moderate (*middle row*), and severe (*bottom row*) optic disc swelling (continued from [Fig. 1](#fig1){ref-type="fig"}): *left column*, the RPE en-face image; *middle* *column*, the inner retina en-face image; and *right column*, the total retina en-face image. The *yellow arrows* indicate the vessel visibility changes.](tvst-9-2-17-f004){#fig4} Architecture of the Proposed Deep-Learning Neural Network {#sec2-3} --------------------------------------------------------- As was shown in [Figure 2](#fig2){ref-type="fig"}, our proposed deep-learning network is designed to take (patches of) the three en-face images described above as input and output a pixel-based vessel probability map (with values close to 1 indicating a high vesselness probability and values close to 0 indicating a low vesselness probability). The high-level architecture of our proposed deep-learning network is based on a well-known U-shaped deep neural network (U-Net),[@bib22] with modifications to allow for three inputs rather than one and modifications to the number of layers. More specifically, the architecture of the proposed deep-learning approach ([Fig. 2](#fig2){ref-type="fig"}) contains a total of 16 neural layers, including 1 concatenation layer, 13 convolution layers, and 2 max-pooling layers. 
The proposed approach is designed to obtain image features in different resolutions by passing the concatenated input image patches through a contracting path (i.e., the first half, which repeatedly uses a combination of convolutional layers, rectified linear units \[ReLU\], and a max-pooling layer) and then an "up-sampling" path (i.e., the second half, which repeatedly uses a combination of convolutional layers, ReLU, and "up-convolutional" layers). Moreover, the corresponding feature maps between both paths are also concatenated in different resolutions. For the input of the proposed approach, location-matched image patches (size: 32 × 32 pixels) were first extracted from the three input en-face images to reduce the computational time as well as computer memory. At the end of the proposed approach, a soft-max layer, which is a 1 × 1 convolution layer, was applied to compute the probability value of the retinal vessel at each corresponding pixel location in the input image patch coordinates. For each patient, these location-matched image patches slid (one pixel each time) through the entire en-face image dimensions, and the outputted small vessel probability maps from all the image patches were stitched together (by averaging all the overlapping regions) to form a complete vessel probability map with the same coordinates as the input en-face images. More details about the deep neural network and its hyperparameters are described in the [Appendix](#sec5){ref-type="sec"}. 
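The patch-based inference and stitching scheme can be sketched as follows. This is a hypothetical reimplementation: `predict_patch` stands in for the trained network (which actually consumes three concatenated 32 × 32 en-face patches), and the loops mirror the one-pixel stride and overlap averaging described in the text.

```python
import numpy as np

def stitch_probability_map(enface_shape, predict_patch, patch=32, stride=1):
    """Slide a patch window across the en-face grid (one pixel at a time by
    default), collect the per-patch network outputs, and average all
    overlapping outputs into a single full-size probability map."""
    H, W = enface_shape
    acc = np.zeros((H, W))   # running sum of patch predictions per pixel
    cnt = np.zeros((H, W))   # number of windows covering each pixel
    for r in range(0, H - patch + 1, stride):
        for c in range(0, W - patch + 1, stride):
            acc[r:r + patch, c:c + patch] += predict_patch(r, c)
            cnt[r:r + patch, c:c + patch] += 1.0
    return acc / cnt

# Stand-in "network" returning a constant probability patch, for illustration.
prob_map = stitch_probability_map((64, 64), lambda r, c: np.full((32, 32), 0.5))
```

Averaging the overlapping windows acts as a mild smoothing ensemble: interior pixels are predicted up to `patch**2` times from different spatial contexts.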
Manual Tracings of Retinal Blood Vessels and Ground Truth Images {#sec2-4} ---------------------------------------------------------------- In order to compare how a human expert would segment the vessels with access to various combinations of the input en-face images with the results from the proposed deep-learning-based approach, all the en-face images in both training and testing data sets were independently traced in three separate stages (by J-KW): stage I, referring only to the RPE en-face image ([Table 1](#tbl1){ref-type="table"}, column 1); stage II, referring to the combination of the RPE and inner-retina en-face images ([Table 1](#tbl1){ref-type="table"}, columns 1--2); and stage III, referring to the combination of the RPE, inner-retinal, and total-retinal en-face images ([Table 1](#tbl1){ref-type="table"}, columns 1--3). Furthermore, to serve as the overall reference standard (also known as the "ground truth") for training and evaluation purposes, the images of the retinal vessels were again separately created by the same expert not only using all the RPE + inner-retinal + total-retinal en-face images but also referring to the registered ONH-centered fundus photographs to obtain the most vessel information ([Table 1](#tbl1){ref-type="table"}, columns 1--4). Also note that the proposed deep-learning approach output is also shown ([Table 1](#tbl1){ref-type="table"}, row 5) for a comparison. More specific details regarding this manual-tracing process are provided in the [Appendix](#sec5){ref-type="sec"}. ###### Inputs and Outputs for Manual Tracing (MT) Stages I, II, and III and the Proposed Deep-Learning Approach ---------------------------- ![](tvst-9-2-17-fx001.jpg) ---------------------------- The *green*, *yellow*, *pink*, *red*, and *cyan* vessels overlaid images show the outputs of MT stages I, II, and III; the ground truth; and the proposed approach. The *black* and *white* binary images are also shown at the next column to help visualization. 
Overview of Evaluation Approach {#sec2-5} ------------------------------- In the training process, leave-one-subject-out cross-validation (for a total of 18 training participants, 17 participants were used for training, 1 patient for validation, and then rotating the selected patient until all the patients had been tested) was used to help design and decide the architecture (using area under the receiver operating characteristic curve, AUC, as the evaluation metric) and hyperparameters in the proposed method. Next, based on the decided network, all the 18 patients were trained together to result in the final proposed deep neural network. The separate testing data set of 18 patients with matched ONH volumes was used with the quantitative measurements (described below) to compare the performances among the manual tracing stages I, II, and III and the proposed deep-learning approach. Quantitative Evaluation Measurements {#sec2-6} ------------------------------------ Pixel-based evaluation metrics were used to compare each ground truth binary image (0 = background and 1 = vessel object) to the binary results from each of the three OCT-based manual tracing stages, to the output probability map from the proposed deep-learning method, and to a binarized version (using Otsu\'s thresholding algorithm[@bib29]) of the proposed deep-learning method. More specifically, given a ground truth image and a corresponding binary map from another approach (e.g., an OCT-based manual tracing stage or a thresholded version of the output probability map), the true positive (TP), false positive (FP), true negative (TN), false negative (FN), and the total number of pixels (K = TP + FP + TN + FN) can be computed. 
We correspondingly computed the area under the receiver operating characteristic curve (AUC) for all approaches (manual tracings from the three stages, the original probability map, and the Otsu-thresholded binary map) to measure the TP rate against the FP rate across all possible threshold values. We also computed the average precision (AP) for all approaches to quantify the relationship between the precision (P) and recall (R) across all possible threshold values; $\mathrm{AP} = \sum_{n}\left( R_{n} - R_{n - 1} \right)P_{n}$, where $R = \mathrm{TP}/(\mathrm{TP} + \mathrm{FN})$, $P = \mathrm{TP}/(\mathrm{TP} + \mathrm{FP})$, and $n$ is the $n$th threshold value. For the binary-only results (manual tracings and the Otsu-thresholded probability map but not the original probability map), we also computed the accuracy (ACC) as the ratio of correctly classified pixels to the total number of pixels in the OCT en-face image; $\mathrm{ACC} = (\mathrm{TP} + \mathrm{TN})/K$. In addition, for all approaches, we computed the mean squared error (MSE) as a measure of the label distance between the approaches and the ground truth, $\mathrm{MSE} = \frac{1}{K}\sum_{\ell = 0}^{K - 1}\left( \hat{y}_{\ell} - y_{\ell} \right)^{2}$, where $\hat{y}_{\ell}$ and $y_{\ell}$ represent the predicted label value at pixel location $\ell$ from the approach and the truth label, respectively; and the coefficient of determination ($R^2$ score) to estimate how well the approach agrees with the ground truth in the sense of regression, $R^{2} = 1 - \frac{\sum_{\ell = 0}^{K - 1}\left( \hat{y}_{\ell} - y_{\ell} \right)^{2}}{\sum_{\ell = 0}^{K - 1}\left( \overline{y} - y_{\ell} \right)^{2}}$, where $\overline{y} = \frac{1}{K}\sum_{\ell = 0}^{K - 1}y_{\ell}$. 
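As a sketch (not the authors' evaluation code), the pixel-based metrics above can be reimplemented in a few lines of NumPy. Here `auc_score` uses the rank-sum (Mann-Whitney) formulation of the AUC, which is equivalent to integrating the ROC curve over all thresholds.

```python
import numpy as np

def auc_score(y_true, y_prob):
    """AUC via the rank-sum (Mann-Whitney) formulation; ties get mean ranks."""
    ranks = np.empty(len(y_prob))
    ranks[np.argsort(y_prob)] = np.arange(1, len(y_prob) + 1)
    for v in np.unique(y_prob):                 # average ranks of tied scores
        ranks[y_prob == v] = ranks[y_prob == v].mean()
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def average_precision(y_true, y_prob):
    """AP = sum_n (R_n - R_{n-1}) P_n over descending score thresholds."""
    y = y_true[np.argsort(-y_prob)]
    tp = np.cumsum(y)
    precision = tp / np.arange(1, len(y) + 1)
    recall = tp / y.sum()
    prev_recall = np.concatenate(([0.0], recall[:-1]))
    return np.sum((recall - prev_recall) * precision)

def mse(y_true, y_pred):
    return np.mean((np.asarray(y_pred, float) - np.asarray(y_true, float)) ** 2)

def r2_score(y_true, y_pred):
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return 1 - np.sum((y_pred - y_true) ** 2) / np.sum((y_true.mean() - y_true) ** 2)

def accuracy(y_true, y_bin):
    return np.mean(np.asarray(y_true) == np.asarray(y_bin))

# Toy example: 4 flattened "pixels" with ground truth labels and probabilities
y = np.array([0, 0, 1, 1])
p = np.array([0.1, 0.4, 0.35, 0.8])
```

In practice the ground truth binary image and the probability (or binary) map would simply be flattened and passed in as `y_true` and `y_prob`.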
Results {#sec3} ======= The mean area under the ROC curves (AUC) of probability maps from a leave-one-subject-out cross-validation over the 18 patients in the training data set was 0.93. Using the independent imaging data from the test set, the mean AUCs for the manual tracing stages I, II, and III were 0.79, 0.83, and 0.85, respectively; for the proposed deep-learning approach, the mean AUC was 0.96 for the direct output of the vessel probability map, but the mean AUC was 0.83 when the probability maps were converted to binary maps using the Otsu algorithm.[@bib29] For the AP, the results of the manual tracing stages I, II, and III were 0.73, 0.77, and 0.78, respectively; the results of the proposed method were 0.84 and 0.77 for the probability map and binary map, respectively. Other results among the manual tracing stages I, II, and III and the probability map as well as the binary map from the proposed method were as follows: MSE, 0.071, 0.061, 0.061, 0.047, and 0.061; mean coefficient of determination (*R*^2^), 0.38, 0.46, 0.47, 0.59, and 0.46; and mean accuracy (ACC), 0.93, 0.94, 0.94, N/A (Not applicable), and 0.94, respectively. [Table 2](#tbl2){ref-type="table"} shows the summary of the quantitative results. 
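The Otsu step used to binarize the probability maps can be sketched as a minimal histogram-based implementation (illustrative, not the authors' code; ties between equally good thresholds are resolved toward the higher threshold).

```python
import numpy as np

def otsu_threshold(prob_map, bins=256):
    """Otsu's method: choose the threshold that maximizes the between-class
    variance of the intensity histogram."""
    hist, edges = np.histogram(prob_map.ravel(), bins=bins, range=(0.0, 1.0))
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)              # weight of the "background" class
    mu = np.cumsum(hist * centers)    # cumulative mean intensity
    best_t, best_var = 0.0, -1.0
    for k in range(bins - 1):
        w1 = 1.0 - w0[k]
        if w0[k] == 0.0 or w1 == 0.0:
            continue
        mu0, mu1 = mu[k] / w0[k], (mu[-1] - mu[k]) / w1
        between = w0[k] * w1 * (mu0 - mu1) ** 2
        if between >= best_var:       # >= resolves ties to a higher threshold
            best_var, best_t = between, centers[k]
    return best_t

# Toy bimodal "probability map": background near 0.1, vessels near 0.9
pm = np.concatenate([np.full(900, 0.1), np.full(100, 0.9)])
t = otsu_threshold(pm)
binary = (pm > t).astype(int)
```

Because Otsu assumes a bimodal histogram, it is a reasonable but nonoptimized choice here, which is consistent with the drop from 0.96 (probability map) to 0.83 (binary map) in AUC.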
###### Quantitative Measurements among the Manual Tracing Stages and Proposed Deep Neural Network in the Testing Data Set (Including the Mean Processing Time per Patient)

| Testing Data Set (18 Patients) | AUC | AP | MSE | R^2^ | ACC | Mean Time per Patient |
| --- | --- | --- | --- | --- | --- | --- |
| Manual tracing, stage I | 0.79 ± 0.04 | 0.73 ± 0.04 | 0.071 ± 0.01 | 0.38 ± 0.08 | 0.93 ± 0.01 | 11 min 19 s |
| Manual tracing, stage II | 0.83 ± 0.03 | 0.77 ± 0.03 | 0.061 ± 0.01 | 0.46 ± 0.06 | 0.94 ± 0.01 | 13 min 51 s |
| Manual tracing, stage III | 0.85 ± 0.03 | 0.78 ± 0.04 | 0.061 ± 0.01 | 0.47 ± 0.08 | 0.94 ± 0.01 | 14 min 7 s |
| Proposed approach, probability map[^†^](#tb2fn2){ref-type="table-fn"} | 0.96 ± 0.02 | 0.84 ± 0.07 | 0.047 ± 0.01 | 0.59 ± 0.09 | N/A | 1 min 5 s |
| Proposed approach, binary map[^†^](#tb2fn2){ref-type="table-fn"} | 0.83 ± 0.05 | 0.77 ± 0.05 | 0.061 ± 0.01 | 0.46 ± 0.10 | 0.94 ± 0.01 | 1 min 5 s |

Hardware details: A Linux machine is used with a single GPU (NVIDIA GeForce GTX 1080 Ti) and 128 GB RAM. The time to train the proposed neural network, including all the 18 patients in the training data set, was 2 hours 2 minutes 12 seconds. ^†^The probability map is the direct output from the proposed deep neural network; the binary map is automatically obtained using the Otsu thresholding algorithm. (Note: The outputs from manual tracings are inherently binary maps.)

[Figure 5](#fig5){ref-type="fig"} displays all the data points from the 18 testing participants with a 95% confidence interval of the mean for the quantitative results in the testing data set. The direct output (i.e., the vessel probability maps) of the proposed deep-learning approach shows the best performance in all five quantitative measurements. 
Furthermore, the thresholded binary map of the proposed deep-learning approach provides better vessel segmentation results than manual tracing stage I (tracing on only the RPE en-face image) for all five measurements in all 18 testing participants (additional details in [Appendix](#sec5){ref-type="sec"}). The results from manual tracing stages II and III and that of the thresholded binary map of the proposed deep-learning approach were similar. More specific details regarding the subject-wise comparison between different approaches are provided in the [Appendix](#sec5){ref-type="sec"}. ![Data dot plots for the measurements of area under ROC curve (AUC), average precision (AP), mean square error (MSE), mean coefficient of determination (R^2^), and mean accuracy (ACC) with 95% confidence intervals: *green*, manual tracing (MT) stage I; *dark yellow*, MT stage II; *purple*, MT stage III; *blue*, the probability map from the proposed deep learning (DL) approach; and *red*, the binary map (obtained using the Otsu algorithm) from the DL approach.](tvst-9-2-17-f005){#fig5} The mean processing times per patient for each method in the testing data set were also recorded ([Table 2](#tbl2){ref-type="table"}). The three manual tracing stages using Adobe Illustrator Draw (version 4.6.1; Adobe Systems, Inc., San Jose, CA, USA) on an iPad (Apple, Inc., Cupertino, CA, USA) required at least 11 minutes for each patient in the testing data set (precisely, 11 minutes 19 seconds, 13 minutes 51 seconds, and 14 minutes 7 seconds for stages I, II, and III, respectively). The computation for the proposed deep neural network was performed using a Linux machine with a single GPU, NVIDIA GeForce GTX 1080 Ti (NVIDIA Corporation, Santa Clara, CA, USA), and 128 GB RAM. The mean processing time for each patient in the testing data set for the proposed deep neural network was 1 minute 5 seconds. 
(Note: the total time to train the proposed neural network, using all 18 patients in the training data set, was 2 hours 2 minutes 12 seconds.) Qualitative results for six patients with various levels of optic disc swelling (ONH volumes ranging from 11.46 mm^3^ \[top row\] to 26.45 mm^3^ \[bottom row\]) are shown in [Table 3](#tbl3){ref-type="table"}. The en-face images of the RPE complex, the inner retina, and the total retina are listed in the table to show the growth of the shadow region at different degrees of swelling. Manual tracing stage III (highlighted in purple), the binary maps of the proposed deep-learning approach (highlighted in cyan), and the ground truth (highlighted in red) are displayed in the next three columns, respectively. The corresponding ONH-registered fundus photographs are added in the last column for reference.

###### Examples of Vessel Segmentation in Six Patients with Various Levels of Optic Disc Swelling

![](tvst-9-2-17-fx002.jpg)

The patients are arranged by ONH volume from the top to the bottom rows.

Discussion {#sec4}
==========

Although a preliminary OCT-based vessel segmentation for swollen optic discs was used as part of the preprocessing in our previous studies,[@bib17] this is the first time that we focus directly on OCT to reveal vessels obscured by image shadow, using a modified deep neural network that simultaneously considers multiple OCT en-face images from various retinal layers as its inputs. Furthermore, while use of an RPE en-face image would traditionally be considered sufficient for visualizing projected retinal vessels in OCT (especially in nonswollen cases), our results also show that access to multiple en-face views is important even for a human expert to properly visualize the vessels.
Overall, our deep-learning approach (even when using a nonoptimized thresholding approach to generate the binary maps) performs at least as well as one would expect from a human expert with access to multiple en-face OCT views, and better than one would expect from a human expert with access to only the RPE en-face image (the traditional approach). In fact, when using the probability map itself (rather than the binary map), our results suggest, perhaps surprisingly, that the deep-learning approach even outperforms a human expert at segmenting the retinal vessels with access to only OCT en-face images (discussed further below). Effectively, having multiple simultaneous input images enabled the proposed deep-learning network to learn to extract vessel information from the retinal layer with the better signal response, compensating for the regions that are eclipsed. [Figure 4](#fig4){ref-type="fig"} shows examples in which, in cases with optic disc swelling, the inner-retina en-face images contain substantially more vessel information around the optic disc than the image from the RPE complex; however, vessel visibility in the peripheral region is still clearest in the RPE en-face image. Since the vessels shown in the en-face images are the mean intensity values of the shadow cast by the superficial vessels, it is not surprising that vessel visibility in the RPE en-face image deteriorates considerably around the optic disc, where the OCT signal is greatly weakened by passing through the swollen inner retinal layers. Overall, the total-retina en-face image displays intermediate vessel clarity, which gives the proposed deep-learning approach a potentially helpful extra reference when the vessel patterns are inconsistent between the RPE and inner-retina en-face images.
When the optic disc swelling is severe, other pathologic changes beyond the optic disc elevation, such as hemorrhages and nerve fiber layer infarcts (cotton-wool spots), may appear.[@bib1]^,^[@bib30] The vessel appearance may also be affected: the retinal vessels can appear extremely blurred, discontinuous, and/or covered by the cotton-wool spots. Under these circumstances, it is sometimes difficult to clearly define the boundaries of the vessels in the OCT en-face images. The bottom row in [Table 3](#tbl3){ref-type="table"} illustrates the difficulty of tracing complete vessel trees even with the extra information from the fundus photograph (i.e., for the ground truth). [Figure 6](#fig6){ref-type="fig"} shows that the performance of both the manual tracing and the proposed method gradually declines as the ONH volume increases; however, the proposed method still appears visually more robust than manual tracing stage III in the testing data set. ![Scatterplots of 18 testing participants displaying the relationships between AUC and ONH volume for manual tracing (MT) stage III (the best performing of the three manual tracing stages; *magenta crosses*) and the probability map of the proposed deep-learning (DL) approach (*cyan triangles*).](tvst-9-2-17-f006){#fig6} Because the degree of optic disc swelling influences the difficulty of the vessel segmentation, it is important to keep in mind that our overall reported results are, in part, reflective of the distribution of swelling levels tested. Our training set (and, correspondingly, our test set, because of the volume-matching process) likely reflected a higher proportion of cases with moderate-to-severe optic disc swelling than one might encounter in clinical practice. If a larger percentage of cases with milder swelling were evaluated, we would expect better overall performance numbers.
However, while we evaluated the approach on a reasonably balanced data set of cases with optic disc swelling, one limitation of our work is that we have not quantitatively evaluated the proposed approach on a separate normative data set. Nevertheless, based on visual assessment of the results from separately applying our trained neural network to eyes with no apparent swelling (approximate optic nerve head volumes of 8--10 mm^3^), we found that the proposed approach still successfully segments the major vessels in such eyes. Thus, although not quantitatively evaluated on eyes without optic disc swelling, we still expect the approach to be robust in these cases as well (where traditional approaches involving only the RPE en-face image may already work). Also note that in computing the ONH volumes used to estimate the degree of optic disc swelling, we were unable to correct for ocular magnification because axial length information was unavailable, so the reported volumetric measures are technically approximations. However, while we did use the estimated measures to provide a similar distribution of swelling severity in the training and testing data sets, as well as to provide some insight into the dependence of performance on the degree of optic disc swelling, correcting for ocular magnification is not needed for appropriately training the algorithm. As previously mentioned, the quantitative results in [Table 2](#tbl2){ref-type="table"} and [Figure 5](#fig5){ref-type="fig"} demonstrate that the probability map from the proposed deep neural network has the best performance of all the methods compared. However, after thresholding the probability map (by the Otsu algorithm) into a binary map, the performance of the neural network declines to a level similar to that of manual tracing stages II and III. This is because part of the vessel information is lost in the thresholding process.
For example, some thinner vessels may have smaller probabilities in the probability map and be thresholded to background in the binary map. The performance gap between the probability and binary maps could potentially be narrowed by more sophisticated thresholding methods; using regionally adaptive threshold values based on vessel continuity, rather than one global threshold value, is one option. Regarding the stages of manual tracing, it is worth noting that we strictly followed the order described in the [Appendix](#sec5){ref-type="sec"} to prevent the extra information from the higher stages, especially the ground truth images, from biasing the tracing in the lower stages. [Table 2](#tbl2){ref-type="table"} and [Figure 5](#fig5){ref-type="fig"}c show that, for the same human expert, using only the RPE en-face image to trace the retinal vessels (i.e., the traditional method) provides the worst performance among all three stages on all the measurements. After adding consideration of the inner-retina and total-retina en-face images, the performance in stages II and III noticeably increased. Also, our manual tracings were performed using Adobe Illustrator Draw (Adobe Systems, Inc., version 4.6.1) on an iPad, which allowed us to overlay all input en-face images on one another so that the vessel information could be intuitively accumulated from all the input retinal layers. This image-overlay method is potentially more robust than separately tracing the en-face images and then adding the results together. Also note that the time-consuming multistage manual tracing was one factor that limited the size of our training and test sets (18 cases each).
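As a sketch of the global Otsu step (and of the baseline that any adaptive scheme would need to beat), the following pure-NumPy implementation picks the threshold that maximizes the between-class variance of a probability map. The bin count and toy data are illustrative assumptions, not the implementation used in this work:

```python
import numpy as np

def otsu_threshold(prob_map, nbins=256):
    """Global Otsu threshold for values in [0, 1]: choose the bin edge that
    maximizes the between-class variance of the two resulting classes.
    Illustrative sketch only; the paper's Otsu implementation is not specified."""
    p = np.asarray(prob_map, dtype=float).ravel()
    hist, edges = np.histogram(p, bins=nbins, range=(0.0, 1.0))
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w1 = np.cumsum(hist)                           # class-1 pixel counts (<= bin)
    w2 = w1[-1] - w1                               # class-2 pixel counts (> bin)
    cum = np.cumsum(hist * centers)
    m1 = cum / np.maximum(w1, 1e-12)               # class-1 mean intensity
    m2 = (cum[-1] - cum) / np.maximum(w2, 1e-12)   # class-2 mean intensity
    between_var = w1 * w2 * (m1 - m2) ** 2
    k = int(np.argmax(between_var))
    return edges[k + 1]                            # upper edge of the argmax bin

# Toy probability map: mostly background near 0.1, a few vessel pixels near 0.9.
prob = np.concatenate([np.full(90, 0.1), np.full(10, 0.9)])
t = otsu_threshold(prob)
binary = prob > t    # the binarization step described in the text
```

A regionally adaptive variant would call such a routine per image tile (or condition the threshold on local vessel continuity) instead of once globally.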
While 18 cases in a training set would likely be considered too small for an image-level classification task with a deep-learning approach (e.g., determining the cause of the swelling), our work focused on a pixel-level task of determining the probability of each pixel being a vessel (in combination with our use of a U-Net-based architecture), so we had sufficient data to train the approach to good overall performance on the independent test set. However, confirming the performance on a larger data set (perhaps using a less time-consuming multistage reference standard) would be useful future work. It is conceivable that our proposed deep-learning approach can be extended to detect other tubular objects in OCT volumes. True three-dimensional vessel segmentation (instead of segmentation on the projected en-face planes) could be a subject of future study. Also, analyses of retinal folds[@bib16]^,^[@bib17]^,^[@bib31] are believed to be among the key features for scrutinizing the mechanisms of stress/strain at the ONH region. Automatically detecting the retinal folds and further quantifying them is another possible extension of our proposed neural network. Supported in part by I01 RX001786, I50 RX003002, and R01 EY023279. A preliminary version of this work was presented as an abstract at ARVO 2019 (Islam et al., ARVO Abstract \#1510, 2019). Disclosure: **M.S. Islam**, None; **J.-K. Wang**, None; **S.S. Johnson**, None; **M.J. Thurtell**, None; **R.H. Kardon**, Fight for Sight, Inc. (S), Department of Veterans Affairs Research Foundation, Iowa City, IA (S); **M.K. Garvin**, University of Iowa (P)

Additional Details Regarding the Modified U-Net Architecture
============================================================

As displayed in [Figure 2](#fig2){ref-type="fig"}, the first layer of the neural network concatenates the three location-matched input en-face image patches to form an image blob (dimensions: 32 × 32 × 3).
Next, for the contracting path, a combination of a convolutional layer (filter size 3 × 3 pixels, 32 feature channels, and zero padding of 1 pixel to correct the boundary effect) plus a rectified linear unit (ReLU) is used twice before a max-pooling layer (filter size: 2 × 2 pixels, stride: 2 pixels). Note that the max-pooling layer down-samples the feature map dimensions to 16 × 16 pixels, while the depth increases to 64 channels. A similar process is then repeated once, and the feature map dimensions reach 8 × 8 pixels with 128 channels at the end of the contracting path. In the up-sampling path, "up-convolutional" layers are applied to double the dimensions of the feature map, and the output feature maps are concatenated with those in the contracting path to "reconsider" past features (i.e., the gray arrows in [Fig. 2](#fig2){ref-type="fig"}). Following similar convolutional processes, but with different numbers of feature channels, the feature map dimensions finally recover to 32 × 32 pixels with 32 channels. The last layer of the neural network is a soft-max layer, implemented as a 1 × 1 × 32 convolutional operator, which estimates the probability (ranging from 0 to 1) of a retinal vessel at each pixel location. As mentioned in the Methods, the architecture of the proposed neural network was designed using leave-one-subject-out cross-validation in the training data set (18 patients), and the hyperparameters include a first-order gradient-based optimization algorithm (Adam[@bib32]) with momentum 1 (β~1~) of 0.9, momentum 2 (β~2~) of 0.999, delta (ε) of 10^−8^, learning rate (α) of 0.01, and gamma (γ) of 0.9.
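The contracting-path dimensions described above (two 3 × 3, pad-1 convolutions per level, which preserve height and width, followed by a 2 × 2, stride-2 max-pool between levels) can be traced with a back-of-the-envelope sketch; the function name and its channel tuple are illustrative, not part of the published implementation:

```python
def contracting_path_shapes(h=32, w=32, channels=(32, 64, 128)):
    """Trace feature-map sizes through the contracting path: at each level,
    two 3x3 pad-1 convolutions keep H x W fixed while setting the channel
    count, and a 2x2 stride-2 max-pool halves H and W between levels."""
    shapes = []
    for level, c in enumerate(channels):
        # 3x3 conv with zero padding of 1: output size = (H + 2*1 - 3) + 1 = H
        shapes.append((h, w, c))
        if level < len(channels) - 1:
            h, w = h // 2, w // 2   # 2x2 max-pool, stride 2
    return shapes

print(contracting_path_shapes())  # [(32, 32, 32), (16, 16, 64), (8, 8, 128)]
```

The three tuples match the text: 32 × 32 × 32 after the first convolution pair, 16 × 16 × 64 after the first pool, and 8 × 8 × 128 at the end of the contracting path.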
In addition, the simple technique of dropping out half of the neurons after the convolutional layers is adopted to prevent overfitting.[@bib33] The proposed deep neural network is implemented using the Caffe library (1.0.0)[@bib34] on a Linux machine with a single GPU (NVIDIA GeForce GTX 1080 Ti), an Intel Core i7-6800K Broadwell-E 6-core 3.4 GHz CPU, and 128 GB RAM. After the architecture of the proposed neural network was decided and all the hyperparameters were fixed from the cross-validation step in the training data set, the proposed neural network was retrained using all 18 patients from the training data set and tested on the volume-matched testing data set. Completely separating the training and testing data sets helps reduce the overall bias of the proposed method.

Additional Details Regarding the Manual Tracings of Retinal Blood Vessels
=========================================================================

The tools and the precise steps of manually tracing the retinal blood vessels are described as follows. Three stages of human tracing were designed to compete with the proposed deep-learning approach. In stage I, for each OCT image in the training set, its RPE en-face image was first loaded onto an iPad Pro tablet (Apple, Inc.) using Adobe Illustrator Draw (Adobe Systems, Inc., version 4.6.1) and saved as an independent profile. Next, the visible vessels in this en-face image were manually highlighted by the expert, drawing on the tablet with an Apple Pencil (Apple, Inc.) ([Table 1](#tbl1){ref-type="table"}, row 1). The same process was then applied to the rest of the OCT images in the training set. In stage II, to obtain a vessel tracing independent from the previous stage, for each OCT image in the training set, its RPE en-face image was reloaded and then saved as a clean, separate profile using Adobe Illustrator Draw.
Then, the corresponding inner-retina en-face image was loaded as a second image layer, so that the expert could trace the retinal vessels on the RPE en-face image while accessing the vessel information from the inner-retina en-face image ([Table 1](#tbl1){ref-type="table"}, row 2). The same process was then applied to the rest of the OCT images in the training set. In stage III, similar processes from stage II were repeated, with the addition of a third image layer from the total-retina en-face image ([Table 1](#tbl1){ref-type="table"}, row 3). Finally, the ground truth image was created by simultaneously accessing four images (all three en-face images plus the registered fundus photograph) ([Table 1](#tbl1){ref-type="table"}, row 4). The same procedures were then applied to the images in the testing data set. Note that because the amount of vessel information increases from stage I through stage III and is greatest for the ground truth images, the order of our manual tracings helped ensure that the tracings in the lower-level stages were less likely to be affected by prior knowledge from the higher-level stages. Also note that the output of the proposed deep-learning approach is shown ([Table 1](#tbl1){ref-type="table"}, row 5) for comparison.

Additional Details Regarding the Quantitative Comparison between Different Approaches
=====================================================================================

Since the performances of manual tracing stages II and III are similar (as shown in [Fig. 5](#fig5){ref-type="fig"}), only the results from stage III are kept for further comparison. [Figures A1](#figA1){ref-type="fig"}a--[A1](#figA1){ref-type="fig"}c show the comparison of the quantitative values of the different evaluation metrics for all 18 testing participants when different approaches are used.
The lines are formed by joining the quantitative values of the evaluation metrics from two different approaches (manual tracing stage I and the deep-learning binary map in [Fig. A1](#figA1){ref-type="fig"}a; manual tracing stages I and III in [Fig. A1](#figA1){ref-type="fig"}b; manual tracing stage III and the deep-learning binary map in [Fig. A1](#figA1){ref-type="fig"}c). An upward slope indicates better performance with the second approach in the figure, and a downward slope indicates that the first approach performs better. Based on all the quantitative results, [Figure A1](#figA1){ref-type="fig"}a suggests that the binary map of the proposed deep-learning approach provides better vessel segmentation results than manual tracing stage I, as the majority of the slopes are positive. Similarly, [Figure A1](#figA1){ref-type="fig"}b suggests that manual tracing stage III performs better than stage I. Finally, [Figure A1](#figA1){ref-type="fig"}c suggests that the performances of the deep-learning binary map and manual tracing stage III are similar. ![Trajectory connection dot plots for the measurements of area under ROC curve (AUC), average precision (AP), mean square error (MSE), mean coefficient of determination (R^2^), and mean accuracy (ACC) with 95% confidence intervals: *dark yellow* and *green*, manual tracing (MT) stage I (in [Fig. A1](#figA1){ref-type="fig"}a and b, respectively); *purple*, MT stage III; and *red*, the binary map (obtained using the Otsu algorithm) from the DL approach. (a) Trajectory connection between the MT stage I and DL binary map results. (b) Trajectory connection between the MT stage I and III results. (c) Trajectory connection between the MT stage III and DL binary map results.](tvst-9-2-17-f007){#figA1}
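The slope reading used in Figure A1 amounts to a few lines of bookkeeping: for each subject, join the metric values of two approaches and check the sign of the difference. The AUC values below are hypothetical stand-ins (the real 18-subject values appear in Figure A1), so only the mechanics are illustrated:

```python
import numpy as np

# Hypothetical per-subject AUC values for two approaches (the real
# 18-subject values are shown in Figure A1, not reproduced here).
auc_stage1    = np.array([0.78, 0.81, 0.74, 0.80, 0.77])
auc_dl_binary = np.array([0.84, 0.85, 0.80, 0.79, 0.83])

# Each subject contributes one connecting line; its slope is simply the
# difference between the second and the first approach's metric value.
slopes = auc_dl_binary - auc_stage1
n_upward = int((slopes > 0).sum())   # upward slope: second approach is better
print(f"{n_upward}/{len(slopes)} upward slopes")  # prints "4/5 upward slopes"
```

A majority of positive slopes is then read as the second approach performing better overall, as in the comparisons above.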
Tech leaders show off growth in Atlanta industry From dancing robots to virtual reality, it's all created here in Atlanta. "We are certainly on the rise. Everybody is looking at us," says Jerry Hudson of Moxie. Top tech leaders showed off Atlanta's industry growth at the Woodruff Arts Center, hoping to catch the attention of Amazon. Moxie held the FutureX Live event. Atlanta is leading the way in virtual reality technology. "In general, we do a lot of work to help corporate marketing departments tell their brand stories through V.R.," says Dave Beck of Foundry 45. "Virtual reality will actually transport you to an entirely different place. You put the headset on, you're in an entirely digital environment." Because of advancements in technology like V.R., Atlanta tech jobs have grown by more than 46 percent since 2010, according to a tech study done by commercial real estate company CBRE Atlanta. Tech leaders like Janet Murray say that between the city's film and tech industries, combined with top universities like Georgia Tech, Atlanta offers Amazon a culture no other city can. "Here we have a much more fluid culture," says Murray, an associate dean for research at Georgia Tech.
.flavor() when (@flavor = vanilla) {
  .opciones .candidato {
    // Esquema //
    .foto-candidato, .foto-principal { .width-fl(50%); }
    .foto-partido, .foto-secundaria { .width-fl(20%); }
    .nombre-partido, .candidato-principal, .candidato-secundario { .width-fl(100%); }

    // Estilos generales //
    &.candidato-persona {
      .nombre-partido {
        border-bottom: 1px solid @gris-medio;
        margin-top: 0px;
        margin-bottom: 5px;
      }
      .candidato-secundario { color: @celeste-votar; }
    }
    &.candidato-lista-completa {
      .foto-partido { .column-p(11, 24); }
      .nombre-partido {
        color: @azul;
        font-size: 1em;
        margin-top: 0px;
      }
    }
  }

  // Estilos especificos //
  .opciones {
    &.max2 {
      .candidato-persona {
        font-size: 2em;
        .foto-candidato { .column-p(15, 24); }
        .foto-partido { .column-p(8, 24); margin-top: 105px; }
        .nombre-partido, .candidato-principal, .candidato-secundario { .column-p(24, 24); }
      }
      .candidato-lista-completa {
        .nombre-partido { font-size: 2em; margin-top: 10px; }
      }
    }
    &.max4 {
      .candidato-persona {
        .candidato-principal { font-size: 1.2em; margin-top: 10px; margin-bottom: 5px; }
        .candidato-secundario { font-size: 1em; }
        .lista { font-size: 1.7em; }
      }
      .candidato-lista-completa {
        .foto-candidato, .foto-partido { .column-p(8, 24); }
        .nombre-partido { font-size: 2em; margin-top: 10px; }
      }
    }
    &.max6 {
      .candidato-persona {
        .foto-candidato { .column-p(13, 24); }
        .foto-partido { .column-p(6, 24); }
        .nombre-partido { .column-p(10, 24); }
        .candidato-principal { .column-p(24, 24); font-size: 1.3em; }
        .candidato-secundario { .column-p(24, 24); font-size: 1em; }
        .lista { font-size: 1.2em; }
      }
    }
    &.max9 {
      .candidato-persona {
        .lista { font-size: 1.2em; }
      }
      .candidato-lista-completa {
        .foto-candidato, .foto-partido { .column-p(8.64, 24); }
      }
    }
    &.max12 {
      .candidato-persona {
        .candidato-principal, .candidato-secundario { .width-fl(100%); font-size: 1em; }
        .lista { font-size: 1.2em; }
        .foto-candidato, .foto-secundario { .width-fl(37%); }
        .foto-partido { margin-top: 46px; }
      }
      &.cat_JEF .candidato-persona {
        .foto-partido { margin-top: 10px; }
        .cargo {
          font-size: 15px;
          font-weight: normal;
          color: @azul;
        }
      }
    }
    &.max16 {
      .candidato-persona {
        font-size: 0.9em;
        .foto-partido { .column-p(5, 24); }
        .lista { font-size: 1.2em; }
      }
      .candidato-lista-completa {
        .foto-candidato, .foto-partido { .column-p(8, 24); }
      }
    }
    &.max20 {
      .candidato-persona {
        .foto-candidato { .column-p(10, 24); }
        .foto-partido { .column-p(5, 24); }
        .nombre-partido {
          font-size: 0.8em;
          min-height: 33px;
          .column-p(13, 24);
        }
        .candidato-principal, .candidato-secundario { font-size: 0.8em; .column-p(24, 24); }
        .lista { font-size: 1.2em; }
      }
      .candidato-lista-completa {
        .foto-candidato, .foto-partido { .column-p(9, 24); }
      }
    }
    &.max24 {
      font-size: 0.8em;
      .candidato-principal, .candidato-secundario {
        clear: both;
        width: 100%;
        font-size: 0.95em;
      }
      .nombre-partido { min-height: 33px; font-size: 0.9em; }
    }
    &.max30 {
      .candidato-persona {
        font-size: 0.7em;
        .candidato-principal, .candidato-secundario { .column-p(24, 24); }
        .foto-partido { .column-p(4.75992, 24); }
        .foto-candidato { .column-p(8.11992, 24); }
        .nombre-partido { min-height: 21px; }
      }
      .candidato-lista-completa {
        font-size: 0.8em;
        .foto-candidato, .foto-partido { .column-p(7, 24); }
      }
    }
    &.max36 {
      .candidato-persona {
        font-size: 0.7em;
        .foto-candidato { .column-p(9.84, 24); }
        .foto-partido { display: none; }
        .nombre-partido { .column-p(9.36, 24); }
        .candidato-principal, .candidato-secundario { .column-p(12, 24); }
      }
      .candidato-lista-completa {
        font-size: 0.8em;
        .foto-candidato, .foto-partido { .column-p(6, 24); }
      }
    }
  }
}
Spotlight: A Conquest Like No Other by Emma Anderson #Shifters #Suspense #FourHorsemen The second book in The Fall of the Four Horsemen is here. Delve into the lives of Nikki and Viper as they fight her past so they can create a future together. When Nikki Greene is kidnapped by a crime boss, the last thing she expects is for an even more dangerous man to rescue her. She’s already survived one brutal attack by a man claiming to care for her. She’s not in the market for love, even though her tattooed hero awakens urges she never expected to feel again—and some she’s never felt before. Viper, a renowned playboy, doesn’t want a mate. He likes the freedom of taking a different partner to bed each night. But the moment Nikki enters his life, his bachelorhood is doomed. Now he must work past Nikki’s trust issues to win a place in her life and bed. He’s determined to prove that her scars are only skin-deep and together they can have it all. But trouble is coming for Nikki. A menacing presence lurks on the fringes of her world, waiting for the chance to prove to Nikki that revenge isn’t sweet, but it is deadly. Can Viper save her this time, or will he lose her for good? Nikki slumped with her back against the closed front door. To say she was relieved to see Viper leave was an understatement. Again her hormones reacted to him. Why they had to come to life after three years, she had no idea. Worse yet, it was for her best friend’s new brother-in-law. The situation had disaster written all over it. Even if her fear would allow her to make a move on the unsuspecting man, her conscience wouldn’t. As he had pointed out, they were now family. And she couldn’t risk Gabby’s happiness by throwing herself at Viper. It would lead to uncomfortable family gatherings. Pushing herself off the door, she attempted to eject her sexy rainbow-colored angel from her mind as she went about getting herself ready for bed. By the time she was snuggled under the blankets, she was exhausted.
Half an hour later, sleep still refused her. Her body needed something—something she knew B.O.B. could provide. Reaching across to her bedside table drawer, she pulled out her purple vibrator. This silicone cock had been the only form of male company her body had allowed over these last three years. And he never disappointed her, except when he ran out of batteries. Throwing the blankets back, she quickly removed her pajamas. The cool air hit her sensitive nipples, causing them to harden further. They were begging to be teased. Her hand molded around her tiny breast, roughly massaging it. The harsh manhandling had more juices leaking from her already flowing pussy. Her thumb and index finger found her pebbled nipple, pinching it hard. A groan escaped her as her hips lifted from the mattress, seeking out its partner. The hand that held B.O.B. travelled lower. B.O.B.’s head circled her hypersensitive clit. Electricity raced through her whole being, enticing her orgasm closer to the surface. As if of its own accord, B.O.B. found her entrance. Scooping up some of her escaped juices, it slowly entered her. Each thrust took her higher, but still it wasn’t enough to take her over the edge. Suddenly Viper’s image popped into her mind. No longer were her fingers pinching and pulling at her nipples. Instead, his mouth had engulfed her breasts, his teeth worrying her nipples and sending shock waves throughout her whole being. As for B.O.B., he had been replaced as well. Viper was carrying her body to the heavens, and she welcomed it. Her imagination took the fantasy further. Her legs widened, allowing room for Viper’s large body. The thrusts were coming harder and faster. Her orgasm quickly approached, catapulting her over the edge. Still Viper’s body continued to drive into her. Suddenly he pulled out and turned her. With her head on the mattress and her backside in the air, he slammed back into her.
At this angle it was enough to encourage another orgasm to make itself known. It danced around the edges, avoiding her pleas for completion. More punishing thrusts were delivered from behind. Then it came, a command her body seemed to be waiting for. “Come for me, sweetheart.” Light burst behind her closed eyelids right before she screamed her completion. As she tumbled back to earth she felt tears falling from her eyes. Her head was turned to the side, allowing the sheet and mattress to absorb them. It was then that she realized how uncomfortable she was. True to her fantasy, her backside was in the air, while her head rested upon the bed. One arm held her balanced while the other one was under her body. Her hand was between her legs, holding B.O.B. in her pussy. As she went about cleaning B.O.B. and remaking her bed she expected embarrassment to consume her. It never came. In its place was confusion. How could that fantasy have felt so real? In her mind, she felt his body against hers. Never before had any of her masturbating sessions felt like that. The man she was trying to view as family had somehow accomplished the impossible. In her mind, he had truly been with her in this bed. Yet the fear that usually accompanied the idea of being sexually involved with a male was absent. How was that possible? Why was he having such an effect on her? This man with his brightly painted skin and rusty red hair had her craving things she didn’t think were possible to ever want again—a sexual relationship. His messy locks with their just-out-of-bed look called to her fingers. They desperately wanted to delve into his thick mass of hair. His plump, kissable lips had her mouth watering and her chin itching. In her mind she could see the stubble rash that would appear after a kiss from her new fantasy go-to guy. How was she ever going to look at that man again without blushing? She was glad that she didn’t have to see him every day.
Otherwise she suspected her rampant hormones would have her doing something impetuous. With her hormones in overdrive she knew it wouldn’t be long before she threw herself at this man who was only being friendly with her because of Gabby’s relationship with Devil. She needed to minimize her exposure to Viper. It was the only way she wouldn’t embarrass Gabby and herself.
Based on the iconic Mansions of Madness board game, and in partnership with Fantasy Flight Games, Escape Games Canada has designed and built a manor house full of mystery. By realizing the board game's elements in the real world, fans of Mansions of Madness will find themselves immersed in a world of Lovecraftian terrors. Players new to the board game need not worry: The Missing Will is self-contained, providing all the information you need for a uniquely thrilling escape experience. Using state-of-the-art technology for seamless integration between the gameplay and the theme, The Missing Will is an experience like no other. © 2019 Fantasy Flight Games. Mansions of Madness and the FFG logo are ®/TM of Fantasy Flight Games
Q: apply function in data table for conditional removal of row

I have a data table, dt:

            V1                      V2             V3 PubMedCounts
1:  0000100005                100-00-5     CAS Number            6
2:  0000100005 1-Chloro-4-nitrobenzene DescriptorName           12
3:  0000100005                    aahs DescriptorName          111
4:  0000100005                    PNCB        Synonym           35

Also, I have a data table, ew, which has only one column of words, like:

         V1
1:      aah
2:    aahed
3:   aahing
4:     aahs
5: aardvark

From the dt data table, I need to remove all rows whose V2 value is 5 or fewer characters long or is present in the ew data table. For example, from dt I would remove the 3rd and 4th rows. I would like to use an apply function to make this efficient, as it is a pretty big data set.

A: If I understand you correctly I would do:

dt[!ew, on = c(V2 = "V1")][nchar(V2) > 5]

which gives:

       V1                      V2             V3 PubMedCounts
1: 100005                100-00-5     CAS_Number            6
2: 100005 1-Chloro-4-nitrobenzene DescriptorName           12

Applying the conditions in the other order might be faster:

dt[nchar(V2) > 5][!ew, on = c(V2 = "V1")]

This prevents matching on things in dt that would be deleted in the next step anyway. A third possibility is using:

dt[nchar(V2) > 5 & !( V2 %chin% ew$V1 )]

Used data:

dt <- structure(list(V1 = c(100005L, 100005L, 100005L, 100005L), V2 = c("100-00-5", "1-Chloro-4-nitrobenzene", "aahs", "PNCB"), V3 = c("CAS_Number", "DescriptorName", "DescriptorName", "Synonym"), PubMedCounts = c(6L, 12L, 111L, 35L)), .Names = c("V1", "V2", "V3", "PubMedCounts"), row.names = c(NA, -4L), class = c("data.table", "data.frame"))

ew <- structure(list(V1 = c("aah", "aahed", "aahing", "aahs", "aardvark")), .Names = "V1", row.names = c(NA, -5L), class = c("data.table", "data.frame"))
Cha Dong-min

Cha Dong-min (Hangul: 차동민, Hanja: 車東旻; born August 24, 1986 in Seoul, South Korea) is a retired South Korean taekwondo practitioner.

Sports career

In 2008, he won the gold medal in the +80 kg category at the Beijing Olympic Games. He participated in the 2012 London Olympic Games to defend his title as the number 1 seed in the 80 kg division, but was eliminated in the quarterfinal round by Bahri Tanrıkulu of Turkey. He competed at the 2016 Rio Olympics in the same division, where he won a bronze medal. This was his last international competition, as he then announced his retirement.

References

External links

Category:1986 births Category:Living people Category:South Korean male taekwondo practitioners Category:Olympic taekwondo practitioners of South Korea Category:Olympic gold medalists for South Korea Category:Taekwondo practitioners at the 2008 Summer Olympics Category:Taekwondo practitioners at the 2012 Summer Olympics Category:Olympic medalists in taekwondo Category:Medalists at the 2008 Summer Olympics Category:Korea National Sport University alumni Category:Medalists at the 2016 Summer Olympics Category:Olympic bronze medalists for South Korea
Q: rbenv install 2.7.1 fails on macOS Catalina using Homebrew

I have been trying to install Ruby for a few days now. I installed Homebrew and checked that openssl@1.1 was installed. I ran brew install rbenv and configured my zsh as follows:

local READLINE_PATH=$(brew --prefix readline)
local OPENSSL_PATH=$(brew --prefix openssl)

export LDFLAGS="-L$READLINE_PATH/lib -L$OPENSSL_PATH/lib"
export CPPFLAGS="-I$READLINE_PATH/include -I$OPENSSL_PATH/include"
export PKG_CONFIG_PATH="$READLINE_PATH/lib/pkgconfig:$OPENSSL_PATH/lib/pkgconfig"

# Use the OpenSSL from Homebrew instead of ruby-build
# Note: the Homebrew version gets updated, the ruby-build version doesn't
export RUBY_CONFIGURE_OPTS="--with-openssl-dir=$OPENSSL_PATH"

# Place openssl@1.1 at the beginning of your PATH (preempt system libs)
export PATH=$OPENSSL_PATH/bin:$PATH

# Load rbenv
eval "$(rbenv init -)"

# Extract the latest version of Ruby so you can do this:
#   rbenv install $LATEST_RUBY_VERSION
export LATEST_RUBY_VERSION=$(rbenv install -l | grep -v - | tail -1)

When I try to run rbenv install 2.7.1 I get a build error saying it can't require openssl@1.1. I checked that it is installed and tried everything I can think of. This was tested on a fresh install of Catalina 10.15. I also reformatted my computer and installed the Xcode command line tools. Here are the logs:
installing manpages: /Users/main/.rbenv/versions/2.7.1/share/man (man1, man5)
installing default gems from lib: /Users/main/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0 (build_info, cache, doc, extensions, gems, specifications)
benchmark 0.1.0
/private/var/folders/cv/z8f4fy9171z64hl8vk4ms68h0000gn/T/ruby-build.20200617194325.10220.l3muIu/ruby-2.7.1/lib/rubygems/core_ext/kernel_require.rb:92:in `require': cannot load such file -- openssl (LoadError)
        from /private/var/folders/cv/z8f4fy9171z64hl8vk4ms68h0000gn/T/ruby-build.20200617194325.10220.l3muIu/ruby-2.7.1/lib/rubygems/core_ext/kernel_require.rb:92:in `require'
        from /private/var/folders/cv/z8f4fy9171z64hl8vk4ms68h0000gn/T/ruby-build.20200617194325.10220.l3muIu/ruby-2.7.1/lib/rubygems/specification.rb:2426:in `to_ruby'
        from ./tool/rbinstall.rb:846:in `block (2 levels) in install_default_gem'
        from ./tool/rbinstall.rb:279:in `open_for_install'
        from ./tool/rbinstall.rb:845:in `block in install_default_gem'
        from ./tool/rbinstall.rb:835:in `each'
        from ./tool/rbinstall.rb:835:in `install_default_gem'
        from ./tool/rbinstall.rb:799:in `block in <main>'
        from ./tool/rbinstall.rb:950:in `block in <main>'
        from ./tool/rbinstall.rb:947:in `each'
        from ./tool/rbinstall.rb:947:in `<main>'
make: *** [do-install-all] Error 1

Any help would be greatly appreciated; I am getting pretty annoyed.

A: What I ended up doing was installing the entire Xcode package found online instead of through the App Store. Doing this solved my issue.
Heuristic errors in clinical reasoning. Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. The study surveyed approximately 150 clinical educators, asking about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed among third-year medical students and first-year residents, and there was no difference in the types of errors observed between the two groups. Clinical educators perceived that third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to identify methods that can reduce heuristic errors early in a clinician's education.
Great Drinks, Unbelievable Value At The Junction, we're passionate about our drinks and our great value. That's why The Junction brings you a top range of draught lager, bottled beer, cider and a huge selection of spirits, soft drinks and wines. Why not team up your skillet with a refreshing pint of Carlsberg, or get in the mood for summer with the new Bulmer's Zesty Blood Orange? Plus, our cheeky cocktails, bombs, specially selected Innis & Gunn and Sierra Nevada Pale Ale craft beers are sure to excite your taste buds. We even have Echo Falls Fruit Fusions and Barefoot Bubbly! In fact, there's something for everyone on our drinks menu. Just see for yourself!
29 Jul. 2017

Please give me 500 Yen! [ENG]

Name: Please give me 500 Yen!
Circle/Artist: Ririadoll
Type: Doujinshi
Fandom: Katekyo Hitman Reborn
Pairing: Mukuro x Hibari x Tsuna
Scans: Magi

Description
On a certain day, at a certain time... the gluttonous freeloader Lanbao-chan ate sweets at a sweets shop that he was not allowed to buy, and then along came Tsuna-kun, who had been looking for Lanbao-chan. The shopkeeper asked him to pay for the sweets, but he did not have enough money on him at the moment. The sweets cost 500 yen. Just then, Tsuna-kun's most trusted partner, Reborn-sensei, passed by, and he found someone able to help pay off the debt. "I shall raise my grade little by little during the day." He was told to pay it back, but they have no plans to meet up, so what should he do? Reborn-sensei said: "This is your debt, so pay me back later."
Q: Chart Series By Region in Google Earth Engine?

I've written code for a MODIS NDVI time series (below), but it returns an error when creating the chart (ui.Chart.image.seriesByRegion):

Error generating chart: Invalid argument specified for ee.Number(): system:time_start

var regions = ee.FeatureCollection([
  ee.Feature(  // forest.
    ee.Geometry.Rectangle(geometry), {label: 'forest'}),
  ee.Feature(  // crop.
    ee.Geometry.Rectangle(geometry2), {label: 'crop'})
]);

var modis = ee.ImageCollection("MODIS/006/MOD13Q1")
  .filterDate("2000-01-01", "2017-12-31");

var mod13 = modis.map(function(img){
  var id = img.id();
  img = img.select("NDVI").rename(id);
  return img.multiply(0.0001)
    .copyProperties(img, ['system:time_start', 'system:time_end']);
});
print(mod13);

var timeseries = ui.Chart.image.seriesByRegion(
    mod13, regions, ee.Reducer.mean(), 250, 'system:time_start', 'label')
  .setChartType('ScatterChart')
  .setOptions({
    title: 'vegetation series',
    xAxis: {title: 'time'},
    yAxis: {title: 'NDVI'},
    lineWidth: 1,
    pointSize: 2,
    series: {
      0: {color: 'red'},
      1: {color: 'green'}
    }});
print(timeseries);

A: Clearly the charting function is not happy with the input arguments. While creating mod13, you renamed each NDVI band to the corresponding image id (not sure why you would do that, because the chart takes its dates from the system:time_start property), so the images within mod13 no longer share a common band name. Now, when you call ui.Chart.image.seriesByRegion, it expects the following order of arguments: imageCollection, regions, reducer, band, scale, xProperty, seriesProperty. In the place of band it finds 250, which is the scale.
So you have two options:

Use ui.Chart.image.seriesByRegion(mod13, regions, ee.Reducer.mean()) with just the first three arguments, assuming you don't want to change the scale.

Do not rename the NDVI band in mod13, and use ui.Chart.image.seriesByRegion(mod13, regions, ee.Reducer.mean(), 'NDVI', 250, 'system:time_start', 'label').

See https://code.earthengine.google.com/9f86ca8486070c7f440c3fa1ea02dd4b for a complete example.

PS. Your code is not exactly reproducible: geometry and geometry2 are not defined.
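The argument shift the answer describes is easy to reproduce with a stand-in function. The signature below merely mirrors the documented parameter order of ui.Chart.image.seriesByRegion; it is a hypothetical sketch, not the real Earth Engine API:

```python
# Hypothetical stand-in mirroring the documented argument order of
# ui.Chart.image.seriesByRegion(imageCollection, regions, reducer,
#                               band, scale, xProperty, seriesProperty).
def series_by_region(image_collection, regions, reducer,
                     band=None, scale=None,
                     x_property="system:time_start", series_property="label"):
    # Return what each slot received, so the misbinding is visible.
    return {"band": band, "scale": scale, "x_property": x_property}

# Omitting the band name shifts every later positional argument one slot
# to the left, exactly as in the question's failing call:
bad = series_by_region("mod13", "regions", "mean",
                       250, "system:time_start", "label")
print(bad)  # band=250, scale='system:time_start', x_property='label'
```

This is why the charting code ends up handing the string 'system:time_start' to something expecting a number (the scale), producing the ee.Number() error.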
A federal appeals court on Thursday upheld the injunction against the Trump Administration’s ban on residents of six Muslim-majority countries entering the United States. In a 10-3 ruling, the judges sided with the lower court in its decision to indefinitely block central parts of Trump’s March executive order, which barred people from Iran, Libya, Somalia, Sudan, Syria and Yemen from entering the U.S. for a 90-day period and suspended the refugee program for 120 days. Although the revised executive order (issued after an earlier version was ruled likely unconstitutional by a judge in Seattle) did not explicitly mention Muslims or Islam, the court ruled that its intent was to target people based on religion. The new version, Chief Judge Roger L. Gregory wrote in his ruling, contained “vague words of national security, but in context drips with religious intolerance, animus, and discrimination.” Gregory referred to a number of statements made by Trump on the campaign trail, including his vow to pursue a complete and total shutdown of Muslims entering the United States. Gregory also cited an interview Trump gave to Christian Broadcasting News on Jan. 27, 2017, in which he said that his executive order was designed to give preference to Christian refugees. Gregory wrote that the order “cannot be divorced from the cohesive narrative linking it to the animus that inspired it,” and that the national security explanation was a “post hoc, secondary justification for an executive action rooted in religious animus and intended to bar Muslims from this country.” “President Trump’s Muslim ban violates the Constitution, as this decision strongly reaffirms,” said Omar Jadwat, director of the ACLU’s Immigrants’ Rights Project, in a statement. “The Constitution’s prohibition on actions disfavoring or condemning any religion is a fundamental protection for all of us, and we can all be glad that the court today rejected the government’s request to set that principle aside.”
M-component with reactivity against actin associated with thrombotic thrombocytopenic purpura. A patient with a monoclonal B-cell disorder producing an M-component with anti-actin activity is described. After more than 3 years of observation, the final diagnosis has not been completely established, but a malignant lymphoproliferative process is evidently evolving. On three occasions, she has presented with symptoms compatible with a thrombotic thrombocytopenic purpura-like syndrome, concomitant with an increase in the paraprotein. A relationship between this autoantibody and the patient's symptoms is proposed. Steroids have so far had a beneficial effect on the symptoms.
Q: Compute likelihood of two events happening at the same millisecond

I would like to write a Python (or R, etc.) function that computes the likelihood of two events happening in the same millisecond. The events are independent and can happen at any time of the day. My goal is to be able to say: "when there are 1,000,000 events a day, the likelihood of two happening in the same millisecond is X%". I've read a bit about the Poisson distribution and the Python function random.expovariate, which can be used to simulate event times under a Poisson model, but to my uneducated mind it is not clear whether the Poisson distribution is the right one. Could it maybe even be as simple as the following?

likelihood = (1/86400000) * (1/86400000) * number_of_events

A: The phrasing of your question is a bit ambiguous, so I'm going to interpret it like this: "The expected number of events per day is 1,000,000. Given that an event has just occurred, what's the probability that the next event occurs within 1 ms?"

We can consider the event times to be generated by a homogeneous Poisson process. This means that they're independent, and there's a parameter $\lambda$ that gives the expected number of events per unit time. $\lambda$ itself doesn't change over time in this model (if you want it to, you'd need an inhomogeneous Poisson process). In your example, $\lambda$ = 1e6/8.64e7 (events per ms) = (events/day)*(days/ms).

For a homogeneous Poisson process, the time between successive events (let's call it the inter-event interval) has an exponential distribution with mean $1 / \lambda$. In your example, this gives an average of 86.4 ms between events. Given that an event has just occurred at time $t_0$, we want to calculate the probability that the next event occurs at time $t_1 \le t_0 + 1$ ms. That is, the inter-event interval will be between 0 and 1 ms. To do that, we can integrate the probability density function (PDF) of the inter-event interval from 0 to 1 ms.
This is the same as evaluating its cumulative distribution function (CDF) at 1 ms. The CDF of the exponential distribution is: $$1 - e^{-\lambda t}$$ Evaluating this at $\lambda$ = 1e6/8.64e7 and $t$ = 1 ms gives a probability of ~0.0115.
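As a quick check of the arithmetic, the same calculation in Python:

```python
import math

events_per_day = 1_000_000
ms_per_day = 24 * 60 * 60 * 1000           # 86,400,000 ms in a day
lam = events_per_day / ms_per_day          # expected events per millisecond

# Mean inter-event interval of the exponential distribution: 1 / lambda.
mean_gap_ms = 1 / lam                      # average gap between events, in ms

# CDF of the exponential distribution evaluated at t = 1 ms:
# P(next event within 1 ms) = 1 - exp(-lambda * t)
p_next_within_1ms = 1 - math.exp(-lam * 1)

print(round(mean_gap_ms, 1))        # 86.4
print(round(p_next_within_1ms, 4))  # 0.0115
```

Note how this differs from the naive (1/86400000)^2 * n guess in the question: the exponential CDF correctly accounts for the event rate rather than multiplying two per-millisecond probabilities.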
Reid River Airfield

Reid River Airfield is a World War II airfield located to the south of the Reid River near Townsville, Queensland, Australia. Disused as an airfield since the war, the former base is private property used for mustering cattle and horses. An arch marks the western edge of the strip, easily accessible from the main road. With the permission of the owner, visitors can tour the strip. On the eastern edge of the strip are concrete pads from former buildings, including the mess hall and first aid station. There are also the remains of a B-26 crash site and a former 2nd BS camp area. Small markers, left by veterans in 1992, mark these locations.

History

The airfield, which had a single main runway running east to west, was built by the United States Army Air Forces. Two units were based at the airfield:

2nd Bombardment Squadron, 22nd Bombardment Group, 9 April-9 October 1942
408th Bombardment Squadron, 22nd Bombardment Group (later re-designated the 18th Reconnaissance Squadron), 12 April-15 October 1942

Both squadrons initially flew B-26 Marauder medium bombers. The first mission by the 408th BS was flown on 22 April 1942 from Reid River, and operations continued until January 1943, when the group went on R&R. In early February 1943, both the 2nd and 408th Squadrons converted from B-26s to B-25 Mitchell bombers when it was decided to send the B-26s to the Mediterranean Theatre.

See also

United States Army Air Forces in Australia (World War II)
List of airports in Queensland

References

Maurer, Maurer (1983). Air Force Combat Units Of World War II. Maxwell AFB, Alabama: Office of Air Force History.
Pacific Wrecks - Reid River Airfield

Category:Airfields of the United States Army Air Forces in Australia
Category:Defunct airports in Queensland
Category:Airports established in 1942
Category:1942 establishments in Australia
Category:Queensland in World War II
Effect of ice water ingestion on asthmatic children after exercise challenge. Both exercise and ice water ingestion are known to be trigger factors for an asthma attack in ethnic Chinese asthmatic children. The purpose of this study was to investigate whether ice water ingestion further deteriorates the pulmonary function of asthmatic children after exercise. Thirty Chinese asthmatic children underwent exercise challenge by ergocyclometer for 6 minutes and were then further challenged by immediate ingestion of ice water (200 ml, 0-4 degrees C), warm water (200 ml, 37 degrees C) or no ingestion, on three different days in one week. Each patient completed the three different water ingestion tests after exercise challenge. The FEV1, FEF25-75%, and PEF tests were performed at baseline and again at 5, 15, 30, 60 and 90 minutes after the exercise plus water ingestion challenge. After the spirometric test at 90 minutes, 3 puffs (0.6 mg) of hexoprenaline from a metered dose inhaler were given and a further spirometric test was performed 15 minutes later. The FEV1 and PEF were significantly decreased after exercise plus each of the 3 different water ingestion challenges, except for the FEV1 in the patients who ingested nothing (p = 0.051) and the PEF in the patients who ingested warm water (p = 0.163). FEF25-75% was not significantly decreased in any of the three tests. Exercise-induced asthma (EIA) developed in about two thirds of the 30 patients, regardless of whether ice water, warm water or nothing at all was ingested after exercise challenge. There was no statistically significant difference in spirometric data among the 3 different water tests at the various time points. The mean percentage increases of FEV1, FEF25-75% and PEF after bronchodilator therapy were all lowest in the ice water test and greatest in the warm water test. A statistically significant difference was found between the ice water and warm water tests for FEV1 and PEF (p = 0.0293 and p = 0.0308, respectively).
In conclusion, about two thirds of the asthmatic children in this series had EIA. Those who ingested warm water after exercise had a better bronchodilator response than those who ingested ice water.
5.1

TUVA's Blog post: I felt like I was a risk taker and a reflective person because we took a risk to make an elephant sculpture, but it didn't work. At least we tried, and now we are making an elephant in the garden. I am also reflective because I listened to other ideas, like when we were going to make a script for the stage performance. I was writing and listening to Xian Hang and Praveen's ideas! I could improve by doing things more quickly!

RAYMOND's Blog post: I have been a risk taker and been enthusiastic while I have been preparing for the exhibition. I have shown my learner profiles by taking risks to make a game and also being creative to program a game when nobody else in the exhibition tried to make one. I also learnt many new things about designing with my teacher. Moreover, I have been enthusiastic by using 20 hours of my weekends trying to code my game. I need to improve on being organised because my exhibition work schedule was pretty messed up, so I wasted my time. I was chased by time and had to rush making and gluing images onto my poster.

Check out the trailer to Raymond, Hina and Hanna's game:

TOMA's Blog post: I think I am creative this week because I made a comic about SARD. SARD is a drone. I can improve because I finished. I think I am a thinker because I got many ideas. I can improve because this is for the Exhibition.

AZUSA's Post: Two IB learner profiles that I've shown are: Inquirer, because when I have a little time, I'm reading some websites about my Exhibition to get some information. Reflective, because when I go back to my house, I always think about what I did in school today and what I can do at home now. I also plan what I can do in school tomorrow.

Students in grade 5.1 are buzzing with creative ideas, excitement and anticipation of the upcoming Exhibition. Hina, Hanna and Raymond are inquiring into the effects of music on learning. For the past week they have been conducting a scientific experiment with controlled variables.
Every day, during Math lessons, they have played music for ten minutes and then recorded student responses. They then record their observations and write conclusions. This will go on for ten days before they arrive at a conclusion and test their hypothesis.

Meanwhile, Raymond (above) is applying his understanding of coding to create his own computer game. The music for the game has been created by Hina and Hanna. Talk about collaboration!

Koji has designed and created a nail puller using a simple machine. He is inquiring into technology and machines and how they have made our lives easier. Keagan and Koji have designed and created a motorised car. The car moves, but needs a body and a hood. The duo are in the process of making sketches of designs that will work. Their car is called Mark 2.

Inspired by the Hawk-Eye used in tennis and baseball games, Azusa, Koji and Keagan are working with their mentor Nathan to come up with a way to use technology in sports in school.

Ikram, Mirai and Steven, who are inquiring into climate change and pollution, are collecting e-waste in the school. They have set up a box in the ES office and have made presentations to various classes in an effort to collect e-waste. The e-waste will be delivered to Virogreen, a local company. The group also plan to make a book to educate others.

Raphael, Sangje and Toma have designed and created a (futuristic) drone that can be used to rescue animals in the wild.

Tuva, Praveen and Xianhang have collected used plastic bottles for ten days and are planning to create a sculpture of an elephant to draw attention to the ivory trade.

STUDENT REFLECTIONS

HINA: I finished taking the video of "how to teach math with music" and put the music into the game with my group. And I finished writing the science experiment Day 5 report. I feel very tired, because I am using the computer at school and also at home. And I am looking at and reading English very much. So, I am very tired.
The man who found the Boston Marathon bombing suspect hiding in his boat in his backyard says he's no hero and wants the attention he's drawn to "fade away." David Henneberry, 66, of Watertown tells The Boston Globe in a rare interview that he also wants to set the record straight. Media have reported that the retired technician went to investigate after seeing blood on his boat, which was on a trailer and wrapped for the winter season. But Henneberry said the truth is he never would have approached the boat on April 19 had he seen blood, the paper reported Wednesday. "If I had seen blood out there, I wouldn't have investigated it," Henneberry said. "I'm not crazy." Instead, he noticed some padding used to protect the hull of the 24-foot vessel had fallen to the ground, so he went to fix it. He grabbed a stepladder and put it beside the boat, the Slip Away II. When he lifted a piece of shrink wrap, he noticed blood splattered on the deck, then he spotted a man, curled in a fetal position, inside the boat. It was Dzhokhar Tsarnaev, one of the two brothers suspected of setting off the pressure cooker bombs at the marathon finish line April 15, killing three people and injuring more than 260. "I thought, 'Oh my God, he's in there,'" Henneberry said. He ran inside, looked at his wife and said, "He's in the boat! He's in our boat!" "He was shaken," his wife, Beth, said. "We were both shaken." He called 911. His actions have drawn unwanted attention: writers, filmmakers and just plain gawkers stopping by his house. "It just goes on and on," Beth Henneberry said. And the bullet-riddled boat? It's being held by the FBI as evidence, and an agency spokesman says the Henneberrys are unlikely to be compensated. They did get $1,000 from their insurance company. "I just want this all to fade away," David Henneberry said. "I'm not like a rock star who sought publicity. I don't want any more."
Tsarnaev remains in custody after pleading not guilty to 30 federal charges stemming from the April 15 explosions. His brother died during the police search for the suspects.
Q: MySQL single-table SELECT query with ORDER BY causes filesort

I looked through multiple similar posts trying to get input on how to redefine my index but can't figure this out. Every time I include the ORDER BY clause, MySQL uses filesort to return the result set. Here's the query and the table definition:

SELECT `s`.`title`, `s`.`price`, `s`.`price_sale`
FROM `style` `s`
WHERE `s`.`isactive` = 1 AND `s`.`department` = 'women'
ORDER BY `s`.`ctime` DESC

CREATE TABLE IF NOT EXISTS `style` (
  `id` mediumint(6) unsigned NOT NULL auto_increment,
  `ctime` timestamp NOT NULL default CURRENT_TIMESTAMP,
  `department` char(5) NOT NULL,
  `isactive` tinyint(1) unsigned NOT NULL,
  `price` float(8,2) unsigned NOT NULL,
  `price_sale` float(8,2) unsigned NOT NULL,
  `title` varchar(200) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `idx_grid_default` (`isactive`,`department`,`ctime`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci AUTO_INCREMENT=47 ;

Also, here's the EXPLAIN result set I get:

+----+-------------+-------+------+---------------+----------+---------+-------------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key      | key_len | ref         | rows | Extra                       |
+----+-------------+-------+------+---------------+----------+---------+-------------+------+-----------------------------+
|  1 | SIMPLE      | s     | ref  | idx_grid      | idx_grid | 6       | const,const |    3 | Using where; Using filesort |
+----+-------------+-------+------+---------------+----------+---------+-------------+------+-----------------------------+

A: Why does s.isactive not get used as an index? MySQL (or any SQL database, for that matter) will not use a key if it has low cardinality. In plain English, if many rows share the same value for a key, (My)SQL will not use the index but will just read the table instead. A boolean field almost never gets picked as an index for this reason: too many rows share the same value.

Why does MySQL not use the index on ctime? ctime is included in a multi-field, or composite, index.
MySQL will only use a composite index if you use all of it or a left-most part of it.*) If you sort on the middle or rightmost field(s) of a composite index, MySQL cannot use the index and will have to resort to filesort.

So ORDER BY isactive, department will use an index; ORDER BY department will not. ORDER BY isactive alone will also not use an index, but that's because the cardinality of the boolean field isactive is too low.

*) There are some exceptions, but this covers 97% of cases.

Links:
Cardinality (data modeling): http://en.wikipedia.org/wiki/Cardinality_%28data_modeling%29
http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html
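The leftmost-prefix rule can be observed directly with a query planner. SQLite is not MySQL and its optimizer differs in detail, but it applies the same prefix rule to composite indexes, so the sketch below is only an illustration of the principle, not of MySQL's exact behavior:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE style (isactive INT, department TEXT, ctime INT)")
# Composite index in the same column order as the question's idx_grid_default.
conn.execute("CREATE INDEX idx_grid ON style (isactive, department, ctime)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail); join the details.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Equality on the leftmost columns of the composite index: the index is usable.
p1 = plan("SELECT * FROM style WHERE isactive = 1 AND department = 'women'")
print(p1)  # plan mentions idx_grid

# Sorting by the rightmost column alone: the planner needs an explicit sort
# step (SQLite's temp b-tree, the analogue of MySQL's filesort).
p2 = plan("SELECT * FROM style ORDER BY ctime")
print(p2)  # plan includes a temp b-tree for ORDER BY
```

The same experiment against a real MySQL server would show `Using filesort` in the EXPLAIN output of the second query, as in the question.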
The President of Venezuela has urged women to stop using hairdryers and offered alternative styling tips as the country’s energy crisis continues. Nicolas Maduro has announced a decree giving state employees Fridays off for two months as part of measures to offset a crippling electricity shortage. He urged his compatriots to increase other efforts to save power, including cutting appliance use and raising the temperature on air conditioning units. Recommending that women reduce hairdryer use to “special occasions”, Mr Maduro added: “I always think a woman looks better when she just runs her fingers through her hair and lets it dry naturally. It's just an idea I have." He also called on Venezuelans to make small changes to their routines, including embracing the tropical heat and hanging clothes out to dry instead of using tumble dryers. Not everyone welcomed the advice, with one Caracas resident telling Al Jazeera: "If the President thinks that not blowdrying our hair is going to help, then the problem is far worse than we thought." The government has declared Fridays a non-working day for the public sector for the next 60 days as the economic and energy crises combine to cause food shortages and long supermarket queues.

Shopping centres were shut down in Caracas in February in another bout of electricity rationing (AFP/Getty Images)

Around 70 per cent of Venezuela’s electricity comes from a hydroelectric plant at the Guri Dam, which holds back the Caroni River in the south-eastern state of Bolivar. Officials have been warning for weeks that the water level behind it has fallen to near its minimum operating level, meaning it may soon have to be shut down entirely.
Mr Maduro’s socialist administration blames the crisis on a drought caused by the El Nino weather phenomenon and on acts of sabotage by its opponents, but experts say rationing could have been prevented by investment in maintenance and in the construction of thermoelectric plants. The President’s emergency measures sparked ridicule from critics, who have predicted an acute recession.

Venezuela's President Nicolas Maduro gestures while he attends a rally in Caracas, April 7, 2016 (Reuters)

"Just because Maduro doesn't work Monday to Friday, Saturday or Sunday, doesn't mean we Venezuelans are like that," said opposition politician Maria Corina Machado. "What we want is to keep working, and for you, Maduro, to go." His rambling and sometimes expletive-laden late-night speeches have irked many Venezuelans struggling to make ends meet and desperate for a solution to the crisis. The South American nation has grappled with blackouts for years, including one that took Mr Maduro himself by surprise as he delivered a national address on live television. Caracas occasionally shuts down because of citywide losses of power, and some rural areas are living mostly in the dark.
The blast, which rattled entire buildings and broke glass, was felt in several parts of the city AFP via Getty World news in pictures 3 August 2020 A general view shows the new road bridge in Genoa, Italy ahead of its official inauguration, after it was rebuilt following its collapse on August 14, 2018 which killed 43 people Reuters World news in pictures 2 August 2020 Empty stall spaces are seen hours before a citywide curfew is introduced in Melbourne, Australia EPA World news in pictures 1 August 2020 People take part in a demonstration by the initiative "Querdenken-711" with the slogan "the end of the pandemic - the day of freedom" to protest against the current measurements to curb the spread of COVID-19 in Berlin, Germany AFP via Getty World news in pictures 31 July 2020 Pilgrims circumambulating around the Kaaba, the holiest shrine in the Grand mosque in Mecca. Muslim pilgrims converged today on Saudi Arabia's Mount Arafat for the climax of this year's hajj, the smallest in modern times and a sharp contrast to the massive crowds of previous years Saudi Ministry of Media/AFP World news in pictures 30 July 2020 The Mars 2020 Perseverance mission lifts off at the Kennedy Space Centre in Florida. The mission is part of the USA's largest moon to Mars exploration. Nasa will attempt to establish a sustained human presence on and around the moon by 2028 through their Artemis programme EPA World news in pictures 29 July 2020 A woman refreshes herself in a outdoor pool in summer temperatures in Ehingen, Germany dpa via AP World news in pictures 28 July 2020 Malaysia's former prime minister Najib Razak speaks to the media after he was found guilty in his corruption trial in Kuala Lumpur AFP via Getty World news in pictures 27 July 2020 North Korean leader Kim Jong Un poses for a photograph after conferring commemorative pistols to leading commanding officers of the armed forces on the 67th anniversary of the "Day of Victory in the Great Fatherland Liberation War". 
Mr Maduro gave workers a full week off in March to save electricity, and cut the hours of more than 100 shopping centres across the country in the previous month. Together with other measures, he hopes to reduce electricity consumption by at least 20 per cent. His predecessor, Hugo Chavez, promised to solve the problem in 2010, but little has improved. Other Latin American countries are also grappling with the drought, though still working normal weeks. Juan Manuel Santos, the President of Colombia, has been urging citizens to cut back on power consumption to avoid rationing, while the Panama Canal is imposing restrictions on ships as it struggles with low water levels.
Sometimes I think we should give up on the classification of bad actors in SurveyMan – it’s a very hard problem and developing solutions feels very far afield from what I want to be doing. Then I see blog posts like this one that show the limitations of post-hoc modeling and the pervasiveness of bad actors. As some of the commenters pointed out, if people were responding randomly, then we could treat their responses as noise. However, what they are pointing out is that people in fact do not respond randomly – there may be some questions that are more likely to elicit “incorrect” responses than others, which puts us in bias detection territory. SurveyMan comes equipped with a variety of classifiers for deciding whether a response comes from a bad actor. Different classifiers come with different sets of assumptions and different power. In particular, I was focused on developing classifiers that leveraged SurveyMan’s randomization and structural features. Since we have a simulator, it might make sense to take a look at classifiers that make modeling assumptions as well. Prior work Given the amount of work on other aspects of survey design, we initially found it surprising that there wasn’t much on the quality of responses. Part of the reason why this kind of modeling is difficult is because it is so contingent on the survey instrument. When we began investigating bad actors in 2013, we cited Meade and Craig. According to Google Scholar, this paper (published in 2012) now has 180 citations and so far as I can tell, it is still the best review of the problem. I thought it would be good to revisit this paper, since it’s been a while. Literature In this post, we will focus on the contributions of Meade and Craig. However, I want to start by pointing out some interesting prior work that they cite. 
I found two pieces of this paper particularly interesting and possibly useful for background in future papers: Use of simulators in assessing the validity of psychological instruments: Validity scales must themselves be validated, and this is normally accomplished by instructing experimental participants (or computers) to simulate a particular kind of invalid responding (e.g., responding randomly; faking good or bad). A successful validity scale correctly identifies most simulated protocols as invalid while misidentifying very few unsimulated protocols as invalid. I had not seen this mentioned previously and need to look for citations. It gives us a nice precedent in the psychology community for using our simulator. Even more prior work on the tradeoff between improving the accuracy of the instrument (as Gelman would say, a measurement issue) and the effect of identifying (whether accurately or inaccurately) bad actors: Accurately identifying relatively rare events, even with a well-validated scale, is in principle a very difficult psychometric problem (Meehl & Rosen, 1955). Furthermore, research indicates that “correcting” scores with validity scales can actually decrease the validity of the measure (Piedmont, McCrae, Riemann, & Angleitner, 2000). Piedmont et al. conclude that researchers are better off improving the quality of personality assessment than trying to identify relatively infrequent invalid protocols. I see several issues with this assessment. Of course, the main problem is that inattentiveness and intentional misrepresentation may not be rare events (and in fact current research and the point of this post is that they are not!). Furthermore, as a computer scientist, I am concerned that we are inducing an arms race here. We know that the instrument is flawed for inattentive respondents. We also assume that inattentive respondents are not malicious. They will not adapt their behavior in response to a change in policy. 
This isn’t to say that their behavior won’t change – over time, we may see survey respondents in general become more or less attentive, or for their attentiveness to respond to different interventions, but the key here is that there is no causal relationship between the policy or protocol or instrument (or whatever term you prefer) and inattentive responses (which of course is not true in the general case, but we’ll get to that later). The behavior of inattentive respondents contrasts with the behavior of respondents who deliberately misrepresent themselves (we can update our model later, but for now assume that bad actors belong to one of these two classes, rather than a mixture of them). Meade and Craig review five methods of detecting bad actors (i.e., people who provide “careless responses”) and perform two studies for measuring the effectiveness of these methods. They estimated 10-12% of their participants (undergraduates) responded carelessly. This proportion is smaller than what a previous study had estimated. In any case, what I see as perhaps the most important contribution from Meade and Craig to SurveyMan is that they found that features of the data (the topics being measured, the features of the survey) had a strong influence on their ability to identify careless responses. Their recommendations are features that SurveyMan might suggest the survey writer add, if the static analyses find it especially difficult to classify respondents as bad actors. Meade and Craig make the following contributions: …we provide the first investigation of a comprehensive set of methods for screening for careless responding and provide an understanding of the way in which they relate to one another and observed data responses. This comparison very nicely provides empirical backing for some suggestions for best practices. Although not all of them will be applicable to SurveyMan, it’s nice to see them laid out in a clear and well-thought-out study. 
In particular, they break their classification into two stages: those that involve augmenting the survey before deployment, and those that include specialized analyses. Methods that involved tweaking the survey instrument included: Prompts urging respondents to pay attention, and reduce their anonymity. Respondents were asked to either electronically “sign” a statement of good faith, or initial each page of questions. Social desirability scales. These are culturally relative. The idea is that there is a general gold standard of qualities we expect respondents to identify as strongly desirable or strongly undesirable. The paper included a citation (Paulhus 2002), but I have not read this paper, so I don’t entirely know what this means. My interpretation is that respondents who are acting in good faith will respond to questions in a way that is consistent with general social mores (e.g., theft and murder are generally bad, no one thinks of themselves as racists, etc.). Lie scales. I really don’t know what these are, and not being part of the community, I suspect I’m not going to anytime soon. From what I gather, they are part of a psychological battery that only a sociopath would answer “correctly.” Nonsensical, bogus items. These are the closest to things I’ve seen in the general survey literature, and are commonly described as being a subset of “attention-check questions.” These are questions with a single, known valid response and can be used to flag potentially inattentive respondents. They are the kinds of questions we could automatically inject into surveys that have insufficient entropy bounds (assuming that entropy is actually a useful measure). Scales of consistent responding. This is another idea drawn from the psychology literature, and is the sort of thing we have actually talked about trying to optimize: some questions are essentially redundant, and this index is supposed to measure internal correlations in the survey. 
SurveyMan allows respondents to flag questions whose responses we expect to be correlated (although this method advocates for a much stronger condition–that the questions be statistically identical). Ideally, once we have collected enough information to learn a reliable classifier for a particular survey, we should be able to prune these questions. Self-report measures. Ask the respondents whether they really tried, and whether we should use their results. Methods that involve performing specialized analysis include: Response consistency. The previously mentioned consistency method above involves injecting questions known to be essentially identical. This method looks at questions that are either suspected to be correlated, or are found to be correlated empirically. SurveyMan actually performs this analysis in practice (and has since we released the software two years ago). Response pattern indices. The authors look for unlikely strings of responses. This approach should be a relative non-issue for us, due to the pervasive randomization in SurveyMan, but we should probably implement something like it for cases where people turn randomization off. Outlier analysis. This, of course, is the heart of what we try to do in SurveyMan, since it is almost totally domain-agnostic. The work of Meade and Craig is informed by some domain knowledge, and I would like to see if anything they do is applicable to SurveyMan. Response time. This is hands-down the most asked-about feature that we don’t provide as a default. My main reasoning is that classifiers based on timing can be gamed. In contexts such as AMT, you provide a time limit for tasks, which should be an upper bound, but inevitably it changes user behavior. It isn’t clear that response time is interpretable across tasks, nor across platforms. Within tasks and platforms, perhaps we are interested in outlier analysis. 
However, I suspect this will only catch a few very very very stupid respondents (who should be caught by other methods anyway). In conclusion, I feel it’s a bit of a strawman, but in past versions, I have provided example injectable Javascript. … we examine the latent class structure of a typical undergraduate respondent sample using both latent profile analysis and factor mixture modeling, revealing data patterns common among careless responders. It wasn’t entirely clear to me how they did this. They say that “raw item-level data matrix” is very large. Presumably this meant that there were many questions with many options. Here is the setup: the authors administered a mix of several personality assessments. One of them was a 300-item personality test (the International Personality Item Pool (IPIP)). I took a version of the test here. Some of the questions were clearly rephrasings of the synonymous or antonymous versions of other questions, meant to build in some redundancy to increase accuracy. You can see an example of the assessment that’s returned in a previous post. This is what the text says: Recent evidence suggests that Mahalanobis distance can be effective at identifying inattentive responses (Ehlers et al., 2009). Given the large size of the raw item-level data matrix, using all survey items simultaneously would have been computationally intensive. Instead, five Mahalanobis distance measures were computed for each respondent, one for each of the five broad personality factors (60 items per factor). The correlations among the five Mahalanobis distance measures were in excess of .78 (p < .05) and were averaged to a single Mahalanobis distance value. It looks like they took the codings for each of the personality domains and clustered those questions together before performing outlier analysis. 
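The averaging procedure quoted above can be sketched in a few lines. This is a hypothetical reconstruction, not SurveyMan's or Meade and Craig's actual code: the shape of the response matrix, the domain-to-column assignment, and the use of a pseudo-inverse to guard against singular covariance are all assumptions made for illustration.

```python
import numpy as np

def mahalanobis_distances(X):
    """Mahalanobis distance of each row of X from the sample mean."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.pinv(cov)  # pseudo-inverse in case the covariance is singular
    diff = X - mu
    # For each row i, compute diff[i] @ inv @ diff[i], then take the square root.
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, inv, diff))

def averaged_domain_distance(responses, domains):
    """responses: (n_respondents, n_questions) array of scale answers.
    domains: dict mapping a domain name to a list of question column indices.
    Returns one averaged Mahalanobis distance per respondent, in the spirit
    of the per-factor averaging Meade and Craig describe."""
    per_domain = np.column_stack([
        mahalanobis_distances(responses[:, cols]) for cols in domains.values()
    ])
    return per_domain.mean(axis=1)
```

On simulated data where attentive respondents answer near a single trait level and careless respondents answer uniformly at random, the careless rows tend to receive larger averaged distances, which is the behavior an outlier analysis would exploit.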
In the past, the only mechanism we have given survey writers to describe clusters in their surveys has been the correlation column, which allows them to tag questions. Here is my best guess for the procedure they used: every 7-point scale is treated as a random variable. Each question has a tag known a priori indicating which of the five measures it belongs to. For each measure (i.e. domain), it treats some set of questions that measure that domain as its own dimension. For the 300-question inventory I took, this gives us approximately 60 questions per domain – therefore, it gives us a 60-dimensional space in which we need to find the center. Note that the analysis requires that the space we are analyzing be . (It isn’t clear if the 60 questions that would make up one of these factors are the same 60 questions that make up their analyses, since they used a series of personality tests). Side Note: I think the alternative analysis they were talking about might have to look at outliers from the center of a 300-dimensional random variable. Instead, they averaged these five 60-dimensional ones. Note that we can construct an example where these two methods are not equivalent. Anyway, I see three main ways we can use Mahalanobis distance in SurveyMan: As part of the cluster classifier. Right now, we try to cluster all responses into valid and invalid. I’ve rotated through several ways of using it, and right now we inject sufficient known random respondents so that we have at least 10% bad actors, and use the majority of respondents in a cluster to determine the cluster’s classification (where “real” data is initially classified as unknown, but gets counted as valid). I’ve also used it in the “stacked” strategy, which first clusters, then labels using the LPO strategy, and again uses the majority label of the cluster to update the labels on the remaining cluster. I see Mahalanobis distance as useful to update the score for individual responses. 
At the moment, we don’t have a summary statistic to attach to each response, so comparing within the cluster isn’t easy. Adding Mahalanobis distance for each response from the distribution should help. As its own classifier. I’m not really sold on this one, but we could just treat an n-question survey as an n-dimensional random variable. It isn’t great for heterogeneity in responses, but if we break out the analysis by path (which we should be doing anyway), it might work nicely. In conjunction with user-provided correlation labels. This I think would be really cool. It would basically be the same strategy as what this paper is suggesting, where, if the psychometric questionnaire the paper uses were a SurveyMan program, we would just label the questions with their appropriate domain. The authors were also interested in whether there were different types of careless respondents. They did a “latent factor analysis,” which so far as I can tell was something like PCA on the “indices.” In particular, since they were using a psychometric survey, they wanted to see if there was some correlation between the subdomains of agreeableness and the indices they tabulated. This isn’t particularly generalizable to SurveyMan, but is clearly of interest to people who run psychometric surveys. … we provide an estimate to the prevalence of careless responding in undergraduate survey research samples. They reported 10-12% careless responses. … we examine the role of instruction sets (anonymous vs. identified) as well as the efficacy of response time and self-report indicators of careless responding. They performed some nice simulations with injected data and generally found what you would expect – when responses are clearly clustered (uniform random vs. true or gaussian random vs. true), it’s much easier to identify careless responses than when a given respondent exhibits a mixture of behavior. 
In SurveyMan, we try to use the option of submitting early as a proxy for self-classification of the time of the changepoint when a respondent goes from being conscientious to being careless. We’ll see if we can validate some of these contributions in simulation and work them into SurveyMan’s classifier system. One thing I would like to note is that there was no discussion of partitioning or resampling the data for the hypothesis tests. There are a lot of hypotheses being tested in this paper. However, it’s pitched in an exploratory way, and there aren’t any strong recommendations that come out of the paper, so I am not too bothered by it.
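As a concrete illustration of the response pattern indices discussed earlier (the "long string" analysis from the careless-responding literature), here is a minimal sketch. It is not SurveyMan's implementation, and, as noted above, it is only meaningful when question and option randomization is turned off:

```python
def longest_run(responses):
    """Length of the longest run of identical consecutive answers.

    Very long runs (e.g., straight-lining the same option down the page)
    are a cheap red flag for careless responding; what counts as "too long"
    is survey-specific and is best calibrated against simulated respondents.
    """
    if not responses:
        return 0
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best
```

A respondent whose `longest_run` is close to the number of questions on a page is a candidate for flagging, subject to the same caveat as all such indices: the threshold interacts with the instrument itself.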
[Endoscopic submucosal excavation (ESE) is a safe and useful technique for endoscopic removal of submucosal tumors of the stomach and the esophagus in selected cases]. Submucosal tumors of the stomach and esophagus are often detected incidentally during endoscopy and further characterized by endoscopic ultrasonography. After risk estimation, such submucosal tumors are either monitored by watchful waiting or surgically resected. Nevertheless, symptomatic submucosal tumors should be treated. Endoscopic submucosal excavation (ESE) and submucosal tunneling endoscopic resection (STER) may represent an alternative non-surgical therapeutic option. Two cases of complete endoscopic resection of symptomatic submucosal tumors are reported: a small gastrointestinal stromal tumor (GIST) of the antrum and a 12 cm long esophageal lipoma. For selected cases, ESE of symptomatic submucosal tumors of the stomach and esophagus represents a useful alternative to surgical removal, particularly if the mass is located in the antrum or corpus, is smaller than 20 mm, and is clearly defined by endoscopic ultrasonography.
Q: Connecting to and syncing with localhost from android device not working I am using the Azure Mobile App quickstart ToDoList example to get started with a cross-platform app. I have set up the back-end and it is working on localhost - I can hit it using Swagger, and GETs, POSTs, etc. are working. I then set up the client application (Xamarin.Forms). I am running the client application on my Android device and all works great when the back-end is in Azure, including the offline sync element. The problem is that I have to work locally for now but I cannot sync with the db when running on localhost. At first the debugger was giving me a "connection refused" error, so I followed the steps here and in various other sources, including using my laptop IP and setting a firewall rule, adding a binding to the port in IIS Manager and applicationhost.config, and changing ApplicationURL in Constants.cs. Now, I get no connection refused error, but the data is not getting to the db, although the local db on the tablet seems to be working - it is failing when I try to sync to/from the db. I'm not too familiar with networking, but it may be important to note that when I use localhost:portnum/tables/todoitem in a browser I get results in XML, but when I use 192.168.0.10:portnum/tables/todoitem I get "Bad Request - Invalid Hostname". A: By default, your Mobile App .NET server backend application will run in IIS Express. This is problematic when debugging with a client application running in another device on your network, or in a virtual machine in Hyper-V (such as Windows Phone Emulator). IIS Express will host your server application under localhost, which makes the application unreachable to other devices or virtual machines. Your client application running on Windows Phone Emulator has a different meaning for localhost. The same is true for the Visual Studio Emulator (which runs in Hyper-V) and the Google Emulator. 
It is simpler to configure your machine to host your Mobile App .NET server backend application on IIS, as this allows you to control the binding of the server application to an IP address, rather than localhost. For information on this, see: https://github.com/Azure/azure-mobile-apps-net-server/wiki/Local-development-and-debugging-the-Mobile-App-.NET-server-backend
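For what it's worth, the "Bad Request - Invalid Hostname" response you are seeing is the classic symptom of IIS Express having no binding for the requested host. If you do stay on IIS Express rather than moving to full IIS, the usual fix looks like the fragment below in the solution's applicationhost.config. This is a generic sketch, not from the linked wiki page: 192.168.0.10 is your laptop IP from the question, and 51234 stands in for your actual port, which you elided as "portnum".

```xml
<!-- applicationhost.config: add a LAN binding alongside the default localhost one.
     51234 is a placeholder port; 192.168.0.10 is the asker's laptop IP. -->
<bindings>
  <binding protocol="http" bindingInformation="*:51234:localhost" />
  <binding protocol="http" bindingInformation="*:51234:192.168.0.10" />
</bindings>
```

You may also need to reserve the URL for non-admin processes (netsh http add urlacl url=http://192.168.0.10:51234/ user=everyone) and allow the port through Windows Firewall, which you mentioned having done already.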
define({
root:
//begin v1.x content
({
	deleteButton: "[Delete]"
})
//end v1.x content
,
"ar": true,
"az": true,
"bg": true,
"ca": true,
"cs": true,
"da": true,
"de": true,
"el": true,
"es": true,
"fi": true,
"fr": true,
"he": true,
"hu": true,
"hr": true,
"it": true,
"ja": true,
"kk": true,
"ko": true,
"nb": true,
"nl": true,
"pl": true,
"pt-pt": true,
"pt": true,
"ro": true,
"ru": true,
"sk": true,
"sl": true,
"sv": true,
"th": true,
"tr": true,
"uk": true,
"zh": true,
"zh-tw": true
});
at is g in -34/3*g**3 + 0 + 18*g**2 + 30*g + 2/3*g**4 = 0? -1, 0, 3, 15 Let y(g) be the first derivative of 14 - 1/8*g**2 + 1/12*g**3 - 1/4*g + 1/16*g**4. Suppose y(j) = 0. What is j? -1, 1 Let g be (-90)/18*(-22)/(-5). Let v be (-7)/(-22) + (-4)/g. Factor 3/2*o + o**2 - o**3 - 1/2*o**5 + v - 3/2*o**4. -(o - 1)*(o + 1)**4/2 Let g(c) = c**3 + 6*c**2 + 3*c + 7. Let d be g(-5). Let r be -4 - (42/(-19) + d + -19). Factor 2/19*j**2 + 2/19 - r*j. 2*(j - 1)**2/19 Let z be 64/(-6) + (-3)/9. Let d = z + 14. Factor 2 - 4*x**d - 3*x - 4*x**2 + x**3 + 4*x. -(x + 1)**2*(3*x - 2) Let r(g) = -g**3 + 11*g**2 + 60*g + 8. Let k be r(15). Let m(f) be the first derivative of -k + 1/7*f**2 + 2/7*f - 4/21*f**3. Find w, given that m(w) = 0. -1/2, 1 Find z, given that -5*z + 0 - 1/2*z**3 + 7/2*z**2 = 0. 0, 2, 5 Let q(v) be the second derivative of 0 + 5*v**2 + 5/2*v**3 + 9*v - 5/6*v**4 + 1/12*v**5. Let y(b) be the first derivative of q(b). Solve y(l) = 0. 1, 3 Let x(p) be the first derivative of -5*p**6/6 + 12*p**5/5 + p**4/4 - 4*p**3 + 2*p**2 + 54. Suppose x(t) = 0. What is t? -1, 0, 2/5, 1, 2 Factor -31*p**3 + 112*p - 46*p**2 - 4 - 216 + 36*p**3 - 12*p + 161*p**2. 5*(p - 1)*(p + 2)*(p + 22) Find v such that -4*v**4 + 12 + 40/3*v**2 - 4*v**3 - 2/3*v**5 + 26*v = 0. -3, -1, 2 Let -18*a - 71/6*a**3 - 1/6*a**5 - 133/6*a**2 - 5/2*a**4 - 16/3 = 0. Calculate a. -8, -4, -1 Suppose 4*z = g + 46, -4*g + 3*g - 51 = -3*z. Let b = g - -137/2. Solve -y**4 + b*y**3 + 0*y + 0 - y**2 = 0. 0, 1/2, 2 Factor -2 + 7/3*y - 1/3*y**2. -(y - 6)*(y - 1)/3 Let 3*y - 1/5*y**2 + 0 = 0. Calculate y. 0, 15 Suppose -28 = -4*a - 2*z - 0, 65 = 5*a - 5*z. Find d such that -2*d**3 + a*d**3 - 3*d**2 - 4*d**3 = 0. 0, 1 Let q(v) = -5*v**2 - 13*v - 39. Let f(t) = -8*t**2 - 27*t - 77. Let s(d) = 3*f(d) - 5*q(d). Factor s(g). (g - 18)*(g + 2) Let z(a) be the second derivative of -a**4/36 - 11*a**3/18 + 2*a**2 - 10*a + 4. Factor z(y). -(y - 1)*(y + 12)/3 Let j be 65/(-26)*(-8)/5. 
Let 7*g - 3*g**3 - 2*g - 2*g**3 + 5*g**j - 5*g**2 = 0. What is g? -1, 0, 1 Factor -87*d + 55 - 3*d**2 + 37*d - 2*d**2. -5*(d - 1)*(d + 11) Solve -12/5*h**5 - 216/5*h**2 + 28/5*h + 8 - 16/5*h**4 + 176/5*h**3 = 0 for h. -5, -1/3, 1, 2 Let i(x) be the third derivative of 7*x**6/30 - 4*x**5/5 - 2*x**4/3 + 6*x**2 + 14*x. Factor i(h). 4*h*(h - 2)*(7*h + 2) Solve -4/9*z**3 - 16 + 16/9*z + 4*z**2 = 0 for z. -2, 2, 9 Let a = -1335 + 1335. What is m in 0*m**2 - 5/3*m + 0*m**4 - 5/3*m**5 + 10/3*m**3 + a = 0? -1, 0, 1 Let r(n) be the third derivative of n**7/70 - 11*n**6/40 - 7*n**5/10 + 3*n**4 - 98*n**2. Determine q so that r(q) = 0. -2, 0, 1, 12 Suppose -6*w + 12 = -4*w. Let t(q) = -12*q**3 + 32*q**2 - 122. Let d(f) = -f**3 + 3*f**2 - 11. Let r(x) = w*t(x) - 68*d(x). Factor r(y). -4*(y - 1)*(y + 2)**2 Suppose 4*x - 12 = 12. Let -2*i**2 + 7*i**2 + 6 + 5 + 10*i - x = 0. Calculate i. -1 Let b(j) be the second derivative of j**7/252 + j**6/60 - j**5/24 - 3*j**4/8 - 8*j**3/9 - j**2 + 7*j. Suppose b(l) = 0. What is l? -2, -1, 3 Let b be 4/(52/(-21))*(-16)/1176*21. Factor -4/13*o**2 + 4/13*o**4 + 0*o + 0 - b*o**3. 2*o**2*(o - 2)*(2*o + 1)/13 Let y be (38/(-950))/(3/(-45))*50/6. Suppose -2/3*b**4 + 4/3*b**2 - 2/3*b**3 + 1/3*b - 2/3 + 1/3*b**y = 0. Calculate b. -1, 1, 2 Let m = -37719 + 339473/9. Suppose m*t**3 + 0 + 0*t + 0*t**2 = 0. Calculate t. 0 Let x(w) = 2*w**2 - 2. Let q(h) = -3*h**2 - h - 8. Let t(f) = -2*q(f) - 4*x(f). What is k in t(k) = 0? -3, 4 Factor -164/3*a - 18 - 2*a**2. -2*(a + 27)*(3*a + 1)/3 Let x(g) = 2*g**2 + 3*g - 7. Let b be x(-4). Let p = 23 - b. Determine o so that 6*o + 6 + 1 - p - 3*o**2 = 0. 1 Suppose -17*v + 5*v + 36 = 0. Let r(d) be the third derivative of 0*d + 1/20*d**5 + 0 + 2*d**v - 1/2*d**4 - 3*d**2. Factor r(h). 3*(h - 2)**2 Let p = 289 + -287. Suppose -3/4*m**5 - 81/4 - 69/2*m**3 - 243/4*m - 33/4*m**4 - 135/2*m**p = 0. What is m? -3, -1 Let l(a) = -2*a**3 - 20*a**2 - 10*a + 28. Let n(d) = 4*d**3 + 41*d**2 + 19*d - 54. Let j(w) = 5*l(w) + 2*n(w). 
Factor j(v). -2*(v - 1)*(v + 2)*(v + 8) Let p = 2/557 - -7227/3899. Let u = p + -11/21. Factor -8/3*a**4 + 0*a**2 + 0*a - u*a**5 + 0 - 4/3*a**3. -4*a**3*(a + 1)**2/3 Let -9/2*b**3 + 0 + 3/4*b**4 - 3/4*b**2 + 9/2*b = 0. Calculate b. -1, 0, 1, 6 Let i be 16/40 + 404/(-10). Let x = i - -44. Solve 3/2*n**3 + 5/6*n**x + 0 + 1/6*n**5 + 1/3*n + 7/6*n**2 = 0. -2, -1, 0 Let o be ((-18)/(-36))/((-1)/4*-1). Factor 8*z**2 - 4*z**o + 3*z**2 - 2*z**2. 5*z**2 Let m = 224 - 1118/5. Find k such that 2/5*k**3 - 2/5*k - m*k**2 + 0 + 2/5*k**4 = 0. -1, 0, 1 Let n(u) be the second derivative of -u**5/110 - 3*u**4/22 + 2*u**3/3 + 364*u. Let n(o) = 0. Calculate o. -11, 0, 2 Let d(i) be the first derivative of i**5/20 + i**4/28 - 17*i**2/2 - 1. Let h(v) be the second derivative of d(v). Determine x, given that h(x) = 0. -2/7, 0 Let h = 45 - 43. Factor -3*m**h - 2 + 6 + 8. -3*(m - 2)*(m + 2) Let f(x) be the third derivative of -x**7/840 + x**6/360 - 43*x**3/6 + 28*x**2. Let h(t) be the first derivative of f(t). Solve h(n) = 0. 0, 1 Let r(l) be the third derivative of 5*l**9/3024 - l**7/168 + 5*l**3/3 + 7*l**2. Let b(y) be the first derivative of r(y). Factor b(h). 5*h**3*(h - 1)*(h + 1) Let u(j) be the third derivative of 0*j**3 + 23*j**2 + 0 + 2/3*j**4 + 0*j - 16/15*j**5 + 7/30*j**6. Factor u(m). 4*m*(m - 2)*(7*m - 2) Let s(y) be the third derivative of y**6/60 + 14*y**5/15 - 88*y**2. Find n, given that s(n) = 0. -28, 0 Let n(j) be the second derivative of 216*j**6/5 + 162*j**5/5 - 189*j**4/4 + 13*j**3 - 3*j**2/2 - 133*j. Determine w so that n(w) = 0. -1, 1/12, 1/3 Suppose 2*v + 3*z - 3 = -v, -5*v + 5*z = -25. Suppose 2*h = 2*b - 6, -25 = -5*h - 0*h - v*b. Factor 0 + 0*r**3 + 2/5*r**5 - 4/5*r**4 + 4/5*r**h - 2/5*r. 2*r*(r - 1)**3*(r + 1)/5 Let z(l) be the first derivative of -l**6/1980 + l**5/330 + l**4/44 - 2*l**3/3 + 7. Let h(n) be the third derivative of z(n). Let h(a) = 0. What is a? -1, 3 Let w be 8 + 0 - (3 + -3 - 0). Let j = w + -15/2. 
Let j*c**3 - 1/2*c - 1/2*c**4 + 1/2*c**2 + 0 = 0. What is c? -1, 0, 1 Let l(k) be the second derivative of k**6/105 - 3*k**5/70 + k**4/42 + k**3/7 - 2*k**2/7 + 26*k + 3. Factor l(v). 2*(v - 2)*(v - 1)**2*(v + 1)/7 Let s(r) = 1300*r**2 + 16608*r + 53152. Let y(m) = 200*m**2 + 2555*m + 8177. Let v(l) = 5*s(l) - 32*y(l). Let v(q) = 0. Calculate q. -32/5 What is a in 244/3*a - 4/3*a**2 + 248/3 = 0? -1, 62 Let t = 112 - 122. Let b be (-8)/t - 238/385. Factor -b*u**2 + 0 - 2/11*u. -2*u*(u + 1)/11 Factor -4/9*z + 1/9*z**2 + 1/3. (z - 3)*(z - 1)/9 Solve 0*w**2 - 2/9*w**4 + 0 + 2/3*w**3 - 8/9*w = 0. -1, 0, 2 Let d(m) be the first derivative of -m**4/20 - 224*m**3/15 - 6272*m**2/5 - 501. Factor d(w). -w*(w + 112)**2/5 Let z(c) be the first derivative of -3*c**5/5 - 9*c**4/2 + 81*c**2 + 243*c - 531. Factor z(p). -3*(p - 3)*(p + 3)**3 Let d(w) be the first derivative of -6 - 4/5*w**5 + 0*w**3 - w**4 + 0*w**2 + 0*w. Factor d(t). -4*t**3*(t + 1) Let 988/5*p - 2/5*p**2 - 122018/5 = 0. Calculate p. 247 Factor 16/7 + 2/7*g - 16/7*g**2 - 2/7*g**3. -2*(g - 1)*(g + 1)*(g + 8)/7 Let g(r) be the first derivative of -81*r**4/4 + 285*r**3 + 196*r**2 + 44*r + 245. Factor g(l). -(l - 11)*(9*l + 2)**2 Let y(b) be the third derivative of b**7/70 + b**6/40 - 3*b**5/20 - b**4/8 + b**3 - 175*b**2. Find d such that y(d) = 0. -2, -1, 1 Let u(v) be the first derivative of 11/3*v**2 + 8 - 2/3*v**3 + 8/3*v. Factor u(g). -2*(g - 4)*(3*g + 1)/3 Suppose -2*z + 10 = 5*x, -x = -z - 8 - 1. Factor -35*n - 46*n - 32 + 33*n - x*n**3 - 24*n**2. -4*(n + 2)**3 Factor -1/2*v**2 + 11*v + 23/2. -(v - 23)*(v + 1)/2 Let z(x) be the third derivative of x**6/200 + 3*x**5/50 + 11*x**4/40 + 3*x**3/5 - 41*x**2. Determine s, given that z(s) = 0. -3, -2, -1 Factor 0 - 5/3*s**3 + 30*s - 5*s**2. -5*s*(s - 3)*(s + 6)/3 Let r(d) be the second derivative of 0 + 49/2*d**2 + 13*d + 1/12*d**4 - 7/3*d**3. Factor r(p). (p - 7)**2 Let w be 3/(126/39 - 9/3). Suppose -2*q = w*q - 0*q. Solve -1/3*n**2 + 0*n + q = 0. 
0 Let t(v) = -v**5 + v**4 + v**3 + v**2 + v + 1. Let u(z) = 5*z**5 - 11*z + 3 + 9*z**3 - 6 - 11*z**4 + 10*z - 11*z**2. Let o(q) = 3*t(q) + u(q). Factor o(j). 2*j*(j - 1)**4 Let d be 16/(-18) + 3/((-9)/(-3)). Let n(z) be the second derivative of 0*z**2 + 3*z + 0 - 2/9*z**4 + d*z**3. Factor n(m). -2*m*(4*m - 1)/3 Let
Q: Foreach skip option value I'm trying to figure out how to remove language id 10 from the loop. <? foreach ($languages as $langId => $langDetails): ?> <option value="<?=$langId?>" <?=($langId == zbanRegistry::getInstance()->lang) ? 'selected="selected"' : NULL;?>><?=$langDetails['LABEL']?></option> <? endforeach; ?> and the result is: <select name="lang" id="lang" > <option value="1" selected="selected">Language 1</option> <option value="2" >Language 2</option> <option value="3" >Language 3</option> <option value="4" >Language 4</option> <option value="5" >Language 5</option> <option value="6" >Language 6</option> <option value="7" >Language 7</option> <option value="8" >Language 8</option> <option value="9" >Language 9</option> <option value="10" >Language 10</option> </select> Any help is appreciated :-) A: You can check the value of langId. Maybe something like that? <? foreach ($languages as $langId => $langDetails): ?> <? if ($langId != 10): ?> <option value="<?=$langId?>" <?=($langId == zbanRegistry::getInstance()->lang) ? 'selected="selected"' : NULL;?>><?=$langDetails['LABEL']?></option> <? endif; ?> <? endforeach; ?>
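Another equivalent pattern is to skip the iteration with `continue`, or to drop the key from the array once, before the loop. A minimal sketch (the `$languages` array below is made-up sample data standing in for the question's real array; the id `10` comes from the question):

```php
<?php
// Sample data standing in for the question's $languages array (assumption).
$languages = [
    1  => ['LABEL' => 'Language 1'],
    2  => ['LABEL' => 'Language 2'],
    10 => ['LABEL' => 'Language 10'],
];

// Variant 1: skip the unwanted id inside the loop.
foreach ($languages as $langId => $langDetails) {
    if ($langId == 10) {
        continue; // language id 10 is never rendered
    }
    echo '<option value="' . $langId . '">' . $langDetails['LABEL'] . "</option>\n";
}

// Variant 2: remove the entry once, before looping.
unset($languages[10]);
?>
```

Variant 2 is handy when the same array is rendered in several places and id 10 should be hidden everywhere.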
Objectives Disruption in the stability of the respiratory microbiota is known to be associated with many chronic respiratory diseases. However, only a few studies have examined microbiomes in lung cancer. Therefore, we characterized and compared the microbiomes of patients with lung cancer and those with benign mass-like lesions. Materials and methods Bronchoalveolar fluid was collected prospectively to evaluate lung masses in patients who had undergone bronchoscopies from May to September 2015. Twenty-eight patients (20 male, 8 female) were enrolled: 20 diagnosed with lung cancer and 8 diagnosed with benign diseases. Samples were analysed by 16S rRNA-based next-generation sequencing. Results The participants’ mean age was 64 ± 11 years. Bacterial operational taxonomic units were classified into 26 phyla, 44 classes, 81 orders, 153 families, 288 genera, and 797 species. The relative abundance of two phyla (Firmicutes and TM7) was significantly increased in patients with lung cancer (p = 0.037 and 0.035, respectively). Furthermore, two genera (Veillonella and Megasphaera) were relatively more abundant in lung cancer patients (p = 0.003 and 0.022, respectively). The area under the curve for a combination of these two genera used to predict lung cancer was 0.888 (sensitivity = 95.0% with specificity = 75.0%, and sensitivity = 70.0% with specificity = 100.0%; p = 0.002). Conclusion The results indicate that differences exist in the bacterial communities of patients with lung cancer and those with benign mass-like lesions. The genera Veillonella and Megasphaera showed the potential to serve as biomarkers to predict lung cancer. Thus, the lung microbiota may change the environment in patients with lung cancer.
10 Amp AGC Glass Fuses Automotive Glass Cartridge Fuses - 10 amp - These fuses measure in at 1/4" x 1-1/4" - They are used in many automotive (imported and domestic vehicles) and marine applications. Meets or exceeds OEM or S.A.E. standards.
The expressed localization of rat putative pheromone receptors. The localization of pheromone receptors in the rat vomeronasal epithelium was examined by light- and electron-microscopic immunocytochemical analysis, using affinity-purified polyclonal antibodies. The antibodies were raised against a synthetic oligopeptide corresponding to a partial sequence of the rat putative pheromone receptor (VN6). Positive immunoreactivity was observed on the luminal surface of the sensory epithelium, and was abolished when an excess of the antigen peptide was added to the primary reaction solution. On electron microscopy, the immunoreactivity for the VN6 peptide was localized at the dendritic knobs and microvilli of receptor cells, but not in those of the supporting cells. These results show the first evidence of cellular localization of putative pheromone receptors in rat vomeronasal receptor cells.
Improper payments issued by federal government top $100B Tax credits for families that don’t qualify. Medicare payments for treatments that might not be necessary. Unemployment benefits for people who secretly are working. Federal agencies reported making $100 billion in payments last year to people who may not have been entitled to receive them. Congressional investigators say the figure could be even higher. “The amounts here are absolutely staggering,” said Rep. John Mica, R-Fla. “It’s over $100 billion each of the last five years. That’s a staggering half a trillion dollars in improper payments.” Mica chairs the House Oversight government operations subcommittee, which had a hearing on improper payments Wednesday. Each year, federal agencies are required to estimate the amount of improper payments they issue. They include overpayments, underpayments, payments to the wrong recipient and payments that were made without proper documentation. Some improper payments are the result of fraud, while others are unintentional, caused by clerical errors or mistakes in awarding benefits without proper verification. In 2013, federal agencies made $97 billion in overpayments, according to agency estimates. Underpayments totaled $9 billion. That adds up to $106 billion in improper payments, or 3.5 percent of all the payments made by the federal government. The Obama administration has reduced the amount of improper payments since they peaked at $121 billion in 2010. The administration has stepped up efforts to measure improper payments, identify the cause and develop plans to reduce them, said Beth Cobert, deputy director of the White House budget office. Federal agencies recovered more than $22 billion in overpayments last year, she said. “We have taken an aggressive approach to attacking waste, fraud and abuse within federal agencies, and we will continue to seek out new and innovative tools to help us in this fight,” Cobert told the subcommittee. 
However, a new report by the Government Accountability Office questions the accuracy of agency estimates, suggesting that the real tally could be higher. The GAO is the investigative arm of Congress. “The federal government is unable to determine the full extent to which improper payments occur and reasonably assure that appropriate actions are taken to reduce them,” Beryl H. Davis, director of financial management at the GAO, told the subcommittee. Davis said some agencies don’t develop estimates for programs that could be susceptible to improper payments. She also said estimates by the Defense Department “may not be reliable.” The Pentagon estimates that less than 1 percent of its payments are improper. However, the GAO found last year that the Pentagon’s estimates for 2011 “were neither reliable nor statistically valid because of long-standing and pervasive financial management weaknesses.” “We have reason to believe that the numbers are sound but we certainly understand why the skepticism exists,” Mark E. Easton, the Defense Department’s deputy chief financial officer, told the subcommittee. The largest sources of improper payments are government health care programs, according to agency estimates. Medicare’s various health insurance programs for older Americans accounted for $50 billion in improper payments in the 2013 budget year, far exceeding any other program. Most of the payments were deemed improper because they were issued without proper documentation, said Shantanu Agrawal, a deputy administrator for the Centers for Medicare & Medicaid Services. In some cases, the paperwork didn’t verify that services were medically necessary. “Payments deemed ‘improper’ under these circumstances tend to be the result of documentation and coding errors made by the provider as opposed to payments made for inappropriate claims,” Agrawal told the subcommittee.
Market for paid inclusion search to reach $6 billion SEO PowerSuite SEO category is sponsored by SEO PowerSuite. Power-charge your SEO with the industry's finest SEO tools. Rankings, backlinks, competitors, reports, analytics - you name it - all in one place. Try it for free now! Innovation among companies hots up as ad pie swells. With experts projecting that the global market for paid inclusion Internet searches will reach more than US$6 billion (S$10 billion) by 2006, up from about US$2 billion last year, search engine companies are gearing up to duke it out for a chunk of the advertising pie. Mr Sterling, program director at strategic research firm The Kelsey Group, senses a war brewing on the Internet. Recent news reports have described the efforts of IT giants Yahoo! and Microsoft to gain a foothold in the market. Both are trying to come up with next-generation search technologies to elbow past the current leader, Google, which now processes 80 per cent of all Internet searches. Yahoo!, which now uses Google’s search technology, announced plans this month to ditch its partner in favour of technologies developed by Inktomi and Overture, companies recently bought by Yahoo!
Get the maximum number of hairs | Visit our website and learn about hair transplant scars. If you would like to know more about our hair transplant surgery and how you can achieve results like these, start by calling one of our hair transplant specialists on 844-327-4247. Learn more about our licensed medical doctors and our exclusive HUE Method (High-Yield Unit Extraction®), which can yield twice as many transplanted hairs in a single procedure. Hair Transplants Can Dramatically Change How You Look and Feel. Are you looking for a fuller, natural-looking head of hair that will make you look and feel better? Do you want to increase your confidence and self-esteem? Natural Transplants, a premier hair restoration clinic, takes a comprehensive and consultative approach to evaluating and treating patients with hair loss. Dr. Matt Huebner is a surgeon with many years of experience in treating medical hair loss. Unlike other clinics that use 'technicians' to perform hair restoration, our clinic offers a number of hair transplant options, including strip-donor procedures and follicular unit transplantation. We do not provide, nor do we recommend, robotic follicular unit extraction, as you will see in a very informative video entitled "FUE Hair Transplant and FUT Strip Scar Truth". Click HERE to watch now. The technique Dr. Huebner offers patients is vastly superior to clinics offering FUE (Follicular Unit Extraction), also known as NeoGraft or ARTAS (robotic), and laser hair therapy. Simply put, our process allows the transplantation of more hair follicles in one procedure. FUE punches the hair follicles out of the scalp, limiting the success of follicle removal and making inefficient use of the precious donor area. FUE limits the number of hairs moved in one procedure to a maximum of 4,000. In contrast, Dr. Huebner has yielded 12,000 hair implants in one procedure in less than 6 hours.
Natural Transplants provides hair restoration to clients from around the world, and we are happy to be able to offer Travel Incentives to Fort Lauderdale, FL for consultation and hair restoration procedures. Conveniently located minutes from the beaches and resorts, our hair transplant clinic is just a short drive from the Hollywood/Fort Lauderdale Airport and minutes from Las Olas and downtown. Complete Hair Clinic prides itself on keeping prices competitive. Our goal has always been to provide the highest-quality hair transplant method at the absolute best cost. We utilize the no-touch approach to implant our grafts, bringing about considerably faster growth and durable results. Robotic hair restoration devices make use of cameras and robotic arms to assist the surgeon with the FUE method. In 2009, NeoGraft became the first robotic surgical device FDA-approved for hair restoration.[6] The ARTAS System was FDA-approved in 2011 for use in harvesting follicular units from brown-haired and black-haired men. On the other hand, the harvesting strategy does have important implications for the hair restoration procedure, as it will influence the total number of high-quality grafts that can be harvested from the donor area and, ultimately, the fullness achieved from your hair transplant. In general, the harvesting method of FUT via strip is superior to that of FUE for two main reasons. The first reason is that the FUT procedure allows the surgeon to produce the highest-quality grafts by isolating the follicular units with minimal trauma (this drawback is minimized with robotic FUE). We are now performing all of our FUE transplant procedures using this technology. You can read more about this advanced procedure by visiting the Robotic Hair Transplant section or reading answers to frequently asked questions about robotic FUE.
Our hair transplant procedure is a low-risk, minimally invasive, highly predictable surgical process that involves moving healthy hair follicles from one site on your scalp to another. Basically, Dr. Ma moves hair from an area where hair is more plentiful and less likely to be shed (the safe zone) to areas where hair is thinning or missing. A few days before the procedure it is recommended to begin avoiding alcoholic beverages. Alcohol thins the blood and therefore may cause abnormal bleeding. Surgical options, such as follicle transplants, scalp flaps, and hair-loss reduction, are available. These techniques are frequently chosen by those who are self-conscious about their hair loss, but they are expensive and painful, with a risk of infection and scarring. After surgery has taken place, six to eight months are needed before the quality of new hair can be assessed. Hair transplantation can also be used to restore eyelashes, eyebrows, beard hair, chest hair, and pubic hair, and to fill in scars caused by accidents or surgical procedures such as face-lifts and previous hair transplants.
Hair transplantation differs from skin grafting in that grafts contain almost all of the epidermis and dermis surrounding the hair follicle, and many small grafts are transplanted rather than a single strip of skin. It is not a "plug-and-play" product that is the same anywhere you go as long as the "graft number" remains constant. That makes as much sense as saying every car on earth is the same provided they all have "four tires." In hair restoration, there are hundreds of subtleties and nuances that vary from person to person. This is a process that brings my patients a great deal of happiness; it restores their self-esteem, sense of youth, and well-being. It is a process that simply cannot be boiled down to "per-graft" pricing.
In the Supreme Court of Georgia Decided: October 6, 2014 S14A0880. FREEMAN V. THE STATE. HINES, Presiding Justice. Eddie Lee Freeman appeals from his convictions and sentences for malice murder and possession of a firearm during the commission of a crime in connection with the death of Terrance Devaris Moore. For the reasons that follow, we reverse.1 Construed to support the verdicts, the evidence showed that Freeman and 1 The crimes were committed on September 12, 2006. On January 23, 2007, a Richmond County grand jury indicted Freeman, together with Byron Lorenza Elliard and Tordell Lafranze Stokes, for malice murder and possession of a firearm during the commission of a crime; Elliard was also charged with possession of a firearm by a convicted felon. Freeman was tried alone before a jury June 9-12, 2008 and found guilty on both counts. On July 10, 2008, he was sentenced to life in prison for the malice murder, and a consecutive term of five years in prison for possession of a firearm during the commission of a crime. Freeman filed a notice of appeal on August 5, 2008; he also filed, pro se, an “Omnibus Motion,” which included motions for in forma pauperis status, the appointment of appellate counsel, and “to amend pending motion for new trial.” The appeal was docketed in this Court pursuant to the notice of appeal on May 24, 2010, and on June 22, 2010, this Court remanded the case to the trial court for consideration of Freeman’s requests for in forma pauperis status and the appointment of appellate counsel. An amended motion for new trial was filed on August 27, 2013, and a second amended motion for new trial was filed on August 29, 2013. The motion for new trial, as amended, was denied on September 27, 2013. Freeman filed a notice of appeal on October 2, 2013, and the appeal was docketed in this Court for the April 2014 term and submitted for decision on the briefs. 
two other men went to a motel room to buy illegal drugs; Freeman was in possession of a .38 caliber revolver. Moore was in the motel room with three other men. There was a disagreement over the price of the drugs, and an argument ensued; Moore locked the door to the motel room and placed his hand in his pocket and appeared to begin to remove a handgun from it. A gunshot was then fired, followed by a number of other gunshots, and the lights of the room went out; the door to the room became inoperative and those inside the room began to leave through a broken window. Freeman fired his .38 revolver several times, and was himself twice struck by bullets. He was subsequently taken to a hospital. Moore was also struck twice by bullets, and died en route to the hospital. The autopsy produced two .38 bullets recovered from his body, at least one having been fired from close range; the bullets proved to have been fired from either a .38 special or .357 magnum revolver, and to have been fired from the same weapon as another bullet found at the crime scene. Freeman’s .38 revolver was not found, and it was established that bullets of other calibers were fired at the crime scene. 1. The evidence was sufficient to prove beyond a reasonable doubt that Freeman was guilty of the crimes of which he was convicted. See Jackson v. 2 Virginia, 443 U.S. 307 (99 SCt 2781, 61 LE2d 560) (1979). 2. Freeman gave three oral statements to investigating law enforcement officers; one statement was given in the hospital emergency room shortly after the shooting; one was made at the sheriff’s office several hours later; and the third occurred two days later. Only the third statement was made after the giving of Miranda2 warnings and Freeman argued to the trial court that evidence contained within the first two statements should be excluded as he was in custody at the time they were made and thus Miranda warnings were required to be given. See Durden v. State, 293 Ga. 89, 95 (3) (744 SE2d 9) (2013). 
Prior to trial, and after a Jackson v. Denno3 hearing, the trial court ruled the two statements admissible. At trial, when the State sought to introduce the recording of the first interview, Freeman objected, and the State responded that the trial court had “already found at the previous hearing that the statement was freely and voluntarily given and as well that no Miranda warnings were necessary as the defendant was not a suspect at that time.” The court simply overruled the 2 Miranda v. Arizona, 384 U.S. 436 (86 SCt 1602, 16 LE2d 694) (1966). 3 Jackson v. Denno, 378 U.S. 368 (84 SCt 1774, 12 LE2d 908) (1964). 3 objection and admitted the recorded statement. When the State sought to introduce a recording of the second interview, Freeman again objected and the State responded that “the issue of voluntariness has already been addressed and [the State] would request the court allow this into evidence.” The court responded: “All right. I find that the statement was freely and voluntarily given as previously ruled. I’ll admit it over the objection of the defense.” Freeman contends that this constituted an improper comment on the evidence by the court, violating OCGA § 17-8-57,4 and necessitating a new trial. This is correct. Determining the voluntariness and, consequently, the admissibility of a defendant's statement in a criminal case is a two-step process. Initially, the trial court addresses the issue outside the presence of the jury and, if the statement is determined to be voluntary, it is admitted for the jury to make the ultimate determination as to its voluntariness and, thus, its probity as inculpatory evidence. Having made the determination that a statement is voluntary, the trial court should simply admit it into evidence and not inform the jury of its ruling. 
A trial court's ruling before the jury on the voluntariness of a defendant's statement, even when coupled with an explanation as to the roles played by the trial court and the jury when the voluntariness of a defendant's statement is questioned, amounts to a violation of OCGA § 17–8–57. 4 OCGA § 17-8-57 reads: It is error for any judge in any criminal case, during its progress or in his charge to the jury, to express or intimate his opinion as to what has or has not been proved or as to the guilt of the accused. Should any judge violate this Code section, the violation shall be held by the Supreme Court or Court of Appeals to be error and the decision in the case reversed, and a new trial granted in the court below with such directions as the Supreme Court or Court of Appeals may lawfully give. 4 Chumley v. State, 282 Ga. 855, 857 (2) (655 SE2d 813) (2008) (Citations and punctuation omitted.) The court’s response: “I find that the statement was freely and voluntarily given,” clearly violated OCGA § 17-8-57, and would have even if the trial court had explained the court’s and the jury’s separate roles. Id. Although the State contends that the trial court’s articulation was made during a mere colloquy with counsel regarding an evidentiary ruling, see Bryant v. State, 268 Ga. 664, 667(8) (492 SE2d 868) (1997), the transcript reveals nothing other than that the remark was made in the jury’s presence. And, it is of no moment that Freeman did not raise a contemporaneous objection to the trial court’s articulation; as this Court has explained, [a]lleged violations of OCGA § 17-8-57 are subject to a sort of “super-plain error” review; not only may they be raised on appeal without any objection at trial, but, if sustained, they automatically result in reversal without consideration of whether the error caused any actual prejudice. [Cits.] Wells v. State, 295 Ga. 161, 167 (3) (758 SE2d 598) (2014). Accordingly, a new trial is necessary. 3.
Freeman contends that the trial court also erred in making the initial determination that his first two statements were freely and voluntarily made because he was in custody at the time each was made, but he was not given the 5 benefit of Miranda warnings.5 A person is considered to be in custody and “Miranda warnings are required when a person ‘is (1) formally arrested or (2) restrained to the degree associated with a formal arrest.’ [Cit.] Unless a reasonable person in the suspect's situation would perceive that he was in custody, Miranda warnings are not necessary. [Cit.]” Robinson v. State, 278 Ga. 299, 301 (2) (602 SE2d 574) (2004). “On appeal, the issue is whether the trial court was clearly erroneous in its factual findings regarding the admissibility of the statements. [Cit.]” Jackson v. State, 272 Ga. 191, 193 (3) (528 SE2d 232) (2000). According to the evidence presented at the Jackson v. Denno hearing, at the time of the first statement, Freeman was in the hospital being treated for his gunshot wounds; he was not under arrest; he was not restrained in any way; and if he had wished, he would have been allowed to leave if his medical situation so permitted. The second interview took place in an interview room at the sheriff’s office; the testimony of the interviewing officers was that Freeman 5 Freeman’s enumeration of error also encompasses his third statement, given after he received Miranda warnings, but his arguments in this Court detail only the first two statements. At trial, Freeman asserted that the third statement was tainted by the alleged illegality of the first two. See generally, Rashid v. State, 292 Ga. 414, 419-420 (4) (737 SE2d 692) (2013). 6 voluntarily came there at their request, although they were not able to state whether he arranged his own transportation or was given a ride in an official vehicle; in any event, that is a circumstance which would not necessarily indicate that he was in custody. See Scott v. State, 281 Ga. 
373, 375-376 (2) (637 SE2d 652) (2006). Again, the evidence was that Freeman was not restrained and was free to leave, and one officer testified that Freeman did, in fact, leave after the interview. Based on the evidence presented at the Jackson v. Denno hearing, the trial court did not err in determining that Freeman was not in custody at the time he made his first two statements, and that these statements were voluntary and properly admissible. Durden, supra at 95-96. 4. At the Jackson v. Denno hearing, the investigating officers who conducted the hospital interview both testified that Freeman was not at that time a suspect, but rather was interviewed as a potential victim or witness. In his motion for new trial, Freeman asserted that his trial counsel failed to provide effective representation in that, at the Jackson v. Denno hearing, counsel did not introduce the interview sheet filled out by the investigating officers at the time of the hospital interview, which showed a checked box indicating that Freeman was a “suspect” rather than a “subject,” “victim,” or “witness.” In order to 7 prevail on a claim of ineffective assistance of counsel, Freeman must show both that counsel’s performance was deficient, and that the deficient performance was prejudicial to his defense. Smith v. Francis, 253 Ga. 782, 783 (1) (325 SE2d 362) (1985), citing Strickland v. Washington, 466 U.S. 668 (104 SCt 2052, 80 LE2d 674) (1984). To meet the first prong of the required test, he must overcome the “strong presumption” that counsel’s performance fell within a “wide range of reasonable professional conduct,” and that counsel’s decisions were “made in the exercise of reasonable professional judgment.” Id. The reasonableness of counsel’s conduct is examined from counsel’s perspective at the time of trial and under the particular circumstances of the case. Id. at 784. 
To meet the second prong of the test, he must show that there is a reasonable probability that, absent any unprofessional errors on counsel’s part, the result of his trial would have been different. Id. at 783. “‘We accept the trial court’s factual findings and credibility determinations unless clearly erroneous, but we independently apply the legal principles to the facts.’ [Cit.]” Robinson v. State, 277 Ga. 75, 76 (586 SE2d 313) (2003). At the hearing on the motion for new trial, trial counsel testified that he was aware of the interview form indicating that Freeman was denominated a 8 suspect at the time of the hospital interview, but chose not to introduce it at the Jackson v. Denno hearing because he did not believe that the trial court would suppress the statement, and thought that it would be more valuable to use the form to attack the credibility of the testifying officers at trial, which counsel did. Counsel also testified that he was aware that case law had established that merely being considered a suspect did not mandate that Miranda warnings were necessary. And in this regard, counsel’s understanding of the law is correct. Whether a police officer focused his unarticulated suspicions upon the individual being questioned is of no consequence for Miranda purposes. [Cit.] This is so because Miranda was fashioned to redress “‘the compulsive aspect of custodial interrogation, and not the strength or content of the government’s suspicions’” when the questioning commenced. [Cit.] “Even a clear statement from an officer that the person under interrogation is a prime suspect is not, in itself, dispositive of the custody issue, for some suspects are free to come and go until the police decide to make an arrest.” [Cit.] Thus, the proper inquiry is whether the individual was formally arrested or restrained to the degree associated with a formal arrest, not whether the police had probable cause to arrest. [Cits.] McAllister v. State, 270 Ga. 
224, 227-228 (1) (507 SE2d 448) (1998). Freeman fails to show that counsel’s performance was deficient. “When, as here, a strategic choice is made after thoughtful consideration, a claim of ineffective assistance of counsel is not supported. Decisions relating to strategy 9 and tactics must not be judged by hindsight or the ultimate result of the trial.” Browder v. State, 294 Ga. 188, 194 (4) (751 SE2d 354) (2013) (Citations and punctuation omitted.) 5. The remainder of Freeman’s enumerations of error are unlikely to recur on retrial and we thus decline to address them. See Boring v. State, 289 Ga. 429, 435 (3) (711 SE2d 634) (2011). Judgments reversed. All the Justices concur. 10
Q: listing all processes in iOS 5.0.1 How could one go about viewing all the processes in an ssh session to my (jailbroken) iPhone? I'm currently able to ssh in, I have bash installed, core utilities installed, the shell-cmds package installed, and the system-cmds package installed. I would have expected the "ps" command to be available from the core-utilities package, but this does not seem to be the case. What am I missing? A: I believe the Cydia package adv-cmds contains a copy of ps.
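For completeness, a sketch of what the session looks like once that package is in place (assumes adv-cmds has been installed from the Cydia UI, or via APT if you have it on the device with `apt-get install adv-cmds`):

```shell
# Over ssh on the jailbroken device, after installing adv-cmds:
ps aux | head -n 5   # column header plus the first few processes
```

`ps aux` lists every running process with its owner, PID, resource usage, and command line, same as on any other BSD-flavored system.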
Effects of pneumadin (PNM) on the adrenal glands. 5. Potent stimulating action of PNM on adrenocortical growth of dexamethasone-administered rats. Pneumadin (PNM) is a biologically active decapeptide, originally isolated from mammalian lungs, that has been previously found to acutely stimulate pituitary-adrenocortical axis in rats. The effects of 2-day PNM administration on the atrophic adrenal cortices of rats treated for 8 days with dexamethasone (DX) were investigated. PNM significantly raised adrenal weight and the average volume of adrenocortical cells. The decapeptide strikingly increased ACTH plasma concentration; however, the blood levels of aldosterone and corticosterone, as well as steroid output by adrenal quarters were not apparently affected. In light of these findings the following conclusions can be drawn: (i) PNM enhances the growth of adrenal cortex in DX-administered rats by a mechanism involving the stimulation of ACTH release; and (ii) PNM treatment is probably too short to allow DX-atrophied adrenocortical cells to re-acquire all their differentiated secretory capacities.
Introduction ============ In the past decade there has been an outpouring of interest in accelerating statistical mechanics simulations. This started with the work of Swendsen and collaborators: Swendsen and Wang introduced a cluster–flip method for accelerating non-disordered spin systems [@sw86; @sw87], and Widom, Strandburg, and Swendsen introduced a cluster–flip for finding quasicrystalline ground–states in a two–dimensional atomic simulation [@wss87]. These methods all gain a major speedup by introducing mostly non-local update rules, and often prove capable of bypassing critical slowing down problems [@swf92]. During the same period, different accelerating approaches were introduced, which are based on efficient schemes to analyze data from traditional Monte Carlo simulations [@swf92; @fs88; @lk90] and are frequently called “histogram methods”. These methods have enlarged the applicability of various kinds of critical phenomena simulations, although they are not necessarily designed to bypass critical slowing down problems as efficiently as [*e.g.*]{} cluster algorithms. Nevertheless, substantial progress can be achieved combining histogram and cluster–flip algorithms (see [*e.g.*]{} reference [@nb96]). More recently so–called “reweighting techniques” have been introduced, which are based on an early approach by G. M. Torrie and J. P. Valleau [@tv77]. They proposed a method to enlarge the sampling range of a Monte Carlo algorithm by using nonphysical weighting functions. The general idea in the newer approaches is to change the relative weights of different configurations to sample equally in all ranges of energy rather than focusing on a narrow temperature range. The most frequently used reweighting method is multicanonical sampling [@bn91; @bn92; @b97; @j94; @sb95], which represents the most general method as other reweighting methods, [*e.g.*]{} entropic sampling [@l93], can be directly mapped onto this approach[@bho95]. 
In systems with a strongly double peaked probability distribution of magnetization or energy states (a situation often found in systems exhibiting a first–order phase transition), the multicanonical approach has been proven to be a powerful tool. Simple reweighting schemes allow one to overcome the “supercritical slowing down”[@j94] known from canonical Monte Carlo simulations at a fixed temperature. [*E.g.*]{} in non-disordered spin systems with a field-driven first-order phase transition ([*e.g.*]{} the Ising model) or a temperature-driven first-order phase transition ([*e.g.*]{} the q-state Potts model) the supercritical slowing down of canonical Monte Carlo is due to the low Boltzmann weight of the domain-wall states. Multicanonical sampling approaches the problem by introducing a weight function that weights all magnetization states (Ising model) or energy states (Potts model) equally, and therefore ensures that domain-wall states are sampled with the same likelihood as all other accessible states. The canonical distribution function at a fixed temperature, which contains all the thermodynamic information, can be reconstructed. Usually multicanonical sampling uses local update schemes along the lines of the Metropolis algorithm[@mrrtt53]; variations using cluster–flip or other methods are feasible and have been proven useful (for a review consult [@b97; @j94] and references therein). Of course, acceleration methods are most crucial for glassy systems, which otherwise can be inaccessible to numerical simulations[^1]. Whether one believes that glasses are sluggish because of large energy barriers to relaxation (rates $\sim e^{B/T}$), or believes that the free energy barriers are due to tortuous entropically difficult routes between the metastable configurations, a clever algorithm could in principle jump directly between the glassy states. Instead of relative rates which grow as a power of $T-T_c$, acceleration could gain us exponential speedups.
Acceleration methods have been extensively applied to disordered spin models. These studies have been less focused on understanding the performance of the algorithms, because the physics of the systems is less thoroughly understood (there has been more to mine from the results, and a less firm foundation on which to do algorithmic analysis). The multicanonical methods have been applied to spin glasses in two and three dimensions to calculate the zero–temperature entropy, ground state energies, distribution of overlaps etc. (see refs. [@spin1; @spin2; @spin3; @spin4; @spin5; @spin6; @spin7; @spin8; @spin9]). The authors succeed in evaluating these properties with remarkable accuracy; nevertheless, whether the replica theory [@mpv87] or the droplet scaling ansatz [@fh88] is the more appropriate picture in describing the ground state properties of glasses could not be resolved. The performance of multicanonical sampling for glassy systems is clearly worse than for systems with a less rugged landscape. We believe this failure is systematic and cannot be avoided within the framework of multicanonical sampling. The authors of reference [@spin1] argued that multicanonical methods should be superior to simulated annealing (gradual cooling)[@kgv83], although a direct comparison was made only to canonical sampling (quenches to a fixed low temperature). The authors of ref. [@lc94] applied multicanonical sampling to the Traveling Salesman problem and claim to achieve a dramatic improvement over the traditional Monte Carlo simulated annealing approach. Newman[@n97] has used both cluster methods[@nb96] and entropic sampling [@l93] to study the random–field Ising model. He finds dramatic speedups from both methods, often reaching equilibrium in a few passes through the lattice.
Newman has focused on small systems (mostly $24^3$, with a few runs for systems up to $64^3$), and simultaneously used histogram methods to measure critical exponents and phase boundaries for a range of disorders and temperatures. He confirms results of the related “simulated tempering” approach, invented by E. Marinari and G. Parisi [@mp92; @m96]. Simulated tempering proved very useful in spin glass simulations [@kr94] and is similar in spirit to the multicanonical approach [@ho96]. Acceleration methods have been little used in continuum atomic simulations, perhaps because of the widespread reliance on molecular dynamics methods. Straightforward, direct molecular dynamics simulations of the equations of motion do not converge to an equilibrium state much faster than the Monte Carlo simulated annealing methods, but they are also not noticeably worse[@c97], and they have a direct physical interpretation. Shumway[@ss91] studied a one-dimensional atomic system in an incommensurate sinusoidal potential, and developed an evolutionary algorithm which generated optimal cluster moves as the system was quenched to lower temperatures; later attempts to generalize these ideas to higher dimensions have so far not been successful[@s97]. The authors of reference [@ho94] used multicanonical sampling and Monte Carlo simulated annealing to study the folding of the peptide Met-enkephalin; the multicanonical method found the ground state more consistently using the same amount of computer time. This result underlines the general belief [@spin1; @b96] that simulations in the multicanonical ensemble are in many ways superior to traditional simulated annealing. In this paper we apply multicanonical sampling in the particular form of entropic sampling to two-component Lennard–Jones systems, and compare the performance with traditional simulated annealing and straightforward molecular dynamics in finding low energy configurations.
We search for low energy states of a three dimensional Lennard–Jones glass, one of the prototype glassy systems[@sw84; @sw89; @sw85; @ws85], and use the set of parameters recently introduced by W. Kob and H. C. Andersen [@ka94; @ka951; @ka952]. In addition, we apply entropic sampling and simulated annealing to a two-dimensional Lennard–Jones system with a quasicrystalline ground state, using the parameters of reference [@wss87]. We find that entropic sampling brings little benefit for the study of either. We argue that this is likely a general effect, applicable to all simulation methods applied to glassy systems in the thermodynamic limit.

Introduction to the Methods: Multicanonical Sampling and Entropic Sampling
==========================================================================

The standard way of implementing a Monte Carlo algorithm is to use importance sampling. The idea behind this approach is simple. Rather than weighting each point in phase space equally, one weights each state with a sampling probability distribution $\Gamma(x)$, where $x$ denotes the sampled configuration of the system. To estimate the thermal average of an observable $A$, one calculates: $$<A> = \frac{\sum_x A(x) \exp[-\beta H(x)] \Gamma^{-1}(x)}{\sum_x \exp[-\beta H(x)] \Gamma^{-1}(x)} \; ,$$ where $H$ is the Hamiltonian of the system (so $H(x)$ is the energy $E$ for the state $x$) and $\beta = 1/k_B T$. Choosing $\Gamma(x)$ non-uniformly ensures that states with important contributions to the partition sum are preferentially sampled, and therefore the number of states that need to be sampled to provide a reasonable estimate of $A$ is significantly reduced. In standard Monte Carlo methods, [*i.e.*]{} canonical Monte Carlo or simulated annealing, the weighting distribution is the Boltzmann distribution $\Gamma =\exp[-\beta H(x)]$.
This has the advantage of a direct physical interpretation: the computer is doing the same thermal average as an equilibrium system at temperature $1/(k_B \beta)$. It has the important disadvantage that configurations and events which are rare in the physical system are also rare in the simulation. In particular, if the system has a “rugged energy landscape”, with large free energy barriers $B$ separating physically important metastable states, the system will cross between these states with the same slow Arrhenius rate $\nu \exp{(-B/T)}$ that is found experimentally. The idea of multicanonical sampling is to circumvent this problem by choosing $\Gamma(x)$ so that the distribution of states $P(H(x))~\sim~\Omega(H(x))~\times~\Gamma(x)$ is approximately flat in energy (or some other variable, like magnetization[@j94]). In principle, we want to choose $\Gamma(E) = 1/\Omega(E) = \exp[-S(E)]$, where $\Omega(E)$ is the density of states at energy $E$ and $S(E)$ is the entropy. Of course, we don’t begin the simulation knowing the entropy as a function of energy! In our work we use the entropic sampling algorithm [@l93], which is a numerically and mathematically equivalent variant of the multicanonical approach [@bho95]. The only difference between entropic and multicanonical sampling is the way by which one generates estimates $J(E)$ of $S(E)$. The entropic sampling algorithm uses a quite straightforward recursive updating method:

- Initialize to zero an array $H(E)$, which will accumulate a histogram of the energies of the visited states.

- Sample states according to the current $J_i(E)$, and add the energy of each sampled state to the histogram $H$, for a reasonably long time.
- Set the new $J_{i+1}$ according to the following rule: $$J_{i+1}(E) = \left\{\begin{array}{ll} J_i(E) + \log(H(E)), & {\rm if\ } H(E) \neq 0 \\ J_i(E), & {\rm if\ } H(E) = 0 \end{array}\right.$$

The multicanonical sampling update scheme differs from entropic sampling in the treatment of the histogram bins with few entries (for an analysis of various schemes see references [@sb95] and [@b96]). The original approach introduces a constant slope for $J(E)$ below a cutoff energy, corresponding to a small constant temperature. These extra parameters are annoying [@b96] in the implementation; however, they do tend to keep the system from being trapped in energy regions which have not hitherto been sampled frequently. As we will argue, in glassy systems both algorithms will tend to get trapped in low energy metastable states even when the statistics are fine[^2]. In this paper we use the simpler entropic sampling method of equation (2).

Theoretical Expectations for Relative Performance
=================================================

What makes people think that multicanonical sampling should be an improvement over simulated annealing or molecular dynamics? We consider three possible reasons. (1) Perhaps the multicanonical method is better because it allows the system to cross energy barriers (as is mentioned frequently [@bn91]–[@l93])? This is indeed an improvement over canonical sampling at a fixed temperature; however, a simulated annealing method also runs at a variety of temperatures. Indeed, the two methods are [*identical*]{} in the thermodynamic limit. The acceptance ratio for a given single–atom Monte Carlo move for entropic sampling is $P(E) = \exp[S(E) - S(E')]$. In a large system with $N$ atoms, the entropy density $S(E)/N$ is a smooth function of the energy density $E/N$; since the energy density change for a single-atom move $(E'-E)/N$ is small, we may expand $S(E)$ to first order in $E'-E$.
Using the relation $\partial S(E)/\partial E = 1/T$, the acceptance ratio becomes $P(E) = \exp[-(E'-E)/T]$. Thus entropic sampling at the energy $E$ has exactly the same acceptance ratio as simulated annealing at a temperature $T(E) = (\partial S(E)/\partial E)^{-1}$. Thus the local behavior — the acceptance ratio for Monte Carlo moves from a given state — is virtually the same for multicanonical and canonical sampling[^3]. The differences between the two methods near a given state should be similar in magnitude and type to the differences between the microcanonical (fixed-energy) simulations and the canonical (fixed-temperature) simulations: differences can be seen for small systems, but disappear as the system gets larger. To be explicit, for a large system the final state of an entropic sampling run for which the time-dependent energy is $E(t)$ should be statistically equivalent to a simulated annealing run with randomly fluctuating temperature $T(E(t))$. One notes also that the quench rate is not tunable for the multicanonical method: the “diffusion constant” in energy space depends on the atomic step–size and on the number of particles. This is potentially a serious handicap, as changing the quench rate is the primary tool used in glassy systems to find lower energy states. Thus the power of multicanonical methods to vary the energy to facilitate barrier crossing is — for large systems at least — no different from repeatedly heating and cooling the entire system. (2) Perhaps the multicanonical method might be picking the heating and cooling schedule intelligently, in order to escape from local minima? Indeed, since the effective temperature becomes lower as the energy decreases, an entropic sampling system stuck in a high–energy metastable state will have larger thermal excitations (bigger acceptance of upward moves in energy) than one in a low energy state, and will depart faster. 
This is the explanation, we believe, for the substantial success of the entropic (multicanonical) sampling method seen in the past. This preferential escape from high–energy metastable states will unfortunately also become unimportant for large systems. One can see this most easily by considering a local region trapped in a high-energy configuration with local energy $e'$, and with a lower energy configuration $e$ nearby, separated by a barrier $b$. For a small system, where the local energy difference $e'-e$ is important, the effective temperatures in states $e'$ and $e$ will differ, but for a large system of size $N$ this temperature difference (from the differing acceptance ratios from the two states) will vanish as $1/N$. There are glassy systems in which the energy barriers and energy differences are not all local: mean-field spin glasses, for example, have energy barriers which grow as powers of the number of spins $N$ [@mpv87]. However, the maximum energy barrier (and presumably the maximum energy asymmetry $e'-e$) scales with a power $N^\alpha$ with $\alpha$ strictly less than one (at least in finite dimensions), so the change in effective temperature $N^{\alpha}/N$ still vanishes as $N\to\infty$[@s81]. (3) Perhaps the multicanonical method is exploring different energies more effectively than an externally chosen cooling schedule for simulated annealing? For example, the multicanonical method is guaranteed to converge to an equilibrium density of states at each energy. The same is true for a simulated annealing run at infinitely slow cooling, but is not true for repeated coolings at a fixed rate, which would be expected to generate metastable states repeatedly. On the one hand, the theorems suggest that the ground state should be occupied as often as any other energy; on the other hand it is hard to see how a multicanonical quench to low energies can bypass the metastable states that trap simulated annealing runs of comparable computer time. 
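The flat-histogram mechanics discussed above can be made concrete on a toy model. The sketch below is our illustration, not the implementation used in the paper: it runs the entropic sampling recursion of equation (2) on $N$ independent two-state spins, where the energy is the number of up spins and the exact entropy is $S(E) = \log\binom{N}{E}$, so the converged estimate $J(E)$ can be checked against the exact answer (all function and variable names are ours):

```python
import math
import random

# Entropic sampling on a toy model: N independent spins, E = number of up
# spins, exact density of states Omega(E) = C(N, E).  J(E) is the running
# estimate of the entropy S(E) = log Omega(E).
def entropic_sampling(N=10, iterations=12, sweeps=20000, seed=1):
    rng = random.Random(seed)
    J = [0.0] * (N + 1)                     # entropy estimate per energy bin
    spins = [rng.randint(0, 1) for _ in range(N)]
    E = sum(spins)
    for _ in range(iterations):
        H = [0] * (N + 1)                   # histogram of visited energies
        for _ in range(sweeps):
            i = rng.randrange(N)
            E_new = E + (1 - 2 * spins[i])  # flipping spin i changes E by +-1
            # accept with probability min(1, exp[J(E) - J(E')]),
            # the flat-histogram acceptance rule
            if rng.random() < math.exp(min(0.0, J[E] - J[E_new])):
                spins[i] = 1 - spins[i]
                E = E_new
            H[E] += 1
        # entropic sampling update, equation (2): J <- J + log H where visited
        for e in range(N + 1):
            if H[e] > 0:
                J[e] += math.log(H[e])
    return J

J = entropic_sampling()
# Only differences of J are meaningful; compare to the exact entropy difference.
print(J[5] - J[0], math.log(math.comb(10, 5)))
```

After a dozen refinement iterations the estimated entropy difference $J(5)-J(0)$ lands near the exact value $\log\binom{10}{5} \approx 5.53$. For this tiny, barrier-free model the histogram flattens quickly; the trapping argument of the text is precisely the claim that this convergence fails for a large glass, where whole ergodic components are never entered and so never contribute to $H(E)$.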
To address this issue, let us consider the characteristics of the random walk $E(t)$ that the system performs in energy space as a function of time within the multicanonical approach. For some systems, such as the Ising model, multicanonical sampling does indeed produce a roughly unbiased random walk (if one starts from a good estimate of the density of states). As the system becomes larger, the energy range scales with $N$ and the step-size of the energy stays fixed, so the time–scale for diffusing from high energies to near the ground–state scales as $N^2$ (distance scales with the square root of time). This behavior is confirmed in simulations[@bn91; @b97] studying, [*e.g.*]{}, the first–order transition below $T_c$ that flips the Ising model magnetization from up to down, where traditional canonical methods would suffer from the surface tension barrier $\sigma L^{d-1} = \sigma N^{(d-1)/d}$ and so the time scales as $N^2 \exp[\sigma N^{(d-1)/d}]$. Bypassing this “supercritical slowing–down”[@j94] is an important application for multicanonical methods. It is known numerically that this simple argument breaks down in simulations of spin glasses[@spin1]. The typical time to cover the energy range (called the ergodicity or tunneling time in the literature) for spin glasses scales as $N^4$[@spin3] or perhaps $e^N$ [@b97] instead of $N^2$. Why should the random walk argument not work for glasses?

[Figure 1]

The answer to why the random walk argument breaks down, we assert, can be found in the trapping of entropic sampling by the same metastable states explored in thermal coolings. Indeed, it is the metastable states that the system is [*not able to explore*]{} that trap the entropic sampling algorithm. The states of glassy systems are often described in a caricature tree-like structure (sideways in figure 1). The horizontal axis of the tree can be thought of as either energy or temperature: the branches represent mutually inaccessible ergodic components.
For the Ising model, there are two major ergodic components (corresponding to the two directions for the magnetization) and a few domain-wall states. For glasses, the ergodic components are sometimes thought of as regions of configuration space separated by infinite free energy barriers (as in the mean-field spin glass models[@mpv87]), and sometimes as regions separated by energy barriers which are too large to cross in the time–scale of the experiment or simulation. The key point is that the accessible density of states for a glass can be very different from the total density of states. In figure 1, we note that the ergodic component containing the ground state has a density of states which differs from the density of states for the system as a whole, starting at the energy of the first accessible metastable states. The number $\Sigma$ of these inaccessible metastable states is related to the density of tunneling states in configurational glasses[@zp71; @ahv72; @p81], and is thought to increase exponentially with the size of the system ($M$ independent two–state systems with uncrossable barriers generate $\Sigma = 2^M$ states). In spin glasses, the number of components separated by infinite free energy barriers (ones which diverge as $N\to\infty$) diverges with a power of $N$[@s81].

[Figure 2]

The multicanonical sampling method is guaranteed to sample the ground-state energy just as much as any other energy. It is easy to see, however, that once the system is in the ground state, it will stay there for a long time! If the density of states within the ground–state ergodic component is $\tilde\Omega_g(E)$ and the density of states in the entire system is $\Omega(E)$, then the acceptance ratio for a multicanonical sampling move from $E$ to $E'$ is $\Omega(E)/\Omega(E')$, while the probability of a random move raising the energy is $\tilde\Omega_g(E')/\tilde\Omega_g(E)$.
Hence the likelihood of sampling high energy states $E$ within the ground-state component will fall as $\tilde\Omega_g(E)/\Omega(E)$. Consider the very crude model where all ergodic components are similar and stay completely separate until the glass transition energy $E_g$ (the energy at the glass transition temperature), at which point they merge (figure 2). For this model, escaping from the ground state component will take a time which scales as the total number of ergodic components, and hence diverges as $N\to\infty$. Since multicanonical sampling spends the same amount of time in each energy range, the time between independent visits to the true ground state will scale as the time needed to escape from the ground state ergodic component. So, we begin our exploration with the expectation that multicanonical sampling should be useful for small systems, but will not provide significant advantages for large system sizes.

Implementation
==============

We argued in the previous section that entropic sampling will not be a fundamental improvement over repeated coolings using simulated annealing, at least in large systems. On the other hand, entropic sampling and other multicanonical methods have been reported to lead to substantial gains in equilibration times for small glassy spin systems[@spin1; @spin2; @spin3; @spin4; @spin5; @spin6; @spin7; @spin8; @spin9]. We are interested in simulations of structural glasses: collections of atoms which typically form metastable, glassy configurations when slowly cooled. In this section, we give a detailed description of our implementation of entropic sampling, simulated annealing, and molecular dynamics. In order to ensure a fair comparison, we have as far as possible taken cooling schedules and time and spatial step sizes from standard references in the literature.
In our three dimensional simulations, we applied the three algorithms to a binary mixture of large (L) and small (S) particles with the same mass, interacting via the Lennard–Jones potential of the form $V_{\alpha\beta}(r)=4\, \epsilon_{\alpha\beta}[(\sigma_{\alpha\beta}/r)^{12} - (\sigma_{\alpha\beta}/r)^{6}]$. The values of $\epsilon$ and $\sigma$ were chosen as follows: $\epsilon_{LL} = 1.0, \sigma_{LL} = 1.0, \epsilon_{LS} = 1.5,\sigma_{LS} = 0.8, \epsilon_{SS} = 0.5, \sigma_{SS} = 0.88$. All results are given in reduced units, where $\sigma_{LL}$ was used as the length unit and $\epsilon_{LL}$ as the energy unit. The systems were kept at a fixed density ($\rho\approx 1.2$), periodic boundary conditions were applied, and the potential was truncated according to the minimum image rule[@ta86] and shifted to zero at the respective cutoff. The minimum image rule prevents a particle from using the periodic boundary conditions to see more than one copy of its neighbors: to use the conventional cutoff at $r=2.5\sigma$ would demand at least 160 particles. The choice of parameters follows recent simulations of Lennard–Jones glasses [@ka94; @ka951; @ka952; @vkb96]; this choice suppresses recrystallization of the system on molecular dynamics time scales. This potential together with this set of parameters mimics the potential for ${\rm Ni_{80}P_{20}}$. We looked at 5 different system sizes ($N$=20, 40, 60, 80, 100). For each $N$ we generated 30 low energy configurations. The initial configurations were random in the case of simulated annealing and entropic sampling, and high temperature equilibrium configurations in the molecular dynamics case. To compare the three methods, we defined one run length to be $10^6$ sweeps through the system for the two Monte Carlo methods. The molecular dynamics runs were quenched at a rate which consumes the same CPU time as used by the Monte Carlo sampling.
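For reference, the truncated-and-shifted pair potential with the quoted parameters can be sketched as follows. This is our illustration only: the cutoff in the actual simulations is set by the minimum image rule and the box size, so we use the conventional $2.5\sigma$ cutoff mentioned in the text, and the function names are ours.

```python
# Binary Lennard-Jones pair potential with the Kob-Andersen parameters
# quoted in the text (L = large, S = small), truncated and shifted to
# zero at the cutoff so the potential is continuous there.
EPS = {('L', 'L'): 1.0, ('L', 'S'): 1.5, ('S', 'S'): 0.5}
SIG = {('L', 'L'): 1.0, ('L', 'S'): 0.8, ('S', 'S'): 0.88}

def pair_potential(r, a, b, rcut_factor=2.5):
    """V_ab(r) = 4 eps [(sig/r)^12 - (sig/r)^6], shifted to zero at the cutoff."""
    key = (a, b) if (a, b) in EPS else (b, a)   # species pairs are symmetric
    eps, sig = EPS[key], SIG[key]
    rcut = rcut_factor * sig
    if r >= rcut:
        return 0.0

    def lj(x):
        s6 = (sig / x) ** 6
        return 4.0 * eps * (s6 * s6 - s6)

    return lj(r) - lj(rcut)                     # shift removes the jump at rcut
```

With this shift the LL well depth is slightly shallower than $-\epsilon_{LL}$: at the minimum $r = 2^{1/6}\sigma_{LL}$ the potential is about $-0.9837$ rather than $-1$, which is the usual price of making the truncated potential continuous.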
In our two dimensional simulations, we did not apply molecular dynamics (our hard–wall boundary conditions made it inconvenient). We again used a binary Lennard–Jones system, introduced by Widom, Strandburg, and Swendsen [@wss87], with a slightly unconventional form for the potential: $V_{\alpha\beta}(r)=\epsilon_{\alpha\beta}[(\sigma_{\alpha\beta}/r)^{12} - 2 (\sigma_{\alpha\beta}/r)^{6}]$. The Lennard–Jones parameters are chosen to favor configurations of decagonal order ($\epsilon_{LL} = 0.5, \sigma_{LL} = 1.176, \epsilon_{LS} = 1.0,\sigma_{LS} = 1.0, \epsilon_{SS} = 0.5, \sigma_{SS} = 0.618$), and the system is known to have a quasicrystalline ground state. The particles are initially randomly distributed in a large cylindrical box with infinitely high walls. The potential was truncated at $r_{cutoff} = 2.5\sigma_{\alpha\beta}$ and shifted to zero at this point. All results are in reduced units with $\epsilon_{LS}$ and $\sigma_{LS}$ as fundamental units. Here 4 different system sizes (N=31, 66, 101, 160) were used, where the numbers of the two particle types were chosen to keep the ratio fixed close to 1.06 large atoms per small atom. The authors of reference [@wss87] found that this ratio led to defect–free ground states. This system provides an excellent testing ground for entropic sampling for various reasons. The ground state is known to be quasicrystalline, a state with strong bond-orientational order but no long-range periodicity. Defective configurations are easily recognized, as the typical defects consist of triangles of like particles. There are plenty of metastable states with high energy barriers, as it takes rearrangement of a large number of particles to disentangle the triangular defects. The authors of reference [@wss87] have shown that simulated annealing fails to locate ground states, and always gets trapped in long-lived metastable states, a problem they circumvented using three–particle cluster–flips.
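The “slightly unconventional” potential form is just a reparametrization of the standard one. The short check below (our illustration; function names and the choice of test radii are ours) verifies that $\epsilon[(\sigma/r)^{12} - 2(\sigma/r)^{6}]$ equals the usual $4\epsilon[(\sigma'/r)^{12} - (\sigma'/r)^{6}]$ with $\sigma' = 2^{-1/6}\sigma$, so the minimum sits exactly at $r=\sigma$ with depth $-\epsilon$:

```python
# Widom-Strandburg-Swendsen form: V(r) = eps [(sig/r)^12 - 2 (sig/r)^6].
# Substituting sig' = 2^(-1/6) sig into the standard 4-eps form gives
# (sig'/r)^6 = (sig/r)^6 / 2, and the two expressions become identical;
# the minimum of the WSS form is at r = sig with depth -eps.

def v_wss(r, eps, sig):
    s6 = (sig / r) ** 6
    return eps * (s6 * s6 - 2.0 * s6)

def v_standard(r, eps, sig):
    s6 = (sig / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

# Example with the SS parameters from the text:
eps_ss, sig_ss = 0.5, 0.618
```

Writing the potential this way makes the decagonal geometry transparent: each $\sigma_{\alpha\beta}$ is directly the preferred bond length of the corresponding pair, with $\sigma_{SS}/\sigma_{LS} = 0.618$ close to the inverse golden ratio.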
The final minimum energy configurations for runs of all three methods were optimized by starting from the lowest energy configurations found, and quenching down to $T=0$ using a conjugate gradient method. The resulting $T=0$ energies are compared in the following section. The entropic sampling method was implemented generally as described in section 2. The Metropolis algorithm [@mrrtt53] was used for local updates. We developed an initial estimate $J(E)$ for the entropy $S(E)$ with a long run (approximately $10^7$ sweeps) starting from a flat distribution: this initial estimate was used as the starting distribution for the subsequent runs. In three dimensions we redid this initialization for each system size; in two dimensions we initialized in this way for the 101 particle system and used finite-size scaling[@sb95] from this distribution for initialization at other system sizes. Finite-size scaling does not appear to work for glassy systems with quenched disorder[@spin1; @spin3]. We did not include this initial computer time in the comparisons: thus we err on the side of entropic sampling. Notice that in continuous systems it is not obvious how to set the optimal bin size for the histogram (unlike in spin systems, where the smallest energy step determines the bin size). We therefore tested several bin sizes. For the two–dimensional systems a fixed bin size of 0.001 has been used, and in the three–dimensional systems, we found it useful to set the bin size to $0.01/N$ energy units. Our investigations suggest that larger bin sizes can introduce artificial barriers in the low energy range, and smaller bins lead to more noise. To set a context, the typical successful energy step in a 20 particle simulation in three dimensions varied from around one at high temperatures to around 0.01 near the ground state. We explored energy-dependent step sizes for the single-atom moves, but they did not improve performance. 
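The $T=0$ quench that produces the final comparison energies uses a conjugate-gradient method in the text; as a minimal stand-in (ours, for illustration only), the sketch below does plain gradient descent on the separation of a single Lennard–Jones pair with $\epsilon = \sigma = 1$, which relaxes to the well minimum at $r = 2^{1/6}$:

```python
# T=0 quench sketch: gradient descent on one Lennard-Jones pair.
# (The paper uses conjugate gradients; for a real many-particle quench
# something like scipy.optimize.minimize(..., method='CG') would be the
# practical choice.  This stand-in only illustrates the idea.)

def lj_force(r):
    # f = -dV/dr for V(r) = 4 (r^-12 - r^-6)
    return 4.0 * (12.0 * r ** -13 - 6.0 * r ** -7)

def quench(r0, step=0.005, n_steps=5000):
    r = r0
    for _ in range(n_steps):
        r += step * lj_force(r)   # move downhill in energy
    return r

print(quench(1.3))   # close to 2**(1/6) ~ 1.1225
```

Conjugate gradients converges in far fewer force evaluations than this fixed-step descent, which matters for the many-particle configurations actually being quenched, but either way the quench only removes the residual thermal displacements and does not change which metastable basin the run ended in.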
Entropic sampling demands an upper cutoff for the energy: we use zero for the upper limit in both two and three dimensions. Simulated annealing uses the Metropolis update scheme locally, with the Boltzmann factor as the sampling probability distribution. The cooling schedule implemented here is similar to the one used in references [@wss87; @ho94], where the temperature is repeatedly lowered by a small factor, with an annealing period at each temperature. In our runs, we chose fifty annealing steps of 20,000 sweeps each, with an initial temperature of one and a final temperature of 0.05; the temperature is thus reduced by a factor of $0.942$ at each step. The initial configurations were set at random. The molecular dynamics routine used the velocity form of the Verlet algorithm[@ta86]. The unit of time is given by $(m\sigma^2_{LL}/48\epsilon_{LL})^{1/2}$, where $m$ is the mass of the particle: the Verlet time step in these units is $\delta t=0.01$. The system was coupled to a heat bath and the temperature was reduced linearly in time according to $T_{\rm bath}~=~T_{\rm start}~-~\gamma_{\rm MD} \times t$. Note that the cooling here is linear in time, as is traditional in molecular dynamics of Lennard–Jones glasses[@v96]. The cooling rate $\gamma_{\rm MD} = 1.0\times10^{-4}$ was chosen so that the MD runs consume an amount of computer time similar to that of the Monte Carlo algorithms. This cooling rate is in the middle of the range explored in recent simulations, although our system sizes are much smaller[@vkb96]. The initial configurations were equilibrated at a temperature $T_{\rm start} = 1.0$ (at a small cost of computer time which we did not factor into the comparisons), and cooled to the final temperature $T_{\rm final}=0.05$, yielding approximately $2.0 \times 10^6$ molecular dynamics steps.

Results
=======

In this section we will first compare the performance of the three methods in locating low energy states of the two and three–dimensional Lennard–Jones systems.
The performance of the three methods is remarkably similar. Second, we will compare low energy configurations of the two–dimensional system to show that the algorithms get trapped in similar metastable states. Third, we will quantitatively analyze the trapping of the entropic sampling algorithm in a metastable state. We present the $T=0$ energies of the lowest energy configurations for the three–dimensional Lennard–Jones systems in Table 1. For $N=20$ particles each algorithm is able to locate the same lowest energy state, presumably the ground state. For $N=40$ and $N=60$ particles the lowest energy state is found by entropic sampling. The gain in energy $\Delta E_{40}$ over simulated annealing is around 0.03, and the gain is 0.02 over molecular dynamics. For $N=80$ and $N=100$ particles the lowest energies are found by molecular dynamics, and the gain over entropic sampling is $\Delta E_{80} \sim 0.05$ and $\Delta E_{100}\sim 0.02$.

[TABLE 1: $T=0$ energy per particle for the lowest energy configuration found with entropic sampling, simulated annealing and molecular dynamics.]{}

--------- ---------------- ----------------- -----------------
[*N*]{}   [*Entropic*]{}   [*Simulated*]{}   [*Molecular*]{}
          [*Sampling*]{}   [*Annealing*]{}   [*Dynamics*]{}
20        -0.89            -0.89             -0.89
40        -4.18            -4.15             -4.16
60        -5.38            -5.36             -5.37
80        -6.52            -6.56             -6.57
100       -6.85            -6.86             -6.87
--------- ---------------- ----------------- -----------------

There are three things to notice about this table. First, the dramatic energy difference with increasing system size is due to the change in the cutoff in the potential given by the minimum image rule. Using the twenty particle cutoff in the larger system sizes, we found energies which hardly varied with system size. Second, the fact that these energies differ in the third decimal place does not mean that the differences are negligible. In ref.
[@vkb96; @v96] the dependence of the final energy on the cooling rate for exactly this system was studied using molecular dynamics: to gain an energy of $0.03$ starting from the cooling rate we are using, they had to decrease the cooling rate by a factor of ten. Third, as we argued in section three, any gains given by entropic sampling disappear as the system size grows. In Table 2 we list the mean and the standard deviation of the energies from the thirty runs at each system size with each algorithm. It has been found in the literature that the fluctuations for simulated annealing are much larger than for entropic sampling [@ho94]. We find that the fluctuations of both simulated annealing and molecular dynamics are larger than those of entropic sampling. Indeed, the average performance of entropic sampling remains comparable to that of the other two methods, even at the larger system sizes (where the extremal performance was worse).

[TABLE 2: Mean energy per particle and the standard deviation evaluated using all low energy configurations found by entropic sampling, simulated annealing and molecular dynamics.]{}

  --------- ------------------ ------------------ ------------------
  [*N*]{}   [*Entropic*]{}     [*Simulated*]{}    [*Molecular*]{}
            [*Sampling*]{}     [*Annealing*]{}    [*Dynamics*]{}
  20        -0.84 $\pm$ 0.03   -0.79 $\pm$ 0.06   -0.81 $\pm$ 0.04
  40        -4.10 $\pm$ 0.03   -4.07 $\pm$ 0.05   -4.08 $\pm$ 0.04
  60        -5.34 $\pm$ 0.01   -5.33 $\pm$ 0.03   -5.33 $\pm$ 0.03
  80        -6.50 $\pm$ 0.01   -6.51 $\pm$ 0.02   -6.51 $\pm$ 0.03
  100       -6.83 $\pm$ 0.01   -6.84 $\pm$ 0.02   -6.82 $\pm$ 0.02
  --------- ------------------ ------------------ ------------------

The Holy Grail of this field is to accelerate three–dimensional glass simulations, bypassing barriers to relaxation. Perhaps this is too high a standard: nobody has such an algorithm. We now apply entropic sampling to a two–dimensional Lennard–Jones system, where an effective cluster–flip acceleration method has been developed [@wss87].
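The minimum-image effect noted in the discussion of Table 1 is easy to make concrete: under periodic boundaries each pair interacts only through its nearest periodic image, so the effective interaction cutoff is half the box length and grows with the system size. A minimal sketch (the box length and positions below are illustrative values, not those of our runs):

```python
def lj_energy_pbc(pos, box, eps=1.0, sigma=1.0):
    """Lennard-Jones energy with periodic boundaries and the minimum-image
    rule: each pair interacts only through its nearest periodic image, so
    the effective interaction range is box / 2 and grows with system size."""
    e = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r2 = 0.0
            for a, b in zip(pos[i], pos[j]):
                d = a - b
                d -= box * round(d / box)   # fold into the nearest image
                r2 += d * d
            sr6 = (sigma * sigma / r2) ** 3
            e += 4.0 * eps * (sr6 * sr6 - sr6)
    return e
```

For example, two particles at separation 9 in a box of length 10 actually interact at the image distance 1, where the Lennard–Jones energy vanishes; this image-folding is why the per-particle energies in Table 1 cannot be compared directly across system sizes.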
In Tables 3 and 4 we show the extremal and the mean $T=0$ energies for a variety of system sizes. Again, entropic sampling is slightly better for the smaller systems, but the advantage disappears for the largest system.

[TABLE 3: Energy per particle for the lowest energy configuration found with entropic sampling and simulated annealing.]{}

  [*N*]{}   [*Entropic Sampling*]{}   [*Simulated Annealing*]{}
  --------- ------------------------- ---------------------------
  31        -2.1                      -2.1
  66        -2.27                     -2.22
  101       -2.34                     -2.33
  160       -2.37                     -2.38

[TABLE 4: Mean energy per particle and the standard deviation evaluated using all low energy configurations found by entropic sampling and simulated annealing.]{}

  [*N*]{}   [*Entropic Sampling*]{}   [*Simulated Annealing*]{}
  --------- ------------------------- ---------------------------
  31        -1.97 $\pm$ 0.1           -1.87 $\pm$ 0.17
  66        -2.24 $\pm$ 0.04          -2.12 $\pm$ 0.07
  101       -2.29 $\pm$ 0.07          -2.23 $\pm$ 0.06
  160       -2.34 $\pm$ 0.05          -2.34 $\pm$ 0.02

It is remarkable how similarly the three different methods perform. Although it seems to be known that molecular dynamics and simulated annealing are comparable [@c97], we are not aware of any reference providing a direct comparison. Of course, comparisons of efficiency are highly implementation dependent. The two Monte Carlo methods could benefit from a temperature dependent step size (although we did experiment with this without finding any substantial improvement). One could refine the cooling schedule for the two traditional methods. One could introduce a temperature cutoff (as in the original multicanonical approach) or use variable bin sizes to improve the entropic sampling method. Again, our experiments with bin size and cutoff were not encouraging. Our main conclusion is that the choice of methods is a matter of taste. In particular we are encouraged by the fact that Monte Carlo methods are competitive, especially as they adapt easily to cluster acceleration methods.
All three methods suffer from the large number of metastable states prevalent in the configuration space of the two– and three–dimensional systems, and thus are not capable of locating ground states. To show that they find similar metastable states, we plot in figures 3 and 4 the lowest energy configurations found by entropic sampling and by simulated annealing.

\[fig3\]

\[fig4\]

These configurations are typical representatives of metastable states for the two–dimensional system. The defects are clusters of three large particles, which are shown in grey in figures 3 and 4. Why is entropic sampling not bypassing the free–energy barriers to relaxation? We finish this section with a vivid illustration of how the entropic sampling algorithm gets trapped in a metastable state. In the bottom half of figure 5 we plot the energy as a function of time: at very short times the system performs a random walk in energy space as advertised, but it rapidly gets trapped in a low energy metastable state. The simulation shown in figure 5 is the same as the runs for $N=100$ particles tabulated in Tables 1 and 2, except for two important differences. (1) The runs of Tables 1 and 2 ran for $10^6$ sweeps; here we ran for $10^7$ sweeps. (2) The entropy estimate $J(E)$ for the runs in Tables 1 and 2 was dynamically updated every $10^5$ sweeps using the recursive updating scheme of equation (2). Here we calculated a best estimate $\overline{J(E)}$ from the thirty runs in the Tables and used this function as a fixed entropy estimate. The best estimate $\overline{J(E)}$ (comprising information from $40 \times 10^6$ sweeps) is a sufficiently smooth function that we do not expect (or observe) the system to be trapped in some artificial well resulting from statistical fluctuations in $J(E)$.

\[fig5\]

The top half of figure 5 shows the logarithm of the histogram $H(E)$ tabulating the visited states as a function of energy.
This function is important as it is used in the recursive updating scheme $\log(H(E)) = \Delta J(E) = J(E)_{\rm update} - J(E)_{\rm estimate}$ (see equation (2)). Figure 6 shows that the system is trapped in a single harmonic metastable state. The upper panel shows an expanded view of the first peak in $\Delta J(E)$. For times after $1.3 \times 10^6$ sweeps the system exclusively samples in a single well: repeated quenches yield the same minimum energy $E_1 = -6.8241$. This is a metastable state: as seen in Table 1 the true ground state has an energy $\leq -6.87$.

\[fig6\]

In the harmonic approximation we can analytically calculate the density of states and compare the contribution from the metastable state directly to the measured data. The harmonic density of states has the form $$\Omega_{\rm harmonic}(E) \propto \left({2(E-E_1) \over K}\right)^{{3N \over 2} - 1} \; ,$$ where $K$ involves the geometric mean of the phonon frequencies and can be thought of as a typical spring constant. In the entropic sampling algorithm the probability of sampling a state in this harmonic well is given by the density of states of the single well divided by the estimated density of states $\exp(\overline{J(E)})$. This probability is compared directly to the histogram of sampled states in the upper half of figure 6. The system is trapped in a single harmonic well. By running for shorter times and by dynamically updating the entropy estimate during each run, we substantially mitigated the trapping problem of entropic sampling shown in figures 5 and 6. Consider the entropy estimate $J(E)$ after it has been updated by adding $\log(H(E))$: the acceptance ratio for leaving the region will dramatically increase and the trapping will thus be bypassed, as indeed we observed in practice. Lingering near a state increases the estimated entropy in that region and eventually pushes the system out.
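The recursive updating scheme can be sketched compactly. The toy below runs entropic sampling on a one-dimensional Ising ring rather than a Lennard–Jones glass (a deliberate substitution: the discrete energy levels keep the histogram bookkeeping trivial). Moves are accepted with probability $\min(1,\exp(J(E)-J(E')))$, and $J(E)$ is periodically incremented by $\log H(E)$ in the spirit of equation (2):

```python
import math
import random
from collections import defaultdict

def entropic_sampling(n=10, sweeps=20000, update_every=2000, seed=2):
    """Entropic-sampling sketch on a 1D Ising ring (a stand-in toy model, not
    the Lennard-Jones glass of the text).  Single spin flips are accepted with
    probability min(1, exp(J(E) - J(E'))), which targets a flat random walk in
    energy when J equals the true entropy; J is refreshed recursively with
    J(E) <- J(E) + log H(E), cf. equation (2)."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    energy = -sum(spins[i] * spins[(i + 1) % n] for i in range(n))
    J = defaultdict(float)   # running entropy estimate J(E)
    H = defaultdict(int)     # histogram of visited energies since last update
    visited = set()
    for sweep in range(1, sweeps + 1):
        for _ in range(n):
            i = rng.randrange(n)
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
            if rng.random() < math.exp(min(0.0, J[energy] - J[energy + dE])):
                spins[i] *= -1
                energy += dE
            H[energy] += 1
            visited.add(energy)
        if sweep % update_every == 0:        # recursive update of equation (2)
            for e, h in H.items():
                J[e] += math.log(h)
            H.clear()
    return visited

levels = entropic_sampling()
```

On this tiny ring the flat-energy walk reaches both the ferromagnetic ground state and the fully frustrated top of the spectrum; the point of figures 5 and 6 is that the same machinery fails to do so in the glassy Lennard–Jones landscape.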
But dynamical updating should not be an essential ingredient of the algorithm, which is formulated presuming an [*a priori*]{} knowledge of the entropy as a function of energy. Formally, dynamical updating violates the Markovian character of the algorithm, and convergence to the equilibrium state is no longer guaranteed. In practical terms it is very distressing that the algorithm needs to produce a noticeable bump in the density of states to escape from a metastable state. The comparison against molecular dynamics and simulated annealing would be substantially more unfavorable for long runs without dynamical updating. Figures 5 and 6 are a tangible illustration of the trapping mechanism depicted in figures 1 and 2. The inaccessible metastable states contributing to $\overline{J(E)}$ form a strange type of entropic barrier around the metastable state $E_1$. Leaving $E_1$ via a saddlepoint at $E_2 > E_{\rm peak} \sim -6.8$ is suppressed by roughly the exponential of $\Delta J(E_2) - \Delta J(E_{\rm peak})$ shown in figure 6. Slow cooling in molecular dynamics or simulated annealing can lead to trapping in metastable states due to large energy barriers. Entropic sampling and the other multicanonical methods get trapped in metastable states because of large entropic barriers imposed by the algorithm. In both cases the algorithms are sabotaged by the large number of low lying metastable states. Entropic sampling provides a new insight into this problem but does not provide a solution.

Conclusions
===========

In this study we applied the multicanonical method of entropic sampling to Lennard–Jones systems. We focused on the ability of the algorithm to find ground states of these glassy systems and compared its performance to the two traditional glassy simulation methods, simulated annealing and molecular dynamics. The use of entropic sampling did not reveal any new insights into the ground state properties of Lennard–Jones glasses.
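The harmonic-well comparison behind figure 6 can be reproduced numerically: inside the well the sampling probability is the harmonic density of states divided by $e^{\overline{J(E)}}$. In the sketch below the spring constant $K$ and the entropy estimate $\overline{J(E)}$ are placeholder assumptions; only the quoted per-particle well-bottom energy $E_1=-6.8241$ and $N=100$ come from the text (we take $N E_1$ as the total well-bottom energy).

```python
import math

def log_omega_harmonic(E, E1, K, N):
    """log Omega_harmonic(E) ~ (3N/2 - 1) * log(2 (E - E1) / K), valid for
    E > E1; K stands in for the geometric mean of the phonon frequencies."""
    return (1.5 * N - 1.0) * math.log(2.0 * (E - E1) / K)

def well_sampling_profile(E_grid, E1, N, K=1.0, J_bar=lambda E: 0.0):
    """Relative probability of sampling energy E inside the single well:
    Omega_harmonic(E) / exp(J_bar(E)), normalized over the grid.  K and
    J_bar are illustrative placeholders for the measured quantities."""
    logs = [log_omega_harmonic(e, E1, K, N) - J_bar(e) for e in E_grid]
    m = max(logs)                       # subtract the max to avoid overflow
    w = [math.exp(x - m) for x in logs]
    z = sum(w)
    return [x / z for x in w]

# Toy profile above the quoted well bottom.
E1_total = -6.8241 * 100
grid = [E1_total + 0.5 * k for k in range(1, 41)]
p = well_sampling_profile(grid, E1_total, N=100)
```

With a flat entropy estimate the profile rises steeply away from $E_1$ because of the large power $3N/2-1$; the measured $\overline{J(E)}$ reshapes it into the peaked histogram of figure 6.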
We explain these results on the basis of the following observations. First, in the thermodynamic limit multicanonical methods are locally equivalent to simulated annealing. Furthermore, the global dynamics of multicanonical sampling resembles a random heating and cooling of the sample. Thus for large systems simulated annealing and multicanonical sampling must have the same properties. In principle multicanonical sampling has the advantage of providing the density of states, which allows one to evaluate the canonical distribution function. In glasses this feature is not necessarily helpful: the multicanonical methods sample phase space as slowly as the annealing methods, so in practice multicanonical sampling will not be able to extract equilibrium expectation values any better than simulated annealing. Second, the large number of inaccessible metastable states imposes a bizarre entropy barrier on the multicanonical method. The algorithm simply gets stuck in a metastable state, as it might when using molecular dynamics or simulated annealing. We underlined this point by comparing the probability distribution estimated by the algorithm inside the metastable state with a theoretical expression derived in the harmonic approximation. Furthermore, our results emphasize the known fact that simulated annealing and molecular dynamics have similar performance in glassy systems. As a consequence, one should acknowledge the importance of averaging over many molecular dynamics trajectories, especially for glassy systems. Averages over an ensemble of trajectories are a basic concept in Monte Carlo simulations; the striking similarity in performance to molecular dynamics simulations hints at the importance of similar averages in glassy molecular dynamics simulations. Finally, the goal of finding a method which gains an exponential speed–up of glassy simulations still remains.
Our study clearly indicates that standard reweighting techniques will presumably be of no substantial help in tackling this problem. The complicated structure of the glassy configuration space needs more intelligent algorithms, which are not only able to bypass energy barriers but also to find an efficient path through the rugged energy landscape.

We would like to thank M. E. J. Newman, J. Jacobsen, K. W. Jacobsen, G. Chester and R. Kree for many useful and illuminating discussions, and B. Berg for very helpful and elucidating comments on various points. The work of JPS was supported by NSF Grant DMR-9419506. KKB is grateful for support by the German Academic Exchange Service (Doktorandenstipendium HSP II/AUFE). Computational support was provided by the Cornell Theory Center.

[99]{}

R. H. Swendsen and J. S. Wang, Phys. Rev. Lett. [**57**]{}, 2607 (1986).

R. H. Swendsen and J. S. Wang, Phys. Rev. Lett. [**58**]{}, 86 (1987).

M. Widom, K. J. Strandburg, and R. H. Swendsen, Phys. Rev. Lett. [**58**]{}, 706 (1987).

R. H. Swendsen, J. S. Wang, and A. M. Ferrenberg, in [*The Monte Carlo Method in Condensed Matter Physics*]{}, ed. K. Binder (Springer, Berlin, 1992), p. 75.

A. M. Ferrenberg and R. H. Swendsen, Phys. Rev. Lett. [**61**]{}, 2635 (1988); [*ibid.*]{} [**63**]{}, 1658 (1989).

J. Lee and J. M. Kosterlitz, Phys. Rev. Lett. [**65**]{}, 137 (1990).

M. E. J. Newman and G. T. Barkema, Phys. Rev. [**53**]{}, 393 (1996).

G. M. Torrie and J. P. Valleau, J. Comput. Phys. [**23**]{} (1977).

B. A. Berg and T. Neuhaus, Phys. Lett. B [**267**]{}, 249 (1991).

B. A. Berg and T. Neuhaus, Phys. Rev. Lett. [**68**]{}, 451 (1992).

B. A. Berg, preprint, cond-mat/9707011, and references therein; W. Janke, [*Recent Developments in Monte Carlo Simulation of First-Order Phase Transitions*]{}, in [*Computer Simulation Studies in Condensed Matter Physics VII*]{} (Proceedings in Physics 78), eds. D. P. Landau, K. K. Mon, and H. B.
Schüttler (Springer, Berlin, 1994), p. 29. (1994) 4940.

G. R. Smith and A. D. Bruce, J. Phys. A: Math. Gen. [**28**]{}, 6623 (1995), and references therein.

G. R. Smith and A. D. Bruce, Europhys. Lett. [**34**]{}, 91 (1996).

J. Lee, Phys. Rev. Lett. [**71**]{}, 211 (1993); [*ibid.*]{} [**71**]{}, 2353 (1993).

B. A. Berg, U. H. E. Hansmann, and Y. Okamoto, J. Phys. Chem. [**99**]{}, 2236 (1995).

N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, J. Chem. Phys. [**21**]{}, 1087 (1953).

B. A. Berg and T. Celik, Phys. Rev. Lett. [**69**]{}, 2292 (1992).

B. A. Berg and T. Celik, Int. J. Mod. Phys. C [**3**]{}, 1251 (1992).

B. A. Berg, T. Celik, and U. Hansmann, Europhys. Lett. [**22**]{}, 63 (1993).

T. Celik, Nucl. Phys. B [**30**]{}, 908 (1993).

T. Celik, U. H. E. Hansmann, and M. Katoot, J. Stat. Phys. [**73**]{}, 775 (1993).

B. A. Berg and U. E. Hansmann, Nucl. Phys. B [**34**]{}, 664 (1994).

U. E. Hansmann and B. A. Berg, Int. J. Mod. Phys. C [**5**]{}, 85 (1994).

B. A. Berg, U. E. Hansmann, and T. Celik, Phys. Rev. B [**50**]{}, 16444 (1994).

B. A. Berg, U. H. E. Hansmann, and T. Celik, Nucl. Phys. B [**42**]{}, 905 (1995).

M. Mezard, G. Parisi, and M. A. Virasoro, [*Spin Glass Theory and Beyond*]{} (World Scientific, Singapore, 1987).

D. S. Fisher and D. A. Huse, Phys. Rev. B [**38**]{}, 386 (1988).

S. Kirkpatrick, C. D. Gelatt, Jr., and M. P. Vecchi, Science [**220**]{}, 671 (1983).

J. Lee and M. Y. Choi, Phys. Rev. E [**50**]{}, R651 (1994).

M. E. J. Newman (private communication).

E. Marinari and G. Parisi, Europhys. Lett. [**19**]{}, 451 (1992).

E. Marinari, preprint, cond-mat/9612010.

W. Kerler and P. Rehberg, Phys. Rev. E [**50**]{}, 4220 (1994).

U. H. E. Hansmann and Y. Okamoto, Phys. Rev. E [**54**]{}, 5863 (1996).

G. V. Chester, private communication.

S. L. Shumway and J. P. Sethna, Phys. Rev. Lett. [**67**]{}, 995 (1991).

S. L. Shumway, private communication.

U. H. E. Hansmann and Y.
Okamoto, Physica A [**212**]{}, 415 (1994).

B. A. Berg, J. Stat. Phys. [**82**]{}, 323 (1996).

Ole H. Nielsen, James P. Sethna, Per Stoltze, Karsten W. Jacobsen, and Jens K. Nørskov, Europhys. Lett. [**26**]{}, 51 (1994).

F. H. Stillinger and T. A. Weber, J. Chem. Phys. [**80**]{}, 4434 (1984).

F. H. Stillinger and T. A. Weber, J. Chem. Phys. [**81**]{}, 5089 (1984).

F. H. Stillinger and T. A. Weber, Phys. Rev. B [**31**]{}, 5262 (1985).

T. A. Weber and F. H. Stillinger, Phys. Rev. B [**32**]{}, 5402 (1985).

W. Kob and H. C. Andersen, Phys. Rev. Lett. [**73**]{}, 1376 (1994).

W. Kob and H. C. Andersen, Phys. Rev. E [**51**]{}, 4626 (1995).

W. Kob and H. C. Andersen, Phys. Rev. E [**52**]{}, 4134 (1995).

K. Vollmayr, W. Kob, and K. Binder, J. Chem. Phys. [**105**]{}, 4714 (1996).

K. Vollmayr, PhD thesis (1996), Johannes–Gutenberg University, Mainz, Germany.

F. Ritort, Phys. Rev. Lett. [**75**]{}, 1190 (1995).

H. Sompolinsky, Phys. Rev. Lett. [**47**]{}, 935 (1981).

R. C. Zeller and R. O. Pohl, Phys. Rev. B [**4**]{}, 2029 (1971).

P. W. Anderson, B. I. Halperin, and C. M. Varma, Philos. Mag. [**25**]{}, 1 (1972); W. A. Phillips, J. Low Temp. Phys. [**7**]{}, 351 (1972).

W. A. Phillips, [*Amorphous Solids: Low Temperature Properties*]{} (Springer Verlag, Berlin, 1981).

M. P. Allen and D. J. Tildesley, [*Computer Simulation of Liquids*]{} (Oxford Science Publications, Oxford, 1986).

[^1]: Even the experiments fall out of equilibrium. Think of an experiment as $10^{23}$ parallel atomistic processors with picosecond clock times!

[^2]: In any case our simulations spend around half the time at high energies, so any algorithmic improvements can bring at best a factor of two in computer time.

[^3]: Near first-order transitions, canonical quenches produce large changes in the state for small changes in temperature, and thus behave quite differently from the multicanonical approaches (which by varying the energy explore the interface states directly).
This is one of the major applications of multicanonical sampling methods. We expect that the multicanonical methods will perform for these systems rather similarly to microcanonical quenches, which conserve the energy; see ref. [@nsskn94]. In our glassy simulations, this distinction is presumably not important.