In application server and other enterprise computing environments, a common administrative task is patching a series of application server installations supporting multiple domains. A patch may include a one-off fix for a specific problem, or a periodic version update. Regardless of why the patch needs to be installed, the administrator must generally perform a complex series of steps on each node of the domain to roll out the patch while minimizing application downtime: ensuring the patching environment is up to date on each host; shutting down the servers running on that host; and then patching, restarting, and verifying the application server instances. Since patching is a complex process that can take many minutes even for a single application server instance, and hours when a patch is applied to all nodes in a domain, it can create anxiety for users who risk the possibility of system downtime. |
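The rolling-patch loop described above can be sketched as a minimal Python simulation (the node records and the apply_patch/verify helpers are hypothetical stand-ins, not a real patching API):

```python
# Minimal simulation of the rolling-patch workflow described above.
# The node records and the apply_patch/verify helpers are hypothetical
# stand-ins for an application server's real patching and health-check tools.

def rollout_patch(nodes, apply_patch, verify):
    """Patch each node in turn so only one host is down at any moment."""
    for node in nodes:
        node["running"] = False              # shut down servers on this host
        apply_patch(node)                    # apply the patch to the installation
        node["running"] = True               # restart the server instances
        if not verify(node):                 # confirm the patch works correctly
            raise RuntimeError("patch verification failed on " + node["name"])

# Simulate patching a three-node domain from version 1 to version 2
nodes = [{"name": "node%d" % i, "version": 1, "running": True} for i in range(3)]

def apply_patch(node):
    node["version"] = 2

def verify(node):
    return node["version"] == 2 and node["running"]

rollout_patch(nodes, apply_patch, verify)
print(all(n["version"] == 2 and n["running"] for n in nodes))  # True
```

Patching one node at a time is what keeps the rest of the domain serving traffic; the verify step before moving on is what bounds the blast radius of a bad patch to a single host.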
Ladies, when you’re with a guy have you ever thought to yourself, “Let’s cut to the chase—how big are you?”
You’ve probably thought this at least once while talking to some dude after you’ve had one too many tequila shots at closing time. Sorry guys, but size matters and don’t ever let any woman tell you it doesn’t. Sure, she might still “love” you, but hardware might be necessary to sustain that love.
Wait, was that cold-hearted? I mean she loves you for “you,” but really size does matter. Okay, ladies, let’s get to it. Since every girl talks to her friends about size and we all have the same scientific conversations, here is what I’ve concluded on how you can somewhat scientifically determine if he’s big.
1. He’s quiet and humble.
Any man who doesn’t have to be loud and obnoxious knows he doesn’t need to overcompensate for anything since he clearly isn’t lacking.
2. You have to make the first move.
The shy ones are the ones you need to look out for.
3. He’s tall and skinny.
We all know what working out does to you. He doesn’t have to be a rail, but less muscle means more downtown.
4. He looks you directly in the eye.
He knows he can make you scream and wants to remember what your face looks like before that happens.
5. He doesn’t high-five his friends.
I know, it’s so scientific. But really, my observations of groups of men have taught me this.
6. He kisses you like a man.
Ever been kissed and thought, “Uhm…try again”? The ones that are about to rock your world kiss you like they mean it.
Now—go forth ladies and spot the bigs. Now we are all smarter. We grew together. |
<template>
<app-auth ref="bkAuth"></app-auth>
</template>
<script>
import { bus } from '@open/common/bus'
import { getProjectById, getProjectByCode } from '@open/common/util'
export default {
data () {
// When switching projects in the top navigation, if the current route is in one of the arrays below, redirect to a specific route.
// If the route should be preserved when switching projects, it does not need to be listed here.
return {
clusterRouters: [
'clusterMain',
'clusterCreate',
'clusterOverview',
'clusterInfo',
'clusterNode',
'clusterNodeOverview',
'containerDetailForNode',
'404'
],
configurationRouters: [
'mesosTemplatesetApplication',
'mesosTemplatesetDeployment',
'mesosTemplatesetService',
'mesosTemplatesetConfigmap',
'mesosTemplatesetSecret',
'mesosTemplatesetIngress',
'mesosTemplatesetHPA',
'instantiation',
'k8sTemplatesetDeployment',
'k8sTemplatesetService',
'k8sTemplatesetConfigmap',
'k8sTemplatesetSecret',
'k8sTemplatesetDaemonset',
'k8sTemplatesetJob',
'k8sTemplatesetStatefulset',
'k8sTemplatesetIngress',
'k8sTemplatesetHPA'
],
loadBalanceRouters: [
'loadBalance',
'loadBalanceDetail'
],
depotRouters: [
'imageDetail'
],
helmRouters: [
'helms',
'helmTplList',
'helmTplDetail',
'helmTplInstance',
'helmAppDetail'
],
metricRouters: [
'metricManage'
]
}
},
computed: {
onlineProjectList () {
return this.$store.state.sideMenu.onlineProjectList
},
projectCode () {
const route = this.$route
// Get projectCode from the route
if (route.params.projectCode) {
this.setLocalStorage(route.params.projectCode)
return route.params.projectCode
}
// Get projectCode from localStorage
if (localStorage.curProjectCode) {
const projectCode = localStorage.curProjectCode
for (const item of this.onlineProjectList) {
if (item.project_code === projectCode) {
return projectCode
}
}
}
// Otherwise fall back to the first project
if (this.onlineProjectList.length) {
return this.onlineProjectList[0].project_code
}
return ''
},
projectId () {
const route = this.$route
// Get projectId from the route
if (route.params.projectId) {
this.setLocalStorage(route.params.projectId)
return route.params.projectId
}
// Get projectId from localStorage
if (localStorage.curProjectId) {
const projectId = localStorage.curProjectId
for (const item of this.onlineProjectList) {
if (item.project_id === projectId) {
return projectId
}
}
}
// Otherwise fall back to the first project
if (this.onlineProjectList.length) {
const projectId = this.onlineProjectList[0].project_id
this.setLocalStorage(projectId)
return projectId
}
return ''
},
parentRouteName () {
const bcsRouteKeys = [
'containerServiceMain',
'clusterMain',
'clusterCreate',
'clusterOverview',
'clusterInfo',
'clusterNodeOverview',
'containerDetailForNode',
'mesos',
'instanceDetail',
'instanceDetail2',
'containerDetail',
'containerDetail2',
'mesosInstantiation',
'deployments',
'deploymentsInstanceDetail',
'deploymentsInstanceDetail2',
'deploymentsContainerDetail',
'deploymentsContainerDetail2',
'deploymentsInstantiation',
'daemonset',
'daemonsetInstanceDetail',
'daemonsetInstanceDetail2',
'daemonsetContainerDetail',
'daemonsetContainerDetail2',
'daemonsetInstantiation',
'job',
'jobInstanceDetail',
'jobInstanceDetail2',
'jobContainerDetail',
'jobContainerDetail2',
'jobInstantiation',
'statefulset',
'statefulsetInstanceDetail',
'statefulsetInstanceDetail2',
'statefulsetContainerDetail',
'statefulsetContainerDetail2',
'statefulsetInstantiation',
'service',
'loadBalance',
'loadBalanceDetail',
'resourceMain',
'resourceConfigmap',
'resourceSecret',
'depotMain',
'imageLibrary',
'projectImage',
'clusterNode',
'nodeMain',
'myCollect',
'mcMain',
'operateAudit',
'eventQuery',
'configurationMain',
'namespace',
'templateset',
'configurationCreate',
'mesosTemplatesetApplication',
'mesosTemplatesetDeployment',
'mesosTemplatesetService',
'mesosTemplatesetConfigmap',
'mesosTemplatesetSecret',
'k8sTemplatesetApplication',
'k8sTemplatesetDeployment',
'k8sTemplatesetService',
'k8sTemplatesetConfigmap',
'k8sTemplatesetSecret',
'k8sTemplatesetIngress',
'k8sTemplatesetHPA',
'instantiation',
'metricManage'
]
const routeName = this.$route.name
let parentRouteName = ''
if (bcsRouteKeys.includes(routeName)) {
parentRouteName = 'clusterMain'
document.title = '容器服务'
}
return parentRouteName
}
},
created () {
// Clicking the module name in the navigation fires a back-to-home event; the iframe handles the redirect to the module home page
window.addEventListener('order::backHome', () => {
this.reloadPage(this.parentRouteName)
})
},
mounted () {
const self = this
bus.$on('show-login-modal', data => {
self.$refs.bkAuth && self.$refs.bkAuth.showLoginModal(data)
})
bus.$on('close-login-modal', () => {
self.$refs.bkAuth && self.$refs.bkAuth.hideLoginModal()
})
},
methods: {
/**
* Save projectId and projectCode to localStorage
*
* @param {string} projectId project id
*/
setLocalStorage (projectId) {
const project = getProjectById(projectId)
const projectCode = project.project_code
localStorage.setItem('curProjectId', projectId)
localStorage.setItem('curProjectCode', projectCode)
},
/**
* Reload the current page
*
* @param {string} routeName current route name
*/
reloadPage (routeName) {
const projectId = this.projectId
const projectCode = this.projectCode || getProjectById(projectId).project_code
const curRouteName = this.$route.name
if (routeName === curRouteName) {
this.$emit('reloadCurPage')
} else {
this.$router.push({
name: routeName,
params: {
projectId: projectId,
projectCode: projectCode,
needCheckPermission: true
}
})
}
},
/**
* Select a project
*
* @param {string} projectCode project code
*/
selectProject (projectCode) {
const routeName = this.$route.name
if (!routeName) {
return false
}
const projectId = getProjectByCode(projectCode).project_id
this.setLocalStorage(projectId)
// If we are on the cluster overview, node list, or node detail page when switching projects, the newly selected project may not contain the current clusterId, so redirect to the cluster list
if (this.clusterRouters.indexOf(routeName) > -1) {
this.$router.push({
name: 'clusterMain',
params: {
projectCode: projectCode,
projectId: projectId,
needCheckPermission: true
},
query: this.$route.query || {}
})
} else if (this.configurationRouters.indexOf(routeName) > -1) {
this.$router.push({
name: 'templateset',
params: {
projectCode: projectCode,
projectId: projectId,
needCheckPermission: true
},
query: this.$route.query || {}
})
} else if (this.loadBalanceRouters.indexOf(routeName) > -1) {
// If currently under LoadBalance, return to the LoadBalance list
this.$router.push({
name: 'loadBalance',
params: {
projectId: projectId,
projectCode: projectCode,
needCheckPermission: true
},
query: this.$route.query || {}
})
} else if (this.depotRouters.indexOf(routeName) > -1) {
this.$router.push({
name: 'imageLibrary',
params: {
projectId: projectId,
projectCode: projectCode,
needCheckPermission: true
},
query: this.$route.query || {}
})
} else if (this.helmRouters.indexOf(routeName) > -1) {
this.$router.push({
name: 'helms',
params: {
projectId: projectId,
projectCode: projectCode,
needCheckPermission: true
},
query: this.$route.query || {}
})
} else if (this.metricRouters.indexOf(routeName) > -1) {
this.$router.push({
name: 'metricManage',
params: {
projectId: projectId,
projectCode: projectCode,
needCheckPermission: true
},
// Drop the URL query parameters here
query: {}
})
}
}
}
}
</script>
|
[Lanreotide acetate may cure cystic dystrophy in heterotopic pancreas of the duodenal wall].
Cystic dystrophy in heterotopic pancreas of the duodenal wall is a rare but benign disease, associated in most cases with chronic pancreatitis. Treatment of this disease is controversial. We report here the use of a long-acting, stable synthetic somatostatin analogue in the treatment of cystic dystrophy in heterotopic pancreas of the duodenal wall: a 45-year-old man, a heavy drinker, was treated successfully for three months with lanreotide acetate; disappearance of the cysts was confirmed by computed tomography two months after the end of treatment. |
<template name="afContenteditable">
<div contenteditable="true" {{this.atts}}></div>
</template>
|
Post-puerperal Cu-T insertion: a prospective study.
One hundred and sixty-eight consecutive women accepting the copper T (CuT) intrauterine contraceptive device in the post-puerperal period were studied. Of them, 63 could be followed 6 weeks after insertion and 65 after 6 months. The risks of heavy bleeding, abdominal pain, etc. were no greater than those usually found when interval CuT insertion is carried out. There was no case of uterine perforation leading to migration of the CuT, but the expulsion rate was found to be high (16.4%). The CuT is a very useful post-puerperal contraceptive method and should be given more importance in the MCH programme. |
It just is all-Juli – your vision and attention to detail, combining the man-made with the natural in such a unique way. It makes me wonder what it is like inside this “living wall of green” – I can imagine it being cool and sweet; sheltered and rather magical.
It’s all a bit magical. This hotel is by the lake in Mexico. The wall is not a big feature…at the end of a lane, not something that is given its due. It is so lovely. I believe I took this from the car window as we were passing by. Drive-by photography even in Mexico! 🙂 |
Thursday, September 22, 2011
Father and Son
This week I invited my oldest son Brandon to help with work on the cabin. I regret that he hasn't been more involved and, as a result, hasn't gained the same appreciation of the progress and accomplishments on the project as my youngest son, Nic. All is not lost, as there are still many tasks left to complete, and we worked together on one this week. With only 400' of 1,900' of ceiling panels stained and installed, we still have a major project needing to be completed. Brandon found out how slowly this progresses, as each board must be sanded, vacuumed and stained at least 3 times before installation. He also found out how much I appreciate the help and how much quicker the job goes with 2 people working together. We only prepared 100' of paneling this week, but these boards are some of the best looking panels yet. They are smooth and shine like glass, so I'm pleased with our efforts.
Since we could only complete the work on a finite number each day, there was also quality time available on the porch for discussing the world's issues. This is what the cabin was built for; this is one father's way to reach out to his three children and show them he has a heart, a dream and soul where they can talk about things important without the distractions of a daily routine.
I was so pleased when Brandon told me that he now looks forward to this time together. |
Q:
Python: Compare the length of different lists and return the longest
I would like to compare 5 lists regarding their length and return the longest list.
I don't have any idea...
Maybe something like this:
a = [1,1,1,1]
b = [1]
c = [1,1,1]
d = [1,1,1,1,1]
e = [1,1,1,1,1]
L = [a,b,c,d,e]
def compare(lists):
counter = count()
previous = lists[0]
group_index = next(counter)
for value in lists:
if len(value) >= len(previous):
...
The result should be 'e'.
A:
Below is Python 3:
a = [1,1,1,1]
b = [1]
c = [1,1,1]
d = [1,1,1,1,1]
e = [1,1,1,1,2]
f = [1,1,1,1,1,2]
L = [a,f,b,c,d,e]
def compare(lists):
    longest = lists[0]
    for value in lists:
        # >= keeps the LAST list of maximal length
        if len(value) >= len(longest):
            longest = value
    return longest

result = compare(L)
print(result)
# Print the name of the variable bound to the winning list ('f' here)
print([k for k, v in locals().items() if v is result][0])

For Python 2, change items() to iteritems() and use the print statement instead of print().
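For reference, the built-in max with key=len gives the same result in one line; note that on ties max returns the first longest list, whereas the >= loop above returns the last:

```python
a = [1, 1, 1, 1]
b = [1]
c = [1, 1, 1]
d = [1, 1, 1, 1, 1]
e = [1, 1, 1, 1, 1]
L = [a, b, c, d, e]

longest = max(L, key=len)   # returns d, the first of the two length-5 lists
print(len(longest))  # 5
print(longest is d)  # True
```

If you want all lists tied for the maximal length, filter with the max length: `[x for x in L if len(x) == len(longest)]`.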
|
Doctor reviews a digital mammogram of a dense breast and points to a potential cancer. Credit: National Cancer Institute.
In a victory for the dense-breast patient movement, Governor Jerry Brown (D-CA) signed legislation last week requiring that doctors who discover that women have dense breasts on mammography must inform women that:
dense breasts are a risk factor for breast cancer;
mammography sees cancer less well in dense breasts than in normal breasts; and
women may benefit from additional breast cancer screening.
The California law goes into effect on April 1, 2013. It follows four states (Connecticut, Texas, Virginia, and New York) with similar statutes. All have enjoyed solid bipartisan support. Rarely do naysayers or skeptics speak up.
Young women who are leading the charge often bring lawmakers the story of a young constituent, diagnosed with a very aggressive, lethal cancer that was not shown on film-screen mammography. The Are You Dense? patient advocacy group engages patients on Facebook, where women share their experiences with breast cancer, organize events, and lobby for legislation. Individual radiologists work with the advocacy groups, but many radiology groups and breast surgeons do not endorse these laws.
A Closer Look at Breast Cancer Data
Living in an age when information is viewed as an entitlement, knowledge, and power, many physicians find it hard to argue against a patient’s right to know. Can sharing information be a mistake? Some epidemiologists think so. Otis W. Brawley, MD, FACP, Chief Medical & Scientific Officer, American Cancer Society, says: “I really worry when we legislate things that no one understands. People can get harmed.” Numerous issues have to be worked out, according to Brawley. For one, he explains: “There is no standard way to define density.” Additionally, “even though studies suggest that density increases the risk of cancer, these cancers tend to be the less serious kind, but even that is open to question,” Brawley says. “We in medicine do not know what to do for women who have increased density.”
A study of more than 9,000 women in the Journal of the National Cancer Institute revealed that women with very dense breasts were no more likely to die than similar patients whose breasts were not as dense. “When tumors are found later in more dense breasts, they are no more aggressive or difficult to treat,” says Karla Kerlikowske, MD, study coauthor, and professor of medicine and epidemiologist at the University of California San Francisco. In fact, an increased risk of death was only found in women with the least dense breasts.
The trouble is what is known about dense breasts is murky. Asked whether he backs advising women that dense breasts are a risk factor for breast cancer, Anthony B. Miller, MD, Co-Chair of the Cancer Risk Management Initiative and a member of the Action Council, Canadian Partnership Against Cancer, and lead investigator of the Canadian National Breast Cancer Screening Study, says: “I would be very cautious. The trouble is people want certainty and chances are whatever we find, all we can do is explain.”
Women in their forties, who are most likely to have dense breasts (density declines with age) may want to seek out digital mammography. In studies comparing digital mammography to film-screen mammography in the same women, digital mammography has been shown to improve breast cancer detection in women with dense breasts. Findings from the Digital Mammographic Imaging Screening Study, showed better breast cancer detection with digital mammography. But digital mammography is not available in many areas. Moreover, Miller explains: “We do not know if this will benefit women at all. It is very probable that removal of the additional small lesions will simply increase anxiety and health costs, including the overdiagnosis of breast cancer, and have no impact upon mortality from breast cancer.”
Additional imaging studies sound attractive to people convinced that there is something clinically significant to find. But as I pointed out in my last post, many radiologists and breast physicians contend that there is no evidence that magnetic resonance imaging or any other imaging study aids breast cancer screening in women with dense breasts. Brawley notes: “These laws will certainly lead to more referral for MRI and ultrasound without clear evidence that women will benefit (lives will be saved.) It’s clear that radiologists will make more money offering more tests.” Miller adds: “A number of doctors are trying to capitalize on this and some of them should know a lot better.”
Many Advocates Question More Tests, Statutes
Even though the “Are You Dense?” campaign has been instrumental in getting legislation on the books across the country, other advocacy groups and patient advocates want research and enhanced patient literacy about the risks and benefits of procedures. Many recall mistakes that led women down the path of aggressive procedures. In that group is the radical Halsted mastectomy, used widely before systematic study but, once studied, found no better than breast-conserving surgery for many cancers; and bone marrow transplants, also found to be ineffective, wearing, and costly.
Jody Schoger, a breast cancer social media activist at @jodyms who engages women weekly on twitter at #bcsm, had this to say on my blog about the onslaught of additional screening tests:
“What is needed is not another expensive modality… but concentrated focus for a biomarker to indicate the women who WILL benefit from additional screening. Because what’s happening now is an avalanche of screening, and its subsequent emotional and financial costs, that is often far out of proportion to both the relative and absolute risk for invasive cancer. I simply don’t think more “external” technology is the answer but one that evolves from the biology of cancer.”
Eve Harris @harriseve, a proponent of patient navigation and patient literacy, challenged Peter Ubel, MD, professor of business administration and medicine at Duke University, on his view of the value of patient empowerment on the breast density issue. In a post on Forbes, replicated in Psychology Today, Ubel argued that in cases where the pros and cons of a patient’s alternatives are well known, for example, considering mastectomy or lumpectomy, patient empowerment plays an important role. “But we are mistaken to turn to patient empowerment to solve dilemmas about how best to screen for cancer in women with dense breasts,” he writes.
Harris disagrees, making a compelling case for patient engagement:
“I think that we can agree that legislative interference with medical practice is not warranted when it cannot provide true consumer protection. But the context is the biggest culprit in this situation. American women’s fear of breast cancer is out of proportion with its incidence and its mortality rate. Truly empowering people—patients would mean improving health literacy and understanding of risk…”
But evidence and literacy take time, don’t make for snappy reading or headlines, and don’t shore up political points. Can we stop the train towards right-to-inform laws and make real headway in women’s health? Can we reallocate healthcare dollars towards effective treatments that serve patients and engage them in their care? You have to wonder.
2 Responses to Are Dense-Breast, Right-to-Know Laws Helpful?
Science had an interview with Virginia Moyer, chair of the USPSTF. One of the problems with eliminating copayments for screening tests like mammograms is that they don’t eliminate the copayments for follow-up biopsies. http://www.sciencemag.org/content/337/6101/1468.1
V.M.: We don’t ignore the fact that there are [financial] costs associated with things, and we particularly consider cost to the individual to be a potential harm, but not in an explicit quantitative way. We do consider the fact that a false-positive test not only ends up requiring in many instances invasive and unpleasant procedures to determine that it was a false positive, but it can also be costly to the individual in time and money. My most recent false-positive mammogram cost me $2000 out of pocket, because insurance only covers the mammogram; it doesn’t cover the biopsy. Two thousand dollars is real money. Our purpose is not to save the system money. Our purpose is to improve the health of all Americans.
Laura Newman
I am a medical journalist and blogger. My stories have appeared in peer-reviewed journals and on the web. In Patient POV, I strive to bring the same rigor to telling stories about patients that I have shown in my previous work, which has featured research scientists and physicians.
Laura can be found on Twitter as @lauranewmanny.
Donate
If you like what you've been reading, please consider supporting this work. |
In a rare statement, Kim Jong-un said that Trump would "pay dearly" for his threat, a state-media report said on Friday. "I'd like to advise Trump to exercise prudence in selecting words and to be considerate of whom he speaks to when making a speech in front of the world", Kim said.
Premier Kathleen Wynne has even brought on former bank executive Ed Clark to lead a task force with the goal of landing the project somewhere in the province. Amazon's Seattle-based headquarters is a big economic generator. Kyle Whitehead, government relations director at the Active Transportation Alliance said Chicago's transportation system could meet Amazon's needs.
Augustine, Florida. The Rehabilitation Center at Hollywood Hills nursing home lost power, and therefore air conditioning, as Hurricane Irma hit Florida. FPL also requested that OG&E deploy a 14-member management team to coordinate FPL linemen and vegetation management personnel.
Trump increased economic pressure on North Korea when ordering additional restrictions, which include their shipping and trade networks. "Big noise out of North Korea will keep today's trading defensive as the biggest threat to the markets make the headlines", said Peter Cardillo , chief market economist at First Standard Financial in NY.
There was 340 million more barrels oil in storage in the Organization for Economic Co-operation and Development (OECD) countries than the current five-year average in January but that fell to 209 million barrels in July, the last month for which data was available, GMP FirstEnergy analyst Martin King said.
Petitioner Glenn Gathercole, from London , said he added his signature because: " Uber provides a much needed alternative to minicabs and black cabs ". Tom Elvidge, Uber's general manager in London , said the company would appeal the decision in order "to defend the livelihoods of all those drivers, and the consumer choice of millions of Londoners who use our app".
Moody's has downgraded the United Kingdom from "stable" to "negative", the British government has said that Moody's assessment is "outdated". Moody's further said that pressure on public funds will be "exacerbated" because of the "erosion of the UK's medium-term economic strength that is likely to result from the manner of its departure from the European Union".
The natural disaster is reported to have occurred near a nuclear test site and previous quakes have occurred during weapons' tests. But the official from the South Korean agency said the analysis of seismic waves and the lack of sound waves clearly showed that the quake was not caused by an artificial explosion.
Cheap imports have led to a boom in the US solar industry, where rooftop and other installations have surged tenfold since 2011. "The U.S. solar manufacturing sector contributes to our energy security and economic prosperity". Suniva's initial petition to the ITC asked for a tariff of 40 cents per watt on certain types of solar cells, and at least a 78-cent-per-watt tariff on solar modules, or packages of solar equipment including cells.
It has underperformed by 4.16% the S&P500. More news for WEX Inc (NYSE:WEX) were recently published by: Prnewswire.com, which released: "Mize and WEX Inc. Therefore 45% are positive. Proctor & Gamble had 59 analyst reports since August 4, 2015 according to SRatingsIntel. The stock of Johnson & Johnson (NYSE:JNJ) earned "Sell" rating by BTIG Research on Friday, July 21.
According to the Fed's economic projections which were released on Wednesday, Fed officials expected the USA economy to grow 2.4 percent this year, higher than their forecast of 2.2 percent in June. The Fed's decision to exit from balance-sheet policies comes a decade after the global financial crisis began to tip the economy into a recession at the end of 2007.
Analysts pegging the company with a rating of 3 would be indicating a Hold recommendation. This was disclosed to clients in a research report on 18 September. Already this year, the company has bought Nimble Storage for £938m and shown off its ARM-Powered 160TB behemoth "The Machine" which always sounds like it was named by Stewie Griffin.
Mr Abbott has alleged the man headbutted him after asking to shake his hand. An ABC spokesperson told Daily Mail Australia: 'The email was unacceptable and the staff member in question who is a technical operator and not a journalist, has been counselled'.
Mumbai: Pakistani actor Mahira Khan was trolled on social media after some pictures surfaced on Twitter which showed her smoking along with actor Ranbir Kapoor . Our source revealed, " Ranbir was in NY around the same time as Mahira". They called out people for their double standards at shaming Mahira for smoking when a male actor was literally inches away from her, doing the same thing.
Knowledge Leaders Capital LLC grew its holdings in shares of Rio Tinto PLC by 126.0% in the second quarter. MML Investors Services LLC purchased a new stake in Rio Tinto PLC during the second quarter valued at approximately $203,000. One research analyst has rated the stock with a sell rating, four have assigned a hold rating and sixteen have assigned a buy rating to the company's stock.
About 2.84M shares traded. It is down 9.56% since September 22, 2016 and is uptrending. It has outperformed by 12.14% the S&P500. Is Independence Realty Trust, Inc. Tiaa Cref Invest Mngmt Ltd Liability Corporation holds 0% or 184,284 shares in its portfolio. Physicians Realty L.P.is the operating partnership of the Trust.
Weighed down by banking stocks, the 30-share BSE Sensex opened lower and fell further before settling down by 447.60 points, or 1.38 per cent - its biggest single-day fall since November 15 last year - at 31,922.44. Indian shares fell 1 per cent on Friday, while the rupee hit its weakest point since early April amid concerns that the government's plan for a stimulus to halt an economic slowdown may have a negative impact on the fiscal deficit.
The euro jumped to 1.99 against the U.S. dollar after output in services and manufacturing jumped to 56.7 in September from 55.7, according to the respected IHS Markit's Flash Composite Purchasing Managers' Index for September. Germany's composite output index rose unexpectedly for the consecutive second month in September, to 57.8, from 55.8 in August. The manufacturing PMI climbed to 60.6 in September from 59.3 in August.
Reportedly, the two carriers are close to agreeing on "tentative terms" for such a deal, Reuters described on Friday. SoftBank founder Masayoshi Son abandoned an earlier attempt to acquire T-Mobile for Sprint in 2014 amid opposition from anti-trust regulators concerned that consumers could lose out.
She said the Fed would adjust its policymaking if it thought the causes of low inflation had become permanent. Consequently, the Committee continues to expect that, with gradual adjustments in the stance of monetary policy , economic activity will expand at a moderate pace, and labor market conditions will strengthen somewhat further.
The full list of these flight cancellations (from 21 September to 31 October) are now available on the ryanair .com website, and customers affected by these cancellations will be emailed with offers of alternative flights or full refunds, and details of their EU261 compensation entitlement, the airline said.
The Fed left rates unchanged for now, as was widely anticipated, but investors' expectations changed for December after the US central bank signaled one more rate hike by year-end despite recent weak inflation readings. The risk exists that investors could become spooked by the rising number of bonds being transferred back into private hands. It will start shrinking its balance sheet by unwinding $10bn a month, it said.
Peel Hunt restated a "buy" recommendation and fixed a GBX 245 ($3.30) target price on stock of NCC Group in a study note on early Mon, Sep 4th. St Ives plc is a United Kingdom-based global marketing services company. Therefore 14% are positive. A number of other large investors have also added to or reduced their stakes in AZN. The stock has "Underperform" rating by Davy Research on Monday, February 8.
Best Buy slumped 8.0 per cent as Wall Street frowned on its long-term outlook. In addition, the Fed said it will begin to gradually unwind its more than US$4 trillion balance sheet next month. Officials also said they plan to start unwinding the US central bank's $4.5 trillion balance sheet next month by reducing its bond holdings, which will gradually increase long-term borrowing rates.
Peel Hunt maintained the stock with "Hold" rating in Wednesday, August 30 report. Therefore 100% are positive. The 52 week high share price is 1104 GBX while the 52 week low is 794.5 GBX. The oil and gas exploration company reported $0.21 earnings per share for the quarter, missing the Zacks' consensus estimate of $0.39 by ($0.18). About 3.16M shares traded or 12.06% up from the average. |
Q:
HTML lists in Outlook +2007
I'm looking for a way to get bulleted lists in Outlook +2007
<ul style="margin:0; list-style-type: disc;">
<li> Element 1 </li>
<li> Element 2 </li>
<li> Element 3 </li>
</ul>
But this just doesn't work; there are no bullets at all in Outlook while there are in Gmail.
This site says that list-style-type is supported by Outlook +2007 (https://www.campaignmonitor.com/css/). However I just don't see that happening. Is there a better way to go about this?
A:
HTML emails are really annoying, aren't they? I find the most reliable way of dealing with bullet lists is to inline the bullet. That works across all email clients, with little margin variation.
Try:
<ul style="margin:0; list-style-type: none;">
<li>• Element 1</li>
<li>• Element 2</li>
<li>• Element 3</li>
</ul>
|
---
author:
- 'Lihi Shiloh-Perl and Raja Giryes'
bibliography:
- 'refs.bib'
title: Introduction to deep learning
---
General overview {#sec:basic_overview}
================
Neural Networks (NN) have revolutionized modern day-to-day life. Their significant impact is present even in our most basic actions, such as ordering products on-line via Amazon’s Alexa or passing the time with on-line video games against computer agents. The NN effect is evident on many more occasions; for example, in medical imaging NNs are utilized for lesion detection and segmentation [@Greenspan2016; @ben2016fully], and tasks such as text-to-speech [@gibiansky2017deep; @Sotelo2017Char2WavES] and text-to-image [@Reed_text2image2016] have seen remarkable improvements thanks to this technology. In addition, the advancements they have brought to fields such as natural language processing (NLP) [@devlin2018pretraining; @yang2019xlnet_arxiv; @liu2020roberta], optics [@Shiloh19Efficient; @Haim18Depth], image processing [@Schwartz19DeepISP; @Yang19Deep] and computer vision (CV) [@chen2018encoder; @gao2018reconet] are astonishing, creating a leap forward in technologies such as autonomous driving [@chen2019progressive; @Ma2019CVPR], face recognition [@facenet; @CosFace; @ArcFace], anomaly detection [@Kwon2019], text understanding [@kadlec2016text] and art [@gatys2016image; @johnson2016perceptual], to name a few. Their influence is powerful and continues to grow.
The NN journey began in the late 1950’s with the publication of the Perceptron [@Rosenblatt58theperceptron]. Its development was motivated by the formulation of the human neuron activity [@McCulloch1943] and research regarding human visual perception [@hubel:single]. However, quite quickly the field experienced a deceleration, which lasted until the mid 1980’s. This was mainly the result of a lack of theory regarding the training of the (single-layer) perceptron and a series of theoretical results that emphasized its limitations, the most notable being its inability to learn the XOR function [@minsky69perceptrons].
This *NN ice age* came to a halt in the mid 1980’s, mainly with the introduction of the multi-layer perceptron (MLP) and the backpropagation algorithm [@Rumelhart:1986]. Furthermore, the revolutionary convolutional layer was presented [@Lecun98gradient], with one of its notable achievements being the successful recognition of hand-written digits [@LeCun1990].
While some other significant developments happened in the following decade, such as the Long Short-Term Memory (LSTM) network [@Hochreiter1997], the field experienced another deceleration. Questions were arising with no adequate answers, especially with respect to the non-convex nature of the used optimization objectives, overfitting the training data, and the challenge of vanishing gradients. These difficulties led to a second *NN winter*, which lasted two decades. In the meantime, classical machine learning techniques were developed and attracted much academic and industry attention. One of the prominent algorithms was the newly proposed Support Vector Machine (SVM) [@cristianini2000], which defined a convex optimization problem with a clear mathematical interpretation [@Vapnik1995SVM]. These properties increased its popularity and usage in various applications.
The $21^\text{st}$ century began with some advancements in neural networks in the areas of speech processing and Natural Language Processing (NLP). Hinton *et al.* [@Hinton2006] proposed a method for layer-wise initial training of neural networks, which alleviated some of the challenges in training networks with several layers. However, the great NN *tsunami* truly hit the field with the publication of *AlexNet* in 2012 [@AlexNet]. In this paper, Krizhevsky *et al.* presented a neural network that achieved state-of-the-art performance on the ImageNet [@Deng09] challenge, where the goal is to classify images into 1000 categories using 1.2 million images for training and 150,000 images for testing. The improvement over the runner-up, which relied on hand-crafted features and one of the best classification techniques of that time, was notable: more than $10\%$. This caused the whole research community to understand that neural networks are far more powerful than previously thought and bear great potential for many applications. This led to a myriad of research works that applied NNs to various fields, showing their great advantage.
Nowadays, it is safe to say that almost every research field has been affected by this NN *tsunami* wave, experiencing significant improvements in abilities and performance. Many of the tools used today are very similar to the ones used in the previous phase of NN research. Indeed, some new regularization techniques, such as batch-normalization [@IoffeS15] and dropout [@Srivastava2014], have been proposed. Yet, the key enablers of the current success are the large amounts of data available today, which are essential for training large NNs, and the developments in GPU computation that accelerate training significantly (sometimes even providing a $\times 100$ speed-up compared to training on a conventional CPU). The advantage of NNs is remarkable especially at large scales. Thus, having large amounts of data, and the appropriate hardware to process them, is vital for their success.
A major example of a tool that did not exist before is the Generative Adversarial Network (GAN [@GANs]). In 2014, Goodfellow *et al.* published this novel framework for learning data distributions. The framework is composed of two models, a generator and a discriminator, trained as adversaries. The generator is trained to capture the data distribution, while the discriminator is trained to differentiate between generated (“fake”) data and real data. The goal is to let the generator synthesize data which the discriminator fails to discriminate from the real one. The GAN architecture has been used in more and more applications since its introduction. One such application is the rendering of real scene images, where GANs have proved very successful [@Gatys2016ImageST; @Zhu2017UnpairedIT]. For example, Brock *et al.* introduced the BigGAN [@Brock2018] architecture that exhibited impressive results in creating high-resolution images, shown in Fig. \[fig:BigGAN\_example1\]. While most GAN techniques learn from a set of images, recently it has been successfully demonstrated that one may even train a GAN using just one image [@Shaham_2019_ICCV]. Other GAN applications include inpainting [@Liu_2018_ECCV; @Yu_2019_ICCV], retargeting [@Shocher_2019_ICCV], 3D modeling [@Guibas18], semi-supervised learning [@vanEngelen2019], domain adaptation [@CyCADA2018] and more.
![Class-conditional samples generated by a GAN, [@Brock2018].[]{data-label="fig:BigGAN_example1"}](pics/BigGAN_example1.png){width="75.00000%"}
While neural networks are very successful, the theoretical understanding behind them is still lacking. In this respect, there are research efforts that try to provide a mathematical formulation explaining various aspects of NNs, studying properties such as their optimization [@sun2019optimization], generalization [@Jakubovitz2019] and expressive power [@Safran19Depth; @ongie2020a].
The rest of the chapter is organized as follows. In Section \[sec:basic\_structure\] the basic structure of a NN is described, followed by details regarding popular loss functions and metric learning techniques used today (Section \[sec:LF\]). We continue with an introduction to the NN training process in Section \[sec:training\], including a mathematical derivation of backpropagation and training considerations. Section \[sec:optimizers\] elaborates on the different optimizers used during training, after which Section \[sec:regularizations\] presents a review of common regularization schemes. Section \[sec:architectures\] details advanced NN architecture with state-of-the-art performances and Section \[sec:summary\] concludes the chapter by highlighting some current important NN challenges.
Basic NN structure {#sec:basic_structure}
==================
The basic building block of a NN consists of a linear operation followed by a non-linear function. Each building block consists of a set of parameters, termed weights and biases (sometimes the term weights includes also the biases), that are updated in the training process with the goal of minimizing a pre-defined loss function.
Assume input data $\mathbf{x}\in \mathbb{R}^{d_0}$; the output of the building block is of the form $\psi(\mathbf{W}\mathbf{x}+\mathbf{b})$, where $\psi(\cdot )$ is a non-linear function, $\mathbf{W}\in \mathbb{R}^{d_1 \times d_0}$ is the linear operation and $\mathbf{b}\in \mathbb{R}^{d_1}$ is the bias. See Fig. \[fig:building\_block\] for an illustration of a single building block.
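As a concrete sketch, a single building block can be written in a few lines of NumPy. The function names here are ours, and ReLU stands in for the generic non-linearity $\psi$:

```python
import numpy as np

def relu(z):
    # Element-wise non-linearity, playing the role of psi
    return np.maximum(0.0, z)

def building_block(x, W, b):
    # Linear operation followed by a non-linear function: psi(Wx + b)
    return relu(W @ x + b)

# Example dimensions: d0 = 3 inputs, d1 = 2 outputs
rng = np.random.default_rng(0)
x = rng.standard_normal(3)
W = rng.standard_normal((2, 3))
b = np.zeros(2)
y = building_block(x, W, b)
assert y.shape == (2,) and (y >= 0).all()  # output lives in R^{d1}, non-negative after ReLU
```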
![An NN building block consists of a linear element and a non-linear element. The weights $\mathbf{W}$ and biases $\mathbf{b}$ are the parameters of the layer.[]{data-label="fig:building_block"}](pics/building_block.png){width="50.00000%"}
![NN layered structure: concatenation of $K$ building blocks, i.e., model layers.[]{data-label="fig:NN_illustraion"}](pics/NN_illustraion.png){width="\textwidth"}
To form an NN model, such building blocks are concatenated one to another in a layered structure that allows the input data to be gradually processed as it propagates through the network. Such a process is termed the (feed-)forward pass. Following it, during training, a backpropagation process is used to update the NN parameters, as elaborated in Section \[subsec:backprop\]. In inference time, only the forward pass is used.
Fig. \[fig:NN\_illustraion\] illustrates the concatenation of $K$ building blocks, i.e., layers. The intermediate output at the end of the model (before the “task driven block”) is termed the *network embedding* and is formulated as follows: $$\resizebox{.92 \textwidth}{!}{$
\Phi(\mathbf{x},\mathbf{W}^{(1)},...,\mathbf{W}^{(K)},\mathbf{b}^{(1)},...,\mathbf{b}^{(K)})=\psi(\mathbf{W}^{(K)}...\psi(\mathbf{W}^{(2)}\psi(\mathbf{W}^{(1)}\mathbf{x}+\mathbf{b}^{(1)})+\mathbf{b}^{(2)})...+\mathbf{b}^{(K)}).
$}$$ The final output (prediction) of the network is estimated from the network embedding of the input data using an additional task driven layer. A popular example is the case of classifications, where this block is usually a linear operation followed by the *cross-entropy* loss function (detailed in Section \[sec:LF\]).
When approaching the analysis of data with varying length, such as sequential data, a variant of the aforementioned approach is used. A very popular example of such a neural network structure is the Recurrent Neural Network (RNN [@Jain1999RNN]). In a vanilla RNN model, the network receives just a single input at each time step, but with a feedback loop computed from the result of the same network at the previous time step (see an illustration in Fig. \[fig:RNN\]). This enables the network to “remember” information, supporting multiple inputs and producing one or more outputs.
More complex RNN structures include performing bi-directional calculations or adding gating to the feedback and the input received by the network. The best-known complex RNN architecture is the Long Short-Term Memory (LSTM) [@Hochreiter1997; @gers1999learning], which adds gates to the RNN. These gates decide what information from the current input and the past will be used to calculate the output and the next feedback, as well as what information to mask (i.e., causing the network to forget). This enables an easier combination of past and present information. It is commonly used for time-series data in domains such as NLP and speech processing.
![Recurrent NN (RNN) illustration for time series data. The feedback loop introduces time dependent characteristics to the NN model using an element-wise function. The weights are the same along all time steps.[]{data-label="fig:RNN"}](pics/RNN_series.png){width="35.00000%"}
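The feedback loop of a vanilla RNN can be sketched as follows; this is our own toy code (names ours), using tanh as the element-wise function and reusing the same weights at every time step, as in the figure:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    # One time step: the new hidden state mixes the current input
    # with the feedback (previous hidden state) through tanh.
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

def rnn_forward(xs, W_x, W_h, b):
    # The same weights are shared along all time steps.
    h = np.zeros(W_h.shape[0])
    hs = []
    for x_t in xs:
        h = rnn_step(x_t, h, W_x, W_h, b)
        hs.append(h)
    return np.stack(hs)

rng = np.random.default_rng(1)
T, d_in, d_h = 5, 3, 4
xs = rng.standard_normal((T, d_in))
W_x = rng.standard_normal((d_h, d_in)) * 0.1
W_h = rng.standard_normal((d_h, d_h)) * 0.1
b = np.zeros(d_h)
hs = rnn_forward(xs, W_x, W_h, b)
assert hs.shape == (T, d_h)  # one hidden state per time step
```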
Another common network structure is the *Encoder-Decoder* architecture. The first part of the model, the encoder, reduces the dimensions of the input to a compact feature vector. This vector functions as the input to the second part of the model, the decoder. The decoder increases its dimension, usually, back to the original input size. This architecture essentially learns to compress (encode) the input to an efficiently small vector and then decode the information from its compact representation. In the context of regular feedforward NN, this model is known as autoencoder [@sonderby2016ladder] and is used for several tasks such as image denoising [@Remez18Class], image captioning [@Vinyals2015Show], feature extraction [@vincent2008extracting] and segmentation [@atlason2019unsupervised]. In the context of sequential data, it is used for tasks such as translation, where the decoder generates a translated sentence from a vector representing the input sentence [@Sutskever14Seq2Seq; @cho-etal-2014-properties].
Common linear layers {#sec:layers}
--------------------
A common basic NN building block is the Fully Connected (FC) layer. A network composed of a concatenation of such layers is termed a Multi-Layer Perceptron (MLP) [@Ruck1990]. The FC layer connects every neuron in one layer to every neuron in the following layer, i.e., the matrix $\mathbf{W}$ is dense. It enables information propagation from all neurons to all the ones following them. However, it does not maintain spatial information. Figure \[fig:MLP\] illustrates a network with FC layers.
![Fully-connected layers.[]{data-label="fig:MLP"}](pics/MLP.png){width="50.00000%"}
The convolutional layer [@LeCun1989; @Lecun98gradient] is another very common layer. We discuss here the 2D case, where the extension to other dimensions is straightforward. This layer applies one or multiple convolution filters to its input with kernels of size $W\times H$. The output of the convolution layer is commonly termed a *feature map*.
Each neuron in a feature map receives inputs from a set of neurons from the previous layer, located in a small neighborhood defined by the kernel size. If we apply this relationship recursively, we can find the part of the input that affects each neuron at a given layer, i.e., the area of visible context that each neuron sees from the input. The size of this part is called the *receptive field*. It impacts the type and size of visual features each convolution layer may extract, such as edges, corners and even patterns. Since convolution operations maintain spatial information and are translation equivariant, they are particularly useful in image processing and CV.
If the input to a convolution layer has some arbitrary third dimension, for example 3-channels in an RGB image ($C=3$) or some $C>1$ channels from an output of a hidden layer in the model, the kernel of the matching convolution layer should be of size $W\times H\times C$. This corresponds to applying a different convolution for each input channel separately, and then summing the outputs to create one feature map. The convolution layer may create a multi-channel feature map by applying multiple filters to the input, i.e., using a kernel of size $W\times H\times C_\text{in}\times C_\text{out}$, where $C_\text{in}$ and $C_\text{out}$ are the number of channels at the input and output of the layer respectively.
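The shape bookkeeping above can be illustrated with a naive, loop-based NumPy sketch (our own code, not an efficient implementation), assuming stride 1 and no padding:

```python
import numpy as np

def conv2d(x, kernels):
    # x: input of shape (C_in, H, W); kernels: (C_out, C_in, kH, kW).
    # Each output channel sums per-input-channel correlations over the
    # "valid" region, producing one feature map per filter.
    C_in, H, W = x.shape
    C_out, C_in_k, kH, kW = kernels.shape
    assert C_in == C_in_k
    out = np.zeros((C_out, H - kH + 1, W - kW + 1))
    for o in range(C_out):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[o, i, j] = np.sum(x[:, i:i + kH, j:j + kW] * kernels[o])
    return out

x = np.ones((3, 5, 5))     # e.g., an RGB image, C_in = 3
k = np.ones((8, 3, 3, 3))  # 8 filters of size 3x3x3 -> 8 output feature maps
fmap = conv2d(x, k)
assert fmap.shape == (8, 3, 3)
```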
Common non-linear functions {#sec:AF}
---------------------------
The non-linear functions defined for each layer are of great interest since they introduce the non-linear property of the model and, when chosen well, can help keep the propagating gradients from vanishing or exploding (see Section \[sec:training\]).
Non-linear functions that are applied element-wise are known as *activation functions*. Common activation functions are the Rectified Linear Unit (ReLU [@Dahl13]), leaky ReLU [@Xu15], Exponential Linear Unit (ELU) [@ELU15], hyperbolic tangent (tanh) and sigmoid. There is no universal rule for choosing a specific activation function, however, ReLUs and ELUs are currently more popular for image processing and CV while sigmoid and tanh are more common in speech and NLP. Fig. \[fig:activation\_funcs\] presents the response of the different activation functions and Table \[table:activation\_functions\] their mathematical formulation.
![Different activation functions. Leaky ReLU with $\alpha=0.1$, ELU with $\alpha=1$.[]{data-label="fig:activation_funcs"}](pics/activation_functions.jpg){width="65.00000%"}
| Function | Formulation $s(x)$ | Derivative $\frac{ds(x)}{dx}$ | Function output range |
|----------|--------------------|-------------------------------|-----------------------|
| ReLU | $0$ for $x<0$; $x$ for $x\geq 0$ | $0$ for $x<0$; $1$ for $x\geq 0$ | $[0,\infty )$ |
| Leaky ReLU | $\alpha x$ for $x<0$; $x$ for $x\geq 0$ | $\alpha$ for $x<0$; $1$ for $x\geq 0$ | $(-\infty ,\infty )$ |
| ELU | $\alpha(\mathrm{e}^{x}-1)$ for $x<0$; $x$ for $x\geq 0$ | $\alpha \mathrm{e}^{x}$ for $x<0$; $1$ for $x\geq 0$ | $[-\alpha ,\infty )$ |
| Sigmoid | $\frac{1}{1+\mathrm{e}^{-x}}$ | $\frac{\mathrm{e}^{-x}}{(1+\mathrm{e}^{-x})^2}$ | $(0,1)$ |
| tanh | $\tanh(x)=\frac{\mathrm{e}^{2x}-1}{\mathrm{e}^{2x}+1}$ | $1-\tanh^2(x)$ | $(-1,1)$ |
Other common non-linear operations in a NN model are the *pooling* functions. They are aggregation operations that reduce dimensionality while keeping dominant features. Assume a pooling size of $q$ and an input vector to a hidden layer of size $d$, $\mathbf{z}=[z_1,z_2,...,z_d]$. For every $m\in[1,d-q]$, the subset of the input vector $\mathbf{\tilde{z}}=[z_m,z_{m+1},...,z_{q+m}]$ may undergo one of the following popular pooling operations:
1. Max pooling: $g(\mathbf{\tilde{z}})=\max_i \mathbf{\tilde{z}}$
2. Mean pooling: $g(\mathbf{\tilde{z}})=\frac{1}{q}\sum_{i=m}^{q+m}z_i$
3. $\ell _p$ pooling: $g(\mathbf{\tilde{z}})=\sqrt[p]{\sum_{i=m}^{q+m} z^p_i}$
All pooling operations are characterized by a stride, $s$, that effectively defines the output dimensions. Applying pooling with a stride $s$, is equivalent to applying the pooling with no stride (i.e., $s=1$) and then sub-sampling by a factor of $s$. It is common to add zero padding to $\mathbf{z}$ such that its length is divisible by $s$.
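A minimal 1D pooling sketch (our own code) following the window, stride and zero-padding conventions above:

```python
import numpy as np

def pool1d(z, q, s, op="max"):
    # Slide a window of size q with stride s over z, aggregating each window.
    # Zero-pad z so its length is divisible by the stride, as is common practice.
    pad = (-len(z)) % s
    z = np.concatenate([z, np.zeros(pad)])
    windows = [z[m:m + q] for m in range(0, len(z) - q + 1, s)]
    if op == "max":
        return np.array([w.max() for w in windows])
    return np.array([w.mean() for w in windows])

z = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 0.0])
assert np.allclose(pool1d(z, q=2, s=2, op="max"), [3.0, 5.0, 4.0])
assert np.allclose(pool1d(z, q=2, s=2, op="mean"), [2.0, 3.5, 2.0])
```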
Another very common non-linear function is the *softmax*, which normalizes vectors into probabilities. The output of the model, the embedding, may undergo an additional linear layer to transform it to a vector of size $1 \times N$, termed *logits*, where $N$ is the number of classes. The logits, here denoted as $\mathbf{v}$, are the input to the softmax operation defined as follows: $$\label{eq:softmax}
\text{softmax}(v_i)=\frac{\mathrm{e}^{v_i}}{\sum_{j=1}^{N}\mathrm{e}^{v_j}}, ~~~~~ i\in[1,...,N].$$
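Eq. \[eq:softmax\] translates to a few lines of NumPy. The max-subtraction below is a standard numerical-stability trick (not part of the equation itself); it leaves the result unchanged since it cancels between the numerator and denominator:

```python
import numpy as np

def softmax(v):
    # Subtract the max logit for numerical stability; the result is identical.
    e = np.exp(v - v.max())
    return e / e.sum()

p = softmax(np.array([1.0, 2.0, 3.0]))
assert np.isclose(p.sum(), 1.0)  # a valid probability vector
assert p.argmax() == 2           # the largest logit gets the largest probability
```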
Loss functions {#sec:LF}
==============
Defining the loss function of the model, denoted as $\mathcal{L}$, is critical and usually chosen based on the characteristics of the dataset and the task at hand. Though datasets can vary, tasks performed by NN models can be divided into two coarse groups: (1) regression tasks and (2) classification tasks.
A [*regression*]{} problem aims at approximating a mapping function from input variables to a continuous output variable(s). For NN tasks, the output of the network should predict a continuous value of interest. Common NN regression problems include image denoising [@zhang2017beyond], deblurring [@nah2017deep], inpainting [@yang2017high] and more. In these tasks, it is common to use the Mean Squared Error (MSE), Structural SIMilarity (SSIM) or $\ell_1$ loss as the loss function. The MSE ($\ell_2$ error) imposes a larger penalty for larger errors, compared to the $\ell_1$ error which is more robust to outliers in the data. The SSIM, and its multiscale version [@Zhao2017LossFF], help improve the perceptual quality.
In the [*classification*]{} task, the goal is to identify the correct class of a given sample from $N$ pre-defined classes. A common loss function for such tasks is the *cross-entropy* loss. It is implemented based on a normalized vector of probabilities corresponding to a list of potential outcomes. This normalized vector is calculated by the softmax non-linear function (Eq. \[eq:softmax\]). The cross-entropy loss is defined as: $$\label{eq:cross-entropy}
\mathcal{L}_{CE}=-\sum_{i=1}^{N}y_i\log(p_i),$$ where $y_i$ is the ground-truth probability (the label) of the input to belong to class $i$ and $p_i$ is the model prediction score for this class. The label is usually binary, i.e., it contains $1$ in a single index (corresponding to the true class). This type of representation is known as *one-hot encoding*. The class is predicted in the network by selecting the largest probability and the log-loss is used to increase this probability.
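The cross-entropy computation above can be sketched directly from Eq. \[eq:cross-entropy\] (our own toy code; for a one-hot label only the true-class term survives the sum):

```python
import numpy as np

def cross_entropy(logits, y):
    # y is a one-hot label vector; probabilities p_i come from the
    # softmax of the logits (with the usual max-subtraction for stability).
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.sum(y * np.log(p))

logits = np.array([2.0, 0.5, -1.0])
y = np.array([1.0, 0.0, 0.0])  # true class is index 0 (one-hot encoding)
# The loss shrinks as the correct logit grows relative to the others.
assert cross_entropy(logits, y) < cross_entropy(np.array([0.0, 0.5, -1.0]), y)
```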
Notice that a network may provide multiple outputs per input data-point. For example, in the problem of image semantic segmentation, the network predicts a class for each pixel in the image. In the task of object detection, the network outputs a list of objects, where each is defined by a bounding box (found using a regression loss) and a class (found using a classification loss). Section \[subsec:detection\_segmentation\] details these different tasks. Since in some problems the labelled data are imbalanced, one may use a weighted softmax (that up-weighs less frequent classes) or the focal loss [@lin2017focal].
Metric Learning
---------------
An interesting property of the log-loss function used for classification is that it implicitly clusters classes in the network embedding space during training. However, for a clustering task, the resulting vanilla distance criteria often produce unsatisfactory performance, as different class clusters can be positioned closely in the embedding space, which may cause misclassification for samples that do not reside in the specific training set distribution.
Therefore, different metric learning techniques have been developed to produce an embedding space that brings intra-class samples closer and increases inter-class distances. This results in better accuracy and robustness of the network. It allows the network to distinguish whether two data samples belong to the same class or not just by comparing their embeddings, even if their classes were not present at training time.
Metric learning is very useful for tasks such as face recognition and identification, where the number of subjects to be tested are not known at training time and new identities that were not present during training should also be identified/recognized (e.g., given two images the network should decide whether these correspond to the same or different persons).
An example of a popular metric loss is the *triplet loss* [@facenet]. It enforces a margin between instances of the same class and other classes in the embedding feature space. This approach increases performance accuracy and robustness due to the large separation between class clusters in the embedding space. The triplet loss can be used in various tasks, such as detection, classification, recognition and other tasks with an unknown number of classes.
In this approach, three instances are used in each training step $i$: an anchor $\mathbf{x}_i^a$, another instance $\mathbf{x}_i^p$ from the same class as the anchor (positive sample), and a sample $\mathbf{x}_i^n$ from a different class (negative sample). They are required to obey the following inequality: $$\left\Vert \Phi(\mathbf{x}_i^a)-\Phi(\mathbf{x}_i^p) \right\Vert_2^2+\alpha<\left\Vert \Phi(\mathbf{x}_i^a)-\Phi(\mathbf{x}_i^n)\right\Vert_2^2,$$ where $\alpha>0$ enforces the wanted margin from other classes. Thus, the triplet loss is defined by: $$\mathcal{L}=\sum_i\max\left(0,\left\Vert \Phi(\mathbf{x}_i^a)-\Phi(\mathbf{x}_i^p)\right\Vert_2^2-\left\Vert \Phi(\mathbf{x}_i^a)-\Phi(\mathbf{x}_i^n)\right\Vert_2^2+\alpha\right).$$
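A per-triplet sketch in NumPy (our own toy code, operating on pre-computed embeddings rather than on $\Phi$ itself), with the hinge at zero used in common implementations such as FaceNet so that triplets already satisfying the margin contribute no loss:

```python
import numpy as np

def triplet_loss(emb_a, emb_p, emb_n, alpha=0.2):
    # Squared Euclidean distances in the embedding space.
    d_pos = np.sum((emb_a - emb_p) ** 2)
    d_neg = np.sum((emb_a - emb_n) ** 2)
    # Hinge: zero loss once the negative is farther than the
    # positive by at least the margin alpha.
    return max(0.0, d_pos - d_neg + alpha)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])  # same class, close to the anchor
n = np.array([1.0, 1.0])  # different class, far away
assert triplet_loss(a, p, n) == 0.0  # margin satisfied, nothing to push
assert triplet_loss(a, n, p) > 0.0   # violating triplet is penalized
```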
Fig. \[fig:triplet\] presents a schematic illustration of the triplet loss influence on samples in the embedding space. This illustration also shows a specific triplet example, where the positive example is relatively far from the anchor while the negative example is relatively near it. Finding such examples, which violate the triplet condition, is desirable during training. They may be found by on-line or off-line searches known as *hard negative mining*, where a preprocessing of the instances in the embedding space is performed to find violating examples for training the network.
Finding the “best” instances for training can, evidently, aid in achieving improved convergence. However, searching for them is often time consuming and therefore alternative techniques are being explored.
![Triplet loss: minimizes the distance between two similar class examples (anchor and positive), and maximizes the distance between two different class examples (anchor and negative).[]{data-label="fig:triplet"}](pics/triplet.png){width="60.00000%"}
An intriguing metric learning approach relies on ’classification’-type loss functions, where the network is trained with a fixed number of classes. Yet, these losses are designed to create an embedding space with margins between classes, which in turn provides a good prediction of the similarity between two inputs. Popular examples include the Cos-loss [@CosFace], Arc-loss [@ArcFace] and SphereFace [@Liu2017SphereFaceDH].
Neural network training {#sec:training}
=======================
Given a loss function, the weights of the neural network are updated to minimize it over a given training set. The training process of a neural network requires a large database, due to the structure and number of parameters of the network, and GPUs for an efficient implementation.
In general, training methods can be divided into supervised and unsupervised training. The former requires labeled data, which are usually very expensive and time consuming to obtain, whereas the latter is the more common case and does not assume known ground-truth labels. However, supervised training usually achieves significantly better network performance than the unsupervised case. Therefore, a lot of resources are invested in labeling datasets for training. Thus, we focus here mainly on the supervised setting.
In neural networks, regardless of the model task, all training phases have the same goal: to minimize a pre-defined error function, also denoted as the loss/cost function. This is done in two stages: (a) a feed-forward pass of the input data through all the network layers, calculating the error using the predicted outputs and their ground-truth labels (if available); followed by (b) backpropagation of the errors through the network to update its weights, from the last layer to the first. This process is performed continuously to find the optimized values for the weights of the network.
The backpropagation algorithm provides the gradients of the error with respect to the network weights. These gradients are used to update the weights of the network. Calculating them based on the whole input data is computationally demanding and therefore the common practice is to use subsets of the training set, termed *mini-batches*, and cycle over the entire training set multiple times. Each cycle of training over the whole dataset is termed an *epoch*, and in every cycle the data samples are used in a random order to avoid biases. The training process ends when convergence in the loss function is obtained. Since most NN problems are not convex, an optimal solution is not assured. We turn now to describe the training process using backpropagation in more detail.
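The mini-batch/epoch scheme described above can be sketched schematically. All names here are ours: `grad_fn` stands in for the backpropagation-computed gradient, and a plain gradient step is used for simplicity (optimizers are discussed in Section \[sec:optimizers\]):

```python
import numpy as np

def train(data, labels, weights, grad_fn, lr=0.1, batch_size=32, epochs=3):
    # Cycle over the training set several times (epochs), visiting the
    # samples in a fresh random order each cycle to avoid biases.
    rng = np.random.default_rng(0)
    n = len(data)
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = order[start:start + batch_size]      # one mini-batch
            g = grad_fn(weights, data[batch], labels[batch])
            weights = weights - lr * g                   # gradient step
    return weights

# Toy convex problem: MSE gradient drives w towards the label mean (3.0).
X = np.ones((100, 1))
Y = 3.0 * np.ones(100)
grad = lambda w, x, y: np.array([np.mean(w[0] * x[:, 0] - y)])
w = train(X, Y, np.zeros(1), grad, lr=0.5, epochs=20)
assert abs(w[0] - 3.0) < 1e-3
```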
Backpropagation {#subsec:backprop}
---------------
The backpropagation process is performed to update all the parameters of the model, with the goal of decreasing the loss function value. The process starts with a feed-forward pass of the input data, $\mathbf{x}$, through all the network layers, after which the loss function value is calculated and denoted as ${\mathcal{L}}(\mathbf{x},{\bf W})$, where ${\bf W}$ are the model parameters (including the model weights and biases, for formulation convenience). Then backpropagation is initiated by computing $\frac{\partial {\mathcal{L}}}{\partial {\bf W}}$, followed by the update of the network weights. All the weights are updated recursively by calculating the gradients of every layer, from the final one back to the input layer, using the chain rule.
Denote the output of layer $l$ as ${\bf z}^{(l)}$. Following the chain rule, the gradients of a given layer $l$ with parameters ${\bf W}^{(l)}$ with respect to its input ${\bf z}^{(l-1)}$ are: $$\frac{\partial {\mathcal{L}}}{\partial {\bf z}^{(l-1)}}=\frac{\partial {\mathcal{L}}}{\partial {\bf z}^{(l)}}\cdot\frac{\partial {\bf z}^{(l)}({\bf W}^{(l)},{\bf z}^{(l-1)})}{\partial {\bf z}^{(l-1)}},$$ and the gradients with respect to the parameters are: $$\frac{\partial {\mathcal{L}}}{\partial {\bf W}^{(l)}}=\frac{\partial {\mathcal{L}}}{\partial {\bf z}^{(l)}}\cdot\frac{\partial {\bf z}^{(l)}({\bf W}^{(l)},{\bf z}^{(l-1)})}{\partial {\bf W}^{(l)}}.$$ These two formulas of the backpropagation algorithm dictate the gradient calculation with respect to the parameters of each layer in the network and, therefore, the optimization can be performed using gradient-based optimizers (see Section \[sec:optimizers\] for more details).
To demonstrate the use of the backpropagation technique for the calculation of the network gradients, we consider an example of a simple classification model with two layers: a fully-connected layer with a ReLU activation function, followed by another fully-connected layer with a softmax function and log-loss. See Fig. \[fig:example\] for the model illustration.
Denote by ${\bf z}^{(3)}$ the output of the softmax layer and assume that the input $\mathbf{x}$ belongs to class $k$ (using one-hot encoding $y_k=1$). The log-loss in this case is: $${\mathcal{L}}=-\sum_i\log\big(z_i^{(3)}\big)y_i=-\log\Bigg(\frac{\exp\big(z^{(2)}_k\big)}{\sum_i\exp\big(z^{(2)}_i\big)}\Bigg)=-z^{(2)}_k+\log\Big(\sum_j \exp{z^{(2)}_j}\Big).$$ For all $i\neq k$, the gradient of the error with respect to the softmax input $z_i^{(2)}$ is $$\frac{\partial {\mathcal{L}}}{\partial z^{(2)}_i}=\frac{\exp\big({z^{(2)}_i}\big)}{\sum_j\exp\big(z^{(2)}_j\big)}\equiv g_i.$$ Notice that this implies that we need to decrease the value of $z_i^{(2)}$ (the $i^{\text{th}}$-logit) proportionally to the probability the network provides to it. While for the correct label, $i=k$, the derivative is: $$\frac{\partial {\mathcal{L}}}{\partial z^{(2)}_k}=-1+\frac{\exp\big({z^{(2)}_k}\big)}{\sum_j\exp\big(z^{(2)}_j\big)}= g_k-1,$$ which implies that the value of the logit element associated with the true label should be increased proportionally to the mistake the network is currently doing in the prediction.
The output ${\bf z}^{(2)}$ is a product of a fully-connected layer. Therefore, it can be formulated as follows: $${\bf z}^{(2)}={\bf W}^{(2)}\tilde{{\bf z}}^{(1)},$$ where $\tilde{{\bf z}}^{(1)}$ is the output of the ReLU function. Following the backpropagation rules we get that for this layer, the derivative with respect to its input is: $$\label{eq:backprop_fc2}
\frac{\partial {\mathcal{L}}}{\partial \tilde{{\bf z}}^{(1)}}=\frac{\partial {\mathcal{L}}}{\partial {\bf z}^{(2)}}\cdot\frac{\partial {\bf z}^{(2)}({\bf W}^{(2)},\tilde{{\bf z}}^{(1)})}{\partial \tilde{{\bf z}}^{(1)}}=\frac{\partial {\mathcal{L}}}{\partial {\bf z}^{(2)}}\cdot{\bf W}^{(2)},$$ whereas the derivative with respect to its parameters is: $$\frac{\partial {\mathcal{L}}}{\partial {\bf W}^{(2)}}=\frac{\partial {\mathcal{L}}}{\partial {\bf z}^{(2)}}\cdot\frac{\partial {\bf z}^{(2)}({\bf W}^{(2)},\tilde{{\bf z}}^{(1)})}{\partial {\bf W}^{(2)}}=\frac{\partial {\mathcal{L}}}{\partial {\bf z}^{(2)}}\cdot \tilde{{\bf z}}^{(1)}.$$ The ReLU operation has no weight to update, but affects the gradients. The derivative of this stage follows: $$\frac{\partial {\mathcal{L}}}{\partial {\bf z}^{(1)}}=\frac{\partial {\mathcal{L}}}{\partial \tilde{{\bf z}}^{(1)}}\cdot\frac{\partial \tilde{{\bf z}}^{(1)}({\bf z}^{(1)})}{\partial {\bf z}^{(1)}}=\begin{cases}
0, &\text{if } {\bf z}^{(1)}<0\\
\frac{\partial {\mathcal{L}}}{\partial \tilde{{\bf z}}^{(1)}}, &\text{otherwise}.
\end{cases}$$ The final derivative with respect to the input, $\partial {\mathcal{L}}/\partial \mathbf{x}$, is calculated similarly to Eq. \[eq:backprop\_fc2\].
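To make the derivation above concrete, here is a minimal NumPy sketch of the two-layer example (fully-connected, ReLU, fully-connected, softmax with log-loss). The layer sizes, random seed and weights are arbitrary illustrative choices; the analytic gradients are checked against a finite-difference approximation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4-dim input, 5 hidden units, 3 classes.
x = rng.normal(size=4)
W1 = rng.normal(size=(5, 4))   # first fully-connected layer
W2 = rng.normal(size=(3, 5))   # second fully-connected layer
k = 1                          # index of the true class

# Forward pass.
z1 = W1 @ x                               # first-layer pre-activation
z1_t = np.maximum(z1, 0)                  # ReLU output (z~(1))
z2 = W2 @ z1_t                            # logits (z(2))
g = np.exp(z2 - z2.max()); g /= g.sum()   # softmax output (z(3))
loss = -np.log(g[k])                      # log-loss

# Backward pass, following the chain rule above.
dz2 = g.copy(); dz2[k] -= 1.0   # dL/dz2: g_i for i != k, g_k - 1 for i = k
dW2 = np.outer(dz2, z1_t)       # dL/dW2
dz1_t = W2.T @ dz2              # dL/dz~(1)
dz1 = dz1_t * (z1 > 0)          # ReLU gate: zero where z1 < 0
dW1 = np.outer(dz1, x)          # dL/dW1

# Check dW1 against a finite-difference approximation of the loss.
eps = 1e-6
num = np.zeros_like(W1)
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        Wp = W1.copy(); Wp[i, j] += eps
        zp = np.maximum(Wp @ x, 0)
        lp = -(W2 @ zp)[k] + np.log(np.exp(W2 @ zp).sum())
        num[i, j] = (lp - loss) / eps
print(np.abs(num - dW1).max())  # small if the analytic gradients are correct
```

The finite-difference check is a standard sanity test for hand-derived gradients; agreement to a few decimal places confirms the chain-rule derivation.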
Training considerations
-----------------------
There are several considerations that should be addressed when training a NN. The most infamous is *overfitting*, i.e., when the model fits the training dataset too closely but does not generalize well to the test set. When this occurs, high precision is achieved on the training data while the precision on the test data (not used during training) is low [@Tetko95]. To mitigate this, various regularization techniques have been proposed. We discuss some of them in Section \[sec:regularizations\].
A second consideration is the vanishing/exploding gradients occurring during training. Vanishing gradients are a result of multiplications by values smaller than one during their calculation in the backpropagation recursion. This can be resolved using suitable activation functions and batch normalization, detailed in Section \[sec:regularizations\]. On the other hand, the gradients might also explode due to derivatives that are significantly larger than one in the backpropagation calculation. This makes the training unstable and may imply the need to re-design the model (e.g., replace a vanilla RNN with a gated architecture such as an LSTM) or to use gradient clipping [@Pascanu2013].
Another important issue is the requirement that the training dataset represent the true distribution of the task at hand. This usually requires very large annotated datasets, which demand significant funding and manpower to obtain. Considerable effort must then be invested in training the network on these large datasets, commonly with multiple GPUs for several days [@AlexNet; @karras2018progressive]. One may use techniques such as domain adaptation [@Wilson2018ASO] or transfer learning [@Transfer_survey] to leverage already existing networks or large datasets for new tasks.
Training optimizers {#sec:optimizers}
===================
Training neural networks is done by applying an optimizer to reach an optimal solution for the defined loss function. Its goal is to find the parameters of the model, e.g., weights and biases, which achieve minimum error for the training set samples: $(\mathbf{x}_i, y_i)$, where $y_i$ is the label for the instance $\mathbf{x}_i$. For a loss function ${\mathcal{L}}(\cdot)$, the objective reads as: $$\label{eq:training_error}
\sum_i{{\mathcal{L}}(\Phi(\mathbf{x}_i,\mathbf{W}),y_i)},$$ where, for ease of notation, all model parameters are denoted as $\mathbf{W}$. A variety of optimizers have been proposed and implemented for minimizing Eq. \[eq:training\_error\]. Yet, due to the size of the network and training dataset, mainly first-order methods are considered, i.e., strategies that rely only on the gradients (and not on second-order derivatives such as the Hessian).
Several gradient based optimizers are commonly used for updating the parameters of the model. These NN parameters are updated in the opposite direction of the objective function’s gradient, $g_{\{\text{GD},\mathcal{T}(t)\}}$, where $\mathcal{T}(t)$ is a randomly chosen subgroup of size $n'<n$ training samples used in iteration $t$ ($n$ is the size of the training dataset). Namely, at iteration $t$ the weights are calculated as $$\label{eq:update_weights}
\mathbf{W}(t)=\mathbf{W}(t-1)-\eta \cdot g_{\{\text{GD},\mathcal{T}(t)\}},$$ where $\eta$ is the learning rate that determines the size of the steps taken to reach the (local) minimum and the gradient step, $g_{\text{\{GD},\mathcal{T}(t)\}}$ is computed using the samples in $\mathcal{T}(t)$ as $$\label{eq:GD}
g_{\text{\{GD},\mathcal{T}(t)\}} = \frac{1}{n'}\sum_{i\in \mathcal{T}(t)}\nabla _{W}\mathcal{L}(\mathbf{W}(t);\mathbf{x}_i;y_i),$$ where the pair $(\mathbf{x}_i,y_i)$ is a training example and its corresponding label in the training set, and $\mathcal{L}$ is the loss function. However, calculating the gradient over the whole dataset is computationally demanding. To this end, Stochastic Gradient Descent (SGD) is more popular, since it calculates the gradient in Eq. \[eq:GD\] for only one randomly chosen example from the data, i.e., $n'=1$.
Since the update of SGD depends on a different sample at each iteration, it has a high variance that causes the loss value to fluctuate. While this behavior may enable it to jump to a new and potentially better local minimum, it might ultimately complicate convergence, as SGD may keep overshooting. To improve convergence and exploit parallel computing power, mini-batch SGD has been proposed, in which the gradient in Eq. \[eq:GD\] is calculated with $n'>1$ (but not all the data).
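The update rule above can be sketched with mini-batch SGD on a toy least-squares problem. The data, ground-truth weights `w_true`, learning rate and batch size below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = X w_true + small noise (w_true is made up).
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=n)

def grad(w, idx):
    """Mini-batch gradient of the mean-squared error over the samples idx."""
    Xb, yb = X[idx], y[idx]
    return 2.0 / len(idx) * Xb.T @ (Xb @ w - yb)

w = np.zeros(d)
eta, batch = 0.1, 16                # learning rate and mini-batch size n'
for t in range(500):
    idx = rng.choice(n, size=batch, replace=False)  # random subgroup T(t)
    w -= eta * grad(w, idx)         # W(t) = W(t-1) - eta * g
print(w)                            # approximately recovers w_true
```

With $n'=1$ this is plain SGD; using a mini-batch of 16 reduces the gradient variance and lets the inner products be parallelized.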
An acceleration in convergence may be obtained by using the history of the last gradient steps in order to stabilize the optimization. One such approach uses adaptive momentum instead of a fixed step size. This is calculated based on exponential smoothing of the gradients, i.e.: $$\begin{aligned}
M(t)&=\beta\cdot M(t-1)+(1-\beta)\cdot g_{\{\text{SGD},\mathcal{T}(t)\}},\\
\mathbf{W}(t)&=\mathbf{W}(t-1)-\eta M(t),
\end{aligned}$$ where $M(t)$ approximates the $1^\text{st}$ moment of $g_{\{\text{SGD},\mathcal{T}(t)\}}$. A typical value for the constant is $\beta=0.9$, which implies taking into account roughly the last $10$ gradient steps in the momentum variable $M(t)$ [@Qian99]. A well-known variant of Momentum, proposed by Nesterov *et al.* [@nesterov1983method], is the Nesterov Accelerated Gradient (NAG). It is similar to Momentum but calculates the gradient step as if the network weights had already been updated with the current Momentum direction.
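A minimal sketch of the momentum update above, applied to a simple quadratic loss $f(\mathbf{w})=\frac{1}{2}\|\mathbf{w}\|^2$ whose gradient is just $\mathbf{w}$; the step size, $\beta$ and starting point are illustrative choices:

```python
import numpy as np

def momentum_step(w, m, g, eta=0.1, beta=0.9):
    """One momentum update: exponential smoothing of the gradients."""
    m = beta * m + (1.0 - beta) * g   # M(t)
    w = w - eta * m                   # W(t) = W(t-1) - eta * M(t)
    return w, m

# Minimize f(w) = 0.5 * ||w||^2 (gradient is simply w) as a toy check.
w, m = np.array([2.0, -3.0]), np.zeros(2)
for _ in range(2000):
    w, m = momentum_step(w, m, g=w)
print(w)  # approaches the minimum at the origin
```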
Another popular technique is the Adaptive Moment Estimation (ADAM) [@Adam14], which also computes adaptive learning rates. In addition to storing an exponentially decaying average of past squared gradients, $V(t)$, ADAM also keeps an exponentially decaying average of past gradients, $M(t)$, in the following way: $$\begin{aligned}
M(t)&=\beta_1M(t-1)+(1-\beta_1)g_t, \\
V(t)&=\beta_2V(t-1)+(1-\beta_2)g_t^2,
\end{aligned}$$ where $g_t$ is the gradient of the current batch, $\beta_1$ and $\beta_2$ are ADAM’s hyperparameters, usually set to 0.9 and 0.999 respectively, and $M(t)$ and $V(t)$ are estimates of the first moment (the mean) and the second moment (the uncentered variance) of the gradients, respectively; hence the name of the method, Adaptive Moment Estimation. As $M(t)$ and $V(t)$ are initialized as vectors of 0’s, the authors of ADAM observe that they are biased towards zero, especially during the initial time steps. To counteract these biases, bias-corrected first and second moments are used: $\hat{M}(t)=M(t)/(1-\beta_1^t)$ and $\hat{V}(t)=V(t)/(1-\beta_2^t)$. Therefore, the ADAM update rule is as follows: $$\mathbf{W}(t+1)=\mathbf{W}(t)-\frac{\eta}{\sqrt{\hat{V}(t)}+\epsilon}\hat{M}(t).$$ ADAM has two popular extensions: AdamW by Loshchilov *et al.* [@loshchilov2018decoupled] and AMSGrad by Reddi *et al.* [@Reddi2018]. There are several additional common optimizers that have adaptive momentum, such as AdaGrad [@Duchi2011], AdaDelta [@Zeiler2012] and RMSprop [@Dauphin2015]. It must be noted that since NN optimization is non-convex, the minimal error point reached by each optimizer is rarely the same. Thus, speedy convergence is not always favored. In particular, it has been observed that Momentum leads to better generalization than ADAM, which usually converges faster [@keskar2017improving]. Thus, the common practice is to develop with ADAM and then perform the final training with Momentum.
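The ADAM recursions and bias corrections above can be sketched as follows. The hyperparameter defaults follow the common choices mentioned in the text; the toy quadratic objective and iteration count are only for illustration:

```python
import numpy as np

def adam_step(w, m, v, g, t, eta=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update with bias-corrected moment estimates (t starts at 1)."""
    m = b1 * m + (1 - b1) * g          # first-moment estimate M(t)
    v = b2 * v + (1 - b2) * g**2       # second-moment estimate V(t)
    m_hat = m / (1 - b1**t)            # bias corrections
    v_hat = v / (1 - b2**t)
    w = w - eta * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy check on f(w) = 0.5 * ||w||^2, whose gradient is w.
w = np.array([2.0, -3.0])
m, v = np.zeros(2), np.zeros(2)
for t in range(1, 5001):
    w, m, v = adam_step(w, m, v, g=w, t=t)
print(w)
```

Note how early steps are rescaled by the bias corrections: at $t=1$, $\hat{M}(1)=g_1$ exactly, rather than the heavily shrunk $0.1\,g_1$.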
![Different image augmentations: the original image, a horizontal flip, crops at two scales, and an added-noise version.[]{data-label="fig:augmentations"}](pics/cat.png){width="0.75\linewidth"}
Training regularizations {#sec:regularizations}
========================
One of the great advantages of NNs is their ability to generalize, i.e., to correctly predict unseen data [@Jakubovitz2019]. This must be ensured during the training process and is accomplished by several regularization methods, detailed here. The most common are weight decay [@Krogh_WD92], dropout [@Srivastava2014], batch normalization [@IoffeS15] and the use of data augmentation [@Shorten2019].
*Weight decay* is a basic tool to limit the growth of the weights. It adds to the cost function a regularization term that penalizes large weights: the sum of squares of all the weights, i.e., $\sum_i |W_i|^2$.
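Since the gradient of the penalty $\lambda\sum_i|W_i|^2$ is $2\lambda W$, each update multiplicatively shrinks the weights; hence the name. A sketch with illustrative learning rate and $\lambda$:

```python
import numpy as np

def sgd_step_weight_decay(w, g, eta=0.1, lam=0.01):
    """SGD step with weight decay: the penalty contributes 2*lam*w
    to the gradient, shrinking w by a factor (1 - 2*eta*lam) per step."""
    return w - eta * (g + 2.0 * lam * w)

w = np.array([5.0, -5.0])
for _ in range(100):
    w = sgd_step_weight_decay(w, g=np.zeros_like(w))  # zero data gradient
print(w)  # weights decay toward zero even with no data gradient
```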
The key idea in *dropout* is to randomly drop units (along with their connections) from the neural network during training, thus preventing units from co-adapting too much. The fraction of dropped units is critical, since too large a fraction will result in poor learning. Common values are $20\%-50\%$ dropped units.
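A minimal sketch of dropout. The $1/(1-p)$ rescaling of the surviving units ("inverted" dropout, keeping the expected activation unchanged so that no rescaling is needed at test time) is a common implementation choice, not mandated by the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(a, p=0.5, train=True):
    """Zero each unit with probability p and rescale the survivors by
    1/(1-p); acts as the identity at test time."""
    if not train:
        return a
    mask = rng.random(a.shape) >= p   # keep a unit with probability 1-p
    return a * mask / (1.0 - p)

a = np.ones(10000)
out = dropout(a, p=0.3)
print(out.mean())  # close to 1 in expectation
```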
*Batch normalization* is a means to deal with changes in the distribution of the model’s parameters during training. The layers need to adapt to these (often noisy) changes between instances during training. Batch normalization causes the features of each training batch to have a mean of 0 and a variance of 1 in the layer it is applied to. To normalize a value across a batch, i.e., to batch normalize the value, the batch mean, $\mu _B$, is subtracted and the result is divided by the batch standard deviation, $\sqrt{\sigma _B^2+\epsilon}$. Note that a small constant $\epsilon$ is added to the variance in order to avoid dividing by zero. The batch normalizing transform of a given input, $\mathbf{x}$, is: $$\text{BN}(\mathbf{x})=\gamma \Bigg( \frac{\mathbf{x}-\mu_B}{\sqrt{\sigma _B^2+\epsilon}}\Bigg)+\beta.$$ Notice the (learnable) scale and bias parameters $\gamma$ and $\beta$, which provide the NN with the freedom to deviate from zero mean and unit variance. BN is less effective when used with small batch sizes, since in this case the statistics calculated per batch are less accurate. Thus, techniques such as group normalization [@Wu_2018_ECCV] and Filter Response Normalization (FRN) [@Singh2019] have been proposed.
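The batch normalizing transform follows directly from the formula above; here is a sketch computing per-feature statistics over the batch dimension, with $\gamma$ and $\beta$ left at their identity values:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch-normalize a (batch, features) array per feature, as in BN(x)."""
    mu = x.mean(axis=0)                  # batch mean mu_B
    var = x.var(axis=0)                  # batch variance sigma_B^2
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta          # learnable scale and bias

rng = np.random.default_rng(0)
x = 3.0 + 2.0 * rng.normal(size=(64, 5))  # batch with mean ~3, std ~2
y = batch_norm(x)
print(y.mean(axis=0), y.std(axis=0))      # ~0 and ~1 per feature
```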
*Data augmentation* is a very common strategy used during training to artificially “increase” the size of the training data and make the network robust to transformations that do not change the input label. For example, in a classification task a shifted cat is still a cat; see Fig. \[fig:augmentations\] for similar augmentations. In a denoising task, a flipped noisy input should result in a flipped clean output. Thus, the network is also trained with the transformed data to improve its performance.
Common augmentations are randomly flipping, rotating, scaling, cropping, translating, or adding noise to the data. Other more sophisticated techniques that lead to a significant improvement in network performance include mixup [@zhang2018mixup], cutout [@devries2017cutput] and augmentations that are learned automatically [@Autoaugment2018; @lim2019fast; @cubuk2019randaugment].
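A minimal sketch of a random augmentation pipeline combining a horizontal flip, a pad-and-crop translation and additive noise; the offsets, pad width and noise scale are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Randomly flip, shift (pad-and-crop) and add noise to an (H, W) image."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                       # horizontal flip
    h, w = img.shape
    dy, dx = rng.integers(0, 5, size=2)          # random shift offsets
    img = np.pad(img, 4, mode="edge")[dy:dy + h, dx:dx + w]
    img = img + rng.normal(scale=0.01, size=img.shape)  # additive noise
    return img

img = rng.random((32, 32))
aug = augment(img)
print(aug.shape)  # label-preserving: same size, transformed content
```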
Advanced NN architectures {#sec:architectures}
=========================
The basic building blocks that compose a NN model are frequently combined into innovative architectures. In this section, such well-known architectures with state-of-the-art performance are presented, divided by tasks and data types: detection and segmentation tasks are described in Section \[subsec:detection\_segmentation\], sequential data handling is elaborated in Section \[subsec:sequential\] and processing data on irregular grids is presented in Section \[subsec:irregular\_grids\]. Clearly, there are many other use-cases and architectures that are not mentioned here.
Deep learning for detection and segmentation {#subsec:detection_segmentation}
--------------------------------------------
Many research works focus on detecting multiple objects in a scene, due to its numerous applications. This problem can be divided into four sub-tasks as follows, where we refer here to image datasets although the same concept can be applied to different domains as well.
1. *Classification and localization*: The main object in the image is detected and then localized by a surrounding bounding box and classified from a pre-known set.
2. *Object detection*: Detection of all objects in a scene that belong to a pre-known set and then classifying and providing a bounding box for each of them.
3. *Semantic segmentation*: Partitioning the image into coherent parts by assigning each pixel in the image its own classification label (associated with the object the pixel belongs to). For example, having a pixel-wise differentiation between animals, sky and background (a generic class for all objects to which no class is assigned) in an image.
4. *Instance segmentation*: Multiple objects segmentation and classification from a pre-known set (similar to object detection but for each object all its pixels are identified instead of providing a bounding box for it).
Today, state-of-the-art object detection performance is achieved with architectures such as Faster-RCNN [@Ren2015FasterRCNN; @wang2017fast], You Only Look Once (YOLO) [@Redmon2015YouOL; @YOLO2; @YOLO3], Single Shot Detector (SSD) [@Liu2016SSDSS] and Fully Convolutional One-Stage Object Detection (FCOS) [@FCOS2019]. The object detection models provide a list of detected bounding boxes with the class of each of them.
Segmentation tasks are mostly implemented using fully convolutional networks. Known segmentation models include UNet [@Unet], Mask-RCNN [@He2017MaskR] and Deeplab [@deeplabv3plus2018]. These architectures have the same input/output spatial size, since the output represents the segmentation map of the input image.
Both object detection and segmentation tasks are analyzed via the Intersection over Union (IoU) metric. The IoU is defined as the ratio between the intersection area of the object’s ground-truth pixels, $B_g$, with the corresponding predicted pixels, $B_p$, and the union of these groups of pixels. The IoU is formulated as: $$\text{IoU}=\frac{\text{Area}\{B_g\cap B_p\}}{\text{Area}\{B_g\cup B_p\}}.$$ As this measure evaluates only the quality of the bounding box, the mean Average Precision (mAP) is commonly used to evaluate the model's performance. The mAP is defined as the ratio of correctly detected (or segmented) objects, where an object is considered to be detected correctly if there is a bounding box for it with the correct class and an IoU greater than 0.5 (or another specified constant).
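For axis-aligned bounding boxes the IoU formula reduces to simple interval arithmetic; a sketch assuming boxes are given as $(x_1, y_1, x_2, y_2)$ corner coordinates:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) corners."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)  # union = A + B - intersection

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7
```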
Another common evaluation metric is the F1 score, which is the harmonic mean of the precision and recall values; see Eq. \[eq:F1\_score\] below. They are calculated using the following definitions, presented here for the case of semantic segmentation:
- True Positive (TP): the predicted class of a pixel matches its ground-truth label.
- False Positive (FP): a pixel was falsely predicted as belonging to an object.
- False Negative (FN): a ground-truth pixel of an object was not predicted.
Now that they are defined, the *precision*, *recall* and F1 are given by: $$\text{precision}=\frac{\text{TP}}{\text{TP}+\text{FP}},\hspace{10pt}
\text{recall}=\frac{\text{TP}}{\text{TP}+\text{FN}}$$ $$\label{eq:F1_score}
\text{F1}=2\cdot\frac{\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}.$$
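The precision, recall and F1 definitions above translate directly into code; the TP/FP/FN counts below are made-up illustrative numbers:

```python
def f1_score(tp, fp, fn):
    """Precision, recall and their harmonic mean (F1) from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# E.g., 80 correctly labelled pixels, 20 false alarms, 40 misses:
p, r, f1 = f1_score(tp=80, fp=20, fn=40)
print(p, r, f1)  # 0.8, 0.666..., 0.727...
```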
Deep learning on sequential data {#subsec:sequential}
--------------------------------
Sequential data are composed of time-sensitive signals such as the outputs of different sensors, audio recordings, NLP sentences, or any signal whose order is of importance. Therefore, these data must be processed accordingly.
Initially, sequential data were processed with the Recurrent NN (RNN) [@Jain1999RNN], which has recurrent (feedback) connections, where the outputs of the network at a given time-step serve as input to the model (in addition to the input data) at the next time-step. This introduces the time-dependent behavior of the NN. An RNN is illustrated in Fig. \[fig:RNN\].
However, it was quickly realized that during training, vanilla RNNs suffer from vanishing/exploding gradients. This phenomenon, which originates from the finite-precision backpropagation process, limits the length of the sequences that can be handled.
To this end, a cornerstone block is used: the Long Short-Term Memory (LSTM [@Hochreiter1997]). Mostly used for NLP tasks, the LSTM is an RNN block with gates. During training, these gates learn which part of the sentence to forget or to memorize. The gating allows some of the gradients to backpropagate unchanged, which alleviates the vanishing gradient problem. Notice that RNNs (and LSTMs) can process a sentence in a bi-directional mode, i.e., process a sentence in two directions, from the beginning to the end and vice versa. This mechanism allows the network a better grasp of the input context. Examples of popular research tasks on NLP data include question answering [@Radford2018ImprovingLU], translation [@lample2017unsupervised] and text generation [@TextGeneration].
[**Sentences processing.**]{} An important issue in NLP is representing words so that they can serve as network input. The use of straightforward indices is not effective since there are thousands of words in a language. Therefore, it is common to process text data via *word embedding*, which is a vector representation of each word in some fixed dimension. This method makes it possible to encapsulate relationships between words.
A classic methodology to calculate the word embedding is *Word2Vec* [@NIPS2013_5021], in which these vector representations are calculated using a NN model that learns their context. More advanced options for creating efficient word representations include BERT [@Bert2018], ELMO [@Peters2018], RoBERTa [@liu2020roberta] and XLNet [@yang2019xlnet_arxiv].
[**Audio processing.**]{} Audio recordings are used for multiple interesting tasks, such as speech to text, text to speech and speech processing. In the audio case, the common input to speech systems is the Mel Frequency Cepstral Coefficients (MFCC) or a Short Time Fourier Transform (STFT) image, as opposed to the raw audio data. A milestone example of a speech processing NN architecture is *wavenet* [@WaveNet_Arxiv]. This architecture is an autoregressive model that synthesizes speech or audio signals. It is based on dilated convolutional layers that have large receptive fields, which allow efficient processing. Another prominent synthesis model for sequential data is the Tacotron [@Shen18Natural].
[**The attention model.**]{} As mentioned in Section \[sec:basic\_structure\], one may use RNNs for translation using the encoder-decoder model, which encodes a source sentence into a vector that is then decoded into the target language. Instead of relying on a compressed vector, which may lose information, *attention models* learn where or what to focus on within the whole input sequence. Introduced in 2015 [@Bengio2015], attention models have shown superior performance over encoder-decoder architectures in tasks such as translation, text to speech and image captioning. Recently, it has been suggested to replace the recurrent network structure entirely with the attention mechanism, which results in the *transformer network* models [@Transformers2017].
Deep learning on irregular grids {#subsec:irregular_grids}
--------------------------------
A wide variety of data acquisition mechanisms do not represent the data on a grid, as is common with image data. A prominent example is 3D imaging (e.g., using LIDAR), where the input data are represented as points in a 3D space, with or without color information. Processing such data is not trivial, as standard network components, such as convolutions, assume the data lies on a grid. Therefore, they cannot be applied as is, and custom operations are required. We focus our discussion here on the case of NNs for 3D data.
Today, real-time processing of 3D scenes can be achieved with advanced NN models that are customized to these irregular grids. The different processing techniques for these irregular grid data can be divided by the type of representation used for the data:
1. [**Points processing.**]{} 3D data points are processed as points in space, i.e., a list of the point coordinates is given as the input to the NN. A popular network for this representation is *PointNet* [@Qi2016PointNetDL]. It was the first to efficiently achieve satisfactory results directly on the point cloud. Yet, it is limited by the number of points that can be analyzed, computational time and performance. Some more recent models that improve its performance include PointNet++ [@qi2017pointnetplusplus], PointCNN [@Li2018PointCNNCO] and DGCNN [@dgcnn]. Strategies to improve its efficiency have been proposed in learning to sample [@Dovrat_2019_CVPR] and RandLA-Net [@hu2019randla].
2. [**Multi-view 2D projections.**]{} 3D data points are projected (from various angles) to the 2D domain so that known 2D processing techniques can be used [@NIPS2016; @Kalogerakis2016].
3. [**Volumetric (voxels).**]{} 3D data points are represented in a grid-based *voxel* representation. This is analogous to a 2D representation and is therefore advantageous. However, it is computationally demanding [@Wu2014] and loses resolution.
4. [**Meshes.**]{} A mesh represents the 3D domain via a graph that defines the connectivity between the different points. This graph has a special structure such that it creates the surface of the 3D shape (in the common case of a triangular mesh, the shape surface is represented by a set of triangles connected to each other). In 2015, Masci *et al.* [@Boscaini2015LearningCD] showed that it is possible to learn features on meshes using DL. Since then, significant advancements have been made in mesh processing [@hanocka2019meshcnn; @Bronstein2017].
5. [**Graphs.**]{} Graph representations are common for representing non-linear structured data. Some works have proposed efficient NN models for 3D data points on a grid-based graph structure [@Such2017RobustSF; @Niepert2016].
Summary {#sec:summary}
=======
This chapter provided a general survey of the basic concepts in neural networks. As this field is expanding very fast, the space here is too short to describe all of its developments, even though most of them are from the past eight years. Yet, we briefly mention a few important problems that are currently being studied.
1. [**Domain adaptation and transfer learning.**]{} As many applications necessitate data that are very difficult to obtain, some methods aim at training models based on scarce datasets. A popular methodology for dealing with insufficient annotated data is *domain adaptation*, in which a robust and high-performance NN model, trained on a source distribution, is used to aid the training of a similar model (usually with the same goal, e.g., in classification the same classes are searched for) on data from a target distribution that are either unlabelled or small in number [@ganin2014unsupervised; @pan2010domain; @DIRT-T]. An example is adapting a NN trained on simulation data to real-life data with the same labels [@Tzeng2017AdversarialDD; @CyCADA2018]. On a similar note, *transfer learning* [@Transfer_survey; @DeCAF14] can also be used in similar cases, where in addition to the difference in the data, the input and output tasks are not the same but only similar (in domain adaptation the task is the same and only the distributions are different). One such example is using a network trained on natural images to classify medical data [@Bar15Deep].
2. [**Few shot learning.**]{} A special case of learning with small datasets is *few-shot learning* [@Wang2019], where one is provided either with just semantic information of the target classes (zero-shot learning), only one labelled example per class (1-shot learning) or just few samples (general few-shot learning). Approaches developed for these problems have shown great success in many applications, such as image classification [@sung2018learning; @NIPS2018_7549; @Sun_2019_CVPR], object detection [@Karlinsky_2019_CVPR] and segmentation [@caelles2017one].
3. [**On-line learning.**]{} Various deep learning challenges occur due to new distributions or class types being introduced to the model during the continuous operation of the system (post-training), which must now be learnt by the model. The model can update its weights to incorporate these new data using *on-line learning* techniques. Special training is needed in this case, as systems that simply learn from the new examples may suffer reduced performance on the original data. This phenomenon is known as catastrophic forgetting [@kemker2018measuring]. Often, the model tends to forget the representation of the part of the distribution it has already learned and thus develops a bias towards the new data. A specific example of on-line learning is *incremental learning* [@castro2018end], where the new data are of different classes than the original ones.
4. [**AutoML.**]{} When approaching real-life problems, there is an inherent pipeline of tasks to be performed before using DL tools, such as problem definition, preparing the data and processing it. Commonly, these tasks are performed by specialists and require deep system understanding. To this end, the *autoML* paradigm attempts to generalize this process by automatically learning and tuning the model used [@autoML].
A particularly popular task in autoML is *Neural Architecture Search (NAS)* [@Elsken2018NeuralAS]. This is of interest since the NN architecture restricts its performance. However, searching for the optimal architecture for a specific task, from a set of pre-defined operations, is computationally exhaustive when performed in a straightforward manner. Therefore, on-going research attempts to overcome this limitation. An example is the DARTS [@liu2018darts] strategy and its extensions [@noy2019asap; @chen2019progressiveDARTS], whose key contribution is finding, in a differentiable manner, the connections between network operations that form a NN architecture. This framework decreases the search time and improves the final accuracy.
5. [**Reinforcement Learning.**]{} To date, the most effective training method for decision-based actions, such as robot movement and video games, is *Reinforcement Learning* (RL) [@kaelbling1996reinforcement; @sutton2018reinforcement]. In RL, the model tries to maximize some pre-defined reward score by learning which action to take, from a set of defined actions, in specific scenarios.
To summarize, being able to efficiently train deep neural networks has revolutionized almost every aspect of the modern day-to-day life. Examples span from bio-medical applications through computer graphics in movies and videos to international scale applications of big companies, such as Google, Amazon, Microsoft, Apple and Facebook. Evidently, this theory is drawing much attention and we believe there is still much to unravel, including exploring and understanding the NN’s potential abilities and limitations.
The next chapters detail Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), generative models and autoencoders. All are very important paradigms that are used in numerous applications.
4 Killer Tips To Nail Your Next Portrait Shoot
If you’re a portrait photographer perhaps you’ve found yourself in the position of carrying bulky, heavy gear around. That’s unnecessary though. With these 4 easy tips you’ll be able to get the best results at your next portrait session without the hassle and the back pain afterwards.
Photo by Paul Faecks
These tips also help you save a fair amount of money that you can spend elsewhere (perhaps on the $4k Zeiss Otus 85mm?). Just kidding: unless you’re rich you won’t be able to afford the Zeiss anyway…
TIP #1 Use the Diffusion Of Your Reflector As a Light source
I used to carry softboxes, strobes and heavy battery packs to every on-location shoot I had. Terrible mistake. Most reflectors come with a semi-transparent inner sheet of diffusion material (see image) that can be used to soften hard sunlight. The lighting that you are able to achieve using this diffusion material is nothing short of amazing and reflectors weigh and cost next to nothing so why not give it a shot at your next shoot?
To effectively use this technique though you’ve got to understand that as soon as you place the reflector between a light source (for example the sun) and your subject, the reflector basically becomes a light source. With that in mind you can manipulate the softness and direction of the light just as you could if it was a softbox or any other kind of light modifier. If you go completely crazy you can also use multiple reflectors to gain even more control over the lighting in your image. And there you have it: professional looking lighting that can be achieved on a tight budget and without breaking your back.
Photo by Paul Faecks
If you want to get yourself a reflector: this one is my favorite because it can easily be operated without the help of an assistant.
TIP #2 Get Your Model Comfortable/Build Social Connections
Many photographers (including myself) have at some point made the mistake of focusing all their efforts and energy on getting a technically perfect image while not taking communication with the model and the team seriously enough. Although you totally should! Many times facial expression and posing are just… awkward in amateur pictures, and you, as the photographer, are to blame for that.
Try to figure out how to comfort your model, how to properly tell your subject how to pose and what to do to elevate your images to the next level. Also, by making friends and connecting with people you greatly improve your chances of getting hired because people will trust and like you more and will therefore be willing to pay you more.
Photo by Paul Faecks
TIP #3 Try Something Unusual
Our world is flooded with images of all kinds, so if you just do what everyone else does, how are you going to stand out? If you want to be noticed you’ve got to develop creative ideas that are different from everything else. Unconventional ideas are often the best and have led to some amazing pictures. So don’t be afraid to experiment with different camera angles, perspectives or an unusual lighting setup!
You're the pilot of your own creative plane, and you're not restricted by rules or the standards of what is "normal." "Normal" images are great, but they've been done dozens of times. I know, it's extremely tough to follow the advice of "just do something unusual," but why not try to shoot the next crazy idea you have? Maybe your idea is so unique and awesome that it instantly goes viral. Who knows?
TIP #4 Go For the Eyes
Eyes are the single most important facial feature when it comes to portraiture. They allow everyone that’s looking at the image to connect with the subject and to see what the person might be feeling. You can use that knowledge to your advantage: Try to make the eyes stand out as much as possible.
Portraits in which the eyes are in deep shadow often look very dark and cold because the viewer isn't able to build that connection with the subject. If that's what you're going for, then go ahead; if you want the photo to look warm, friendly and engaging, though, you shouldn't leave the eyes in deep shadow. Of course that's a creative decision only you as the artist can make. To separate the eyes from the rest of the face even more, you could use a shallow depth of field to blur everything except the eyes.
Conclusion
Shooting great portraits is easier, and requires less expensive gear, than one would think. Many times reflectors do the job just as well as a traditional softbox would, while being far cheaper and lighter. Being able to direct and pose your subject properly is a skill that is just as important as exposure or anything else.
It's often underestimated and forgotten, though. Another critical challenge aspiring photographers face is standing out from everyone else: if almost every photographer is capable of taking the images you took, why would anyone book you instead of someone else? So be unique and offer a photographic service other photographers don't!
What do you think about these tips? Is there anything you would want to add? Feel free to let me know in the comments.
All images (unless stated differently) belong to Paul Faecks and are exclusively used by photodoto with permission. Do not copy, modify or repost without the permission from Paul Faecks and photodoto!
4 Killer Tips To Nail Your Next Blog Post
1) Use your head: try to come up with something original, and do not recycle advice that has been given hundreds of times before.
2) Build a social connection with your audience, engage them in dialogue. Don’t treat them like idiots.
3) Try something unusual, don’t be just another useless blog.
4) Go for real content.
Sarkozy, the former French president, faces a trial over allegations of bribery and accepting illegal campaign donations from Muammar Gaddafi.
Soundex - Ivoah
https://en.wikipedia.org/wiki/Soundex
======
drawkbox
When I worked at eMarketing / Nomadic Agency I used soundex or SOUNDEX() in
Microsoft SQL Server many times. Very useful.
One big place was search across all Kraftfoods sites: recipes, products and
brand sites. One use was ingredient lookups from misspellings in search,
2000-2008ish; it's still there at
[http://www.kraftrecipes.com/](http://www.kraftrecipes.com/) on the search
function. When you put in 'chiken' you'll get 'chicken', for instance. Pretty
useful for misspellings back then and even today.
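The coding that makes 'chiken' and 'chicken' collide is easy to reproduce. Below is a minimal from-scratch sketch of the classic American Soundex rules (first letter kept, consonants mapped to digit classes, h/w transparent, vowels breaking runs of equal codes); it is not the SQL Server implementation, and it assumes plain alphabetic input:

```python
def soundex(word):
    """Classic American Soundex: one letter followed by three digits."""
    groups = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
              "l": "4", "mn": "5", "r": "6"}

    def code(ch):
        for letters, digit in groups.items():
            if ch in letters:
                return digit
        return ""  # vowels and y carry no code

    word = word.lower()
    first = word[0].upper()
    digits = []
    prev = code(word[0])       # a coded first letter suppresses an immediate repeat
    for ch in word[1:]:
        if ch in "hw":
            continue           # h/w are transparent: they don't break a run
        d = code(ch)
        if d and d != prev:
            digits.append(d)
        prev = d               # vowels reset prev, allowing repeats again
    return (first + "".join(digits) + "000")[:4]

print(soundex("chicken"), soundex("chiken"))  # both C250
```

Both misspellings land in the same C250 bucket, which is exactly what makes an indexed SOUNDEX() column usable for the kind of lookup described above.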
Fun fact we later also used Alta Vista search and even had a Google appliance,
back when they made those, for aggregated site searching that tied into all
search results across their brand sites. Search would check misspellings which
part of that was SOUNDEX() then also aggregate search ingredients, recipes,
products and content across their enterprise brand sites using the AV or
Google boxes.
Another fun fact, kraftfoods sites were one of the first Microsoft .NET
production uses. We worked with Microsoft in .NET 1.0 in 2001 to coincide with
the release in 2002. We switched them from a combination of Perl sites and
Java (ATG Dynamo) sites running on 10+ app servers and 20+ Oracle servers to
.NET with 3-4 web/app servers and 3 Microsoft SQL Servers.
~~~
stevesimmons
Ah, ATG Dynamo... Memories of my startup days in 2001. Sun gave us a Solaris
server and pushed us hard to use ATG Dynamo. That lasted all of half a day
before I downloaded Zope, learnt Python and within a couple of days had a far
more customizable prototype site ready to demo.
As I'm now a full time Python developer, I guess I owe it to ATG Dynamo...
------
knadh
Metaphone [1] addresses a lot of issues Soundex has. While Soundex is aimed
specifically at names, Metaphone works for all English words.
PS: Inspired by Metaphone, I wrote MLphone [2], a phonetic lib for the
Malayalam (South India) language. The phonetic keys the algorithm produces are
Roman characters though.
[1]
[https://en.wikipedia.org/wiki/Metaphone](https://en.wikipedia.org/wiki/Metaphone)
[2] [https://nadh.in/code/mlphone/](https://nadh.in/code/mlphone/)
------
joezydeco
If you live in Illinois, Wisconsin, or Florida the Soundex code is used to
create your drivers license number. You can derive almost anyone’s number if
you know their full name and birthdate:
[http://www.highprogrammer.com/alan/numbers/dl_us_shared.html](http://www.highprogrammer.com/alan/numbers/dl_us_shared.html)
------
inertiatic
I will echo the opinion of others in this thread and say that in my experience fuzzy
matching based on string distance metrics is a better approach in most cases I
can think of.
I do search related stuff and we use phonetic algorithms for names (in a
rather interesting way as well which I haven't seen employed elsewhere) and
will occasionally get reports or inquiries of weird unexpected matches, or
questions about small typos not producing any of the expected results.
I feel these approaches were maybe a better fit for a time when talking was
absolutely the main means of communication, but in an era where people
communicate more and more by typing things into their phones, any input is
frequently a) copied over instead of transcribed, or b) first seen written and
then typed out by the user on a small touchscreen keyboard, with one or two
typos of letters close to the actual intended letter.
I wonder, is there an approach that takes this key distance into account?
(i.e. in a search for Nock, results containing Nick should rank higher than Neck)
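One way to fold key distance in (a purely illustrative sketch, not a tuned metric): weight the Levenshtein substitution cost by physical adjacency on a QWERTY layout, so a neighboring-key typo costs less than an arbitrary substitution. The 0.5 weight is an arbitrary assumption:

```python
# Levenshtein distance where substituting physically adjacent QWERTY keys
# is cheaper than an arbitrary substitution (0.5 is an arbitrary weight).
QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
POS = {ch: (r, c) for r, row in enumerate(QWERTY_ROWS)
       for c, ch in enumerate(row)}

def sub_cost(a, b):
    if a == b:
        return 0.0
    if a in POS and b in POS:
        (r1, c1), (r2, c2) = POS[a], POS[b]
        if abs(r1 - r2) <= 1 and abs(c1 - c2) <= 1:
            return 0.5  # neighboring keys: likely a fat-finger typo
    return 1.0

def keyboard_distance(s, t):
    s, t = s.lower(), t.lower()
    m, n = len(s), len(t)
    # standard dynamic-programming edit-distance table
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = float(i)
    for j in range(n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1.0,                      # deletion
                          d[i][j - 1] + 1.0,                      # insertion
                          d[i - 1][j - 1] + sub_cost(s[i - 1], t[j - 1]))
    return d[m][n]

print(keyboard_distance("nock", "nick"), keyboard_distance("nock", "neck"))  # 0.5 1.0
```

With this, "nick" sits closer to "nock" (o and i are neighboring keys) than "neck" does, which matches the ranking asked for above.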
~~~
pcwalton
> I will echo the opinion of others in this and say that in my experience
> fuzzy matching based on string distance metrics is a better approach in most
> cases I can think of.
Most likely, but keep in mind that one of the design goals of Soundex was that
it be easy for a human to work out and that it be indexable. It was developed
for the US Census at the start of the 20th century, after all…
~~~
inertiatic
Yes! I don't doubt it was/is a useful tool for some purposes, just commenting
on using related techniques to tackle text search.
------
kastnerkyle
I've had good luck with Caverphone for a number of speech specific tasks [0].
There is a Python implementation directly in the PDF; I also wrote a version
here [1]. No idea if it exactly matches the PDF version, but it worked for my
cases.
Follow-ups to Soundex such as Metaphone are encumbered by license issues as
far as I know, but Caverphone is free and clear.
[2] is an insanely great overview of many of these algorithms; be sure to
check it out if you are into this stuff.
[0]
[https://caversham.otago.ac.nz/files/working/ctp150804.pdf](https://caversham.otago.ac.nz/files/working/ctp150804.pdf)
[1]
[https://gist.github.com/kastnerkyle/a697d4e762fa8f53c70eea7b...](https://gist.github.com/kastnerkyle/a697d4e762fa8f53c70eea7bc712eead/)
[2] [http://ntz-develop.blogspot.ca/2011/03/phonetic-
algorithms.h...](http://ntz-develop.blogspot.ca/2011/03/phonetic-
algorithms.html)
------
spc476
I used both Soundex and Metaphone to handle URLs for a Bible website:
[http://literature.conman.org/bible/](http://literature.conman.org/bible/) You
could type a URL as:
http://bible.conman.org/kj/genasys.1:1
and it would redirect to the proper page:
http://bible.conman.org/kj/Genesis.1:1
You have to _really_ misspell something for it to not work properly.
------
rasz
If you are interested in phonetic algorithms, sorting, recognizing and
filtering, you will enjoy the Talking Banana Twitch ban story by Useless Duck
Company:
[https://www.youtube.com/watch?v=bJ5ppf0po3k](https://www.youtube.com/watch?v=bJ5ppf0po3k)
~~~
JepZ
Hint: The relevant part to this topic starts at minute 11.
------
JepZ
While I like the idea of Soundex, I always had problems with its fixed length.
In addition it is limited to the English language; for other languages you
might need different algorithms (e.g. Cologne phonetics [1] for German). As
others have mentioned, Metaphone [2] is another alternative which got some
traction in recent years, but I haven't tried it in a real world scenario yet.
For some use-cases n-gram [3] based string comparison might be an option too.
It is in no way phonetic (and therefore usable across many languages), but when it
is just about finding similar words it often produces better results than the
original Soundex (mostly due to its length limitation).
[1]:
[https://en.wikipedia.org/wiki/Cologne_phonetics](https://en.wikipedia.org/wiki/Cologne_phonetics)
[2]:
[https://en.wikipedia.org/wiki/Metaphone](https://en.wikipedia.org/wiki/Metaphone)
[3]:
[https://en.wikipedia.org/wiki/N-gram](https://en.wikipedia.org/wiki/N-gram)
------
kbutler
Years (decades) ago, I read about soundex, and found this little language that
came with a soundex module. That was my introduction to Python, which I've
used for my master's thesis and in personal and professional development.
But Python has since removed the soundex module.
~~~
Abishek_Muthian
Hi, interesting; do you know why?
~~~
kbutler
Found it - it was a C module, and was available until 2.1, when many
little-used modules were removed.
1.6.1 declared it "obsolete":
[https://www.python.org/download/releases/1.6.1/](https://www.python.org/download/releases/1.6.1/)
Obsolete Modules
...
soundex. (Skip Montanaro has a version in Python but it won't be included in the Python release.)
Looks like it was finally removed in 2.1:
[https://www.python.org/download/releases/2.1.3/notes/](https://www.python.org/download/releases/2.1.3/notes/)
matches the NEWS file in
[https://www.python.org/ftp/python/2.1/](https://www.python.org/ftp/python/2.1/)
- Removed the obsolete soundex module.
[https://pypi.org/project/Fuzzy/](https://pypi.org/project/Fuzzy/) has a
modern implementation if you want to play with it.
~~~
Abishek_Muthian
Hey, much appreciated! thanks for the Fuzzy mention as well :)
------
ACow_Adonis
In truth, I've never had much luck with the phonetic algorithms, and I've
implemented Caverphone 2, Double Metaphone, and NYSIIS [0].
Totally subjective, but in my domain I've had better luck either using cheaper
string distance/similarity metrics (Hamming, Jaro-Winkler, etc.), or, if you're
looking for some kind of resource-saving/fuzzy indexing/blocking type use, an
application that uses or extracts ngrams has worked pretty well for me. Your
mileage may vary...
[0] [https://github.com/DJMelksham/SAS-Data-Linking-
Functions](https://github.com/DJMelksham/SAS-Data-Linking-Functions)
------
dmlittle
If you're interested in Soundex, you should also check out Metaphone [1]
[1]
[https://en.wikipedia.org/wiki/Metaphone](https://en.wikipedia.org/wiki/Metaphone)
------
dfdashh
A few years back I worked on record linkage projects that relied in part on
Soundex. My experience is that Soundex is on the faster side of the phonetic
algorithm speed spectrum while (Double) Metaphone is on the other. In the
middle are modifications to Soundex or similar approaches like Soundex2,
Phonex, and NYSIIS.
For those interested I'd highly recommend the work of Peter Christen [1], who
does a ton of research in this space. If you want to see some code, check out
the implementations of several of these algorithms I wrote a while back [2].
[1]:
[http://users.cecs.anu.edu.au/~christen/](http://users.cecs.anu.edu.au/~christen/)
[2]:
[https://github.com/antzucaro/matchr](https://github.com/antzucaro/matchr)
------
da_chicken
I've still got some applications that use SQL Server's SOUNDEX() function for
fuzzy name matching. It's not perfect, but it works pretty well for most
names. I've used it in a student information system to look for duplicate
student entries (it happens more often than you'd think).
------
vszakats
Such a function existed back in Clipper '87, dBase IV, FoxPro and their
descendants. Here's a Clipper-compatible implementation in C:
[https://github.com/vszakats/harbour-
core/blob/master/src/rtl...](https://github.com/vszakats/harbour-
core/blob/master/src/rtl/soundex.c)
(Disclaimer: source code author here.)
------
endriju
A pretty easy way to discover how Soundex works is to play with
[http://gridoc.com/fuzzy-matching/](http://gridoc.com/fuzzy-matching/) - a
tool for fuzzy record matching that supports Soundex and Levenshtein distance.
Disclaimer: I'm the author of the tool.
------
TeMPOraL
> _The algorithm mainly encodes consonants; a vowel will not be encoded unless
> it is the first letter._
Interesting. Isn't this similar to how Hebrew works (or at least the one used
in the Bible worked)? I wonder about the rationale (in either case).
~~~
laurieg
This is actually somewhat close to how spoken English works. We change many
vowel sounds to the neutral 'schwa' in real speech. Also, many accents use
very different vowel sounds (think New Zealand) but are still easy to
understand.
------
codemaniac
My implementation of Soundex in Python -
[https://gist.github.com/codemaniac/4b23ea0b324a25c580b1192cd...](https://gist.github.com/codemaniac/4b23ea0b324a25c580b1192cdf66327a)
------
cakes
I was surprised (years ago) when I learned about SOUNDEX() in Microsoft SQL
Server. I always wondered why SOUNDEX was in SQL Server (not that I thought it
shouldn't be; I was just curious).
Q:
Multidimensional databinding? How to?
I'm trying to create a set of grids and add them to a scrollview.
Here is what it should look like:
http://tinypic.com/r/256gpxf/7
The "depatures" showing up under "other stop" should be dynamic. I know how to create the titles ("other stop") using databinding, but I need to get the depatures for each stop. It's like a databinding inside a databinding of some kind.
It's hard to explain, but if you look at the screenshot I think you guys can figure out what I mean :)
EDIT:
The class for busstops:
public class BusStop
{
public string Name { get; private set; }
public string ID { get; private set; }
public List<Depature> Depatures { get; private set; }
public BusStop(string name, string id)
{
Name = name;
ID = id;
Depatures = new List<Depature>();
}
}
Depature class:
public class Depature
{
public string Destination { get; private set; }
public int Next { get; private set; }
public int NextNext { get; private set; }
public Depature(string destination, int next, int nextNext)
{
Destination = destination;
Next = next;
NextNext = nextNext;
}
}
Each stop has a different set of depatures attached to it. That's what I'm trying to populate in a grid. One grid for each stop. Here is "static" sample xaml for a stop with 4 depatures:
<Grid Margin="0,0,0,12">
<Grid.RowDefinitions>
<RowDefinition Height="42" />
<RowDefinition Height="28" />
<RowDefinition Height="28" />
<RowDefinition Height="28" />
<RowDefinition Height="28" />
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="38" />
<ColumnDefinition Width="280" />
<ColumnDefinition Width="46" />
<ColumnDefinition Width="46" />
</Grid.ColumnDefinitions>
<TextBlock Grid.Row="0" Grid.ColumnSpan="2" Grid.Column="0" FontSize="32" Text="Other Stop" Foreground="#FFE37306" />
<TextBlock VerticalAlignment="Bottom" Grid.Row="0" Grid.Column="2" FontSize="12" Text="avgår"/>
<TextBlock VerticalAlignment="Bottom" Grid.Row="0" Grid.Column="3" FontSize="12" Text="nästa"/>
<Grid Grid.Row="1" Style="{StaticResource VasttrafikGridLine}" Background="#0D4774">
<TextBlock Style="{StaticResource VasttrafikTextLine}" Text="80" />
</Grid>
<TextBlock Margin="6,0,12,0" Grid.Row="1" Grid.Column="1" Text="Nils Eriksson Term" />
<TextBlock HorizontalAlignment="Left" Grid.Row="1" Grid.Column="2" Width="20" Text="5" />
<TextBlock HorizontalAlignment="Left" Grid.Row="1" Grid.Column="3" Width="20" Text="15" />
<Grid Grid.Row="2" Style="{StaticResource VasttrafikGridLine}" Background="#0D4774">
<TextBlock Style="{StaticResource VasttrafikTextLine}" Text="80" />
</Grid>
<TextBlock Margin="6,0,12,0" Grid.Row="2" Grid.Column="1" Text="Nils Eriksson Term" />
<TextBlock HorizontalAlignment="Left" Grid.Row="2" Grid.Column="2" Width="20" Text="5" />
<TextBlock HorizontalAlignment="Left" Grid.Row="2" Grid.Column="3" Width="20" Text="15" />
<Grid Grid.Row="3" Style="{StaticResource VasttrafikGridLine}" Background="#0D4774">
<TextBlock Style="{StaticResource VasttrafikTextLine}" Text="80" />
</Grid>
<TextBlock Margin="6,0,12,0" Grid.Row="3" Grid.Column="1" Text="Nils Eriksson Term" />
<TextBlock HorizontalAlignment="Left" Grid.Row="3" Grid.Column="2" Width="20" Text="5" />
<TextBlock HorizontalAlignment="Left" Grid.Row="3" Grid.Column="3" Width="20" Text="15" />
<Grid Grid.Row="4" Style="{StaticResource VasttrafikGridLine}" Background="#0D4774">
<TextBlock Style="{StaticResource VasttrafikTextLine}" Text="80" />
</Grid>
<TextBlock Margin="6,0,12,0" Grid.Row="4" Grid.Column="1" Text="Nils Eriksson Term" />
<TextBlock HorizontalAlignment="Left" Grid.Row="4" Grid.Column="2" Width="20" Text="5" />
<TextBlock HorizontalAlignment="Left" Grid.Row="4" Grid.Column="3" Width="20" Text="15" />
</Grid>
Is using a grid even a good idea? It seems like I have to define each row with RowDefinitions, etc.
Thanks in advance!
/R
A:
Edit: Now that you posted your code I adjusted the template. You misspelled "Departures" by the way; it's missing an "r".
Sample data I use:
List<BusStop> data = new List<BusStop>();
BusStop busStop1 = new BusStop("Some Stop", "123");
busStop1.Depatures.Add(new Depature("Nils Eriksson Term", 5, 15));
busStop1.Depatures.Add(new Depature("Nils Eriksson Term", 5, 15));
busStop1.Depatures.Add(new Depature("Nils Eriksson Term", 5, 15));
busStop1.Depatures.Add(new Depature("Nils Eriksson Term", 5, 15));
BusStop busStop2 = new BusStop("Other Stop", "42");
busStop2.Depatures.Add(new Depature("Nils Eriksson Term", 5, 15));
busStop2.Depatures.Add(new Depature("Nils Eriksson Term", 5, 15));
busStop2.Depatures.Add(new Depature("Nils Eriksson Term", 5, 15));
busStop2.Depatures.Add(new Depature("Nils Eriksson Term", 5, 15));
BusStop busStop3 = new BusStop("Not A Stop", "0");
busStop3.Depatures.Add(new Depature("Void", 5, 15));
busStop3.Depatures.Add(new Depature("Void", 5, 15));
busStop3.Depatures.Add(new Depature("Void", 5, 15));
busStop3.Depatures.Add(new Depature("Void", 5, 15));
data.Add(busStop1);
data.Add(busStop2);
data.Add(busStop3);
Data = data;
The general approach should be to define nested DataTemplates; here I use an ItemsControl whose ItemTemplate contains headers and another ItemsControl:
<ItemsControl ItemsSource="{Binding Data}">
<ItemsControl.ItemTemplate>
<DataTemplate>
<StackPanel>
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="38" />
<ColumnDefinition Width="280" />
<ColumnDefinition Width="46" />
<ColumnDefinition Width="46" />
</Grid.ColumnDefinitions>
<Grid.Children>
<TextBlock Grid.Column="0" Grid.ColumnSpan="2" Text="{Binding Name}" FontSize="32" Foreground="#FFE37306"/>
<TextBlock Grid.Column="2" VerticalAlignment="Bottom" FontSize="12" Text="avgår"/>
<TextBlock Grid.Column="3" VerticalAlignment="Bottom" FontSize="12" Text="nästa"/>
</Grid.Children>
</Grid>
<ItemsControl ItemsSource="{Binding Depatures}">
<ItemsControl.ItemTemplate>
<DataTemplate>
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="38" />
<ColumnDefinition Width="280" />
<ColumnDefinition Width="46" />
<ColumnDefinition Width="46" />
</Grid.ColumnDefinitions>
<Grid.Children>
<Grid Grid.Column="0" Style="{StaticResource VasttrafikGridLine}" Background="#0D4774">
<TextBlock Grid.Column="0" Text="80" Style="{StaticResource VasttrafikTextLine}"/>
</Grid>
<TextBlock Grid.Column="1" Text="{Binding Destination}" Foreground="DarkBlue"/>
<TextBlock Grid.Column="2" Text="{Binding Next}" HorizontalAlignment="Left" Width="20" Foreground="DarkBlue"/>
<TextBlock Grid.Column="3" Text="{Binding NextNext}" HorizontalAlignment="Left" Width="20" Foreground="DarkBlue"/>
</Grid.Children>
</Grid>
</DataTemplate>
</ItemsControl.ItemTemplate>
</ItemsControl>
</StackPanel>
</DataTemplate>
</ItemsControl.ItemTemplate>
</ItemsControl>
Looks something like this (it's lacking your specific styles and overrides of course):
You still did not specify where your three different blocks come from etc. but anything beyond this is definitely your problem...
The impact of the transcendental meditation program on government payments to physicians in Quebec.
This study evaluated whether governmental medical payments in Quebec were affected by the Transcendental Meditation (TM) technique. This retrospective study used a pre- and postintervention design in which government payments for physicians' services were reviewed for 3 years before and up to 7 years after subjects started the technique. Payment data were adjusted for aging and year-specific variation (including inflation) using normative data. No separate control group was used; thus it is impossible to determine whether the changes were caused by the TM program or some other factor. A volunteer group of 677 provincial health insurance enrollees was evaluated. The subjects had chosen to practice the TM technique before they were selected to enter the study. The subjects (348 men, 329 women) had diverse occupations. Their average age was 38 years and ranged from 18 to 71 years at the start of the TM program. The TM technique of Maharishi Mahesh Yogi is a standardized procedure practiced for 15 to 20 minutes twice daily while sitting comfortably with eyes closed. Province of Quebec, Canada. During the 3 years before starting the TM program, the adjusted payments to physicians for treating the subjects did not change significantly. After beginning TM practice, subjects' adjusted expenses declined significantly. The several methods used to assess the rate of decline showed estimates ranging from 5% to 7% annually. The results suggest that the TM technique reduces government payments to physicians. However, because of the sampling method used, the generalizability of these results to wider populations could not be evaluated.
668 S.E.2d 549 (2008)
MARTIN
v.
The STATE.
No. A08A1097.
Court of Appeals of Georgia.
October 20, 2008.
*551 Stuart M. Mones, Atlanta, for appellant.
Lee Darragh, District Attorney, John G. Wilbanks Jr., Assistant District Attorney, for appellee.
ANDREWS, Judge.
Eddie Davis Martin appeals from the judgment entered after a jury convicted him of aggravated sexual battery, aggravated child molestation, and three counts of child molestation. Martin contends that he was denied effective assistance of counsel, that the prosecutor made impermissible comments during closing argument, that the trial court erred in its response to a question from the jury, and that the trial court also erred in allowing the State to introduce similar transaction evidence. After reviewing the record, we conclude there was no error and affirm.
The evidence at trial, taken in the light most favorable to the verdict, was that Martin began a relationship with the 12-year-old victim after meeting her at a restaurant. Martin was 21 at the time and the victim's father had told him that the victim was only 12 years of age and he should leave her alone.
Nevertheless, Martin began sneaking into the victim's bedroom at night through a window. The victim testified that at first they would just talk, but as the visits went on there was more and more sexual touching. One night the victim became uncomfortable and told Martin to leave. The next time Martin called and asked if he could come over, the victim told him "no." The victim testified that later that night she woke up and Martin was in bed with her and forced her to have sexual intercourse with him. Before Martin left, he told the victim that if she told anyone, he would kill her.
In his defense, Martin's grandfather testified, among others, that on the night of January 23, the night of the alleged rape, Martin was staying at his house. He testified that he slept in a recliner all night and he would have heard if Martin had left the house.
The jury found Martin not guilty of the rape charge and guilty of sexual battery, child molestation and aggravated child molestation. This appeal followed.
1. In his first enumeration of error, Martin argues that the trial court erred in instructing the jury that it could disregard the specific date alleged in Count 4 of the indictment. That count stated that Martin committed the offense of child molestation on January 23, 2006, in that Martin had sexual intercourse with the victim on that date.
During deliberations, the jury sent out a note asking, with regard to that count, "are we obligated to the specific date listed?" The trial court instructed the jury that the State, as a general rule, was not limited to the date alleged in the indictment but could prove the crime on any date within the statute of limitation; the exception being where the indictment specifically alleges that the date is material and in that instance the accused could be convicted only if the State's proof corresponds to the date alleged. The jury then asked, "Has the State made the date material?" The trial court sent back a note stating: "For the date of the offense to be material, the indictment must specifically allege the date of the offense is material."
Martin claims that because he was relying on an alibi defense, the trial court erred in instructing the jury that it could disregard the specific date alleged in the indictment. We disagree.
*552 It is well established that where the exact date is not stated as a material allegation of the time of commission of the offense in the indictment, it may be proved as of any time within the statute of limitations. An exception exists where the evidence of the state proving that the offense was committed at a time substantially different from that alleged in the indictment surprises and prejudices the defense in that it deprives the defendant of a defense of alibi or otherwise denies him his right to a fair trial.
(Citation and footnote omitted.) Lloyd v. State, 263 Ga.App. 234, 235, 587 S.E.2d 372 (2003).
In this case, Martin does not argue that the State's evidence proved that the offense was committed at a substantially different time from that alleged in the indictment. The victim stated that the next time she saw Martin was on January 22, but from her testimony it could be inferred that it was actually in the early morning hours of January 23. Accordingly, there was no evidence from the State that the offense was committed at a time substantially different from that alleged in the indictment; indeed, the only evidence was that it was committed on that date. Therefore, Martin cannot and does not claim that he was surprised and prejudiced such that he could not present an alibi defense. There was no error. See, e.g., Norman v. State, 278 Ga.App. 497, 499, 629 S.E.2d 489 (2006); Lloyd, supra.
2. Martin also contends that the trial court erred in allowing the State to introduce similar transaction evidence.
We review a trial court's ruling on the admissibility of similar transaction evidence for abuse of discretion. The general rule is that evidence of another crime may be admitted if it is shown that: the evidence is being used for a proper purpose, such as proof of the defendant's identity, intent, course of conduct, or bent of mind; the defendant was the perpetrator of the other crime; and a sufficient connection or similarity exists between the independent offense or act and the crime charged so that proof of the former tends to prove the latter. In sexual offenses, admissibility of similar transaction evidence is liberally construed and "the sexual molestation of young children or teenagers, regardless of the type of act, is sufficiently similar to be admitted as similar transaction evidence."
(Punctuation and footnotes omitted.) Washington v. State, 286 Ga.App. 268, 269-270, 648 S.E.2d 761 (2007).
Here, the first similar transaction occurred a short time after the crimes alleged in this case. The evidence showed that Martin had sexual intercourse with a 15-year-old girl after going to her home when he knew her parents would be gone. The other similar transaction occurred a few months prior to the crimes alleged in this case. In that instance, Martin visited a 12-year-old girl's home several times and would "french kiss" her when they were alone.
Martin argues that the evidence was not admitted for a legitimate purpose. In ruling that the evidence was admissible, the trial court stated that the similar transactions were admitted to show bent of mind, course of conduct, and to corroborate the testimony of the victim. These are sufficient proper purposes for admission of the evidence. See Washington, supra at 269-270, 648 S.E.2d 761; Williams v. State, 263 Ga.App. 22, 24, 587 S.E.2d 187 (2003) (as to the purpose for which the evidence was admitted, we have held that in crimes involving sexual offenses, evidence of similar previous transactions is admissible to show the lustful disposition of the defendant and to corroborate the victim's testimony).
In addition, Martin argues that, even assuming the evidence was admissible, its prejudicial value outweighed the probative value because it kept him from taking the stand in his own defense. He claims that because he did not wish to be questioned about the other incidents, he did not testify as to what happened in the instant case.
Martin cites to no case law as authority for this argument and we find none. Also, Martin did not raise this argument in the trial court and therefore, it is waived on appeal. See Chauncey v. State, 283 Ga.App. 217, 221, n. 3, 641 S.E.2d 229 (2007) (because defendant "failed to object to the similar transaction *553 evidence on any other ground[,][h]e thus has waived on appeal any other basis for challenging the admission of this evidence").
Moreover, in rejecting the argument that similar transaction evidence was too prejudicial to justify its admission, this Court has held that because the evidence was admitted "for the legitimate purpose of demonstrating that [Martin] engaged in sexual relations with female children," it was "admissible to show the lustful disposition of the defendant and to corroborate the victim as to the acts charged." (Footnote omitted.) Johns v. State, 253 Ga.App. 207, 558 S.E.2d 426 (2002). Accord Engle v. State, 290 Ga.App. 396, 401, 659 S.E.2d 795 (2008). See also Fielding v. State, 278 Ga. 309, 310-311, 602 S.E.2d 597 (2004) (prior act's probative value, which showed a specific course of conduct and particular pattern of behavior, was not outweighed by its prejudicial effect). Accordingly, we conclude that the trial court did not abuse its discretion in admitting the similar transaction evidence.
3. Martin argues that he received ineffective assistance of counsel at trial. "To establish ineffective assistance of counsel, [a defendant] must show that his counsel's performance was deficient and that the deficient performance prejudiced his defense." (Citations omitted.) Gross v. State, 262 Ga. 232, 233(1), 416 S.E.2d 284 (1992). Strickland v. Washington, 466 U.S. 668, 104 S.Ct. 2052, 80 L.Ed.2d 674 (1984); "The criminal defendant must overcome the strong presumption that trial counsel's conduct falls within the broad range of reasonable professional conduct." (Citation omitted.) Wheat v. State, 282 Ga.App. 655, 655-656, 639 S.E.2d 578 (2006). In analyzing a claim of ineffective assistance of counsel, we note at the outset that a trial court's finding that a defendant has not been denied effective assistance of counsel will be affirmed unless clearly erroneous. Warren v. State, 197 Ga. App. 23, 24(1), 397 S.E.2d 484 (1990). The test is whether there is a reasonable probability the jury would have reached a different verdict, absent the error of counsel. Gross, supra.
Martin argues that counsel was ineffective because he failed to object to the prosecutor's comment during closing argument as follows: "Apparently, the defendant likes to have sexual intercourse and perform sex acts on young teenage and preteen girls who are on the heavy side," and "who have long, shoulder length brown hair. Apparently, that's what he goes for." Martin claims that this comment depicts him as a sexual predator.
First, Martin has failed to elicit testimony from trial counsel at a hearing on his failure to object to the statement, thus "making it extremely difficult for him to overcome the strong presumption that counsel's decision not to object was part of a reasonable trial strategy." Leaptrot v. State, 272 Ga.App. 587, 594, 612 S.E.2d 887 (2005).
In Leaptrot, the prosecutor said in his opening statement that the defendant was a sexual predator. Id. at 595, 612 S.E.2d 887. In that case we held that there was no error in the trial court's finding "that these remarks were proper references to the evidence and matters that the prosecutor expected to prove at trial, and thus would have provided no ground for a successful objection." Id. See also Mikell v. State, 281 Ga.App. 739, 744, 637 S.E.2d 142 (2006) ("State prosecutor's arguments characterizing [defendant] as a child molester and as dishonest were permissible as either references to [defendant's] past conduct or inferences drawn from the evidence").
Martin also contends that his lawyer failed to object to the prosecutor's comment about his right to remain silent. During closing arguments, the prosecutor stated: "I guess that they are trying to say that the defendant had an alibi of some sort, but nobody, nobody besides [A.C.] could tell you where the defendant was on January 22nd into January 23rd." Also, ... "nobody from the defense could place Eddie Martin or say exactly where he was, ... where he should have been...."
As Martin acknowledges, the prosecutor's comments, which noted only that no one had contradicted the testimony of the accomplice, were not an improper comment on Martin's silence. "Where the prosecutor's comments are not directed at the defendant's decision not to testify but are directed at *554 defense counsel's failure to rebut or explain the State's evidence, the comments are permissible." Ellison v. State, 265 Ga.App. 446, 448, 594 S.E.2d 675 (2004), citing Johnson v. State, 271 Ga. 375, 383, 519 S.E.2d 221 (1999); Ingram v. State, 253 Ga. 622, 323 S.E.2d 801 (1984) (while a prosecutor may not comment on a defendant's failure to testify, he may argue that evidence of guilt has not been contradicted or rebutted).
Martin also claims that these statements constituted an impermissible "golden rule" argument. Because the State's argument did not ask jurors to place themselves in the victim's position, it was not a "golden rule" argument and trial counsel was not ineffective for failing to object. See, e.g., Marshall v. State, 276 Ga. 854, 857, 583 S.E.2d 884 (2003).
In light of the reasons given above, we conclude that the trial court did not err in denying Martin's motion for new trial on the ground of ineffective assistance of counsel. Failure to make a meritless objection cannot be evidence of ineffective assistance of counsel at trial. See, e.g., Boyd v. State, 289 Ga.App. 342, 345-346, 656 S.E.2d 864 (2008).
Martin also claims that his counsel failed to prepare for the argument that the date in the indictment was not material and allowed an inaccurate instruction to go out with the jury. Because of our holding in Division 1, supra, there is no merit to this argument.
Judgment affirmed.
RUFFIN, P.J., and BERNES, J., concur.
|
Adrienne Clarkson
From refugee to GG in a single lifetime
At just three years of age, Adrienne Clarkson and her family were rescued from wartime strife in Hong Kong and moved to safety in Ottawa, Canada. Growing up, she acquired a college diploma and university degree before leaving to travel in Asia. Upon her return to Canada she launched into a career in broadcast journalism. As a regular fixture on Canadian national television she went from strength to strength, often hosting her own programmes.
In 1999, after more than two decades on the public stage, she was appointed Governor General of Canada, the official representative of Her Majesty the Queen. During her tenure, Clarkson broke new ground in the role by promoting Canada internationally and supporting the men and women of the Armed Forces.
Since leaving office, she has founded an institute to help migrants fit into Canadian society. |
Management of a thoracic endograft infection through an ascending to descending extra-anatomic aortic bypass and endograft explantation.
A 52-year-old man presented 33 months after thoracic aortic endovascular repair with hemoptysis and was found to have an aortobronchial fistula secondary to a mycotic aneurysm. The endograft infection was managed in a two-stage fashion. During the initial stage, the patient underwent an ascending-to-descending thoracic aortic bypass. Neither cardiopulmonary bypass, hypothermic circulatory arrest, nor aortic cross-clamping was used. During the same hospitalization, the patient underwent successful endograft explantation through a left thoracotomy. Imaging at 6 months demonstrated no anastomotic concerns and resolution of residual pulmonary inflammation. Thoracic aortic endograft infections necessitating endograft removal can potentially be successfully and safely managed without the need for cardiopulmonary bypass, hypothermic circulatory arrest, or interruption of aortic blood flow.
Q:
Line number and column number in Text Widget
Is there some method which tells you the line number and column number of a Text widget in Perl/Tk?
A:
According to the documentation, the widget has a method index() which returns the "line.char" of various positions in the widget. Pass it the name of the special mark "insert" to get the current position of the insertion cursor. Pass it the name of the special mark "current" to get the current position of the mouse pointer.
Your question doesn't make it clear which of the two you want.
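Either way, the returned value is the same "line.char" string, which is trivial to split. Here is a minimal sketch of that parsing, shown in Python for the sake of a runnable snippet; in Perl/Tk the string would come from a widget call like $text->index('insert'), which needs a live Tk session, so only the string handling is demonstrated (the sample index value is invented):

```python
# Tk text widgets report positions as "line.char" strings, e.g. "3.14":
# line 3 (lines are 1-based), column 14 (columns are 0-based).
# In Perl/Tk this string is the return value of $text->index('insert').

def parse_index(index):
    line, char = index.split(".")
    return int(line), int(char)

print(parse_index("3.14"))  # (3, 14)
```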
|
---
abstract: |
Can neural networks learn to compare graphs without feature engineering? In this paper, we show that it is possible to learn representations for graph similarity with neither domain knowledge nor supervision (i.e. feature engineering or labeled graphs). We propose Deep Divergence Graph Kernels, an unsupervised method for learning representations over graphs that encodes a relaxed notion of graph isomorphism. Our method consists of three parts. First, we learn an encoder for each anchor graph to capture its structure. Second, for each pair of graphs, we train a cross-graph attention network which uses the node representations of an anchor graph to reconstruct another graph. This approach, which we call *isomorphism attention*, captures how well the representations of one graph can encode another. We use the attention-augmented encoder’s predictions to define a divergence score for each pair of graphs. Finally, we construct an embedding space for all graphs using these pair-wise divergence scores.
Unlike previous work, much of which relies on 1) supervision, 2) domain specific knowledge (e.g. a reliance on Weisfeiler-Lehman kernels), and 3) known node alignment, our unsupervised method jointly learns node representations, graph representations, and an attention-based alignment between graphs.
  Our experimental results show that Deep Divergence Graph Kernels can learn an unsupervised alignment between graphs, and that the learned representations achieve competitive results when used as features on a number of challenging graph classification tasks. Furthermore, we illustrate how the learned attention allows insight into the alignment of sub-structures across graphs.
author:
- 'Rami Al-Rfou'
- Dustin Zelle
- Bryan Perozzi
bibliography:
- 'references.bib'
title: 'DDGK: Learning Graph Representations for Deep Divergence Graph Kernels'
---
Introduction
============
Deep learning methods have achieved tremendous success in domains where the structure of the data is known a priori. For example, domains like speech and language have intrinsic sequential structure to exploit, while computer vision applications have spatial structure (images) and perhaps temporal structure (videos). In all these cases, our intuition guides us to build models and learning algorithms based on the structure of the data. For example, translation-invariant convolutional networks might search for shapes regardless of their physical position in an image, or recurrent neural networks might share a common latent representation of a concept across distant time steps or diverse domains such as languages. In contrast, graph learning represents a more general class of problems because the structure of the data is free from any constraints. A neural network model must learn to solve both the desired task at hand (e.g. node classification) and to represent the structure of the problem itself – that of the graph’s nodes, edges, attributes, and communities.
![Our method of learning graph representations by measuring the divergence of a target graph across a population of source graph encoders. First, we train a graph encoder for each graph in our source graph population {$G_1$, $G_2$, ..., $G_N$}. Second, for each of these encoders we measure the divergence of the target graph from the associated source graph. Finally, these divergence scores are used to compose the vector representation of the target graph.[]{data-label="fig:method"}](figures/DDK-Method.pdf){width="\columnwidth"}
Despite the challenges, there has been a recent surge of interest in applying neural network models to such graph-structured data [@deepwalk; @kipf-gcn; @hamilton2017inductive; @velickovic2017graph; @zhu2018deep]. While initial approaches like DeepWalk [@deepwalk] focused on generic representations of graph primitives (e.g. a graph’s nodes [@deepwalk] or edges [@asymmetric]), present approaches ignore learning general graph and node representations in favor of maximizing accuracy on a set of narrow classification tasks. These approaches, broadly referred to as Graph Neural Networks (GNNs), seek to leverage the structure between data items as a scaffolding to perform computation (e.g. message passing, gradient updates, etc). The parameters and activations use the structure during training, but are tuned primarily to classify the graph’s nodes, edges, and/or attributes.
While much effort has focused on unsupervised learning of node representations [@deepwalk; @node2vec; @dngr; @tsitsulin2017verse], edge representations [@asymmetric], or latent community structure [@cavallari2017learning; @wang2017community; @zheng2016node], relatively little work has focused on the unsupervised learning of representations for entire graphs – a problem of practical interest in domains such as information retrieval, biology, and natural language processing [@gilmer2017neural; @battaglia2018relational]. In cases where GNNs have been applied to the task of learning similarity between graphs, the approaches considered generally come in two flavors: an end-to-end *supervised graph classification* or *graph representation learning*.
In supervised graph classification, the task is to solve an end-to-end whole-graph classification problem (i.e. the problem of assigning a label to the entire graph). These supervised approaches [@patchysan; @zhang2018end; @tixier2018graph; @morris2018weisfeiler] learn an intermediate representation of an entire graph as a precondition in order to solve the classification task. This learned representation can be used to compare similarity between graphs, but is heavily biased towards maximizing performance on the classification task of interest.
The second class of approaches focuses on the more general problem of learning graph representations [@taheri2018RNN]. While much exciting progress has been made in this area, the existing approaches suffer from one or more of the following limitations. First, many existing methods rely on feature engineering, such as the graph’s clustering coefficient, its motif distribution, or its spectral decomposition, to represent graphs [@berlingerio2012netsimile; @yanardag2015deep; @tsitsulin2018netlsd]. By limiting the features that they consider, these methods are limited to composing only known graph signals. Second, many of these approaches [@patchysan; @zhang2018end] have sought to encode algorithmic heuristics from the graph isomorphism literature (especially the intuition encoded in the Weisfeiler Lehman algorithm [@shervashidze2011weisfeiler]). Relying heavily on existing heuristics to solve a hard problem raises an important question: how well can a learning-only approach solve a classic algorithmic problem? Finally, other work in this area of graph similarity assumes that identical nodes in both graphs share the same id (i.e. the alignment is already given). While this can be useful for calculating a similarity score, we find the general problem more compelling.
In this work, we propose a method of learning graph representations driven by the similarity between a pair of graphs as measured by the divergence in their structures. We show the representations learned through our method, Deep Divergence Graph Kernels ([<span style="font-variant:small-caps;">DDGK</span>]{}), capture the attributes of graphs by using them as features for several classification problems. In addition, we show that our representations capture the local similarity of graph pairs and the global similarity across families of graphs.
[<span style="font-variant:small-caps;">DDGK</span>]{} has three key differentiators. First, it makes no assumptions about the structure of the matching problem. In order to solve the matching problem, we propose an attention mechanism: *isomorphism attention* to align the nodes across graph pairs. Second, [<span style="font-variant:small-caps;">DDGK</span>]{} does not rely on any existing heuristics for graph similarity. Instead, we learn the kernel method jointly with the node representation and alignment networks. This allows the model the freedom to learn representations that best preserve the graph, and does not impose artificial constraints. Finally, as an unsupervised method, the representations it learns emphasize structural similarity, and do not correlate with downstream labeling tasks. This is especially useful for ranking tasks where labeling may not be available.
To summarize, our main contributions are:
- **Deep Divergence Graph Kernels**: A novel method for learning unsupervised representations of graphs and their nodes. Our kernel is learnable and does not depend on feature engineering or domain knowledge.
- **Isomorphism Attention**: A cross-graph attention mechanism to probabilistically align representations of nodes between graph pairs. These attention networks allow for great interpretability of graph structure and discoverability of similar substructures.
- **Experimental results**: We show that [<span style="font-variant:small-caps;">DDGK</span>]{} both encodes graph structure to distinguish families of graphs, and when used as features, the learned representations achieve competitive results on challenging graph classification problems like predicting the functional roles of proteins.
Learning Graph Representations
==============================
In this section, we lay out the problem definition of representing graphs and the connection between our representations and the kernel framework.
Problem Definition
------------------
A graph is defined to be a tuple $G=(V, E)$, where $V$ is the set of vertices and $E$ is the set of edges, $E \subseteq (V\times V)$. A graph $G$ can have an attribute vector $Y$ for each of its nodes or edges. We denote the attributes of node $v_i$ as $y_i$, and denote the attributes of an edge ($v_i$, $v_j$) as $y_{ij}$.
Given a family of graphs {$G_0$, $G_1$, $\dots$, $G_N$} we aim to learn a continuous representation for each graph $\Psi(G) \in \mathbb{R}^{N}$ that encodes its attributes and its structure. For this representation to be useful, it has to be comparable to other graph representations. However, it is likely that our method of graph encoding will produce one of many equally good representations each time we run it. For example, we can get two different, but equally good, representations by permuting the dimensions of the first one. Those representations are not comparable given they exist in two different spaces.
To avoid this problem, we seek to develop an equivalence class across all possible encodings of a graph. Essentially, two encodings of a graph are equivalent if they lead to the same pair-wise similarity scores when used to compare the graph to all other graphs in the set. We note that this issue arises when working with embedding based representations across domains, and several equivalence methods have been proposed [@D16-1250; @NIPS2018_7368].
Embedding Based Kernels
-----------------------
In this work, we study the development of graph kernels, which are functions to compute the pairwise similarity between graphs. Specifically, given two graphs $G_1$, $G_2$, a classic example of a kernel defined over graph pairs is the geometric random walk kernel [@borgwardt2005protein] as shown in Eq. \[eq:rwk\]: $$k_{\times}(G_1, G_2) = e^T(I - \lambda A_\times)^{-1}e,
\label{eq:rwk}$$ where $A_\times$ is the adjacency matrix of the product graph of $G_1$ and $G_2$, and $\lambda$ is a hyper-parameter which encodes the importance of each step in the random walk. We aim to learn an embedding based kernel function $k()$ as a similarity metric for graph pairs, defined as the following: $$k(G_1, G_2) = || \Psi(G_1) - \Psi(G_2) ||^2
\label{eq:our_kernel}$$ For a dataset of $N$ *source*[^1] graphs $\mathcal{S}$ and $M$ *target* graphs ($\mathcal{T})$, for any member of the target graph set we define the $i^{th}$ dimension of the representation $\Psi(G \in \mathcal{T}) \in \mathbb{R}^N$ to be: $$\Psi(G)_i = \sum_{v_j \in V_T} f_{g_i}(v_j),
\label{eq:embedding_dim}$$ where $g_i \in \mathcal{S}$ and $f_{g_i}()$ is a predictor of some structural property of the graph $G$ but parameterized by the graph $g_i$. We note that the source and target graphs sets ($\mathcal{S}, \mathcal{T}$) could be disjoint, overlapping, or equal.
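For intuition, the geometric random walk kernel of Eq. \[eq:rwk\] can be evaluated numerically by truncating the Neumann series $(I - \lambda A_\times)^{-1} = \sum_k \lambda^k A_\times^k$, which converges when $\lambda$ is smaller than the reciprocal of the largest eigenvalue of $A_\times$. The sketch below is purely illustrative (it is not part of [<span style="font-variant:small-caps;">DDGK</span>]{}); the toy graphs and $\lambda$ are invented:

```python
# Truncated power-series evaluation of the geometric random walk kernel:
# k = e^T (I - lam * A_x)^{-1} e = sum_k lam^k * e^T A_x^k e,
# where A_x is the adjacency matrix of the direct product graph.

def kron(A, B):
    """Kronecker product: adjacency of the direct product graph."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

def rw_kernel(A1, A2, lam=0.1, steps=50):
    Ax = kron(A1, A2)
    size = len(Ax)
    vec = [1.0] * size        # vec holds A_x^k e; starts as e (all ones)
    total = float(size)       # k = 0 term: e^T e
    coef = 1.0
    for _ in range(steps):
        vec = [sum(Ax[i][j] * vec[j] for j in range(size)) for i in range(size)]
        coef *= lam
        total += coef * sum(vec)
    return total

A_edge = [[0, 1], [1, 0]]     # a single edge (K2)
# Every node of the K2 x K2 product graph has degree 1, so each series
# term contributes 4 * lam^k, giving roughly 4 / (1 - 0.1) ~= 4.444.
print(rw_kernel(A_edge, A_edge, lam=0.1))
```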
Learning to Align Graph Representations {#sec:aligngraphs}
=======================================
We propose to learn a graph representation by comparing it to a population of graphs. To compare the similarity of a pair of graphs (*source*, *target*), we rely on deep neural networks to measure the divergence between their structure and attributes. First, we learn the structure of the source graph by passing it through a graph encoder. Second, to measure how much the target graph diverges from the source graph, we use the source graph encoder to predict the structure of the target graph. If the pair is similar, we expect the source graph encoder to correctly predict the target graph’s structure. In this section, we develop the three key components necessary to learn the similarity between a pair of graphs.
First, in Section \[sec:encoding\], we discuss encoding graphs. The quality of the graph representation depends on the extent to which the encoder of each source graph is able to discover its structure.
Second, in Section \[sec:attention\], we propose a cross-graph attention mechanism to learn a soft alignment between graphs. This is necessary because a target graph may not share its vertex ids with any of the source graphs – indeed, they could even have differing numbers of nodes! Therefore, we need to learn an alignment between the nodes of the target graph and each source graph. This leads to an alignment that is not necessarily a one-to-one correspondence. Third, in Section \[sec:attributes\] we introduce additional constraints on the cross-graph attention learning. For example, let us assume that $v_i \in V_{G_1}$ is assigned to $u_j \in V_{G_2}$. While both $v_i$ and $u_j$ may be structurally similar, they may belong to different node classes as indicated by their attributes. These attributes may be of significant importance to the nature of the graph. For instance, swapping one element for another in a graph representing a molecule could drastically change its chemical structure.
We will see how these pairwise alignments can produce divergence scores suitable for Graph Kernels in Section \[sec:graph\_kernels\].
![A Node-To-Edges Encoder. Here the input graph contains 4 vertices, and the encoder has to predict the neighbors of vertex $v_3$. First, $v_3$ is represented by a one-hot encoding $\vec{v}_3$. Second, $\vec{v}_3$ is multiplied by a linear embedding layer. Third, this embedding $\mathbf{e}_{v_3}$ is passed to a DNN which produces scores for each vertex in $V$. Finally, these scores are normalized using the *sigmoid* function to produce the final predictions, in this case, {$v_2$, $v_4$}.[]{data-label="fig:ae"}](figures/DDK-AE){width="\columnwidth"}
Graph Encoding {#sec:encoding}
--------------
To learn the structure of a graph, we train an encoder capable of reconstructing such structure given partial or distorted information. In this paper, we choose a *Node-To-Edges* encoder (Figure \[fig:ae\]) for its simplicity, but we note that additional choices are certainly possible (see Section \[sec:extensions\] for more discussion).
#### Node-To-Edges Encoder
- In this setup, an encoder is given a single vertex and it is expected to predict its neighbors. This can be modeled as a multilabel classification task since the predictions are not mutually exclusive. Specifically, we are maximizing the following objective function $J(\theta)$, $$J(\theta) = \sum_i \sum_{\substack{j \\ e_{ij} \in E}} \log \Pr(v_j \mid v_i, \theta).$$ Each vertex $v_i$ in the graph is represented by one-hot encoding vector $\vec{v_i}$. Then to embed the vertex we multiply its encoding vector with a linear layer $\mathbf{E} \in \mathbb{R}^{|V| \times d}$ resulting in an embedded vertex $\mathbf{e}_{v_i} \in \mathbb{R}^d$, where $|V|$ is the number of vertices in the graph, and $d$ is the size of the embedding space.
For graphs with a large number of nodes, we can replace this multiplication with a table lookup, extracting one row from the embedding matrix. This embedding vector represents the feature set given to the encoder tasked with predicting all adjacent vertices. Our encoder $H$, is implemented as a fully connected deep neural network (DNN) with an output layer of size $|V|$ and trained as a multilabel classifier.
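To make the *Node-To-Edges* encoder concrete, here is a minimal pure-Python sketch, not the paper's TensorFlow implementation; the toy graph, embedding size, and learning rate are invented for illustration. Each vertex's learned embedding feeds a linear head that scores every vertex, trained by SGD on the multilabel sigmoid objective $J(\theta)$ above:

```python
import math
import random

random.seed(0)

# Toy graph: a 4-cycle. adj[i][j] = 1 iff edge (v_i, v_j) exists.
adj = [[0, 1, 0, 1],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [1, 0, 1, 0]]
n, d, lr = 4, 8, 0.5

# One-hot vertex -> embedding row E[i] -> linear head W -> |V| sigmoid scores.
E = [[random.gauss(0, 0.1) for _ in range(d)] for _ in range(n)]
W = [[random.gauss(0, 0.1) for _ in range(n)] for _ in range(d)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for _ in range(3000):
    for i in range(n):  # one SGD step per vertex: predict v_i's neighbors
        logits = [sum(E[i][k] * W[k][j] for k in range(d)) for j in range(n)]
        err = [sigmoid(z) - adj[i][j] for j, z in enumerate(logits)]
        for k in range(d):
            grad_e = sum(err[j] * W[k][j] for j in range(n))  # dJ/dE[i][k]
            for j in range(n):
                W[k][j] -= lr * err[j] * E[i][k]              # dJ/dW[k][j]
            E[i][k] -= lr * grad_e

# A positive logit means the encoder predicts an edge (sigmoid > 0.5).
pred = [[int(sum(E[i][k] * W[k][j] for k in range(d)) > 0) for j in range(n)]
        for i in range(n)]
print(pred == adj)
```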
![Attention layers mapping the target graph nodes onto the source graph. The augmented encoder has to predict the neighbors of node $1$ in the target graph. First, node $1$ is passed to the attention layer which assigns it mainly to node $3$ of the source graph. Second, the source graph encoder learned earlier (in Figure \[fig:ae\]) that the neighbors of node $3$ are {$2$, $4$}. Finally, the reverse attention network maps nodes {$2$, $4$} of the source graph to nodes {$2$, $3$, $6$} of the target graph which are the neighbors of node $1$. []{data-label="fig:attention"}](figures/DDK-ATT){width="\columnwidth"}
Cross-Graph Attention {#sec:attention}
---------------------
So far, we have developed a utility to encode individual graphs. However, we seek to develop a method which can compare pairs of graphs, which may differ in size (differing node sets) and structure (differing edge sets). For this to happen we need a method of learning an alignment between the graphs. Ideally this method will operate in the absence of a direct mapping between nodes.
In other areas, attention models have been proposed to align structured data. For example, attention models have been proposed to align pairs of images and text [@xu2015show], pairs of sentences for translation [@vaswani2017attention], and pairs of speech and transcription [@NIPS2015_5847]. Inspired by these efforts, we formalize the problem of aligning two graphs as that of attention. We propose an attention mechanism, *isomorphism attention*, that aligns the nodes of a target graph against those of a source graph.
### Isomorphism Attention
Given two graphs $S$ (*source graph*) and $T$ (*target graph*), we propose a model that allows bi-directional mapping across the pair’s nodes. This requires two separate attention networks. The first network allows nodes in the target graph to *attend* to the nodes in the source graph. The second network, allows neighborhood representations in the source graph to *attend* to neighborhoods in the target graph.
We denote the first attention network as ($\mathcal{M}_{T\rightarrow S}$), which assigns every node in the target graph $(u_i \in T)$ a probability distribution over the nodes of the source graph $(v_j \in S)$. This attention network will allow us to pass the nodes of the target graph as an input to the source graph encoder. We implement this attention network using a multiclass classifier, $$\Pr(v_j \mid u_i) = \frac{e^{\mathcal{M}_{T\rightarrow S}(v_j, u_i)}}{\sum_{v_k \in V_S} e^{\mathcal{M}_{T\rightarrow S}(v_k, u_i)}}.$$ The second network is a *reverse attention* network ($\mathcal{M}_{S\rightarrow T}$) which aims to learn how to map a neighborhood’s representation in the source graph to a neighborhood in the target graph. By adding both attention networks to the source graph encoder, we will be able to construct a target graph encoder that is able to predict the neighbors of each node – but utilizing the structure of the source graph. We implement the reverse attention as a multilabel classifier, $$\Pr(u_j \mid \mathcal{N}(v_i)) = \frac{1}{1 + e^{- \mathcal{M}_{S\rightarrow T}(u_j, \mathcal{N}(v_i))}}.$$ Figure \[fig:attention\] shows the attention network ($\mathcal{M}_{T\rightarrow S}$) receiving a one-hot encoding vector representing a node ($u_i$) in the target graph and mapping it onto the most structurally similar node ($v_j$) from the source graph. The source graph encoder, then, predicts the neighbors of $v_j$, $\mathcal{N}(v_j)$. The *reverse attention* network ($\mathcal{M}_{S\rightarrow T}$), takes $\mathcal{N}(v_j)$ and maps them to the neighbors of $u_i$, $\mathcal{N}(u_i)$.
Both attention networks may be implemented as linear transformations $\mathbf{W}_A \in \mathbb{R}^{|V_Q| \times |V_P|}$. In the case that either $|V_P|$ or $|V_Q|$ are prohibitively large, the attention network parameters can be decreased by substituting a DNN with hidden layers of fixed size. This will reduce the attention network size from $\Theta(|V_P| \times |V_Q|)$ to $\Theta(|V_P| + |V_Q|)$.
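Concretely, the forward attention $\mathcal{M}_{T\rightarrow S}$ is just a softmax over the source graph's nodes for each target node. A minimal sketch, with the attention logits invented for illustration:

```python
import math

def softmax(scores):
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical logits M_{T->S}(v_j, u_i) for one target node u_i over a
# 4-node source graph; a higher logit means a stronger structural match.
logits = [0.2, 3.1, -0.5, 0.0]
attn = softmax(logits)

print(round(sum(attn), 6))       # 1.0 -- a proper probability distribution
print(attn.index(max(attn)))     # 1  -- u_i attends mostly to source node index 1
```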
Attributes Consistency {#sec:attributes}
----------------------
Labeled graphs are not defined only by their structures, but also by the attributes of their nodes and edges. The attention network assigns each node in the target graph a probability distribution over the nodes of the source graph. There might be several, equally good, nodes in the source graph with similar structural features. However, these nodes may differ in their attributes. To learn an alignment that preserves nodes and edges attributes, we add regularizing losses to the attention and reverse-attention networks.
More specifically, we refer to the nodes as $v$ and $u$ for the source and target graphs, respectively. We refer to the set of attributes as $\mathcal{Y}$ and the distribution of attributes over the graph nodes as $(Q_n = \Pr(y_i \mid u))$. Given that the attention network $\mathcal{M}_{T\rightarrow S}$ learns the distribution $\Pr(v_k \mid u_j)$, we can calculate a probability distribution over the attributes as inferred by the attention process as the following: $$Q_n(y_i | u_j) = \sum_k\mathcal{M}_{T\rightarrow S}(y_i | v_k) \Pr(v_k \mid u_j).
\label{eq:attrdistnodes}$$
We define the attention regularizing loss over the node attributes to be the average cross entropy loss between the observed distribution of attributes and the inferred one (See Eq. \[eq:lattnodes\]). $$L = -\frac{1}{|V_T|} \sum_j^{|V_T|} \sum_{i} \Pr(y_i \mid u_j) \log(Q_n(y_i | u_j)),
\label{eq:lattnodes}$$ where $|V_T|$ is the number of nodes in the target graph.
For preserving edge attributes over nodes, we define $Q_e(y_i \mid u) = \Pr(y_i \mid u)$ to be the normalized attributes count over all edges connected to the node $u$. For instance, if a node $u$ has 5 edges with 2 of them colored red and the other three colored yellow, $Q_e(red \mid u) = 0.4$. By replacing $Q_n$ with $Q_e$ in Equations \[eq:attrdistnodes\] and \[eq:lattnodes\], we create a regularization loss for edge attributes.
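The worked example above ($Q_e(red \mid u) = 0.4$) is just a normalized count; as a quick sanity check:

```python
from collections import Counter

# Node u has 5 incident edges: 2 red and 3 yellow (the example from the text).
edge_colors = ["red", "red", "yellow", "yellow", "yellow"]
counts = Counter(edge_colors)
Q_e = {color: c / len(edge_colors) for color, c in counts.items()}

print(Q_e["red"])     # 0.4
print(Q_e["yellow"])  # 0.6
```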
We also introduce these regularization losses for *reverse attention* networks. Reverse attention networks map a neighborhood in the source graph to a neighborhood in the target graph. The distribution of attributes over a node’s neighborhood will be the frequency of each attribute occurrence in the neighborhood normalized by the number of attributes appearing in the neighborhood. For edges, the node’s neighborhood edges are the edges appearing within two hops of the node. Similarly, we can define the probability of the edge attributes by normalizing their frequencies over the total number of attributes of edges connected to the neighborhood.
Deep Divergence Graph Kernels {#sec:graph_kernels}
=============================
So far, we have proposed a method for learning representations of graphs, and an attention mechanism for aligning graphs based on a set of encoded graph representations. Here we discuss our proposed method for using the alignment to construct a graph kernel based on divergence scores. First, in Section \[sec:div\], we show how we can utilize the divergence scores to construct a full graph representation. Divergence is driven by the target graph structure and attribute prediction error as calculated using a source graph encoder. Next we introduce DDGK, our method for learning graph representations for Deep Divergence Graph Kernels in Section \[sec:alg\_ddgk\]. Then in Section \[sec:training\] we discuss how we train these representations. Finally we discuss the scalability of this approach in Section \[sec:scalability\].
Graph Divergence {#sec:div}
----------------
In Section \[sec:aligngraphs\] we presented a method to align two graphs by using a source graph encoder, augmented with attention layers, to encode a target graph. Here, we propose to use how well the augmented encoder predicts the structure of the target graph as a measure of the two graphs’ similarity. To explain, let us assume the trivial case where both the source and target graphs are identical. First, we train the source graph encoder. Second, we augment it with attention networks and train it to predict the structure of the target graph. The attention networks will (ideally) learn the identity function. Therefore, the source graph encoder is able to encode the target graph as accurately as encoding itself. We would reasonably conclude that these graphs are similar.
We aim to learn a metric that measures the divergence score between a pair of graphs $\{S, T\}$. If two graphs are similar, we expect their divergence to be correspondingly low. We refer to the encoder trained on a graph $S$ as $H_S$ and the divergence score given to the target graph $T$ to be
$$\mathcal{D}^\prime\infdivx{T}{S} = \sum_{v_i \in V_T} \sum_{\substack{j \\ e_{ji} \in E_T}} -\log \Pr(v_j \mid v_i, H_S)$$
Given that $H_S$ is not a perfect predictor of the graph $S$ structure, we can safely assume that $\mathcal{D}^\prime\infdivx{S}{S} \neq 0$. To rectify this problem we define $$\mathcal{D}\infdivx{T}{S} = \mathcal{D}^\prime\infdivx{T}{S} - \mathcal{D}^\prime\infdivx{S}{S},$$ which sets $\mathcal{D}\infdivx{S}{S}$ to zero.
We note that this definition is not symmetric (as $\mathcal{D}\infdivx{T}{S}$ need not equal $\mathcal{D}\infdivx{S}{T}$). If symmetry is required, we can define $\mathcal{D}(S, T) = \mathcal{D}\infdivx{S}{T} + \mathcal{D}\infdivx{T}{S}$.
Graph Embedding
---------------
Given a set of source graphs, we can establish a vector space where each dimension corresponds to one graph in the source set. Target graphs are represented as points in this vector space where the value of the $i^{th}$ dimension for a given target graph $T_j$ is $\mathcal{D}\infdivx{T_j}{S_i}$.
More formally, for a set of $N$ source graphs we can define our target graph representation to be: $$\Psi(G_T) = [\mathcal{D}\infdivx{T}{S_0}, \mathcal{D}\infdivx{T}{S_1}, \dots, \mathcal{D}\infdivx{T}{S_N}]
\label{eq:ddgk_kernell}$$
To create a kernel out of our graph embeddings, we use the Euclidean distance measure as outlined in Eq \[eq:our\_kernel\]. This distance measure will guarantee a positive definite kernel [@haasdonk2004learning; @wu2018d2ke].
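A minimal sketch of this embedding and distance; here `div` is a stand-in callable for the learned divergence $\mathcal{D}\infdivx{T}{S_i}$ (the toy divergence below, difference in edge counts, is purely illustrative).

```python
import numpy as np

def embed(target, sources, div):
    # Psi(T): one divergence score per source-graph dimension.
    return np.array([div(target, s) for s in sources])

def graph_distance(t1, t2, sources, div):
    # Euclidean distance between embeddings; this distance
    # yields a positive definite kernel.
    return float(np.linalg.norm(embed(t1, sources, div) - embed(t2, sources, div)))

# Toy divergence for illustration only: difference in edge counts.
div = lambda t, s: abs(len(t) - len(s))
sources = [{(0, 1)}, {(0, 1), (1, 2)}, {(0, 1), (1, 2), (2, 3)}]
```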
Algorithm : <span style="font-variant:small-caps;">DDGK</span> {#sec:alg_ddgk}
--------------------------------------------------------------
We present pseudo-code for [<span style="font-variant:small-caps;">DDGK</span>]{} in Algorithm \[algorithm:ddgk\]. The algorithm has two parts. First, a *Node-To-Edges* encoder is trained for all source graphs (Algorithm \[algorithm:ddgk\] line \[encoding:loss\] and line \[encoding:update\]). Second, cross-graph attentions are learned for all target-source graph pairs (Algorithm \[algorithm:ddgk\] line \[divergence:loss\], line \[divergence:update1\] and line \[divergence:update2\]). We implement [<span style="font-variant:small-caps;">DDGK</span>]{} using a deep neural network for its *Node-To-Edges* encoder and linear transformations for its isomorphism attention.
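The two phases can be sketched structurally as follows; the callable names are hypothetical placeholders for the encoder-training and attention-training routines described above.

```python
def ddgk(sources, targets, train_encoder, train_attention):
    # Phase 1: train one Node-To-Edges encoder per source graph.
    encoders = {s: train_encoder(s) for s in sources}
    # Phase 2: learn cross-graph attention (and a divergence score)
    # for every (target, source) pair against the frozen encoders.
    return {(t, s): train_attention(t, encoders[s])
            for t in targets for s in sources}
```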
Training {#sec:training}
--------
We implement our models using TensorFlow [@tensorflow], calculate our gradients using backpropagation, and update our parameters using Adam [@kingma2014adam]. We train each source graph on its adjacency matrix for a constant number of iterations.
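As a self-contained stand-in for the TensorFlow training loop, a minimal numpy Adam update run for a fixed number of iterations looks like the following (the quadratic toy objective is an illustration, not the paper's loss):

```python
import numpy as np

def adam_train(grad, w, steps=300, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    # Fixed-iteration Adam loop, mirroring the constant-epoch
    # training schedule used for each source graph.
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g            # first-moment estimate
        v = b2 * v + (1 - b2) * g * g        # second-moment estimate
        mhat = m / (1 - b1 ** t)             # bias correction
        vhat = v / (1 - b2 ** t)
        w = w - lr * mhat / (np.sqrt(vhat) + eps)
    return w

# Toy objective ||w||^2, whose gradient is 2w.
w = adam_train(lambda w: 2 * w, np.array([1.0, -2.0]))
```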
### Target Graph Encoding
Here, the augmented encoder has to predict the neighboring vertices for each vertex in the target graph with the help of the attention and reverse-attention layers. To learn the augmented target graph encoder (which consists of the source graph encoder with the additional attention layers), we use the following procedure:
1. First, freeze the parameters of the source graph encoder.
2. Second, add two additional networks, one for attention and another for reverse attention, mapping the target graph nodes to the source graph nodes and vice versa.
3. Third, add the regularizing losses to preserve the node or edge attributes, if they are available.
4. Fourth, train the augmented encoder on the input, which is the adjacency matrix of the target graph, and a node attribute and/or edge attribute matrix (if available).
Finally, once the training of the attention layers is done, we use the augmented encoder to compute the divergence between the graph pair as discussed in Section \[sec:div\].
Scalability {#sec:scalability}
-----------
We start by defining the following quantities: $N$ the number of source graphs in the dataset, $M$ the number of target graphs in the dataset, $V$ the average number of nodes, $\tau{}$ the number of epochs to encode source graphs, $\rho{}$ the number of epochs to encode target graphs, $l$ the number of encoder hidden layers, $m$ the number of attention hidden layers, and $d$ the embedding and hidden layer size.
Our method relies on pairwise similarity; therefore, we will have $M\times N$ computations, each of which scores a target graph against one source graph. Training a source graph encoder requires $\tau{}$ steps that each involve $2\times V\times d + l\times d^2$ computations. In addition to running the source graph encoder, the target graph alignment learns the attention networks, which adds $\rho{} \times (2\times d\times V + m \times d^2)$. If we define $T = max(\rho{}, \tau{})$, $k = max(l, m)$, and $M=N$, then the total computation cost is $\Theta(N^2 \times T \times (V\times d + k \times d^2))$. Because $V$ is likely much larger than $d^2$, we interpret the computational complexity as $O(TN^2V)$.
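The bookkeeping above can be written out directly as an operation count (a sketch of the analysis, not a wall-clock model):

```python
def ddgk_cost(N, M, V, tau, rho, l, m, d):
    # Cost of training N source encoders plus M*N pairwise alignments,
    # following the per-step terms in the analysis above.
    encode = tau * (2 * V * d + l * d ** 2)   # per source encoder
    align = rho * (2 * d * V + m * d ** 2)    # per (target, source) pair
    return N * encode + M * N * align
```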
In Section \[sec:sampling\], we explore the effect of sampling to hasten [<span style="font-variant:small-caps;">DDGK</span>]{}’s runtime on large datasets. We show that not all $M\times N$ comparisons are necessary to achieve high performance: empirically, it seems that less than $20\%$ of source graphs are required, significantly speeding our approach.
Experiments
===========
In this section, we demonstrate our method through a series of qualitative and quantitative experiments. First, we show how our attention based alignment works under different conditions. Then, we show how our representations are capable of capturing the structure of the space of graphs by applying hierarchical clustering on the kernel space. Finally, we show that the learned graph embeddings represent a sufficient feature set to predict the graph label on several challenging classification problems in real-world biological datasets.
Cross-Graph Attention {#cross-graph-attention}
---------------------
In this qualitative experiment, we seek to understand how two graphs are related to each other. Comparing different patterns between different graphs is an important application in domains such as biology.
Figure \[fig:graph\_no\_reg\] shows two identical unlabeled barbell graphs. Each graph consists of two rings of size $5$ connected with the edges ($0$, $5$) and ($10$, $15$). The upper graph represents the target graph while the lower one represents the source graph. The edges connecting the source and target graphs represent the strongest attention weights for each node in the target graph. The heatmap shows the full attention matrix for more thorough analysis. Aligning these identical graphs is an easy task for the naked eye. However, our method can find many possible symmetries to exploit while still achieving perfect predictions. For example, nodes in the left ring can attend to the right ring of the source graph and vice versa.
Figure \[fig:graph\_node\_reg\] shows the previous setup with labeled graph nodes. This introduces a regularizing loss to preserve the node attributes. The attention heatmap shows significant weights in the upper left and lower right quadrants. The right ring no longer attends to nodes in the left ring, and vice versa. Still, we can see the method exploiting symmetries within the same ring.
Finally, by also adding edge labels, the alignment problem is constrained enough that the attention heatmap is concentrated along the diagonal (See Figure \[fig:graph\_edge\_node\_reg\]). We can observe that the attention edges correspond in a one-to-one relationship between the target and source graphs. This synthetic experiment shows the effect of attribute preserving losses on learning the alignment between graphs.
Hierarchical Clustering
-----------------------
To understand the global structure of the graph embedding space, we explore it qualitatively using hierarchical clustering. First, we create a dataset which is a composition of 6 different families of graphs. Three graph families are mutated graphs and three families are subsets of larger sets of realistic graphs. From each family we sample 5 graphs, creating a universe of 30 graphs. Then, we embed the graphs using our method, constructing a graph embedding space. Finally, we cluster the embeddings according to their pairwise Euclidean distances.
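Given the 30 embeddings, the clustering step is standard; a sketch using SciPy's agglomerative tools (the choice of average linkage here is an assumption):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def cluster_graph_embeddings(embeddings, n_clusters):
    # Pairwise Euclidean distances between graph embeddings,
    # then agglomerative (average-linkage) clustering.
    dists = pdist(np.asarray(embeddings), metric="euclidean")
    tree = linkage(dists, method="average")
    return fcluster(tree, t=n_clusters, criterion="maxclust")
```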
### Mutated Graphs
For these datasets we start with a known graph and generate a sequence of mutations to produce a family of graphs. In particular, we consider the following graphs.
- C. Elegans [@watts1998collective]: represents the neural network of the C. Elegans worm.
- Karate Club [@ZacharyKarate]: social network of friendships between 34 members of a karate club.
- Word Network [@NewmanWordNetwork]: adjacency network of common adjectives and nouns in the novel David Copperfield by Charles Dickens.
In order to generate a family $G_1 \cdots G_k$ for each original graph $G_0$, we employ the following mutation procedure. At each of the $k$ time steps, there is a $p=0.5$ chance of performing an edge deletion or addition. For additions, we select the two nodes to connect from any unlinked nodes according to the preferential attachment model characterized by $G_0$ [@chung2006complex]. For deletions, we select an edge at random and remove it. We run this procedure 4 times with $k=50$ time steps, creating a family of 5 related graphs. The initial seed for any of these mutations is denoted by the suffix “-0”.
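A simplified version of this mutation procedure is sketched below; for brevity it substitutes a uniform choice over missing edges for the preferential-attachment model, and each run restarts from the seed graph (both are assumptions of this sketch).

```python
import random

def mutate_family(edges, nodes, k=50, n_variants=4, p=0.5, seed=0):
    # Family = the seed graph plus n_variants mutated copies; each copy
    # applies k steps of random edge addition (prob p) or deletion.
    rng = random.Random(seed)
    family = [set(edges)]
    for _ in range(n_variants):
        g = set(edges)
        for _ in range(k):
            if rng.random() < p:
                non_edges = [(u, v) for u in nodes for v in nodes
                             if u < v and (u, v) not in g]
                if non_edges:                       # add a missing edge
                    g.add(rng.choice(non_edges))
            elif g:                                 # delete a random edge
                g.remove(rng.choice(sorted(g)))
        family.append(g)
    return family
```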
### Realistic Graphs
We randomly pick 5 graphs from three of the real-world families of graphs we consider (**D&D**, **PTC**, and **NCI1**). See Section \[sec:datasets\] for more information about these graphs.
Figure \[fig:clusters30\] shows the result of clustering the pairwise distances between our graph embeddings. We are able to retrieve perfect clusters of {`c-elegans`, `words`}, where there are clusters of size 5 that consist only of graphs of the same type. For {**NCI1**, **D&D**}, we can cluster 4 graphs out of 5 before a graph from outside the family is added.
![A hierarchical clustering of the graph kernel space for several different graph families. It shows 30 graphs that belong to 6 different families. The values of the matrix are the pairwise Euclidean distances between the graph embeddings. []{data-label="fig:clusters30"}](figures/clustering.pdf){width="\columnwidth"}
Graph Classification
--------------------
Our learned graph representations respect both attributes and structure. They can be used for graph classification tasks where the graph structure, node attributes, and/or edge attributes convey meaning or function. To demonstrate this, we use [<span style="font-variant:small-caps;">DDGK</span>]{} representations of several chemo- and bio- informatics datasets as features for classification tasks. We report our results against both unsupervised and supervised methods.
### Hyper-parameters Search
To choose [<span style="font-variant:small-caps;">DDGK</span>]{} hyper-parameters (See Table \[table:ddgkparams\]), we perform grid searches for each dataset. To avoid over-fitting, we split each dataset into {`train`, `dev`, `test`}. We use the scikit-learn SVM [@pedregosa2011scikit] as our classifier, and we vary the kernel choice between {`linear`, `rbf`, `poly`, `sigmoid`} and the regularization coefficient $C$ between $10$ and $10^9$. We choose the hyper-parameters of both [<span style="font-variant:small-caps;">DDGK</span>]{} and the classifier that maximize the accuracy on the `dev` split.
**Hyper-Parameter** **Values**
---------------------------------- -------------------------------------------------
Node embedding $2, 4, 8, 16, 32$
Encoder layers $1, 2, 3, 4$
Learning rate $10^{-4}$, $10^{-3}$, $10^{-2}$, $10^{-1}$, $1$
Encoding epochs $100, 300, 600$
Scoring epochs $100, 300, 600$
Node preserving loss coefficient $0, 0.25, 0.5, 1.0, 1.5, 2.0$
Edge preserving loss coefficient $0, 0.25, 0.5, 1.0, 1.5, 2.0$
: Values used during our grid search for [<span style="font-variant:small-caps;">DDGK</span>]{} graph representations learning hyper-parameters.[]{data-label="table:ddgkparams"}
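The dev-set model selection described above can be sketched with scikit-learn; the exact $C$ grid below is an assumption, since the text only bounds it between $10$ and $10^9$.

```python
from sklearn.svm import SVC

def pick_classifier(train_x, train_y, dev_x, dev_y):
    # Try every kernel/C combination; keep the setting with the
    # best accuracy on the held-out dev split.
    best_clf, best_acc = None, -1.0
    for kern in ("linear", "rbf", "poly", "sigmoid"):
        for exp in range(1, 10):                 # C in {10^1, ..., 10^9}
            clf = SVC(kernel=kern, C=10.0 ** exp).fit(train_x, train_y)
            acc = clf.score(dev_x, dev_y)
            if acc > best_acc:
                best_clf, best_acc = clf, acc
    return best_clf, best_acc
```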
  ---------   --------   ------------   ------------   ---------   -------------   -------------
  Dataset     Graphs     Avg. nodes     Avg. edges     Classes     Node labels     Edge labels
  D&D         $1178$     $284$          $716$          $2$         $89$            $-$
  NCI1        $4110$     $30$           $32$           $2$         $37$            $-$
  PTC         $344$      $14$           $15$           $2$         $18$            $4$
  MUTAG       $188$      $18$           $20$           $2$         $7$             $4$
  ---------   --------   ------------   ------------   ---------   -------------   -------------
: Statistics of the chemo- and bio-informatics datasets.[]{data-label="table:datasetprop"}
### Datasets {#sec:datasets}
Four benchmark graph classification datasets from the chemo- and bio-informatics domains are used: **D&D**, **NCI1**, **PTC** and **MUTAG**. All datasets include node labels. The **PTC** and **MUTAG** datasets also include edge labels. Table \[table:datasetprop\] shows network statistics for each dataset. The datasets are as follows:
- **D&D** [@dobson2003distinguishing]: contains 1178 proteins labeled as enzymes or non-enzymes.
- **NCI1** [@45b3e5c6d2ee4938b77995a88ee0b928]: contains 4110 chemical compounds labeled as active or inactive against non-small cell lung cancer.
- **PTC** [@toivonen2003statistical]: contains 344 chemical compounds labeled according to their carcinogenicity in male rats.
- **MUTAG** [@doi:10.1021/jm00106a046]: contains 188 mutagenic aromatic and heteroaromatic compounds labeled according to their mutagenic effect on a specific gram negative bacterium.
### Results
The results of these experiments are presented in Table \[table:results\]. We see that [<span style="font-variant:small-caps;">DDGK</span>]{} is quite competitive, with higher average performance on both the **D&D** and **MUTAG** datasets than any of the baselines. This is especially surprising given that the supervised methods have additional information available to them. We note that [<span style="font-variant:small-caps;">DDGK</span>]{} achieves its strong results without engineered features, or access to information from Weisfeiler-Lehman kernels. For **PTC**, we also see that [<span style="font-variant:small-caps;">DDGK</span>]{} attains competitive performance against all other methods, only being outperformed by 2 of the 9 baselines. Finally, on **NCI1**, we see that [<span style="font-variant:small-caps;">DDGK</span>]{} performs better than the method using the most similar kind of information (`node2vec`), but find that baselines using the WL kernel perform best on this dataset (indeed, the WL kernel itself takes the top two spots). We find this dependence quite interesting, and will seek to characterize it better in future work.
Dimension Sampling {#sec:sampling}
------------------
So far, we have been setting the source graphs set to be equal to the target graphs set. This pairwise computation is quite expensive for large datasets. To reduce the computational complexity of our method, we study the effect of sub-sampling the dimensions of our graph embedding space on the quality of graph classification.
To do that, we construct a source graph set that is a subset of the original graph set. We learn divergence scores for all target graphs against this subset. We use the reduced embeddings as features to predict graph categories. Figure \[fig:sampling\] shows that we are able to achieve stable and competitive results with less than 20% of the graphs being used as source graphs.
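The sub-sampling step itself is trivial to express (a sketch; each target graph is then embedded only against the retained source subset):

```python
import random

def sample_sources(graphs, fraction=0.2, seed=0):
    # Keep a random fraction of the dataset as source graphs;
    # the embedding space then has only len(result) dimensions.
    rng = random.Random(seed)
    k = max(1, int(fraction * len(graphs)))
    return rng.sample(graphs, k)
```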
![Effect of sub-sampling source graphs on graph classification tasks. Here we vary the number of source graphs available to each method, and observe that very few dimensions are needed to achieve our final classification performance (less than $20\%$ of the dimensions for the datasets considered).[]{data-label="fig:sampling"}](figures/sampling.pdf){width="\columnwidth"}
Related Work
============
The main differences between our proposed method and previous work can be summarized as follows:
1. Our method is unsupervised, taking only a graph as input.
2. We use no domain-specific information about what primitives are important in a graph, using only the edges.
3. We use no algorithmic insights from the literature in graph isomorphism (e.g. the Weisfeiler-Lehman kernel).
4. We assume nothing about the mapping of node ids between graphs, instead learning the alignment.
While many approaches exist that contain at least one of these differentiators, we are, to the best of our knowledge, the only proposed method that meets all four of these conditions. In this section we will briefly cover related work in graph similarity and other applications of neural networks to graph representation.
Unsupervised Graph Similarity
-----------------------------
We divide our brief survey of the literature into three kinds of unsupervised methods for graph similarity. The first seeks to explicitly define a kernel over graph features, or use the intuition from such a kernel as part of the representation learning process. The second focuses on the representation of individual elements of the graph, learning primitives that maximize some kind of reconstruction of the graph. The third group of work constructs a similarity function between graphs by an explicit vector of statistical features constructed by the graph.
**Traditional Graph Kernels**: There has been considerable work done on unsupervised methods for graph kernel learning. Initial efforts in the area focused on theoretical views of the problem, defining graph similarity via the Graph Edit Distance [@gao2010survey] or the size of the Maximum Common Subgraph [@bunke2002comparison] between graphs. Unfortunately, these problems are both NP-Complete in the general case, and require a known correspondence between the nodes of the two graphs of interest.
Many approaches are built around the graph similarity measure computed by the Weisfeiler-Lehman (WL) subtree graph kernel [@shervashidze2011weisfeiler; @kriege2016valid]. At its core, the WL algorithm collapses the labels of a node’s neighbors into an ordered sequence, and then hashes that sequence into a new label for the node. This process repeats iteratively to aggregate information over increasingly large neighborhoods. Other functions that use different types of predefined features for graph similarity, such as shortest-paths kernels [@borgwardt2005shortest], and random walk kernels [@kashima2003marginalized] have also been proposed, but their naive implementations suffer from high asymptotic complexity ($O(n^4)$ and $O(n^6)$, respectively). Faster implementations of these kernels have been proposed [@borgwardt2007fast; @kang2012fast]. Some unsupervised methods also focus on extending the algorithmic intuition of these classic approaches. For instance, [@taheri2018RNN] learns a representation for each position in a WL ordering jointly while learning a graph representation.
Unlike all of these approaches, our method deliberately avoids algorithmic insights. Our proposed isomorphism attention mechanism allows capturing higher-order structure between graphs (beyond immediate neighborhoods).
**Node embedding methods**: Since DeepWalk [@deepwalk] proposed embedding the nodes via a sequence of random walks, the problem of node representation learning has received considerable attention [@perozzi2017don; @node2vec; @chen2017harp; @tsitsulin2017verse; @bojchevski2017deep; @abu2018watch; @aepasto2019]. In general, all of these methods utilize insights about similarity functions which are important to the graph. While these methods seek the best way to represent nodes, the representations are learned independently between graphs, which makes them generally unsuitable for graph similarity computations. For more information on this area, we recommend recent surveys in the area [@chen2018tutorial; @cui2018survey]. Unlike these methods, our goal is to learn representations of graphs, not of nodes.
**Graph statistics**: Finally, another family of unsupervised graph similarity measures define a hand-engineered feature vector to compute graph similarity. The NetSimile method [@berlingerio2012netsimile] operates by constructing a fixed size feature vector of graph statistics and uses this as a similarity embedding over graphs. Similarly, DeltaCon [@koutra2013deltacon] defines the similarity over two graphs with known node-to-node mapping via the similarity in their propagation of belief, and [@papadimitriou2010web] proposes a number of similarity measures over directed web graphs.
Unlike these methods, [<span style="font-variant:small-caps;">DDGK</span>]{} does not explicitly engineer its features for the problem. Instead, the similarity function is learned directly from the edges present in the adjacency matrix, with no assumptions about which features are important for the application task.
Supervised Graph Similarity
---------------------------
The first class of supervised methods uses some supervision to inform a similarity function constructed over different hand-engineered graph features.
A number of supervised approaches also utilize intuitions from the Weisfeiler-Lehman graph kernel. `Patchy-SAN` [@niepert2016learning] proposes an approach for convolutional operations on graph structured data. The core of their method uses the ordering from the WL kernel to order the nodes of a rooted subgraph into a sequence, and then apply standard 1-dimensional convolutional filters. This approach is further generalized by [@zhang2018end], who use the WL ordering to sort a graph sample in a pooling layer. Another branch of work has focused on extending the Graph Convolutional Networks (GCNs) proposed by [@kipf-gcn] to perform supervised classification of graphs. Proposed extensions include a pooling architecture that learns a soft clustering of the graph [@ying2018hierarchical], or a two-tower model which frames graph similarity as link prediction between GCN representations [@bai2018graph]. Interestingly, it has been shown that many of these methods are not necessarily more expressive than the original Weisfeiler-Lehman subtree kernel itself [@morris2018weisfeiler].
Unlike all of these approaches, our method learns representations of graphs without supervision — we use no labels about the class label of a graph, and no external information about which pairs of graphs are related. Our proposed isomorphism attention mechanism allows capturing higher-order structure between graphs (beyond immediate neighborhoods).
Extensions & Future Work {#sec:extensions}
========================
Here we briefly discuss a number of areas of future investigation for our method.
Graph Encoders
--------------
Given the choice of input and reconstructed output, several additional graph encoders are possible, in addition to the Nodes-To-Edges encoder which we used in this work. To mention a few options:
#### Edge-To-Nodes Encoder
- This encoder is trained to predict the source and destination vertices given a specific edge. Similar to the *Node-To-Edges* encoder, this could be expressed as a multilabel classification task with the following objective function, $$J(\theta) = \sum_{e_{ij} \in E} \log \Pr(v_i \mid e_{ij}, \theta) + \log \Pr(v_j \mid e_{ij}, \theta)$$ Note that the number of edges in a graph could grow quadratically in the number of nodes; therefore, iterating over the edges is more expensive than iterating over the nodes.
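The objective above can be written down directly; here `prob` is a hypothetical stand-in for the model's output distribution $\Pr(v \mid e_{ij}, \theta)$.

```python
import math

def edges_to_nodes_objective(edge_endpoints, prob):
    # J(theta): for each edge e with endpoints (i, j), sum the
    # log-probabilities the model assigns to both endpoints.
    return sum(math.log(prob[e][i]) + math.log(prob[e][j])
               for e, (i, j) in edge_endpoints.items())
```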
#### Neighborhood Encoder
- In this case, the encoder is trained to predict a set of vertices or edges that are beyond the immediate neighbors. Random walks could serve as a mechanism to calculate a neighborhood around a specific node or edge. Given a partial random walk, the encoder has to predict the vertices that could have been visited within a specific number of hops.
$$J(\theta) = \sum_{\substack{(v_1, v_2, \cdots, v_{i}) \\ \thicksim RandomWalk(G, E, V)}} \log \Pr\big(v_{j}\mid( v_1, v_2, \cdots, v_{i}, \theta)\big)$$
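A partial-walk generator for such a neighborhood encoder might look like the following (adjacency is given as neighbor lists; the function is purely illustrative, and the encoder would be trained to predict nodes reachable from the returned prefix):

```python
import random

def partial_random_walk(adj, start, length, seed=0):
    # Sample a walk of `length` nodes; each step moves to a uniformly
    # chosen neighbor of the current node.
    rng = random.Random(seed)
    walk = [start]
    while len(walk) < length:
        walk.append(rng.choice(adj[walk[-1]]))
    return walk
```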
Attention Mechanism
-------------------
We proposed a simple attention mechanism which uses node-to-node alignment. As we discussed in Section \[sec:scalability\], we could replace the linear layer with a deep neural network to reduce the size of the model if scalability is an issue. While node-to-node alignment enhances the interpretability of our models, subgraph alignment could lead to a better and easier understanding of how two graphs are similar. Hierarchical attention models [@yang2016hierarchical] could lead to higher levels of abstraction which could learn community structure and which communities are similar across a pair of graphs. Hierarchy has already been used within the context of learning better node embeddings; for example, [@ying2018hierarchical] showed that a better understanding of the graph substructure can lead to better graph classification. Therefore, we believe extending the work beyond node-to-node alignment will significantly improve our results.
Regularization
--------------
We proposed attribute based losses to regularize our isomorphism attention mechanism. The graph encoder capacity was adjusted according to the source graph size. However, the source graph encoder could still suffer from overfitting, which would reduce its utility in recognizing similar target graphs. Therefore, further research is necessary to understand the relation between the encoder training characteristics and the quality of the generated divergence scores.
Feature Engineering
-------------------
In this work we have focused on developing an approach for representing graphs that operated without any feature engineering or algorithmic insights. While this willful ignorance has allowed us to design a new paradigm for graph similarity, we suspect that there are many fruitful combinations of this idea with other approaches for graph classification. For example, the graph embeddings we learn could be used as additional features for approaches based on learning supervised classifiers over graphs.
Conclusion
==========
In this work, we have shown that neural networks can learn powerful representations of graphs without explicit feature engineering. Our proposed method, Deep Divergence Graph Kernels, learns an encoder for each graph to capture its structure, and uses a novel *isomorphism preserving attention mechanism* to align node representations across graphs without the use of supervision. We show that representing graphs by their divergence from different source graphs provides a powerful embedding space over families of graphs. Our proposed model is both flexible and amenable to extensions. We illustrate this by proposing extensions to handle many commonly occurring varieties of graphs, including graphs with attributed nodes, and graphs with attributed edges.
Our experimental analysis shows that despite being trained with only the graph’s edges (and no feature engineering) the learned representations encode a variety of local and global information. When the representations produced by [<span style="font-variant:small-caps;">DDGK</span>]{} are used as features for graph classification methods, we find them to be competitive with challenging baselines which use at least one of graph labels, engineered features, or the Weisfeiler-Lehman framework. In addition to being powerful, [<span style="font-variant:small-caps;">DDGK</span>]{} models are incredibly informative. The learned isomorphism attention weights allow a level of insight into the alignment between a pair of graphs, which is not possible with other deep learning methods developed for graph similarity.
Unsupervised representation learning for graphs is an important problem, and we believe that the method of Deep Divergence Graph Kernels we have introduced here is an exciting step forward in this area. As future work, we will investigate 1) enhanced methods for choosing informative source graphs from the space of all graphs, 2) improving the architecture of our encoders and attention models, 3) making it easier to reproduce research results in the area of graph similarity, and 4) making graph similarity models even easier to understand.
[^1]: In this paper, we use source and anchor interchangeably when referring to the encoded graph.
|
It is clearly differentiable, by the usual differentiation formulas, for any point other than (0,0):
Since the partial derivatives are continuous, the function is differentiable. Further, the limits, as (x, y) go to 0, are 0.
So the only question is at (0,0).
Since those match the limits of the partial derivatives as (x, y) goes to (0,0), the function is also differentiable at (0,0).
Again, that shows that the partial derivatives exist and are continuous at (0,0). That implies that the function is differentiable at (0,0).
May 5th 2009, 07:52 PM
silversand
I am not sure with the continuous bit.
as (x,y) goes to 0, the denominator should be zero?
May 10th 2009, 08:08 AM
HallsofIvy
Quote:
Originally Posted by silversand
I am not sure with the continuous bit.
as (x,y) goes to 0, the denominator should be zero?
As far as (x,y) not (0,0) is concerned, the partial derivatives give rational function with NON-zero denominator. A rational function is always continuous at such a point.
As far as (0,0) is concerned, I showed that the partial derivatives at (0,0) are both 0.
To see what the limit of the derivatives is at (0,0), it is best to change to polar coordinates.
In polar coordinates, , and
The derivative becomes
Because of the factor, the limit, as r goes to 0, is 0, no matter what is. Because, in polar coordinates, r alone measures the distance from (0,0), that is enough to show that the limit is 0. |
Effects of PEG-liposomal oxaliplatin on apoptosis, and expression of Cyclin A and Cyclin D1 in colorectal cancer cells.
Oxaliplatin is one of the agents used against colorectal cancer. Using PEG-liposome encapsulated oxaliplatin may enhance the accumulation of drugs in tumor cells, inducing apoptosis. However, the mechanism of action of PEG-liposome encapsulated oxaliplatin remains unclear. SW480 human colorectal cancer cells were treated with empty PEG-liposomes, free oxaliplatin or PEG-liposomal oxaliplatin. Cell cycle and apoptosis were assessed using fluorescence confocal microscopy and terminal deoxynucleotidyl transferase-mediated dUTP-fluorescein nick-end-labeling (TUNEL). Western blotting was used to analyze the expression of pro-apoptotic, anti-apoptotic and cyclin proteins. We found that PEG-liposomal oxaliplatin induced a stronger apoptotic response than empty PEG-liposomes or free oxaliplatin. Moreover, expression of Cyclin D1 increased, whereas expression of Cyclin A decreased after treatment with PEG-liposomal oxaliplatin. Furthermore, the cell cycle was arrested in the G1 phase. The results presented here indicate that PEG-liposome entrapment of oxaliplatin enhances the anticancer potency of the chemotherapeutic agent. The effect of PEG-liposomal oxaliplatin on apoptosis of SW480 human colorectal cancer cells may be through regulation of expression of Cyclin A or Cyclin D1, as well as pro-apoptotic and anti-apoptotic proteins. |
Iterative projection algorithms for ab initio phasing in virus crystallography.
Iterative projection algorithms are proposed as a tool for ab initio phasing in virus crystallography. The good global convergence properties of these algorithms, coupled with the spherical shape and high structural redundancy of icosahedral viruses, allows high resolution phases to be determined with no initial phase information. This approach is demonstrated by determining the electron density of a virus crystal with 5-fold non-crystallographic symmetry, starting with only a spherical shell envelope. The electron density obtained is sufficiently accurate for model building. The results indicate that iterative projection algorithms should be routinely applicable in virus crystallography, without the need for ancillary phase information. |
Application of interval 2-tuple linguistic MULTIMOORA method for health-care waste treatment technology evaluation and selection.
The management of health-care waste (HCW) is a major challenge for municipalities, particularly in the cities of developing countries. Selection of the best treatment technology for HCW can be viewed as a complicated multi-criteria decision making (MCDM) problem which requires consideration of a number of alternatives and conflicting evaluation criteria. Additionally, decision makers often use different linguistic term sets to express their assessments because of their different backgrounds and preferences, some of which may be imprecise, uncertain and incomplete. In response, this paper proposes a modified MULTIMOORA method based on interval 2-tuple linguistic variables (named ITL-MULTIMOORA) for evaluating and selecting HCW treatment technologies. In particular, both subjective and objective importance coefficients of criteria are taken into consideration in the developed approach in order to conduct a more effective analysis. Finally, an empirical case study in Shanghai, the most crowded metropolis of China, is presented to demonstrate the proposed method, and results show that the proposed ITL-MULTIMOORA can solve the HCW treatment technology selection problem effectively under uncertain and incomplete information environment. |
Heterogeneity of protein conformation in solution from the lifetime of tryptophan phosphorescence.
The decay of Trp phosphorescence of proteins in fluid solutions was shown to provide a sensitive tool for probing the conformational homogeneity of these macromolecules in the millisecond to second time scale. Upon examination of 15 single-Trp-emitting proteins, multiexponential decays were observed in 12 cases, a demonstration that the presence of slowly interconverting conformers in solution is the norm rather than the exception. The amplitudes of the preexponential terms, from which the conformer equilibrium is derived, were found to be a sensitive function of solvent composition (buffer, pH, ionic strength and glycerol cosolvent), temperature, and complex formation with substrates and cofactors. In many cases, as the temperature is raised, a point is reached at which the decay becomes practically monoexponential, meaning that conformer interconversion rates have become commensurate with the triplet lifetime. Estimation of activation free energy barriers to interconversion shows that the large values of DeltaG* are rather similar among polypeptides and that the protein substates involved are sufficiently long-lived to display individual binding/catalytic properties. |
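As a small illustration of the amplitude analysis described above (all amplitudes and lifetimes below are invented, not taken from the paper): in a biexponential decay I(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2), the preexponential amplitudes a1 and a2 report the relative populations of the two slowly interconverting conformers.

```python
import math

def biexp(t, a1, tau1, a2, tau2):
    # phosphorescence intensity from two non-interconverting conformers
    return a1 * math.exp(-t / tau1) + a2 * math.exp(-t / tau2)

# invented parameters: lifetimes in seconds
a1, tau1 = 0.7, 0.050   # conformer 1, short triplet lifetime
a2, tau2 = 0.3, 0.500   # conformer 2, long triplet lifetime

# amplitude-weighted conformer populations (the conformer equilibrium)
f1 = a1 / (a1 + a2)
f2 = a2 / (a1 + a2)

# at t = 0 the total intensity is just the sum of the amplitudes
i0 = biexp(0.0, a1, tau1, a2, tau2)
```

When interconversion becomes fast relative to the triplet lifetime, as the abstract notes happens at elevated temperature, the two terms collapse into a single effective exponential and this population readout is lost.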
Pat McQuaid to visit Mars Hill
1042012
We are very excited to announce a visit to Mars Hill by Pat McQuaid, the current president of cycling’s international governing body, the Union Cycliste Internationale (UCI). The cycling community is quite familiar with McQuaid, who has served as president of the UCI since 2005, a rather controversial period. Recently, McQuaid has come under fire for topics ranging from women’s rights in the peloton to the proper height of socks. McQuaid promises to speak to these topics during his April 19th visit but will focus on how grassroots efforts impact the sport’s international growth.
Yet McQuaid’s visit to Mars Hill had nothing to do with cycling, at least initially. On a search for ancestors who immigrated to the US from Northern Ireland in the 18th century, McQuaid discovered that family on his mother’s side had first landed in Pennsylvania and then migrated south to the North Carolina Piedmont region looking for cheaper land. They eventually settled in Western North Carolina not far from Mars Hill, which he discovered when reviewing documents at the Southern Appalachian Archives housed at the Ramsey Center on campus. It was in an email exchange about these documents that McQuaid learned that Mars Hill had a cycling team.
At first, McQuaid was astonished to learn that American colleges had cycling teams–apparently he hadn’t heard of collegiate cycling yet! And while we’re proud to be the ones to inform the president of the great things all of us in collegiate cycling are doing, we’re even prouder to announce that McQuaid, in conjunction with his visit to the Southern Appalachian Archives, will visit Mars Hill to talk about the growth of cycling worldwide and how collegiate cycling can be an important factor in cycling’s long term development.
McQuaid is scheduled to visit on April 19th, exact time and location to be announced. We want to make sure McQuaid’s talk is free and open to the public, so stay tuned as we determine the best venue for this exciting event.
For now, please direct all inquiries to hmoran@mhc.edu before April 1st. |
Q:
In Perl, is there any difference between direct glob aliasing and aliasing via the stash?
In Perl, is there ever any difference between the following two constructs:
*main::foo = *main::bar
and
$main::{foo} = $main::{bar}
They appear to have the same function (aliasing all of the slots in *main::foo to those defined in *main::bar), but I am just wondering if this equivalency always holds.
A:
Maybe not the kind of difference you were looking for, but there are two big differences between *main::foo and $main::{foo}; the former looks up the glob in the stash at compile time, creating it if necessary, while the latter looks for the glob in the stash at run time, and won't create it.
This may make a difference to anything else poking about in the stash, and it certainly can affect whether you get a "used only once" warning.
A:
The following script:
#!/usr/bin/env perl
#mytest.pl
no warnings;
$bar = "this";
@bar = qw/ 1 2 3 4 5 /;
%bar = qw/ key value /;
open bar, '<', 'mytest.pl' or die $!;
sub bar {
return "Sub defined as 'bar()'";
}
$main::{foo} = $main::{bar};
print "The scalar \$foo holds $foo\n";
print "The array \@foo holds @foo\n";
print "The hash \%foo holds ", %foo, "\n";
my $line = <foo>;
print "The filehandle 'foo' reads ", $line;
print 'The function foo() replies "', foo(), "\"\n";
Outputs:
The scalar $foo holds this
The array @foo holds 1 2 3 4 5
The hash %foo holds keyvalue
The filehandle 'foo' reads #!/usr/bin/env perl
The function foo() replies "Sub defined as 'bar()'"
So if *main::foo = *main::bar; doesn't do the same thing as $main::{foo} = $main::{bar};, I'm at a loss as to how to detect a practical difference. ;) However, from a syntax perspective, there may be situations where it's easier to use one method versus another. ...the usual warnings about mucking around in the symbol table always apply.
A:
Accessing the stash as $A::{foo} = $obj allows you to place anything in the symbol table, while *A::foo = $obj places $obj in the expected slot of the typeglob according to $obj's type.
For example:
DB<1> $ST::{foo} = [1,2,3]
DB<2> *ST::bar = [1,2,3]
DB<3> x @ST::foo
Cannot convert a reference to ARRAY to typeglob at (eval 7)[/usr/local/perl/blead-debug/lib/5.15.0/perl5db.pl:646] line 2.
at (eval 7)[/usr/local/perl/blead-debug/lib/5.15.0/perl5db.pl:646] line 2
eval '($@, $!, $^E, $,, $/, $\\, $^W) = @saved;package main; $^D = $^D | $DB::db_stop;
@ST::foo;
;' called at /usr/local/perl/blead-debug/lib/5.15.0/perl5db.pl line 646
DB::eval called at /usr/local/perl/blead-debug/lib/5.15.0/perl5db.pl line 3442
DB::DB called at -e line 1
DB<4> x @ST::bar
0 1
1 2
2 3
DB<5> x \%ST::
0 HASH(0x1d55810)
'bar' => *ST::bar
'foo' => ARRAY(0x1923e30)
0 1
1 2
2 3
|
CHICAGO — After the Democratic Socialists of America became the largest socialist organization in the United States since World War II, its members broke into song. As Marcus Barnett, an organizer for Britain’s left-wing Momentum campaign, stood up to address the DSA’s biggest-ever convention Saturday, many of the nearly 700 delegates rose and belted out the name of the Labour Party’s unapologetic leftist leader.
“Oh, Jeremy Corbyn! Oh, Jeremy Corbyn!” they sang, to the melody of the White Stripes’ “Seven Nation Army.” Barnett raised his fist in the air and sang along, dazzled that a British soccer chant had traveled all the way to Chicago.
DSA, founded in 1982 to create a political foothold for Marxists, has transformed into an ambitious left-wing force. Membership grew during Bernie Sanders’s presidential campaign, and then started surging the day after Donald Trump was elected president, in what some DSA members jokingly call the “socialist baby boom.”
The DSA went from 8,000 members in 2015, the year its delegates endorsed Sanders for president, to about 25,000 in 2017, with chapters or branches in 49 states. Its platform calls for a worker-owned economy and the end of traditional capitalism.
“You are the antidote to total isolation of living under capitalism,” said Maria Svart, the national director of DSA, as the convention began. “It’s the job of organizers to build institutions that will be capable of absorbing masses of people and keeping them in motion.”
Although the group endorsed him, Sanders, whose campaign and lasting popularity changed public perceptions of socialism, has not been closely involved with the newly booming DSA. In a recent interview with The Post, Sanders suggested that the organization’s growth was one of many examples of how younger voters were rejecting the post-Reagan political consensus.
“Many young people understand that health care for all, making public colleges free, decent wages and affordable housing are all part of a democratic socialist program,” Sanders said.
The average age of DSA members has dropped from 64 in 2015 to about 30, according to an organizer. A May 2016 Gallup poll, conducted after most of the Democratic primaries, found that just 35 percent of Americans viewed socialism favorably. Among voters under 30, that number rose to 55 percent.
The youth of the DSA’s new membership has infused it with humor, irony and a dizzy confidence — much of it inspired by left-wing parties in Europe and South America. But on Saturday, after a short debate, DSA delegates voted to end their 35-year relationship with the Socialist International, the global network of left-wing parties.
Instead of seeking out stars, DSA members have focused on ultra-local campaigning. They joined sit-ins and protests against the Republican effort to repeal the Affordable Care Act, but they used them to advance arguments for single-payer health care similar to Canada’s. In California, DSA members have phone-banked and knocked on doors to back a state single-payer bill that the legislature’s Democratic supermajority has tabled; the campaign, however, is designed to continue even if the bill were to pass.
“You pick campaigns to engage in that will help people and build power for working people, but also set us up for more transformative work down the road,” explained Jared Abbott, an outgoing member of DSA’s national political committee. “There’s agreement about flexing independent, socialist electoral power in some kind of way, but people are being flexible about how to do it.”
In Chicago, DSA energy had been channeled through Carlos Rosa, a Democratic alderman who joined DSA after endorsing the Sanders campaign and helping it win heavily Latino wards. In Atlanta, Seattle and New York, there were more socialist candidates seeking office as Democrats or as third-party candidates in safe seats.
It would take years, Rosa said, to elect a critical mass of socialists; in 2019, he hoped to see at least five of them on Chicago’s 50-member city council. In the meantime, constant organizing and doorstep conversations would break the taboos voters still had about socialism.
“You explain what you stand for, and then you explain that this is democratic socialism,” Rosa said. “That’s how you overcome years and years of red-baiting.”
But a lack of “red-baiting” has been among the biggest surprises of DSA’s growth. In 2009, the tea party movement and conservative media, in tandem, condemned the Obama administration’s agenda as socialism at best and fascism — government intermingling itself with corporations — at worst. Frances Fox Piven, a former DSA board member, found herself at the center of Glenn Beck’s chalkboards as the radio and TV host explained how Obama’s stimulus and health-care policies would fulfill a long-term socialist plot to overthrow capitalism.
The DSA itself played a role in the panic. In his book “Radical-in-Chief,” National Review’s Stanley Kurtz pored over Obama’s memoirs and the records of DSA in New York to prove that the future president had attended at least one socialist conference.
Largely out of the spotlight, DSA members were instead building their own ironic media universe. Editors and writers for Jacobin, a socialist magazine whose growth also surged with the rise of Sanders, flitted around the convention as celebrities; so did co-hosts of the Chapo Trap House podcast, whose success had inspired a smaller podcast called the Discourse Collective.
The tone of the new socialist media can often be relentlessly ironic, surrealist and rude. Rather than policing it, DSA has embraced it. Christian Bowe, social media director for DSA, who tweets under the handle “Larry Website,” celebrated the 25,000th DSA membership by asking socialists to come up with terrible, garish memes. They obliged, with images of Shrek, Sonic the Hedgehog and legally embattled former Subway spokesman Jared Fogle commemorating the left’s new milestone.
“[Italian Communist Antonio] Gramsci proposed creating our own working-class culture which address our needs and vision,” he explained. “Memes are a snapshot of that; podcasts and publications like Jacobin and Current Affairs are extensions of it.”
All of it, after all, had led to a three-day conference during which nearly 1,000 delegates, observers and other DSA members approved a platform and elected leaders. On Saturday night, hundreds of them celebrated at the offices of the left-wing magazine In These Times, where free copies of Jacobin were distributed at the door.
A DJ dropped “Seven Nation Army” into the middle of a pop and hip-hop playlist, and the “Oh, Jeremy Corbyn” chant echoed through the hallways, which were marked by signs that quoted Karl Marx to make an important party announcement.
“From each according to their ability, to each according to their needs,” they read. “Please donate to ensure that everyone who needs a drink gets one.”
David Weigel is a national political correspondent covering Congress and grass-roots political movements. He is the author of “The Show That Never Ends,” a history of progressive rock music. |
CBS wants more Star Trek, a lot more. To expand Gene Roddenberry's sci-fi franchise, the US studio has signed Discovery co-creator and showrunner Alex Kurtzman to a multi-year contract.
Alongside Discovery, Kurtzman is working on the Captain Picard series and on the animated series Lower Decks. But that is still not enough for CBS, which has far more ambitious plans for the future of Star Trek: its TV presence is meant to bear comparison with the Marvel Cinematic Universe.
There is plenty of room in Star Trek for new stories
Kurtzman's job is to develop new series and to make sure that every project has a unique selling point, as he explains in an interview with THR. Asked how many Star Trek projects would be too many, he countered with Marvel's success in recent years:
"At some point people will say: 'This all feels so familiar.' The only thing I would counter with is that nobody has ever said that about Marvel. From film to television, nobody can get enough of them. In a world with a global audience, that means there is always room for more, but that 'more' has to have a purpose."
Kurtzman is working on several projects
While the second season of Star Trek Discovery launches on Netflix on January 18, 2019, Kurtzman's team is working flat out on the Picard series starring Patrick Stewart. It is the first series since Voyager and the feature film Nemesis that is not a prequel.
Rick and Morty writer Mike McMahon is producing the animated comedy Star Trek: Lower Decks for CBS. The US studio ordered two seasons of the series right away, and it is meant to take a humorous look at the sci-fi franchise. Kurtzman's team is planning further projects beyond that: at least one more animated series is in the works. Both projects are to be unveiled in the spring, once the second season of Discovery has ended.
Kurtzman is negotiating with Michelle Yeoh about a spin-off series to Star Trek Discovery that would put Philippa Georgiou center stage. The project could focus on Section 31, the secret organization that exists within Starfleet.
Stephanie Savage and Josh Schwartz (Gossip Girl, the Dynasty reboot) are working on a series called Starfleet Academy. As the name suggests, everything in it revolves around Starfleet's academy. (Duh.) The project is said to be aimed at a younger audience. Whether the Georgiou spin-off and Starfleet Academy will actually be made is still unclear.
Michelle Yeoh may yet go from supporting character to leading lady.
Measuring up to Marvel? Star Trek is already pretty big
Star Trek is already more than 50 years old and, much like Marvel, stands for a huge entertainment universe. With Discovery, the Picard series and Lower Decks, the franchise now comes to nine series, or ten counting the Short Treks short films. Add to that 13 feature films.
Beyond film and television, Star Trek has likewise been massively expanded over the years: numerous comics and books tell stories about Starfleet and the Federation. The Roddenberry franchise has also been immortalized in numerous board games and video games; among the most successful titles are the MMO Star Trek Online, the first-person shooter Star Trek: Voyager – Elite Force and the PC strategy game Armada.
The key facts about Star Trek Discovery
The second season of Star Trek Discovery launches on Netflix on January 18, 2019, this time with 14 episodes.
About the author: Alexander Schneider is an editor at IGN. You can follow him on Twitter and Instagram: @JannLee360. |
ts digit of v?
8
Let h(b) be the third derivative of -1/4*b**4 + 4/3*b**3 + 0*b - 3*b**2 + 1/60*b**5 + 0. What is the units digit of h(6)?
8
Let n(p) = -p - 3. Let z be n(-3). Suppose z = 2*q + 33 + 15. Let i = 42 + q. What is the tens digit of i?
1
Suppose 2*f - g - 187 = -4*g, 4*f + 3*g = 371. What is the tens digit of f?
9
Suppose -2*r - 4*z + 9*z + 25 = 0, -5*r - 5*z - 25 = 0. Suppose -3*q - 2*x + 30 = r, -3*q + x - 5*x + 24 = 0. What is the units digit of q?
2
Suppose 14 + 51 = 5*j. What is the tens digit of j?
1
Suppose -n - 3*g + 17 = 0, -3*g + 2 = -3*n + 5. Suppose -3*y + 1 = -n*y - 5*h, -2*h = -2*y + 34. What is the tens digit of y?
1
Let g(h) = -h - 4*h**2 + 2*h**3 - 2*h**3 + 0*h**3 + 4 - 2*h**3. What is the tens digit of g(-3)?
2
Let k(v) = v + 9. Let n be k(-9). Suppose 5*h = 4*a - 40, -a + n*a - 3*h + 27 = 0. What is the tens digit of a?
1
Suppose 76*k + 240 = 81*k. What is the tens digit of k?
4
Suppose -38 = 16*t - 17*t. Let s be 33*(-3)/((-9)/(-2)). Let o = s + t. What is the tens digit of o?
1
What is the units digit of ((-364)/(-35))/(2/5)?
6
Let p = -9 - -12. What is the units digit of p?
3
Let m(c) = -c - 3. Suppose -2*f - f = 24. Let q be m(f). Suppose a = q*a - 16. What is the units digit of a?
4
Let i = 68 - 45. What is the tens digit of i?
2
Let t(p) = p**3 - p**2 + 2*p - 3. Let y be t(2). Let h(u) = 7 - 1 - 4*u + 3*u. What is the units digit of h(y)?
1
Let t be (2 + -1)/1 + 3. Suppose m - t*z = 6, 0*m - 4*z = 4*m - 24. What is the units digit of m?
6
Suppose -30 = -p + 111. What is the tens digit of p?
4
Suppose -2*v + 5*j = -5*v + 514, -500 = -3*v + 2*j. What is the tens digit of v?
6
Let q be (-3*1)/(3/(-4)). Suppose -138 = l - q*l. What is the units digit of l?
6
Let k(d) be the second derivative of d**5/20 - 7*d**4/12 - 7*d**3/6 - d**2/2 + 3*d. What is the units digit of k(8)?
7
Let g be 12/15*60/8. Let j be (-1 - 6/(-2)) + 6. Let b = j - g. What is the units digit of b?
2
Let n = -10 + 3. Let o(j) = j + 10. Let d be o(n). Suppose -1 = -d*x + 2. What is the units digit of x?
1
Let f(k) = k**3 - 3*k**2 - 4*k + 2. Let l be f(4). Suppose -6 = -l*v - 2. Let g(a) = a**2 - 3*a + 3. What is the units digit of g(v)?
1
Suppose 2*c - 6*c = -68. What is the units digit of c?
7
Suppose 8*o - 9*o + 42 = 0. What is the units digit of o?
2
Suppose 12 = -c - 3*g, 5 + 4 = -c - 2*g. Let q = 89 - 73. What is the units digit of 1/(c/9) + q?
3
Let n(v) = v. Let o be n(-3). Let i be 11/3 - 1/o. Suppose -i = -2*f - 0. What is the units digit of f?
2
What is the units digit of (2/7 + 64/21)*6?
0
Suppose -2*w + 68 = k + 3*k, 10 = 5*k. What is the units digit of w?
0
Let b be (-1)/((6/3)/(-4)). Let a = b - 0. Suppose -2*k - 4*h + 64 = 2*k, -a*h - 6 = 0. What is the tens digit of k?
1
Let n(r) = -6*r. Let c(a) = -5*a. Let t(b) = -7*c(b) + 6*n(b). What is the units digit of t(-4)?
4
Let r be 51 + 0 + -4 + 3. Let t = r - 19. What is the tens digit of t?
3
Let l(p) = -p - 2. Let s be l(-3). Suppose 0 = 4*t - 3 - s. Suppose 4 = 3*w + t. What is the units digit of w?
1
What is the units digit of (12/8)/(1/38)?
7
Suppose 0 = -2*k - 0*k + 2. Suppose -x + 2*r + k = 0, 2*r = 2*x - 14 + 6. Suppose -8 = 3*p - x*p. What is the units digit of p?
2
Suppose -x = -2*x + 374. Suppose -4*j = 178 + x. What is the units digit of 2/5 + j/(-30)?
5
Let c(t) = 4*t - 14. Let w be c(6). Let k = w + -1. What is the units digit of k?
9
Suppose -35*l = -40*l + 270. What is the units digit of l?
4
What is the units digit of ((-624)/(-56))/(2/14)?
8
Suppose y = 5*l - 18 - 3, 3*y - 5*l = -13. Suppose -5*t + y*t + 4 = 0. What is the units digit of t?
4
Suppose 2*w = 11 - 1. Suppose w*z + 2*y - 26 = 0, -3*z + 3*y + 21 = 6*y. Suppose z*j - 16 = -4. What is the units digit of j?
3
Let r = -37 + 69. What is the tens digit of r?
3
What is the tens digit of (-5)/(-2)*(-364)/(-35)?
2
Let v = -46 - -21. Let l = v - -15. What is the units digit of 12/10*l/(-3)?
4
Suppose -118 = -3*v + 2*c + 349, 313 = 2*v - c. What is the hundreds digit of v?
1
Let z = -10 + 4. What is the units digit of (z/15)/((-1)/10)?
4
Let o = -256 - -425. What is the hundreds digit of o?
1
Suppose -6*j + 150 = -2*j - 5*q, -j + 3*q + 41 = 0. Suppose 3*i + 3*d = 7 + j, -4*i = -3*d - 42. What is the tens digit of i?
1
Let q(f) = -2*f - 5. Let j(c) = -6*c - 16. Let v(h) = 5*j(h) - 16*q(h). Let r be v(2). What is the units digit of -2 + r + 4 - 2?
4
Suppose 4*g + 0*g = 476. What is the tens digit of g?
1
Let n(j) = 4*j**3 - j**2 + 4*j + 2. Let t be n(6). Suppose 2*x - 56 = 5*c + 294, -5*x + 2*c + t = 0. What is the units digit of (-6)/4*x/(-15)?
7
Let f(o) = -o + 6. Let g be f(-10). Let x = 27 - g. What is the units digit of x?
1
Suppose 5*n - 8*n = -15. What is the units digit of n?
5
Suppose -6*a + 450 = -2*a + 3*p, 2*p + 329 = 3*a. What is the tens digit of a?
1
What is the units digit of (-495)/(-44) + 1/(-4)?
1
What is the units digit of 357/9 - (-1)/3?
0
Let p = 5 - 4. What is the units digit of p?
1
What is the units digit of (-162)/(-12) + 5/(-2)?
1
Suppose 3*o = -h + 137, -h - 16*o = -14*o - 139. What is the tens digit of h?
4
Let c = -11 + 31. What is the units digit of c?
0
Let b(u) = -u**2 + 6*u + 3. Let n be b(6). Let i = n - 0. Suppose -i*a + 4*a - 4 = 0. What is the units digit of a?
4
Let j = -6 + 0. What is the units digit of (-4)/j*(-21)/(-14)?
1
Let d = -9 + 13. Suppose 2*x - 3*s - 17 = -d, 4*s = 4*x - 20. What is the units digit of x?
2
Let p be (-4)/(-10) - (-7)/(-5). Let y be (1*-2)/(2/p). What is the units digit of 2 + y - (-1 - -3)?
1
Let m be ((-2)/(-3) - 1)*(-66 + 0). Let z be 1/(9/8 - 1). What is the tens digit of (z - m)*6/(-4)?
2
Suppose -3*j + 0*j = 21. Let i(m) = m**3 + 7*m**2 - 4*m - 9. What is the tens digit of i(j)?
1
Let w be -2 + 21/(-3)*-4. Suppose 4*u = 2*o - w, -2*o - 3*u = -7*o + 100. What is the units digit of o?
3
Let o be (2 + -1 - 1)*-1. Suppose z - 5 = -o*z. What is the units digit of z?
5
Let p(o) be the second derivative of -o**5/60 - o**4/24 - o**3/2 - 2*o. Let f(w) be the second derivative of p(w). What is the units digit of f(-4)?
7
Suppose 5*n = -7 - 48. Let i = n - -22. What is the units digit of i?
1
Let r = 46 + -31. What is the tens digit of r?
1
Let d(t) = 2*t**2 - 5*t**2 + 4*t**2. Let c = -4 + 2. What is the units digit of d(c)?
4
Let l = -229 - -379. Suppose 6*s - 84 = l. What is the tens digit of s?
3
Suppose -r = -5*f + 29, 3*r - 4*f - f = -37. Let m = 5 + r. What is the units digit of m?
1
Suppose -6 = -4*j - 2. Let q be 3/6*j*70. Suppose -4*p + q = p. What is the units digit of p?
7
Let d be 2 - (1/(-1))/(-1). Suppose 3*f = b - d, -4*b = 2*f + b - 22. Suppose g - 12 = -2*j, -j = 5*g + f + 2. What is the units digit of j?
7
Suppose 0 = 3*q - 6*q + y + 55, 5*y - 85 = -3*q. Suppose -6*w + q = -w. What is the units digit of w?
4
Suppose -a = -2*a + 1. Let d = 1 + a. Suppose -4 - d = -2*w. What is the units digit of w?
3
Suppose -3*r = 6, r = -4*t + 1 + 5. Let n(l) = 4 - 5 - l**t - l**2 - l**3 + l + 0. What is the units digit of n(-3)?
5
Suppose 0*v + 2*y = -v + 94, -v + 89 = y. Suppose 4*l - v = 16. What is the units digit of l?
5
Let t(c) = c - 9. Let s be t(9). Suppose -2*d = 4*q - 12, s = 5*q + 5*d - 10. What is the units digit of q?
4
Suppose -s = -5*b - 27, 4*s - 2*b - 2*b - 28 = 0. Let x(w) = w + w - 2*w**2 + w**3 - 7 - 4*w**s. What is the units digit of x(6)?
5
Let a(q) = q**2 - 2*q - 9. Suppose -5*n + 2*i = -37, 28 = 4*n + 2*i + 2. What is the units digit of a(n)?
6
Let c = 55 + 82. What is the tens digit of c?
3
Let t be -7*(-1 - 4/(-2)). Let i = t - -5. What is the tens digit of i/(-8) + 188/16?
1
Suppose 0 = -4*b + 5*s + 458, -4*b - 5*s + 70 + 368 = 0. What is the tens digit of b?
1
What is the units digit of (-8)/(-20)*(152 - 2)?
0
Let g = -4 + 7. Suppose 0 = -0*l + g*l - 12. What is the units digit of l?
4
Let p(q) = 0*q**2 + 2*q**3 - 4 + 0*q**3 + 5*q - q**3 + 7*q**2. Let i be p(-5). Let u = -12 + i. What is the units digit of u?
9
Let r(y) = -y**3 - 4*y**2 - 2*y + 2. Let h be r(-2). What is the units digit of 0 + (h - 1 - -35)?
2
Suppose -3*k - 2*g - 8 = 0, 3*k + 5*g - g + 10 = 0. Let v = k - -15. What is the tens digit of v?
1
Suppose 272 = 3*l + l. What is the tens digit of l?
6
Let d = 55 - -6. What is the units digit of d?
1
Let g = 15 |
Reduction of doxorubicin cardiotoxicity by prolonged continuous intravenous infusion.
Doxorubicin (Adriamycin) was administered by continuous infusion to reduce peak plasma levels and thus lessen cardiac toxicity. Cardiotoxicity was monitored by noninvasive methods, and endomyocardial biopsy specimens were studied by electron microscopy. Cardiotoxicity was compared in 21 patients receiving doxorubicin intravenously over 48 or 96 hours and in 30 control patients treated by standard intravenous injection. Both groups were studied prospectively and were well matched by risk factors for doxorubicin cardiotoxicity. The median cumulative dose for those receiving continuous infusion was 600 mg/m2 body surface area (range, 360 to 1500 mg/m2) compared with 465 mg/m2 (range, 290 to 680 mg/m2) in the control group (p = 0.002). Fourteen of the 30 patients in the control group showed severe morphologic changes in the biopsy specimens, precluding further doxorubicin administration, as compared with two of 21 patients receiving the drug by continuous infusion (p less than 0.02). The mean pathologic score for the infusion group, 0.9, was lower than the mean for the control group, 1.6 (p = 0.004). Antitumor activity was not compromised. Decreasing peak plasma levels of doxorubicin by continuous infusion reduces cardiotoxicity. |
Mission
Project BrainHeart's mission is to improve the emotional health of future generations by building stronger brains at the outset of life through empowering caregivers to form more secure attachment relationships with infants. |
Sanford Speaks Out is the latest blog sensation written, edited and produced by Sanford D. Horn, a writer and educator. Sanford will write about issues of the day covering a myriad subjects: politics, education, culture, sports, religion and even food.
Thursday, March 22, 2012
Obama Should Do His Own Homework
Commentary by Sanford D. Horn
March 22, 2012
Coming to a public school near you – your children are enlisted as political operatives?
Be careful, parents, if you are not paying attention to the curriculum and the assignments thrust upon your children in their social studies or American history classes, the continued far-left indoctrination will march on unfettered.
Although the case of Liberty Middle School teacher Michael Denman occurred in Fairfax County, Virginia, it is not unimaginable that such a situation could occur in your local school district, whether in a red or a blue state.
While there was no debate over that aspect of the assignment, there was conflict concerning whether or not the students were required to submit their findings to the Obama reelection campaign committee. The class assignment called for such a submission, while Fairfax County Public Schools spokesman John Torre said sending the report of the GOP candidates’ vulnerabilities was not required.
Unfortunately, there was no condemnation of the nature of the assignment in the first place coming from Torre, nor was there any disciplinary action meted out against Denman for such a clearly one-sided assignment.
Denman’s attempt to take advantage of his position of authority over impressionable youths with an assignment of unmistakable bias is part of what is wrong with the public schools today. In fact, by having his students research one side of the aisle, Denman, an honors civics teacher at Liberty Middle School, has gone contrary to the school’s name, as such an assignment limits people’s liberty.
Research is certainly a productive teaching tool for students. However, vet all the candidates; hold a school assembly with students debating the issues while representing the candidates. Decorate the school with posters and banners; then hold a mock primary in the spring and a mock general election in the fall. Having taught social studies and American history, I know these are successful, hands-on teaching tools.
Let this be a teachable moment for Mr. Denman, who should face some measure of discipline. Other teachers have been punished for lesser offenses.
Indoctrination in the public schools, which happens far too often, typically goes unchecked. Share an opinion if students ask, but clearly state that is precisely what it is – an opinion that does not require agreement. This is a cautionary tale for teachers and schools across the fruited plain. |
CNN's Miguel Marquez travels to Wisconsin to check in with voters in a rural county that went for Trump despite not having voted for a Republican since Nixon.
Source: CNN |
Cookbook creator says: I made this, and it was delicious! Loved the lime juice, which is not something I would have thought of adding. I'll definitely make again, although I used 1/3 cup quinoa per serving which made it high in calories so next time I must cut back to 1/4 cup. |
Optimisation of photopolymers for holographic applications using the Non-local Photo-polymerization Driven Diffusion model.
An understanding of the photochemical and photo-physical processes which occur during photopolymerization is of extreme importance when attempting to improve a photopolymer material's performance for a given application. Recent work carried out on the modelling of the mechanisms which occur in photopolymers during and after exposure has led to the development of a tool which can be used to predict the behaviour of these materials under a wide range of conditions. In this paper, we explore this Non-local Photo-polymerization Driven Diffusion model, illustrating some of the useful trends which the model predicts, and we analyse their implications for the improvement of photopolymer material performance. |
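To make the flavour of such models concrete, here is a drastically reduced 1-D sketch (not the NPDD model itself: it omits the non-local response function, chain-length effects and time-varying rate constants, and every parameter value is arbitrary). Monomer is consumed fastest where a sinusoidal illumination fringe is brightest, and the resulting concentration gradient drives diffusion from dark regions into bright ones.

```python
import math

# grid spacing, time step, diffusion constant, polymerization rate (all arbitrary)
N, dx, dt = 50, 1.0, 0.1
D, k = 1.0, 0.05

m = [1.0] * N  # initial monomer concentration, uniform
# sinusoidal illumination fringe, brightest at i = 0, darkest at i = N/2
I = [0.5 * (1.0 + math.cos(2.0 * math.pi * i / N)) for i in range(N)]

for _ in range(200):
    new = m[:]
    for i in range(N):
        # periodic second difference approximates the diffusion term
        lap = m[(i - 1) % N] - 2.0 * m[i] + m[(i + 1) % N]
        # explicit Euler step: diffusion minus illumination-driven consumption
        new[i] = m[i] + dt * (D * lap / dx ** 2 - k * I[i] * m[i])
    m = new
# monomer ends up depleted in the bright fringe relative to the dark one
```

It is spatial redistribution of exactly this kind, and the refractive-index modulation it leaves behind, that the full model is used to optimize for holographic recording.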
Why Were These Space Shuttles Abandoned in the Desert?
May 10, 2017 - On a desert plain in Central Asia, where camel caravans once carried treasures between oasis towns, lie ruins from a different era: two space shuttles in a massive hangar. Russian photographer Alexander Kaunas and a companion visited the disused site within the otherwise still operational Baikonur Cosmodrome in Kazakhstan, the launch center for Russian space missions. After its Cold War rival, the United States, started a space shuttle program, the Soviet Union followed suit with similar-looking vehicles. But shortly after the dissolution of the USSR, Russia stopped developing the shuttles, leaving behind something like a time capsule for an alternate history. |
Amplification of lymph node cell tube leukocyte adherence inhibition (LAI) reactivity by leukocyte adherence inhibition factor (LAIF).
This investigation examines the immunologic basis for specific antigen-induced tube leukocyte adherence inhibition (LAI) reactivity of draining lymph node cells (LNC) from dogs with canine transmissible venereal sarcoma (CTVS). CTVS regressor LNC, macrophage-depleted LNC, and enriched T lymphocyte fractions, but not enriched B lymphocyte fractions, were specifically reactive to CTVS antigen extract in direct tube LAI. In addition, regressor LNC amplified tube LAI responses by generating supernatants with leukocyte adherence inhibition factor (LAIF) activity for normal dog indicator LNC and enriched peripheral blood mononuclear cells (PBMC) in an indirect tube LAI assay. However, macrophage-depleted LNC and enriched T lymphocyte fractions failed to generate supernatants with LAIF activity, suggesting that macrophage accessory cells play a central role in the amplification of tube LAI. Interestingly, CTVS regressor peripheral blood leukocytes (PBL) and PBMC, which were specifically reactive in direct tube LAI, also failed to generate supernatants with LAIF activity. These findings demonstrate a distinction between LAIF-mediated amplification and direct tube LAI reactivity, and suggest that leukocyte populations with differing cellular proportions and from different immunologic compartments may participate in tube LAI via different mechanisms.
Posts Tagged Milton Friedman
It has been over four decades since Milton Friedman’s treatise on corporate social responsibility, but the obligations of business in society are still a hot topic of debate.
In India, however, that obligation has been laid out explicitly in new regulations set to take effect on April 1st. The bill, which was officially passed into law on August 29th, 2013, contains updates on a broad range of issues. Of interest is section 135 which outlines a company’s CSR requirements:
Form a CSR committee with at least three directors, one of which must be independent. This committee will be charged with reviewing a company’s CSR policy, making expenditure recommendations, and providing oversight.
The company must spend at least 2% of the average net profits during the 3 preceding financial years on CSR activities outlined by the board. Preference must be given to local areas in which a company operates.
These requirements will apply to companies with a net worth of 500 crore (USD $81 million) or turnover of 1000 crore (USD $162 million) or a net profit of 5 crore (USD $810,000).
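The arithmetic of the rule above is simple. As a back-of-the-envelope sketch (all figures are invented for the example, and the csrObligation helper is our own, not anything defined in the bill):

```javascript
// Hypothetical sketch of the section 135 CSR spend calculation:
// 2% of the average net profit over the three preceding financial years.
// Figures are invented for illustration (amounts in crore rupees).
function csrObligation(netProfits) {
  // netProfits: net profit for each of the three preceding financial years
  var sum = netProfits.reduce(function (a, b) { return a + b; }, 0);
  var average = sum / netProfits.length;
  return 0.02 * average; // the mandated 2%
}

// A company earning 10, 12 and 14 crore over the three preceding years
// averages 12 crore, so it must earmark about 0.24 crore for CSR.
console.log(csrObligation([10, 12, 14]));
```

At the exchange rate implied in the article (5 crore ≈ USD $810,000), that 0.24 crore comes to roughly USD $39,000.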
Based on total economic activity, some estimates place the total amount to be earmarked for social causes around US$2.5 billion.
There are some problems with this approach. First, CSR is treated as analogous to corporate philanthropy. While this is partially true, it addresses only one facet of CSR. Social responsibility is a much more holistic view of a company’s actions within a society. Simply earmarking 2% of profits for government-approved causes represents the purest form of the social tax Friedman described in his article.
For example, CEOs who pay their employees well above the market rate may feel that they are contributing to society but the commission has already ruled that employee compensation will not be counted towards the CSR quota.
Compare this to CSR practices in many Nordic countries whose companies frequently top the list on a variety of CSR performance indices. The corporate attitude has long been one of “implicit” CSR, where social impact is built into the foundation of how a firm operates.
Trond Giske, the Norwegian minister for Trade and Industry captured this succinctly in his remarks for a 2012 conference, “Many elements of CSR are at the core of the Nordic Welfare model, such as decent work, gender equality, involvement of citizens and social dialogue.” Surely an Indian company acting out these values could be said to be fulfilling its social contract, but how does one quantify and count these actions towards the required 2%?
Education is a primary area declared a valid CSR category for the spending requirement. Poor facilities and a lack of technology have been cited as causes of India’s poor educational system; something that philanthropy could address. However, these elements mean nothing if a student can’t go to class because their family relies on them to contribute to the family’s income. This is especially true in poorer areas: one report published in 2009 noted an average absentee rate of 25% among rural students, compared to 10% among their urban counterparts. While corporate philanthropy can begin to address educational issues, it will take a more fundamental change in philosophy to improve the more systemic causes such as poor wages.
The law restricts a business’s ability to be strategic with its social contributions. Economic, social, and environmental sustainability is achieved when a firm can align its altruistic actions with core business objectives, a concept Michael Porter outlined as creating ‘shared value.’ Unfortunately, the shared-value approach requires the commitment and alignment of all organizational functions, a state that is not achieved by simply contributing 2% of profits.
With that said, India is breaking new ground by formalizing a CSR policy, but at this point it represents little more than a tax, not true innovation.
[colors]
# Base16 Heetch Light
# Author: Geoffrey Teale (tealeg@gmail.com)
foreground = #5a496e
foreground_bold = #5a496e
cursor = #5a496e
cursor_foreground = #feffff
background = #feffff
# 16 color space
# Black, Gray, Silver, White
color0 = #feffff
color8 = #9c92a8
color7 = #5a496e
color15 = #190134
# Red
color1 = #27d9d5
color9 = #27d9d5
# Green
color2 = #f80059
color10 = #f80059
# Yellow
color3 = #5ba2b6
color11 = #5ba2b6
# Blue
color4 = #47f9f5
color12 = #47f9f5
# Purple
color5 = #bd0152
color13 = #bd0152
# Teal
color6 = #c33678
color14 = #c33678
# Extra colors
color16 = #bdb6c5
color17 = #dedae2
color18 = #392551
color19 = #7b6d8b
color20 = #ddd6e5
color21 = #470546
Q:
Replacing strings with replace in JS
Hello, I have the following JS code that detects whether a block of content contains any of the following character sequences, which then have to be replaced with images. The problem is that the replace function won't let me replace them. Here is the code:
function validarContenido(contenido){
var emoji = [":)","XD",":P",":(",":*","X_X","|**|"];
var icono = ["img/emojis/emoji1.png",
"img/emojis/emoji2.png",
"img/emojis/emoji3.png",
"img/emojis/emoji4.png",
"img/emojis/emoji5.png",
"img/emojis/emoji6.png",
"img/emojis/emoji7.png"];
for(var i=0; i<emoji.length; i++){
var estaEmoji = contenido.search(emoji[i]);
console.log(estaEmoji);
if(estaEmoji!=-1){
contenido = contenido.replace(emoji[i],icono[i]);
}
}
console.log(contenido);
return contenido;
}
And the error it throws is the following:
Uncaught SyntaxError: Invalid regular expression: /:)/: Unmatched ')'
at String.search (<anonymous>)
at validarContenido (controlador.js:195)
at HTMLButtonElement.<anonymous> (controlador.js:151)
at HTMLButtonElement.dispatch (jquery.min.js:2)
at HTMLButtonElement.y.handle (jquery.min.js:2)
A:
split() & join() methods
If you don't want to use regular expressions, you can achieve what you want with the String methods split() and join().
Starting from your code, here is an example of how to do it:
function validarContenido(contenido){
var emoji = [":)","XD",":P",":(",":*","X_X","|**|"];
var icono = ["img/emojis/emoji1.png",
"img/emojis/emoji2.png",
"img/emojis/emoji3.png",
"img/emojis/emoji4.png",
"img/emojis/emoji5.png",
"img/emojis/emoji6.png",
"img/emojis/emoji7.png"];
for(var i=0; i<emoji.length; i++) {
contenido = contenido.split(emoji[i]).join(icono[i])
}
console.log(contenido);
return contenido;
}
validarContenido(';)')
validarContenido(':)')
validarContenido(":), XD, :P, :(, :*, X_X, |**|, :), XD, :P, :(, :*, X_X, |**|");
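For completeness: the original error comes from String.prototype.search, which converts a plain-string argument into a regular expression, and ":)" contains an unmatched ")". (Note also that replace with a string pattern only replaces the first occurrence.) If you would rather stay with replace and handle every occurrence, the special characters can be escaped first. A sketch of that alternative; the escapeRegExp helper is our own addition, not part of the original code:

```javascript
// Backslash-escape every character that is special inside a RegExp,
// so emojis like ":)" or "|**|" can be matched literally.
function escapeRegExp(texto) {
  return texto.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

function validarContenido(contenido) {
  var emoji = [":)", "XD", ":P", ":(", ":*", "X_X", "|**|"];
  var icono = ["img/emojis/emoji1.png", "img/emojis/emoji2.png",
               "img/emojis/emoji3.png", "img/emojis/emoji4.png",
               "img/emojis/emoji5.png", "img/emojis/emoji6.png",
               "img/emojis/emoji7.png"];
  for (var i = 0; i < emoji.length; i++) {
    // "g" flag: replace every occurrence, not just the first one
    contenido = contenido.replace(new RegExp(escapeRegExp(emoji[i]), "g"), icono[i]);
  }
  return contenido;
}

console.log(validarContenido(":) hola :)"));
```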
/*
* Copyright (c) Facebook, Inc. and its affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
package com.facebook.react.views.picker;
import android.widget.Spinner;
import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import com.facebook.react.bridge.ReadableArray;
import com.facebook.react.uimanager.SimpleViewManager;
import com.facebook.react.uimanager.ThemedReactContext;
import com.facebook.react.uimanager.UIManagerModule;
import com.facebook.react.uimanager.ViewProps;
import com.facebook.react.uimanager.annotations.ReactProp;
import com.facebook.react.uimanager.events.EventDispatcher;
import com.facebook.react.views.picker.events.PickerItemSelectEvent;
import java.util.List;
/**
* {@link ViewManager} for the {@link ReactPicker} view. This is abstract because the {@link
* Spinner} doesn't support setting the mode (dropdown/dialog) outside the constructor, so that is
* delegated to the separate {@link ReactDropdownPickerManager} and {@link ReactDialogPickerManager}
* components. These are merged back on the JS side into one React component.
*/
public abstract class ReactPickerManager extends SimpleViewManager<ReactPicker> {
@ReactProp(name = "items")
public void setItems(ReactPicker view, @Nullable ReadableArray items) {
final List<ReactPickerItem> pickerItems = ReactPickerItem.createFromJsArrayMap(items);
view.setStagedItems(pickerItems);
}
@ReactProp(name = ViewProps.COLOR, customType = "Color")
public void setColor(ReactPicker view, @Nullable Integer color) {
view.setStagedPrimaryTextColor(color);
}
@ReactProp(name = "prompt")
public void setPrompt(ReactPicker view, @Nullable String prompt) {
view.setPrompt(prompt);
}
@ReactProp(name = ViewProps.ENABLED, defaultBoolean = true)
public void setEnabled(ReactPicker view, boolean enabled) {
view.setEnabled(enabled);
}
@ReactProp(name = "selected")
public void setSelected(ReactPicker view, int selected) {
view.setStagedSelection(selected);
}
@Override
protected void onAfterUpdateTransaction(ReactPicker view) {
super.onAfterUpdateTransaction(view);
view.commitStagedData();
}
@Override
protected void addEventEmitters(final ThemedReactContext reactContext, final ReactPicker picker) {
picker.setOnSelectListener(
new PickerEventEmitter(
picker, reactContext.getNativeModule(UIManagerModule.class).getEventDispatcher()));
}
@Override
public void receiveCommand(
@NonNull ReactPicker view, String commandId, @Nullable ReadableArray args) {
switch (commandId) {
case "setNativeSelectedPosition":
if (args != null) {
view.setImmediateSelection(args.getInt(0));
break;
}
}
}
private static class PickerEventEmitter implements ReactPicker.OnSelectListener {
private final ReactPicker mReactPicker;
private final EventDispatcher mEventDispatcher;
public PickerEventEmitter(ReactPicker reactPicker, EventDispatcher eventDispatcher) {
mReactPicker = reactPicker;
mEventDispatcher = eventDispatcher;
}
@Override
public void onItemSelected(int position) {
mEventDispatcher.dispatchEvent(new PickerItemSelectEvent(mReactPicker.getId(), position));
}
}
}
Credit Card HQ Set Somewhere In McLean Area
September 30, 1977
Visa U.S.A. Inc., formerly called Bank Americard and one of the country's two principal bank credit card companies, plans to establish a head-quarters in Northern Virginia for the Eastern half of the U.S. as well as Canada and Latin America.
By the middle of next year, according to a Visa spokesman, the credit card company will be hiring between 60 and 80 persons to staff the new office. A site in the McLean area has been selected. The company declined to identify the exact location, saying Visa's name will not be on the building, apparently for security reasons.
San Francisco-based Visa also said it will locate an international headquarters in Lausanne, Switzerland. The offices here and in the Swiss city are part of a program designed to accelerate Visa's growth, according to president Dee W. Hock.
A full computer center will be constructed in McLean to support all credit transactions using Visa cards for half of the country, 24 hours a day, seven days a week. The area center will authorize charges and will be capable of taking over worldwide operations for the credit card firm in the event of computer failure at similar centers in San Mateo, Calif., and Switzerland.
Worldwide Visa volume in the second quarter of 1977 was a record $4.5 billion, up more than 26 per cent from a year earlier. Hock said he expects this growth rate to increase in the next two or three years.
Visa is owned and operated by more than 9,000 member financial institutions and is accepted at 2.2 million merchant establishments around the globe. The credit card's strength traditionally has been in the West, but expansion in the Eastern states helped spur the decision for a new headquarters here, officials said.
BHS coach pleased with camp numbers
Nick Barkley almost missed his one shining moment at the Baldwin High School soccer camp last week.
The BHS junior injured his knee wakeboarding right before camp started. He sat out a couple of days of camp, but returned just in time for Friday's scrimmage.
With the game tied 3-3, Barkley headed a corner kick into the goal in the waning minutes. The goal was enough to give the red team a 4-3 victory.
"It was a rush," Barkley said. "Hopefully, you're going to see a lot of those this season."
Barkley and other members of the BHS team enjoyed Friday's scrimmage, because it helped them realize where the players are at during the summer.
"It was a lot of fun," Barkley said. "We got to see where everybody was most comfortable on the field. We got to learn where everybody is at right now."
Sophomore Anna Baughan enjoyed her second summer soccer camp and the scrimmage.
"It was fun to play with everyone again," Baughan said. "We got to practice passing and play again."
The camp began July 23 and ended Friday. It was the third-annual camp for the Bulldogs. Coach Gus Wegner was pleased with the numbers that attended the camp.
"Camp went very well," Wegner said. "We had great attendance. We had 29 kids out of -- what I see for the season -- 32 or 33 kids. So, 90 percent attendance at camp is excellent.
"The other thing I like is their attitude," Wegner said. "I could tell from the second day of camp on that they are working hard, getting to know each other and they have this attitude of enjoying their time out here. They are having fun. They are taking it informally, but putting a lot of effort into it."
Wegner had some help at camp this summer. BHS assistant coach Art Cederholm helped at the camp. This is his first high school coaching job and he liked what he saw from the Bulldogs.
"I was very pleased with camp. I was impressed since I didn't know what to expect," Cederholm said. "This is my first high school job. I've just heard from Gus and the activities director where the team is at and how it's improved. I was impressed and pleased with how well they played."
Cederholm wasn't the only helper at the Bulldog camp. Baker University women's soccer players Annie Cook and Rachel Shepard worked the camp.
The two Baker players helped instruct the BHS players during drills. They each coached one of the teams during the scrimmage.
"It was nice to be taught by them," Baughan said. "It's a new perspective on the game. I enjoyed their coaching."
Wegner was happy with the camp instructors this year.
"They've had excellent instruction from Art and the two Baker soccer players, Rachel and Annie," Wegner said. "I think the kids have learned a lot."
All week long the Bulldogs worked on soccer fundamentals. They practiced passing, dribbling and defensive skills. Because the coaches couldn't cover all aspects of the game during the camp, they held the scrimmage.
Wegner hoped the scrimmage helped the players learn how to shoot better because they worked a lot on shooting all week.
"We tried to work on shooting, especially from midfield," Wegner said. "We wanted kids to have lots of touches with the ball."
Besides reviewing the fundamentals, the Bulldogs also enjoyed spending time with their teammates once again. The last soccer game for BHS was nearly nine months ago, so the time together was enjoyable for everyone.
"It was a lot of fun," Barkley said. "I met a lot of new people. It is nice to see all of the new talent come out this year. The season should be a lot of fun."
Wegner also enjoyed spending time with his players again. He also likes camp because it gives him a chance to meet the incoming freshmen before practice starts Aug. 13.
"It's really helped us get to know the ninth graders," Wegner said. "We know the sophomores through the seniors well, but this gives us a chance to see the incoming freshmen. I think they are an excellent group of freshmen."
Some of the BHS players will be running and conditioning themselves next week to prepare for the first day of practice.
"You sort of get out of breath after not doing anything all summer," Barkley said. "That's why we are doing these practices on our own before practice starts."
The Bulldogs begin practice Aug. 13 and their first game is Aug. 27. Wegner is ready for the season and the challenges that lie ahead.
"When Aug. 13 gets here, we have two weeks until the first game," Wegner said. "I've stressed that with all of the kids. Once the practice starts, we have two weeks before the first game. So the camp is instrumental in setting the tone and expectations for the season."
#include <GUIConstantsEx.au3>
#include <GuiSlider.au3>
$Debug_S = False ; Check ClassName being passed to functions, set to True and use a handle to another control to see it work
_Main()
Func _Main()
Local $hWndTT, $hSlider
; Create GUI
GUICreate("Slider Set Tool Tips", 400, 296)
$hSlider = GUICtrlCreateSlider(2, 2, 396, 20, BitOR($TBS_TOOLTIPS, $TBS_AUTOTICKS, $TBS_ENABLESELRANGE))
GUISetState()
; Get Tool Tips
$hWndTT = _GUICtrlSlider_GetToolTips($hSlider)
MsgBox(4160, "Information", "Tool Tip Handle: " & $hWndTT)
; Set Tool Tips
_GUICtrlSlider_SetToolTips($hSlider, $hWndTT)
; Loop until user exits
Do
Until GUIGetMsg() = $GUI_EVENT_CLOSE
GUIDelete()
EndFunc ;==>_Main
Dead Christ Supported by Two Angels (Bellini, Berlin)
Dead Christ Supported by Two Angels is a tempera on panel painting by Giovanni Bellini, now in the Gemäldegalerie, Berlin. It is dated to 1465–1470, as shown by similarities to his 1464 San Vincenzo Ferrer Polyptych, an early mature work.
Bibliography
Mariolina Olivari, Giovanni Bellini, in AA.VV., Pittori del Rinascimento, Scala, Firenze 2007.
Category:Paintings by Giovanni Bellini
Category:Paintings in the Gemäldegalerie, Berlin
Category:1470 paintings
Category:Paintings depicting Jesus
Category:Angels in art
if PrintDialog1.Execute then begin
// we should also assign cmbPrinter.ItemIndex := Printer.PrinterIndex;
// but I did not do it to make sure that the printer is changed only by TPrintDialog
UpdateControlsForPrinter;
SRVPrint1.Print('test', 1, False);
end;
Result: the demo prints on the printer chosen in the dialog, as expected.
[Evaluation after 20 years of a case of Takayasu's disease that presented with aortic regurgitation].
Takayasu's disease is a segmental multifocal affection of medium and large arteries. The diagnosis is based on the association of stenotic and aneurismal lesions of the aorta and its branches secondary to an inflammatory infiltration of the media and adventitia. Cases of aortic regurgitation associated with aneurismal dilatation of the ascending aorta as the presenting features of Takayasu's disease, as in this case, are rare. Histological examination of the aortic wall may help establish the diagnosis by showing signs of aortitis. The other usual arterial lesions are sometimes missing at the initial phase of the disease. A late histological diagnosis may be difficult as the inflammatory lesions tend to be progressively replaced by fibrotic lesions or a banal atheroma.
Polyphase electrical ship propulsion motors, which are fed by converters, produce low-frequency structure-borne sound which is essentially due to oscillating moments in the motor. Such structure-borne sound emission is particularly dangerous for submarines, since low-frequency noise is carried over particularly long distances in water.
SOCIAL
Lindsay Lohan arrested in NYC on suspicion of hit and run
Lindsay Lohan was arrested on suspicion of leaving the scene of an accident early Wednesday by New York City police after a man claimed the actress struck him with her Porsche SUV.
A chef named Jose Rodriguez, 34, was treated for torn knee tendons at Bellevue Hospital after the alleged incident in an alley by New York’s Dream Hotel.
He tells the New York Daily News that Lohan said ‘You have to get out of the way’ after the alleged incident then walked into a nightclub adjacent to the hotel.
‘She was slurring,’ he told the paper. ‘She smelled like alcohol. I didn’t know who she was until people told me. I don’t know why she couldn’t just do the right thing.’
Lohan was arrested after she came out of the club more than an hour after the alleged incident. She was taken to New York’s 10th Precinct then released pending a later court date.
‘While some of the facts are still being gathered, it appears that this is much ado about nothing,’ Lohan’s publicist tells ABC News in an email. ‘We are confident this matter will be cleared up in the coming weeks and the claims being made against Lindsay will be proven untrue.’
Lohan, 26, has been plagued by drug and legal problems for several years. She is currently on informal probation for necklace theft and just last week, failed to turn up on the set of new movie Scary Movie 5 because she claimed to be suffering from ‘walking pneumonia.’ |
Iain Glen has revealed the date for the first Game of Thrones season 8 read-through and offered some production details that partially explain why the final season’s filming is expected to take so long; Lena Headey Instagrammed a likely wig fitting and writer Jane Goldman has completed her script for one of the Game of Thrones spinoffs in development.
Back in September (yes, this little gem has remained hidden on the internet for nearly a month) Iain Glen told his audience at Stockholm Comic Con (at 20:43 in the video) that the cast of Game of Thrones will meet for a read-through on October 9th. Until then, he has no idea what season 8 holds, though he “would just be happy to be alive and in what I imagine is going to be a massive battle [in which] I then survive or die nobly” (21:38).
He also revealed (at 6:40) that production for season 8 will be using only one filming unit instead of two, as has been the tradition since the first season (though production did include a third unit for season 3).
“We’re all starting to occupy the same territory, we’re all starting to be in the same storylines and so they can’t [have two filming units] anymore,” he explained. “I think this last season will take much longer to shoot because they can only use one unit because we’re all in the same sort of scenes.”
The need to use only one unit because of the cast having to be together helps to explain the 10 months of production, which is around twice the time filming usually takes.
The cast will be congregating in Northern Ireland for a read-through soon, and Lena Headey was already with the crew today when she posted this photo on Instagram.
Season 8 … Hair and Teeth … HAIR AND TEETH !!! I @kevalexanderhair @candicebanks74 😘😘😘 A post shared by Lena Headey (@iamlenaheadey) on Oct 4, 2017 at 5:21am PDT
Provided that Headey is posing with two major hair staff members, it’s safe to bet that she went in for a wig fitting today (though presumably not for the wig featured in the photo), further evidence that the long journey toward the final season is officially underway.
As our thoughts turn to season 8 and, consequently, to the inevitable end of Game of Thrones, let’s remind ourselves that at least one spinoff is on the way and rejoice that the pilot script for one of the unspecified projects has been completed!
In an interview with Hello Magazine at the BFI Luminous Gala, English comedian Jonathan Ross discussed his wife Jane Goldman’s scriptwriting project for a Game of Thrones spinoff but was unable to say much about it other than that … it’s done.
“I’m not going to say anything apart from the fact that [Jane] has just finished it!” he said and later jokingly capitulated to pressure by adding: “She’s written it all around me, I’m a dragon in it!”
Then again, we don’t know what season 8 holds. If Daenerys has a fever dream in which Drogon begins monologuing in Ross’ voice … I guess you heard it here first.
Our deepest thanks to Ismail for unearthing that internet gem and tipping us off about Iain Glen’s interview.
Ala ud-din Sikandar Shah
Sultan Ala ud-din Sikandar Shah was born Humayun Khan, the son of Sultan Muhammad Shah Tughluq. He ascended the imperial throne as Ala-ud-din Sikandar Shah on 1 February 1394 C.E. by virtue of being heir apparent, but died of natural causes after one month and sixteen days.
See also
Delhi Sultanate
Category:Tughluq sultans
MANILA, Philippines — The Sandiganbayan just removed the final obstacle to former First Gentleman Juan Miguel “Mike” Arroyo’s trip abroad when its second division granted his petition to travel to Japan and Hong Kong from February 3 to 10.
The Office of the Ombudsman filed a graft charge against Arroyo, accusing him of selling two secondhand Raven I helicopters that were passed off as brand-new.
The court’s fourth division, where Arroyo faces a graft case over the botched telecommunications project known as the NBN-ZTE deal with China, has allowed his request to travel on the conditions that he leave only on his stated travel schedule and that he present his passport as proof.
The former first gentleman is set to meet with the Association of Overseas Filipino Communities in Edogawa-Ku in Japan before proceeding to Hong Kong.
As long as he follows strictly the court conditions then it is just okay as the right to travel is guaranteed by the constitution.
http://www.dafk.net/what/ Kilabot ng mga Balahibo
You think De Lima will allow him? Or will she again disregard the law?
marionics
probably not anymore. they got the queen already hahaha
http://www.dafk.net/what/ Kilabot ng mga Balahibo
but, from what I gather, he holds the purse. (As she holds or held the power.)
We might be incarcerating an empty shell while the fat (lol) is being transferred, right? In some sense, we might get a guilty GMA but be unable to recover the money she squandered because it has already flown away.
I wouldn’t put it past exFG.
marionics
well, what are you going to do? if you don't even have enough to file a case, then sorry. that's just how life is. it's up to them. let's just watch the next chapter he he
http://pulse.yahoo.com/_UWISP2YXGDQ7K2SIX2GI37B2EI Darwin
Government should send a representative to check authenticity of that Filipino community in Japan. This is the second time he used a Filipino community in Japan as excuse for his travel. It might be the same group, and it might be non-existent.
http://pulse.yahoo.com/_CG6AWFVDA46M5DMR2G7VNAMFUM Mark
“..and it might be non-existent” – perhaps the case against him..lol
http://pulse.yahoo.com/_UWISP2YXGDQ7K2SIX2GI37B2EI Darwin
The PNP chopper case is solid. Unless bugled by our very good prosecutors.
It’s none of your business to know his reason for going to Hongkong. "Everybody is presumed innocent until proven guilty" should be applied, not to the kkk only. As you can see, slowly but surely the cases against them are crumbling like an ice castle. hahaha. All your accusations should be backed up with hard evidence, not just empty talk, with the help of media friendly to the mongol. When non-yellows accuse your idol you will say go to court and sue him. This rule should also be applied to you heppas. hahaha.
http://pulse.yahoo.com/_4R3GZTGML26TV2VGS6RVHP2THM Fred
While abroad, Mike might feign illness!
And this might not be covered by an agreement set by the Sandiganbayan.
Beware! Be wary!
This guy is going to check or move his hidden wealth somewhere else. Money works wonders with this select of people who corrupted the country for so long and still can get away with their crimes. I guess the constitution was written to favor those that can afford expensive lawyers!
diamond_digger
OOnce the cuckoo is out of its coop, it will never, ever come back again. Bye, bye cuckoo. Your escape may result to another impeachment of certain hoodlums in robes.
sakinlang
RUN, MIGUEL, RUN! Ha!ha!ha!ha!
JuanTamadachi
run fat boy run, de lima’s on her way. :)
http://profile.yahoo.com/FMILZVJMPLRYOGFXL6PPNEVS24 Mike Arroyo
Yehey!
JuanTamadachi
.com
Your_King
Sadly his wife cannot leave. GMA is the trophy of Aquino. Without her, Aquino has not done anything. In Davos he even found a way to attack GMA again. Abalos can leave, the husband Arroyo can leave, but GMA is stuck in detention even when she needs to have surgery abroad. Aquino’s government and selective justice system is the system being followed by the courts.
johnvforeighner
And I wonder how much it cost him, to be able to escape, again???
disqusted0fu
Why are people so scared of the Arroyos escaping? When did they ever escape anyway? And what are they escaping from? All charges against the couple are weak and have not been proven. It’s been about a year or so and the prosecutors have not been able to provide evidence against the Arroyos. It is getting clearer that everything is just persecution and vindictiveness.
France in ‘final stage’ of talks to sell Rafale jets to Qatar
PARIS (Reuters) – France is in the ‘final stage’ of negotiations to sell up to 36 Rafale warplanes to Qatar, a senior French source involved in the discussions said on Tuesday.
Manufacturer Dassault Aviation is also in talks aimed at supplying 16 of the multi-role combat jets to Malaysia and has resumed discussions over potential fighter sales to the United Arab Emirates (UAE), the source said.
“The discussions (with Qatar) are at the final stage,” the source said, asking not to be identified because of the sensitivity of the discussions.
Dassault Aviation declined to comment.
Analysts say the French company was boosted this week by a long-awaited first export deal for the Rafale with Egypt, but is likely to face intense competition for further sales as European, US and Russian rivals step up export campaigns.
A Rafale fighter jet prepares to land at the air base in Saint-Dizier, February 13. France is in the ‘final stage’ of negotiations to sell up to 36 Rafale warplanes to Qatar, according to a senior French source involved in the discussions – REUTERS
It was not immediately clear at what level talks with UAE were taking place, nor which side had initiated them.
The UAE publicly rebuffed an offer to supply 60 Rafale jets in 2011, calling the proposal “uncompetitive and unworkable”.
Western defence contractors including Dassault, the four-nation Eurofighter consortium and US aerospace group Boeing are chasing overseas sales to prevent their production lines halting due to cuts in domestic defence budgets.
Tensions in the Middle East, instability in eastern Europe and concerns in parts of Asia about regional border threats and the rise of China have further fuelled the arms race, but shifts and sudden reversals in the various industry talks are common.
France said last June it was confident of winning a deal soon to supply fighter jets to Qatar, which is shopping initially for 24 jets plus 12 options to expand its air force.
Competitors include Boeing’s F-15 fighter jet, while the US manufacturer is also seeking sales for its declining F-18 model, which is reportedly in consideration in Malaysia.
Elsewhere in the Gulf, the Eurofighter and F-18 are competing for a possible Kuwaiti deal for 28 jets but the Rafale is not a leading contender there, according to French media. |
The present invention relates to the field of communications, which is slow-growing, very competitive and essentially mature. For purposes of this specification, communications shall refer to multi-media transmission of information from one or more nodes to one or more other nodes. In particular, this invention relates to a system for transmitting voice, image, text and data from one computer to another, where either or both of the computers can be or include an ordinary household telephone.
A. Switching
At the heart of any modern communications system is switching. The architecture of the first automatic switching system has some basic characteristics with long-range implications. The first step-by-step switching system (i.e., rotary switching system) actually establishes the desired connection between nodes by remote control from a telephone dial. Such a system is said to be a self-connecting, distributed control system. In the beginning, it was electromechanical by nature and therefore difficult to maintain. In addition, the remote control of relays employed by the system generated impulse noise at the telephone exchanges, which became the main source of noise and, in turn, of errors in computer data received over telephone lines.
In a crossbar switching system, the rotary switches of the step-by-step switching system were replaced with matrix switches, which were easier to maintain and produced less noise (Electromagnetic Interference or EMI) when used for data transmission. The address of the called party, generated by the telephone dial, was stored in a register and processed through relay logic for making connection via the matrix switch. Later, the crossbar switch was replaced by Reed relays. Later still, the relay logic was implemented electronically, which resulted in the electronic switching system (ESS). ESS is fast, noise-free and easy to maintain.
Step-by-step, crossbar and ESS switching systems are known as "space division switching". Time division switching systems are an outgrowth of digital transmission technologies. Such switching systems allocate time slots to users for the duration of a connection. All users are physically connected to the same communication line, but have time slots allocated for the duration of the call. Such a switching system is economically attractive and, partly because it uses substantially less wiring, is easy to maintain. However, the bandwidth of time division switching systems must be divided among all the simultaneous users connected to the line. Thus, in spite of the high-speed technology available today, the bandwidth or, alternatively, digital speed for each user is limited to 64K bits/sec.
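The bandwidth-division constraint described above can be sketched numerically (an illustrative sketch with assumed helper names, not part of the specification):

```javascript
// Illustrative sketch: in a time division switching system, one shared
// line is divided into time slots, so line capacity divides among the
// simultaneous users connected to it.
function perUserBitRate(lineRateBps, simultaneousUsers) {
  return Math.floor(lineRateBps / simultaneousUsers);
}

// A carrier multiplexing 24 voice channels over one line (framing
// overhead ignored) leaves each user the classic 64K bits/sec.
const perUser = perUserBitRate(24 * 64000, 24);
console.log(perUser); // 64000
```

However fast the shared line, adding users divides the same capacity further, which is the limitation the passage above points out.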
Time division switching is a natural outgrowth in the evolution of switching systems. For telephony, it provides quality and security at reasonable cost. User-oriented functions can be provided with the elegance and flexibility of computer-controlled switching systems. For computer users, data can be transmitted at 64K bits/sec on a switched basis, and facsimile systems can be faster and more powerful.
With the advent of digital trunk carrier systems and digital switching, it quickly became clear that communication networks would evolve toward a capability to provide end-to-end digital connections. Much effort has been expended, worldwide, to define a set of realizable standards for what is called an Integrated Services Digital Network (ISDN). The ISDN concept permits end users to transmit up to a total of 144K bits/sec of information consisting of two 64K bit/sec channels, which can support circuit or packet switching, and a third 16K bit/sec packet-switched channel which makes use of existing two wire loop systems in most cases. The 16K bit/sec packet switching channel has a well defined protocol and is used both for signalling between end users and the central office switch, and for user-to-user packet information.
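The channel arithmetic of the ISDN access described above can be checked directly (variable names here are illustrative, not drawn from the specification):

```javascript
// Basic-rate ISDN access as described above: two 64K bits/sec channels
// (circuit or packet switched) plus one 16K bits/sec packet-switched
// channel used for signalling and user-to-user packets.
const bChannelBps = 64000;
const dChannelBps = 16000;
const totalBps = 2 * bChannelBps + dChannelBps;
console.log(totalBps); // 144000, the 144K bits/sec total cited above
```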
ISDN will provide digital voice and data services far superior to anything available today. From a computer communications point of view, ISDN is very attractive, since it provides 64K bits/sec switched service without a modem. While facsimile services are greatly improved, they can, at best, provide only 16K bits/sec over analog voice networks on a worldwide basis. Furthermore, when the potential capability of optical fibers is considered, together with their ultimate availability to every telephone user at a cost roughly equivalent to copper transmission lines, the capability of ISDN switching pales in comparison with the gigabit transmission capabilities of fiber optic transmission.
Therefore, a high speed digital switching system capable of providing fast access to data for high and low speed computer terminals, access to image files and facilitating communications for all kinds of compatible and incompatible computer systems at a cost of switching affordable for digital voice communication is desirable. Even more desirable is such a system which is compatible with and transparent to ISDN facilities and end users, but which anticipates conversion of national and worldwide telecommunications networks from two wire, copper linkages to optical fibers and ultra-fast switching systems.
An historical review of the prior art of telecommunications in the United States is given in "Communications and Switching" by Stewart D. Personick and William O. Flechenstein, Proceedings of the IEEE, Vol. 75, No. 10, October, 1987. In addition, ISDN is more fully described and discussed in IEEE Communications Magazine, Vol. 25, No. 12, December, 1987.
B. Telephony
In the 1930's, basic standards for toll quality telephony were established. That basic standard comprised the minimum bandwidth needed to assure recognition of the speaker by the receiver at the other end of the link, together with at least 98% understandability of the speech in context. The minimum bandwidth was 300 Hz to 3400 Hz, which resulted in 4 kHz frequency spacing for single sideband (SSB) cable and radio transmission. These standards have been preserved in digital transmission, using pulse code modulation (PCM), and are perpetuated in ISDN standards.
Toll quality telephone sounds astonishingly good in spite of the relatively narrow (approximately 3 kHz) bandwidth, where modern transducer technologies, such as the electret microphone and dynamic earphone, are used in the user's handset. Such a telephone link transmits all the vowels very well. However, transmission of consonants, which have main speech energies concentrated between 7 kHz and 8 kHz, is rudimentary at best. Generally, speech taken in context provides sufficient clues for good understandability, although unexpected words and names typically must be spelled in order to circumvent the lack of bandwidth in toll quality telephone connections. Thus, a telephone network providing a high-fidelity link at a cost equal to or less than the user pays today is, at least, desirable.
According to information theory, when PCM was developed the sampling rate of an analog signal was set at 2W for perfect recovery of signals having a bandwidth of less than W. In order to prevent foldover intermodulation distortion, the speech spectrum had to be strictly limited to less than 4 kHz. Thus, the sampling rate for voice telecommunications was set at 8K samples/sec, and a prior art encoder, utilizing an advanced Adaptive Differential PCM (ADPCM) module for digitizing analog voice signals at that rate, is shown in FIG. 1.
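The sampling arithmetic above follows directly from the 2W rule. A sketch (helper names are ours; 8 bits per sample is the standard PCM word length, an assumption not stated in this passage):

```javascript
// Nyquist criterion: perfect recovery of a signal band-limited to W
// requires sampling at a rate of at least 2W.
function nyquistRate(bandwidthHz) {
  return 2 * bandwidthHz;
}

const samplesPerSec = nyquistRate(4000); // 4 kHz voice limit -> 8000 samples/sec
const bitsPerSample = 8;                 // standard PCM word length (assumed)
const pcmBitRate = samplesPerSec * bitsPerSample;
console.log(samplesPerSec, pcmBitRate); // 8000 samples/sec, 64000 bits/sec
```

The resulting 64K bits/sec is the same per-user digital speed cited for time division switching earlier in this section.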
In order to strictly limit the speech spectrum to 4 kHz, a sharp, low pass filter was required as also shown in FIG. 1. In addition, digital encoding of speech was very costly and could be economically justified only for Time Division Multiplexing (TDM) transmission systems. While single chip encoders are now available on the market which make digital telephones economically feasible, the sharp low pass filter required for classical PCM encoders requires about half the semiconductor "real estate" of a typical coder/decoder (codec) chip.
C. Telephone Graphics
The inability to draw simple pictures remotely is a severe limitation of present-day telephony. Even with a hi-fi telephone, the ability to communicate is still hampered by the absence of graphics capability. The first serious attempt to provide remote telephone graphics was the "Picture Phone", introduced by Bell Laboratories. While the Picture Phone was a technical success, the failure in the marketplace is easily attributed to its cost and inability to satisfy a well-defined need. More simply stated, the market requirements were not properly defined before the Picture Phone was developed. However, even today, there are other similar attempts at transmitting video over presently installed telephone lines. See, for example, the VisiTec Visual Telephone Display, manufactured by Mitsubishi.
The ability to remotely present graphics, including charts, in real time while the telephone conversation is in progress and at a reasonable cost is extremely desirable. Definition of telephone graphics is virtually at the same level of development that definition of basic standards for toll quality telephony was in the 1930's. Once defined, this new video service can be expanded to higher resolution, including gray scale, color and motion as required. Thus, development of a standard for telephone graphics, preferably based on presently available technology but which anticipates technology advances, is desirable.
D. System Considerations
The computer user communications traffic may appear to be profoundly different from telephone voice traffic requirements. For example, the computer terminal user typically establishes connection to a computer port at the beginning of the day and maintains the connection for some hours until he goes home. Therefore, 100% connectivity or usage of the switching systems is indicated. On the other hand, telephone user statistics indicate that the average telephone call is about 10 minutes long, and that only about 10% of all users need simultaneous connections at the same time.
A closer look at the actual information traffic indicates that data transmissions between the terminal and the computer system occur during less than 10% of the time the linkage is established. However, when the data is generated, it should be transmitted very quickly. The desired data transmission rates are in the megabits/sec range and the desired connection times are less than a microsecond. Thus, the use of a telecommunications system by a computer terminal user is actually much more like the use of the same system by a telephone user than has been appreciated by telecommunications systems designers in the past.
Even if a switch controller, designed and constructed in accordance with ISDN specifications, is built in high speed technology, and interfaces are designed to accommodate both the computer terminal user (text and data) and the telephone (voice and image information), at least 10,000 programming steps are required before a single connection is actually made. Moreover, even if the system operated at 10,000,000 instructions/sec (10 MIPS), the connect time still will be in the 1 millisecond range. While millisecond connect times are much faster than in any presently known voice switching system, they are inadequate to accommodate the nanosecond switching requirements of high speed systems which will be available in the foreseeable future, perhaps in accordance with the present invention. |
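The connect-time estimate above is simple arithmetic (a sketch; the variable names are ours):

```javascript
// 10,000 programming steps per connection on a 10 MIPS controller
// gives the 1 millisecond connect time cited above.
const stepsPerConnection = 10000;
const instructionsPerSec = 10 * 1e6; // 10 MIPS
const connectTimeSec = stepsPerConnection / instructionsPerSec;
console.log(connectTimeSec); // 0.001 seconds -> the 1 millisecond range
```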
But extracting the oil doesn’t come without problems. To obtain it, many use butane, a highly flammable compound that has led to incidents of explosions around the country.
That’s where Apeks Supercritical comes in. The Johnstown-based company says it has a safer alternative for extracting hash oil using its carbon dioxide-based machines.
“It is a much more safe method to get the oils out of the plant material without taking in extra toxins or carcinogens,” said Apeks President Andy Joseph.
Apeks had been in the business of making extraction systems for botanicals long before the current pot decriminalization trend began picking up steam. Joseph says people started requesting the system for cannabis when medicinal marijuana was first approved.
Despite the increased safety measures that come with CO2 extraction systems, Joseph says butane use is popular because it’s easily obtained and not heavily regulated. For this reason, Joseph says producing safe butane equipment may not be very profitable.
“We certainly have got the capabilities to design it and do it safely,” Joseph said. “(But) the people who are processing using the butane methods aren’t willing to pay the price that would be required to do it safely.” |
The objective of this work is to evaluate the effect of alteration of blood oxygen affinity in patients with sickle cell anemia. It is generally felt that an increase in oxygen affinity will lead to decreased sickling because a larger fraction of hemoglobin will be in the oxy-configuration at a given partial pressure of oxygen. However, diminished oxygen delivery to tissues might be a deleterious effect of increased oxygen affinity. We have developed reliable methods for measuring whole blood oxygen affinity in sickle cell anemia, and have learned that the position and shape of the oxygen dissociation curve of sickle blood is very dependent on the immediate history of the blood sample after venipuncture. Only under very special conditions is the measured value related to the in vivo oxygen transport capability of blood. The rate of sedimentation of SS red cells is exquisitely sensitive to O2 tension. The value of this measurement in identification and evaluation of anti-sickling compounds is being explored. BIBLIOGRAPHIC REFERENCES: Winslow, R.M., Blood oxygen equilibrium studies in sickle cell anemia. Proceedings of the Symposium on Molecular and Cellular Aspects of Sickle Cell Disease, U.S. Dept. of HEW, publication No. (NIH) 76-1007, 1976, pp. 235-256. |
I love s’mores. I may just admit that they are one of my favorite foods. I regularly find myself heating up a marshmallow and chocolate in the microwave to put on a graham cracker for breakfast, I’m addicted to the s’mores frappuccino Starbucks has during the Summer, and any chance I get to start a fire in our backyard fire pit so that I can roast some, I jump on it.
Now, with them being my favorite food and all, there’s no way I could ever think that the original is boring. But I appreciate change, so mixing it up a little couldn’t hurt, right?
Here are a few different s’more variations you’ve got to try at your next bonfire!
Nutella S’more: I remember when Nutella was the most popular thing ever. You would regularly see people walking around with a jar of the stuff and a spoon. (Not that I ever did that…) Even though it doesn’t have quite the same fame as it did, it’s still amazing. And it tastes great when spread on your graham crackers before adding a perfectly toasted marshmallow.
Double Chocolate S’more: Instead of the regular graham crackers, try the chocolate variety. If you are a chocoholic, beware. You may fall further into your chocolate addiction.
Raspberry Dark Chocolate S’more: Make your s’more with dark chocolate and add fresh raspberries. This was by far my favorite of all the s’mores I tried.
Peanut Butter S’more: Simply sub out the Hershey’s for a Peanut Butter Cup! If you’re a peanut butter lover, then this will be heaven.
Coconut S’more: Have you seen those toasted coconut marshmallows? Well, they’re the best. Try them out in your next s’more, you’ll fall in love.
Mint S’more: Just as you did with the Peanut Butter Cup variety, substitute the Hershey’s chocolate with one or two Andes Mint pieces.
Cinnamon S’more: Pick up a box of the cinnamon graham crackers and use those instead of the originals. The cinnamon in the graham cracker and the chocolate go so well together. |
Q:
What should a GetSelectedIndex method return when no rows are selected
I'm creating a UI table component that returns an array of selected indices.
Some typical return values would be
var myRetVal1 = [0];//one value selected
var myRetVal2 = [0,1,2,3,8,11];//multiple values selected
as you can see I'm always returning an array.
I had an idea to return -1 when nothing is selected, but then I thought that might be confusing when in every other condition an array is returned.
So checking for an empty set of values would be either
//returns -1
var selectedItems = tbl.GetSelectedIndex();
if(selectedItems !== -1){
//we have data to process
}
OR
//returns []
var selectedItems = tbl.GetSelectedIndex();
if(selectedItems.length > 0){
//we have data to process
}
OR
//returns null
var selectedItems = tbl.GetSelectedIndex();
if(selectedItems){
//we have data to process
}
Maybe I'm making too big a deal over this, but is there a standard expectation for this type of control?
As I build other controls should they conform to a standard empty return value or should they always return a "empty" version of their expected return type?
A:
I would return an empty array. It bothers me when people return null collections instead of empty collections. The majority of what I would be doing with your return value will be iterating over it, mapping over it, etc., and my functions will work correctly with an empty array, but will either break or have to be modified to deal with null.
The worst is when somebody writes a function that returns null for no elements, just the element for 1 element, and an array of elements for multiple elements. It can triple the size and complexity of my code just because I have to check for all the different conditions (I don't mind complex code, but I do mind unnecessarily complex code).
A:
I'd be okay with either of the latter two. I don't like the first one because you're changing the actual data type returned (int rather than array).
Optimally, I'd go with the second one as it makes more semantic sense. If you're returning an array of selected items, then it makes more sense (again, semantically) to check array.length === 0 than array === null.
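A minimal sketch of the empty-array convention, assuming a hypothetical selection model (the names below are illustrative, not a real library):

```javascript
// Hypothetical selection model: always return an array of sorted
// selected indices; an empty selection is simply an empty array.
function getSelectedIndex(selectedSet) {
  return Array.from(selectedSet).sort((a, b) => a - b);
}

const noneSelected = getSelectedIndex(new Set());
const someSelected = getSelectedIndex(new Set([11, 0, 3]));

// Callers need no special case: length checks and iteration behave
// correctly whether or not anything is selected.
console.log(noneSelected.length);    // 0
console.log(someSelected.join(",")); // "0,3,11"
```

Because the return type never changes, the `selectedItems.length > 0` check from the question works unmodified in every case.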
|
Hackers demonstrate ability to hack SIM cards; networks working on fix
SIM cards, the small insert-and-forget chips installed in every GSM-based mobile phone, have recently been the target of a project to test their security. Among the discoveries were cryptographic functions dating back decades. |
UNITED STATES COURT OF APPEALS
Filed 1/21/97
TENTH CIRCUIT
UNITED STATES OF AMERICA,
Plaintiff-Appellee,
v. No. 96-1192
(D.C. No. 93-CR-181-Z)
PATRICK DEAN VOGT, (D. Colo.)
Defendant-Appellant.
ORDER AND JUDGMENT *
Before BRORBY, EBEL, and HENRY, Circuit Judges. **
Defendant-Appellant Patrick Dean Vogt was convicted pursuant to 18
U.S.C. § 371 and 8 U.S.C. § 1325(b) of conspiracy to assist in a co-defendant’s
marriage for the purpose of evading the immigration laws on August 19, 1993.
* This order and judgment is not binding precedent, except under the
doctrines of law of the case, res judicata, and collateral estoppel. This court
generally disfavors the citation of orders and judgments; nevertheless, an order
and judgment may be cited under the terms and conditions of 10th Cir. R. 36.3.
** After examining the briefs and the appellate record, this three-judge
panel has determined unanimously that oral argument would not be of material
assistance in the determination of this appeal. See Fed. R. App. P. 34(a); 10th
Cir. R. 34.1.9. The cause is therefore ordered submitted without oral argument.
Vogt was sentenced to three years probation, with the special condition of 90 days
home detention, a $250 fine, and 100 hours of community service.
In November 1995, the district court held a probation violation hearing
because of Vogt’s repeated positive drug test results, which indicated ongoing
marijuana use, and because of his failure to attend urinalysis testing and drug
abuse counseling in October 1995. After the hearing, the district court revoked
Vogt’s probation and sentenced him to four months in prison, followed by two
years supervised release. Vogt did not object to the new sentence at that time.
On March 26, 1996, Vogt filed a “Motion for Modification and/or
Clarification of Sentence” pursuant to 28 U.S.C. § 2255. Vogt challenged the
imposition of both supervised release and imprisonment, claiming that such
penalty was barred by the sentencing laws and our decision in United States v.
Rockwell, 984 F.2d 1112, cert. denied, 508 U.S. 966 (1993). In Rockwell, we
held that under 28 U.S.C. § 3583, a district court revoking supervised release may
impose as a sanction either imprisonment or extended supervised release, but not
both. Id. at 1117. Vogt also claimed that the length of the new sentence,
combined with the time he had already spent on probation, impermissibly
exceeded the maximum sentence available when he was initially sentenced. 1 The
district court denied Vogt’s motion, and Vogt now appeals.
This case is distinguishable from Rockwell because Vogt had not
previously been sentenced to supervised release, but was instead sentenced to
probation, a punishment governed by a different provision of the sentencing laws
which explicitly allows the district court to “resentence” the defendant “to a
sentence that includes a term of imprisonment” upon the revocation of probation
based on a controlled substance violation. 18 U.S.C. § 3565(a)(2), (b).
Moreover, under the sentencing guidelines, time spent on probation is not credited
toward the length of punishment when probation is revoked and a new sentence is
imposed. U.S.S.G. (Policy Statement) § 7B1.5. Instead, when revoking probation
under 18 U.S.C. § 3565(a)(2), the district court may resentence the defendant to
any sentence available under subchapter A of the sentencing laws, which consists
of the general statutory provisions requiring the district courts to consider the
sentencing guidelines in formulating sentences. 18 U.S.C. § 3551-59.
Accordingly, we now affirm the district court’s order imposing imprisonment and
supervised release for Vogt’s violation of the terms of his probation.
1 The maximum statutory sentence available when Vogt was initially
sentenced was five years imprisonment and/or a $250,000 fine, 8 U.S.C. §
1325(b); 18 U.S.C. § 371. The applicable guideline range was 0-6 months
imprisonment, 0-3 years supervised release, and/or a $0-5,000 fine, or three years
probation. U.S.S.G. Ch. 5.
Discussion
The statute governing revocation of probation provides that when a
defendant violates a condition of his probation, the district court may “revoke the
sentence of probation and resentence the defendant under subchapter A [the
general provisions of the sentencing laws, 18 U.S.C. § 3551-59].” 18 U.S.C. §
3565(a)(2). Where, as here, the violation occurs because the defendant is found
to have possessed a controlled substance during his probation period, “the court
shall revoke the sentence of probation and resentence the defendant under
subchapter A to a sentence that includes a term of imprisonment.” Id. § 3565(b)
(emphasis added). 2 Thus, when Vogt violated his conditions of probation through
2 The current language of 18 U.S.C. § 3565, which was enacted in
1994, differs from that in effect when Vogt was initially sentenced. The pre-1994
version provided that, upon a violation of the terms of probation, the district court
could “revoke the sentence of probation and impose any other sentence that was
available at the time of initial sentencing.” 18 U.S.C. § 3565(a)(2) (1988)
(emphasis added). Where the violation involved possession of a controlled
substance, the pre-1994 statute provided “the court shall revoke the sentence of
probation and sentence the defendant to not less than one third of the original
sentence.” Id. (emphasis added). We held that language required the district court
to resentence the defendant to a sentence including a prison term not less than one
third that available when he was initially sentenced to probation. United States v.
Diaz, 989 F.2d 391, 393 (10th Cir. 1993).
The current version of § 3565, which was in effect when Vogt was
resentenced, applies to this case for several reasons. First, the current version
instructs the district court to resentence the defendant pursuant to subchapter A of
the sentencing statutes, and the relevant provision of subchapter A, 18 U.S.C. §
3553, was effective as of September 13, 1994. Pub. L. 103-322 § 80001(c), 108
Stat. 1985 (1994). Second, the 1994 amendments to § 3565 were apparently
(continued...)
his marijuana use, the district court was obligated to resentence him to a sentence
that included a prison term, and was permitted to impose any additional sentence
allowable under subchapter A.
The relevant provision in subchapter A is 18 U.S.C. § 3553(a), which
provides:
The court, in determining the particular sentence to be
imposed, shall consider--
(4) the kinds of sentence and the sentencing range established
for--
(A) the applicable category of offense committed by the
applicable category of defendant as set forth in the guidelines
issued by the Sentencing Commission pursuant to 994(a)(1) of
(...continued)
aimed at clarifying what one member of the Supreme Court that year described as
a “wretchedly drafted statute . . . ,” United States v. Granderson, 114 S. Ct. 1259,
1270 (1994) (Scalia, J., concurring), rather than creating substantially different
requirements. Third, when probation is revoked the defendant is “resentence[d].”
18 U.S.C. § 3565(a). In other contexts we have held that where resentencing
occurs the district court should apply the sentencing laws in effect on the date of
resentencing. See United States v. Ziegler, 39 F.3d 1058, 1063-64 n. 2 (10th Cir.
1994) (holding that where sentence is vacated on appeal, district court should on
remand apply sentencing guidelines in effect on the date of resentencing).
Fourth, the district court is to apply the sentencing laws in effect on the date of
sentencing unless application of those laws would violate the Ex Post Facto Clause
of the Constitution. United States v. Gerber, 24 F.3d 93, 96 (10th Cir. 1994)
(holding that Ex Post Facto clause is violated where guideline (1) is applied to
events occurring before its enactment, and (2) disadvantages the defendant).
There are no ex post facto problems here because the current version of § 3565,
which requires only a sentence that “includes” imprisonment, is less severe than
the previous version, which required a prison term of “at least one third of the
original sentence.”
title 28, United States Code, and that are in effect on the date
the defendant is sentenced; or
(B) in the case of violation of probation or supervised
release, the applicable guidelines or policy statements issued
by the Sentencing Commission pursuant to section 994(a)(3) of
title 28, United States Code.
We believe that 18 U.S.C. § 3553(a)(4) requires the district court, in cases
involving revocation of probation or supervised release, to consider the guidelines
issued pursuant to 28 U.S.C. § 994(a)(3) in resentencing the defendant. That
provision authorizes the Sentencing Commission to issue guidelines or policy
statements “regarding the appropriate use of the provisions for revocation of
probation set forth in section 3565 of title 18, and the provisions for modification
of supervised release and revocation of supervised release set forth in section
3583(e) of title 18.” 28 U.S.C. § 994(a)(3). The Sentencing Commission has
issued policy statements concerning violations of probation and supervised
release, and those statements are contained in Chapter 7 of the Guidelines
Manual. U.S.S.G. Ch. 7. Accordingly, in resentencing a defendant after a
violation of the terms of probation or supervised release, the district court must
first consider the policy statements contained in Chapter 7.
We recognize that the Eighth and Ninth Circuits have concluded that 18
U.S.C. § 3553(a)(4) affords the district court discretion to consider either the
revocation and modification sentencing ranges contained in Chapter 7 or the
initial sentencing ranges contained in Chapter 5. United States v. Iverson, 90
F.3d 1340, 1345 (8th Cir. 1996); United States v. Plunkett, 94 F.3d 517 (9th Cir.
1996). In reaching that conclusion, the Ninth Circuit relied on the use of the
disjunctive term “or” between 18 U.S.C. § 3553(a)(4)(A) and 18 U.S.C. §
3553(a)(4)(B). Plunkett, 94 F.3d at 519. We find that reasoning unpersuasive.
Congress’s use of the term “or” does not mean that the district court may rely on
either provision of § 3553(a)(4) in resentencing a defendant after a probation or
supervised release violation; instead, in context it simply means that the district
court should use § 3553(a)(4)(B) in the subset of sentencing cases involving
violation of probation or supervised release. Our interpretation follows from the
well-established canon of construction that specific provisions of statutes control
the general ones in cases where the specific provision is applicable. Crawford
Fitting Co. v. J.T. Gibbons, Inc., 482 U.S. 437, 445 (1987); In re Gledhill, 76
F.3d 1070, 1078 (10th Cir. 1996). It is doubtful that Congress could have more
clearly stated that, in formulating sentences, district courts are generally to
consider the guidelines promulgated pursuant to 28 U.S.C. § 994(a)(2), while in
cases concerning revocation of probation or supervised release they are to
consider the applicable guidelines or policy statements issued pursuant to §
994(a)(3). 3
3 We have previously held that the policy statements contained in
Chapter 7 are “‘advisory rather than mandatory in nature.’” United States v.
(continued...)
Vogt’s sentence was within the range of sentences available under Chapter
7 upon revocation of his probation. The policy statements in Chapter 7 suggest a
term of 4-10 months imprisonment for a Grade C probation violation by a person
with a Category II Criminal History such as Vogt, U.S.S.G. (Policy Statement) §
7B1.4, 4 and note that imprisonment coupled with supervised release is a proper
sentence upon revocation of probation. Id. (Policy Statement) § 7B1.3(g)(1). The
four months imprisonment and two years supervised release imposed by the
district court is well within the allowable range. 5
Vogt contends that imposition of both imprisonment and supervised release
upon revocation of his probation was impermissible under our decision in
(...continued)
Hurst, 78 F.3d 482, 483 (10th Cir. 1996) (quoting United States v. Lee, 957 F.2d
770, 773 (10th Cir. 1992)). However, in reaching that conclusion we also held
that consideration of the Chapter 7 policy statements during resentencing upon
revocation of probation or supervised release is “mandatory.” Hurst, 78 F.3d at
483. Thus, Hurst and Lee are fully consistent with our conclusion here that
Chapter 7 provided the sentencing range applicable to Vogt upon revocation of
his probation.
4 The parties do not dispute that Vogt’s probation violation was a
Grade C violation.
5 Vogt’s sentence is consistent with both the initial guideline range
from Chapter 5 and the revocation range from Chapter 7, and the district court did
not state which chapter of the guidelines it relied on in calculating Vogt’s
sentence. However, any erroneous reliance on Chapter 5 was harmless in this
case because the actual sentence imposed was within the applicable Chapter 7
range, and the government has not cross-appealed the district court’s sentencing
determination.
-8-
Rockwell, 984 F.2d at 1117. However, Rockwell is not on point. Supervised
release is a distinct punishment from probation, and the statutory provision which
governed revocation of supervised release at the time Rockwell was decided is
quite different from that which governs revocation of probation. The supervised
release statute at issue in Rockwell stated that, upon a violation of the terms of
supervised release, the district court could
(2) extend a term of supervised release . . . .;
(3) revoke a term of supervised release, and require the defendant to
serve in prison all or part of the term of supervised release
authorized by statute for the offense. . . .; or
(4) order the defendant to remain at his place of residence during
nonworking hours . . . .
18 U.S.C. § 3583(e) (1988) (emphasis added). 18 U.S.C. § 3583(g) further
provided that
If the defendant--
(1) possesses a controlled substance . . . .;
the court shall revoke the term of supervised release and require the
defendant to serve a term of imprisonment not to exceed the
maximum term of imprisonment authorized under subsection (e)(3).
18 U.S.C. § 3583(g) (1988). Thus, under that version of the statute, the
district court generally could either impose imprisonment or supervised
release for a violation of the conditions of supervised release, 18 U.S.C. §
3583(e)(2)-(3), and was required to impose imprisonment where the
violation consisted of possession of a controlled substance. Id. § 3583(g);
Rockwell, 984 F.2d at 1115-17 (discussing then-current version of 18 U.S.C.
§ 3583). 6 Because the alternatives available under § 3583(e) upon
revocation of supervised release were framed in the disjunctive, only prison
could be imposed when prison was required by § 3583(g). Rockwell, 984
F.2d at 1115-17.
Conversely, the probation revocation statute does not limit the types
of sentences available to the district court, but instead provides that the
court may “revoke the sentence of probation and resentence the defendant
under subchapter A.” 18 U.S.C. § 3565(a)(2). Where, as here, the
violation occurs because the defendant is found to have possessed a
controlled substance during his probation period, “the court shall revoke
the sentence of probation and resentence the defendant under subchapter A
to a sentence that includes a term of imprisonment.” Id. § 3565(b)
(emphasis added). We have expressly recognized that the district court
retains flexibility upon revocation of probation under 18 U.S.C. §
3565(a)(2) "'to structure a new sentence that may include probation, incarceration, fines, and supervised release,'" in addition to a prison term. United States v. Diaz, 989 F.2d 391, 392 (10th Cir. 1993) (quoting United States v. Behnezhad, 967 F.2d 896, 899 (9th Cir. 1990)). Thus, the probation revocation provision requires only that resentencing be conducted according to subchapter A, and that the new sentence include a prison term in cases involving a controlled substance violation. Those requirements were met in this case.

6 Since Rockwell, Congress has amended 18 U.S.C. § 3583 to allow precisely the practice we rejected in that case, i.e., the imposition of both imprisonment and supervised release following a revocation of supervised release. 18 U.S.C. § 3583(h).
Vogt further argues that the total length of his new sentence,
combined with the time he spent on probation prior to resentencing,
exceeds that which was available at the time he was initially sentenced, and
is thus impermissible. It is error for a court to apply a sentencing range
higher than that allowed by the sentencing guidelines and the statutory
provisions governing revocation. See United States v. Smith, 907 F.2d 133
(11th Cir. 1990) (finding error in consideration of statutory maximum
rather than guideline range in resentencing upon revocation of probation);
see also United States v. Maltais, 961 F.2d 1485, 1486 (10th Cir. 1992)
(finding initial guideline sentencing range, rather than higher U.S.S.G.
Chapter 7 revocation range, applicable where defendant was initially
sentenced before Chapter 7 was adopted). However, Vogt’s new sentence
does not exceed the range available under the sentencing guidelines. When
probation is revoked, the district court is not permitted to give the
defendant credit for time spent on probation in calculating the length of the
new sentence imposed. U.S.S.G. (Policy Statement) § 7B1.5(a) (“Upon
revocation of probation, no credit shall be given . . . for any period of the
term of probation served prior to revocation.”). When the time spent on
probation is disregarded, Vogt’s sentence is well within the authorized
limits.
The sentence imposed by the district court was within the range
allowed by the relevant provisions of the sentencing laws. Accordingly, the
decision of the district court is AFFIRMED.
ENTERED FOR THE COURT
David M. Ebel
Circuit Judge
|
132 Ill. App.2d 823 (1971)
270 N.E.2d 563
THE PEOPLE OF THE STATE OF ILLINOIS, Plaintiff-Appellee,
v.
LESLIE LATHAM, JR., Defendant-Appellant.
No. 55098.
Illinois Appellate Court First District.
April 20, 1971.
Gerald W. Getty, Public Defender, of Chicago, (Ronald P. Katz and James J. Doherty, Assistant Public Defenders, of counsel,) for appellant.
William J. Scott, Attorney General, of Springfield, and Edward V. Hanrahan, State's Attorney, of Chicago, (James B. Zagel, Assistant Attorney General, and Elmer C. Kissane and Joseph Romano, Assistant State's Attorneys, of counsel,) for the People.
Judgment affirmed.
Mr. JUSTICE SCHWARTZ delivered the opinion of the court:
On April 18, 1967, the defendant was convicted of arson. He applied to the court for release on probation and after a hearing the court entered an order placing him on probation for a period of sixty months. Within fifteen months he was convicted of theft. On the basis of that conviction a warrant for violation of probation was issued and a hearing held to show cause why his probation should not be terminated. At the conclusion of the hearing the trial court found that defendant had violated *824 the terms of his probation and sentenced him to a term of two to five years in the State Penitentiary. On this appeal the defendant's sole contention is that the evidence presented at the hearing does not support the order of the court.
1-4 A condition of the probation was that the defendant would not violate any criminal law of the State of Illinois during the probationary period. Defendant admits he was convicted of the crime of theft while on probation, but contends that notwithstanding the conviction, he is innocent of the charge. The property stolen was a watch and at the revocation hearing the defendant testified that the stolen watch found in his possession had been purchased by him from another man. It is not necessary that such testimony be controverted by another full trial for theft in order to establish that defendant violated a condition of his probation. It is well established that probation may be revoked where a violation of the conditions thereof is shown by a preponderance of the evidence. (People v. Killion, 113 Ill. App.2d 461, 251 N.E.2d 411; People v. Carroll, 76 Ill. App.2d 9, 221 N.E.2d 528.) Considering that the conviction of a crime requires proof of guilt beyond a reasonable doubt, the court in a probation proceeding should attach great weight to the proof of conviction as establishing that the defendant had in fact committed a crime and thereby violated a condition of his probation. In the instant case the trial court weighed the admitted fact of a criminal conviction for the theft of a watch against defendant's testimony as to how the watch came into his possession and concluded there had been a sufficient showing of the violation of a penal statute to warrant the revocation of probation and the imposition of a two to five year sentence in the penitentiary for arson. The judgment of the trial court was supported by ample evidence and is accordingly affirmed.
Judgment affirmed.
LEIGHTON, P.J., and McCORMICK, J., concur.
|
Two oligomeric forms of plasma ficolin have differential lectin activity.
Ficolins are plasma proteins with binding activity for carbohydrates, elastin, and corticosteroids. The ficolin polypeptide has a collagen-like domain that presumably brings three subunits together in a triple helical rod, a C-terminal fibrinogen-like domain (fbg) similar to that of tenascin, which presumably has the binding activities, and a small N-terminal domain that we find to be the primary site for forming the ficolin oligomer. By sedimentation equilibrium we determined that the main plasma form, which we call big ficolin, had a mass of 827,000 Da, consistent with 24 subunits. Little ficolin, about half this size, was obtained after binding to a GlcNAc affinity column. Electron microscopy of little ficolin showed a parachute-like structure, with a small globe at one end, corresponding to the 12 N-terminal domains, and the fbg domains clustered together at the ends of the collagen rods. Big ficolin was formed by the face-to-face fusion of the fbg domains of two little ficolins, leaving the rods and N-terminal domains projecting at opposite ends. Little ficolin maintained a high affinity for the GlcNAc column, and big ficolin had a low affinity or none. The binding sites for ligands may be obscured in this big ficolin oligomer, providing a regulation of their activity. |
On August 14, 1994, Harry Browne gave a speech in the Fairmont Hotel in San Francisco, California an...
Liberty A to Z
Sale Price: $12.75
Soundbites have become an important part of political discourse. A soundbite is a short statement of...
LIBERTARIAN FAQ
Sale Price: $12.75
Harry Browne was the 1996 and 2000 Libertarian candidate for President of the United States. During ...
HARRY BROWNE The Great Libertarian Communicator
List Price: $14.99 Sale Price: $12.75
Harry Browne The Great Libertarian Communicator is a biography written by his widow, Pamela Wolfe Br...
Testimonials
"I downloaded your excellent book and read it, and found it to be very sensible and well-written... Thank you, Harry, for writing an excellent book that will remind me to keep a cool head and not get carried away with high-risk schemes when things seem to be going well." |
46 N.J. 299 (1966)
216 A.2d 585
JOHN STEFFENAUER, PLAINTIFF-APPELLANT,
v.
MYTELKA & ROSE, INC., D.M. & F.R., INC., A CORPORATION OF NEW JERSEY, AND B-W ACCEPTANCE CORPORATION, A CORPORATION OF NEW JERSEY, DEFENDANTS-RESPONDENTS.
The Supreme Court of New Jersey.
Argued December 20, 1965.
Argued December 21, 1965.
Decided January 24, 1966.
Mr. Samuel J. Davidson argued the cause for appellant (Mr. Davidson, on the brief; Mr. Cyril J. McCauley, attorney).
Mr. Ira A. Levy argued the cause for respondent B-W Acceptance Corporation (Mr. Levy, on the brief; Messrs. Gallanter & Levy, attorneys).
*300 The opinion of the court was delivered
PER CURIAM.
Plaintiff appealed from a summary judgment entered in favor of the defendant. The trial court's opinion is reported in 87 N.J. Super. 506 (Ch. Div. 1965). We certified the appeal before the Appellate Division acted upon it.
The judgment is affirmed for the reasons given by the trial court. We add a word, however, with respect to plaintiff's emphasis upon the fact that the so-called "credit service charge" of $2,730, which defendant's agent described in his affidavit as "7% add-on," actually averages out at 14% per year on the unpaid balance. This is true, and if plaintiff had said that he agreed to pay that additional sum of money for the purchase on time because he was misled into believing the figure amounted to a charge of only 7% per year on the unpaid balance, a different case would be before us. The use of 7%, with or without the unrevealing words "add-on," obviously tends to conceal the severity of the charge itself and we should not be understood to find representations of that kind to be of no significance. The point here is that plaintiff does not claim he was deceived. His sole contention is that the transaction constituted a "loan" to him and that on that basis there was a usurious charge. For the reasons given by the trial court, we agree the transaction was a sale, and hence beyond the general usury statute, N.J.S.A. 31:1-1.
Affirmed. No costs.
For affirmance Chief Justice WEINTRAUB and Justices JACOBS, FRANCIS, PROCTOR, HALL and HANEMAN 6.
For reversal None.
|
Authentic Lebanese Tabbouleh Recipe
I always smirk a bit when I see “tambouli salad” in a deli case or on a salad bar here in the United States. Nice try, but that’s far from authentic Lebanese tabbouleh. The proportions are all wrong — parsley should dominate, not the bulgur. Maybe we’re just not used to eating so much parsley. This Mediterranean herb is often dismissed as a table garnish. But parsley is a nutritional powerhouse, rich in vitamins A (beta carotene), C and K, and packed with health-promoting flavonoids. Plus, you’ll never come close to the tabbouleh I’ve enjoyed in Lebanon (pictured here) or the version I’ve learned to make myself under the watchful eye of my Lebanese mother-in-law if you skimp on the parsley.
Tabbouleh is one of the most famous of all Lebanese dishes. In fact, this beloved traditional salad is a source of national pride. There’s even a national celebration of tabbouleh each summer in Lebanon. Here’s a poster promoting National Tabbouleh Day in Beirut, which is held at Souk el Tayeb, Lebanon’s first farmers’ market.
The methods of making tabbouleh vary according to regional or family traditions. But like the Lebanese flag, the basic ingredients and colors never change — the green, red and white are always present. The word tabbouleh comes from the Arabic word Mtabali, which means seasoned. I typically only use salt as my seasoning, but some people in Lebanon prefer a version with additional seasonings. Haalo from Australia (Cook Almost Anything) features a tabbouleh recipe that includes allspice, cinnamon and pepper (and includes some great photos).
I’ve seen Americanized versions of tabbouleh made with couscous — but resist that. You really need bulgur (referred to as burghul in Lebanon), which is a wonderful fiber-rich ingredient — perhaps the original whole grain. You can easily find these cracked wheat kernels in most supermarkets now, or try Middle Eastern markets or natural food stores. I’ve also seen garlic added to some U.S. tabbouleh recipes, but that would be laughable in Lebanon. Tabbouleh is meant to clean the palate and freshen the breath between bites of spicy, garlicky food — so it never contains garlic itself.
There are various grades of bulgur — fine, medium or coarse. Save the coarse bulgur for making pilafs. You’ll need fine or medium for tabbouleh (I typically use fine or #1 bulgur). I’ve found two different versions of fine bulgur in the Middle Eastern markets in Chicago; you can see that one is much darker than the other. Even though bulgur is considered a “whole grain,” a small part of the bran is sometimes removed during the drying and cracking of the wheat kernel. You can see the differences in color below, the version that is darker includes more of the bran. I used the lighter version for today’s tabbouleh so the specks of white would be more visible.
The recipe I’ve learned to make from my mother-in-law Karam starts with soaking the bulgur in fresh lemon juice (about the juice of 2 lemons). Please don’t use bottled lemon juice — it makes a difference! Let the bulgur soak for 20 minutes or more until all of the liquid is absorbed and the bulgur appears dry. Then fluff with a fork. Some people in Lebanon only rinse the fine bulgur and then dry it — no softening is needed for the fine grade. If you’re using medium bulgur, it’s best to cover it with hot water and let it soften for 30 minutes or longer. Just be sure the water is all absorbed and you squeeze out any excess liquid. The bulgur needs to be dry; nothing is worse than soupy bulgur. I often let the bulgur soak in my mixing bowl while I prepare the parsley.
The most time-consuming part of making tabbouleh is preparing the parsley — washing, drying and hand chopping. But I must admit that I’ve found ways to successfully cut corners. In Lebanon, flat-leaf parsley is typically used and it’s carefully sliced by hand to create hair-thin and crisp slivers. Over-chopping can bruise the parsley and create a limp, mushy salad. I know this is not so authentic, but I pull out my Cuisinart. I find that if I use curly parsley instead of flat, it stands up better to the food processor. But first, you must thoroughly wash the parsley. I soak the bunches in cold water and pull off the stems, then rinse several times in a colander. The parsley must be extremely dry before putting it in the food processor, so I use a salad spinner to speed the process. Work in batches and gently pulse the Cuisinart until the parsley is coarsely chopped. Don’t keep it running and over-process; the parsley can quickly turn to mush.
Pour the coarsely chopped parsley into your mixing bowl in batches, picking out any random stems that appear. If you’re adding mint (I don’t always), thinly slice it by hand and toss into the parsley. Add the diced tomatoes and sliced green onions and mix well. Squeeze the remaining 2-3 lemons over the mixture, toss, and then add the olive oil. The mixture should be moist but not drenched. Add salt to taste, toss well and enjoy.
Traditionally, tabbouleh is a part of mezze (appetizers) — eaten by hand scooped up with a romaine lettuce leaf, white cabbage or fresh vine leaves.
Thanks for the recipe! I agree about the proportions in non-authentic tabbouleh. Someone told me once she can’t handle too much parsley. I wonder if it’s an acquired taste. I once made stuffed grape leaves for my friends and they couldn’t handle the taste.
I was at a friend’s home for a party. He is Lebanese and his Mother and Father were in town for the celebration. His Mother did all the cooking. Oh my! This Italian girl ate SO much. It was just great food. They served Tabbouleh and yes, I thought, where is the cracked wheat? Definitely different from your typical American deli’s version. I loved it and being diabetic, I need to stay away from large quantities of grains. I’m so thrilled you’ve posted this authentic and healthier version. I’m headed out to get some green onions and mint! Thank you again.
Valeria
Wow – I had lost my recipe, given by a Jordanian friend’s mother some 35 yrs ago, and the “stuff” they sell here in the U.S. is just never the same. Yours is absolutely the most delicious and authentic I have found. Thanks
Thank you for this beautiful and authentic recipe. I am going to blog about my experience making your recipe-which is by far the most genuine Lebanese recipe for real tabbouleh that I’ve found.
Catch my blog later at http://www.mayormom.net and again, thanks.
I have found a solution for drying the parsley. After washing and you have gotten the majority of the water off it by salad spinner, paper towels, kitchen towel or such, take a hair dryer on a low setting like you would a hand dryer and blow the parsley the rest of the way dry. Works perfectly. Don’t hold the dryer too close or the heat will wilt the parsley. You don’t want that.
Oh, yeah! That looks amazing. I’m fortunate enough to be dating someone from Beirut, and I’m happy to say this has spared me from trying that horrible imitation stuff I see here and there. This looks great. I’m going to make it this week!
Shahlah
Hi,
Thanks for this recipe………I was looking at all the others and was getting frustrated because, like you said, they are not “Lebanese style,” adding ingredients that don’t belong. I know what you mean (gave someone a recipe for Ma2loobeh and they put peas as the vegetable…PEAS!!!!!!). Anyhow thanks again….I hope one day I can revisit Lebanon..BEAUTIFUL.
DTKenmo
I found this great tool at a liquidation place… herb scissors. Multi-blade scissors that make short work of chopping the parsley & mint for this. Previously I had rolled up the herbs in lettuce leaves & sliced thin with a knife. Just Google “herb scissors” and you’ll find some, although probably a little pricier than the liquidation place I got mine.
Great recipe, BTW!
Hélène Daigneault
Can you break down the caloric and nutritional value/contents, phytonutrients, etc? Thanks! |
2016 Get You In Shape Client Appreciation Party and Client of the Year Nominees & Winners
Below are the nominees and the winners followed by some pictures from our Client Party
Nominee Jackie Brainerd
"I have lost 18 lbs. I broke my fibula in March. After surgery placing a plate and 8 screws in my left ankle area I came back with a boot in just 30 days. My surgeon said he knows being active in the GYIS program helped my healing tremendously. He released me in 60 days (norm is at least 90 days) and no outside physical therapy. At 90 days out I was back in the program 100%."
Jackie Brainerd
Coppell, TX
Jackie Brainerd is a CPA working as a manager of a large pediatric group. She has lived in Coppell for the past 12 years and joined GYIS a little over a year ago.
Q: Where were you at in your life before the Get You In Shape? What did your life look like? I’ve always been active playing softball, golf, tennis and walking. I have done the SGK 3 Day 60 mile walk the past 10 years. 5 years ago I decided to sign up for an interval running class and have just kept running 3 times a week. I had the cardio covered but not the weight training, core work-out and the stretching techniques that are so important as we age.
Q: Why did you decide to join Get You In Shape? I had known about the program for a while from neighbors and friends that have been active in the program. When I saw the advertisement in Sept. 2014 the timing was right. Plus I love exercising outside!
Q: What was your first experience with or first impression of Get You In Shape? After meeting with Brad I thought “What have I got myself into?” What was I thinking? And then I attended my first session with Chaney. I was welcomed and introduced to other members. I feel like I have a “personal trainer” yet the camaraderie of a group setting. I was hooked and have been for over a year.
Q: What are some of the tools from the Get You In Shape program you have used that has helped you? I was introduced to My Fitness Pal, tracking your calories and fitness could not be easier. The recipes, exercises & words of encouragement shared on the private FB page are wonderful. Being able to post accomplishments and getting words of encouragement back is awesome.
However, the individual and team contests have been my favorite tools offered by the program. All ages, all sizes and genders make these challenges open to all. I have just enough of a competitive spirit to want to be successful.
Q: What do you like most about the Get You In Shape Program? I love the personal trainers, the flexible schedule, the inclusiveness of all levels –how the program can be customized to your needs.
Q: How has the Get You In Shape program helped change your life? So much for the better. I can’t imagine not doing boot camp and especially not with GYIS. I am such a social creature. I am not motivated to work-out on my own at all!
Q: What is your proudest moment or result from the Get You in Shape program? In Jan & Feb of this year I participated in the GYIS Biggest Winner Team Contest. Wow! I saw this small transformation over the course of the contest in my physical, mental and emotional well-being. It turned out to be a big change for me especially not knowing what was just around the corner for me.
Q: How has the Get You In Shape Program impacted other areas of your life? I have participated in the Susan G. Komen 3 Day 60 Mile Walk for the past 10 years. This year’s walk, which was this past week, was the best one from a physical standpoint for me. The Darks and I stopped at every pit stop and stretched. Others began doing what we were doing to help stay injury free. This is the very first year I had no blisters! Through GYIS I have learned the importance of hydrating and stretching.
Q: What are the results that you have achieved from the Get You In Shape program and how has Get You In Shape played a role in your results? I have lost 18 lbs since May 2014. GYIS helped me lose another 7 lbs and 5 more inches.
However, I broke my fibula in March 2015 which interrupted my fitness program. After surgery placing a plate and 8 screws in my left ankle area I came back with a boot in just 30 days. My surgeon said he knows being active in the GYIS program helped my healing tremendously. He released me in 60 days (norm is at least 90 days) and no outside physical therapy. At 90 days out I was back in the program 100%. I have started my weight loss journey again by using My Fitness Pal and a commitment to attend all available sessions. All my health numbers have gradually become better since last year. I am taking less medication to maintain those numbers and have a lot more energy. I needed it this past summer since my 9 yr old granddaughter spent the summer with me.
I feel better about myself. Staying active with the help of GYIS keeps me mentally fit as well. I am truly blessed.
Nominee Jacqueline Tapella
"I feel like I have attained my goals over these past 6 months and I know that Get You In Shape is the ONLY reason I have stuck with these goals. My last two pregnancies did not involve working out and I felt very different than I do today."
Jacqueline Tapella
Coppell, TX
Jacqueline Tapella lives in Coppell and has been a client with GYIS since May of this year.
Q: Where were you at in your life before the Get You In Shape? What did your life look like? Right before Get You In Shape, I was just doing my day to day thing…being a mom to two young children, working full time and supporting my husband while he worked on building a new business. I did not work out and I tried to consciously eat mostly healthy foods. In the past, I have been in boot camps or involved in other aerobic classes through the YMCA or the Aquatic center. I really enjoy working out and staying fit; it just wasn’t a part of my busy lifestyle at that moment before I joined. I had other priorities. I remember telling her that I planned to sign up and was excited to get started. The price was what held me back from signing up until I talked it over with my husband. It is a financial commitment and unfortunately, the timing was not great for us. As I mentioned, he was just starting his own company and we could not commit the monthly amount. I was so sad to call and tell GYIS that I would not be signing up, but over the next year and a half, I would keep driving by Andy Brown wishing I was part of that group working out in the parking lot. I stayed on the email list and in May of this year, saw a promotion for the 28 Day Kick Start. I knew that I wanted in, signed up for the month, went to the initial meeting again, this time with a group of people at the Get You In Shape office, and hoped that this time my husband and I could make the financial commitment work. He said that we both always try to help each other to do things for ourselves that keep us happy and healthy. Get You In Shape, I have learned over the past 6 months, is my “THING” that keeps me happy, healthy, strong and motivated in my own life and for my family.
Q: What was your first experience with or first impression of Get You In Shape? My first impression was WOW, you really get your money’s worth in this program and WOW, they really care and want you to succeed. I love that attendance is taken, I love that we get measured every two weeks, I love that we have timed miles and get recognized for improvement. There are so many tools to help us succeed and reach our goals. I have not seen another program like this one and I am very happy to be a part.
Q: What are some of the tools from the Get You In Shape program you have used that has helped you? I LOVE the “my fitness pal” app! I had been familiar with it in the past but did not use it consistently unless I was really focused on losing weight. I used it for the first few months of my pregnancy to try to keep my weight gain under control. It is a great tool that I know will help me in the coming weeks as I get my weight back down after having the baby.
Q: What do you like most about the Get You In Shape Program? I LOVE the people and sense of community. It’s what keeps me coming back and what kept me working out up until the last two weeks of my pregnancy. I just didn’t want to not see everyone each morning even though it was way before the crack of dawn! Everyone (trainers and clients) is so motivating and positive; it makes me want to work harder and makes me excited to feel a part of the Get You In Shape community.
Q: How has the Get You In Shape program helped change your life? Get You In Shape has given me back my 3-4 hours a week of “me time.” It’s just enough to let me do something for myself so that I can put that focus and energy back into my family and my job. I feel healthy, strong, energized and confident.
Q: What is your proudest moment or result from the Get You in Shape program? Honestly my proudest moment so far has been being nominated for this award. I haven’t had the typical results thus far, given my current pregnant status but I have been motivated by other clients and complimented numerous times by clients and the trainers these past few months on my dedication to working out. Again – the PEOPLE in this program are amazing, I can’t say it enough. I am honored that I am one among others that has shown dedication to this program and I look forward to reaching new goals in this coming year.
Q: How has the Get You In Shape Program impacted other areas of your life? I mentioned it earlier about how Get You In Shape changed my life – the program has definitely helped with my energy levels and confidence. I am happier, stronger and love how I feel after I have been to an hour of boot camp. It helps shape my day – whether it’s with my family or at my workplace. I don’t consider myself a stressful person, I don’t let stress bother me too much, but if I am ever feeling like I need a breather or to take a few breaths and relax from stress – Get You In Shape sessions can always do the trick!
Q: What are the results that you have achieved from the Get You In Shape program and how has Get You In Shape played a role in your results? My goals for the 6 months that I have been working out with Get You In Shape were to work out a few days a week during my pregnancy, gain a healthy amount of weight, feel confident over the summer in a swimsuit while pregnant and stay in shape so that after the pregnancy, I would be able to get back to my average weight easier. I have gained a little more than I expected but I feel great this time and I feel like the weight stayed within my belly area as opposed to going to all areas of my body. I have good muscle definition and I feel stronger even though I have gained weight. I feel like I have attained my goals over these past 6 months and I know that GYIS is the ONLY reason I have stuck with these goals. My last two pregnancies did not involve working out and I felt very different than I do today.
Nominee Maria Krehel
"Although I am getting older, I feel younger and stronger than I did in my 30’s."
Maria Krehel
Coppell, TX
Maria’s Journey Down the Yellow Brick Road to Health
For over 29 years, I have worked in the skin care & makeup business. I built a multi-million dollar sales business. My career goal is to help people look their best on the outside - so they can feel good on the inside. I get to work with 1000’s of people in my sales organization.
I am originally from the New Orleans area. I have lived in north Texas for 17 years -- with the last 5 in Coppell.
As you can surely tell from what I do and where I was raised, my life is very colorful.
However, my health and fitness life was Black & White. The same 4 walls, the same routine every week. I worked out with a personal trainer one-on-one & hit the treadmill for cardio in a gym. I had no friends to share the journey with or to inspire me. I was spinning round & round in a whirlwind and getting nowhere. I wondered if this is all there is. Is this just the way it is going to go…all the while getting older and the years were beginning to show.
I encountered the Wicked Witches… fear, anxiety, self-doubt ... which were holding me back from achieving the health & fitness goals that I had in mind for myself.
NO! I dreamed of more. I decided I had to be open to new things and that is how I made the decision to follow a new road … the Yellow Brick Road to health and wellness with the GYIS program.
I contacted the Wizard (Brad Linder) at the beginning of my journey down the Get You In Shape yellow brick road to health. He piqued my curiosity and grabbed my interest. I reminded myself that I had to be open to new ideas in my journey. So I joined Get You In Shape … and began a new journey down the Yellow Brick Road to health.
I first encountered a Lion along my path who helped me find the Courage to shut out the Wicked Witches (fear, anxiety, self-doubt) who tried to block me from staying on the Get You In Shape Yellow Brick Road. He gave me the Courage to push harder past the fears and past the anxiety.
I soon found myself in a new world filled with many colorful opportunities. Luckily, when I showed up the first day, there were many other Munchkins (boot campers) to take the journey to health with me. They all remembered when they were the frightened "newbie" on the Get You In Shape YBR. All the Munchkins were warm & quick to welcome everyone!
There I met the Good Witches (Julie, Chaney, Kathy, Lotta & Meg … oh my!) who were encouraging, cheering you on and helping guide you down the GYIS Yellow Brick Road to Health. I found the Courage to run races, learned to cycle and workout in the rain, cold or heat - meeting whatever Mother Nature brought on. I quickly realized that I would not die or Melt if I got wet AND as a result, I now have the Courage to face other “impossible” challenges in my life. Obstacles that would have previously shut me down.
As I continued my journey down this wonderful new road, I met the Tin Man who showed me the importance of a Heart…a healthy Heart. I told the Wizard (Brad) that I was not a runner, but with the work in Boot Camp, my timed mile kept getting faster and I found my Heart was getting healthier. I decided to open another door that Get You In Shape offers, I opened the door of Cardio Club on my journey to a healthy Heart. What a new door it opened to my health! My confidence was growing and the Good Witches & Munchkins were adding to my confidence with cheering and encouragement. This support allowed me to run in the Pink Soles 5K where I placed 2nd in my age group.
Along the road I also utilized all the workout challenges offered by the Wizard. He had the road map to health and I was now following it. The Good Witches kept the encouraging, teaching environment going. When I lost my drive or was tired, they made sure to lend an encouraging word to help.
I moved further down the road and to my surprise, I encountered the Scarecrow searching for a brain … and I got one too. My eyes were opened even further to new things. I challenged myself to do a lot of ‘firsts’, my mental state became happier, I sleep deeper, I eat better, I am consistent & committed to my health like never before. I found that it is never too late to discover new things or remind yourself of past things you used to do. Why hadn't I thought of that sooner … I guess I needed a healthy Brain. I discovered a new activity that I enjoy…cycling. With help from other Munchkins and the Good Witches, I overcame the anxiety of falling, transitioning from running to cycling and back to running, and tackled my 1st duathlon. I finished 5th in my age group! I would have never done anything like this if I were not on the Get You In Shape Yellow Brick Road to health.
My Brain was learning and growing, and my body and Heart were too. I learned on the road that action comes first, then the emotion of wanting to be there follows, and then the motivation to do it again.
I learned there is not an end to the road to health. It continues as long as you stay on it. But you need Munchkins, Good Witches, and Wizards to keep you going and to stay on the brightly-colored, Yellow Brick Road to health … and to keep the Wicked Witches at bay.
While on the road to health, here are some of my successes:
· I am down 10 pounds and 13 inches since starting my journey a year ago. My energy level is very consistent. My sleeping is consistent. My overall health is amazing.
· I assured the Wizard (Brad) “I am not a runner.” Yet, the combination of Cardio Club and Boot Camp, and the encouragement of the Good Witch trainers has helped me learn to enjoy running or at least fast walking ;).
· I am no longer afraid of what the weather will be like or how my day will be as a result of the weather. I am much more flexible (in my new healthy Brain)!!
· Although I am getting older, I feel younger and stronger than I did in my 30’s.
· I would like to add that in January I set a personal goal to be consistent in ALL areas of priority in my life. Moving my body every day is a priority for me.
· SO… I show up on the GYIS Yellow Brick Road even when I don’t “feel” like it (which can be often). Showing up is half the battle. When I leave boot camp in the morning, I always feel better than when I climbed out of bed. It has definitely paid off!!
I don't want to go back to where I came from …that same Black & White world with the round and round whirlwind getting me absolutely nowhere. When I met with the Wizard, I told him that I want to continue enjoying the colorful life which is so much better than the old road that I was on.
He told me I had to close my eyes and click my ruby red tennis shoes together & repeat ...There is no place like health at Get You In Shape … There is no place like health … There is no place like health!
Nominee Michele Solorio
I suffered a brain injury and after 2 weeks in the hospital doctors told me it would be about 6 months to a year or longer before I would be back to my regular activities. I returned to boot camp two months to the day of my accident mainly because of the shape I was in before getting injured."
Michele Solorio
Irving, TX
Michele Solorio is Co-Owner and operator of a custom home framing company. She lives in Irving and has been with GYIS for almost 2 1/2 yrs.
Q: Where were you at in your life before the Get You In Shape? What did your life look like? I had a hectic life with three small children. I was occasionally working out with a personal trainer. I was not working at our office at that time but I was busy with house and kids. I was in a funk. My father had recently died, I was sleep deprived and I was not happy with my body.
Q: Why did you decide to join Get You In Shape? I wanted to do something for myself to help me reach my goal weight and I wanted the energy to keep up with my kids.
Q: What was your first experience with or first impression of Get You In Shape? My first impression of get you in shape was how welcoming and supportive everyone was.
Q: What are some of the tools from the Get You In Shape program you have used that has helped you? One thing I have learned is that exercise alone will not get you to your goal. You have to change the way you think about eating. MyFitnessPal is a good tool to keep track of your eating and help count calories and fat intake. RunKeeper and Fitbit are other tools I now use to keep track of my miles and steps per day. The GYIS Facebook community is really helpful. The challenges we participate in, recipes, class schedules and friends connecting with one another are a nice addition to the classes themselves. Cardio club is great for off-day cardio. The monthly challenges that are posted on Facebook help give a little extra motivation and accountability.
Q: What do you like most about the Get You In Shape Program? They are numerous. ...I like the multiple class times, the trainers are so helpful and friendly, the other clients are so enjoyable to be around, I like the consistency (rain or shine, they’re always there to challenge us). I never would have thought that one day I would look forward to working out, but I do!!
Q: How has the Get You In Shape program helped change your life? Wow, it has uplifted my spirit, my attitude and my physical strength. But I do have to say that on May 1st of this year I suffered a brain injury, and after 2 weeks in the hospital doctors told me it would be about 6 months to a year or longer before I would be back to my regular activities. I returned to boot camp two months to the day of my accident. Although I wasn't at 100% and I had to start slow... I was doing it.
Q: What is your proudest moment or result from the Get You in Shape program? My proudest moment was when I got my sub 10 min mile band because that was on my bucket list for a long time. I now have a new goal of sub 9 min mile!! Also, getting my 100 mile t-shirt. I got most of those miles in rehab on the treadmill.
Q: How has the Get You In Shape Program impacted other areas of your life? The impact of GYIS in my life is big. I mean, going back to the injury I sustained earlier this year, I owe my advanced recovery to GYIS. I had several doctors ask me what type of physical activities I participated in and I would say GYIS boot camp. They told me that I was lucky that I was strong enough 1) to hang on to my car as long as I did before falling; 2) I could lift myself up to the wheelchair, then walker, then eventually cane without assistance; 3) I had the mindset to accept that this recovery was going to be hard and long and frustrating. I spent the summer at the Centre for Neuro Skills with physical, occupational and speech therapy. Everyone there said I was improving so quickly... about 50% faster than they had ever seen anyone with my injuries do before. They said it was due to me being in such good physical shape. I was strong and had better stamina than most people in recovery. Granted, I felt like I had no endurance or strength, but when I looked at the patients around me I could do quite a bit more. I was grateful that I was in this program.
I have made such good friends here. Everyone was so kind to me during my recovery, sending me cards, gifts, visiting me in the hospital. Just an overwhelming outpouring of support. I am blessed to be here, talking to you right now, and I am blessed to be involved in such a great community. Anyone who works out in the evening knows my children and I am grateful for everyone's patience and acceptance of them being around the workout area.
Q: What are the results that you have achieved from the Get You In Shape program and how has Get You In Shape played a role in your results? I have changed my mindset about exercise. I know I have to move and keep going. Never give up and I will get results. I cannot be lax about food or commitment to working out and expect a change. I have always wanted to like running; although I'm not all the way there yet, I don't mind it as much. I am back to cardio club. I came back this month, which is 2 months ahead of when I had expected to return. I figured I wouldn't know if I could do it unless I tried... So I did. I have run two 5Ks, which I would never have dreamed of doing before I joined, and I have another one scheduled for next month. I'm working my way up to a 10K!! I have lost 10 lbs since my return back in July. I have lost 2 dress sizes this year, and I have participated in and completed 4 challenges (Biggest Loser, 100-a-day challenge, ab challenge and booty challenge) and am still working on running 350 miles this year. These are things I would not have thought I would do a year ago.
I just have a new outlook on life. When you are blessed with a second chance you realize you cannot take things or opportunities for granted. We are not promised tomorrow so I must take advantage of today and I choose to keep on my path of self-improvement and I thank GYIS for being there for me and supporting me on this journey.
Nominee Nancy Anderson
This isn’t my first time to work out or to lose weight, but I’ve never been this consistent about being part of an exercise program. I want to be one of those people like I saw during my first week who celebrates years of working out and I’m convinced that’s going to happen."
Nancy Anderson
Coppell, TX
Nancy Anderson is a freelance technical writer living in Coppell. She joined Get You In Shape at the end of April 2015.
Q: Where were you at in your life before the Get You In Shape? What did your life look like? The combination of a stressful job, knee surgery and a knee injury (on my “good” knee) resulted in an extra 30 pounds--but that’s just part of the story. I had acid reflux, was recently diagnosed with high cholesterol and felt I was missing out on life. I felt tired and slow all the time. And I felt very old! My athletic husband’s life was such a contrast to mine, but I thought I would never be able to run again because of my knees.
Q: Why did you decide to join Get You In Shape? I had just gotten the news from the doctor about my high cholesterol and was motivated to get off my bum. After I was laid off from my full-time job, I realized that this was the perfect opportunity to get back into exercising. I decided to try Get You In Shape because it had such a wide variety of times that it met. Also, I liked the idea of exercising in the outdoors.
Q: What was your first experience with or first impression of Get You In Shape? Honestly, I was shocked at how out-of-shape I was. I was breaking into a sweat during the warm-ups! I remember thinking that I had a long uphill climb to just get through the workouts. But immediately I loved how fun and positive the trainers are; I think Kathy Chasteen managed to make me laugh during every workout. And Chaney was so sweet and encouraging. Also, I liked the camaraderie within the group, the wide variety of exercises and being outside.
One thing that really stuck with me from that first week was when clients were being recognized for their anniversaries. When I saw that there were people who had worked out for 7 years, 4 years, etc., I could hardly believe it. I thought, “Now this place must have something special!”
Q: What are some of the tools from the Get You In Shape program you have used that has helped you? I did the Advocare 24-Day Challenge as a jumpstart and that was great for seeing immediate results. Since that time, I have pretty consistently stayed with protein shakes for breakfast. Using My Fitness Pal has perhaps been the best tool to keep me accountable. I fell off the wagon once when it came to using it, but a friend on there encouraged me to get back on track. Since then, I’ve added more friends through the tool for that very purpose: to help keep me on track.
Q: What do you like most about the Get You In Shape Program? What’s not to love? I love the trainers, the other members, the variety, the joy of lying on a mat and looking up at the stars and moon, the contests, t-shirts, the newsletter--I truly can’t narrow it down. It’s the whole package that I love. It’s kind of like church for sweaty people. :)
Q: How has the Get You In Shape program helped change your life? This program has awakened me and pulled me off the sidelines, back into the game. I find myself looking for opportunities to move more (like taking stairs instead of escalators). Also, I feel a sense of purpose and accomplishment. I’ve seen the biggest changes since I started doing Cardio Club; now I’m thinking about finding the next 5K I can run. I want to run a race in each of the 50 states.
Q: What is your proudest moment or result from the Get You in Shape program? My proudest moment was when I somehow managed to knock a minute off my timed mile. Or maybe it was when I squeezed out three legitimate burpees before I got too tired and dizzy to do more. I know I’m taking baby steps, but my goal from the beginning was to KEEP COMING TO CLASS—not to be the queen of burpees (though wouldn’t that be a fun title to hold). I know that my consistency and commitment will give me great results over time.
Q: How has the Get You In Shape Program impacted other areas of your life? My husband, Steve, saw how much I enjoyed the classes and that I was getting good results so, with some encouragement from me, he started attending in July. It’s been so great to have this activity that we share. We already had a good marriage, but I believe this program has made our marriage even better!
Q: What are the results that you have achieved from the Get You In Shape program and how has Get You In Shape played a role in your results? As of a month ago, I had lost 8 pounds and 16 inches. I have lost at least two more pounds since then. I’ve dropped a jean size and a shirt size. I’m hoping my cholesterol is better, but I haven’t retested. I know this is because of the combination of working out (especially since adding cardio club) and tracking what I eat. And even in smaller ways, like motivation through Facebook, the newsletter and the extra fun things we do, all of these things add up to keep me interested.
This isn’t my first time to work out or to lose weight, but I’ve never been this consistent about being part of an exercise program. And I know that’s because, in a sweaty, difficult way, I’m having fun and I’m being encouraged. I want to be one of those people like I saw during my first week who celebrates years of working out and I’m convinced that’s going to happen.
3rd Place Laura Hazlewood $100
All I have to say is how hasn’t it changed my life??? Everything has changed. From losing close to 30 pounds, to the size I wear, to the motivation I have for working out, to the “swagger” I have at work and in my family life."
Laura Hazlewood
Coppell, TX
Laura Hazlewood is Director of Sales for AT&T and lives in Coppell, TX. She joined GYIS March 1st 2015.
Q: Where were you at in your life before Get You In Shape? What did your life look like? Tired, frustrated with myself; I thought I was “too busy” to make myself a priority.
Q: Why did you decide to join Get You In Shape? I was at the end of my rope, none of my clothes fit, I was so frustrated with how I looked and how I felt I didn’t know what else to do. I called Brad (from knowing him 7 years ago after my daughter was born) and he was gracious enough to listen to me and offer suggestions. He said “come on, you can do it” and I DID :)
Q: What was your first experience with or first impression of Get You In Shape? My first class was during “snow week” back in March and it was really hard, but I knew it was hard work that was going to get me to my goal. I was doing the Advocare cleanse and couldn’t have coffee or bread or cheese (my comfort “go to’s”) and thought that if I could do this, I could do anything!!! And I DID :)
Q: What are some of the tools from the Get You In Shape program you have used that has helped you? First and foremost, the people. The trainers are amazing! Julie, Chaney, Lotta, Kathy, Meg . . . . you have changed my life!!! My Fitness Pal is great, definitely helps to see what you are doing, chart it and visually be accountable for your diet and exercise. I would say the other clients are a tool; they motivate, help hold you accountable, guide you and keep you on track.
Q: What do you like most about the Get You In Shape Program? Without a doubt, the people!! Trainers and clients make all the difference in the world. Without the people aspect, it’s just like joining a gym or “hoping” for results. The people make it all possible!! I also like the variety in the workouts, the thought and effort put into designing the workouts to change the intensity, areas we work on and it keeps it fun and interesting.
Q: How has the Get You In Shape program helped change your life? All I have to say is how hasn’t it changed my life??? Everything has changed. From the size I wear, to the motivation I have for working out, to the “swagger” I have at work and in my family life. My kids look at me with a new sense of pride, I feel awesome when I’m up in front of people (coaching cheer, presentations at work etc). I have a huge sense of accomplishment, I set a goal and I DID it!!!!!!! Amazing pride, self-esteem and appreciation comes with it and I am grateful for the opportunity I have had and continue to have with this great group!
Q: What is your proudest moment or result from the Get You in Shape program? Over the summer I was in a dressing room at Ann Taylor with my daughter and I was trying on a pair of shorts. I put on a size 8 and they were too big!!! TOO BIG!!!! I actually cried. As tears were coming down my cheeks, my daughter said “what’s wrong Mom?” and I said “I am just so proud of myself, I have worked really hard for this and I DID it!” Each milestone carries with it a sense of accomplishment and whether it’s running the mile a minute faster than last time or being able to wear the next smaller size, it all matters and it all gets me closer to my goal. It’s tangible evidence that through dedication and hard work, you can do anything!!!!!
Q: How has the Get You In Shape Program impacted other areas of your life? It’s greatly impacted both my personal life as well as my work life. It’s given me the ability to coach cheer with great pride and get out of my comfort zone by putting me in situations that made me really uncomfortable in the past. The same with my work life. I have a new pride and self-esteem that allows me to volunteer for projects, presentations and events that put me out in front of a lot of people and I do so willingly and with confidence. It’s incredibly important to do this and teach my kids that you can do anything you put your mind to. Set a goal and see it through. Do not quit when it gets hard. Always be up for the challenge and YOU make it happen!!!
Q: What are the results that you have achieved from the Get You In Shape program and how has Get You In Shape played a role in your results? I’ve lost nearly 30 pounds (28 and some change!!!!), went from a size 14 pants to an 8, and I can easily buy a Medium anything and it FITS!!!!! I am confident, HAPPY, have a huge sense of pride and accomplishment that positively impacts and shapes my daily life on the business side and personal side. My skin looks better, I look healthy, I am actually eating a lot less so it’s saved me money and I feel great due to the healthier food and supplements from Advocare. So many life changing results that I will work hard to continue!
Overall, I just feel blessed. Blessed and thankful to be a part of this group, to have been accepted so freely by everyone involved and lifted up each and every day by the people and mission. I am grateful for the opportunity to change, learn and transform (both on the inside and outside). Although weight loss was my primary objective, I have accomplished so much more. It’s a new chapter and I am so excited to see what 2016 and a new full year in GYIS has in store. I am willing to continue to put in the hard work and dedication it will take in order to reach even more goals I will set forward for myself. Can’t wait for the journey to continue!!!!! Thank you!!!!
2nd Place Zach Edwards $250
I've lost over 30 lbs from my heaviest and removed inches where I needed to while gaining inches where desired… increased lean muscle mass and reduced body fat."
Zach Edwards
Coppell, TX
Zach Edwards is an Architect and lives in Coppell. He has been a Get You In Shape client off & on for the past 5 years (since his 1st son was born) but has been fully committed since February of 2015.
Q: Where were you at in your life before the Get You In Shape? What did your life look like? I could literally feel myself getting unhealthy. I wasn’t happy with how I was feeling, both physically & emotionally, and I wasn’t happy with how I looked. I work in a stressful profession and the role I fill is a highly demanding one. My body was not responding to this stress well and I knew I needed a change. I was overweight, had high blood pressure, was tired often, but not sleeping well. I was becoming a mess and needed a change.
Q: Why did you decide to join Get You In Shape? My wife did it first and gave great reviews. Get You In Shape offered a promotional special, so I decided to try it the first time 5 years ago. This recent time around, when I restarted, I knew what I was getting into and knew Get You In Shape was the right fit for what I needed…. Consistency, fun and tough workouts, variety and results!
Q: What was your first experience with or first impression of Get You In Shape? 5 years ago… “What did I get myself into? This was nothing like my collegiate training… Will this even do anything?” After just the first week… I knew it was legit as I could barely walk...and I wanted more! ;)
Q: What are some of the tools from the Get You In Shape program you have used that has helped you? The 24 day challenge and Advocare have been instrumental in meeting my goals. Measurements and tracking have kept me honest and are good check-ins. Food journals have helped me surpass plateaus in my progress.
Q: What do you like most about the Get You In Shape Program? It's never the same workout twice, but you can always expect consistency and dedication from the trainers to get the most out of you. Even when you don't feel like working out, getting to class and seeing everyone gets you motivated and you're always glad you made it when you're finished.
Q: How has the Get You In Shape program helped change your life? I'm leading a healthier, happier and more balanced life, and I enjoy feeling great again.
Q: What is your proudest moment or result from the Get You in Shape program? By far, it was running a sub 6 min mile (5:41) and breaking my PR on a 5k (23:16).
Q: How has the Get You In Shape Program impacted other areas of your life? Desire for improved nutrition, I'm sleeping better, I have more energy throughout day, I'm in a better mood more often and experience less stress (or at least handle it better). I've also got a happy wife at home from my physical improvements, so that's definitely a bonus! :)
It's motivated me to join in with close friends from high school to run a marathon in 2016. This was a pact we made to each other over 20 years ago, but never fulfilled when we turned 18. I've now got the energy and motivation to run on my off Bootcamp days and I even take my training stuff with me on business trips and vacations… That never happened before.
Q: What are the results that you have achieved from the Get You In Shape program and how has Get You In Shape played a role in your results? Blood pressure is the biggest improvement. I went from stage 1 hypertension down to well within the perfect range without the use of meds.
I've lost over 30 lbs from my heaviest and removed inches where I needed to while gaining inches where desired… increased lean muscle mass and reduced body fat.
Get You In Shape gave me the program, nutritional and supplemental guidance, and motivation to reach these results. Thank you!
1st Place Sharon Kirby $500
I have lost 39.75 pounds and 180.5 inches. And it is staying off! I felt so cared for and supported and I received so much assistance in achieving my goal to look better for the 2 weddings. It’s been quite a journey that continues to bless me and my family."
Sharon Kirby
Coppell, TX
Sharon Kirby is a Tax Accountant at Boy Scouts of America National Service Center. She lives in Coppell and has been regularly attending Get You In Shape since Jan. 2015.
Q: Where were you at in your life before the Get You In Shape? What did your life look like? I was definitely living a sedentary life prior to GYIS. I watched a lot of TV, stayed up late and had trouble getting up for work each morning. I thought I knew what to do but I sure wasn’t doing it.
Q: Why did you decide to join Get You In Shape? In the fall of 2014 my son asked his girlfriend to marry him and then in December, my daughter’s boyfriend proposed to her. Both weddings were scheduled for the summer of 2015. I did not want to be remembered forever in wedding photos at the size I was currently wearing. With June and August wedding dates set, I knew I needed help. GYIS had helped me achieve results before so I decided to try YET again. I was a little worried, though, that I wouldn’t be able to achieve the results I had in a previous year (2012).
Q: What was your first experience with or first impression of Get You In Shape? I love that the team at GYIS welcomed me back with no lecture. :) I felt accepted, not ashamed for where I had ‘fallen’. It was also a warm welcome because many of the trainers were the same ones who had helped me a few years earlier.
Q: What are some of the tools from the Get You In Shape program you have used that has helped you? I had never tried the Advocare cleanse or 24 Day Challenges. I had heard people talking about them though. Using the Total Transformation Contest as a kick starter was very helpful. I also logged my food into MyFitnessPal and linked a Fitbit zip to count steps and record exercise into MyFitnessPal. It was eye opening to see what the calorie and nutrient make up was in some of the meals that I had previously eaten thinking they were ‘healthy’.
Q: What do you like most about the Get You In Shape Program? Definitely the people! Both the trainers and the clients make it a positive and welcoming environment to be in. The variety of exercises and work out sets is also great.
Q: How has the Get You In Shape program helped change your life? I never thought I would be getting up at 5:30 a.m. to exercise 5 to 6 days a week, but I have done exactly that every week since January. This consistency has been a new thing for me. With the help of a little Spark I have drastically reduced my coffee consumption and nearly cut out creamers (occasionally I still have a little flavored creamer). These are BIG changes for me!! I am more hopeful about my health and my future years here on Earth. It has totally changed my life’s course from one of gaining weight every month to losing or maintaining my weight.
Q: What is your proudest moment or result from the Get You in Shape program? My regular attendance is one of my proudest moments. The weddings may have ‘scared me’ into coming but it is the great encouraging people that make me want to keep up the routine. I am also excited to be 2 seconds away from breaking the 8 minute mile mark.
Q: How has the Get You In Shape Program impacted other areas of your life? It is much more fun to look for new clothes in sizes that are half what my starting size was! I am more confident to try things than I was before and much more optimistic about the future.
Q: What are the results that you have achieved from the Get You In Shape program and how has Get You In Shape played a role in your results? I have lost 39.75 pounds and 180.5 inches. And it is staying off!!! I also cut 2 minutes and 12 seconds off of my mile time. I can run a 5k in just under 30 minutes now, which I had not been able to do before. I really didn’t want to go out for client of the year, but I did not want to miss out on the opportunity to publicly thank Brad, Cynthia, Julie, Chaney, Meg, Kathy, Lotta and all of the fun clients that I have been encouraged by all throughout this year. I felt so cared for and supported and I received so much assistance in achieving my goal to look better for the 2 weddings. It’s been quite a journey that continues to bless me and my family.
NCAA BK
Columbia 64, Dartmouth 62
HANOVER, N.H. (AP)
Mark Cisco scored 18 points, including a layup with 4 seconds left that gave Columbia a 64-62 victory over Dartmouth in an Ivy League game Friday night.
The Lions (13-8, 2-3) had tied the game on two free throws by Brian Barbour with 49 seconds remaining, after two free throws by Gabas Maldunas had given the Big Green (4-17, 0-5) a 62-60 lead, matching their largest lead of the second half.
Cisco grabbed the rebound of Maldunas' missed layup with 32 seconds left to set up the Lions' final possession.
Six-point leads by Columbia in each half were the largest of the game, and the second half was tied six times.
Barbour added 13 points for the Lions, including 6 of 6 on free throws, and Van Green scored 10 off the bench.
|
Apple is planning to oppose a “Right to Repair” legislation introduced last month in the Nebraska legislature, Motherboard reported Wednesday, citing an unnamed source within the legislature. The bill for the Fair Repair Act is aimed at ending the manufacturers’ aftermarket monopoly, wherein only authorized service providers are allowed to carry out repairs.
However, the right to repair movement, which has also gained cachet in the states of Minnesota, New York, Massachusetts, Kansas, Wyoming, Illinois and Tennessee — has faced vehement opposition not just from Apple, but also from tractor manufacturer John Deere. The company argued in 2015 that allowing people to tinker with their software — even if it’s for the purpose of repair — would “make it possible for pirates, third-party software developers, and less innovative competitors to free-ride off the creativity, unique expression and ingenuity of vehicle software designed by leading vehicle manufacturers.”
When the Nebraska bill is tabled for a hearing on March 9, Apple — which has successfully lobbied against similar bills in other states — is expected to argue, among other things, that allowing customers or independent mechanics to repair their own phones could cause the devices’ lithium-ion batteries to catch fire.
“They should want to give people as much information about how to deal with a hazardous thing as they can,” Gay Gordon-Byrne, executive director of Repair.org — the organization spearheading the right to repair movement, told Motherboard. “If they’re concerned about exploding batteries, put warning labels on them and tell consumers how to replace them safely.”
If enacted, the Nebraska bill — which does not apply to motor vehicle manufacturers and dealers — would force Apple and other electronic equipment manufacturers to not only sell repair parts to consumers and independent repair shops, but to also make diagnostic and service manuals available to the public.
Currently, Apple only allows repairs to be carried out by outlets such as Apple Stores and other businesses that pay the company a fee to become authorized.
“Each original equipment manufacturer of equipment sold or used in this state shall make available for purchase by owners and independent repair providers all diagnostic repair tools incorporating the same diagnostic, repair, and remote communications capabilities that such original equipment manufacturer makes available to its own repair or engineering staff or any authorized repair provider,” the text of the LB67 bill reads.
The bill does not obligate an original equipment manufacturer to divulge a trade secret, or to provide parts that are no longer available.
“Any original equipment manufacturer found in violation of the Fair Repair Act shall be liable to a civil penalty of not more than five hundred dollars for each violation,” the bill reads.
Proponents of the right to repair movement argue that in addition to reducing the cost of repair, allowing consumers and independent technicians to repair their devices could tackle the growing problem of electronic waste (United Nations research shows that in 2014, about 41.8 million metric tons of e-waste was generated globally, of which only 6.5 million metric tons was recycled.)
As the argument goes, if consumers have the option to repair their electronic devices at a reasonable price, they would choose to do so instead of discarding them and buying new ones.
“It's possible to make repairable, long-lasting electronics, but if they did that it could hurt their future sales,” Kyle Wiens, CEO and co-founder of iFixit — a company that sells spare parts for consumer electronics and publishes free online guides — told CNBC in August. “They're putting us on a treadmill where we're forced to buy new gizmos every couple years, whether we want to or not.” |
SnK 85 Thoughts
The true hero of this chapter is Levi’s intolerance of any and all suspense building.
“…But… why… would you choose… me?”
“Personal feelings and your whiny friends.”
“The key doesn’t fit!”
“It’s a fucking wood door. *KRACK*”
“The basement is a nondescript workplace!”
“If you’re a moron.”
“The drawer is empty!”
“…No. FFS.”
I’ve never taken issue with Armin looking so similar to Historia before, because why would you decry such fantastic conspiracy theory material?
I currently take issue.
Throughout all of the in-universe drama that’s gotten us here, I’ve been pretty negative about this story decision. The fact that we have another over-stressed tiny blond being asked to take on a role they aren’t prepared for isn’t something I care about. The looming probability of Armin getting a huge upgrade in time spent on him while his version of this arc plays out mostly makes me wish that Levi loved his boyfriend a little less.
That isn’t to say that fascinating things can’t be done with where we’re at, but Armin getting dragged into the spotlight is nails on a chalkboard to me at this point. Forget other characters having had this arc; he’s had portions of this arc before.
The scene on the wall mimics the scene around the campfire after Armin’s brush with murder. He’s in shock and trauma, a person is dead, and Levi’s around to keep everyone on point.
Jean’s development was being highlighted back then, but that only makes it slightly less frustrating.
One of the complicated things about Armin is that he is not really built for all of this. Armin is part of the Survey Corps because he has a sharp, cynical view of the world, and a stark idealism that pushes him to weaponize that. He wants to change the world.
At the same time, he’s very young, vastly inexperienced in the tools he sees as his trade, and, against all odds, really, really sensitive.
This is a boy who wants to emulate a man that can have his arm ripped off and still think of the mission. A kid whose mind leaps to lies of torture and political horror with all the complexity of a connect-the-dots book. A boy whose reflexes can kill someone before he’s ever emotionally prepared for it.
Armin understands the world he lives in very well.
That has done little to keep bloodshed, violence, and the enormity of carrying on the weight of humanity from scaring the crap out of him.
He’s adapted well, and Eren’s right about him being brave, but his emotional ability to cope still has a very thin margin. His worldview places him closer to the Eren and Mikasa side of average human experience than the Sasha, Jean, Connie edge, but when it comes to stress, his mentality switches to the far end of normal.
The reason I’m bringing this up is because of Shinji Ikari.
…That was put in much prettier terms multiple times.
Shinji Ikari is one of the greatest examples of a fanbase finding the basic spectrum of human emotion infuriating. Does his situation suck? Absolutely. Is any part of it fair? Nope.
Should he still get in the robot?
Duh.
The problem that usually occurs when writers attempt to portray humanity realistically in a world of the fantastical is that… well, it isn’t that pleasant to watch. You have this world, full of things beyond the boundaries of our reality, but then you have these humans, and they’re just as frail as they are here.
I don’t really want to talk about the merits and flaws in that type of writing, so truthfully, I should have kept any mention of Evangelion far, far away from this post.
Oh well.
I don’t mind Shinji (I save my minding for the rest of Evangelion), and I disagree with most of the stereotypical complaints you see about him, however, fan reaction to him is a pretty good example of how I really don’t want to feel about Armin.
In my “I have thought about this for maybe five minutes and that gives me the right to a serious opinion about it,” opinion, part of the irritation with Shinji comes from him not following the script. He reacts like a sane…ish person to everything that happens, and that’s not how the story’s supposed to go.
In a perfect world, the characters, plot, and theme of a story all work together.
General reaction to Evangelion was that Shinji was getting in the way of the story–instead of being the story, which he was.
In Shinji’s case, I don’t agree.
In Armin’s, through no fault of Armin’s, I’m dreading that possibility.
This story has never been Armin’s. He’s been an integral part of it, but he’s the main character’s best friend, not the main character.
What that means is that, given a very relevant piece of what makes Eren the main character, it is a thousand times easier to draw unfavorable comparisons. It’s then made worse because we’ve done this before, and, in all likelihood, with someone who could handle the abominable weirdness slightly better.
Armin’s emotional arc of horror and adjustment is not new to him or the story. A character’s arc response to being a titan is the only reason this story has gotten to this point.
If Armin gets the arc he deserves for being read into the plot this thoroughly, we get a lot of pages of a person’s open, emotional adjustment to non-consensual cannibalism granting you superpowers.
For the series, the idea of focusing on someone in Eren’s situation who is a little less… Eren isn’t a bad one.
But Armin is not the prime example of human normalcy to Eren’s human firestorm. Armin is weird in his own ways, none of which are going to get a chance to shine until he’s allowed to come to terms with the fuckery that he’s been landed in.
And because Armin is Armin, he does need that time.
Also because Armin is Armin, this month’s edition of gaunt, traumatized panels from him is–well, I’d say the first of many, but it’s more the middle of many, since again, we’ve been here before.
In the end, we could find that using Armin as a research subject works way better than using Eren, due to the personalities involved. We could find all sorts of useful things out because now there is an Armin filling a shifter’s position.
For my bitter self, it’s hard to envision that coming without a whole lot of repetitive, unfun material. I guess it’s a way to kill time until the next major arc shift (and something to slide in between plot revelations), but personally, I think it impedes the larger story.
On the other end, if Armin’s response to all of this isn’t given a detailed emotional arc… that’s really not fair to Armin.
You could take all of this going on, use it to paint a beautiful Mikasa arc about watching her friends face burdens she can’t protect them from, giving her glorious discussions with a host of other characters about how she feels, following it up with Ackerman family secrets and the secrets of her mother’s side–
–and fuck it, not detailing Armin’s personal experience would still feel unfair. I would greatly prefer all of that to spending any time at all on Armin dealing with this recycled material, but while Armin’s not the main character, he’s too significant not to spend time on in light of all of this.
I’m not saying the next stage of the story can’t be done well. Everything I’m whining about could turn out to be my favorite content, for all I know.
I’m just saying that wild mundane titan!Kenny would have been more fun.
With that, I think the complaining portion of this is done.
Now we get to–well, unfortunately, I think the parts of the story I’m upset about are the most interesting parts of the chapter to me.
The revelation that there are people outside doesn’t count as a revelation, because see RAB and others. Even the part about them being more technologically advanced isn’t too much of a shock, since it has long been foreshadowed by Annie’s hoodie.
…Look, I’m a really boring person when it comes to plot.
Anyway, I’m partially kidding, but that won’t stop me from pointing at Annie’s hoodie and shouting “IT WAS FORETOLD!” for the sheer fun of it. Except I won’t do that, because even through the internet I can already feel judgmental stares.
There’s still no telling if the society Grisha belongs to has any connection to the society that Zeke and co. live in. Obviously, Zeke and Grisha are connected (I think I saw something about Zeke possibly being his son, picture at the back of the chapter, with titan nonsense leaving them less distantly aged than they might have been, and that’s honestly a pretty reasonable guess), but Grisha has been absent from outside the walls for years, and Zeke has enough issues that I could totally buy him being banished.
Basically, knowing that out there, somewhere, there’s a group of people established enough to take pictures, tells us nothing except that.
Zeke’s group of Warriors don’t seem like they’ve lived any kind of easy life, so for my personal thoughts, I’m torn between them being the first line of defense for these people who live in luxury, something going horribly wrong to collapse this society, and the Warriors and Grisha’s lot being completely separate things.
I think what we know of the basement so far spawns more questions than answers. If people outside the walls can chillax and take pictures, what’s up with the Warriors? If they live close enough that Grisha can run into the Scouts, what are they doing to avoid the problem of mass death that the titans pose on every single scouting expedition? Where do Ymir’s people fit into this?
We’re probably going to get a huge infodump soon. I welcome this, because it makes me feel less guilty about having so little to say about the plot possibilities. At this point, waiting for the manga to explain itself is probably more efficient than making guesses.
(See the above “boring” statement, and bask in its truth.)
We’ve also still got the greater response from all of the top brass regarding how there are nine people left to represent an entire military branch. The news that Wall Maria is now sealed buys them some good will, but the understanding of how simple it is for their enemies to retake it, and how many lives were lost for a (at this point) symbolic victory is going to create some unavoidable tension. What we’ve seen of Grisha’s basement so far also isn’t enough information to really justify what’s happened.
Hopefully, they’ll keep the good opinion of the people, but that’s about the only place where any variation of the word hope can be used. You can’t fight a war with nine people.
I really loved Hange this chapter, because you’ve got to appreciate how nothing in her life is going remotely right currently. Her whole squad is dead, her closest friend alive decided to add their commanding officer to that list, placing her in command in the darkest hour their branch has ever had, and her interest in continuing the good fight has done nothing to deny her awareness of how totally unfair all of this has been.
She’ll make an excellent commander, but it’s a shame that she has to be.
Eren and Mikasa heading home also hurts in magnificent ways. Carla’s shoe is still there, and the wreck Bertolt made of their house is untouched outside of the growing grass. Like everything to do with Shiganshina in this arc, it’s quiet and perfect.
I also enjoyed them opening up Dr. Yeager’s book together. Eren’s personal development came up before this arc truly got underway, but seeing both of them back at home, cooperating, is fantastic. The last time they were both under this roof, Mikasa was ratting Eren out for wanting to be a Scout, and he was snapping at her for it. Now, they’re on the same page.
Literally, they’re both touching it.
Also, do we have confirmation on the narrator, or are we going with the idea that the narrator switches depending on convenience?
I’m fine with either, but Eren’s definitely the narrator for the end of this chapter, so for once, no points to you, anime. Except for highlighting Armin’s significance before it was cool.
Next month: Can has letter?
|
Q:
How to insert custom field in Joomla 1.6 article editor?
I want to insert a custom field in the Article edit page in the administration area of Joomla 1.6. Please see screenshot.
http://screencast.com/t/vtLEBdUK
I have tried to edit myjoomlasite/administrator/components/com_content/models/forms/article.xml.
I can introduce a field in the article options fieldset, but not in the main edit area.
A:
To insert a custom field into the Article edit page in the administration area, you need to change two files.
myjoomlasite/administrator/components/com_content/models/forms/article.xml
Add your field definition, like the code below:
<field name="name" type="text" label="JGLOBAL_NAME"
description="JFIELD_NAME_DESC" class="inputbox" size="30"
required="true" />
myjoomlasite/administrator/components/com_content/views/article/tmpl/edit.php
Add your label and input field:
<?php echo $this->form->getLabel('name'); ?>
<?php echo $this->form->getInput('name'); ?>
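As context for where that <field> declaration goes, here is an illustrative sketch of the article.xml structure. This is not the literal file contents — the real article.xml in a Joomla 1.6 install declares many more fields, and its fieldset layout can differ between releases — so treat everything except the added field as a placeholder:

```xml
<!-- myjoomlasite/administrator/components/com_content/models/forms/article.xml -->
<!-- Sketch only: the real file declares many more fields; the "name"
     field is the single addition from the answer above. -->
<form>
    <fieldset>
        <!-- ... existing core article fields (title, alias, ...) ... -->

        <field name="name" type="text"
               label="JGLOBAL_NAME"
               description="JFIELD_NAME_DESC"
               class="inputbox" size="30"
               required="true" />
    </fieldset>
</form>
```

Bear in mind that edits to core files like this are lost whenever Joomla is updated, which is a practical reason to prefer an extension-based approach.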
A:
I wouldn't recommend modifying the core files to achieve what you want.
You could use one of Joomla's CCK (content construction kits) to create your content templates.
Best free CCKs available for Joomla:
Form2Content (my favorite)
K2 (most popular, highly recommended)
You can find more in Joomla Extension Directory
|
(a) Field of the Invention
The present invention relates to a pen with a storage function, and more particularly to a pen which is provided with a USB (Universal Serial Bus) flash disk and a solar power flashing LCD (Liquid Crystal Display), such that the pen can freely store digital data and can flash to display prints at any time.
(b) Description of the Prior Art
In the past, if a computer user needed to carry and transfer data to another computer for work, the data was usually stored on a diskette or a CD (Compact Disc) and then loaded into the other computer for accessing. However, as computer technology developed, the USB (Universal Serial Bus) flash disk appeared on the market to replace the aforementioned storage method. As the USB flash disk uses a plug-and-play USB interface, it provides a large amount of memory space, and in the meantime its access speed is much faster than that of the old floppy disk drive. Therefore, the user only needs to plug the USB connector of the USB flash disk into a USB socket of the computer to access and store the data.
However, the conventional USB flash disk provides only the single function of storing data, and it is small in size. Therefore, when the USB flash disk is not in use, a separate place must be found to keep it carefully; otherwise, personal data can easily leak out, which results in a severe outcome. On the other hand, as a book desk, an office desk, or a carry bag also contains pens and other stationery, adding in a USB flash disk makes them messy and hard to arrange, which can easily result in an unnecessary loss. Accordingly, how to design and develop an ordinary pen which combines the USB flash disk and adds a function of flashingly displaying commercial prints to prompt the user, decrease the chance of losing data, facilitate the arrangement of stationery, and increase the functions of that pen, is an issue which needs to be resolved by related vendors. |
The most pressing issue in Canadian tax policy today is that people don't like paying taxes, and it is increasingly hard to persuade them to do so.
The heavy duty engines of tax revenue generation are the personal income tax and the federal and provincial sales taxes. Federally, personal income taxes raise about half of all tax revenues. Provincial sales taxes raise considerably more revenue than provincial corporate income taxes. Yet these bases are eroding.
The most reliable payers of income taxes are middle-class salaried employees. It's hard to get the poor to pay taxes, because they don't have much money, and it's hard to get the rich to pay taxes, because they are mobile, and can take advantage of tax planning opportunities. But income inequality in Canada has risen, and relatively more income - thus relatively more of Canada's tax base - is in the hands of the elusive 1%. This income concentration partly reflects an increase in the share of Canada's GDP going to capital - and capital income is much harder to tax than labour income.
It's also hard to get the self-employed to pay taxes, because they can write off home office and other everyday expenses against income. And the number of self-employed has been growing more quickly than the labour force as a whole.
Back in the day some foresighted person in the federal government came up with a brilliant plan to ease the budget crunch that would inevitably accompany population aging: the Registered Retirement Savings Plan. Basically RRSPs allow people to defer their tax liabilities until the time that they retire - and start costing the government money. The money that the baby-boomers have sitting in RRSPs should be a revenue bonanza for federal and provincial governments - except that this is what happens when someone tries to remove a nest egg:
(For more on this, see my post RRSPs: brilliant economics, lousy psychology).
Just as changes in the way that people earn income threaten to erode the income tax base, changes in the way that people consume present challenges for consumption or sales taxes. Cross-border shopping and on-line shopping are tractable issues as long as physical goods are changing hands - with appropriate enforcement, taxes could be collected. The real challenge is the world of virtual commodities. With the right browser extension, one's computer can appear to be in the US. Then what prevents one from buying virtual goods like games or apps or books at US rates, paying US tax, completely by-passing the Canadian tax system? (something might, I honestly have no idea).
The world of virtual commodities poses another more subtle, yet I think even greater, challenge to Canadian tax policy. Ultimately, people work so that they can afford to buy stuff. When things that people care about - music, books, movies, friends - are available for free on-line, people don't need to work so hard (the price of housing is a solid objection to this argument). Still, if labour elasticities are increasing, governments will find it hard to resolve their fiscal crises by raising taxes on workers.
I haven't said much about capital taxation, not because it doesn't matter, but because it's so complicated. The OECD's base erosion and profit shifting project (BEPS - for a Canadian take on the BEPS, see this Canadian Tax Journal symposium) is one of the more interesting developments here. Corporate tax design matters for investment, and for economic efficiency. But federally, corporate income taxes only account for one out of every seven tax dollars, and much of that revenue comes from small businesses, rather than multinationals. From a revenue point of view, there's just not that much at stake in the corporate income tax world.
This post is written in response to a query from a friend, who has no doubt read to this point and thought "nothing on income splitting? really?" I see income splitting as a manifestation of people's desire to pay lower taxes. The self-employed have opportunities to split income by employing spouses or other family members; the salaried want in on the deal. There is, perhaps, a socially conservative faction of the Conservative Party of Canada that has worked out that income splitting would create incentives for mothers to stay at home full-time, and figures this is a good thing.
On income splitting, my position has always been that the best way to give money to families with children is to give money to families with children. It's that simple. |
683 F.2d 931
216 U.S.P.Q. 568
HOLIDAY INNS, INC., Plaintiff-Appellee, Cross-Appellant, v. C. H. Alberding, Defendant, Airport Holiday Corporation, et al., Defendants-Appellants, Cross-Appellees.
No. 81-1539
Summary Calendar.
United States Court of Appeals, Fifth Circuit.
Aug. 26, 1982.
Blankenship, Potts, Aikman, Hagin & Stewart, Howard Jensen, Dallas, Tex., Whitten, Haag, Cobb & Hacker, C. Terry Hagin, Abilene, Tex., for defendants-appellants, cross-appellees.
James L. Kurtz, Washington, D. C., E. Eldridge Goins, Jr., Dallas, Tex., for plaintiff-appellee cross-appellant.
Appeals from the United States District Court for the Northern District of Texas.
Before BROWN, POLITZ and WILLIAMS, Circuit Judges.
JERRE S. WILLIAMS, Circuit Judge:
1
Appellee and cross-appellant Holiday Inns, Inc. supervises a chain of over 1700 hotels. Many of these hotels operate under a franchise agreement by which the owner is entitled to display the familiar "Holiday Inn" sign (referred to as the "Great Sign") and other distinctive emblems of the chain. In 1956, Holiday Inn signed a license agreement with Tex-Mex Inn Operating Co. (Tex-Mex) that permitted Tex-Mex to operate a 200-room hotel located at 7800 Lemmon Avenue in Dallas, Texas, as a Holiday Inn. The agreement provided for termination of the license, however, if Tex-Mex did not meet Holiday Inns' standards. On February 28, 1975, Holiday Inns wrote Tex-Mex a letter terminating the agreement and requesting that Tex-Mex remove all Holiday Inn service marks from the property. Despite this and subsequent demands for removal of the trademarks, the hotel continued to display the Great Sign1 and a small sign bearing the mark "Holiday Inn" in distinctive script.
2
Holiday Inns brought this action in May, 1976, alleging trademark infringement under state and federal law2 and seeking a permanent injunction, treble damages, and attorneys' fees. Although the original complaint named C. H. Alberding as owner and operator of the hotel, which by that time was operating as the "Holiday Hotel", Alberding's answer revealed that Tex-Mex was the actual operator of the facility and that appellant/cross-appellee Airport Holiday Corp. (Airport) was the owner. An amended complaint joined Tex-Mex and Airport as defendants; Alberding, who served as president and director of both defendant corporations, eventually was dismissed from the suit.
3
Holiday Inns moved for partial summary judgment on liability in July, 1977. Two affidavits, photographs of the offending signs, and a memorandum of law accompanied the motion. Tex-Mex and Airport, after filing separate counterclaims for harassment, responded in August of 1978 with a joint memorandum and affidavit in opposition to the motion for summary judgment. The district court heard argument on the motion on December 4, 1978. On December 19, the court granted partial summary judgment against Tex-Mex and Airport on the issue of liability. It found that the defendants had presented no issue of material fact and that
4
(t)he "Great Sign" being used by defendants at the time of the hearing is essentially identical to the registered marks of the Plaintiff. This sign is likely to cause confusion, mistake or deception with Plaintiff's registered service marks. The minor changes made by the Defendants are cosmetic in nature and inconsequential. Surely such inconsequential changes will not avoid a finding of infringement and unfair competition.
5
The judgment enjoined Tex-Mex and Airport from any further use of the Great Sign or other Holiday Inn service marks.
6
The parties proceeded to a bench trial on damages in October, 1979. In a judgment entered on October 19, 1981, the court found Tex-Mex and Airport jointly and severally liable for $96,795.00, plus interest, in profits and damages, and $35,000.00 in attorney's fees.
7
Airport has appealed from this judgment on the issues of liability and damages; Holiday Inns has cross-appealed, submitting that the damage award is insufficient. We affirm.
I. Airport's Appeal
8
Airport presents two questions on appeal. First, it argues that the district court erred in granting summary judgment against Airport on liability because the record contains no evidence that Airport-as opposed to Tex-Mex-ever "used" the trademarks in violation of the Lanham Act, 15 U.S.C. § 1114(1).3 As stated by Airport, "the legal issue is whether mere ownership of real estate upon which an infringement of a trademark takes place is sufficient to assess liability on the owner."
9
The district court must render summary judgment "if the pleadings, depositions, answers to interrogatories, and admissions on file, together with the affidavits, if any, show that there is no genuine issue as to any material fact and that the moving party is entitled to a judgment as a matter of law." Fed.R.Civ.P. 56(c). The defendants' joint response to Holiday Inns' motion for partial summary judgment raised no genuine issue as to any material fact. As Airport admits, the question presented on appeal is purely a legal one. However, we decline to consider even this issue of law because we conclude that Airport did not properly present it in the court below. We realize that our rule against considering issues raised for the first time on appeal can give way when a pure question of law is involved and the refusal to consider it will result in a miscarriage of justice. See Martinez v. Matthews, 544 F.2d 1233, 1237 (5th Cir. 1976). However, we perceive no possibility of such a miscarriage here.
10
The only legal argument pressed by Airport individually in the joint responsive motion of defendants below was that Airport had purchased the Great Sign "with the knowledge of and acquiescence of Plaintiff and without any restriction being placed thereon by Plaintiff against Airport Holiday Corporation, and the Plaintiff has waived or is estopped to claim any trademark violation against Airport Holiday Corporation." This brief "waiver and estoppel" argument, apparently predicated upon a theory that Holiday Inns had a duty expressly to forbid Airport from using its trademarks, did not deter the district court from holding Airport liable, as undisputed owner of the Great Sign and the rest of the Holiday Hotel property, for the infringement.
11
Airport now seeks to obtain a reversal of the court's judgment by suggesting that a mere owner of real estate is not chargeable with trademark infringements or other torts committed by another party "operating" on the owner's property. As sole support for this allegation of error, Airport cites Kinnear-Weed Corp. v. Humble Oil & Refining Co., 324 F.Supp. 1371, 1381 (S.D.Tex.1969), aff'd, 441 F.2d 631 (5th Cir. 1971), cert. denied, 404 U.S. 941, 92 S.Ct. 285, 30 L.Ed.2d 255 (1971), in which the court held that "a 'non-operator,' as simply the owner of an interest in realty (the leasehold estate)" in a drilling venture was not responsible for the drilling contractor's infringement of the patent on a drilling bit.
12
Apart from obvious dissimilarities between the facts of Kinnear-Weed and those of the case before us,4 our rejection of this appeal stands upon Airport's failure to present this authority or any legal argument derived from it to the court below. Although the defendants took over a year to reply to the motion for summary judgment, neither in that response nor at any other time did Airport attempt to forestall an adverse decision by providing the district judge with authority or argument in the vein now sought to be explored. Our discretion to entertain a legal argument presented for the first time on appeal "is a right to prevent a clear miscarriage of justice apparent from the record, and not a right to afford a defeated litigant another day in court because he thinks that if he were given the opportunity to try his case again upon a different theory he might prevail." Miller v. Avirom, 384 F.2d 319, 323 (D.C.Cir.1967) (quoting Helvering v. Rubinstein, 124 F.2d 969, 972 (8th Cir. 1942)). That Airport's appeal lies from a summary judgment does not diminish the importance of this insistence upon thorough presentation of issues in the district court.
13
Similarly, Airport's second point of error falls victim to Airport's want of caution in the proceedings below. The court's award of $96,795.00 against Airport and Tex-Mex, jointly and severally, included $11,464.50 in profits obtained from the infringement. The court derived this figure by computing thirty percent5 of $38,215.00, the amount to which the parties stipulated as "profit of the Defendants" during the period of the infringement. Airport now claims on appeal that Holiday Inns is not entitled to recover profits from Airport because it never proved that Airport actually received these profits.
14
Yet the record clearly shows that Airport, acting with Tex-Mex through a single attorney, signed a pretrial stipulation on October 9, 1979, conceding that "(d)uring the period of March 1, 1975 to January 1, 1979, the profits of the Defendants were $38,215.00." In fact, although Airport complains of the trial court's "confusion" in treating Airport and Tex-Mex identically instead of as separate parties, the entire stipulation refers to "the Defendants" as one party. It even describes the steps taken by "the Defendants" to modify the Great Sign. At no time before or during the trial on damages, so far as the record shows, did Airport claim that it had not received profits earned by the hotel that it admittedly owned.
15
As Holiday Inn points out, Section 35 of the Lanham Act, 15 U.S.C. § 1117, provides that "(i)n assessing profits the plaintiff shall be required to prove defendant's sales only; defendant must prove all elements of cost or deduction claimed." In any event, Airport stipulated its profits as "a defendant" and is bound by that agreement. E.g. Morelock v. NCR Corp., 586 F.2d 1096, 1107 (6th Cir. 1978), cert. denied, 441 U.S. 906, 99 S.Ct. 1995, 60 L.Ed.2d 375 (1979); A. Duda & Sons Cooperative Association v. United States, 504 F.2d 970, 975 (5th Cir. 1974).
II. Holiday Inns' Cross-Appeal
16
Holiday Inns contends that the district court erred in limiting recovery to thirty percent of defendants' profits and damages for a total of $32,265.00, an amount later trebled pursuant to 15 U.S.C. § 1117. Having reviewed the trial judge's reasoning, however, we conclude that he acted well within the bounds of his discretion. The court could properly accept the evidence offered by the defendants, which was to the effect that only thirty percent of the hotels' business during the period of infringement was attributable to the improper use of plaintiff's service marks. The court acted reasonably, therefore, in reducing the damage figure to that percentage of the total profits and lost royalties. The court used its discretion in plaintiff's favor, after all, by trebling the thirty percent figure.
17
"Great latitude is given the trial judge in awarding damages, and his judgment will not be set aside unless the award is clearly inadequate." Drake v. E. I. DuPont deNemours & Co., 432 F.2d 276, 279 (5th Cir. 1970). This is especially true of an award fashioned pursuant to the Lanham Act, which expressly confers upon district judges wide discretion in determining a just amount of recovery for trademark infringement. See 15 U.S.C. § 1117.
18
The judgment below is AFFIRMED in all respects.
1
The Great Sign was displayed in unaltered form until June 5, 1975. At that time, Tex-Mex paid for removal of certain distinctive features from the sign, including the words "Holiday Inn." The color and shape remained generally the same, and the sign was illuminated at night. The modified sign remained on display until after the granting of an injunction in December, 1978, at which time it was covered with plastic.
2
The district court, 493 F.Supp. 1625, ultimately based its jurisdiction on 28 U.S.C. § 1338 and determined liability under the Lanham Act, 15 U.S.C. § 1051 et seq.
3
The Act imposes civil liability on any person who, without consent of the registrant, shall "use in commerce any reproduction, counterfeit copy, or colorable imitation of a registered mark in connection with the sale, offering for sale, distribution, or advertising of any goods or services on or in connection with which such use is likely to cause confusion, or to cause mistake, or to deceive."
4
The portion of Kinnear-Weed cited above was decided under Texas precedents pertaining to oil drilling ventures. The court was uncertain that any infringement of the drilling bit patent had occurred. More determinative, however, was that the exonerated "non-operator" was merely an investor and did not own, use, or have any responsibility for the drilling equipment. Airport, by contrast, admits to outright ownership of the Great Sign and the other motel properties found to have infringed plaintiff's trademark.
5
As explained in part II, infra, the trial judge reasoned that only thirty percent of the hotel's business during the period in question was attributable to the infringement.
The Most Important Eczema Info You Need To Know About
Those who suffer from eczema will find that life is unpredictable. One day your skin feels great, and then the next day a flareup can occur. It could be months before it goes away. Below are a few interesting techniques you can use to reduce the possibility of a new outbreak.
Avoid becoming overheated. Excess sweat can trigger eczema flare-ups. If you do work out, take a shower afterwards. In fact, shower after any bout of strenuous activity, which could include things like gardening or heavy housework. Keeping your skin clean will help to keep you comfortable and your eczema flare-ups at bay.
Avoid stress. Stress can increase the intensity of eczema flare-ups. While it is true that eczema itself can stress you out, try not to let it. Practice relaxation methods like yoga, meditation, and deep breathing exercises. Staying calm is your best defense when it comes to successfully battling your eczema.
To reduce eczema flare-ups, there are some basic bathing rules you can follow. Use room temperature water in your tub or shower. Hot water can cause eczema flare-ups. Don’t scrub your skin. Use a gentle soap alternative instead of soap itself. Pat your skin dry, and liberally apply moisturizer when you are done bathing.
Keep your hands protected. Wear rubber gloves while washing dishes or performing another activity in which your hands are submersed in water. For further protection, wear cotton gloves underneath the rubber ones to reduce sweat and irritation. Use the cotton gloves while performing other activities, such as gardening and housework.
Always make sure your fingernails are clean and short. Even if you are already aware that scratching is bad, you may still find yourself scratching in your sleep. Short nails will reduce the irritation that you experience. Be sure to also clean under your nails regularly.
Try to sweat less if you want to make sure your eczema doesn’t flare up. Lots of sweating or getting overheated can aggravate eczema symptoms. If you’re an active person, it’s important to cool down the minute you finish any physical activity. For instance, a quick shower will help.
If you live in an area that experiences cold weather in the winter, buy a humidifier to help decrease eczema flare-ups. During the cold winter months, we close all of our windows and turn on the furnace. This can make the air inside of a house very dry which makes the itching and dry skin associated with eczema even worse. To replace moisture in your internal environment, use a humidifier. This added moisture will keep your skin from becoming dry, cracked, itchy and irritated.
Some things trigger symptoms of eczema, so it’s helpful to pinpoint what those triggers are. Your eczema could be triggered by detergent, soap or even perfume that you may wear. Sweating and getting stressed out can trigger this type of thing as well. Once you know what your triggers are, you can make a plan to stay away from them.
As you can see, many people deal with eczema for their entire lives. That is why it is so important to follow these great tips. Not only will the tips reduce the discomfort from current breakouts, but they can also prevent new outbreaks. Take the tips learned here and use them to get started on an effective eczema treatment.
/*
-------------------------------------------------------------------------------
This file is part of OgreKit.
http://gamekit.googlecode.com/
Copyright (c) 2006-2013 Charlie C.
Contributor(s): Thomas Trocha(dertom)
-------------------------------------------------------------------------------
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
-------------------------------------------------------------------------------
*/
#ifndef _gkLogicManager_h_
#define _gkLogicManager_h_
#include "gkCommon.h"
#include "gkMathUtils.h"
#include "utSingleton.h"
class gkLogicBrick;
class gkLogicSensor;
class gkLogicController;
class gkLogicActuator;
class gkLogicLink;
class gkAbstractDispatcher;
enum gkDispatchedTypes
{
DIS_CONSTANT = 0,
DIS_KEY,
DIS_MOUSE,
DIS_COLLISION,
DIS_JOY,
DIS_MAX,
};
class gkLogicManager
{
public:
typedef utListClass<gkLogicLink> Links;
typedef utListIterator<Links> LinkIterator;
typedef gkAbstractDispatcher* gkAbstractDispatcherPtr;
typedef utArray<gkLogicBrick*> Bricks;
typedef utHashSet<gkLogicBrick*> BrickSet;
typedef utList<gkLogicActuator*> TickActuators;
typedef utList<gkLogicManager*> LogicManagerList;
protected:
static LogicManagerList* m_logicManagers;
Links m_links;
gkAbstractDispatcherPtr* m_dispatchers;
Bricks m_cin, m_ain, m_aout; // Temporary open or closed links
bool m_sort;
BrickSet m_updateBricks;
TickActuators m_tickActuators; // actuators that get processed by the controller.
// This list makes it possible to set the actuator-state to false and only change to true if needed
void push(gkLogicBrick* a, gkLogicBrick* b, Bricks& in, bool stateValue);
void clearActuators(void);
void clearActive(gkLogicLink* link);
void sort(void);
public:
gkLogicManager();
~gkLogicManager();
gkLogicLink* createLink(void);
void destroy(gkLogicLink* link);
GK_INLINE Links& getLinks(void) {return m_links;}
void notifySceneInstanceDestroyed(void);
void notifyLinkInstanceDestroyed(gkLogicLink* link);
///Notifies the manager that a state change has taken place.
void notifyState(unsigned int state, gkLogicLink* link);
///Notifies the manager to sort logic bricks based on their priority.
void notifySort(void);
///Frees all created links and resets dispatchers
void clear(void);
void update(gkScalar delta);
GK_INLINE gkAbstractDispatcher& getDispatcher(int dt) { GK_ASSERT(m_dispatchers && dt >= 0 && dt < DIS_MAX); return *m_dispatchers[dt]; }
///Tells the manager a link from a sensor to controller has been opened or closed.
void push(gkLogicSensor* s, gkLogicController* v, bool stateValue);
///Tells the manager a link from a controller to an actuator has been opened or closed.
void push(gkLogicController* c, gkLogicActuator* v, bool stateValue);
void GK_INLINE requestUpdate(gkLogicBrick* b) { if (b) m_updateBricks.insert(b); }
void GK_INLINE removeUpdate(gkLogicBrick* b) { if (b) m_updateBricks.erase(b); }
static void deleteManagers(void);
UT_DECLARE_SINGLETON(gkLogicManager)
};
#endif//_gkLogicManager_h_
Last active Oct 12, 2018
Project-specific lint rules with ESLint
A quick introduction
First there was JSLint, and there was much rejoicing. The odd little language called JavaScript finally had some static code analysis tooling to go with its many quirks and surprising edge cases. But people gradually became annoyed with having to lint their code according to the rules dictated by Douglas Crockford, instead of their own.
So JSLint got forked into JSHint, and there was much rejoicing. You could set it up to only complain about the things you didn't want to allow in your project, and shut up about the rest. JSHint has been the de-facto standard JavaScript linter for a long while, and continues to be. Yet there will always be things your linter could check for you, but doesn't: your team has agreed on some convention that makes sense for them, but JSHint doesn't have an option to enforce it. You could submit a pull request for each such option, but eventually they'll be rejected as too specific/generic/weird. And that makes sense; you really can't bundle every lint rule anyone ever thought up into a single tool.
When you think about the audience for these tools, it's actually kind of silly they need to be configured using simple, pre-set on/off switches. If you're configuring a static code analysis tool for your project, you very likely know how to write some code as well. So the next step in the evolution of JavaScript linters seems obvious: exposing an easy mechanism for running custom lint code, allowing you to check for whatever you want. No need to be tied to what the upstream decides (not) to support.
Enter ESLint
As is usually the case with good ideas, someone else already came up with it. ESLint (by the great @nzakas & friends) is a recent alternative to JSHint, with a very flexible architecture: every lint rule is an independent, pluggable module, and more can be added at runtime. The project also ships with a growing set of default rules, which you can either selectively use or completely opt-out of. (This is in fact also where the JSHint project is going, but AFAIK it'll be a while til it's done). Install it with (for example):
$ npm install -g eslint
A word of warning, though: while the ESLint project is aiming for feature-parity with JSHint, it's not there yet. Also, there's currently only a "pre-alpha" version available, so it's not exactly production-ready. That said, having used & hacked at it a bit it seems quite ready to be added to your toolchain, perhaps to fill in the gaps left by JSHint; remember, you can always run both tools side-by-side and only enable specific rules from ESLint.
Adding custom rules
I came across ESLint while shopping around for a JavaScript style checker, which JSHint decidedly isn't. It's not really the core mission of ESLint either, but its architecture makes it remarkably easy to use it as one. To demonstrate, let's decide we want to enforce if statements that have the curly brace on the same line as the condition, preceded by a single space (ahh, the arguments we've all had about this!).
ESLint uses Esprima for parsing the JavaScript source and producing its Abstract Syntax Tree (AST). It then allows lint rule modules to register interest in specific types of nodes in this tree, and then make their assertions. The type of node we're interested in is IfStatement. To see what we'll be operating on, go to the interactive Esprima demo and paste in:
if (true)
{
console.log('yep...');
}
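In the demo you should see a tree whose relevant part is an `IfStatement` node, roughly like the following (abridged; Esprima also attaches range and line/column information to every node):

```javascript
// Abridged ESTree-shaped output Esprima produces for the snippet above
// (location/range fields omitted for brevity):
var ast = {
    type: "Program",
    body: [{
        type: "IfStatement",
        test: { type: "Literal", value: true, raw: "true" },
        consequent: {
            type: "BlockStatement",
            body: [/* the console.log('yep...') ExpressionStatement */]
        },
        alternate: null
    }]
};
```

Note the `test` subnode holding the condition — the article leans on it below.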
You should also write this into a file called sample-file.js so we can test our new lint rule against it.
To register a lint rule that enforces the aforementioned convention, put the following into eslint-rules/if-curly-formatting.js:
As ESLint traverses the AST of the source file, the inner function we defined will be invoked for each IfStatement encountered. If you were to console.log(node), you'd see the AST information about the subtree we're currently visiting. That alone can be enough to make certain kinds of assertions, but invoking context.getSource(node) will additionally give us the corresponding source code in the original file.
But the source string for the complete IfStatement contains lots of unnecessary things for our simple assertion (the entire conditional code block, for example). Luckily, each IfStatement node also has a test subnode representing the condition being tested (in our sample file just true). We can then use context.getSource() with additional arguments, telling it to give us the source for the test node and the 3 characters that immediately followed that node in the original source. In a compliant case, that would be something like "true) {". Now it's a simple matter of whipping up a regular expression that ONLY matches the allowed case.
Confused? Don't worry. Just do some console.log()s within your linter function. The pieces are all there, for you to do whatever with!
Including our new rule
We still need to tell ESLint we want to enforce our newly created rule. Create a eslint-config.json with:
{
"rules": {
"if-curly-formatting": 1
}
}
The format of this file is explained here and the available built-in rules are listed here. You can turn other rules on/off from here as well. Once done, run ESLint against the sample file, pointing it at your config and custom rules directory:
$ eslint -c eslint-config.json --rulesdir eslint-rules/ sample-file.js
You should see your custom rule complain about if formatting! (You'll likely also see a few other built-in warnings about irresponsible use of console.log etc, which you can turn off from eslint-config.json if you don't like them.)
Grunt integration
Many JavaScript projects use Grunt for build automation these days, and ESLint integrates very painlessly to your build process. If you've installed it via npm, it's a three-liner in your Gruntfile.js:
Nice overview. Another thing that is available and not quite documented yet is that you can specify a .eslintrc file (similar to what JSHint has) and avoid needing to pass in your config file on the command line.
We are just a couple rules away from a true alpha (v0.1.0), at which point we'll be looking for a lot of people to kick the tires and give us feedback.
I have created a custom rule and a simple file (example.js) which contains a test case. How can I test only my newly created rule against this file (example.js), and how do I display the console.log messages in the CLI?
I have a .eslintrc file as well, bringing in the airbnb/legacy linting rules. When I have this I get errors: the definition for the rule cannot be found. I think this might be because the lookup happens in the airbnb config. Any ideas?
/* Copyright 2015 Chris Zieba <zieba.chris@gmail.com>
This program is free software: you can redistribute it and/or modify it under the terms of the GNU
Affero General Public License as published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU Affero General Public License for more details. You should have received a
copy of the GNU Affero General Public License along with this program. If not, see
<http://www.gnu.org/licenses/>.
*/
var graphviz = require('graphviz'),
fs = require('fs'),
utils = require('./utils'),
sanitizor = require('../lib/validation/sanitizor'),
validator = require('../lib/validation/validator'),
interview = require('./interviews/interview'),
process = require('./interviews/process'), // note: shadows Node's global `process` within this module
helpers = require('./helpers'),
models = require('../models/models'),
// stores all the information about the client, including the variables in the interview, and sockets (1 or 2 depending if editing)
connected_clients = {};
// the app parameter is used for retrieving vars via app.get()
exports.listen = function (server, sessionStore, app) {
"use strict";
// Session middleware
var getSession = function(cookie, done) {
var parsed = utils.parseCookie(cookie);
if (!parsed) {
return done(new Error('Could not parse cookie'));
}
if (!parsed.hasOwnProperty('connect.sid')) {
return done(new Error('Missing cookie'));
}
var sessionID = parsed['connect.sid'].split('.')[0].substring(2);
sessionStore.get(sessionID, function (err, session) {
if (err || !session) {
return done(new Error('Session was not found'));
}
return done(null, session);
});
};
var opts = {
"close timeout": app.get('socket_close_timeout'),
"log level": app.get('socket_log_level'),
"match origin protocol": app.get('socket_match_origin_protocol'),
"sync disconnect on unload": app.get('socket_sync_disconnect_on_unload'),
"transports": app.get('socket_transports'),
"flash policy port": app.get('socket_flash_policy_port')
};
if (app.get('socket_browser_client_minification')) {
opts['browser client minification'] = true;
}
if (app.get('socket_browser_client_etag')) {
opts['browser client etag'] = true;
}
if (app.get('socket_browser_client_gzip')) {
opts['browser client gzip'] = true;
}
var io = require('socket.io').listen(server);
// This is what runs on an incoming socket request
// If there is already a session established, accept the socket, otherwise deny it
io.set('authorization', function (data, accept) {
// Check if the person connecting is logged in.
// If they are, store their user id and check it again when they are trying to save the interview
if (data.headers.cookie) {
data.cookie = utils.parseCookie(data.headers.cookie);
if (data.cookie) {
if (data.cookie['connect.sid']) {
data.sessionID = data.cookie['connect.sid'].split('.')[0].substring(2);
// Create a new connection to the LogicPull database, so we can compare the _id field to the cookie sid field
// these must match in order for a connection to go through
sessionStore.get(data.sessionID, function (err, session) {
// these are urls we don't need to authorize a logged in user for the /interviews
var no_auth_required = new RegExp(app.get('base_url') + '/interviews/', 'g');
if (err || !session) {
// Don't accept the socket request if the user is not logged in
if (!no_auth_required.test(data.headers.referer)) {
// If we get here the URL is from inside the manager
return accept('Session not found in database', false);
}
console.log(' debug - ' + 'no socket authorization needed');
return accept(null, true);
}
// save the session data and accept the connection if the user is logged in
if (session.user && session.user.authenticated) {
data.session = session;
data.session.url = data.headers.referer;
return accept(null, true);
}
if (!no_auth_required.test(data.headers.referer)) {
return accept('User is not authenticated', false);
}
console.log(' debug - ' + 'no socket authorization needed');
return accept(null, true);
});
} else {
return accept('No cookie transmitted.', false);
}
} else {
return accept('No cookie transmitted.', false);
}
} else {
return accept('No cookie transmitted.', false);
}
});
// Client is the socket
io.sockets.on('connection', function (client) {
// Keep track of all the connected_clients
var id = client.id;
// Emit `data` to this client's socket and, if the client is paired with an editor,
// to the editor's socket as well. `type` is the kind of message, e.g. 'question' or 'srv_error'.
function emitData (id, type, data) {
connected_clients[id].socket.emit(type, data);
// if this socket has an editor id set, than check if any other socket shares that id, and send the data to that socket as well
if (connected_clients[id].editor) {
// go through each socket to see if we can get the editor socket
for (var socket_id in connected_clients) {
if (connected_clients.hasOwnProperty(socket_id)) {
// check that we are not checking the same socket
if (socket_id !== id) {
if (connected_clients[socket_id].editor) {
// check if they match
if (connected_clients[socket_id].editor === connected_clients[id].editor) {
connected_clients[socket_id].socket.emit(type, data);
break;
}
}
}
}
}
}
}
// if the session is NOT already open, create the new object
if (!connected_clients.hasOwnProperty(client.id)) {
connected_clients[id] = {
editor:null,
socket: client,
data: {
interview: {},
//progress: [],
// this is a map of what each question is in terms of distance from the end
distance: {},
// master will now contain the variables from each question
master: {},
deliverables: {},
client: null
}
};
}
// after the editor client connects, send an id
client.on('editor_id', function (editor_id) {
connected_clients[id].editor = editor_id;
});
// When the viewer starts
client.on('start', function (data) {
var run;
var progress;
var send_data;
var preview = data.preview;
var start = data.start;
getSession(client.handshake.headers.cookie, function(err, session) {
if (err) {
console.log(err);
throw err;
}
models.Interviews.findOne({id: data.interview_id}, function (err, doc) {
if (err) {
console.log(err);
throw err;
}
if (!doc) {
emitData(id, 'srv_error', { id: null, error: interview.error("The interview could not be found.").error.content, valid: false });
return;
}
var qid = doc.start;
models.Counters.findOne({}, function (err, counter) {
// Get the current count from the database and increment by to get the next interview
var tmp_count = counter.tmp_count + 1;
var state_id = counter.state_count + 1 + '-' + tmp_count;
var tmp_progress = [];
var tmp_history = [];
var tmp = new models.Tmps();
var state = new models.States();
// update the counter right away
models.Counters.update({
tmp_count: tmp_count,
state_count: counter.state_count + 1
}, function (err) {
if (err) {
console.log(err);
throw err;
}
// check if a value was passed as the start (preview)
if (start) {
// check to make sure the start passed is actually used in the interview
if (doc.data[start]) {
qid = start;
}
}
// initialize the progress
tmp_progress.push(state_id);
var startInterview = function(graph) {
// In case the window is closed while the graph is being generated
if (typeof connected_clients[id] === 'undefined' || !connected_clients[id].hasOwnProperty('data')) {
return;
}
// Now we can use the distance object mapping to find how many questions are in front of the current one
var fraction = graph[qid] || '';
// Turn the start question of the interview, completing any before logic in the question
run = interview.start(qid, doc.data[qid], doc.data, {}, [], state_id);
// Build the history for the drop down menu
progress = interview.progress(run.progress);
// cache the graph data
connected_clients[id].data.distance = graph;
// the first question does not run helpers.merge, which adds the fields
connected_clients[id].data.master.fields = [];
// we don't need to rerun the helpers.merge function because any vars that are set at the beginning will be returned with run
// this is the master vars after all the logic is run, including the before logic of the question we are showing
connected_clients[id].data.master.vars = run.master;
connected_clients[id].data.interview = {
id: doc.id,
name: doc.name,
description: doc.description
};
send_data = {
// send the id (count) that corresponds to the database record of the tmp record, which corresponds to the save_id if we save
id: tmp_count,
qid: run.qid,
data: {
question: run.question,
progress: progress,
debug: run.debug,
fraction: fraction
},
valid: true
};
// create the new database record for the interview being worked on
tmp.id = tmp_count;
// a reference to where the last state is
tmp.current = state_id;
tmp.history = run.progress;
tmp.created = new Date();
tmp.last_modified = new Date();
// the progress will record the state id after each question
tmp.progress = tmp_progress;
tmp.save(function(err) {
if (err) {
console.log(err);
throw err;
}
// create a new state record
state.id = state_id;
state.tmp_id = tmp_count;
state.created = new Date();
state.last_modified = new Date();
state.data = connected_clients[id].data;
state.save(function(err) {
if (err) {
console.log(err);
throw err;
}
// update the interview in the database to cache the distance graph, if NOT in preview mode
if (!preview && doc.distance.update) {
doc.distance = {
update: false,
graph: graph
};
doc.save(function () {
if (err) {
console.log(err);
throw err;
}
emitData(id, 'question', send_data);
});
} else {
emitData(id, 'question', send_data);
}
});
});
};
// check if graphviz progress is disabled
if (app.get('disable_graphviz_progress')) {
startInterview({});
} else {
// we might not need to update the progress fraction if the interview has been modified
if (doc.distance.update) {
interview.distance(doc.data, qid, startInterview);
} else {
startInterview(doc.distance.graph);
}
}
});
});
});
});
});
// When the continue button on the viewer is clicked, or when back is clicked, or when the progress dropdown is changed
client.on('question', function (data) {
getSession(client.handshake.headers.cookie, function (err, session) {
if (err || !session) {
console.log(err);
throw err || new Error('Session was not found');
}
var loop;
var query;
var validate;
var send_data;
var run;
var next;
var progress;
var fraction;
var qid = data.qid;
var loop_index = null;
// this is the array of objects with the answers
var fields = data.fields;
var destination = data.destination;
// TODO: sanitize the inputs; this is not DRY — don't query the database every time for this interview, store the variable
models.Interviews.findOne({id: data.interview}, function (err, doc) {
if (err) {
console.log(err);
throw err;
}
if (!doc) {
emitData(id, 'srv_error', { id: null, error: interview.error("The interview could not be found.").error.content, valid: false });
return;
}
models.Tmps.findOne({id: data.id}, function (err, tmp) {
if (err) {
console.log(err);
throw err;
}
// look up the current state using the current id
models.States.findOne({id: tmp.current}, function (err, state) {
if (err) {
console.log(err);
throw err;
}
models.Counters.findOne({}, function (err, counter) {
if (err) {
console.log(err);
throw err;
}
var tmp_progress = tmp.progress;
var new_state = new models.States();
var new_state_id = counter.state_count + 1 + '-' + tmp.id;
// update the counter in the database
models.Counters.update({ state_count: counter.state_count + 1 }, function (err) {
if (err) {
console.log(err);
throw err;
}
// Save the database interview information to the clients socket object
connected_clients[id].data = state.data;
// The validate function will evaluate the answers sent when the user clicks next.
// Pass the answers, and the fields for this question...so we can compare to the validation object.
// Validation will return either true, or false and an error message.
validate = interview.validate(fields, doc.data[qid], connected_clients[id].data.master.vars, doc.data);
// check all the fields for their validation
if (validate.error) {
// prepare the error data for the client
send_data = {
id: tmp.id,
qid: null,
// this is an object with the error message, name of the field that caused the error and a truth value
data: validate,
valid: false
};
emitData(id, 'question', send_data);
} else {
if (err) {
console.log(err);
throw err;
}
// This tests to see of the loop variable is set in the question, activating the loop.
// When a question is in a loop, its answers are stored differently.
loop = (doc.data[qid].loop1 !== null && typeof doc.data[qid].loop1 !== 'undefined' && doc.data[qid].loop1 !== '') ? true : false;
connected_clients[id].data.master.fields = fields;
// We need to get the loop_index before the fields are merged into the master set
if (loop) {
for (var prop1 in connected_clients[id].data.master.vars) {
if (connected_clients[id].data.master.vars.hasOwnProperty(prop1)) {
if (connected_clients[id].data.master.vars[prop1].loop) {
if (connected_clients[id].data.master.vars[prop1].qid === qid) {
if (Array.isArray(connected_clients[id].data.master.vars[prop1].values[prop1])) {
loop_index = connected_clients[id].data.master.vars[prop1].values[prop1].length;
break;
}
}
}
}
}
// set the loop index to 0 if it wasn't set, since it's in a loop
if (loop_index === null) {
loop_index = 0;
}
}
// merge the variables into the master set
connected_clients[id].data.master.vars = helpers.merge(connected_clients[id].data.master.vars, fields, loop, doc.data[qid]);
// get the before_logic
next = interview.next(qid, doc.data[qid], doc.data, connected_clients[id].data.master.vars, fields, tmp.history, new_state_id, destination);
// check if the next question is in a loop
var next_loop = (doc.data[next.qid].loop1 !== null && typeof doc.data[next.qid].loop1 !== 'undefined' && doc.data[next.qid].loop1 !== '');
// we need to get the loop length so
// we can get the correct state based on the loop index, which will
// be incremented and saved in the state when it's saved
if (next_loop) {
var next_loop_index = null;
for (var prop2 in connected_clients[id].data.master.vars) {
if (connected_clients[id].data.master.vars.hasOwnProperty(prop2)) {
if (connected_clients[id].data.master.vars[prop2].loop) {
if (connected_clients[id].data.master.vars[prop2].qid === next.qid) {
if (Array.isArray(connected_clients[id].data.master.vars[prop2].values[prop2])) {
next_loop_index = connected_clients[id].data.master.vars[prop2].values[prop2].length;
break;
}
}
}
}
}
// set the loop index to 0 if it wasn't set, since it's in a loop
if (next_loop_index === null) {
next_loop_index = 0;
}
query = {
tmp_id: tmp.id,
base_qid: next.qid,
loop_id: next_loop_index
};
} else {
query = {
tmp_id: tmp.id,
base_qid: next.qid
};
}
// Look for any states that have run this question already
models.States.find(query).sort({created: 1}).exec(function (err, states) {
if (err) {
console.log(err);
throw err;
}
// Build the question using any previously answered questions
run = interview.build(next, doc.data, fields, tmp.history, new_state_id, states);
// build the history for the progress bar
progress = interview.progress(run.progress);
// find out the fractional progress of the interview
fraction = (connected_clients[id].data.distance && connected_clients[id].data.distance.hasOwnProperty(run.qid)) ? connected_clients[id].data.distance[run.qid] : '';
// update master vars list, since new answers could be created in the logic
connected_clients[id].data.master.vars = run.master;
// This is the data that gets sent back to the client.
// It involves the HTML of the question to show the user,
// and debug info for the editor (if edit mode)
send_data = {
id: tmp.id,
qid: run.qid,
data: {
question: run.question,
progress: progress,
debug: run.debug,
fraction: fraction
},
valid: true
};
// create a new state record
new_state.id = new_state_id;
new_state.tmp_id = tmp.id;
new_state.loop_id = loop_index;
new_state.base_qid = qid;
new_state.created = new Date();
new_state.last_modified = new Date();
new_state.data = connected_clients[id].data;
new_state.save(function(err) {
if (err) {
console.log(err);
throw err;
}
tmp_progress.push(new_state_id);
// update the interview data in the database corresponding to the socket ID
models.Tmps.update({id:tmp.id}, {current: new_state_id, progress: tmp_progress, history: run.progress, last_modified: new Date().getTime()}, function (err) {
if (err) {
console.log(err);
throw err;
}
// send the data back to the client
emitData(id, 'question', send_data);
});
});
});
}
});
});
});
});
});
});
});
// When back is clicked, or when the progress dropdown is changed
client.on('back', function (data) {
getSession(client.handshake.headers.cookie, function (err, session) {
if (err || !session) {
console.log(err);
throw err || new Error('The session could not be found.');
}
models.Interviews.findOne({ id: data.interview }, function (err, doc) {
if (err) {
console.log(err);
throw err;
}
if (!doc) {
emitData(id, 'srv_error', { id: null, error: interview.error("The interview could not be found.").error.content, valid: false });
return;
}
var send_data, back, history, fraction;
// Look up the data for the users interview from the database
models.Tmps.findOne({ id: data.id }, function (err, tmp) {
if (err) {
console.log(err);
throw err;
}
if (!tmp) {
emitData(id, 'srv_error', { id: null, error: interview.error("A record could not be found for this interview.").error.content, valid: false });
return;
}
var tmp_progress = tmp.progress;
var tmp_history = tmp.history;
// get the updated history and the id of the state we want to load
history = interview.history(tmp_progress, tmp_history, data.backid, data.previd);
tmp_progress = history.progress;
tmp_history = history.history;
// We need two states since the latest has the fields to repopulate, and the second has the state we want to revert to
models.States.find({id: { $in: [history.new_current_state, history.removed_state]} }).sort({id: -1}).exec(function (err, states) {
if (err) {
console.log(err);
throw err;
}
// this will be set to the current state
var new_current_state = states[1];
// use the fields from this state to repopulate
var removed_state = states[0];
back = interview.back(doc.data, new_current_state.data.master.vars, removed_state.data.master.fields, history.qid);
// find out the fractional progress of the interview
fraction = (connected_clients[id].data.distance && connected_clients[id].data.distance.hasOwnProperty(back.qid)) ? connected_clients[id].data.distance[back.qid] : '';
send_data = {
id: tmp.id,
qid: back.qid,
data: {
question: back.question,
progress: interview.progress(tmp_history),
debug: back.debug,
fraction: fraction
},
valid: true
};
// update the interview data in the database corresponding to the socket ID
models.Tmps.update({id:tmp.id}, {current: new_current_state.id, progress: tmp_progress, history: tmp_history, last_modified: new Date().getTime()}, function (err) {
if (err) {
console.log(err);
throw err;
}
// send the data back to the client
emitData(id, 'question', send_data);
});
});
});
});
});
});
// This is for the editor, not the viewer
client.on('save', function (data) {
getSession(client.handshake.headers.cookie, function (err, session) {
if (err || !session) {
console.log(err);
throw err;
}
var save = false;
// If there's a session here (not null) then the user is logged in while connecting to a socket
if (session.user) {
if (session.user.privledges.editor_save) {
save = true;
}
}
// only save to the database if the user has privileges
if (save) {
//TODO: sanitize the inputs
models.Interviews.findOne({ id: data.id }, function (err, doc) {
if (err) {
console.log(err);
throw err;
}
if (!doc) {
emitData(id, 'srv_error', { id: null, error: interview.error("The interview could not be found.").error.content, valid: false });
return;
}
doc.description = data.settings.description;
doc.steps = data.settings.steps;
doc.start = data.settings.start;
doc.data = data.data;
doc.distance = {
update: true,
graph: {}
};
doc.save();
connected_clients[id].socket.emit('saved', true);
});
} else {
connected_clients[id].socket.emit('saved', false);
}
});
});
// when a client clicks the button to reorder the graph this fires
// TODO send this only to the client that is the editor
client.on('graph', function (data) {
if (app.get('disable_graphviz_tidy')) {
emitData(id, 'graph', {});
} else {
var g = graphviz.digraph("G");
var options = {
type: "dot",
G: {
splines: false,
rankdir: "BT",
nodesep: "0.2"
}
};
// this creates the initial dot file to be rendered
for (var prop in data.nodes) {
if (data.nodes.hasOwnProperty(prop)) {
g.addNode(prop);
if (data.nodes[prop]) {
for (var i = 0; i < data.nodes[prop].length; i+=1) {
g.addEdge(prop, data.nodes[prop][i]);
}
}
}
}
// this takes the dot graph generated above and creates a dot file with all the positions
g.output(options, function (out) {
var dot = out.toString('utf-8');
var regex = /(q\d+)\s\[pos="(\d+),(\d+)",/gmi;
var graph = {};
var match;
while ((match = regex.exec(dot)) !== null) {
graph[match[1].toString()] = {
x: parseInt(match[2],10),
y: parseInt(match[3],10)
};
}
emitData(id, 'graph', graph);
});
}
});
// This gets called when the save button in the viewer is clicked.
// We need the name and note (if any) to populate the text area on the save pop up
client.on('get_saved_note', function (data) {
//sanitize all the data sent over
var clean_id = sanitizor.clean(data.id.toString());
// look to see if there is already a saved interview with the same ID being sent from the client
models.Saves.findOne({ id: clean_id }).exec(function (err, doc) {
if (err) {
console.log(err);
throw err;
}
// Set the defaults
var name = "";
var note = "";
if (doc) {
name = doc.name;
note = doc.note;
}
// Send the existing name and note back to the client
emitData(id, 'insert_saved_note', { name: name, note: note });
});
});
// When the client clicks the save button.
// If we get here that means the user is logged in (this is checked via Ajax in viewer.js)
client.on('save_progress', function (data) {
// Sanitize all the data sent over
var clean_id = sanitizor.clean(data.id.toString());
var clean_qid = sanitizor.clean(data.qid);
var clean_name = sanitizor.clean(data.name).substring(0,100);
var clean_note = sanitizor.clean(data.note).substring(0,500);
var clean_interview = sanitizor.clean(data.interview);
// The fields are used to prepopulate the question the interview was saved on
var fields = data.fields;
// Check if a user is logged in; there will be a session ID from the database saved for them
if (client.handshake.headers.cookie) {
// revalidates that the user is still logged in, since we only checked when the socket connected
getSession(client.handshake.headers.cookie, function (err, session) {
if (err) {
console.log(err);
throw err;
}
if (session && session.user) {
// look up the data for the users interview from the database
models.Tmps.findOne({ id: clean_id }, function (err, tmp) {
if (err) {
console.log(err);
throw err;
}
if (!tmp) {
emitData(id, 'saved_progress', { valid: false });
return;
}
// Look up the current state
models.States.findOne({id: tmp.current }, function (err, state) {
if (err) {
console.log(err);
throw err;
}
// if we get here the user is still logged in and we can get all their saved interviews
// look to see if there is already a saved interview with the same ID being sent from the client
models.Saves.findOne({ id: clean_id }, function (err, doc) {
if (err) {
// let the user know there was an error
emitData(id, 'saved_progress', { valid:false });
return;
}
// If doc is not empty then there is already a record saved for that interview
if (doc) {
// update
models.Saves.update({ id:clean_id }, {
data: {
current: tmp.current,
history: tmp.history,
progress: tmp.progress,
state: state.data,
fields: fields
},
qid: clean_qid,
name: clean_name,
note: clean_note,
last_modified: new Date()
}, function (err) {
if (err) {
console.log(err);
throw err;
}
emitData(id, 'saved_progress', {valid:true});
});
} else {
var save = new models.Saves();
// The tmp id gets incremented every time an interview is started so we don't have to worry about collisions
save.id = tmp.id;
save.user_id = session.user.id;
save.interview_id = clean_interview;
save.qid = clean_qid;
save.socket_id = id;
save.name = clean_name;
save.note = clean_note;
save.interview = connected_clients[id].data.interview;
save.created = new Date();
save.last_modified = new Date();
save.data = {
current: tmp.current,
history: tmp.history,
progress: tmp.progress,
state: state.data,
fields: fields
};
save.save(function(err) {
if (err) {
console.log(err);
throw err;
}
emitData(id, 'saved_progress', {valid:true});
});
}
});
});
});
} else {
emitData(id, 'saved_progress', {valid:false});
}
});
} else {
// the user is not logged in and we can ask them to login first
emitData(id, 'saved_progress', {valid:false});
}
});
// when the client clicks the open button on the viewer
client.on('open_saves', function (data) {
// check if a user is logged in; there will be a session ID from the database saved for them
if (client.handshake.headers.cookie) {
// revalidate that the user is still logged in, since we only checked when the socket connected
getSession(client.handshake.headers.cookie, function (err, session) {
if (err) {
console.log(err);
throw err;
}
if (session && session.user) {
// if we get here the user is still logged in and we can get all their saved interviews
// look up the saved interviews for this user so we can send them back with the user's name attached
models.Saves.find({}).where('user_id').equals(session.user.id).where('interview_id').equals(data.interview).sort('-created').exec(function(err, saves) {
if (err) {
console.log(err);
throw err;
}
var admin = session.user.privledges && session.user.privledges.view_saved_interviews;
// send back all the saved interviews for the particular interview, corresponding to that user
emitData(id, 'open_saved', {
valid: true,
data: interview.saves(saves, admin)
});
});
} else {
emitData(id, 'open_saved', { valid:false });
}
});
} else {
// the user is not logged in and we can ask them to login first
emitData(id, 'open_saved', { valid:false });
}
});
// When the client clicks the open button on the viewer
client.on('load_saved', function (data) {
// First, we need to get the id from the data which is sent in the format "partial-55"
var full_partial_id = data.partial_id;
var partial_id = full_partial_id.split('-')[1];
// Now we have the id of the partial interview we want to load into the viewer
models.Saves.findOne({}).where('id').equals(partial_id).exec(function(err, partial) {
if (err) {
console.log(err);
throw err;
}
if (partial) {
// Get the interview
models.Interviews.findOne({id: partial.interview_id}, function (err, doc) {
if (err) {
console.log(err);
throw err;
}
models.Counters.findOne({}, function (err, counter) {
if (err) {
console.log(err);
throw err;
}
var tmp = new models.Tmps();
var tmp_count = counter.tmp_count + 1;
// update the counter in the database
models.Counters.update({tmp_count: tmp_count}, function (err) {
if (err) {
console.log(err);
throw err;
}
var run = interview.load(doc.data[partial.qid], doc.data, partial.data.state.master.vars);
var send_data = {
// send the id (count) that corresponds to the database record of the tmp record, which corresponds to the save_id if we save
id: tmp_count,
qid: partial.qid,
data: {
question: run.question,
progress: interview.progress(partial.data.history),
// there is no debug info; you can't load a partial interview when in preview mode, i.e. the editor
debug: null,
fraction: (partial.data.state.distance.hasOwnProperty(partial.qid)) ? partial.data.state.distance[partial.qid] : '',
fields: (partial.data.hasOwnProperty('fields')) ? partial.data.fields : null
},
valid: true,
partial:true
};
// Create the new database record for the interview being worked on
tmp.id = tmp_count;
tmp.history = [];
tmp.created = new Date();
tmp.last_modified = new Date();
// the progress will record the state id after each question
tmp.progress = [];
// Load in new states for answer pre-population
models.States.find({id: {$in: partial.data.progress }}, function (err, states) {
if (err) {
console.log(err);
throw err;
}
var state_map = {};
// Create new duplicate states with the new tmp_id
for (var i = 0; i < states.length; i+=1) {
var state = new models.States();
state.id = counter.state_count + i + 1 + '-' + tmp.id;
state.tmp_id = tmp.id;
state.loop_id = states[i].loop_id;
state.base_qid = states[i].base_qid;
state.created = new Date();
state.last_modified = new Date();
state.data = states[i].data;
state.save();
if (states[i].id === partial.data.current) {
tmp.current = state.id;
}
// record the new state as the progress
tmp.progress.push(state.id);
state_map[states[i].id] = state.id;
}
for (var j = 0; j < partial.data.history.length; j+=1) {
partial.data.history[j].state = state_map[partial.data.history[j].state];
tmp.history.push(partial.data.history[j]);
}
tmp.save(function (err) {
if (err) {
console.log(err);
throw err;
}
var state_count = counter.state_count + states.length + 1;
// update the counter in the database
models.Counters.update({state_count: state_count}, function (err) {
if (err) {
console.log(err);
throw err;
}
emitData(id, 'question', send_data);
});
});
});
});
});
});
} else {
// for some reason we could not find the partial interview we are trying to load
console.log('The saved record was not found when trying to load a saved interview.');
emitData(id, 'srv_error', { id: null, error: interview.error("The saved interview could not be found.").error.content, valid: false });
}
});
});
// When the client clicks the process button for a saved interview
client.on('process_saved', function (data) {
if (client.handshake.headers.cookie) {
// revalidate that the user is still logged in, since we only checked when the socket connected
getSession(client.handshake.headers.cookie, function (err, session) {
if (err) {
console.log(err);
throw err;
}
var full_partial_id = data.partial_id;
var partial_id = full_partial_id.split('-')[2];
models.Saves.findOne({}).where('id').equals(partial_id).exec(function(err, save) {
if (err) {
return console.log(err);
}
if (!save) {
console.log('The saved record was not found when trying to load a saved interview.');
return;
}
// Get the interview
models.Interviews.findOne({id: save.interview_id}, function (err, doc) {
if (err) {
console.log(err);
return;
}
if (!doc) {
console.log('The interview could not be found.');
return;
}
// Process the saved interview as if it were completed
process.output(doc, save.data.state.master.vars, save.data.history, app.get('base_location'), app, session.user.id, function (err, data) {
if (err) {
console.log(err);
emitData(id, 'process_saved', { valid: false });
} else {
var client = {
full: session.user.name
};
process.email(null, doc.name, app.get('base_location'), doc, doc.on_complete, doc.deliverables, client, app, function (err, response) {
if (err) {
console.log(err);
emitData(id, 'process_saved', { valid: false });
} else {
emitData(id, 'process_saved', { valid: true });
}
});
}
});
});
});
});
}
});
// when a finish button is clicked in the interview
client.on('finish', function (data) {
var validate, emit, deliverables, run, progress, on_complete, send_data;
var base_location = app.get('base_location');
// this is the array of objects with the answers
var fields = data.fields;
var qid = data.qid;
// TODO: sanitize the inputs; this is not DRY - don't query the database every time for this interview, store the variable
models.Interviews.findOne({ id: data.interview }, function (err, doc) {
if (err) {
console.log(err);
throw err;
}
if (!doc) {
emitData(id, 'srv_error', { id: null, error: interview.error("The interview could not be found.").error.content, valid: false });
return;
}
models.Tmps.findOne({ id: data.id }, function (err, tmp) {
if (err) {
console.log(err);
throw err;
}
if (!tmp) {
emitData(id, 'srv_error', { id: null, error: interview.error("A record could not be found for this interview.").error.content, valid: false });
return;
}
// look up the current state using the current id
models.States.findOne({id: tmp.current }, function (err, state) {
if (err) {
console.log(err);
throw err;
}
models.Counters.findOne({}, function (err, counter) {
if (err) {
console.log(err);
throw err;
}
// update the counter in the database
models.Counters.update({state_count: counter.state_count + 1}, function (err) {
if (err) {
console.log(err);
throw err;
}
var tmp_progress = tmp.progress;
var tmp_history = tmp.history;
var new_state = new models.States();
var new_state_id = counter.state_count + 1 + '-' + tmp.id;
// save the database interview information to the clients socket object
connected_clients[id].data = state.data;
// the validate function will evaluate the answers sent when the user clicks next
// pass the answers, and the fields for this question...so we can compare to the validation object
// validate will return either true, or false and an error message
validate = interview.validate(fields, doc.data[qid], connected_clients[id].data.master.vars, doc.data);
// check all the fields for their validation
if (validate.error) {
// send the error back to the client
send_data = {
id: tmp.id,
qid: null,
data: validate,
valid: false
};
// inform the client of what's happening
emitData(id, 'question', send_data);
} else {
var loop = (doc.data[qid].loop1 !== null && typeof doc.data[qid].loop1 !== 'undefined' && doc.data[qid].loop1 !== '');
connected_clients[id].data.master.fields = fields;
connected_clients[id].data.master.vars = helpers.merge(connected_clients[id].data.master.vars, fields, loop, doc.data[qid]);
// this has all the info for what to do when the finish button is clicked
on_complete = doc.on_complete;
deliverables = doc.deliverables;
// handle all the deliverables, if there are any
// TODO: add a default deliverable; an error is currently sent back when there are none
if (deliverables.length !== 0) {
// This callback gets fired when all the deliverables have been created. It is passed into the process.output function
var callback = function (err, data) {
if (err) {
// if any of the deliverables could not be produced, send an error back to the client
console.log(err);
send_data = {
id: tmp.id,
error: interview.error(err).error.content,
valid: false
};
emitData(id, 'srv_error', send_data);
} else {
// now we have the folder where all the deliverables are stored
connected_clients[id].data.deliverables = data.dir;
// store the client info onto the master object
connected_clients[id].data.client = data.client;
// check to see if we want to allow the client to receive the deliverables via email
// if this is true we send a question to the client asking for an email
if (on_complete.email_deliverables_to_client) {
// create the final question of the interview, asking for an email
run = interview.final(qid, doc.data[qid], doc.data, connected_clients[id].data.master.vars, fields, tmp_progress, deliverables);
// build the history
progress = interview.progress(tmp_history);
// update master vars list
connected_clients[id].data.master.vars = run.master;
connected_clients[id].data.progress = run.progress;
// this is the data that gets sent back to the client. It involves the HTML of the question to
// show the user, and debug info for the editor (if edit mode)
send_data = {
id: tmp.id,
qid: run.qid,
data: {
question: run.question,
progress: progress,
debug: run.debug
},
valid: true
};
emitData(id, 'question', send_data);
} else {
// just handle the emails and output the done question
// base_location - this is the server base so we can find out where to put the zip
// on_complete - this is the settings for what to do at the end
// the first null is the email of the client
process.email(null, doc.name, base_location, connected_clients[id].data, on_complete, deliverables, data.client, app, function (err, response) {
if (err) {
// if there were any problems with emails, or zipping folders
console.log(err);
emit = 'srv_error';
send_data = {
id: null,
error: interview.error(err).error.content,
valid: false
};
emitData(id, emit, send_data);
} else {
// this will put together the final success question after everything is done
run = interview.done();
emit = 'question';
send_data = {
id: null,
qid: null,
data: {
question: run.question,
progress: run.progress,
debug: run.debug
},
valid: true
};
// create a new state record
new_state.id = new_state_id;
new_state.created = new Date();
new_state.last_modified = new Date();
new_state.data = connected_clients[id].data;
new_state.save(function(err) {
if (err) {
console.log(err);
throw err;
}
tmp_progress.push(new_state_id);
// Update the interview data
models.Tmps.update({id:tmp.id}, {current: new_state_id, progress: tmp_progress, history: run.progress, last_modified: new Date().getTime()}, function (err) {
if (err) {
console.log(err);
throw err;
}
// inform the client of what's happening; either the docs were produced or there was an error
emitData(id, emit, send_data);
});
});
}
});
}
}
};
if (client.handshake.sessionID) {
// check if the user is logged in
sessionStore.get(client.handshake.sessionID, function (err, session) {
if (err) {
console.log(err);
throw err;
}
if (session && session.user) {
process.output(doc, connected_clients[id].data.master.vars, tmp_history, base_location, app, session.user.id, callback);
}
});
} else {
// An unregistered user is completing the interview
process.output(doc, connected_clients[id].data.master.vars, tmp_history, base_location, app, null, callback);
}
} else {
send_data = {
id: null,
error: interview.error("The interview does not have any deliverables.").error.content,
valid: false
};
emitData(id, 'srv_error', send_data);
}
}
});
});
});
});
});
});
// When the user clicks the send button at the very end, this is the very last button
client.on('send', function (data) {
var email = sanitizor.clean(data.email);
var base_location = app.get('base_location');
var path = connected_clients[id].data.deliverables;
var send_data;
// If no email is entered, or the format is not an email, send them back
if (validator.check(email, ['required','email']) ) {
//If the email is valid we can email the user the deliverables
models.Interviews.findOne({ id: data.interview }, function (err, doc) {
var emit, run;
if (err) {
console.log(err);
throw err;
}
if (!doc) {
emitData(id, 'srv_error', { id: null, error: interview.error("The interview could not be found.").error.content, valid: false });
return;
}
var on_complete = doc.on_complete;
var deliverables = doc.deliverables;
// Send out all the emails and finish up
process.email(email, doc.name, base_location, connected_clients[id].data, on_complete, deliverables, connected_clients[id].data.client, app, function (err, response) {
if (err) {
// If there were any problems with emails, or zipping folders
console.log(err);
emit = 'srv_error';
send_data = {
id: null,
error: interview.error(err).error.content,
valid: false
};
} else {
// This will put together the final success question after everything is done
run = interview.done();
emit = 'question';
send_data = {
id: null,
qid: null,
data: {
question: run.question,
progress: run.progress,
debug: run.debug
},
valid: true
};
}
// Inform the client of what's happening. Either the docs were produced or there was an error.
emitData(id, emit, send_data);
});
});
} else {
send_data = {
id: null,
qid: null,
data: {
error: true,
message: 'The email is required and must be valid.',
name: 'q-final'
},
valid: false
};
emitData(id, 'question', send_data);
}
});
// When a client disconnects from the viewer, or editor
client.on('disconnect', function () {
if (connected_clients[id]) {
delete connected_clients[id];
console.log(' debug - ' + 'client ' + client.id + ' is disconnected');
console.log(' debug - ' + 'total number of connected clients is ' + Object.keys(connected_clients).length);
}
});
});
return io;
}; |
#include <gtk/gtk.h>
#include "x264_gtk_i18n.h"
#include "x264_gtk_private.h"
/* Callbacks */
static void _more_deblocking_filter (GtkToggleButton *button,
gpointer user_data);
static void _more_cabac (GtkToggleButton *button,
gpointer user_data);
static void _more_mixed_ref (GtkToggleButton *button,
gpointer user_data);
GtkWidget *
_more_page (X264_Gui_Config *config)
{
GtkWidget *vbox;
GtkWidget *frame;
GtkWidget *hbox;
GtkWidget *table;
GtkWidget *eb;
GtkWidget *label;
GtkObject *adj;
GtkRequisition size;
GtkRequisition size2;
GtkRequisition size3;
GtkRequisition size4;
GtkRequisition size5;
GtkTooltips *tooltips;
tooltips = gtk_tooltips_new ();
label = gtk_entry_new_with_max_length (3);
gtk_widget_size_request (label, &size);
gtk_widget_destroy (GTK_WIDGET (label));
label = gtk_check_button_new_with_label (_("Deblocking Filter"));
gtk_widget_size_request (label, &size2);
gtk_widget_destroy (GTK_WIDGET (label));
label = gtk_label_new (_("Partition decision"));
gtk_widget_size_request (label, &size3);
gtk_widget_destroy (GTK_WIDGET (label));
label = gtk_label_new (_("Threshold"));
gtk_widget_size_request (label, &size5);
gtk_widget_destroy (GTK_WIDGET (label));
vbox = gtk_vbox_new (FALSE, 0);
gtk_container_set_border_width (GTK_CONTAINER (vbox), 6);
/* Motion Estimation */
frame = gtk_frame_new (_("Motion Estimation"));
gtk_box_pack_start (GTK_BOX (vbox), frame, FALSE, TRUE, 6);
gtk_widget_show (frame);
table = gtk_table_new (5, 3, TRUE);
gtk_table_set_row_spacings (GTK_TABLE (table), 6);
gtk_table_set_col_spacings (GTK_TABLE (table), 6);
gtk_container_set_border_width (GTK_CONTAINER (table), 6);
gtk_container_add (GTK_CONTAINER (frame), table);
gtk_widget_show (table);
eb = gtk_event_box_new ();
gtk_event_box_set_visible_window (GTK_EVENT_BOX (eb), FALSE);
gtk_tooltips_set_tip (tooltips, eb,
_("Partition decision - description"),
"");
gtk_table_attach_defaults (GTK_TABLE (table), eb,
0, 1, 0, 1);
gtk_widget_show (eb);
label = gtk_label_new (_("Partition decision"));
gtk_widget_set_size_request (label, size2.width, size3.height);
gtk_misc_set_alignment (GTK_MISC (label), 0.0, 0.5);
gtk_container_add (GTK_CONTAINER (eb), label);
gtk_widget_show (label);
config->more.motion_estimation.partition_decision = gtk_combo_box_new_text ();
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.motion_estimation.partition_decision),
_("1 (Fastest)"));
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.motion_estimation.partition_decision),
"2");
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.motion_estimation.partition_decision),
"3");
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.motion_estimation.partition_decision),
"4");
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.motion_estimation.partition_decision),
_("5 (High quality)"));
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.motion_estimation.partition_decision),
_("6 (RDO)"));
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.motion_estimation.partition_decision),
_("6b (RDO on B frames)"));
gtk_table_attach_defaults (GTK_TABLE (table), config->more.motion_estimation.partition_decision,
1, 3, 0, 1);
gtk_widget_show (config->more.motion_estimation.partition_decision);
eb = gtk_event_box_new ();
gtk_event_box_set_visible_window (GTK_EVENT_BOX (eb), FALSE);
gtk_tooltips_set_tip (tooltips, eb,
_("Method - description"),
"");
gtk_table_attach_defaults (GTK_TABLE (table), eb,
0, 1, 1, 2);
gtk_widget_show (eb);
label = gtk_label_new (_("Method"));
gtk_misc_set_alignment (GTK_MISC (label), 0.0, 0.5);
gtk_container_add (GTK_CONTAINER (eb), label);
gtk_widget_show (label);
config->more.motion_estimation.method = gtk_combo_box_new_text ();
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.motion_estimation.method),
_("Diamond Search"));
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.motion_estimation.method),
_("Hexagonal Search"));
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.motion_estimation.method),
_("Uneven Multi-Hexagon"));
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.motion_estimation.method),
_("Exhaustive search"));
gtk_table_attach_defaults (GTK_TABLE (table), config->more.motion_estimation.method,
1, 3, 1, 2);
gtk_widget_show (config->more.motion_estimation.method);
eb = gtk_event_box_new ();
gtk_event_box_set_visible_window (GTK_EVENT_BOX (eb), FALSE);
gtk_tooltips_set_tip (tooltips, eb,
_("Range - description"),
"");
gtk_table_attach_defaults (GTK_TABLE (table), eb,
0, 1, 2, 3);
gtk_widget_show (eb);
label = gtk_label_new (_("Range"));
gtk_widget_size_request (label, &size4);
gtk_misc_set_alignment (GTK_MISC (label), 0.0, 0.5);
gtk_container_add (GTK_CONTAINER (eb), label);
gtk_widget_show (label);
config->more.motion_estimation.range = gtk_entry_new_with_max_length (3);
gtk_widget_set_size_request (config->more.motion_estimation.range,
20, size.height);
gtk_table_attach_defaults (GTK_TABLE (table), config->more.motion_estimation.range,
1, 2, 2, 3);
gtk_widget_show (config->more.motion_estimation.range);
config->more.motion_estimation.chroma_me = gtk_check_button_new_with_label (_("Chroma ME"));
gtk_tooltips_set_tip (tooltips, config->more.motion_estimation.chroma_me,
_("Chroma ME - description"),
"");
gtk_table_attach_defaults (GTK_TABLE (table), config->more.motion_estimation.chroma_me,
2, 3, 2, 3);
gtk_widget_show (config->more.motion_estimation.chroma_me);
eb = gtk_event_box_new ();
gtk_event_box_set_visible_window (GTK_EVENT_BOX (eb), FALSE);
gtk_tooltips_set_tip (tooltips, eb,
_("Max Ref. frames - description"),
"");
gtk_table_attach_defaults (GTK_TABLE (table), eb,
0, 1, 3, 4);
gtk_widget_show (eb);
label = gtk_label_new (_("Max Ref. frames"));
gtk_misc_set_alignment (GTK_MISC (label), 0.0, 0.5);
gtk_container_add (GTK_CONTAINER (eb), label);
gtk_widget_show (label);
config->more.motion_estimation.max_ref_frames = gtk_entry_new_with_max_length (3);
gtk_widget_set_size_request (config->more.motion_estimation.max_ref_frames,
20, size.height);
gtk_table_attach_defaults (GTK_TABLE (table), config->more.motion_estimation.max_ref_frames,
1, 2, 3, 4);
gtk_widget_show (config->more.motion_estimation.max_ref_frames);
config->more.motion_estimation.mixed_refs = gtk_check_button_new_with_label (_("Mixed Refs"));
gtk_tooltips_set_tip (tooltips, config->more.motion_estimation.mixed_refs,
_("Mixed Refs - description"),
"");
g_signal_connect (G_OBJECT (config->more.motion_estimation.mixed_refs),
"toggled",
G_CALLBACK (_more_mixed_ref), config);
gtk_table_attach_defaults (GTK_TABLE (table), config->more.motion_estimation.mixed_refs,
2, 3, 3, 4);
gtk_widget_show (config->more.motion_estimation.mixed_refs);
config->more.motion_estimation.fast_pskip = gtk_check_button_new_with_label (_("Fast P skip"));
gtk_tooltips_set_tip (tooltips, config->more.motion_estimation.fast_pskip,
_("Fast P skip - description"),
"");
gtk_table_attach_defaults (GTK_TABLE (table), config->more.motion_estimation.fast_pskip,
0, 1, 4, 5);
gtk_widget_show (config->more.motion_estimation.fast_pskip);
config->more.motion_estimation.dct_decimate = gtk_check_button_new_with_label (_("DCT decimate"));
gtk_tooltips_set_tip (tooltips, config->more.motion_estimation.dct_decimate,
_("DCT decimate - description"),
"");
gtk_table_attach_defaults (GTK_TABLE (table), config->more.motion_estimation.dct_decimate,
1, 2, 4, 5);
gtk_widget_show (config->more.motion_estimation.dct_decimate);
/* Misc. Options */
frame = gtk_frame_new (_("Misc. Options"));
gtk_box_pack_start (GTK_BOX (vbox), frame, FALSE, TRUE, 6);
gtk_widget_show (frame);
table = gtk_table_new (5, 4, FALSE);
gtk_table_set_row_spacings (GTK_TABLE (table), 6);
gtk_table_set_col_spacings (GTK_TABLE (table), 6);
gtk_container_set_border_width (GTK_CONTAINER (table), 6);
gtk_container_add (GTK_CONTAINER (frame), table);
gtk_widget_show (table);
eb = gtk_event_box_new ();
gtk_event_box_set_visible_window (GTK_EVENT_BOX (eb), FALSE);
gtk_tooltips_set_tip (tooltips, eb,
_("Sample Aspect Ratio - description"),
"");
gtk_table_attach_defaults (GTK_TABLE (table), eb,
0, 1, 0, 1);
gtk_widget_show (eb);
label = gtk_label_new (_("Sample Aspect Ratio"));
gtk_misc_set_alignment (GTK_MISC (label), 0.0, 0.5);
gtk_container_add (GTK_CONTAINER (eb), label);
gtk_widget_show (label);
hbox = gtk_hbox_new (TRUE, 6);
gtk_table_attach_defaults (GTK_TABLE (table), hbox,
1, 2, 0, 1);
gtk_widget_show (hbox);
config->more.misc.sample_ar_x = gtk_entry_new_with_max_length (3);
gtk_widget_set_size_request (config->more.misc.sample_ar_x, 25, size.height);
gtk_box_pack_start (GTK_BOX (hbox), config->more.misc.sample_ar_x, FALSE, TRUE, 0);
gtk_widget_show (config->more.misc.sample_ar_x);
config->more.misc.sample_ar_y = gtk_entry_new_with_max_length (3);
gtk_widget_set_size_request (config->more.misc.sample_ar_y, 25, size.height);
gtk_box_pack_start (GTK_BOX (hbox), config->more.misc.sample_ar_y, FALSE, TRUE, 0);
gtk_widget_show (config->more.misc.sample_ar_y);
eb = gtk_event_box_new ();
gtk_event_box_set_visible_window (GTK_EVENT_BOX (eb), FALSE);
gtk_tooltips_set_tip (tooltips, eb,
_("Threads - description"),
"");
gtk_table_attach_defaults (GTK_TABLE (table), eb,
2, 3, 0, 1);
gtk_widget_show (eb);
label = gtk_label_new (_("Threads"));
gtk_misc_set_alignment (GTK_MISC (label), 0.0, 0.5);
gtk_container_add (GTK_CONTAINER (eb), label);
gtk_widget_show (label);
adj = gtk_adjustment_new (1.0, 1.0, 4.0, 1.0, 1.0, 1.0);
config->more.misc.threads = gtk_spin_button_new (GTK_ADJUSTMENT (adj), 1.0, 0);
gtk_widget_set_size_request (config->more.misc.threads, size5.width, size.height);
gtk_table_attach_defaults (GTK_TABLE (table),
config->more.misc.threads,
3, 4, 0, 1);
gtk_widget_show (config->more.misc.threads);
config->more.misc.cabac = gtk_check_button_new_with_label (_("CABAC"));
gtk_widget_set_size_request (config->more.misc.cabac, size5.width, size.height);
gtk_tooltips_set_tip (tooltips, config->more.misc.cabac,
_("CABAC - description"),
"");
g_signal_connect (G_OBJECT (config->more.misc.cabac),
"toggled",
G_CALLBACK (_more_cabac), config);
gtk_table_attach_defaults (GTK_TABLE (table), config->more.misc.cabac,
0, 1, 1, 2);
gtk_widget_show (config->more.misc.cabac);
eb = gtk_event_box_new ();
gtk_event_box_set_visible_window (GTK_EVENT_BOX (eb), FALSE);
gtk_tooltips_set_tip (tooltips, eb,
_("Trellis - description"),
"");
gtk_table_attach_defaults (GTK_TABLE (table), eb,
1, 2, 1, 2);
gtk_widget_show (eb);
label = gtk_label_new (_("Trellis"));
gtk_misc_set_alignment (GTK_MISC (label), 0.0, 0.5);
gtk_container_add (GTK_CONTAINER (eb), label);
gtk_widget_show (label);
config->more.misc.trellis = gtk_combo_box_new_text ();
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.misc.trellis),
_("Disabled"));
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.misc.trellis),
_("Enabled (once)"));
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.misc.trellis),
_("Enabled (mode decision)"));
gtk_table_attach_defaults (GTK_TABLE (table), config->more.misc.trellis,
2, 4, 1, 2);
gtk_widget_show (config->more.misc.trellis);
eb = gtk_event_box_new ();
gtk_event_box_set_visible_window (GTK_EVENT_BOX (eb), FALSE);
gtk_tooltips_set_tip (tooltips, eb,
_("Noise reduction - description"),
"");
gtk_table_attach_defaults (GTK_TABLE (table), eb,
0, 1, 2, 3);
gtk_widget_show (eb);
label = gtk_label_new (_("Noise reduction"));
gtk_misc_set_alignment (GTK_MISC (label), 0.0, 0.5);
gtk_container_add (GTK_CONTAINER (eb), label);
gtk_widget_show (label);
config->more.misc.noise_reduction = gtk_entry_new_with_max_length (3);
gtk_widget_set_size_request (config->more.misc.noise_reduction, size5.width, size.height);
gtk_table_attach_defaults (GTK_TABLE (table), config->more.misc.noise_reduction,
1, 2, 2, 3);
gtk_widget_show (config->more.misc.noise_reduction);
config->more.misc.df.deblocking_filter = gtk_check_button_new_with_label (_("Deblocking Filter"));
gtk_tooltips_set_tip (tooltips, config->more.misc.df.deblocking_filter,
_("Deblocking Filter - description"),
"");
g_signal_connect (G_OBJECT (config->more.misc.df.deblocking_filter),
"toggled",
G_CALLBACK (_more_deblocking_filter), config);
gtk_table_attach_defaults (GTK_TABLE (table), config->more.misc.df.deblocking_filter,
0, 1, 3, 4);
gtk_widget_show (config->more.misc.df.deblocking_filter);
eb = gtk_event_box_new ();
gtk_event_box_set_visible_window (GTK_EVENT_BOX (eb), FALSE);
gtk_tooltips_set_tip (tooltips, eb,
_("Strength - description"),
"");
gtk_table_attach_defaults (GTK_TABLE (table), eb,
1, 2, 3, 4);
gtk_widget_show (eb);
label = gtk_label_new (_("Strength"));
gtk_misc_set_alignment (GTK_MISC (label), 0.0, 0.5);
gtk_widget_set_size_request (label, size5.width, size4.height);
gtk_container_add (GTK_CONTAINER (eb), label);
gtk_widget_show (label);
config->more.misc.df.strength = gtk_hscale_new_with_range (-6.0, 6.0, 1.0);
gtk_widget_size_request (config->more.misc.df.strength, &size4);
gtk_scale_set_digits (GTK_SCALE (config->more.misc.df.strength), 0);
gtk_scale_set_value_pos (GTK_SCALE (config->more.misc.df.strength), GTK_POS_RIGHT);
// gtk_widget_set_size_request (config->more.misc.df.strength, size5.width, size4.height);
gtk_table_attach_defaults (GTK_TABLE (table), config->more.misc.df.strength,
2, 4, 3, 4);
gtk_widget_show (config->more.misc.df.strength);
eb = gtk_event_box_new ();
gtk_event_box_set_visible_window (GTK_EVENT_BOX (eb), FALSE);
gtk_tooltips_set_tip (tooltips, eb,
_("Threshold - description"),
"");
gtk_table_attach_defaults (GTK_TABLE (table), eb,
1, 2, 4, 5);
gtk_widget_show (eb);
label = gtk_label_new (_("Threshold"));
gtk_misc_set_alignment (GTK_MISC (label), 0.0, 0.5);
gtk_widget_set_size_request (label, size5.width, size4.height);
gtk_container_add (GTK_CONTAINER (eb), label);
gtk_widget_show (label);
config->more.misc.df.threshold = gtk_hscale_new_with_range (-6.0, 6.0, 1.0);
gtk_scale_set_digits (GTK_SCALE (config->more.misc.df.threshold), 0);
gtk_scale_set_value_pos (GTK_SCALE (config->more.misc.df.threshold), GTK_POS_RIGHT);
gtk_table_attach_defaults (GTK_TABLE (table), config->more.misc.df.threshold,
2, 4, 4, 5);
gtk_widget_show (config->more.misc.df.threshold);
/* Debug */
frame = gtk_frame_new (_("Debug"));
gtk_box_pack_start (GTK_BOX (vbox), frame, FALSE, TRUE, 6);
gtk_widget_show (frame);
table = gtk_table_new (2, 2, TRUE);
gtk_table_set_row_spacings (GTK_TABLE (table), 6);
gtk_container_set_border_width (GTK_CONTAINER (table), 6);
gtk_container_add (GTK_CONTAINER (frame), table);
gtk_widget_show (table);
eb = gtk_event_box_new ();
gtk_event_box_set_visible_window (GTK_EVENT_BOX (eb), FALSE);
gtk_tooltips_set_tip (tooltips, eb,
_("Log level - description"),
"");
gtk_table_attach_defaults (GTK_TABLE (table), eb,
0, 1, 0, 1);
gtk_widget_show (eb);
label = gtk_label_new (_("Log level"));
gtk_misc_set_alignment (GTK_MISC (label), 0.0, 0.5);
gtk_container_add (GTK_CONTAINER (eb), label);
gtk_widget_show (label);
config->more.debug.log_level = gtk_combo_box_new_text ();
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.debug.log_level),
_("None"));
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.debug.log_level),
_("Error"));
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.debug.log_level),
_("Warning"));
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.debug.log_level),
_("Info"));
gtk_combo_box_append_text (GTK_COMBO_BOX (config->more.debug.log_level),
_("Debug"));
gtk_table_attach_defaults (GTK_TABLE (table), config->more.debug.log_level,
1, 2, 0, 1);
gtk_widget_show (config->more.debug.log_level);
eb = gtk_event_box_new ();
gtk_event_box_set_visible_window (GTK_EVENT_BOX (eb), FALSE);
gtk_tooltips_set_tip (tooltips, eb,
_("FourCC - description"),
"");
gtk_table_attach_defaults (GTK_TABLE (table), eb,
0, 1, 1, 2);
gtk_widget_show (eb);
label = gtk_label_new ("FourCC");
gtk_misc_set_alignment (GTK_MISC (label), 0.0, 0.5);
gtk_container_add (GTK_CONTAINER (eb), label);
gtk_widget_show (label);
config->more.debug.fourcc = gtk_entry_new_with_max_length (4);
gtk_table_attach_defaults (GTK_TABLE (table),
config->more.debug.fourcc,
1, 2, 1, 2);
gtk_widget_set_sensitive (config->more.debug.fourcc, FALSE);
gtk_widget_show (config->more.debug.fourcc);
return vbox;
}
/* Callbacks */
static void
_more_deblocking_filter (GtkToggleButton *button,
gpointer user_data)
{
X264_Gui_Config *config;
config = (X264_Gui_Config *)user_data;
if (gtk_toggle_button_get_active (button)) {
gtk_widget_set_sensitive (config->more.misc.df.strength, TRUE);
gtk_widget_set_sensitive (config->more.misc.df.threshold, TRUE);
}
else {
gtk_widget_set_sensitive (config->more.misc.df.strength, FALSE);
gtk_widget_set_sensitive (config->more.misc.df.threshold, FALSE);
}
}
static void
_more_cabac (GtkToggleButton *button,
gpointer user_data)
{
X264_Gui_Config *config;
config = (X264_Gui_Config *)user_data;
if (gtk_toggle_button_get_active (button))
gtk_widget_set_sensitive (config->more.misc.trellis, TRUE);
else
gtk_widget_set_sensitive (config->more.misc.trellis, FALSE);
}
static void
_more_mixed_ref (GtkToggleButton *button,
gpointer user_data)
{
X264_Gui_Config *config;
config = (X264_Gui_Config *)user_data;
if (gtk_toggle_button_get_active (button)) {
const gchar *text;
gint val;
text = gtk_entry_get_text (GTK_ENTRY (config->more.motion_estimation.max_ref_frames));
val = (gint)g_ascii_strtoull (text, NULL, 10);
if (val < 2)
gtk_entry_set_text (GTK_ENTRY (config->more.motion_estimation.max_ref_frames), "2");
}
}
Exposure and post-exposure effects of endosulfan on Bufo bufo tadpoles: morpho-histological and ultrastructural study on epidermis and iNOS localization.
Endosulfan is a persistent organic pollutant (POP) that has lethal and sublethal effects on non-target organisms, including amphibians. In a laboratory study, we investigated the direct and post-exposure effects of endosulfan on Bufo bufo tadpoles. For this purpose, we exposed the tadpoles to a single short-term contamination event (96 h) at an environmentally realistic concentration (200 μg endosulfan/L). This was followed by a recovery period of 10 days, during which the experimental animals were kept in pesticide-free water. The endpoints were assessed in terms of mortality, incidence of deformity, effects on behavior, and the morpho-functional features of the epidermis. We found that short-term exposure to the tested concentration of endosulfan did not cause mortality but induced severe sublethal effects, such as hyperactivity, convulsions, and axis malformations. Following relocation to a pesticide-free environment, we noted two types of response within the experimental sample, in terms of morphological and behavioral traits. Moreover, by using both an ultrastructural and a morpho-functional approach, we found that short-term exposure to endosulfan negatively affected the amphibian epidermis. We also observed several histo-pathological alterations: increased mucous secretion, an increase in intercellular spaces, and extensive cell degeneration, together with the induction of an inducible isoform of nitric oxide synthase (iNOS). Following the post-exposure period, we found large areas of epidermis in which degeneration phenomena were moderate or absent, as well as a further increase in iNOS immunoreactivity. Thus, after 10 days in a pesticide-free environment, the larval epidermis was able to partially replace elements that had been compromised by a physiological and/or pathological response to the pesticide. These results highlight the need for both exposure and post-exposure experiments when attempting to assess pollutant effects.
"Japanese clam." This is the Manila clam, a variety of Pacific clam with a brown and white shell, about 6 cm (2") across. In the north of Spain, this carpet-shell clam, imported from the Orient, is now being cultivated.
Q:
LINQ and Nhibernate: create an Expression using a model property
I am getting started with LINQ and NHibernate; can you help me get oriented, please?
I need to pass a lambda expression to NHibernate's .QueryOver(), which is conditional based on a property of my model:
if (model.PropertyA != String.Empty) {
    var searchResults = nhibSession.QueryOver<type>(x =>
            x.propA == model.PropertyA)
        .List();
}
Is there a better way to do this using a C# Expression instead of a lambda statement? How do I create an Expression using model.PropertyA? Do I use Expression.Property() or Expression.Field()?
thanks
A:
How do I create an Expression using model.PropertyA?
I suspect you should be using Expression.Constant - even though it doesn't "feel" like a constant in the normal sense, it's constant for that expression as the model isn't part of the input to the expression.
Expression foo = Expression.Constant(model.PropertyA);
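If you want to build the whole predicate by hand (say, choosing the property name at runtime), you can assemble it from parts. This is a sketch under the question's own names — `type` (here written as a hypothetical `Entity` class), `propA`, `model`, and `nhibSession` are placeholders from the question, not verified mappings:

```csharp
using System;
using System.Linq.Expressions;

// Build x => x.propA == model.PropertyA piece by piece.
ParameterExpression x = Expression.Parameter(typeof(Entity), "x");
MemberExpression prop = Expression.Property(x, "propA");          // a property, so Expression.Property
ConstantExpression value = Expression.Constant(model.PropertyA);  // the captured model value is a constant here
BinaryExpression body = Expression.Equal(prop, value);

Expression<Func<Entity, bool>> predicate =
    Expression.Lambda<Func<Entity, bool>>(body, x);

var searchResults = nhibSession.QueryOver<Entity>()
    .Where(predicate)
    .List();
```

As for Expression.Property() vs Expression.Field(): use Property for a mapped property with a getter, and Field only when the member really is a public field, which NHibernate-mapped classes rarely expose.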
/*
===========================================================================
Wolfenstein: Enemy Territory GPL Source Code
Copyright (C) 1999-2010 id Software LLC, a ZeniMax Media company.
This file is part of the Wolfenstein: Enemy Territory GPL Source Code (Wolf ET Source Code).
Wolf ET Source Code is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
Wolf ET Source Code is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with Wolf ET Source Code. If not, see <http://www.gnu.org/licenses/>.
In addition, the Wolf: ET Source Code is also subject to certain additional terms. You should have received a copy of these additional terms immediately following the terms and conditions of the GNU General Public License which accompanied the Wolf ET Source Code. If not, please request a copy in writing from id Software at the address below.
If you have questions concerning this license or the applicable additional terms, you may contact in writing id Software LLC, c/o ZeniMax Media Inc., Suite 120, Rockville, Maryland 20850 USA.
===========================================================================
*/
#ifndef __UI_SHARED_H
#define __UI_SHARED_H
#include "../game/q_shared.h"
#include "../cgame/tr_types.h"
#include "keycodes.h"
#include "../../etmain/ui/menudef.h"
#define MAX_MENUNAME 32
#define MAX_ITEMTEXT 64
#define MAX_ITEMACTION 64
#define MAX_MENUDEFFILE 4096
#define MAX_MENUFILE 32768
#define MAX_MENUS 128
//#define MAX_MENUITEMS 256
#define MAX_MENUITEMS 128 // JPW NERVE q3ta was 96
#define MAX_COLOR_RANGES 10
#define MAX_MODAL_MENUS 16
#define WINDOW_MOUSEOVER 0x00000001 // mouse is over it, non exclusive
#define WINDOW_HASFOCUS 0x00000002 // has cursor focus, exclusive
#define WINDOW_VISIBLE 0x00000004 // is visible
#define WINDOW_GREY 0x00000008 // is visible but grey ( non-active )
#define WINDOW_DECORATION 0x00000010 // for decoration only, no mouse, keyboard, etc..
#define WINDOW_FADINGOUT 0x00000020 // fading out, non-active
#define WINDOW_FADINGIN 0x00000040 // fading in
#define WINDOW_MOUSEOVERTEXT 0x00000080 // mouse is over it, non exclusive
#define WINDOW_INTRANSITION 0x00000100 // window is in transition
#define WINDOW_FORECOLORSET 0x00000200 // forecolor was explicitly set ( used to color alpha images or not )
#define WINDOW_HORIZONTAL 0x00000400 // for list boxes and sliders, vertical is default; this is set for horizontal
#define WINDOW_LB_LEFTARROW 0x00000800 // mouse is over left/up arrow
#define WINDOW_LB_RIGHTARROW 0x00001000 // mouse is over right/down arrow
#define WINDOW_LB_THUMB 0x00002000 // mouse is over thumb
#define WINDOW_LB_PGUP 0x00004000 // mouse is over page up
#define WINDOW_LB_PGDN 0x00008000 // mouse is over page down
#define WINDOW_ORBITING 0x00010000 // item is in orbit
#define WINDOW_OOB_CLICK 0x00020000 // close on out of bounds click
#define WINDOW_WRAPPED 0x00040000 // manually wrap text
#define WINDOW_AUTOWRAPPED 0x00080000 // auto wrap text
#define WINDOW_FORCED 0x00100000 // forced open
#define WINDOW_POPUP 0x00200000 // popup
#define WINDOW_BACKCOLORSET 0x00400000 // backcolor was explicitly set
#define WINDOW_TIMEDVISIBLE 0x00800000 // visibility timing ( NOT implemented )
#define WINDOW_IGNORE_HUDALPHA 0x01000000 // window will apply cg_hudAlpha value to colors unless this flag is set
#define WINDOW_DRAWALWAYSONTOP 0x02000000
#define WINDOW_MODAL 0x04000000 // window is modal, the window to go back to is stored in a stack
#define WINDOW_FOCUSPULSE 0x08000000
#define WINDOW_TEXTASINT 0x10000000
#define WINDOW_TEXTASFLOAT 0x20000000
#define WINDOW_LB_SOMEWHERE 0x40000000
// CGAME cursor type bits
#define CURSOR_NONE 0x00000001
#define CURSOR_ARROW 0x00000002
#define CURSOR_SIZER 0x00000004
#ifdef CGAME
#define STRING_POOL_SIZE 128 * 1024
#else
#define STRING_POOL_SIZE 384 * 1024
#endif
#define MAX_STRING_HANDLES 4096
#define MAX_SCRIPT_ARGS 12
#define MAX_EDITFIELD 256
#define ART_FX_BASE "menu/art/fx_base"
#define ART_FX_BLUE "menu/art/fx_blue"
#define ART_FX_CYAN "menu/art/fx_cyan"
#define ART_FX_GREEN "menu/art/fx_grn"
#define ART_FX_RED "menu/art/fx_red"
#define ART_FX_TEAL "menu/art/fx_teal"
#define ART_FX_WHITE "menu/art/fx_white"
#define ART_FX_YELLOW "menu/art/fx_yel"
#define ASSET_GRADIENTBAR "ui/assets/gradientbar2.tga"
#define ASSET_SCROLLBAR "ui/assets/scrollbar.tga"
#define ASSET_SCROLLBAR_ARROWDOWN "ui/assets/scrollbar_arrow_dwn_a.tga"
#define ASSET_SCROLLBAR_ARROWUP "ui/assets/scrollbar_arrow_up_a.tga"
#define ASSET_SCROLLBAR_ARROWLEFT "ui/assets/scrollbar_arrow_left.tga"
#define ASSET_SCROLLBAR_ARROWRIGHT "ui/assets/scrollbar_arrow_right.tga"
#define ASSET_SCROLL_THUMB "ui/assets/scrollbar_thumb.tga"
#define ASSET_SLIDER_BAR "ui/assets/slider2.tga"
#define ASSET_SLIDER_THUMB "ui/assets/sliderbutt_1.tga"
#define ASSET_CHECKBOX_CHECK "ui/assets/check.tga"
#define ASSET_CHECKBOX_CHECK_NOT "ui/assets/check_not.tga"
#define ASSET_CHECKBOX_CHECK_NO "ui/assets/check_no.tga"
#define SCROLLBAR_SIZE 16.0
#define SLIDER_WIDTH 96.0
#define SLIDER_HEIGHT 10.0 // 16.0
#define SLIDER_THUMB_WIDTH 12.0
#define SLIDER_THUMB_HEIGHT 12.0 // 20.0
#define NUM_CROSSHAIRS 10
typedef struct scriptDef_s {
const char *command;
const char *args[MAX_SCRIPT_ARGS];
} scriptDef_t;
typedef struct rectDef_s {
float x; // horiz position
float y; // vert position
float w; // width
float h; // height
} rectDef_t;
typedef rectDef_t Rectangle;
// FIXME: do something to separate text vs window stuff
typedef struct {
Rectangle rect; // client coord rectangle
Rectangle rectClient; // screen coord rectangle
const char *name; //
const char *model; //
const char *group; // if it belongs to a group
const char *cinematicName; // cinematic name
int cinematic; // cinematic handle
int style; //
int border; //
int ownerDraw; // ownerDraw style
int ownerDrawFlags; // show flags for ownerdraw items
float borderSize; //
int flags; // visible, focus, mouseover, cursor
Rectangle rectEffects; // for various effects
Rectangle rectEffects2; // for various effects
int offsetTime; // time based value for various effects
int nextTime; // time next effect should cycle
vec4_t foreColor; // text color
vec4_t backColor; // border color
vec4_t borderColor; // border color
vec4_t outlineColor; // border color
qhandle_t background; // background asset
} windowDef_t;
typedef windowDef_t Window;
typedef struct {
vec4_t color;
int type;
float low;
float high;
} colorRangeDef_t;
// FIXME: combine flags into bitfields to save space
// FIXME: consolidate all of the common stuff in one structure for menus and items
// THINKABOUTME: is there any compelling reason not to have items contain items
and do away with a menu per se.. major issue is not being able to dynamically allocate
// and destroy stuff.. Another point to consider is adding an alloc free call for vm's and have
// the engine just allocate the pool for it based on a cvar
// many of the vars are re-used for different item types, as such they are not always named appropriately
// the benefits of c++ in DOOM will greatly help crap like this
// FIXME: need to put a type ptr that points to specific type info per type
//
#define MAX_LB_COLUMNS 16
typedef struct columnInfo_s {
int pos;
int width;
int maxChars;
} columnInfo_t;
typedef struct listBoxDef_s {
int startPos;
int endPos;
int drawPadding;
int cursorPos;
float elementWidth;
float elementHeight;
int elementStyle;
int numColumns;
columnInfo_t columnInfo[MAX_LB_COLUMNS];
const char *doubleClick;
const char *contextMenu;
qboolean notselectable;
} listBoxDef_t;
typedef struct editFieldDef_s {
float minVal; // edit field limits
float maxVal; //
float defVal; //
float range; //
int maxChars; // for edit fields
int maxPaintChars; // for edit fields
int paintOffset; //
} editFieldDef_t;
#define MAX_MULTI_CVARS 32
typedef struct multiDef_s {
const char *cvarList[MAX_MULTI_CVARS];
const char *cvarStr[MAX_MULTI_CVARS];
float cvarValue[MAX_MULTI_CVARS];
int count;
qboolean strDef;
const char *undefinedStr;
} multiDef_t;
typedef struct modelDef_s {
int angle;
vec3_t origin;
float fov_x;
float fov_y;
int rotationSpeed;
int animated;
int startframe;
int numframes;
int loopframes;
int fps;
int frame;
int oldframe;
float backlerp;
int frameTime;
} modelDef_t;
#define CVAR_ENABLE 0x00000001
#define CVAR_DISABLE 0x00000002
#define CVAR_SHOW 0x00000004
#define CVAR_HIDE 0x00000008
#define CVAR_NOTOGGLE 0x00000010
// OSP - "setting" flags for items
#define SVS_DISABLED_SHOW 0x01
#define SVS_ENABLED_SHOW 0x02
#define UI_MAX_TEXT_LINES 64
typedef struct itemDef_s {
Window window; // common positional, border, style, layout info
Rectangle textRect; // rectangle the text ( if any ) consumes
int type; // text, button, radiobutton, checkbox, textfield, listbox, combo
int alignment; // left center right
int textalignment; // ( optional ) alignment for text within rect based on text width
float textalignx; // ( optional ) text alignment x coord
float textaligny; // ( optional ) text alignment y coord
float textscale; // scale percentage from 72pts
int font; // (SA)
int textStyle; // ( optional ) style, normal and shadowed are it for now
const char *text; // display text
void *parent; // menu owner
qhandle_t asset; // handle to asset
const char *mouseEnterText; // mouse enter script
const char *mouseExitText; // mouse exit script
const char *mouseEnter; // mouse enter script
const char *mouseExit; // mouse exit script
const char *action; // select script
const char *onAccept; // NERVE - SMF - run when the users presses the enter key
const char *onFocus; // select script
const char *leaveFocus; // select script
const char *cvar; // associated cvar
const char *cvarTest; // associated cvar for enable actions
const char *enableCvar; // enable, disable, show, or hide based on value, this can contain a list
int cvarFlags; // what type of action to take on cvarenables
sfxHandle_t focusSound;
int numColors; // number of color ranges
colorRangeDef_t colorRanges[MAX_COLOR_RANGES];
int colorRangeType; // either
float special; // used for feeder id's etc.. diff per type
int cursorPos; // cursor position in characters
void *typeData; // type specific data ptr's
// START - TAT 9/16/2002
// For the bot menu, we have context sensitive menus
// the way it works, we could have multiple items in a menu with the same hotkey
// so in the mission pack, we search through all the menu items to find the one that is applicable to this key press
// so the item has to store both the hotkey and the command to execute
int hotkey;
const char *onKey;
// END - TAT 9/16/2002
// OSP - on-the-fly enable/disable of items
int settingTest;
int settingFlags;
int voteFlag;
const char *onEsc;
const char *onEnter;
struct itemDef_s *toolTipData; // OSP - Tag an item to this item for auto-help popups
} itemDef_t;
typedef struct {
Window window;
const char *font; // font
qboolean fullScreen; // covers entire screen
int itemCount; // number of items;
int fontIndex; //
int cursorItem; // which item has the cursor
int fadeCycle; //
float fadeClamp; //
float fadeAmount; //
const char *onOpen; // run when the menu is first opened
const char *onClose; // run when the menu is closed
const char *onESC; // run when the escape key is hit
const char *onEnter; // run when the enter key is hit
int timeout; // ydnar: milliseconds until menu times out
int openTime; // ydnar: time menu opened
const char *onTimeout; // ydnar: run when menu times out
const char *onKey[255]; // NERVE - SMF - execs commands when a key is pressed
const char *soundName; // background loop sound for menu
vec4_t focusColor; // focus color for items
vec4_t disableColor; // focus color for items
itemDef_t *items[MAX_MENUITEMS]; // items this menu contains
// START - TAT 9/16/2002
// should we search through all the items to find the hotkey instead of using the onKey array?
// The bot command menu needs to do this, see note above
qboolean itemHotkeyMode;
// END - TAT 9/16/2002
} menuDef_t;
typedef struct {
const char *fontStr;
const char *cursorStr;
const char *gradientStr;
fontInfo_t fonts[6];
qhandle_t cursor;
qhandle_t gradientBar;
qhandle_t scrollBarArrowUp;
qhandle_t scrollBarArrowDown;
qhandle_t scrollBarArrowLeft;
qhandle_t scrollBarArrowRight;
qhandle_t scrollBar;
qhandle_t scrollBarThumb;
qhandle_t buttonMiddle;
qhandle_t buttonInside;
qhandle_t solidBox;
qhandle_t sliderBar;
qhandle_t sliderThumb;
qhandle_t checkboxCheck;
qhandle_t checkboxCheckNot;
qhandle_t checkboxCheckNo;
sfxHandle_t menuEnterSound;
sfxHandle_t menuExitSound;
sfxHandle_t menuBuzzSound;
sfxHandle_t itemFocusSound;
float fadeClamp;
int fadeCycle;
float fadeAmount;
float shadowX;
float shadowY;
vec4_t shadowColor;
float shadowFadeClamp;
qboolean fontRegistered;
// player settings
qhandle_t fxBasePic;
qhandle_t fxPic[7];
qhandle_t crosshairShader[NUM_CROSSHAIRS];
qhandle_t crosshairAltShader[NUM_CROSSHAIRS];
} cachedAssets_t;
typedef struct {
const char *name;
void ( *handler )( itemDef_t *item, qboolean *bAbort, char** args );
} commandDef_t;
typedef struct {
qhandle_t ( *registerShaderNoMip )( const char *p );
void ( *setColor )( const vec4_t v );
void ( *drawHandlePic )( float x, float y, float w, float h, qhandle_t asset );
void ( *drawStretchPic )( float x, float y, float w, float h, float s1, float t1, float s2, float t2, qhandle_t hShader );
void ( *drawText )( float x, float y, float scale, vec4_t color, const char *text, float adjust, int limit, int style );
void ( *drawTextExt )( float x, float y, float scalex, float scaley, vec4_t color, const char *text, float adjust, int limit, int style, fontInfo_t* font );
int ( *textWidth )( const char *text, float scale, int limit );
int ( *textWidthExt )( const char *text, float scale, int limit, fontInfo_t* font );
int ( *multiLineTextWidth )( const char *text, float scale, int limit );
int ( *textHeight )( const char *text, float scale, int limit );
int ( *textHeightExt )( const char *text, float scale, int limit, fontInfo_t* font );
int ( *multiLineTextHeight )( const char *text, float scale, int limit );
void ( *textFont )( int font ); // NERVE - SMF
qhandle_t ( *registerModel )( const char *p );
void ( *modelBounds )( qhandle_t model, vec3_t min, vec3_t max );
void ( *fillRect )( float x, float y, float w, float h, const vec4_t color );
void ( *drawRect )( float x, float y, float w, float h, float size, const vec4_t color );
void ( *drawSides )( float x, float y, float w, float h, float size );
void ( *drawTopBottom )( float x, float y, float w, float h, float size );
void ( *clearScene )();
void ( *addRefEntityToScene )( const refEntity_t *re );
void ( *renderScene )( const refdef_t *fd );
void ( *registerFont )( const char *pFontname, int pointSize, fontInfo_t *font );
void ( *ownerDrawItem )( float x, float y, float w, float h, float text_x, float text_y, int ownerDraw, int ownerDrawFlags, int align, float special, float scale, vec4_t color, qhandle_t shader, int textStyle );
float ( *getValue )( int ownerDraw, int type );
qboolean ( *ownerDrawVisible )( int flags );
void ( *runScript )( char **p );
void ( *getTeamColor )( vec4_t *color );
void ( *getCVarString )( const char *cvar, char *buffer, int bufsize );
float ( *getCVarValue )( const char *cvar );
void ( *setCVar )( const char *cvar, const char *value );
void ( *drawTextWithCursor )( float x, float y, float scale, vec4_t color, const char *text, int cursorPos, char cursor, int limit, int style );
void ( *setOverstrikeMode )( qboolean b );
qboolean ( *getOverstrikeMode )();
void ( *startLocalSound )( sfxHandle_t sfx, int channelNum );
qboolean ( *ownerDrawHandleKey )( int ownerDraw, int flags, float *special, int key );
int ( *feederCount )( float feederID );
const char *( *feederItemText )( float feederID, int index, int column, qhandle_t * handles, int *numhandles );
const char *( *fileText )( char *fileName );
qhandle_t ( *feederItemImage )( float feederID, int index );
void ( *feederSelection )( float feederID, int index );
qboolean ( *feederSelectionClick )( itemDef_t *item );
void ( *feederAddItem )( float feederID, const char *name, int index ); // NERVE - SMF
char* ( *translateString )( const char *string ); // NERVE - SMF
void ( *checkAutoUpdate )(); // DHM - Nerve
void ( *getAutoUpdate )(); // DHM - Nerve
void ( *keynumToStringBuf )( int keynum, char *buf, int buflen );
void ( *getBindingBuf )( int keynum, char *buf, int buflen );
void ( *getKeysForBinding )( const char* binding, int* key1, int* key2 );
qboolean ( *keyIsDown )( int keynum );
void ( *setBinding )( int keynum, const char *binding );
void ( *executeText )( int exec_when, const char *text );
void ( *Error )( int level, const char *error, ... );
void ( *Print )( const char *msg, ... );
void ( *Pause )( qboolean b );
int ( *ownerDrawWidth )( int ownerDraw, float scale );
sfxHandle_t ( *registerSound )( const char *name, qboolean compressed );
void ( *startBackgroundTrack )( const char *intro, const char *loop, int fadeupTime );
void ( *stopBackgroundTrack )();
int ( *playCinematic )( const char *name, float x, float y, float w, float h );
void ( *stopCinematic )( int handle );
void ( *drawCinematic )( int handle, float x, float y, float w, float h );
void ( *runCinematicFrame )( int handle );
// Gordon: campaign stuffs
const char* ( *descriptionForCampaign )( void );
const char* ( *nameForCampaign )( void );
void ( *add2dPolys )( polyVert_t* verts, int numverts, qhandle_t hShader );
void ( *updateScreen )( void );
void ( *getHunkData )( int* hunkused, int* hunkexpected );
int ( *getConfigString )( int index, char* buff, int buffsize );
float yscale;
float xscale;
float bias;
int realTime;
int frameTime;
int cursorx;
int cursory;
qboolean debug;
cachedAssets_t Assets;
glconfig_t glconfig;
qhandle_t whiteShader;
qhandle_t gradientImage;
qhandle_t cursor;
float FPS;
} displayContextDef_t;
const char *String_Alloc( const char *p );
void String_Init();
void String_Report();
void Init_Display( displayContextDef_t *dc );
void Display_ExpandMacros( char * buff );
void Menu_Init( menuDef_t *menu );
void Item_Init( itemDef_t *item );
void Menu_PostParse( menuDef_t *menu );
menuDef_t *Menu_GetFocused();
void Menu_HandleKey( menuDef_t *menu, int key, qboolean down );
void Menu_HandleMouseMove( menuDef_t *menu, float x, float y );
void Menu_ScrollFeeder( menuDef_t *menu, int feeder, qboolean down );
qboolean Float_Parse( char **p, float *f );
qboolean Color_Parse( char **p, vec4_t *c );
qboolean Int_Parse( char **p, int *i );
qboolean Rect_Parse( char **p, rectDef_t *r );
qboolean String_Parse( char **p, const char **out );
qboolean Script_Parse( char **p, const char **out );
void PC_SourceError( int handle, char *format, ... );
void PC_SourceWarning( int handle, char *format, ... );
qboolean PC_Float_Parse( int handle, float *f );
qboolean PC_Color_Parse( int handle, vec4_t *c );
qboolean PC_Int_Parse( int handle, int *i );
qboolean PC_Rect_Parse( int handle, rectDef_t *r );
qboolean PC_String_Parse( int handle, const char **out );
qboolean PC_Script_Parse( int handle, const char **out );
qboolean PC_Char_Parse( int handle, char *out ); // NERVE - SMF
int Menu_Count();
menuDef_t *Menu_Get( int handle );
void Menu_New( int handle );
void Menu_PaintAll();
menuDef_t *Menus_ActivateByName( const char *p, qboolean modalStack );
void Menu_Reset();
qboolean Menus_AnyFullScreenVisible();
void Menus_Activate( menuDef_t *menu );
qboolean Menus_CaptureFuncActive( void );
displayContextDef_t *Display_GetContext();
void *Display_CaptureItem( int x, int y );
qboolean Display_MouseMove( void *p, int x, int y );
int Display_CursorType( int x, int y );
qboolean Display_KeyBindPending();
void Menus_OpenByName( const char *p );
menuDef_t *Menus_FindByName( const char *p );
void Menus_ShowByName( const char *p );
void Menus_CloseByName( const char *p );
void Display_HandleKey( int key, qboolean down, int x, int y );
void LerpColor( vec4_t a, vec4_t b, vec4_t c, float t );
void Menus_CloseAll();
void Menu_Paint( menuDef_t *menu, qboolean forcePaint );
void Menu_SetFeederSelection( menuDef_t *menu, int feeder, int index, const char *name );
void Display_CacheAll();
// TTimo
void Menu_ShowItemByName( menuDef_t *menu, const char *p, qboolean bShow );
void *UI_Alloc( int size );
void UI_InitMemory( void );
qboolean UI_OutOfMemory();
void Controls_GetConfig( void );
void Controls_SetConfig( qboolean restart );
void Controls_SetDefaults( qboolean lefthanded );
int trap_PC_AddGlobalDefine( char *define );
int trap_PC_RemoveAllGlobalDefines( void );
int trap_PC_LoadSource( const char *filename );
int trap_PC_FreeSource( int handle );
int trap_PC_ReadToken( int handle, pc_token_t *pc_token );
int trap_PC_SourceFileAndLine( int handle, char *filename, int *line );
int trap_PC_UnReadToken( int handle );
//
// panelhandling
//
typedef struct panel_button_s panel_button_t;
typedef struct panel_button_text_s {
float scalex, scaley;
vec4_t colour;
int style;
int align;
fontInfo_t* font;
} panel_button_text_t;
typedef qboolean ( *panel_button_key_down )( panel_button_t*, int );
typedef qboolean ( *panel_button_key_up )( panel_button_t*, int );
typedef void ( *panel_button_render )( panel_button_t* );
typedef void ( *panel_button_postprocess )( panel_button_t* );
// Button struct
struct panel_button_s {
// compile time stuff
// ======================
const char* shaderNormal;
// text
const char* text;
// rect
rectDef_t rect;
// data
int data[8];
// "font"
panel_button_text_t* font;
// functions
panel_button_key_down onKeyDown;
panel_button_key_up onKeyUp;
panel_button_render onDraw;
panel_button_postprocess onFinish;
// run-time stuff
// ======================
qhandle_t hShaderNormal;
};
void BG_PanelButton_RenderEdit( panel_button_t* button );
qboolean BG_PanelButton_EditClick( panel_button_t* button, int key );
qboolean BG_PanelButtonsKeyEvent( int key, qboolean down, panel_button_t** buttons );
void BG_PanelButtonsSetup( panel_button_t** buttons );
void BG_PanelButtonsRender( panel_button_t** buttons );
void BG_PanelButtonsRender_Text( panel_button_t* button );
void BG_PanelButtonsRender_TextExt( panel_button_t* button, const char* text );
void BG_PanelButtonsRender_Img( panel_button_t* button );
panel_button_t* BG_PanelButtonsGetHighlightButton( panel_button_t** buttons );
void BG_PanelButtons_SetFocusButton( panel_button_t* button );
panel_button_t* BG_PanelButtons_GetFocusButton( void );
qboolean BG_RectContainsPoint( float x, float y, float w, float h, float px, float py );
qboolean BG_CursorInRect( rectDef_t* rect );
void BG_FitTextToWidth_Ext( char* instr, float scale, float w, int size, fontInfo_t* font );
void AdjustFrom640( float* x, float* y, float* w, float* h );
void SetupRotatedThing( polyVert_t* verts, vec2_t org, float w, float h, vec_t angle );
#endif
|
The podcast has emerged as a promising medium for facilitating ongoing debate about issues that need more time than mainstream, profit-oriented media or the changing tides of hashtags might allow.
The presence of civil society representatives, such as State Secretary Pratikno (left), a former university rector, in government shows increased plurality in Indonesia’s bureaucracy.
Reuters/Antara News Agency. July 6, 2016
Democracy did not fail in the Maldives because it clashed with Islam. Instead, a privileged and powerful elite helped topple the elected government, and nations that advocate democratic ideals did little to stop them.
Yu Keping: ‘The movement towards democracy everywhere is a political trend that cannot be reversed. China is no exception.’
Supplied. April 15, 2016
Thousands of Hong Kong residents have taken to the streets to call for democracy and greater autonomy from mainland China. A 170,000-strong rally on July 1 followed hot on the heels of an informal referendum…
In the run up to the World Cup, the scene depicted in Brazil by the international press was split between two simple narratives. On one hand: disaster, with protests against the tournament gaining much… |
Dicarboxylic Acid Excretion in Normal Formula-Fed and Breastfed Infants.
Infant formulas are often supplemented with medium-chain triglycerides (MCTs) to optimize calories for small-for-gestational-age or preterm infants. High amounts of MCTs have been associated with an increase in dicarboxylic acid (DCA) in the urine. Elevated urinary DCA is also a clinical indicator of fatty acid metabolism disorders. The purpose of this study was to identify whether there is an amount of MCTs that can be provided without elevating urinary DCA excretion. A metabolic screening laboratory provided urinary DCA excretion data for 175 infants. It was verified that no infants had been diagnosed with metabolic disorders; they were therefore considered "metabolically normal." All infants were either formula fed or breastfed at the time of screening. The type and volume of formula provided at the time of urine screening was documented, and the exact amount of MCTs provided to each infant was calculated. The mean age of the infants was 3.09 months. The mean total DCA was determined for both the breastfed and formula-fed groups. Within the formula group, the means were 32.07, 13.36, and 5.77 mmol/mol creatinine for adipic, suberic, and sebacic acids, respectively. Spearman correlation coefficients between percent MCT and adipic, suberic, and sebacic acids were r = 0.0693, r = 0.0166, and r = -0.0128, respectively; no value was statistically significant. DCA excretion amounts did not vary between breastfed and formula-fed infants. Our data suggest that clinicians should not expect elevated dicarboxylic aciduria in infants who are fed a standard formula without added MCT oil. |
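Spearman's rank correlation, reported in the abstract above, is simply Pearson's correlation computed on ranks. A minimal sketch with invented numbers (the values below are illustrative, not the study's data):

```python
import numpy as np

def spearman_r(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks
    (double argsort gives 0-based ranks; assumes no ties)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Illustrative data only: percent MCT in formula vs. urinary adipic
# acid (mmol/mol creatinine); values are invented, not from the study.
pct_mct = np.array([0, 10, 20, 30, 40, 50], dtype=float)
adipic = np.array([30.1, 33.0, 29.5, 31.2, 32.4, 30.8])

r = spearman_r(pct_mct, adipic)
print(f"Spearman r = {r:.3f}")  # ≈ 0.143 here: weak, like the study's near-zero correlations
```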
pragma solidity 0.4.23;
// SampleToken: a minimal mintable ERC20. OpenZeppelin's MintableToken
// supplies an owner-restricted mint() plus the standard ERC20 interface.
import 'openzeppelin-solidity/contracts/token/ERC20/MintableToken.sol';
contract SampleToken is MintableToken {
string public name = "SAMPLE TOKEN";
string public symbol = "SAM";
uint8 public decimals = 18;
}
|
Highly selective oxidation of styrene to benzaldehyde over a tailor-made cobalt oxide encapsulated zeolite catalyst.
A tailor-made catalyst with cobalt oxide particles encapsulated in ZSM-5 zeolites (Co3O4@HZSM-5) was prepared via a hydrothermal method, with the conventional impregnated Co3O4/SiO2 catalyst serving as the precursor and Si source. Various characterization results show that the Co3O4@HZSM-5 catalyst has a well-organized structure, with Co3O4 particles compatibly encapsulated in the zeolite crystals. The Co3O4@HZSM-5 catalyst was employed as an efficient catalyst for the selective oxidation of styrene to benzaldehyde with hydrogen peroxide as a green and economical oxidant. The effects of various reaction conditions, including reaction time, reaction temperature, different kinds of solvents, styrene/H2O2 molar ratio and catalyst dosage, on the catalytic performance were systematically investigated. Under the optimized reaction conditions, the yield of benzaldehyde reached 78.9%, with 96.8% styrene conversion and 81.5% benzaldehyde selectivity. Such excellent catalytic performance can be attributed to the synergistic effect between the confined reaction environment and the proper acidic property. In addition, a reaction mechanism for the selective oxidation of styrene to benzaldehyde over Co3O4@HZSM-5 was reasonably proposed. |
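As a quick consistency check on the figures reported above, yield is the product of conversion and selectivity:

```python
conversion = 0.968   # styrene conversion reported in the abstract
selectivity = 0.815  # benzaldehyde selectivity reported in the abstract

# Yield = conversion x selectivity under the usual definition.
yield_frac = conversion * selectivity
print(f"benzaldehyde yield = {yield_frac:.1%}")  # 78.9%, matching the abstract
```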
Calgary floods: Residents grapple with devastation in Alberta
The tip of the city’s iconic hat-brim-shaped Saddledome rose above the water on what was otherwise a dead street on Friday: blank traffic lights, closed shops and a creeping, muddy current that was all but impossible to pass.
That was the view from the Red Mile, the bar-filled strip of 17th Ave. that fills with revelers every time that arena lets loose its fans after a game.
The Saddledome was more than three blocks away. Surrounded by swift, cold brown river water, that was as close as a civilian could get to it.
While most heeded Mayor Naheed Nenshi’s orders to stay off the streets, a few decided instead to grab umbrellas, boots and rain jackets in order to survey the damage wreaked by the muddy brown force of the flash floods. Across the city, clusters of people, safe on high ground or at the water’s edge, marvelled at the damage, took pictures with their camera phones and shook their heads.
“I couldn’t find a Starbucks that’s open and I found myself driving west at 17th Ave. and here I am. At a river,” said Sharon Shupe, who works at the casino on the Stampede grounds next to the Saddledome.
She was asked to evacuate her workplace Thursday at 5 p.m.
By morning, she was not able to get close enough to even see it. The city confirmed the arena had been filled with water to the 10th row. The Stampede grounds seemed unreachable by foot or car.
That kind of damage “paints a very clear picture of what kinds of volumes of water that we’re dealing with,’’ said Deputy Police Chief Trevor Daroux. “This is not simply something we can pump out.”
Ms. Shupe said the whole scene was “surreal. It’s very surreal. There’s a log in the middle of the road. My workplace is flooded and people I know lost their homes in High River. A friend lives in Bragg Creek, or she did. I haven’t been able to contact her to see if she’s OK or what. It’s totally insane. I’ve never seen anything like this. Look, there’s debris floating down Macleod Trail right now.”
After a tense night of creeping rivers, city officials spent Friday struggling to keep Calgarians away from the water that showed no signs of abating by afternoon.
The city cut power and gas to many homes. Across the city, the sound of sirens was unavoidable.
Several residential neighbourhoods were drowned by several feet of river water. Downtown streets were made impassable by churning muck.
In Mission, an enclave in the city’s southeast, flood water lapped at the entrance of a local diner. “I think it’s just a matter of time before we start to flood,” said owner Mhairi O’Donnell, 33, watching from behind barricades.
She said she received an evacuation notice from the city on Thursday at 3 p.m., a few hours before the street began to flood. Friday morning she returned to the diner to retrieve cash and place stock items on higher surfaces. Ms. O’Donnell, who said she opened the Mission Diner more than two years ago, held back tears as the water advanced.
“I have no idea what to do. No idea at all,” she said.
Residents who had not yet been evacuated were expecting to leave Friday night. Stephane Orr, 27, pointed to her house on the edge of the evacuation zone. “I’m terrified. This is really scary,” she said.
Across the city, those caught in a second wave of evacuations were making contingency plans. Prateek Bhatnagar, 26, fled his downtown condo unit wearing flip-flops and shorts, with a backpack slung over his shoulder.
He said he planned to stay with an uncle in the city’s northeast. The water had flooded his building’s parking garage. “My car is pretty much done now,” he said, in disbelief. “I’ve never seen anything like this.”
In Calgary’s Chinatown, Tsz-Yee John Chiu, 49, came to check on his elderly father. The city, he said, “should have been more prepared with sandbags” after the 2005 flood. During that deluge, “It rained for a month,” he recalled.
At the Wah Hing Meat Shop, manager Leon Chui, 50, was frantically loading boxes of raw chicken into the back of a refrigerated van. “We can’t do anything” without power, he said. “It’s so terrible.”
The prized residential neighbourhood of Bowness was particularly hard hit, with several feet of fast-moving river water covering the streets around many of the well-manicured homes that once held pride of place on the river bank.
Wandering across a precarious pedestrian bridge barely higher than the rapids, James Hummelt and his wife, Rhonda, watched the river’s rise, fearing the fate of their home across the water.
“It’s starting to rise again,” he said. “We were here this morning and it had gone down, but the water is coming up again. It’s probably only 6 or 8 feet on the east side of the river here and if it should reach that, it would flood Montgomery. They shut the power off. We have a sump pump, but if it comes in our basement, we’re pretty much done.”
Mr. Hummelt said he grew up in Bowness and had never seen anything like this.
“When the floods hit in ‘95, we were nervous but we were not afraid. I think we’re actually afraid,” Mrs. Hummelt said. “We’re all used to having stuff happen in our lives. We’re resilient people, we’ll rebuild. But it’s going to take a lot more time to recover from something like this. It’s going to be devastating for large areas of the city, seeing how much we’ve lost in such a short period of time.”
Stephen Harper said that, as an Albertan, he had never imagined there could be a flood of such magnitude in this part of Canada.
As he toured the region’s hardest-hit areas, the prime minister urged residents to stay optimistic through this “very difficult time.”
For now it appears the flooding has peaked and stabilized, but there are always fears that more water could have an impact on infrastructure, he said. |
Q:
Can I use PCA (or should I use regression) for testing the effect of multiple variables on one dependent variable?
I have 2000 soil property measures and 14 different variables like rainfall, temperature, slope, etc. I want to check the effect of those 14 variables on soil property measures, including which variable affects my soil property the most. Is PCA a suitable solution to this? Or should I try multiple regression instead?
A:
Though multiple regression is generally better, allowing you to extract more meaningful and interpretable information from your model, it does have its weaknesses. It may not do a very good job if you are expecting interactions or nonlinear shapes, because you may not have sufficient power even with 2000 points (I would do a power analysis to be confident about this). It's also difficult if you don't have clear hypotheses about those interactions or the shapes of the responses.
However, you probably should be most concerned about multicollinearity, i.e. non-trivial correlations among your 14 explanatory variables. This last case is where a PCA is likely to be helpful. It allows you to break apart correlated variables into orthogonal vectors that can then be used without problems in regressions. These regressions may be single or multiple - you can, after all, use more than one PCA axis as a predictor variable.
So to sum up: it's not an either/or. Check for multicollinearity, and if it is present, you can do a PCA followed by a multiple regression.
EDIT based on Ian_Fin's suggestion below.
It is possible to interpret the variables in a multiple regression that uses PCA axes as explanatory variables, because the PCA tells you the strength and direction of each underlying variable's contribution to each axis (the loadings). In practice, though, this can be tricky and will depend on the underlying correlation structure.
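The workflow in this answer can be sketched with plain numpy. Everything below is illustrative (simulated data, 14 predictors driven by 3 latent factors, and an arbitrary choice of 3 retained axes), not an analysis of the asker's soil data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulate 14 correlated environmental predictors (stand-ins for
# rainfall, temperature, slope, ...) plus a soil-property response.
latent = rng.normal(size=(n, 3))
X = latent @ rng.normal(size=(3, 14)) + 0.5 * rng.normal(size=(n, 14))
y = 2.0 * latent[:, 0] - 1.0 * latent[:, 1] + rng.normal(size=n)

# Step 1: check multicollinearity via the predictor correlation matrix.
corr = np.corrcoef(X, rowvar=False)
off_diag = corr[~np.eye(14, dtype=bool)]
print("max |r| among predictors:", np.abs(off_diag).max())

# Step 2: PCA by SVD of the standardized predictors; the resulting
# component scores are orthogonal, so they can be used together in a
# regression without multicollinearity problems.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U * s      # principal-component scores (orthogonal columns)
loadings = Vt.T     # per-variable loadings: used to interpret each axis

# Step 3: multiple regression of the response on the leading PCs.
k = 3
A = np.column_stack([np.ones(n), scores[:, :k]])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept + coefficients on PC1..PC%d:" % k, beta)
```

The `loadings` matrix is what the EDIT above refers to: each column tells you the strength and direction of each original variable's contribution to that axis.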
|
Captain Britain Corps
The Captain Britain Corps is a fictional league of super-heroes appearing in American comic books published by Marvel Comics. The characters are all known as, or appear as an alternative version of, Captain Britain. They are all essentially the same hero except they each come from an alternative reality.
Fictional team history
Founded by Merlyn, his daughter Roma and Sir James Braddock, the Corps' main duty was to guard the Multiverse. Each member protected his or her reality based on their dimensional equivalent of Britain, and was powered by the friction between dimensions.
Merlyn and Roma arranged for each chosen member of the Corps to gain superpowers, using any means, disguise or otherwise, possible.
The making of Brian Braddock
Merlyn chose Brian Braddock to become Earth-616's version of Captain Britain. After Braddock had adventured as Captain Britain for a few years, Merlyn sent him to Earth-238, where he overheard that this reality had once had a Captain UK; however, Braddock was unsure what this meant until he met Captain England and Captain Albion at a trial in another reality.
Back in his own home world, Braddock allied himself with Captain UK, who had made her home there after her Britain had become a fascist state and the heroes living there were targeted for a genocidal purge. Together they fought the hero-killer Fury, after which Captains Britain and UK and their new ally Saturnyne were transported to Merlyn's home, Otherworld. Here they discovered that Merlyn had apparently died; they later discovered that it was one of his many ruses.
Otherworld
For what seems like the first time, the Corps in its entirety attended Merlyn's funeral. Afterwards Roma began taking a more direct approach with the Corps, even making Saturnyne her subordinate (mainly to keep an eye on her). She started bringing Corps members to the Starlight Citadel for training. Roma then added another duty to the Corps' list of responsibilities: in addition to protecting their home realities, members must also take turns defending Otherworld.
During the adventures of Captain Britain (Braddock), Corpsmen would occasionally appear. These appearances are usually to observe important events (such as the wedding of Meggan and Braddock and the conclusion of the Cross-Time Caper) or to carry out a sentence, as when they acted as jury at Braddock’s trial for breaching the Corps Code of Conduct.
Franklin Richards
When Roma perceived Franklin Richards to be a threat, not only to his home reality (Earth-616) but to all of reality, she dispatched the Warwolves, Gatecrasher and her Technet to kidnap him. When her plan was being opposed by the Fantastic Four and Alyssa Moy, who was babysitting Franklin at the time, Roma teleported them all to Otherworld to face the full fury of the entire Corps.
Although they were hampered by having never worked as a team, the Corps eventually started wearing down the heroes, until Franklin used his reality manipulating mutant powers to supercharge his family who defeated the entire Captain Britain Corps. After a brief debate with Human Torch, Roma agreed that Franklin should be left with his family.
However, it was suggested that the entire kidnapping was just a ruse to let Caledonia, a former prisoner of Roma's Starlight Citadel, infiltrate the Fantastic Four's home as Franklin's nanny to prepare them for their forthcoming battle with Abraxas.
Near destruction
The Corps was nearly wiped out by Mastermind, a villainous computer belonging to Brian Braddock, and a group of mutated children known as the Warpies (victims of the Jaspers' Warp), who were once the wards of Captain UK. Roma stepped down as omniversal guardian, giving the title to Brian Braddock, who became King of Otherworld and rebuilt the Corps.
When the insane mutant Wanda Maximoff altered reality in House of M, another wave of destruction tore through Otherworld. Roma and Saturnyne, in an effort to save the omniverse, gave Brian 48 hours to fix the tear in reality, or they would erase his Earth completely. With the sacrifice of Meggan, the heroes were able to seal the tear.
The Corps rebuilt its ranks, but it once again came under attack, this time from Mad Jim Jaspers and Corps members whom he had begun to turn into Furys. The end of the battle saw Roma dead, along with most of the Corps. Saturnyne appointed Albion leader and told Captain Britain to stay behind and keep an eye on his own reality while they rebuilt the Corps once again.
Uncanny X-Force
The Captain Britain Corps comes to Earth to arrest Fantomex for the murder of the young Apocalypse. Fantomex justifies his actions by arguing that the young Apocalypse would have become evil, but Captain Britain suggests he never had a chance of redemption. During the trial, the "Goat-Devil" lays siege to Otherworld and the Uncanny X-Force comes to save Fantomex from execution. Psylocke then realizes the Goat-Devil is actually her older brother from the future and tells Captain Britain to kill him. Captain Britain cannot do it himself, but lowers his psychic defenses and allows Psylocke to control him and kill Jamie Braddock, thus erasing The Goat from the future and ending the siege.
Time Runs Out
During the events of "Time Runs Out", the Captain Britain Corps investigate universal Incursions which are causing the destruction of various realities, and the deaths of twenty Corpsmen. After the members of the Corps capture a Mapmaker, the Ivory Kings send their entire forces to overrun the Starlight Citadel, destroying the entire Corps. Saturnyne is able to teleport Brian Braddock to safety, leaving him as the Corps' only survivor.
Membership
Although members usually call themselves simply "The Corps", others attach a name to it, for example "Crusader X Corps" or the most prevalent "Captain Britain Corps", which has been the common name since Braddock became King of Otherworld.
The Captain Britain Corps spans the multiverse and has a huge roster of members, most of whom go unnamed. The Corps consists not only of alternative versions of Brian Braddock from throughout the multiverse, but also of characters representing a vast array of different worlds, such as Hauptmann Englande, a member representing a world where the Nazis won World War II, and Captain Airstrip-One, from a world based on George Orwell's novel Nineteen Eighty-Four. Some members even vary in species, ranging from an animal version to even more exotic forms. Others, like Captain U.K., have served across many worlds: she provided support for Braddock on Earth-616 and replaced the fallen Captains of Earth-794 and, later, Earth-839.
Membership is depleted following the attack by Mastermind and the Warpies, M-Day, and Jaspers' return, with the ranks slowly being rebuilt, first under the command of Brian Braddock (Earth-616) and currently under Albion.
Known current members
Former members
Those members who have left or are known to have died:
Unconfirmed
Lists members whose status is unclear; the Corps was decimated in X-Men: Die by the Sword, and the full aftermath of that incident has yet to be shown.
Agent Albion (Victoria Whitman) (Earth-10221) - Excalibur vol. 2 #1 (2001)
Anglo-Simian (Joseph Cornelius) (Earth-5905) - Excalibur vol. 1 #44 (1991)
Britannic (Brian Braddock) (Earth-28927) - Excalibur Annual #2
Britanicus Rex (Brian Braddock) (Earth-99476) - Excalibur vol. 1 #51 - He resided in the dimension also known as Dino-World.
Britanotron - Excalibur #43
Caledonia (Alysande Stuart) (Earth-9809) - Fantastic Four vol. 3 #9 - She was a prisoner in the Starlight Citadel before becoming Franklin Richards' nanny on Earth-616 as well as a spy for Roma.
Cap'n Saxonia (Frideswide Lawley) (Earth-924) - Excalibur #49 - Also a member of Calibur alongside that dimension's versions of Spider-Girl, Iron Fist, Hulk and Dr. Strange. She was sometimes known as Captain Saxonia.
Captain Albion (Katherine Huggen) (Earth-523) - Daredevils #6
Captain Angleterre (Paul-Henri Spencer) (Earth-305) - Mighty World of Marvel vol. 2 #13 (1984)
Captain Britain (Meggan) (Earth-1189) - Excalibur vol. 1 #44 - Her world was devastated by nuclear war. She took over the mantle after her version of Braddock died and became part of the Corps.
Captain Britain (Brian Braddock) (Earth-3913) - Excalibur vol. 1 #44 (1991) - He was accused of murdering a police officer.
Captain Britain (Brian Braddock) (Earth-4400) - Exiles #43
Captain Britain (Brian Braddock) (Earth-7475) - Alpha Flight #74 - Runs the common market, all of Western Europe and North Africa.
Captain Britain (Brian Braddock) (Earth-8545) - Exiles #20
Captain Britain (Betsy Braddock) (Earth-9012) - Excalibur vol. 1 #43 (1991)
Captain Britain (Brian Braddock) (Earth-9411) - Spectacular Spider-Man vol. 2 #114
Captain Britain (Brian Braddock) (Earth-21993) - What If? vol. 2 #46
Captain Britain (Brian Braddock) (Earth-32000) - X-Men Unlimited #26
Captain Britain (Brian Braddock) (Earth-98125) - Marvel Vision #25 - He chose both the Amulet of Life and the Sword of Death.
King Britain (Brian Braddock) (Earth-9997) - Paradise X: X - Captain Britain became King of England and resides in the Realm of the Dead.
Captain Colonies (Stephen Rogers) - Excalibur vol. 1 #44.
Captain Cymru (Morwen Powell) (Earth-1282) - Excalibur vol. 1 #24 - One of the few known Captains who uses a gun with Plastrix.
Centurion Britannus (Thracius Scipio Magnus) (Earth-4100) - Excalibur vol. 1 #24 (1990) - His costume resembles that of the Roman Empire. He invokes Mithras, a god worshiped in both India and Ancient Rome.
Centurionous Britainicosarus (Magnus Rex) (Earth-6993) - Excalibur vol. 1 #44 (1991)
Chevalier Bretagne (René de Bragelonne) (Earth-1508) - Excalibur vol. 1 #24 (1990) - He wears a purple and green suit similar to a Musketeer.
Chieftain Justice (T'Challa) (Earth-6606) - Excalibur vol. 1 #44 (1991)
Crusader X (Bran Braddock) (Earth-2122) - Excalibur vol. 1 #21 - (1990)
Enforcer Capone (Adolfo Costa) (Earth-89947) - Excalibur vol. 1 #44 (1991)
Friar Albion (Petros Wisdom) (Earth-9586) - Excalibur vol. 1 #44 (1991)
Gizmo (William "Billy" Ransom) (Earth-40121) - Excalibur vol. 2 #1
Gotowar Konanegg (Kavin Plundarr) (Earth-8413) - Mighty World Of Marvel vol. 2 #13 (1984)
Kommandant Englander (Helga Geering) (Earth-846) - Mighty World Of Marvel vol. 2 #13 - She is from a German dominated world.
Lady London (Sybil Sherman) (Earth-9006) - Excalibur vol. 1 #24 (1990)
Maasai Marion (Sadiki Namuntaya) (Earth-1857) - Excalibur vol. 1 #43 (1991)
Madam Sussex (Francesca Grace) (Earth-4811) - Excalibur vol. 1 #44 (1991)
Maid Britannia (Guinevere Wren) (Earth-8406) - Mighty World Of Marvel vol. 2 #13
Major Commonwealth (Byron Falsworth) (Earth-4904) - Mighty World Of Marvel vol. 2 #13 (1984)
Mercian Marsh'al (C'rta M'ller) (Earth-5511) - Excalibur vol. 1 #44 (1991)
Officer Saxon (Peter Hunter) (Earth-9106) - Excalibur vol. 1 #43 (1991)
Percy Penfold (Earth-81289) - Excalibur vol. 1 #44 (1991)
Pookie Pendragon (Kozfran) (Earth-9246) - Excalibur vol. 1 #24 (1990)
Privateer Albion (Jack Turner) (Earth-9890) - Excalibur vol. 1 #124
Rifleman (Lance Hunter) (Earth-22110) - Excalibur vol. 2 #1
Right Honorable Captain Winston Faneshawe-Sinclair (Earth-3208) - Excalibur vol. 1 #44 (1991)
Samurai Saxonai (Kendra Matsumoto) (Earth-6315) - Excalibur vol. 1 #44 (1991)
Sister Gaia (Serena Foster) (Earth-9111) - Excalibur vol. 1 #44 (1991)
Skrull Lord: Colony UK7 (Kl'rt) (Earth-6309) - Excalibur vol. 1 #49
Will Of The People (John Raven) (Earth-7305) - Excalibur vol. 1 #50
Unknown association
Characters that have taken the name and role of Captain Britain but have not been stated as being part of the corps.
Captain Britain (Kymri) (Earth-1289) - Excalibur vol. 1 #16 - She and Lockheed jointly took the mantle of Captain Britain. Her planet was conquered and her people enslaved. She was bound to Kyllian as his personal hound by Tullamore Vogue.
Captain Britain (Lockheed) (Earth-1289) - Excalibur vol. 1 #16 - He and Kymri jointly took the mantle of Captain Britain.
Captain Britain (Brian Braddock) (Earth-1610) - The Ultimate version
Captain Britain (Brian Braddock) (Earth-2149) - Marvel Zombies #2 - was infected with the zombie virus by Quicksilver.
References
External links
Captain Britain Corps at Alternity
Captain Britain Corps at Comic Vine
Captain Britain Corps at the Marvel Database Project
Captain Britain Corps' Room
Category:Marvel UK teams
Category:Alternative versions of comics characters
Category:British superheroes
Category:United Kingdom-themed superheroes
Category:Comics characters introduced in 1984 |
Introduction
============
Renal cell carcinoma (RCC) is divided into clear-cell, papillary, oncocytoma and collecting duct subtypes, which exhibit different invasive and metastatic potentials ([@b1-ol-08-05-2175]). The clear-cell carcinoma subtype represents \<85% of reported cases, according to the United States National Centre for Health Statistics report ([@b2-ol-08-05-2175]). Recurrence of the disease following surgery is observed in one-third of cases, and one-fourth of patients exhibit metastatic disease at the time of diagnosis ([@b1-ol-08-05-2175],[@b2-ol-08-05-2175]). RCC metastases are often regarded as radioresistant tumors, as was observed in the present case ([@b2-ol-08-05-2175]--[@b4-ol-08-05-2175]). For this reason, metastases are usually treated with relatively high biologically effective doses. Metastases to adjacent organs and bone are common, but distant metastases to the head and neck region are rare; among the previously reported cases, the facial skin has been the most common location. The present study describes a case of rapidly growing, radiotherapy-resistant RCC metastasis to the lower lip and chin that was treated with surgery. The functional and esthetic outcome was satisfactory despite the large defect generated by the metastasis resection. This case provides evidence that palliative surgery may achieve a higher quality of life for end-stage oncological patients.
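The "biologically effective dose" (BED) mentioned above comes from the linear-quadratic model. As a hedged illustration (the α/β = 10 Gy value is a conventional assumption for tumor tissue, not stated in this report), the palliative 15 Gy / 5-fraction schedule described in the case report works out to a modest BED, consistent with the poor response observed:

```python
def bed(total_dose_gy, n_fractions, alpha_beta=10.0):
    """Biologically effective dose under the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta)), where d is dose per fraction."""
    d = total_dose_gy / n_fractions
    return total_dose_gy * (1.0 + d / alpha_beta)

# Split-course palliative schedule reported in this case: 15 Gy in 5 fractions.
print(bed(15, 5))  # 15 * (1 + 3/10) = 19.5 Gy
```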
Case report
===========
The current study presents the case of a 71-year-old male patient who was diagnosed with RCC in September 2011. At that time, the disease was at an advanced stage. The primary tumor in the lower pole of the right kidney was infiltrating the adjacent structures, and the patient exhibited synchronous mediastinal and pleural metastases, with the latter causing persistent pleural effusion and markedly declining lung function. Due to the poor performance status and risk of side effects, the patient refused to initiate disease-controlling sunitinib treatment and chose to proceed to optimal supportive care. The patient presented with subcutaneous metastases to the lower lip and back of the neck 11 months after the diagnosis. The patient received palliative radiotherapy (split course, 15/5 Gy) to the rapidly growing lower lip metastasis. The tumor diameter was 1.5 cm when the treatment was initiated. However, no clinical response to radiotherapy was obtained, and three weeks following the treatment the tumor had more than tripled in diameter. Thus, the patient was evaluated at the Department of Oral and Maxillofacial Diseases (Helsinki University; Helsinki, Finland). At the time of admission the patient had a spontaneously bleeding mass (size, 60×60 mm) in the lower lip and the anterior mandible area (Fig. 1A). In addition, there was a group of smaller subcutaneous metastases located in the nuchal area, which did not cause symptoms. Resection of the lip metastasis was performed with 5-mm clinical margins, and for this reason the resection was extended to the bony surface of the mandible. The lower lip was also partially resected, as the small subcutaneous metastases had continued to spread into the lip mucosa (Fig. 1B).
To prevent wound tension following closure, the skin was dissected subcutaneously from the resection line to the upper neck, pulled over the chin to cover visible bone, and resuspended with transcutaneous sutures to the titanium plate (MatrixMFACE Plating System; Synthes Holding AG, Solothurn, Switzerland) in the mandible ([Fig. 1B and C](#f1-ol-08-05-2175){ref-type="fig"}). The patient was satisfied with the outcome at the three-week postoperative follow-up and no clinical sign of recurrence was observed ([Fig. 1D](#f1-ol-08-05-2175){ref-type="fig"}). Histological examination via immunohistochemical staining ([Fig. 2](#f2-ol-08-05-2175){ref-type="fig"}) identified the tumor as metastatic RCC and the mass was resected with clear lateral margins.
Discussion
==========
RCC commonly metastasizes to adjacent organs, and up to one-fourth of patients have metastases present at the time of diagnosis ([@b1-ol-08-05-2175],[@b2-ol-08-05-2175]). Four major subtypes of RCC exist (clear-cell, papillary, oncocytoma and collecting duct carcinoma), with different invasion and metastatic potentials; however, none of them have been reported to be particularly invasive to the head and neck region ([@b1-ol-08-05-2175],[@b2-ol-08-05-2175]). Of the 75 previously reported cases of metastatic RCC to the head and neck region, the majority occurred in patients already diagnosed with RCC; however, certain patients exhibited oral metastasis as the initial manifestation of the disease. This highlights the importance of full-body imaging in patients who have previously undergone surgery for head and neck neoplasms, to avoid inaccurately diagnosing a newly formed metastasis as the recurrence of a former tumor. A third of the previously identified cases of head and neck RCC metastases have been reported in the facial skin area ([@b4-ol-08-05-2175]--[@b11-ol-08-05-2175]), although the parotid gland, paranasal sinuses and tongue are also common locations. In addition, single cases of nephroblastoma (also termed Wilms' tumor) and renal sarcomas have been reported in the head and neck area ([@b12-ol-08-05-2175],[@b13-ol-08-05-2175]). According to earlier reports, none of the RCC subtypes preferentially metastasize to the head and neck area. The locations of previously reported metastases are listed in [Table I](#tI-ol-08-05-2175){ref-type="table"} ([@b4-ol-08-05-2175]--[@b51-ol-08-05-2175]).
In conclusion, surgery is rarely the first option when treating RCC patients with multiple metastases. However, it is important to consider palliative surgery for certain patients, as surgical management of the metastasis may provide an improved quality of life although this type of surgery does not affect the final outcome.
{#f1-ol-08-05-2175}
{#f2-ol-08-05-2175}
######
Sites of renal cell carcinoma metastases in the head and neck area obtained from previous studies.
Location Cases, n (refs)
---------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Skin and subcutaneous lymph nodes 24 ([@b4-ol-08-05-2175]--[@b11-ol-08-05-2175])
Parotid gland 10 ([@b10-ol-08-05-2175],[@b18-ol-08-05-2175],[@b22-ol-08-05-2175],[@b24-ol-08-05-2175],[@b33-ol-08-05-2175],[@b35-ol-08-05-2175],[@b39-ol-08-05-2175],[@b41-ol-08-05-2175],[@b45-ol-08-05-2175])
Tongue 8 ([@b14-ol-08-05-2175],[@b15-ol-08-05-2175],[@b19-ol-08-05-2175],[@b23-ol-08-05-2175],[@b31-ol-08-05-2175],[@b36-ol-08-05-2175],[@b48-ol-08-05-2175],[@b51-ol-08-05-2175])
Oral mucosa 6 ([@b20-ol-08-05-2175],[@b29-ol-08-05-2175],[@b30-ol-08-05-2175],[@b34-ol-08-05-2175],[@b36-ol-08-05-2175],[@b40-ol-08-05-2175])
Tonsils, facial muscles and oropharynx 9 ([@b21-ol-08-05-2175],[@b27-ol-08-05-2175],[@b32-ol-08-05-2175],[@b34-ol-08-05-2175],[@b36-ol-08-05-2175],[@b40-ol-08-05-2175],[@b50-ol-08-05-2175])
Nasal cavity and paranasal sinuses 10 ([@b25-ol-08-05-2175],[@b26-ol-08-05-2175],[@b28-ol-08-05-2175],[@b36-ol-08-05-2175],[@b39-ol-08-05-2175],[@b42-ol-08-05-2175],[@b47-ol-08-05-2175],[@b49-ol-08-05-2175])
Orbit 3 ([@b15-ol-08-05-2175],[@b17-ol-08-05-2175],[@b43-ol-08-05-2175])
Mandible 3 ([@b12-ol-08-05-2175],[@b13-ol-08-05-2175],[@b43-ol-08-05-2175])
Maxilla 2 ([@b8-ol-08-05-2175],[@b16-ol-08-05-2175])
The primary location of the renal cell carcinoma was included in the table when various locations were stated in a single study.
|
The Kartell TipTop is a small side table with a single base supporting the round top. It is designed by Philippe Starck and Eugeni Quitllet. The lovely combination of the colored top and the hollow transparent leg infuses elegance into the surroundings while bringing uniqueness to the structure. The top of the table comes in a vast range of colors, both transparent and matte. The table is distinguished by its central hollow transparent leg. Small and light, the Kartell TipTop Table can... Top Material Details: PMMA Base Material Details: PMMA Pieces Included: N/A
This product is faced with walnut on one side and walnut-beech on the other. Custom cutting is done on a CNC machine. The feet are beech, shaped on a lathe. Pieces Included: 3 Nesting tables
Inspired by the sun-drenched gardens of central Italy, this 2-tiered end table offers antique appeal with an elegant twist. Set the tone with lemon leaf topiaries, antiqued planters, and cutout garden stools, then elevate the look with scrolling and shimmering accents. Base Material Details: Iron Number of Shelves: 1
Evoking the look of lava lamps or blown glass art, this Yareli Evergreen End Table brings a vibrant pop of color and dimension to your living space. The teardrop design of the acrylic base allows the color to pool at the bottom and create an ombre effect. The acrylic sits on a chrome metal circle, which complements the circular shape of the clear tempered glass top. Top Material Details: Tempered glass
Reaching upward in exuberance to welcome each moment, this is the perfect End Table to maximize your rush of ideas. A complete accent piece for your beverage of choice or electronic device, the End Table can be swiftly positioned for both spontaneous and planned events. Made of sturdy fiberglass with a compact design, this is a unique End Table that will fit into any niche with finesse. Top Material Details: Fiberglass Base Material Details: Fiberglass
The Palmer collection is a unique assortment of richly grained teak tables with a sandblasted mink finish. Each piece of the collection is carefully designed with updated silhouettes that embody subtle refinement. Pieces Included: 1
Glimmering and glamorous, this ultra-chic end table is a surefire scene stealer! Crafted from solid and manufactured woods in a matte silver finish, this dapper design strikes a round silhouette with three curvy saber legs with modern claw feet. The lovely lower shelf is ideal for displaying a framed photo or holding a miniature bouquet of roses, while the round tabletop, with its beveled mirror insert and hand-placed mirrored mosaic tiles, provides the perfect platform for reflecting the... Number of Shelves: 1
Inspired by mid-century Metro styling, this refined end table is a striking piece for any living room. Top Material Details: Hardwoods, cherry veneers and glass Base Material Details: Hardwoods, cherry veneers
Nestled one inside the other or separated to put in different spots, the ease and convenience of Candy Nesting Tables has long been a household favorite. And now Latitude Run has given them a new contemporary look with round beveled white marble tops held aloft by contrasting Ponga black metal bases in a unique open caged frame. Ideal for any room, any decor and any time. Base Material Details: Iron Pieces Included: Small, medium and large nesting tables
An abstract-inspired design, this Presidio End Table will definitely stand out in any area. Featuring a clear glass table top with a sturdy metal base in an infinity design, the table is not only functional but also an artwork. Top Material Details: Tempered glass
Inspired by the vintage starburst motifs prevalent in many 1950s table top designs, the Zila collection translates that bright, mid-century sparkle into contemporary décor. Each clear, tempered glass disk is delicately suspended atop an interlaced metal strut base. The feeling is fresh, airy and unmistakably original. Top Material Details: Tempered glass |
# Copyright 1999-2016 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
EAPI=6
# ebuild generated by hackport 0.5.1.9999
CABAL_FEATURES="lib profile haddock hoogle hscolour"
inherit haskell-cabal
DESCRIPTION="Dependent finite maps (partial dependent products)"
HOMEPAGE="https://github.com/mokus0/dependent-map"
SRC_URI="https://hackage.haskell.org/package/${P}/${P}.tar.gz"
LICENSE="BSD-2"
SLOT="0/${PV}"
KEYWORDS="~amd64 ~x86"
IUSE=""
RDEPEND=">=dev-haskell/dependent-sum-0.3.2:=[profile?]
dev-haskell/semigroups:=[profile?]
>=dev-lang/ghc-7.4.1:=
"
DEPEND="${RDEPEND}
>=dev-haskell/cabal-1.6
"
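For readers unfamiliar with ebuild syntax, `${P}` in `SRC_URI` is the Portage-defined package name-version string. A minimal shell sketch of how those variables compose the download URI (the version number here is an assumption for illustration, not taken from the ebuild above):

```shell
# Illustrative expansion of Portage-style variables; PV is an assumed value.
PN="dependent-map"   # package name
PV="0.2.1.0"         # package version (hypothetical)
P="${PN}-${PV}"      # Portage defines P as "name-version"
SRC_URI="https://hackage.haskell.org/package/${P}/${P}.tar.gz"
echo "${SRC_URI}"
# → https://hackage.haskell.org/package/dependent-map-0.2.1.0/dependent-map-0.2.1.0.tar.gz
```

In the real ebuild, Portage derives `P` from the ebuild's filename, which is why the file itself never assigns it.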
|
[Characteristics of the label incorporation from 2-14C-glycine into the tissues of animals of differing ages].
Peculiarities of 14C incorporation into proteins and lipids of the liver, kidney, heart and muscle tissues, as well as blood serum and protein-free tissue filtrates, were studied in young and old rats at different periods after glycine-2-14C administration. It is shown that both the incorporation of 14C into the tissues of young rats and its elimination from the organism are more rapid. At the same time, the change in the dynamics of precursor content in individual organs is unidirectional in animals of both age groups. It is concluded that accelerated elimination of the radioactive label, due to the more rapid exchangeability of proteins in the tissues of young animals, might be one of the reasons for the discrepancy between the higher level of radioactivity in the tissues of old rats and the overall decrease in the intensity of metabolism during ontogenesis.
Effects of fluorination on iridium(III) complex phosphorescence: magnetic circular dichroism and relativistic time-dependent density functional theory.
We use a combination of low temperature, high field magnetic circular dichroism, absorption, and emission spectroscopy with relativistic time-dependent density functional calculations to reveal a subtle interplay between the effects of chemical substitution and spin-orbit coupling (SOC) in a family of iridium(III) complexes. Fluorination at the ortho and para positions of the phenyl group of fac-tris(1-methyl-5-phenyl-3-n-propyl-[1,2,4]triazolyl)iridium(III) causes changes that are independent of whether the other position is fluorinated or protonated. This is demonstrated by a simple linear relationship found for a range of measured and calculated properties of these complexes. Further, we show that the phosphorescent radiative rate, k(r), is determined by the degree to which SOC is able to hybridize T(1) to S(3) and that k(r) is proportional to the inverse fourth power of the energy gap between these excitations. We show that fluorination in the para position leads to a much larger increase of the energy gap than fluorination at the ortho position. Theory is used to trace this back to the fact that fluorination at the para position increases the difference in electron density between the phenyl and triazolyl groups, which distorts the complex further from octahedral symmetry, and increases the energy separation between the highest occupied molecular orbital (HOMO) and the HOMO-1. This provides a new design criterion for phosphorescent iridium(III) complexes for organic optoelectronic applications. In contrast, the nonradiative rate is greatly enhanced by fluorination at the ortho position. This may be connected to a significant redistribution of spectral weight. We also show that the lowest energy excitation, 1A, has almost no oscillator strength; therefore, the second lowest excitation, 2E, is the dominant emissive state at room temperature.
Nevertheless the mirror image rule between absorption and emission is obeyed, as 2E is responsible for both absorption and emission at all but very low (<10 K) temperatures. |
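The gap dependence of k(r) stated above can be framed with a standard first-order perturbative sketch. This is a hedged reconstruction in generic notation (the symbols below are not taken from the paper); note that the squared mixing coefficient alone yields only an inverse-square dependence, while the inverse-fourth-power law is the steeper scaling the study reports:

```latex
% Hedged sketch: first-order spin-orbit mixing of T_1 with S_3 (generic notation).
% |c_SO|^2 alone gives k_r ~ (Delta E)^{-2}; the (Delta E)^{-4} dependence is the
% empirical/computed result reported in the abstract above.
\begin{align}
  c_{\mathrm{SO}} &= \frac{\langle S_3|\hat{H}_{\mathrm{SO}}|T_1\rangle}{\Delta E},
  \qquad \Delta E = E_{S_3} - E_{T_1},\\
  k_r &\propto |c_{\mathrm{SO}}|^2\,|\mu_{S_3}|^2
       = \frac{|\langle S_3|\hat{H}_{\mathrm{SO}}|T_1\rangle|^2}{\Delta E^{2}}\,
         |\mu_{S_3}|^2 .
\end{align}
```

Here \(\mu_{S_3}\) is the transition dipole of the singlet that lends intensity to the phosphorescence; widening \(\Delta E\) (e.g. by para fluorination) therefore suppresses k(r).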
Elias Says...
MLB
GREINKE STRIKES OUT 13 IN FIVE-INNING STINT
From Elias: Zack Greinke registered 13 strikeouts but was pulled after five innings in the Angels' victory over the Mariners on Tuesday night. Greinke is the first pitcher since 1900 to strike out 13-or-more batters while pitching five-or-fewer innings in a game. Prior to Greinke, the most strikeouts ever recorded in a stint of five-or-fewer innings was 11, by 13 different pitchers.
Angels pitchers ended the contest with 20 total strikeouts, tying the most ever in a nine-inning game.
From Elias: David Price struck out 13 in a complete-game effort in Tampa Bay's 5-2 win over the Red Sox at Fenway Park on Tuesday night. Price is only the fourth visiting left-handed pitcher to strike out at least 13 Red Sox in a game at Fenway Park. The others to do that were the White Sox' Jack Harshman on July 25, 1954 (16 K's), Cleveland's Sam McDowell on June 23, 1971 (14 K's) and Seattle's Randy Johnson on April 10, 1998 (15 K's).
From Elias: Anibal Sanchez, acquired by the Tigers from the Marlins in July, pitched a three-hit shutout and struck out 10 in Detroit's 2-0 win over the Royals on Tuesday night. Sanchez is the first pitcher in the Tigers' franchise history to pitch a shutout with double-digit strikeouts after pitching for another team earlier in the season.
From Elias: Freddie Freeman, who celebrated his 23rd birthday just 13 days ago, hit a two-run home run in the bottom of the ninth inning to lift the Braves to a 4-3 win over the Marlins on Tuesday night. Over the last 30 seasons, only two Braves have hit a walk-off home run at a younger age than Freeman: Andruw Jones (twice: 1997 and 2000) and Jeff Francoeur (2006).
From Elias: Starter Aaron Laffey did not allow a run over 5.2 innings and was backed by five relief pitchers who completed the shutout in the Blue Jays' 4-0 win over the Orioles on Tuesday night. It marked only the second time in the Jays' franchise history that they used six pitchers in a shutout victory. Toronto also did that earlier this season (June 15) against the Phillies.
From Elias: Rookie Darin Ruf hit his first career home run in the second inning in the Phillies' 6-3 win over the Nationals on Tuesday night. Ruf became the 24th different Phillie to go deep in 2012, setting the franchise record for most players with at least one home run in a season, topping the previous mark of 23 in 1996.
Cueto
CUETO IS FIRST REDS PITCHER WITH 19 WINS SINCE 1988
From Elias: Johnny Cueto allowed two runs over seven innings and earned his 19th win of the season in the Reds' 4-2 victory over the Brewers on Tuesday night. Cueto is the first Reds pitcher to win 19 games in a season since Danny Jackson had 23 wins in 1988. Entering the 2012 season, only two major-league teams had a longer current streak of consecutive seasons without a 19-game winner than the Reds: the Nationals (30 straight before Gio Gonzalez did it this season) and the Brewers (26 straight).
From Elias: Garrett Jones blasted a home run in the top of the ninth inning, his 25th of the season, in the Pirates' 10-6 win over the Mets on Tuesday night. Jones joined teammates Andrew McCutchen (30) and Pedro Alvarez (30) in the 25 home-run club in 2012. It is only the second time in the Pirates' franchise history that they have had three players with 25+ home runs in the same season. In 1966, Roberto Clemente (29), Donn Clendenon (28) and Willie Stargell (33) all went deep at least 25 times for Pittsburgh.
From Elias: The Cardinals defeated the Astros by a score of 4-0 at Minute Maid Park on Tuesday night. St. Louis has now won each of the last 10 games it has played against Houston this season. Since divisional play began in 1969, the Cardinals have had only one other double-digit single-season winning streak against a particular opponent, beating the Pirates 10 straight times in 1985.
From Elias: Tim Lincecum was lit up for seven runs in four innings of work in the Giants' loss to the Diamondbacks on Tuesday night. Lincecum has now lost each of his last six decisions to Arizona dating back to August of 2011, his longest career losing streak against any opposing team. In fact, no other team has beaten Lincecum more than four consecutive times.
From Elias: Francisco Liriano was knocked out with two outs in the fourth inning and was saddled with the loss in the White Sox' 4-3 defeat at the hands of the Indians on Tuesday afternoon. It is the sixth time that Liriano lasted four-or-fewer innings in a start this season, the most such starts for any American League pitcher.
From Elias: The Yankees took a 3-1 lead into the bottom of the seventh inning, but surrendered their advantage by allowing four runs en route to a 5-4 loss to the Twins on Tuesday night. It is the 20th time that the Yankees have lost a game in which they have had a lead of at least two runs this season, tied with the Red Sox for the most such losses in the American League.
From Elias: George Kottaras, batting ninth in the A's batting order, hit a home run in the top of the tenth inning to snap a 2-2 tie and lead Oakland to a 3-2 victory over the Rangers on Tuesday night. Only two other A's that started the game in the ninth spot in the batting order have hit a go-ahead home run in extra innings since the team moved to Oakland in 1968: Kurt Suzuki against the White Sox on August 16, 2007 (10th inning) and Tony Phillips against Minnesota on August 6, 1983 (13th inning). |
Microfluidic fabrication of self-assembled peptide-polysaccharide microcapsules as 3D environments for cell culture.
We report a mild cell encapsulation method based on self-assembly and microfluidics technology. Xanthan gum, an anionic polysaccharide, was used to trigger the self-assembly of a positively charged multidomain peptide. The self-assembly resulted in the formation of a nanofibrous matrix and, using a microfluidic device, microcapsules of homogeneous size were fabricated. The properties and performance of xanthan-peptide microcapsules were optimized by changing the peptide/polysaccharide ratio, and the effects on microcapsule permeability and mechanical stability were analyzed. The effect of microcapsule formulation on the viability and proliferation of encapsulated chondrocytic (ATDC5) cells was also investigated. The encapsulated cells were metabolically active, showing increased viability and proliferation over 21 days of in vitro culture, demonstrating the long-term stability of the self-assembled microcapsules and their ability to support and enhance the survival of encapsulated cells over a prolonged time. Self-assembling materials combined with microfluidics proved to be an innovative approach to the fabrication of cytocompatible matrices for cell microencapsulation and delivery. |