The community meeting Wednesday night comes after WUSA9 broke news of high lead levels in 17 DC playgrounds.
WASHINGTON — Local parents attended a "Community Forum on the Safety of Playgrounds and Artificial Turf Fields in D.C.," sponsored by the Park View UNC and D.C. Safe Healthy Playing Fields on Wednesday.
The forum included panelists Diana Zuckerman with the National Center for Health Research, Neuroscientist Kathleen Michels, and Diana Conway, President of D.C. Safe Healthy Playing Fields.
Recycled tire pieces used on the ground for 17 D.C. playgrounds were found to have "actionable levels" of lead, according to a report released Sept. 20 by the District's Department of General Services.
This comes after independent labs, including the Ecology Center of Ann Arbor, Michigan, tested recycled tire crumb rubber at school playgrounds and found potentially toxic levels of lead at two elementary schools.
"In any given tire, maybe only 60 percent of it is actually made of rubber and the rest is hardening agents or additional chemical additives, bracing materials. You’ve got fiberglass. You've got zinc," said Kevin Bell of Public Employees for Environmental Reform.
"The reason why there has been no sort of broad ranging study on this is that a tire’s chemical makeup is confidential business information," Bell said. "Every tire manufacturing company keeps that a closely guarded secret, so the only way to find out what’s in it is to test every piece individually."
The new DGS report lists the following 17 playgrounds:
Aiton Elementary School: 533 48th Place Northeast
Bancroft Elementary School: 1755 Newton Street Northwest
Cardozo Education Campus: 1200 Clifton Street Northwest
Dorothy I. Height Elementary School: 1300 Allison Street Northwest
Eaton Elementary School: 3373 Van Ness Street Northwest
H.D. Cooke Elementary School: 2525 17th Street Northwest
Janney Elementary School: 4130 Albermarle Street Northwest
Langdon Educational Campus: 1900 Evarts Street Northwest
Nalle Elementary School: 219 50th Street Southeast
Oyster-Adams Bilingual School (Adams Campus): 2020 19th Street Northwest
River Terrace Education Campus: 405 Anacostia Ave. Northwest
Roosevelt High School: 2020 19th Street Northwest
Shepherd Elementary School: 7800 14th Street Northwest
Thomas Elementary School: 650 Anacostia Ave. Northwest
Thomson Elementary School: 1200 L Street Northwest
Truesdell Education Campus: 800 Ingraham Street Northwest
Turner Elementary School: 3264 Stanton Road Southeast
Schools rated as having the highest levels include Aiton, Janney, Thomas, Thomson, and Turner elementary schools, as well as Cardozo Education Campus. D.C. DGS says it pressure washed and vacuumed all 17 sites.
Jessica Goldkind, a first grade teacher at Bruce-Monroe Elementary School, recalled how her students tracked in crumb rubber: "Little kind of rubberish pebbles that they pick up. They get their hands in it. They get dirty. It ends up in the water fountain. Sometimes they bring some in. It gets in the classroom on occasion. It happened to me today."
"Reading that it can be harmful to students, I need to know if that’s the case, so I can advocate for my students," Goldkind said.
On Thursday, Oct. 3, D.C. Councilmember Robert C. White Jr., Chair of the Committee on Facilities and Procurement, will hold a Public Oversight Roundtable on Environmental Hazards in Recreational Spaces and the Facilities Management Division of the Department of General Services. The Public Oversight Roundtable will take place in Room 500 of the John A. Wilson Building, 1350 Pennsylvania Avenue, N.W., at 10:00 AM.
Q:
jQuery append Not Working in Double HTML
I have code with two steps:
Step 1: If the <span id="test"></span> has no text, then id='parent' will be removed with this code: $("#parent").remove();
Step 2: If id="container" has no text, then the text NO DATA will automatically be added with this code: $("#container").append("NO DATA");
Step 1 works well, but Step 2 is not working.
I also tried replacing append() with html(), but that did not work either.
What's the solution?
My Codes:
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id='container'>
<div id='parent'>
<b>Age:</b> <span id='test'></span>
</div>
</div>
<script>
if($("#test").html()==""){
$("#parent").remove();
}
if($("#container").html()==""){
$("#container").append("NO DATA");
}
</script>
A:
There is whitespace left in your container, so html() is not empty. You need to use $.trim() to remove the leftover whitespace. Here's a jsFiddle for you.
if($("#test").html()==""){
$("#parent").remove();
}
if($.trim($("#container").html())==""){
$("#container").append("NO DATA");
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id='container'>
<div id='parent'>
<b>Age:</b> <span id='test'></span>
</div>
</div>
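To see why the strict emptiness check fails, here is a small DOM-free sketch. It is illustrative only: plain strings stand in for what html() returns, and the names isEffectivelyEmpty and leftoverHtml are made up for the example.

```javascript
// Illustrative, DOM-free sketch: after $("#parent").remove(), the element is
// gone, but the newlines and indentation that surrounded it remain inside
// #container, so a strict comparison of html() with "" is false.

function isEffectivelyEmpty(html) {
  // Same idea as $.trim($("#container").html()) === ""
  return html.trim() === "";
}

// What #container's html() roughly looks like after the remove() call:
const leftoverHtml = "\n  \n";

console.log(leftoverHtml === "");              // false — why Step 2 never ran
console.log(isEffectivelyEmpty(leftoverHtml)); // true  — the trimmed check works
```

($.trim() dates from before String.prototype.trim() was universally available; in modern code the native trim() shown here does the same job.)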
Most people have used toilet paper their whole lives; if you knew the advantages of a bidet, you might be surprised at how much it has to offer.
Bidets provide better cleaning and hygiene. The NAVISANI non-electric bidet not only provides better cleaning and hygiene, it is also completely safe for everyone to use.
Bidets are also good for the environment: they require significantly less toilet paper, which is certainly a good thing for the planet. A bidet also helps users save money; when you calculate the toilet paper use of an average family, a bidet saves a considerable amount.
NAVISANI non-electric bidet advantages:
• No battery needed.
• Double self clean nozzle.
• Water temperature control knob: warm and cold water at choice.
• Spray direction adjustable via the nozzle base, suitable for cleansing for both men and women.
• Adjustable installation scope: it can be fitted to more than 95% of the toilet bowls on the market.
• No need to change your fixtures; just add the NAVISANI bidet to your original toilet seat and connect it to the bathroom water supply.
Additionally, the bidet is an advantageous option for anyone, regardless of age, since it is so much more environmentally friendly and cost effective. Seniors can benefit from the use of a bidet, and so could anyone else.
Smiling not crying…
Tag Archives: breakdown
I had started this year off with mostly positivity, sure I’ve had low days but for the most part they never lasted more than a day. Probably the best start I’d had to a year in the past decade, maybe …
The past few days have been hard and horrible and I feel like I’ve taken a million steps backwards and I’m 11 again.
I’m handing in my resignation at work. I know I shouldn’t. I know I should stand up and fight.
But why? What will that really achieve? Everyone will be aware of how emotional and weak I am. Everyone will continue to talk behind my back, only I will have handed them extra ammunition. The people who spouted the vile words won’t get into trouble and they will just hate me more. I’ve been here before, this is exactly what happened at school. I’m not going through all that again. These people aren’t adults; if they were they wouldn’t have spoken like this in the first place.
So I’m doing what’s best for me and my peace of mind, I’m removing myself from the situation. It’s gonna be a huge stress financially, but I’m gonna hunt for a job constantly and for now anything is better than nothing, so temp jobs will do.
As for the mental damage, it will take me some time to get over the renewed and heightened anxiety and paranoia that I’m constantly feeling, and the depression. But I think the best thing is to get out of the situation before any more damage can be done.
I ended up calling in sick today at the advice of a “friend”; she told me not to act hastily and just have another day to think things over. But I just know that I can’t stay, even if they never said another word about me again, I would be thinking they were thinking it, because I know they have thought it and said it. Whenever they looked at me, whenever they laughed, I’d be thinking they’re saying it again. If I don’t see them, eventually I’m not gonna be thinking about it daily. Sure I’ll never forget it, it’ll be filed away into my long term memory with every other horrible experience, so that my brain can torment me with it at a future date. But better one new horrible memory than multiple.
I know I’ve probably made the wrong choice and people will think I’m weak and stupid, you’re probably right. I’m angry at myself for being weak, for not sucking it up, for not throwing on my ‘I don’t give a shit’ facade, I’m angry that I’m “letting them win”, but I’m also hurt, humiliated, paranoid, anxious, disgusted, depressed. I already had enough going on and to handle, I didn’t need this as well.
So I’m picking my battles, and this one isn’t it. My battle is my health, physical and mental, my battle is my life and keeping all those balls up in the air that have to be juggled, my battle isn’t girls who still have a high school mentality and think you can judge and belittle someone that you barely know because you work in the same office. The things I could say about them if I was to sink to their level, but I’m better than that, so I’m leaving them to bitch about whatever they want. But not in front of me, because the only other option to crumbling is destroying and I wouldn’t stop.
I’ve always tried to pretend I’m ok with being alone, that I prefer my own company, that I can do things I want to, that I’m strong enough to get myself through all the downs and downs of life. If anyone in my life believes that, they’re either deluding themselves or don’t know me very well.
I’m tired of struggling through things on my own. Tired of having nowhere to turn for support and a shoulder to lean on.
I’m tired of “friends” and “family” draining me of all my energy and strength when they need support or help and then leaving me deflated and empty when they’re ok.
I’m tired of looking around and seeing no one I can share my life with, my worries, my joys, general observations.
I’m tired of being alone. I’m tired of being ignored. I’m tired of being invisible and unimportant to everyone.
I just want one person to say I see you, I care, I’m here, I’m not just going to use you when it suits me and leave you drowning in darkness when you’re in need.
Someone like this must exist surely?! or am I not worthy of such a person in my life?
I think I self destruct my life. I know that I let things get on top of me, from the little things to the big things. I take on too much and do too much, even when I’m exhausted, or …
I woke up feeling like shit, that led to being panicky, and also stress cos I knew I was gonna struggle with work. On the way into work I felt very panicky so my driving reactions were a little slow and it took me a couple seconds longer to pull away at the red light than normal, this happens to people all the time, just glance in the opposite direction long enough that you don’t notice it changed. Well the prick on the push bike behind me decided to have a go at me, he could’ve very easily just cycled past me, I wasn’t blocking his way, but no he had to swear at me and have a go. And that’s my problem: what right does this complete stranger have to have a go at me because I was a bit distracted, what business is it of his, I wasn’t stopping him going about his life? I wasn’t causing him harm, so why did he cause me harm? Maybe it’s my fault that I’m such a fucked up person thanks to panic attacks, a “normal” person wouldn’t be affected by it really.
And yet here I am unable to pull myself together after the burst of rage/panic attack/hysterical breakdown of crying he reduced me to.
Today has been tough, I don’t know if it’s the lack of sleep, the unrequited love I’m torturing myself with or once again being lied to by a friend. It’s been a ridiculously emotional day, I’ve burst into tears many times for no apparent reason, and I’ve felt like I’m going slightly crazy, I was writing in my journal earlier about how I felt, I’ve decided to post it and see if it helps lift the weight, sometimes writing it down helps, and sometimes it’s saying it out loud, but when that’s too hard maybe just writing and putting it out into the world is the second best thing? who knows…
I appear to be this strong person who copes with whatever life throws at me… truth is I’m a wreck, I never cope with anything, I simply avoid the negative emotions that these events induce and I spend every day gripped by silent fear, I feel like I’m going insane, I often feel that at any moment I could crack, that I’m teetering on the edge and any second I’m about to lose my balance and lose myself to the chaos of my mind. Yet I never seem to, because I do what I’m good at and I push it as far out of my conscious thought as possible, I fill my mind and being with trivial matters and pastimes, all the while a voice in the back of my mind telling me that soon this won’t work, soon I will crack and I will crumble… and then what will people think… what will become of me then… how will my family and friends cope… who will try to heal me… who will care?
I’m not strong, I’m weak, I’m a shadow of whoever I once was and with every day that passes I lose a little more of myself. One day there will be nothing left to show, just a shell of a person who was once here. I wonder if it would be different if I had somewhere to turn, if there was a soul in the world who cared, someone who cared enough to say I’m here, I will try to understand and I will help you be strong when you are weak.
I feel like I’m breaking down, I want to cry all the time, sometimes I can’t stop the tears flowing, sometimes my thoughts hurt so much I have to hold my head, and double over as though the thoughts are causing me physical pain, and even though they aren’t, it still somehow feels like they are. Sometimes I explode with rage, and that rage is often directed towards myself, pulling hair, scratching, biting, punching, kicking objects, punching walls. Sometimes I sit numbly wishing my life away, hating myself for the thoughts and wishes and yet still doing it.
I know people say that if you're questioning your sanity then you're actually of sound mind, but is it not possible I'm just aware that it's slipping out of my grasp?
And then just like that, after crying and feeling like I’m on the brink of madness, it’s gone and I feel, well nothing?! Not happy, not sad, nothing!
I think I might be on the verge of a nervous breakdown, I have to keep fighting… somehow. | 2023-11-15T01:26:19.665474 | https://example.com/article/4152 |
The structure and cytochemistry of the oocytes in the crab Xantho bidentatus Milne Edwards.
In the development of the oocytes of Xantho bidentatus, four stages could be distinguished. In stage I the cytoplasm is homogeneous; in stage II a perinuclear ring is formed; in stage III round bodies, which are carbohydrate-protein complexes, appear near the periphery of the oocyte. These bodies occupy the oocyte completely in stage IV. There are two types of bodies in the oocyte: big oval or round bodies, which are carbohydrate-protein complexes, and smaller bodies in between the oval bodies. These smaller bodies are lipid bodies. In stages I and II the cytoplasm is rich in RNA, and in stages III and IV the cytoplasm is full of carbohydrates, proteins and lipids.
BuzzFeed News obtained a document Facebook commissioned as research on billionaire George Soros following critical comments the billionaire investor made at the World Economic Forum earlier this year. This report, which you can read below, is at the center of a controversy surrounding Facebook’s previous relationship with a public relations firm that brought Beltway political tactics to the social networking giant.
The document, which is largely innocuous, was assembled by Definers Public Affairs, which was contracted by Facebook for communications consulting and opposition research on competitors and critics, including Soros. It is one of at least two files prepared after Soros appeared onstage in Davos, Switzerland, in January and said Facebook and Google were a “menace” to the world and that the “internet monopolies” did not have the will or inclination to protect society.
As BuzzFeed News reported on Thursday, those comments alarmed Facebook Chief Operating Officer Sheryl Sandberg, who sent an email to a staffer asking them to determine whether Soros, an 88-year-old billionaire investor and philanthropist, had any financial interests related to Facebook. While it is not unusual for companies to conduct research on perceived opponents, and these kinds of documents are fairly standard practice in political circles, they typically aren’t intended for public consumption or meant to be traced back to their sponsors.
Facebook declined to comment for this story.
The current Facebook scandal, triggered by a New York Times story last month that highlighted the company’s internal strife and relationship with a seasoned opposition research firm, threatens to engulf key executives after Facebook’s initial failure to explain what its top two leaders, CEO Mark Zuckerberg and Sandberg, knew about Definers and when they knew it. Last month, amid continuing fallout from the Times story, outgoing communications and policy head Elliot Schrage published a note taking responsibility for the Soros research and attributing the reason for the work to the investor’s comments at Davos.
“We researched potential motivations behind George Soros’s criticism of Facebook in January 2018,” a Facebook spokesperson told BuzzFeed News on Thursday. “Mr. Soros is a prominent investor and we looked into his investments and trading activity related to Facebook. That research was already underway when Sheryl sent an email asking if Mr. Soros had shorted Facebook’s stock.”
In the document distributed to reporters in the fall and obtained by BuzzFeed News, Definers highlights Soros’s possible ties to left-leaning advocacy groups that had been critical of Facebook.
“Recently, a number of progressive groups came together to form the Freedom From Facebook campaign which has a six-figure ad budget,” the document reads. “It is not clear who is providing the large amount of funding for the campaign or what their motive is. At least four of the groups in the coalition receive funding or are aligned with George Soros who has publicly criticized Facebook. Neither Freedom From Facebook nor Open Markets Institute have answered questions about who is funding this campaign.”
The document includes headlines and excerpts taken from publicly accessible information including news clippings and blog posts. While it lacks a coherent message, the excerpts and accompanying links were organized under categories such as “GEORGE SOROS CONNECTION.” There is at least one other, longer Definers document involving Soros, according to multiple sources.
Since the publication of the Times’ story, Eddie Vale, a spokesperson and consultant for Freedom From Facebook, said that no money from Soros directly or indirectly had been used to fund the coalition’s work. Axios has also since tracked down the original funder of Freedom From Facebook, identifying that person as Pennsylvania philanthropist and former hedge fund executive David Magerman.
“It’s obvious, yet again, no one can believe their revolving explanations until they release all the emails and research publicly,” Vale told BuzzFeed News on Saturday.
A spokesperson for Soros did not immediately return BuzzFeed News’ request for comment. Matt Stoller, policy director at Open Markets Institute, said that it’s been publicly reported that Open Markets receives money from Soros, but denied any knowledge of Soros funding Freedom From Facebook. BuzzFeed News has reached out to the other organizations mentioned in the document as well, and will update this post should they reply.
“As Facebook has already indicated, the work we do for our clients is always at their request, including this document,” a Definers spokesperson told BuzzFeed News.
The research Definers conducted for Facebook on Soros has never been published before. You can read the document below. | 2024-03-22T01:26:19.665474 | https://example.com/article/4672 |
Contents
Setting
After Donald accidentally transports them in, Sora, Goofy, and Donald arrive in the digital world and are promptly arrested by Sark who escorts them to the Pit Cell, where Tron is being held prisoner by the MCP; this room also hosts the Moogle Shop. After explaining the situation, the party escapes the Cell, heading for the Canyon, where they play a mini-game to unlock the terminal access. Further along and to the right is the Dataspace, where the password for the computer can be inputted.
Later on, the group delves deeper into Space Paranoids; the exit on the highest level and to the left leads to the I/O Tower: Hallway. Heading right takes the group to the I/O Tower: Communications Room, while going left takes them to the Simulation Hanger, where the Solar Sailor Simulation can be accessed. The Simulation takes the party to the Central Computer Mesa, which is the area just before the MCP's headquarters, the Central Computer Core.
Story
First visit
Sora wished to unlock Ansem's computer files to find any information they can on Ansem or Riku. Stitch fell into the scene (literally), causing Donald to heatedly pursue the alien by jumping on the keyboard. Unfortunately, this alerted the MCP, resulting in the party being "arrested" and transported into Space Paranoids.
When the gang lands in Space Paranoids, they appear to have digital armor replacing their normal attire. They meet Sark, who sends them to a cybernetic prison: the Pit Cell. Here, they find Tron locked up as well. They join forces and escape. Tron tells the company about his plans to stop the MCP, and Sora tells him that they are 'Users' from the real world.
The gang venture to a city in Space Paranoids, where they confront a group of Heartless who are wreaking havoc with the system. After defeating them, Tron acquires a way to send the gang back to Hollow Bastion. Before they leave, he tells them that they need the DTD (Door to Darkness) to access the files. Sora learns the password (The Seven Princesses of Heart) and returns to Space Paranoids.
Before meeting Tron, Sora and co. are forced to play on Sark's Game Grid. After a few Light Cycle games, they manage to escape and meet up with Tron. The gang then go to the I/O Tower, where the MCP has summoned the Hostile Program to derezz (delete) Tron and Sora. They fight the program and defeat it.
Sora, Donald, Goofy, and Tron unlock the files, but learn that the data files have become corrupt. Space Paranoids, though, would be at peace for a while.
Second visit
Part 2, when the Battle Level is 42, shows the MCP tampering with the defence systems of the town and releasing dangerous Space Paranoids Heartless into Hollow Bastion. Leon says that Cid is working on a system to derezz the MCP for good. Sora must enter the computer and obtain it when it is ready.
Sora enters the computer and meets up with Tron on the Game Grid. An army of Heartless is attacking the system. They fight off what Heartless they can and then flee to the I/O Tower, where the data to derezz the MCP is awaiting them. After saving it onto Tron's data-disk, they take the Solar Sailor to the Central Computer Core, where they fight and derezz both Sark and the MCP. Thus ends the story of Space Paranoids.
Character Design
In Space Paranoids, Sora, Donald, and Goofy, are converted into data and hence change forms accordingly. When Sora changes into his Drive Forms, the circuit patterns on his outfit alter color to match accordingly; they will change to red for Valor Form, blue for Wisdom Form, yellow for Master Form, white for Final Form, black/purple for Anti Form, and the circuits follow the colors of the Kingdom Hearts outfit for Limit Form - silver torso, red pants and yellow shoes.
Minigames
A Light Cycle mini-game is unlocked after Sora completes it for the first time.
Omegle conversation log
2009-08-18
Connecting to server...
You're now chatting with a random stranger. Say hi!
A word of advice: "asl" is boring. Please find something more interesting
to talk about!
You: [Welcome to Element, a text adventure.]
Stranger: oh cool
You: You are sitting at your study desk; there is a fire blazing in the
fire place.
You: The only exit is a door to your east, leading to the stair room.
You: The walls are covered with bookshelves.
Stranger: o shit i bet theres a blood thirsty killer down the stairs
Stranger: k so i go east
You: You are still sitting down.
Stranger: ok ok then i guess i stand up and go east ...but ill take the
chair with me
Stranger: just in case
You: You get up from your comfortable seat with a great sigh. You know
that there is work to be done.
You: The seat is attached to the floor; it won't budge.
You: You go east, arriving in the stair room.
You: There are exits to the east and west, and a long wooden spiral
staircase leading down.
Stranger: i go down
You: The staircase creaks loudly, and you balance yourself on the banister
to avoid falling. You really must fix these stairs someday, you think to
yourself.
You: You arrive safely at the bottom, in the [House Entrance].
You: There are many shoes arranged neatly along the Main Door to your
north.
You: There is a wooden staircase leading up.
Stranger: duh
You: Your butler looks at you expectantly.
Stranger: um why ?
You: [Do you mean to ask him? If so, you need to address him.]
Stranger: ok so i have to do this like im in a MUD
Stranger: ok
Stranger: so talk to butler
You: [What do you want to say to the butler?]
Stranger: say what are ya looking at ?
You: The butler responds "Only admiring your coat, sir".
Stranger: go north
You: The door is closed.
Stranger: *sigh* open door
You: The butler rushes to your assistance. "Sir! Surely you do not wish to
leave on a cold night such as this!"
Stranger: tell butler yes i do
You: The butler says "I'm afraid that is not an option, sir. I must attend
to you.".
You: He blocks the door with his body and coughs.
Stranger: say you can come along if ye want
You: "Thank you, sir, but it is simply out of the question. I am required
to keep you safe inside."
Stranger: say so what am i supposed to do to kiil the boredom ?
You: "You have your study upstairs! You have thousands of books on any
subject you could conceive of."
Stranger: go up the stairs
Stranger: tell me the exits
You: [I assume you meant: Go up the stairs.]
You: You ascend the creaky staircase slowly, careful not to lose your
balance.
You: You are in the stair room; your study is to the west, and your
bedroom is to the east.
Stranger: Go east
You: You go into your bedroom.
You: The entire surface of the floor is a large, springy mattress.
You: There are many closets along the north wall; the south wall holds a
large computer monitor and three buttons.
Stranger: Open closet
You: You open a closet. Your clothes are neatly arranged inside, smelling
freshly laundered.
Stranger: Start computer
You: [Please rephrase that command.]
Stranger: umm
Stranger: Push button
You: Which button do you want to push: The red button, the green button,
or the blue button?
Stranger: Push the red button
You: The screen turns on to show the Mandelbrot set; it is quite
beautiful, and animated in fascinating ways.
Stranger: examine set
You: [I assume you mean: examine the Mandelbrot set]
Stranger: yeah
You: It is breathtakingly beautiful, and, as always, you see patterns in
it that you have not noticed before.
You: You will never tire of this fractal, you are sure.
Stranger: push green
You: You push the green button; the computer screen shows a picture of the
butler guarding the door, taken through one of your hidden cameras.
Stranger: ...
Stranger: push blue
You: You push the blue button. A voice sounds throughout the house: "Push
blue button twice more to confirm". You see, on the screen, that your
butler has left the door.
You: You hear the stairs creaking.
Stranger: go west two times
Stranger: fast
You: You go west and see your butler just coming out of the staircase.
You: He stops you.
Stranger: damn
You: "Sir! You must be more careful!"
You: The butler walks east into your bedroom, where you hear clicking
sounds and loud beeps coming from the computer.
You: He then leaves the bedroom and returns downstairs.
You: The staircase creaks loudly.
Stranger: go east
Stranger: push blue
You: You return to your bedroom.
Stranger: push blue twice more times
You: You push the blue button. The computer does not respond.
You: You push the blue button. The computer does not respond.
You: You push the blue button. The computer does not respond.
Stranger: ok go west twice
Stranger: and search for items
You: You return to your study.
You: You see many books around you; on your desk is an ashtray.
Stranger: take ashtray
You: You pick up the ashtray, empty it onto the floor (the butler will
clean it up, surely), and add it to your inventory.
Stranger: go down the stairs until i see the butler
You: [First going east to the stair room.]
You: The staircase creaks loudly.
You: The butler is standing in front of the door. He coughs gently.
Stranger: hit butler in the head with ashtray
You: The ashtray shatters to pieces on his head.
You: He presses a button on his belt and cleaning robots come inside and
remove the glass from the floor.
Stranger: isnt he knocked down ?
Stranger: nevermind
Stranger: say you MUST let me go out
You: "I'm very sorry, sir. It is in my contract: I must not allow you to
leave."
Stranger: say what do you mean in your contract...am I a prisoner
Stranger: ?
You: "That is an ugly word, sir. You are being guarded for safety."
Stranger: say protected from what ?
Stranger: [wow this is getting interesting]
You: "Assassination, sir, as you are well aware."
Stranger: hmm
Stranger: say it seems like i have no other option
Stranger: kick butler in the balls , open door and run north
You: "Pardon me, sir?"
Stranger: that would work in real life
You: "Sir!"
You: The butler is well trained for combat situations, and much faster
than you.
You: He deflects your attack with ease, picks you up, and carries you
upstairs to your bedroom.
You: [Bedroom]
You: The computer screen shows the butler guarding the door.
Stranger: push blue button
Stranger: then push blue button twice to confirm
You: Nothing happens.
You: You realize the butler must have done something to the computer.
Stranger: search for windows
You: There are no windows in your house; the butler had them boarded up
for safety.
Stranger: hmmm
Stranger: [can you give me a hint :( ? ]
You: Hint: Check your inventory
Stranger: ok check inventory
You: You are carrying your pipe (lit), a book of matches, and a handgun.
Stranger: examine handgun
Stranger: is it loaded ?
You: It is a derringer, made to be easily concealed. You managed to
acquire it without your butler's knowledge.
You: It is loaded.
Stranger: go west and down the stairs
You: You go to the stair room and climb downwards.
You: The stairs creak loudly.
You: The butler stands in front of the door, guarding it.
Stranger: point handgun at butler
Stranger: say let me go
You: The butler looks you straight in the eye, and says "I cannot".
Stranger: (if he dows something shoot him)
Stranger: (does*)
You: He looks terrified, but is attempting to hide it.
Stranger: say i'm sorry
Stranger: shoot butler
You: The butler screams and begins to bleed.
You: He attempts to talk, but fails.
Stranger: open door
Stranger: say"farewell" and go north
You: You can see that the butler is mortally wounded.
You: The cleaning robots can see it too, apparently; they come in, remove
his corpse, and scrub the floor quickly.
You: You open the door; instead of the cold wind that you were expecting,
you see a brightly lit corridor.
Stranger: examine corridor
You: It stretches ahead for as long as you can see.
You: It is completely spotless; you cannot see a speck of dust anywhere.
Stranger: keep going north
You: You continue north.
You: You feel your heart beating quickly; you know that this is unsafe.
You: You continue walking.
You: You hear loud whirring noises.
You: You walk ahead regardless.
You: You see the door close, far behind you.
Stranger: look north
You: There is a large glowing spot on the floor a few meters away.
You: It turns from red to green to blue, and back again.
Stranger: touch spot
You: [Do you mean when it is red, when it is green, or when it is blue?]
Stranger: touch it when it is blue
You: The corridor begins to shake. Far ahead, to the north, you see a
shape that resembles the butler running toward you.
You: It is riding on a cleaning robot.
Stranger: shoot the robot
You: You shoot the robot, which grinds to a halt.
You: The butler jumps off and resumes running.
Stranger: shoot butler
You: You shoot the butler just before he reaches you. As he dies, he falls
onto the glowing spot (while it is green).
You: His body instantly disappears.
Stranger: check handgun to see how many bullets i have left
You: You have 1 bullet left.
Stranger: touch the spot when it's green
You: You find yourself back inside the entrance room.
You: You see the butler's corpse on the floor; it is soon cleared away by
the cleaning robot.
You: You have a sense of deja vu.
Stranger: open the door , and go north until i reach the spot again
You: You open the door and run north.
You: The door soon closes behind you.
Stranger: touch spot when it's red
You: You touch the spot and suddenly become dizzy.
You: The world becomes blurry around you, and you feel an intense pain in
your temple.
You: You faint.
You: As you regain consciousness, you see bright colors all around you;
you realize that you are falling down, but do not see any bottom.
Stranger: examine surroundings
You: They are strangely familiar, and exceedingly beautiful.
You: You try to hold on to them, but your hands pass right through the
colors.
You: The falling soon becomes tiring.
You: You have been falling for several minutes now.
Stranger: hmm i'm sure im not actually falling ,its an illusion
Stranger: pass trough the walls of colour
You: You feel the sensation of free fall.
You: You are not able to maneuver while you are falling!
Stranger: look down
You: Blackness.
You: You realize that, somehow, you are in the Mandelbrot set.
You: You look around with a new-found appreciation. You have always
enjoyed fractals.
You: You begin to relax.
Stranger: damn i must be on LSD
Stranger: continue
You: You puff your pipe contentedly.
You: You are still falling.
Stranger: wait for a few more minutes
Stranger: still falling ?
You: You relax, and examine the self-similar patterns.
You: You continue smoking your pipe.
Stranger: ok so im trapped in an endless loop.....
You: You run out of tobacco in your pipe.
You: As you clear it out with your finger, you notice a small object
inside.
Stranger: examine the small object
You: It appears to be a camera of some sort.
Stranger: use camera ?
You: You press the single button on top of the camera.
You: There is a bright flash.
You: A picture comes out of it.
You: But this picture is not of the fractal, but of your bedroom!
You: You see yourself on the screen.
Stranger: hmm
Stranger: ...
Stranger: i'm stuck
Stranger: take another picture ?
You: There is a bright flash.
You: Another picture comes out of the camera. Paradoxically, the picture
is of the camera itself.
Stranger: i knew it was all an illusion !
Stranger: take a picture of my face so the bright flash will make me blind
for a second and maybe i'll wake up in my room
Stranger: ...
You: You photograph your face.
You: There is a bright flash, and you hear a picture coming out of the
camera.
You: For a moment you are blind, but gradually your vision resolves.
You: You are unfortunately still in the fractal.
You: The picture that came out of the camera was of the corridor.
Stranger: ok can i recive a hint ?
You: Hint: Try examining some of the items in your inventory.
Stranger: check inventory
You: You are carrying: Your pipe (empty), a book of matches, a tiny
camera, a picture of your bedroom, a picture of your camera, and a picture
of the corridor.
Stranger: examine the picture of my camera
You: It is glowing.
Stranger: in RGB again ?
You: It is glowing in the colors of the picture.
Stranger: touch it
You: You touch the picture, and feel yourself disappearing... But you
immediately reappear at the same place.
Stranger: oh i get it
Stranger: touch the picture of the corridor
You: You touch the picture, and feel yourself disappearing...
You: You reappear in the corridor.
You: You see a broken cleaning robot.
Stranger: examine it
You: It is the robot that you previously shot. It has dirty boot marks on
it; apparently the butler was outside shortly before stepping on it.
Stranger: go north but ignore the spot
You: You continue north.
You: The corridor goes on, and on, and on, for kilometers...
You: You find yourself becoming tired.
You: You stop to rest for a while.
Stranger: keep going maybe i'll find somethng
You: You continue.
You: The place is no longer so spotless; in fact, it is filthy.
You: There are large spiders crawling on the ceiling.
You: You stop to admire the fractal nature of their legs.
Stranger: look north
You: To your north, the spider-webs become thicker and thicker, and the
spiders more plentiful.
You: Farther on, the webs become so thick that you cannot see through
them.
Stranger: go north
Stranger: light a match
Stranger: and throw it on the webs
You: You stop at the thick webs.
You: You throw the match at the webs.
You: They catch fire instantly; they soon shrivel, as do the spiders
unfortunate enough to have been on them.
You: Beyond the webs, you see sunlight.
Stranger: yay
Stranger: north
You: You walk outside, and take your first breath of fresh air for over 20
years.
You: It is calming...
You: You look around you.
You: The area is covered with grass, and you see tall trees.
Stranger: look at the building at my south
You: It is large and colorful.
You: There is a statue on its roof.
Stranger: so now i can go in any way ?
You: You are surrounded by a thick forest; you probably could not make
your way through the trees if you tried.
Stranger: hmm
Stranger: climb a tree and examine my suroundings
You: The trees are completely smooth, and you are quite weak.
You: You do not manage to climb.
Stranger: take a picture of the forest to see if it's real
You: You press the button in your camera.
You: The flash is not nearly so bright in the sunlight.
You: A picture emerges... In it, you see your study.
Stranger: examine the picture
You: Its glow is very faint in the sunlight.
Stranger: touch it
You: You disappear for a moment, and then reappear in your study.
Stranger: oh wait i have to go
Stranger: sorry
Stranger: bye
Metro Funding Derailed in House
Budget bill drops safety funding for transit system
It was just three years ago that Metro finally got the federal government to help assist in funding the aging transit system. The first payment finally hit the bank in fiscal year 2010.
But it seems like the well may have already run dry.
An amendment by Congressman Gerry Connolly (D-Va.) to the House budget proposal that would have authorized a federal payment of $150 million for fiscal year 2011, was dropped when Republicans ruled it out of order.
“One year after the federal government made the first of its 10 annual payments to Metro, the Republican majority is trying to break the agreement Congress made to match the funding provided by Virginia, Maryland, and D.C.,” Connolly said in a press release. “There is no bigger beneficiary of the Metro system than the federal government. More than 40 percent of federal employees commute on Metro every day and the federal government provides no subsidy to Metro other than this $150 million annual payment.”
The cost of the amendment would have been offset by reducing direct federal farm subsidies and was backed by many local politicians from Maryland, Virginia and the District.
The original legislation that passed in 2008 required an annual payment of $150 million from the federal government, and $50 million each from Maryland, Virginia and D.C. to match the government’s contribution. In addition, the Metro Board allowed for two seats to be given to federal representatives.
“This move by the Republicans to eliminate the fiscal year 2011 payment to Metro is an egregious abrogation of the contract Congress made with the states and D.C.,” Connolly said in the release. “This legislation jeopardizes everything we’ve tried to do, in a bipartisan manner, to improve Metro safety.”
The biggest concern for Metro would be the inability to continue safety improvements. The NTSB has suggested more than $1 billion is needed for those improvements. And new GM Richard Sarles has put an aggressive track work schedule in place to try to meet those recommendations.
Not only is funding for Metro in jeopardy, but the entire federal government could shut down if a new budget isn’t approved by March 4. Connolly said something similar could happen to Metro if funding isn’t approved.
“I suggest that failure to amend the bill to retain the federal funding of Metro could have a similar effect if the money isn’t available to keep Metro safe and functioning efficiently,” he said in the release.
2 N.Y.3d 787 (2004)
814 N.E.2d 409
781 N.Y.S.2d 239
In the Matter of KOREAN JOONG BU PRESBYTERIAN CHURCH OF NEW YORK, Appellant,
v
INCORPORATED VILLAGE OF OLD WESTBURY et al., Respondents.
Court of Appeals of the State of New York.
Decided May 13, 2004.
*788 Meyer, Suozzi, English & Klein, P.C., Mineola (A. Thomas Levin of counsel), for appellant.
Farrell Fritz, P.C., Uniondale (John M. Armentano of counsel), for respondents.
Chief Judge Kaye and Judges G.B. Smith, Ciparick, Rosenblatt, Graffeo, Read and R.S. Smith concur.
OPINION OF THE COURT
MEMORANDUM.
The order of the Appellate Division should be reversed, with costs, and the matter remitted for further proceedings in accordance with this memorandum.
Here, the courts below and the board of assessors erred as a matter of law in concluding that petitioner church was not entitled to a tax exemption pursuant to RPTL 420-a simply because the church's proposed use of the property was unauthorized due to its lack of a special use permit (see Matter of Legion of Christ v Town of Mount Pleasant, 1 NY3d 406 [2004]). Because the board of assessors did not examine whether the development of the property was "in good faith contemplated" by the church, we remit for that purpose.
On review of submissions pursuant to section 500.4 of the Rules of the Court of Appeals (22 NYCRR 500.4), order reversed, with costs, and matter remitted to Supreme Court, Nassau County, with directions to remand to respondent Board of Assessors *789 of the Incorporated Village of Old Westbury for further proceedings in accordance with the memorandum herein.
#!/bin/sh /etc/rc.common
# OpenWrt init script for triggerhappy (thd), the input-event hotkey daemon.
START=93

start() {
	# Launch thd as a daemon, listening on all input event devices and
	# executing the commands defined in /etc/triggerhappy/triggers.d/.
	/usr/sbin/thd --socket /tmp/triggerhappy.socket --triggers /etc/triggerhappy/triggers.d/ --daemon /dev/input/event*
}

stop() {
	# Ask the running daemon to shut down via its control socket.
	/usr/sbin/th-cmd --socket /tmp/triggerhappy.socket --quit
}
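For reference, each file in the watched triggers directory maps an input event to a shell command, one trigger per line, in the form `<event name> <value> <command>`. A minimal sketch of such a file (the file name, key name, and command below are illustrative assumptions, not part of this package):

```
# /etc/triggerhappy/triggers.d/example.conf  (illustrative)
# <event name>    <value>  <command>
KEY_WPS_BUTTON    1        /sbin/wifi toggle
```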
Students Without Borders is a WUSC and CECI program that enables Canadian university and college students to participate in exciting, volunteer learning opportunities in South America, Africa, and Asia.
Hi! My name is Arianne and I graduated in May 2015 in International Studies and Modern Languages at Laval University in Québec City. Since then, I haven't had time to just sit around, oh no! In summer 2015, I did a Quebec Without Borders internship, and I am now about to finish an 8-month internship as Regional Liaison Officer (RLO) with WUSC.
This summer, I will be a Student Refugee … [Read more...]
Zikuyenda! (How’s things?)
My name is Heather Donald and I am an MA student in Development Studies and Refugee/Forced Migration Studies at York University in Toronto. I am really looking forward to heading back to the Warm Heart soon, brushing up on my Chichewa and building my experience in working with refugee populations. I will also be doing research for my MA on how young people in Dzaleka … [Read more...]
Most water analysis methods quantify individual components. Some, however, such as oxygen demand tests, quantify an aggregate amount of constituents with a common characteristic. Broadly speaking, BOD and COD quantify the amount of oxygen required to oxidize organic matter in water or wastewater, indicating the amount of organic material present. BOD utilizes microorganisms to oxidize organic material, while COD uses an inorganic chemical oxidant. BOD measurement is the most fundamental way of determining water pollution levels and of predicting the possible effects of waste discharge. Organic matter present in water can come from plants, sugars, proteins or other substances that enter water from natural sources or pollution. This matter is broken down biochemically by organisms such as bacteria, which can multiply as long as organic matter is present as food and oxygen is available for respiration. If a high population of bacteria continuously consumes dissolved oxygen in water at an accelerated rate, atmospheric air will not be able to replenish it. This situation can create a lack of dissolved oxygen in water, threatening and destroying many forms of aquatic life. Oxygen depletion in receiving waters has been regarded as an important negative effect of water pollution. Depletion takes place as microbes consume organic matter in water via aerobic respiration. This type of respiration uses oxygen as an electron acceptor, with the organic material being consumed providing the energy source. Since O2 content is important for many biological and chemical processes, measuring the amount of O2 actually dissolved in a water sample is of great importance. The BOD test relates to the amount of O2 that would be required to stabilize waste after discharge to a receiving water body. BOD is a parameter of great concern: failing to appreciate the importance of BOD in wastewater and effluent treatment systems can have devastating effects on local aquatic ecology and the quality of underlying groundwater.
Monitoring BOD removal at a treatment plant is necessary to verify proper operation. The BOD test is typically performed at municipal or industrial wastewater plants. The results of BOD analysis are used to calculate the degree of pollution and to determine the effectiveness of water treatment by wastewater and sewage plants. Different organic compounds show different oxygen demands (mg/l), so the BOD test only gives an approximate idea of the weight of utilizable organic matter. The BOD test is carried out using standard methods as prescribed in APHA (Standard Methods for the Examination of Water and Wastewater, 20th edn., 5-1 to 5-6, Baltimore). Test substances and standard substances are dissolved in BOD dilution water. The standard substrate is made of 0.15 mg/l glucose and 0.15 mg/l glutamic acid, which has a calculated BOD of 220 mg l−1. BOD is expressed in terms of dissolved oxygen (DO), which microorganisms, mainly bacteria, consume while degrading organic material in a water sample under standardized conditions of pH, nutrients and microorganisms. The amount of oxygen that dissolves in the water depends on many factors: whether there is adequate time and adequate mixing to fully saturate the water, the water temperature, the air pressure, the salt content of the water, and whether there are substances in the water which consume the O2.
Microorganisms either are present in the water sample or are introduced by adding a small quantity of a suitable microbial source such as settled sewage. The inoculum is called BOD ‘seed’ and the process, ‘seeding’. Since BOD analysis relies on a biological process, there is greater variance in results than would normally be expected in a strictly chemical assay. The Standard Methods for the Examination of Water and Wastewater indicates an acceptable range of ±15% at the 200 mg/l level for the reference GGA (glucose-glutamic acid) solution, based on the results from a series of interlaboratory studies.
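The dilution-method arithmetic behind a seeded BOD result can be sketched as follows; it follows the widely used seeded-dilution equation BOD (mg/L) = [(D1 − D2) − (B1 − B2)·f] / P, where P is the decimal fraction of sample in the bottle. The function name and sample numbers below are illustrative:

```python
def bod5(d1, d2, p, b1=None, b2=None, f=1.0):
    """Five-day BOD (mg/L) by the dilution method.

    d1, d2 : DO of the diluted sample before and after incubation (mg/L)
    p      : decimal fraction of sample in the dilution (e.g. 0.02 for 2 %)
    b1, b2 : DO of the seed control before/after incubation (omit if unseeded)
    f      : ratio of seed in the sample bottle to seed in the control bottle
    """
    depletion = d1 - d2
    if b1 is not None and b2 is not None:
        # Subtract the oxygen consumed by the seed organisms themselves.
        depletion -= (b1 - b2) * f
    return depletion / p

# Unseeded: DO drops from 8.2 to 4.2 mg/L at a 2 % dilution -> about 200 mg/L
print(bod5(8.2, 4.2, 0.02))
# Seeded: subtract 1.0 mg/L of depletion attributable to the seed -> about 150 mg/L
print(bod5(8.2, 4.2, 0.02, b1=8.0, b2=7.0))
```

Note how the ±15% acceptance band mentioned above would place an acceptable GGA check result roughly between 170 and 230 mg/L.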
For all sources of seed, the possibility exists that some wastes will poison the microorganisms. Some wastes will have developed microorganisms adapted to the toxic conditions and hence give expected BOD results. But in other wastes the microorganisms will adapt over the period of the BOD test. Because of the lag time involved in adaptation, a lower BOD is obtained than might be expected. If the toxicity is sufficiently acute, a zero or close-to-zero result is obtained. Further, the ratio of the various species of bacteria normally added can change over a 5-day period. BOD is the result of a summation of the oxygen demands of these microorganisms, whose contributions to the oxygen demand will change with time because of the changing population and changing feedstock.
Among the major industries in India, pulp and paper is one of those that contribute heavily to water pollution. Pollutants that generally arise from the industry include wood sugars, cellulose, fiber, lignin and other spent chemicals, which impart high BOD, COD, color, etc. to the effluent. Thus, there arises a need to develop a specific seed (comprising selected, acclimatized and autochthonous bacterial strains) for analyzing the BOD load of these industrial wastewaters.
In addition, some of these compounds are refractory to biodegradation because of their high molecular weight coupled with low bioavailability. The BOD analysis of such wastewaters poses acute problems for many reasons, including the heterogeneity of the samples from time to time, the non-specific microorganisms present in general seeding material, and the lower biodegradation rate of the organic constituents present therein.
The aforesaid problems can be overcome by formulating a uniform microbial composition comprising selected isolated bacterial strains acclimatized to pulp and paper wastewater. Further, these bacterial isolates must be specific for the biodegradation of the organic compounds present in these kinds of wastewater. General seeding materials, viz. sewage, Polyseed, Bioseed and BODSEED™, do not work efficiently when used for BOD analysis of such wastewater because of the non-specificity of the bacterial strains present therein. This leads to erroneous results, which differ from time to time. On the other hand, if a specifically designed, formulated microbial consortium comprising selected bacterial strains is used as seed for BOD analysis of these effluents, it may yield reproducible and reliable results.
Thus, to solve the aforementioned problems, the applicants have realized that there exists a need to provide a process for the preparation of a microbial consortium specifically formulated for use as seeding material for the BOD analysis of pulp and paper wastewater.
Amazing "Grandma Got STEM" Project Fights Old-Lady Luddite Stereotype
Is your grandmother a particle physicist? Did she help the Navy build submarines or make concoctions of chlorine gas on the family’s front porch? Or is she a mathematician, inventor, or engineer? If so, then baby, your grandma’s got STEM.
Grandma Got STEM is a celebration of women working in and contributing to the fields of science, technology, engineering, and mathematics. It is also designed to combat the doting, fumbling, pie-making stereotype of grandmatrons.
That’s why Rachel Levy, an associate professor of mathematics at Harvey Mudd College, is collecting the stories of grandmas across the various fields of STEM. She first got the idea after hearing someone utter the phrase, “Just explain it like you would to your grandma.”
At first blush, such a thing seems harmless. But think about what it means—basically, all older women are stupid.
“For two or three years I thought about how I could address this issue without just making people angry and more inclined to use the phrase,” Levy told me. “If I could come up with a million examples of grandmothers who were tech-savvy, people wouldn’t say it anymore because it wouldn’t be apt.”
While attending the conference ScienceOnline this year, Levy realized she could harness the power of the Internet to collect stories and showcase them. So far, she’s been able to upload at least one grandma a day for about a month and a half—and the stories keep pouring in. Levy’s aim so far is to be as inclusive as possible. She’s accepting any grandma currently or previously involved in STEM. They can submit themselves or you can submit for them. Heck, they don’t even have to have children with children, per se. Age’ll do just fine.
GGSTEM tells the story of Iris Critchell, a great-grandmother who flew more than 20 different types of military planes and is a member of the National Association of Flight Instructors Hall of Fame. There’s Susan McKinney Steward, the first black female doctor in New York and the third in the nation. And Toni Zala, the first woman in Hungary to earn a Ph.D. in biochemistry.
The stories go on and on. Which is to say, of course they do. Of course women have contributed much to the annals of science. Of course they have overcome great odds and persevered through all manners of discrimination and sexism. Of course they had more interesting things in their lives than attending our choral concerts and soccer tournaments. Some of that might even be plain ignorance of our grandmothers’ contributions to STEM.
“Another thing that has happened with this that I really didn’t expect is people telling me, ‘I think my grandmother did something related to science or engineering but I never really talked to her about it. Let me get back to you!’ It has started these amazing conversations between people about their life experiences,” says Levy.
The “well-regulated militia” that the US Constitution’s second amendment refers to were slave patrols, land stealers and Indian killers, all quite necessary as the amendment’s language states “to the security of a free state” built with stolen labor upon stolen land. Unless and until we acknowledge that history, we cannot have an honest discussion about gun control.
The Real and Racist Origins of the Second Amendment
A Black Agenda Radio commentary by BAR managing editor Bruce A. Dixon
This commentary was originally published in Black Agenda Report April 19, 2008.
Why does the US Constitution guarantee a right “to keep and bear arms”? Why not the right to vote, the right to a quality education, health care, a clean environment or a job? What was so important in early America about the right of citizens to have guns? And is it even possible to have an honest discussion about gun control without acknowledging the racist origins of the Second Amendment?
The dominant trend among legal scholars, and on the current Supreme Court is that we are bound by the original intent of the Constitution’s authors. Here’s what the second amendment to the Constitution says:
“A well regulated militia being necessary to the security of a free State, the right of the People to keep and bear arms shall not be infringed.”
Clearly its authors aimed to guarantee the right to a gun for every free white man in their new country. What’s no longer evident 230 years later is why. The answer, advanced by historian Edmund Morgan in his classic work, American Slavery, American Freedom: The Ordeal of Colonial Virginia, sheds useful light on the historic and current politics and self-image of our nation.
Colonial America and the early US was a very unequal place. All the good, cleared, level agricultural land with easy access to transport was owned by a very few, very wealthy white men. Many poor whites were brought over as indentured servants, but having completed their periods of forced labor, allowing them to hang around the towns and cities landless and unemployed was dangerous to the social order. So they were given guns and credit, and sent inland to make their own fortunes, encroaching upon the orchards, farms and hunting grounds of Native Americans, who had little or no access to firearms. The law, of course did not penalize white men who robbed, raped or killed Indians. At regular intervals, colonial governors and local US officials would muster the free armed white men as militia, and dispatch them in murderous punitive raids to make the frontier safer for settlers and land speculators.
Slavery remained legal in New England, New York and the mid-Atlantic region till well into the 1800s, and the movements of free blacks and Indians were severely restricted for decades afterward. So colonial and early American militia also prowled the roads and highways demanding the passes of all non-whites, to ensure the enslaved were not escaping or aiding those who were, and that free blacks were not plotting rebellion or traveling for unapproved reasons.
Historically then, the principal activities of the Founding Fathers’ “well regulated militia” were Indian killing, land stealing, slave patrolling and the enforcement of domestic apartheid, all of these, as the Constitutional language declares “being necessary to the security of a free state.” A free state whose fundamental building blocks were the genocide of Native Americans, and the enslavement of Africans.
The Constitutional sanction of universally armed white men against blacks and Indians is at the origin of what has come to be known as America’s “gun culture,” and it neatly explains why that culture remains most deeply rooted in white, rural and small-town America long after the end of slavery and the close of the frontier. With the genocide of Native Americans accomplished and slavery gone, America’s gun culture wrapped itself in new clothing, in self-justifying mythology that construes the Second Amendment as arming the citizenry as final bulwark of freedom against tyranny, invasion or crime. Embracing this fake history of the Second Amendments warps legal scholarship and public debate in clouds of willful ignorance, encouraging us to believe this is a nation founded on just and egalitarian principles rather than one built with stolen labor on stolen land.
Maybe this is how we can tell that we are finally so over all that nasty genocide and racism stuff. We’ve chosen to simply write it out of our history.
Bruce A. Dixon is managing editor at Black Agenda Report, and a member of the state committee of the Georgia Green Party. He lives and works in Marietta GA and can be reached via this site’s contact page or at bruce.dixon@blackagendareport.com.
Senate Majority Leader Harry Reid (D-Nev.) is hosting a fundraiser for Alison Lundergan Grimes, Senate Minority Leader Mitch McConnell's (R-Ky.) Democratic opponent.
The move marks a shift for Reid, who late last year said he wouldn't campaign against McConnell because he didn't think it was "appropriate."
Though McConnell pledged not to campaign against Reid during his 2010 race, he also shifted course and helped Reid's GOP opponent, Sharron Angle, raise money.
In 2004, GOP Senate Leader Bill Frist (R-Tenn.) campaigned for John Thune (R-S.D.) against then-Democratic Leader Tom Daschle (D-S.D.).
The fundraiser, first reported by Kentucky news outlet Channel 2, is scheduled for Oct. 11 at a McCormick & Schmick's restaurant in Las Vegas, and is organized by the Searchlight Leadership Fund, an organization Reid founded.
Tickets range from $1,000 for an individual to $5,000 for a political action committee co-host.
The Kentucky Republican Party pounced on the fundraiser. Spokeswoman Kelsey Cooper drew ties between the candidate and Reid's positions on coal.
"It should tell Kentuckians all they need to know that the biggest supporter of Alison Lundergan Grimes' campaign is Harry Reid, the man who thinks 'coal makes us sick.' If Alison was elected to the Senate, it would ensure that liberal Harry Reid and Barack Obama continue to set national coal policy rather than a Kentucky champion for coal, Mitch McConnell," Cooper said in a statement.
Coal mining is a big industry in Kentucky, and Republicans believe Lundergan Grimes may be vulnerable because of Democratic-backed policies to curb emissions from coal-fired power plants, which opponents say will hurt the coal industry.
The Lundergan Grimes campaign and Reid's office didn't respond to requests for comment.
Two plasma membrane H(+)-ATPase genes are differentially expressed in iron-deficient cucumber plants.
The aim of the present work was to investigate the involvement of plasma membrane (PM) H(+)-ATPase (E.C. 3.6.3.6) isoforms of cucumber (Cucumis sativus L.) in the response to Fe deficiency. Two PM H(+)-ATPase cDNAs (CsHA1 and CsHA2) were isolated from cucumber and their expression analysed as a function of Fe nutritional status. Semi-quantitative reverse transcriptase (RT)-PCR and quantitative real-time RT-PCR revealed in Fe-deficient roots an enhanced accumulation of CsHA1 gene transcripts, which were hardly detectable in leaves. Supply of iron to deficient plants caused a decrease in the transcript level of CsHA1. In contrast, CsHA2 transcripts, detected both in roots and leaves, appeared to be unaffected by Fe. This work shows for the first time that a transcriptional regulation of PM H(+)-ATPase involving a specific isoform occurs in the response to Fe deficiency.
Determinants and effects of electoral party coalitions: The case of Brazil
Abstract
Since the 1985 return to democracy, Brazilian politicians have resorted to vote-pooling arrangements to elect representatives. A puzzle thus presents itself: What drives parties to join these electoral cartels? The dissertation unraveled the incentives party elites have to participate in coalitions under a presidentialist system of government. I also investigated the effect of electoral coalitions on congressional representation. I applied a model of binary outcomes and relied on standard deviations to assess the ideological homogeneity/heterogeneity of electoral coalitions. I also calculated the Index of Disproportionality to measure the gaps between the proportion of votes and seats received by all parties in Brazil with and without electoral coalitions. Finally, I assessed the effects of the electoral formula on proportionality. An unexpected exogenous factor proved crucial in explaining proportional electoral coalition building: the district's majoritarian election for governor. In each district, political actors often synchronize coalition partners to maximize winning results while minimizing electoral efforts.
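The Index of Disproportionality referenced in the abstract is commonly operationalized as Gallagher's least-squares index. As a rough sketch of that calculation (the party vote and seat shares below are hypothetical, not taken from the dissertation's data):

```python
def gallagher_index(vote_shares, seat_shares):
    """Least-squares index of disproportionality, in percentage points.

    Both arguments are sequences of party shares, in percent, that each
    sum to roughly 100; higher values mean a larger vote-seat gap.
    """
    assert len(vote_shares) == len(seat_shares)
    return (0.5 * sum((v - s) ** 2 for v, s in zip(vote_shares, seat_shares))) ** 0.5

# Hypothetical three-party example: vote shares vs. seat shares (%).
votes = [45.0, 35.0, 20.0]
seats = [50.0, 35.0, 15.0]
print(round(gallagher_index(votes, seats), 2))  # 5.0
```

Running the index once with and once without coalition-pooled lists, as the dissertation does, lets the two vote-seat gaps be compared directly.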
Derby boss Gary Rowett has added some spice to Saturday’s clash with Norwich City after suggesting the Canaries weren’t getting the best out of Cameron Jerome.
The striker was sold to the promotion-chasing Rams for an initial £1.5million during the January transfer window, which could rise to around £2m should Rowett’s side go up.
They sit second in the Championship table ahead of City’s visit to Pride Park, with Jerome scoring his first Derby goal in a 3-0 home win over Brentford on Saturday, having also created Tom Huddlestone’s opener.
“Cameron is an all-rounder,” Rowett said. “He is strong, he is physical, you can kick to him, put balls in behind and he can unsettle defenders. His build up play is pretty good and he can score goals.
“In nearly every full season he has had in the Championship, when he has been used the right way, he has scored 15 to 20 goals.
“He is not fully fit, you could see that at times against Brentford, but he will get better. The only way sometimes to get that fitness is to get him out there.
“We are really pleased with him, pleased he got his goal because that always settles strikers down, but pleased with his all-round performance also. He is going to be a big player for us.”
The 31-year-old scored 42 goals in 138 games having joined from Stoke in August 2014 in a deal reportedly worth £1.5m, going on to score 21 goals to fire City to promotion – famously scoring in the play-off final at Wembley.
He was sold having scored twice in 17 matches under Daniel Farke this season, 12 of which were starts, and is now part of a Rams squad which is unbeaten in 11 league games. The Canaries will also head into the game in good spirits, having won five of their last seven games, but Rowett feels Jerome has added to the firepower Derby have available, with former City favourite Bradley Johnson also in their ranks.
“We have got different types of strikers,” Rowett continued, speaking to the Derby Telegraph. “David Nugent has got great movement, can score goals, is a very busy player, and experienced.
“Sam Winnall is a goal poacher, who wants to get into good positions.”
Why Can’t We Cane Criminals?
People hate on the United States a bunch, but we’re still number one when it comes to putting people in jail. The US has the highest incarceration rate in the world, even higher than China, which has a billion more people and, we’re told, an oppressive government. Peter Moskos has a modest proposal to solve this problem: Instead of locking people up, beat them with a cane. He wrote a book on the topic called In Defense of Flogging that’s half thought experiment and half serious policy suggestion. The idea is that a convict could choose whether to spend, say, six years in prison, or take 12 lashes on his ass. In the book—which is both fascinating and simply written—he talks about the history of prisons and the sheer stupidity of our current system, but the most interesting thing is that flogging could totally work. It’s mean enough to be a punishment (you’ll probably have scars from it), but it won’t destroy your life the way a long prison sentence would. Recently, I talked to Peter over a few beers and I think we solved the US’s incarceration problem.
VICE: Tell me the summary of the book you’ve been giving everyone who asks you about it. Peter Moskos: Prisons suck. The book could have been called Why Prison?, but then no one would read it. By framing the debate from a flogging perspective, you can talk about how bad prisons are. Most people, at least on the liberal side, think flogging is cruel and barbaric. Then you give them the choice between that and prison and they’re like, “Oh, I’d pick flogging.” No one wants to go to prison because people don’t want to be locked in a cage. At some level, prison is a very bizarre form of punishment.
When I told people about the idea of replacing prison with flogging, they were worried that flogging would let people off too easily. Good god, I hope we haven’t reached the point where flogging is too soft.
It depends on the crime. If you lash someone for selling weed or coke, everyone’s fine with that. But when you talk about murder… Murder’s strange in that it has a low level of recidivism. Usually you kill someone because you want to kill that person, and you’re not as much of a threat [to society] as say a rapist or a pedophile. I do argue that some people need to be locked up, but it’s so few compared to who we have behind bars.
There is a lot of anti-prison literature out there— Yeah, and it hasn’t done shit! That’s the problem. The anti-prison people have been talking for 40 years, and all the while we’ve built up the prison-industrial complex.
You come down pretty hard on left-wing prison reformers in the book. They refuse to acknowledge how damn conservative America is. We need something radical, and something that has an element of punishment. That’s the problem with the anti-prison people, they don’t want to punish people—or at least they come off that way. It’s all about rehabilitation for them. Even if we could completely reform prisoners in an instant, that’s not quite justice. They did something bad, they need to be punished.
You also talked about how prisons in America started out as a way to “reform” criminals, which you say is pretty much impossible. If there is rehabilitation, it has to be separated from incarceration, because there’s no worse environment for rehab than prison. I mean, if you have problems that caused you to commit your crimes, you’re going to go to prison and get better? I hope the book helps people stop pretending that convicts leave prison better than they went in, because that’s never happened. What happens is they end up back in prison. That’s why prisons fail, because they don’t cure the criminal.
Flogging probably doesn’t either, but at least it wouldn’t pretend to. Often I’m asked, “Does flogging work?” And not to sound like Bill Clinton, but it depends on what you mean by “work.” My usual answer is that it works because it’s not prison. It doesn’t fuck people up as much, and it’s a hell of a lot cheaper. It’s absurd that we spend billions of dollars on something that makes people more likely to commit crime.
What about other alternatives to prison? I was thinking about the stocks. That’s not in the book because I wanted to keep it short, but I’m not opposed to stocks. It would be a good litmus test, because if the community doesn’t throw rotten tomatoes at the person in the stocks and shame them, maybe the law is wrong.
Speaking of alternatives, you note that the hated-by-liberals Arizona Sheriff Joe Arpaio should be praised for at least trying new things, like chain gangs. He fails at the standards he sets. He’s not deterring crime, and what he does doesn’t work. But why are liberals so opposed to chain gangs? I know the answer: It makes them think of slavery. So liberals are more concerned with their squeamishness than the prisoners? Prisoners want to be on chain gangs because it’s not prison. It’s a desired assignment. I think chain gangs are good because it forces us to think, “Wow, look at that chain gang! Nineteen out of 20 of them are black!” There’s so much of this out of sight, out of mind attitude.
Can you see any problems with flogging? There is a weakness to my argument—someone could mug you, get flogged, and then mug you on the way home.
Or if you really wanted to kill somebody… And you failed? [laughs]
Yeah, you fail, get flogged, and you’re like, “I still hate them, so I better kill them now.” That is a problem with flogging. But if the alternative is prison, we have to accept the fact that we’re going to lock up everyone forever, and we can’t do that.
Who's "u'r"? I'm pretty sure it was Rene Pfeiffer who wrote that
article, not "u'r".
HINT: please use standard English, and preferably standard punctuation
as well. As you already know, your English isn't of the best; please
don't make it any more difficult by adding more levels of obscurity to
it.
> but i've some error when execute the perl script,,
>
> i've attach the screenshoot of the error and the script, , ,
Next time, please just copy and paste the error. It's plain text, so
there's no need for screenshots.
The error - which I've had to transcribe from your screenshot (an
unnecessary waste of time) - was
Communications Error at ./cgate_migrate_ldap.pl line 175, <DATA> line 225.
Taking a look at line 175 of the script shows a pretty good possibility
for where the possible problem might be:
If you use correct language you also decrease the probability of ending
up in spam folders. Both filter systems I use trash illegible e-mails on
sight.
> Taking a look at line 175 of the script shows a pretty good possibility
> for where the possible problem might be:
>
> ```
> 173 --> # Bind to servers
> 174 --> $mesg = $ldap_source->bind($binddn_source, password => 'ldapbppt');
> 175 --> $mesg->code && die $mesg->error;
> ```
>
> That is, you're trying to bind to a server using the same password that
> Rene had used in his article. Unless you're using that password, the
> connection is going to fail.
I'd suggest the same. Your error is most probably from a failed bind
attempt.
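The diagnosis above (a failed bind because the password was copied verbatim from the article) reflects a generic pattern: perform the call, check the result code, and die with the server's diagnostic. A rough Python sketch of that pattern; `BindResult` and `bind` below are hypothetical stand-ins, not a real LDAP client:

```python
class BindResult:
    """Stand-in for an LDAP operation result."""
    def __init__(self, code, error=""):
        self.code = code      # 0 means success, anything else is a failure
        self.error = error    # server-supplied diagnostic message

def bind(dn, password):
    # Fake bind: only one hard-coded credential succeeds, mimicking a
    # server that rejects a password copied from a tutorial.
    if password == "correct-password":
        return BindResult(0)
    return BindResult(49, "Invalid credentials")  # 49 = LDAP invalidCredentials

def bind_or_die(dn, password):
    # Mirrors the Perl idiom `$mesg->code && die $mesg->error;`
    mesg = bind(dn, password)
    if mesg.code:
        raise RuntimeError(mesg.error)
    return mesg

print(bind_or_die("cn=admin", "correct-password").code)  # 0
```

With the tutorial's password, `bind_or_die` raises with the server's "Invalid credentials" text, which is exactly the kind of message the script's `die` would print.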
{
"requires": true,
"lockfileVersion": 1,
"dependencies": {
"bn.js": {
"version": "4.11.8",
"resolved": "https://registry.npmjs.org/bn.js/-/bn.js-4.11.8.tgz",
"integrity": "sha512-ItfYfPLkWHUjckQCk8xC+LwxgK8NYcXywGigJgSwOP8Y2iyWT4f2vsZnoOXTTbo+o5yXmIUJ4gn5538SO5S3gA=="
},
"brorand": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/brorand/-/brorand-1.1.0.tgz",
"integrity": "sha1-EsJe/kCkXjwyPrhnWgoM5XsiNx8="
},
"elliptic": {
"version": "6.4.0",
"resolved": "https://registry.npmjs.org/elliptic/-/elliptic-6.4.0.tgz",
"integrity": "sha1-ysmvh2LIWDYYcAPI3+GT5eLq5d8=",
"requires": {
"bn.js": "4.11.8",
"brorand": "1.1.0",
"hash.js": "1.1.3",
"hmac-drbg": "1.0.1",
"inherits": "2.0.3",
"minimalistic-assert": "1.0.1",
"minimalistic-crypto-utils": "1.0.1"
}
},
"eth-ecies": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/eth-ecies/-/eth-ecies-1.0.3.tgz",
"integrity": "sha512-V53TmjK96MROnuj1jUa0a9nguiBSgOJ1J6q1cmkOGzJh/MKT9EIIwL0AsvYQ/R74froMsW8IA0b+xLMIOD0z3w==",
"requires": {
"elliptic": "6.4.0",
"safe-buffer": "5.1.2"
}
},
"hash.js": {
"version": "1.1.3",
"resolved": "https://registry.npmjs.org/hash.js/-/hash.js-1.1.3.tgz",
"integrity": "sha512-/UETyP0W22QILqS+6HowevwhEFJ3MBJnwTf75Qob9Wz9t0DPuisL8kW8YZMK62dHAKE1c1p+gY1TtOLY+USEHA==",
"requires": {
"inherits": "2.0.3",
"minimalistic-assert": "1.0.1"
}
},
"hmac-drbg": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/hmac-drbg/-/hmac-drbg-1.0.1.tgz",
"integrity": "sha1-0nRXAQJabHdabFRXk+1QL8DGSaE=",
"requires": {
"hash.js": "1.1.3",
"minimalistic-assert": "1.0.1",
"minimalistic-crypto-utils": "1.0.1"
}
},
"hoist-non-react-statics": {
"version": "2.5.5",
"resolved": "https://registry.npmjs.org/hoist-non-react-statics/-/hoist-non-react-statics-2.5.5.tgz",
"integrity": "sha512-rqcy4pJo55FTTLWt+bU8ukscqHeE/e9KWvsOW2b/a3afxQZhwkQdT1rPPCJ0rYXdj4vNcasY8zHTH+jF/qStxw=="
},
"inherits": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.3.tgz",
"integrity": "sha1-Yzwsg+PaQqUC9SRmAiSA9CCCYd4="
},
"invariant": {
"version": "2.2.4",
"resolved": "https://registry.npmjs.org/invariant/-/invariant-2.2.4.tgz",
"integrity": "sha512-phJfQVBuaJM5raOpJjSfkiD6BpbCE4Ns//LaXl6wGYtUBY83nWS6Rf9tXm2e8VaK60JEjYldbPif/A2B1C2gNA==",
"requires": {
"loose-envify": "1.3.1"
}
},
"js-tokens": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-3.0.2.tgz",
"integrity": "sha1-mGbfOVECEw449/mWvOtlRDIJwls="
},
"lodash": {
"version": "4.17.10",
"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.10.tgz",
"integrity": "sha512-UejweD1pDoXu+AD825lWwp4ZGtSwgnpZxb3JDViD7StjQz+Nb/6l093lx4OQ0foGWNRoc19mWy7BzL+UAK2iVg=="
},
"lodash-es": {
"version": "4.17.10",
"resolved": "https://registry.npmjs.org/lodash-es/-/lodash-es-4.17.10.tgz",
"integrity": "sha512-iesFYPmxYYGTcmQK0sL8bX3TGHyM6b2qREaB4kamHfQyfPJP0xgoGxp19nsH16nsfquLdiyKyX3mQkfiSGV8Rg=="
},
"loose-envify": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/loose-envify/-/loose-envify-1.3.1.tgz",
"integrity": "sha1-0aitM/qc4OcT1l/dCsi3SNR4yEg=",
"requires": {
"js-tokens": "3.0.2"
}
},
"minimalistic-assert": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/minimalistic-assert/-/minimalistic-assert-1.0.1.tgz",
"integrity": "sha512-UtJcAD4yEaGtjPezWuO9wC4nwUnVH/8/Im3yEHQP4b67cXlD/Qr9hdITCU1xDbSEXg2XKNaP8jsReV7vQd00/A=="
},
"minimalistic-crypto-utils": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/minimalistic-crypto-utils/-/minimalistic-crypto-utils-1.0.1.tgz",
"integrity": "sha1-9sAMHAsIIkblxNmd+4x8CDsrWCo="
},
"object-assign": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/object-assign/-/object-assign-4.1.1.tgz",
"integrity": "sha1-IQmtx5ZYh8/AXLvUQsrIv7s2CGM="
},
"prop-types": {
"version": "15.6.2",
"resolved": "https://registry.npmjs.org/prop-types/-/prop-types-15.6.2.tgz",
"integrity": "sha512-3pboPvLiWD7dkI3qf3KbUe6hKFKa52w+AE0VCqECtf+QHAKgOL37tTaNCnuX1nAAQ4ZhyP+kYVKf8rLmJ/feDQ==",
"requires": {
"loose-envify": "1.3.1",
"object-assign": "4.1.1"
}
},
"react-redux": {
"version": "5.0.7",
"resolved": "https://registry.npmjs.org/react-redux/-/react-redux-5.0.7.tgz",
"integrity": "sha512-5VI8EV5hdgNgyjfmWzBbdrqUkrVRKlyTKk1sGH3jzM2M2Mhj/seQgPXaz6gVAj2lz/nz688AdTqMO18Lr24Zhg==",
"requires": {
"hoist-non-react-statics": "2.5.5",
"invariant": "2.2.4",
"lodash": "4.17.10",
"lodash-es": "4.17.10",
"loose-envify": "1.3.1",
"prop-types": "15.6.2"
}
},
"safe-buffer": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz",
"integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g=="
}
}
}
<?php
/**
* @package Admin
* @subpackage Partners
*/
class Form_PartnerCreate extends Infra_Form
{
public function init()
{
parent::init();
// Set the method for the display form to POST
$this->setMethod('post');
$this->setName('new_account'); // form id
$this->setAttrib('class', 'inline-form');
$this->addElement('text', 'name', array(
'label' => 'partner-create form name',
'required' => true,
'filters' => array('StringTrim'),
));
$this->addElement('text', 'company', array(
'label' => 'partner-create form company',
'filters' => array('StringTrim'),
));
$this->addElement('text', 'admin_email', array(
'label' => 'partner-create form admin email',
'required' => true,
'validators' => array('EmailAddress'),
'filters' => array('StringTrim'),
));
$this->addElement('text', 'phone', array(
'label' => 'partner-create form admin phone',
'required' => true,
'filters' => array('StringTrim'),
));
$this->addElement('select', 'partner_package', array(
'label' => 'partner-create form package',
'filters' => array('StringTrim'),
'required' => true,
));
$this->addElement('select', 'partner_package_class_of_service', array(
'label' => 'Class of Service:',
'filters' => array('StringTrim'),
));
$this->addElement('select', 'vertical_clasiffication', array(
'label' => 'Vertical Classification:',
'filters' => array('StringTrim'),
));
$this->addElement('text', 'website', array(
'label' => 'partner-create form url',
'filters' => array('StringTrim'),
));
$this->addElement('text', 'additional_param_1_key', array(
'label' => 'partner-create form key 1',
'filters' => array('StringTrim'),
));
$this->addElement('text', 'additional_param_1_val', array(
'label' => 'partner-create form val 1',
'filters' => array('StringTrim'),
));
$this->addElement('text', 'additional_param_2_key', array(
'label' => 'partner-create form key 2',
'filters' => array('StringTrim'),
));
$this->addElement('text', 'additional_param_2_val', array(
'label' => 'partner-create form val 2',
'filters' => array('StringTrim'),
));
$this->addElement('select', 'partner_template_id', array(
'label' => 'Select Template Partner ID:',
'filters' => array('StringTrim'),
));
$this->addElement('select', 'partner_language', array(
'label' => "Select partner's UI language:",
'filters' => array('StringTrim'),
));
$this->addDisplayGroup(array('name', 'company', 'admin_email', 'phone', 'describe_yourself', 'partner_package', 'partner_package_class_of_service' , 'vertical_clasiffication', 'partner_language' , 'partner_template_id'), 'partner_info', array(
'legend' => 'Publisher Info',
'decorators' => array(
'Description',
'FormElements',
array('Fieldset'),
)
));
$this->addDisplayGroup(array('website', 'content_categories', 'adult_content'), 'website_info', array(
'legend' => 'Website Info',
'decorators' => array(
'Description',
'FormElements',
array('Fieldset'),
)
));
$this->addDisplayGroup(array('additional_param_1_key', 'additional_param_1_val'), 'additional_param_1', array(
'legend' => 'Additional Param 1',
'decorators' => array(
'Description',
'FormElements',
array('Fieldset'),
)
));
$this->addDisplayGroup(array('additional_param_2_key', 'additional_param_2_val'), 'additional_param_2', array(
'legend' => 'Additional Param 2',
'decorators' => array(
'Description',
'FormElements',
array('Fieldset'),
)
));
$this->addElement('button', 'submit', array(
'label' => 'partner-create form create',
'type' => 'submit',
'decorators' => array('ViewHelper')
));
$this->addDisplayGroup(array('submit'), 'buttons1', array(
'decorators' => array(
'FormElements',
array('HtmlTag', array('tag' => 'div', 'class' => 'buttons')),
)
));
}
}
Testicular development, epididymal sperm reserves and seminal quality in two-year-old Hereford and Angus bulls: effects of two levels of dietary energy.
The effect of high (HED) and medium energy diets (MED), fed to Hereford (H) and Angus (A) bulls from 6 through 24 mo of age, on scrotal circumference (SC), paired testes weight (PTW), epididymal sperm reserves (ESR) and seminal traits was examined. Over 3 yr, 120 bulls were involved. Angus exceeded H for both SC and PTW. Hereford bulls in yr 2 had smaller SC than in yr 1 or 3, but the response for A was consistent. Year affected PTW. In yr 2, Hereford bulls fed HED had 75% fewer ESR than MED-H bulls (9.3 vs 37.2 x 10^9). Comparably treated A bulls had similar ESR numbers (29.2 vs 33.4 x 10^9). In yr 3, epididymal sperm reserves of HED-H were depressed by 35% compared with MED-H (23.1 vs 35.7 x 10^9), whereas HED-A had 14% fewer ESR than did MED-A bulls (28.6 vs 33.1 x 10^9). It was not obvious why H bulls were more susceptible to the effects of HED. Seminal quality of HED bulls was inferior to that of MED bulls, particularly with respect to progressive motility and the incidence of sperm in which a crater defect of the head was present at 2 yr of age. In yr 2, all seminal traits were severely depressed in 2-yr-old HED-H. Feeding HED to young H and A bulls reduced their reproductive potential.
Q:
Why does this SQL result in Index Scan instead of an Index Seek?
Can someone please help me tune this SQL query?
SELECT a.BuildingID, a.ApplicantID, a.ACH, a.Address, a.Age, a.AgentID, a.AmenityFee, a.ApartmentID, a.Applied, a.AptStatus, a.BikeLocation, a.BikeRent, a.Children,
a.CurrentResidence, a.Email, a.Employer, a.FamilyStatus, a.HCMembers, a.HCPayment, a.Income, a.Industry, a.Name, a.OccupancyTimeframe, a.OnSiteID,
a.Other, a.ParkingFee, a.Pets, a.PetFee, a.Phone, a.Source, a.StorageLocation, a.StorageRent, a.TenantSigned, a.WasherDryer, a.WasherRent, a.WorkLocation,
a.WorkPhone, a.CreationDate, a.CreatedBy, a.LastUpdated, a.UpdatedBy
FROM dbo.NPapplicants AS a INNER JOIN
dbo.NPapartments AS apt ON a.BuildingID = apt.BuildingID AND a.ApartmentID = apt.ApartmentID
WHERE (apt.Offline = 0)
AND (apt.MA = 'M')
Here's what the Execution Plan looks like:
What I don't understand is why I'm getting a Index Scan for NPapplicants. I have an Index that covers BuildingID and ApartmentID. Shouldn't that be used?
A:
It is because the optimizer expects close to 10K records to match. Going back to the data to retrieve the other columns for 10K keys costs at least as much as scanning something like 100K records and filtering with a hash match.
As for access to the other table, the Query Optimizer has decided that your index is useful (probably against Offline or MA) so it is seeking on that index to get the join keys.
These two are then HASH matched for intersections to produce the final output.
A:
A seek in a B-Tree index is several times as expensive as a table scan (per record).
Additionally, another seek in the clustered index should be made to retrieve the values of other columns.
If a large portion of records is expected to match, then it is cheaper to scan the clustered index.
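The tradeoff described above can be made concrete with a toy cost model. The constants below are invented (say each key lookup costs ten times one scanned row); only the ratio matters, not the absolute units:

```python
def scan_cost(total_rows, cost_per_row=1.0):
    # A clustered-index scan touches every row once, cheaply and sequentially.
    return total_rows * cost_per_row

def seek_cost(matching_rows, cost_per_lookup=10.0):
    # Each match needs a B-tree seek plus a lookup back into the clustered
    # index for the remaining columns, which is far pricier per row.
    return matching_rows * cost_per_lookup

total = 100_000
# With ~10K matches, seeking 10K keys already rivals scanning the whole table.
print(seek_cost(10_000) >= scan_cost(total))   # True
# With only 100 matches, the seeks are far cheaper and the index wins.
print(seek_cost(100) < scan_cost(total))       # True
```

This is why the plan flips from seek to scan as the estimated number of matching rows grows: past a break-even selectivity, the scan is simply cheaper.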
To make sure that the optimizer had chosen the best method, you may run this:
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT a.BuildingID, a.ApplicantID, a.ACH, a.Address, a.Age, a.AgentID, a.AmenityFee, a.ApartmentID, a.Applied, a.AptStatus, a.BikeLocation, a.BikeRent, a.Children,
a.CurrentResidence, a.Email, a.Employer, a.FamilyStatus, a.HCMembers, a.HCPayment, a.Income, a.Industry, a.Name, a.OccupancyTimeframe, a.OnSiteID,
a.Other, a.ParkingFee, a.Pets, a.PetFee, a.Phone, a.Source, a.StorageLocation, a.StorageRent, a.TenantSigned, a.WasherDryer, a.WasherRent, a.WorkLocation,
a.WorkPhone, a.CreationDate, a.CreatedBy, a.LastUpdated, a.UpdatedBy
FROM dbo.NPapplicants AS a INNER JOIN
dbo.NPapartments AS apt ON a.BuildingID = apt.BuildingID AND a.ApartmentID = apt.ApartmentID
WHERE (apt.Offline = 0)
AND (apt.MA = 'M')
SELECT a.BuildingID, a.ApplicantID, a.ACH, a.Address, a.Age, a.AgentID, a.AmenityFee, a.ApartmentID, a.Applied, a.AptStatus, a.BikeLocation, a.BikeRent, a.Children,
a.CurrentResidence, a.Email, a.Employer, a.FamilyStatus, a.HCMembers, a.HCPayment, a.Income, a.Industry, a.Name, a.OccupancyTimeframe, a.OnSiteID,
a.Other, a.ParkingFee, a.Pets, a.PetFee, a.Phone, a.Source, a.StorageLocation, a.StorageRent, a.TenantSigned, a.WasherDryer, a.WasherRent, a.WorkLocation,
a.WorkPhone, a.CreationDate, a.CreatedBy, a.LastUpdated, a.UpdatedBy
FROM dbo.NPapplicants WITH (INDEX (index_name)) AS a
INNER JOIN
dbo.NPapartments AS apt ON a.BuildingID = apt.BuildingID AND a.ApartmentID = apt.ApartmentID
WHERE (apt.Offline = 0)
AND (apt.MA = 'M')
Replace index_name with the actual name of your index and compare the execution times and the numbers of I/O operations (as seen in the messages tab)
# --
# Copyright (C) 2001-2020 OTRS AG, https://otrs.com/
# --
# This software comes with ABSOLUTELY NO WARRANTY. For details, see
# the enclosed file COPYING for license information (GPL). If you
# did not receive this file, see https://www.gnu.org/licenses/gpl-3.0.txt.
# --
<fieldset class="TableLike">
<label>[% Translate("Object") | html %]:</label>
<div class="Value">[% Translate(Data.ObjectName) | html %]</div>
<div class="Clear"></div>
<label>[% Translate("Description") | html %]:</label>
<div class="Value">[% Data.Description | html %]</div>
<div class="Clear"></div>
[% RenderBlockStart("Format") %]
<label for="Format">[% Translate("Format") | html %]:</label>
<div class="Value">[% Data.SelectFormat %]</div>
<div class="Clear"></div>
[% RenderBlockEnd("Format") %]
[% RenderBlockStart("FormatFixed") %]
<input type="hidden" id="Format" name="Format" value="[% Data.FormatKey | html %]"/>
<label>[% Translate("Format") | html %]:</label>
<div class="Value">[% Translate(Data.Format) | html %]</div>
<div class="Clear"></div>
[% RenderBlockEnd("FormatFixed") %]
[% RenderBlockStart("TimeZone") %]
<label for="TimeZone">[% Translate("Time Zone") | html %]:</label>
<div class="Value">
[% Data.SelectTimeZone %]
<p class="FieldExplanation">
[% Translate('The selected time periods in the statistic are time zone neutral.') | html %]
</p>
</div>
<div class="Clear"></div>
[% RenderBlockEnd("TimeZone") %]
[% RenderBlockStart("ExchangeAxis") %]
<label>[% Translate("Exchange Axis") | html %]:</label>
<div class="Value">[% Data.ExchangeAxis %]</div>
<div class="Clear"></div>
[% RenderBlockEnd("ExchangeAxis") %]
</fieldset>
[% RenderBlockStart("Static") %]
<h2>
[% Translate("Configurable Params of Static Stat") | html %]
</h2>
<fieldset class="TableLike">
[% RenderBlockStart("ItemParam") %]
<label for="[% Data.Name | html %]">[% Translate(Data.Param) | html %]:</label>
<div class="Value">[% Data.Field %]</div>
<div class="Clear"></div>
[% RenderBlockEnd("ItemParam") %]
</fieldset>
[% RenderBlockEnd("Static") %]
[% RenderBlockStart("Dynamic") %]
<fieldset class="TableLike">
<legend><span>[% Translate(Data.Name) | html %]</span></legend>
[% RenderBlockStart("NoElement") %]
<div class="Value">
<p class="FieldExplanation">[% Translate("No element selected.") | html %]</p>
</div>
[% RenderBlockEnd("NoElement") %]
[% RenderBlockStart("Element") %]
<label>[% Translate(Data.Name) | html %]:</label>
<div class="Value">
[% RenderBlockStart("TimePeriodFixed") %]
[% Translate("Between %s and %s") | html | ReplacePlaceholders(Data.TimeStart, Data.TimeStop) %]<br/>
[% RenderBlockEnd("TimePeriodFixed") %]
[% RenderBlockStart("TimeRelativeFixed") %]
[% Translate("The past complete %s and the current+upcoming complete %s %s") | html | ReplacePlaceholders(Data.TimeRelativeCount, Data.TimeRelativeUpcomingCount, Data.TimeRelativeUnit) %]<br/>
[% RenderBlockEnd("TimeRelativeFixed") %]
[% RenderBlockStart("TimeScaleFixed") %]
[% Translate("Scale") | html %]: [% Data.Count %] [% Translate(Data.Scale) | html %]
[% RenderBlockEnd("TimeScaleFixed") %]
[% RenderBlockStart("Fixed") %]
<div title="[% Data.Value | html %]">
<span class="DataTruncated">[% Data.Value | truncate(120) | html %]</span>
[% IF Data.Value.length > 120 %]
<span class="DataFull Hidden">[% Data.Value | html %]</span>
<a href="#" class="DataShowMore">
<span class="More"><i class="fa fa-long-arrow-right"></i> [% Translate("show more") | html %]</span>
<span class="Less Hidden"><i class="fa fa-long-arrow-left"></i> [% Translate("show less") | html %]</span>
</a>
[% END %]
</div>
[% RenderBlockEnd("Fixed") %]
[% RenderBlockStart("MultiSelectField") %]
[% Data.SelectField %]
[% RenderBlockEnd("MultiSelectField") %]
[% RenderBlockStart("SelectField") %]
[% Data.SelectField %]
[% RenderBlockEnd("SelectField") %]
[% RenderBlockStart("InputField") %]
<input type="text" id="[% Data.Key | html %]" name="[% Data.Key | html %]" value="[% Data.Value | html %]" class="W25pc [% Data.CSSClass | html %]"[% FOR DataAttribute IN Data.HTMLDataAttributes.keys.sort %] data-[% DataAttribute | html %]="[% Data.HTMLDataAttributes.$DataAttribute | html %]"[% END %]/>
[% RenderBlockEnd("InputField") %]
</div>
<div class="Clear"></div>
[% RenderBlockStart("TimePeriod") %]
<label><em>[% Translate("Absolute period") | html %]</em>:</label>
<div class="Value">
<p>
[% Translate("Between %s and %s") | html | ReplacePlaceholders(Data.TimeStart, Data.TimeStop) %]<br/>
</p>
</div>
<div class="Clear"></div>
[% RenderBlockEnd("TimePeriod") %]
[% RenderBlockStart("TimeScale") %]
<label><em>[% Translate("Scale") | html %]</em>:</label>
<div class="Value">
[% IF Data.TimeScaleCount %]
[% Data.TimeScaleCount %]
[% END %]
[% Data.TimeScaleUnit %]
</div>
<div class="Clear"></div>
[% RenderBlockEnd("TimeScale") %]
[% RenderBlockStart("TimePeriodRelative") %]
<label><em>[% Translate("Relative period") | html %]</em>:</label>
<div class="Value">
[% Translate("The past complete %s and the current+upcoming complete %s %s") | html | ReplacePlaceholders(Data.TimeRelativeCount, Data.TimeRelativeUpcomingCount, Data.TimeRelativeUnit) %]
</div>
<div class="Clear"></div>
[% RenderBlockEnd("TimePeriodRelative") %]
[% RenderBlockEnd("Element") %]
</fieldset>
[% RenderBlockEnd("Dynamic") %]
#
# This file is subject to the terms and conditions of the GNU General Public
# License.
#
# Adapted for MIPS Pete Popov, Dan Malek
#
# Copyright (C) 1994 by Linus Torvalds
# Adapted for PowerPC by Gary Thomas
# modified by Cort (cort@cs.nmt.edu)
#
# Copyright (C) 2009 Lemote Inc. & DSLab, Lanzhou University
# Author: Wu Zhangjin <wuzhangjin@gmail.com>
#
# set the default size of the mallocing area for decompressing
BOOT_HEAP_SIZE := 0x400000
# Disable Function Tracer
KBUILD_CFLAGS := $(shell echo $(KBUILD_CFLAGS) | sed -e "s/-pg//")
KBUILD_CFLAGS := $(LINUXINCLUDE) $(KBUILD_CFLAGS) -D__KERNEL__ \
-DBOOT_HEAP_SIZE=$(BOOT_HEAP_SIZE) -D"VMLINUX_LOAD_ADDRESS_ULL=$(VMLINUX_LOAD_ADDRESS)ull"
KBUILD_AFLAGS := $(LINUXINCLUDE) $(KBUILD_AFLAGS) -D__ASSEMBLY__ \
-DBOOT_HEAP_SIZE=$(BOOT_HEAP_SIZE) \
-DKERNEL_ENTRY=0x$(shell $(NM) $(objtree)/$(KBUILD_IMAGE) 2>/dev/null | grep " kernel_entry" | cut -f1 -d \ )
targets := head.o decompress.o dbg.o uart-16550.o uart-alchemy.o
# decompressor objects (linked with vmlinuz)
vmlinuzobjs-y := $(obj)/head.o $(obj)/decompress.o $(obj)/dbg.o
ifdef CONFIG_DEBUG_ZBOOT
vmlinuzobjs-$(CONFIG_SYS_SUPPORTS_ZBOOT_UART16550) += $(obj)/uart-16550.o
vmlinuzobjs-$(CONFIG_MIPS_ALCHEMY) += $(obj)/uart-alchemy.o
endif
targets += vmlinux.bin
OBJCOPYFLAGS_vmlinux.bin := $(OBJCOPYFLAGS) -O binary -R .comment -S
$(obj)/vmlinux.bin: $(KBUILD_IMAGE) FORCE
$(call if_changed,objcopy)
tool_$(CONFIG_KERNEL_GZIP) = gzip
tool_$(CONFIG_KERNEL_BZIP2) = bzip2
tool_$(CONFIG_KERNEL_LZMA) = lzma
tool_$(CONFIG_KERNEL_LZO) = lzo
targets += vmlinux.bin.z
$(obj)/vmlinux.bin.z: $(obj)/vmlinux.bin FORCE
$(call if_changed,$(tool_y))
targets += piggy.o
OBJCOPYFLAGS_piggy.o := --add-section=.image=$(obj)/vmlinux.bin.z \
--set-section-flags=.image=contents,alloc,load,readonly,data
$(obj)/piggy.o: $(obj)/dummy.o $(obj)/vmlinux.bin.z FORCE
$(call if_changed,objcopy)
# Calculate the load address of the compressed kernel image
hostprogs-y := calc_vmlinuz_load_addr
VMLINUZ_LOAD_ADDRESS = $(shell $(obj)/calc_vmlinuz_load_addr \
$(obj)/vmlinux.bin $(VMLINUX_LOAD_ADDRESS))
vmlinuzobjs-y += $(obj)/piggy.o
quiet_cmd_zld = LD $@
cmd_zld = $(LD) $(LDFLAGS) -Ttext $(VMLINUZ_LOAD_ADDRESS) -T $< $(vmlinuzobjs-y) -o $@
quiet_cmd_strip = STRIP $@
cmd_strip = $(STRIP) -s $@
vmlinuz: $(src)/ld.script $(vmlinuzobjs-y) $(obj)/calc_vmlinuz_load_addr
$(call cmd,zld)
$(call cmd,strip)
#
# Some DECstations need all possible sections of an ECOFF executable
#
ifdef CONFIG_MACH_DECSTATION
e2eflag := -a
endif
# elf2ecoff can only handle 32bit image
hostprogs-y += ../elf2ecoff
ifdef CONFIG_32BIT
VMLINUZ = vmlinuz
else
VMLINUZ = vmlinuz.32
endif
quiet_cmd_32 = OBJCOPY $@
cmd_32 = $(OBJCOPY) -O $(32bit-bfd) $(OBJCOPYFLAGS) $< $@
vmlinuz.32: vmlinuz
	$(call cmd,32)
quiet_cmd_ecoff = ECOFF $@
cmd_ecoff = $< $(VMLINUZ) $@ $(e2eflag)
vmlinuz.ecoff: $(obj)/../elf2ecoff $(VMLINUZ)
	$(call cmd,ecoff)
OBJCOPYFLAGS_vmlinuz.bin := $(OBJCOPYFLAGS) -O binary
vmlinuz.bin: vmlinuz
	$(call cmd,objcopy)
OBJCOPYFLAGS_vmlinuz.srec := $(OBJCOPYFLAGS) -S -O srec
vmlinuz.srec: vmlinuz
	$(call cmd,objcopy)
clean-files := $(objtree)/vmlinuz $(objtree)/vmlinuz.{32,ecoff,bin,srec}
Q:
Saving changes from a combobox in a database
I have a combobox and want to save the changes in the database.
What I want is: if the combobox selection was changed (true), it must run the code below; if it was not (false), it must skip the code and continue.
The code below checks whether the combobox is enabled, but when I run it, the condition is true even when nothing has been selected:
private void Log()
{
    if (kaartburgerlijkestand.Enabled)
    {
        veranderingBurgerlijkestaat();
    }
}
The code below is saving the data in the database
private string veranderingBurgerlijkestaat()
{
    string gebruiker = curMedewerker.Behandelnaam;
    string bekeken = prodermaform.pKaart();
    string tabblad = tabControl1.SelectedTab.Text;
    string pat = CPatient.GeefPatientNaam(patient.Id);
    string wijz = "Burgerlijkestaat: " + kaartBurgerlijkestand.Text;
    CDb dcon = new CDb();
    try
    {
        if (dcon.Open())
        {
            SqlCommand cmd = new SqlCommand("INSERT INTO dbo.loggen(Gebruiker, Bekeken, Tabblad, Patientnaam, Wijziging, Datum) VALUES(@gebruiker, @bekeken, @tabblad, @pat, @wijz, @datum)", dcon.Conn);
            cmd.Parameters.AddWithValue("@gebruiker", gebruiker);
            cmd.Parameters.AddWithValue("@bekeken", bekeken);
            cmd.Parameters.AddWithValue("@tabblad", tabblad);
            cmd.Parameters.AddWithValue("@pat", pat);
            cmd.Parameters.AddWithValue("@wijz", wijz);
            cmd.Parameters.AddWithValue("@datum", DateTime.Now);
            cmd.ExecuteNonQuery();
            cmd.Dispose();
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
    finally
    {
        dcon.Close();
    }
    return wijz;
}
Could someone show me an example of how to do it?
I found the solution. I added a check:
private void doCheck(object sender, EventArgs e)
{
    cmbox = false;
    if (kaartBurgerlijkestand.Focused)
    {
        veranderingBurgerlijkestaat();
    }
    cmbox = true;
}
Then I used the SelectedValueChanged Event
private void kaartBurgerlijkestand_SelectedValueChanged(object sender, EventArgs e)
{
    if (cmbox)
        doCheck(sender, e);
}
And it works fine.
I want to thank you all for helping me!
A:
Add an event listener; kaartburgerlijkestand.Enabled only checks whether the combobox is enabled for selection, not whether a value was chosen.
Add this line after the InitializeComponent(); call in your Form.cs file:
kaartburgerlijkestand.SelectedIndexChanged += new System.EventHandler(this.kaartburgerlijkestand_SelectedIndexChanged);
Also add a handler method for the code above:
private void kaartburgerlijkestand_SelectedIndexChanged(object sender, EventArgs e)
{
    veranderingBurgerlijkestaat();
}
Book Review: Expert Oracle Database 11g Administration
This is a huge book, which could possibly best be described as the condensed encyclopedia of Oracle databases. The book covers everything from administration to performance tuning to using SQL*Plus to installing Oracle on Linux to using RMAN. While I did not read this book cover to cover, I did page through the book looking for interesting topics to read. I did read the author’s “Expert Oracle Database 10g Administration” book cover to cover a couple years ago and was at the time very impressed with that book. There were a couple small errors in the 10g book, repeated a couple times, but I commented to a couple people that the 10g book is by far the best and most thorough Oracle reference book that I had run across. The appendix at the back of the 10g book was very effective in helping me find exactly the right topic, and usually the right syntax for just about any task. The appendix in the 11g version of the book is just about as good. It appears that the author may have rushed the job of updating the 10g book for 11g R1 as quite a few screen captures still show Oracle versions such as 10.1.0.3 and a couple other sections of the book also seem to be more specific to 10g R1 than 11g R1 (or 11g R2).
This book contains a lot of great and/or very helpful information, but while paging through the book I found a couple problems. Overlooking the problems, I would still recommend this book as a reference for Oracle 11g R1. The section on performance tuning is not my first choice for performance tuning information.
Problems found when paging through the book (I know that I probably missed several issues):
Page 90 mentions RAID 0+1 but not the more robust RAID 10.
Page 92 states “RAID 5 offers many advantages over the other levels of RAID. The traditional complaint about the `write penalty’ should be discounted because of sophisticated advances in write caches and other strategies that make RAID 5 more efficient than in the past.” Visit http://www.baarf.com/ to see the opinions of other DBAs.
Page 166 states “if you use an Oracle block size of 64KB (65,536 bytes)…” The maximum supported block size is 32KB, not 64KB, and some platforms support a maximum of a 16KB block size.
Page 171 states when suggesting the use of multiple block sizes in a single database “if you have large indexes in your database, you will need a large block size for their tablespaces.” “Oracle provides separate pools for the various block sizes, and this leads to better use of Oracle memory.” For those who have followed the multiple block size discussions over the years, it should be fairly clear that it is not a good idea to use multiple block sizes in a single database. Oracle’s documentation states that multiple block sizes are intended to be used only to support transportable tablespaces.
Page 181 states “The database writer process writes dirty blocks to disk under the following conditions… Every 3 seconds.” A check of the Oracle documentation will quickly confirm that this is not the case.
Page 182 states “Oracle further recommends that you first ensure that your system is using asynchronous I/O before deploying additional database writer processes beyond the default number – you might not need multiple database writer processes if so.” I think that I misread this several times as saying “do not enable multiple database writers unless you also plan to enable asynchronous I/O,” which would be an incorrect statement.
Page 190 states “this is why the buffer cache hit ratio, which measures the percentage of time users accessed the data they needed from the buffer cache (rather than requiring a disk read), is such an important indicator of performance of the Oracle instance.” The author provides a link on page 1161 to an article authored by Cary Millsap which discusses why a higher buffer cache hit ratio may not be ideal. This is definitely a step in the right direction regarding the buffer cache hit ratio, but it might be better to simply ignore the statistic.
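For readers who still want to compute the number while weighing Millsap's argument, one common formulation of the ratio from V$SYSSTAT is the following (a sketch using the standard statistic names; a high value does not by itself indicate good performance):

```sql
-- One common way to compute the buffer cache hit ratio.
SELECT 1 - (phy.value / (db.value + con.value)) AS buffer_cache_hit_ratio
  FROM v$sysstat db,
       v$sysstat con,
       v$sysstat phy
 WHERE db.name  = 'db block gets'
   AND con.name = 'consistent gets'
   AND phy.name = 'physical reads';
```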
Page 395 states “11.1.0 is an alternative name for Oracle Database 11g Release 2.” Oracle 11g R2 was just released on September 1, 2009 and its version is 11.2.0.1.
Page 402 correctly (according to the documentation) states that Oracle Enterprise Linux 4 and 5, as well as Red Hat Enterprise Linux are supported platforms for Oracle Database 11g, and correctly (according to the documentation) does not list Red Hat Enterprise Linux 3. Page 403 lists the required RPM packages for Red Hat Enterprise Linux 3, but ignores the supported Linux platforms.
Page 405 shows parameters that need to be set on Linux. This appears to be a direct copy of the parameters in the Oracle documentation, but the author did not include the net.core.wmem_max parameter. Note that Oracle 11.2.0.1 will require different parameters than those specified in this book, but that is not the fault of the author.
Page 452 states that the COMPATIBLE parameter may be set to “10.2 so the untested features of the new Oracle version won’t hurt your application.” I think that this is misleading at best.
Page 466 states “If you’re supporting data warehouse applications, it makes sense to have a very large DB_BLOCK_SIZE – something between 8KB and 32KB. This will improve database performance when reading huge chunks from disk.” This is not quite a correct statement, especially if the DB_FILE_MULTIBLOCK_READ_COUNT is set correctly, or not set in the case Oracle 10.2.0.1 or above is in use. 8KB is the standard block size, so I am not sure why the author groups it with the other block sizes in the very large block size group.
Page 477 states “the default value for the STATISTICS_LEVEL initialization parameter is TYPICAL. You need to use this setting if you want to use several of Oracle’s features, including Automatic Shared Memory Management.” This is an incomplete statement as a setting of ALL will also work.
Page 1055 shows the use of both CASCADE=>YES and CASCADE=>'TRUE' with DBMS_STATS. I believe that TRUE is the correct syntax, but it should not be specified in single quotes.
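The CASCADE parameter of DBMS_STATS.GATHER_TABLE_STATS is declared as a BOOLEAN, so the value should be the unquoted keyword TRUE. A minimal sketch (the table name T1 is hypothetical):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,
    tabname => 'T1',    -- hypothetical table
    cascade => TRUE);   -- BOOLEAN TRUE, not the string 'TRUE'
END;
/
```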
Page 1067 states “subqueries perform better when you use IN rather than EXISTS.” The reality is that Oracle may very well automatically transform queries using IN syntax into queries using EXISTS syntax (or vice-versa), and may even transform both IN syntax queries and EXISTS syntax queries into standard join syntax.
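For example, the optimizer will typically unnest both of the following logically equivalent forms into an ordinary join, so neither is inherently faster (table and column names are hypothetical):

```sql
-- IN form
SELECT e.employee_id
  FROM employees e
 WHERE e.department_id IN (SELECT d.department_id
                             FROM departments d
                            WHERE d.location_id = 1700);

-- EXISTS form, typically transformed to the same join
SELECT e.employee_id
  FROM employees e
 WHERE EXISTS (SELECT 1
                 FROM departments d
                WHERE d.department_id = e.department_id
                  AND d.location_id = 1700);
```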
Page 1074 stated that “inline stored functions can help improve the performance of your SQL statement.” The author then demonstrated this concept by converting a SQL statement with a three table equijoin accessing apparent primary and foreign keys of the tables into a SQL statement which accessed a single table and called two PL/SQL functions (per row), each of which queried one of the other two tables which were included in the original equijoin SQL statement. This approach does not seem to be a good idea given the number of additional context switches which will result, as well as the additional number of parse calls. In short, do not solve something in PL/SQL when it may be easily solved in SQL alone.
Page 1075 states “you should avoid the use of inequality and the greater than or equal to predicates, as they may also bypass indexes.” Something does not look right about that statement.
Page 1089 states “the indexes could become unbalanced in a database with a great deal of DML. It is important to rebuild such indexes regularly so queries can run faster.” Interesting suggestion – I believe that standard SELECT statements are classified as DML as they are definitely not DDL. An index cannot become unbalanced, and indexes rarely need to be rebuilt – see Richard Foote’s blog for the more details.
Page 1089 states “when you rebuild the indexes, include the COMPUTE STATISTICS statement so you don’t have to gather statistics after the rebuild.” Oracle 10g and above automatically collect statistics when building indexes, and I would assume that the same is true when rebuilding indexes.
Page 1108 states that "cpu_time is the total parse and execute time" when describing the columns found in V$SQL. It is actually a measure, in centiseconds (hundredths of a second), of the CPU consumed while executing the SQL statement, and probably also includes the CPU time for parsing the SQL statement, but I have not verified this.
Page 1154 provides a demonstration of finding sessions consuming a lot of CPU time. The example used the `CPU used by this session’ statistic rather than drilling through V$SYS_TIME_MODEL into V$SESS_TIME_MODEL. The example used the `CPU used by this session’ as a reason for examining parse CPU usage and recursive CPU usage. While it is good to check CPU usage of sessions, there are a couple problems with this approach. First, the `CPU used by this session’ statistic is not updated until the first fetch from a SQL statement is returned to the client. If the client was waiting for the last hour for the first row to be returned, the CPU utilization will show close to 0 seconds difference between the start of the SQL statement and the check of the `CPU used by this session’ statistic in V$SESSTAT – this is not the case for the statistics in V$SESS_TIME_MODEL, which are updated in near real-time. Second, the change (delta) in the statistic for the sessions was not determined – if one session had been connected for 120 days, it probably should have consumed more CPU time than a session connected for 30 minutes, and probably should not be investigated.
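A sketch of the time-model approach described above: V$SESS_TIME_MODEL reports values in microseconds and is updated in near real-time, so sampling it twice and subtracting gives per-session CPU deltas over an interval.

```sql
-- CPU consumed per session, from the session time model (microseconds).
-- Sample twice and diff the values to get deltas over an interval.
SELECT s.sid,
       s.value / 1000000 AS cpu_seconds
  FROM v$sess_time_model s
 WHERE s.stat_name = 'DB CPU'
 ORDER BY s.value DESC;
```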
Even with the above issues, I would still recommend this book as a reference. For specific information on performance tuning, using RMAN, or installing on Linux, it might be a good idea to use a second book for deeper understanding of the process.
Hints for Posting Code Sections in Comments
********************
When the spacing of text in a comment section is important for readability (execution plans, PL/SQL blocks, SQL, SQL*Plus output, etc.) please use a <pre> tag before the code section and a </pre> tag after the code section:
<pre>
SQL> SELECT
2 SYSDATE TODAY
3 FROM
4 DUAL;
TODAY
---------
01-MAR-12
</pre>
********************
When posting test case samples, it is much easier for people to reproduce the test case when the SQL*Plus line prefixes are not included - if possible, please remove those line prefixes. This:
SELECT
SYSDATE TODAY
FROM
DUAL;
Is easier to execute in a test case script than this:
SQL> SELECT
2 SYSDATE TODAY
3 FROM
4 DUAL;
********************
Greater than and less than signs in code sections are often interpreted as HTML formatting commands. Please replace these characters in the code sections with their HTML equivalents: &lt; for < and &gt; for >.
Return Of The Adirondack Moose
For an animal that's not only quite large, but also an iconic symbol of the North, moose in the Adirondack Mountains can be very difficult to find.
Part of that has to do with the vast range of the area, over 6 million acres, but most of the blame for their scarcity falls on man.
Moose were once abundant in the region, but the forests critical to their survival were decimated in the 1800s, and unregulated hunting completed the devastation.
Extirpated from the region by 1860, and absent from the Adirondacks until the 1980s, wandering moose eventually began moving back into the area from neighboring states. The recovery of the moose today has gone hand in hand with the restoration of the forests. Connie Prickett of the ADK Nature Conservancy is studying the moose in the Adirondacks. "Now these forests are regenerating beautifully. This is a story of restoration and recovery for forests, so for the moose to make a comeback after the forests have made a comeback is really exciting."
Heidi Kretser of the Wildlife Conservation Society is also researching the big mammals. "Once there were proper hunting regulations in line and habitat started to come back, the moose population expanded from Northern Maine, down into New Hampshire, into Vermont and finally we had the first moose come into New York State down near Whitehall around 1980."
The present population of moose in the Adirondacks is thought to be about 800, but a really accurate count has been difficult to determine. Studying these forest giants is difficult for a variety of reasons, and the researchers currently conducting studies have their work cut out for them.
Other than trail cameras, they rarely get to see a live moose. Much of the knowledge they gain, explains Alissa Rafferty of the ADK Nature Conservancy, is from clues the animals leave behind.
"We do use some of the motion detecting cameras that have been pretty successful in collecting supplemental data. Tracking is another method and WCS has used their method of scat collecting, and using scat sniffing dogs," she explained.
In a pilot study, the Wildlife Conservation Society brought in the specially trained dogs to locate moose droppings. The study was highly successful and shows great promise for future research, says Kretser. "Once you have scat, there's so much information in it," she said. "You can amplify the DNA, you can identify the moose down to an individual, you can identify whether it was a male or a female, and you can look at how that particular moose is related to other moose in your sample."
One of the most critical aspects in the health of the moose is habitat. They need lots of room to roam, said Rafferty. "At the Nature Conservancy, we're involved with wildlife connectivity efforts, trying to maintain linkages for wide ranging mammals like moose to be able to move freely from protected lands."
For the moment, the future of the Adirondack moose is still in question, its population still tenuous at best. Rafferty says just getting a glimpse of these magnificent mammals in the wild makes all the effort at preserving them worthwhile. "It's very cool, I've seen one on this property driving on the roads and just two others in the Adirondacks, and I grew up here. So they are hard to see, but it's a very special experience when you do."
Kretser agrees. "Overall, I think people do like the idea of natural re-colonization, and bringing back the wildlife that once roamed these great woods. It's been talked for wolves, it's been talked about for lynx, moose, it's been talked about for mountain lions! So there's a lot of interest in sort of re-wilding these wild areas." | 2024-06-09T01:26:19.665474 | https://example.com/article/7730 |
[Heterochromia complicata Fuchs, crystalline iridopathy and increased immunoglobulin G in aqueous humor. A case report].
We report on a 35-year-old patient suffering from heterochromia complicata Fuchs with a mature cataract. Multiple small whitish-yellowish crystallines on the iris at the 7 and 12 o'clock positions were interpreted as Russell bodies and correlated with an elevated concentration of 7.2 mg/dl IgG in the aqueous humor analysis. IgA and IgM could not be detected (less than 1.1 mg/dl). Three months after an unremarkable extracapsular cataract extraction and posterior chamber lens implantation, a positive Tyndall phenomenon and Russell bodies were found at just the same locations on the iris.
Great Ocean Road
The Great Ocean Road region is named after the road which follows a significant part of the rugged Victorian south-west coast. It was constructed by ex-servicemen and the unemployed between 1918 and 1932 and is dedicated to those that lost their lives in World War One.
The Great Ocean Road commences south of Geelong at Torquay, but doesn't reach the coast until Anglesea from where it generally hugs the scenic coastline passing through the popular holiday resort towns of Lorne and Apollo Bay. It then heads inland, traversing the lush Otway National Park and reaching its highest point at the small town of Lavers Hill. Its final leg covers the Shipwreck Coast where many vessels during the 19th century ran aground due to rough seas and the rugged coastline. These vessels were carrying goods and immigrants to the country, at the rate of up to 50 ships per day during the height of the gold rush. This section of coast is home to a collection of notable rock formations including London Bridge, Loch Ard Gorge and the most famous of them all, the 12 Apostles, which were carved out of the limestone headlands by rough seas over time.
The western end of the Great Ocean Road region includes the coastal city of Warrnambool, the site of Victoria's first permanent settlement at Portland, and the remote untouched coastline which extends westwards to the fishing village of Nelson and the state border with South Australia.
Inland are the lush rainforests of the Otway Ranges and the huge volcanic plain surrounding Colac, Camperdown and Terang, with its craters, cones, large lakes and fertile soils.
Ott scores in 6th round of shootout to lift Sabres to 5-4 win over Maple Leafs
BUFFALO, N.Y. – Steve Ott has quickly won over blue-collar Buffalo fans with his hard-hitting, gritty style, and the forward should become even more popular following a handful of clutch goals.
Ott had the decisive score in the sixth round of the shootout and the Sabres rallied for a 5-4 win over the Toronto Maple Leafs on Thursday.
Two nights earlier, Ott scored twice, including one in overtime, in a 3-2 win at Montreal. And his shootout attempt against Toronto was his first since Oct. 25, 2011, and the fifth overall in his nine-year NHL career.
"I was excited. I'm just glad I was able to go before my 'brother' Mike Weber," Ott said with a laugh, referring to Buffalo's defensive-minded defenseman.
Interim coach Ron Rolston had plenty of confidence in Ott, whose role has gradually increased since being acquired in a trade that sent under-performing center Derek Roy to Dallas last summer.
"He was the guy to go to," Rolston said. "He brings a lot of energy to the team and especially the grit he plays with on a consistent basis. He can be put out in all situations."
With the crowd on its feet cheering Ott to the ice, he drove directly at James Reimer and slipped the puck just inside the right post. Ryan Miller then sealed the win by getting his left pad out to stop former Sabres forward Clarke MacArthur.
Christian Ehrhoff had a goal and two assists for Buffalo, which won consecutive games for only the fourth time this season. Tyler Ennis had a goal and an assist, and Marcus Foligno and Jason Pominville also scored.
Miller stopped 30 shots through overtime, and allowed only Tyler Bozak to score in the tiebreaker.
Nazem Kadri had two goals and an assist for Toronto, which was unable to build off a 4-2 win over Tampa Bay a day earlier. Bozak and Mikhail Grabovski also scored in regulation.
Reimer made 32 saves through OT, but allowed two goals in the shootout, including one to Drew Stafford.
Reimer has allowed four goals in five of seven starts since he returned after missing eight games with a knee injury, but he said there was no reason to be concerned.
"When you score four, you should win. But at least we got a point, which is important," he said. "No reason to pull the alarm, but obviously it's something we're aware of."
And Reimer became confused when asked about the Leafs dropping to 1-3-3 in their past seven.
"To me, that's one game over .500, if that make sense," Reimer said, laughing. "I don't know if it does. Maybe it doesn't, I don't know. Too many pucks to the head."
Bozak and Kadri scored 76 seconds apart to put the Leafs ahead 2-0 midway through the first period. And Toronto appeared in control 8:52 into the second, when Grabovski deflected in Jake Gardiner's shot from the blue line to make it a 3-1 advantage.
The Sabres responded by scoring the next three, capped by Ehrhoff's blast from the blue line 22 seconds into the third.
The Maple Leafs responded six minutes later. Kadri was battling Buffalo's Jordan Leopold in front when he managed to deflect in Cody Franson's shot from the right circle.
Accustomed to blowing two-goal leads, as Buffalo did at Montreal on Tuesday, the Sabres showed they could overcome them, too.
"We didn't quit," Miller said. "We battled back and got a lead, which is great."
The Sabres also didn't shy away from the Maple Leafs' physical style.
Maple Leafs forward Colton Orr was ejected 2:09 in after he broke his stick in two cross-checking Patrick Kaleta off a face-off. That sparked a fight between Orr and Kaleta as well as the teams' two heavyweights, Buffalo's John Scott and Toronto's Frazer McLaren.
And Ott made his presence known by exchanging several shoves with Toronto captain Dion Phaneuf in the opening minutes.
"We matched size for size, hit for hit," Ott said. "That's why it was a 4-4 game in a shootout at the end of it all."
NOTES: The Maple Leafs were without LW Joffrey Lupul, who was suspended two games by the NHL earlier in the day for an illegal hit to the head of Tampa Bay defenseman Victor Hedman a day earlier. ... The Sabres were minus leading scorer Thomas Vanek, who is day to day after being struck by a shot in the right hip in a 3-2 OT win at Montreal on Tuesday. ... C Kadri entered the game with a team-leading 30 points (11 goals, 19 assists) in 30 games. At 22, he became the team's youngest player to reach 30 points in 30 games since 22-year-old Ed Olczyk did it in 1988-89. | 2023-08-05T01:26:19.665474 | https://example.com/article/4815 |
173 Ill. App.3d 153 (1988)
527 N.E.2d 436
THE PEOPLE OF THE STATE OF ILLINOIS, Plaintiff-Appellee,
v.
DAVE V. BROOKS et al., Defendants-Appellants.
Nos. 84-2358, 84-2359 cons.
Illinois Appellate Court First District (3rd Division).
Opinion filed July 13, 1988.
*154 Randolph N. Stone, Public Defender, of Chicago (Alison Edwards, Assistant Public Defender, of counsel), for appellants.
Richard M. Daley, State's Attorney, of Chicago (Joan S. Cherry and Paula M. Carstensen, Assistant State's Attorneys, of counsel), for the People.
Reversed and remanded.
JUSTICE RIZZI delivered the opinion of the court:
Following a jury trial, defendants, Dave Brooks and Harlan Hayes, were convicted of armed robbery on a theory of accountability. Defendant Hayes was sentenced to a term of six years' imprisonment in the Illinois Department of Corrections. Defendant Brooks, a juvenile, was assigned to the Illinois Juvenile Detention Center for a period of six years. This appeal followed.
On appeal, both defendants essentially argue that (1) the trial court committed reversible error in refusing to question on voir dire the jurors' attitudes regarding defendants' burdens of proof and the fact that defendants' failure to testify cannot be held against them; (2) the trial court erred in denying defendants' motion for a mistrial; (3) defendants were denied a fair trial on the basis of references to the investigation of the armed robbery being conducted by the gang crimes unit of the Chicago police department; and (4) defendants were denied a fair trial as a result of the trial court's improper response *155 to a question posed by the jury while impaneled. Defendant Brooks individually argues that (1) he was improperly tried as an adult pursuant to the automatic juvenile transfer statute (Ill. Rev. Stat. 1985, ch. 37, par. 702-7(6)), and (2) the automatic juvenile transfer statute violates equal protection and substantive and procedural due process. We reverse and remand.
On August 28, 1983, the victim, John Guilfoyle, was robbed after driving a woman known as Ms. Thomas to her home at 6729 South Ada Street. According to Thomas, a prostitute, she knew the victim from previous encounters. Based on information supplied to the Chicago police, defendants were subsequently arrested on September 8, 1983. Mac, who was identified as a suspect, was never located. Thereafter, both defendants gave substantially the same statements confirming their presence and participation in the robbery of the victim.
Initially, our review of the record indicates that on the night in question, the victim picked up Thomas and arranged to have sexual intercourse with her. The victim then drove Thomas to her home so that she could change her clothes. The victim waited in his car while Thomas changed.
According to defendant Brooks' statement, on the evening in question, he was sitting on Richard Sims' porch with Sims and defendant Hayes when a car driven by a white male parked across the street. Thereafter, Thomas got out of the car and walked into her house. Then a "guy named Mac" approached defendants and asked them what the victim was doing in their "hood." Mac then indicated that he was going to get a gun so that he and defendants could rob the victim. A few minutes later, Mac returned to the porch with a gun. Defendant Brooks then walked to the rear of Thomas' house. As Thomas was exiting the house, defendant Brooks told Thomas that he, Mac and defendant Hayes were going to rob the victim. Defendant Brooks then informed Thomas that he would take her out of the area so that it would not appear Thomas had arranged the robbery.
Shortly thereafter, Mac pulled the victim from his car, pointed the gun in the victim's face, and began yelling at him. Defendant Hayes then entered the victim's car and drove it around the corner into an alley while Mac took the victim to a vacant lot. Once in the vacant lot, Mac called defendant Brooks to come and pat down the victim for weapons. Brooks patted down the victim but did not find any weapons. Upon hearing sirens, Brooks ran away. Brooks further indicated that the gun Mac was carrying looked like a .45 caliber *156 handgun but that it was not. Following defendant Brooks' confession, his statement was reduced to writing by an assistant State's Attorney.
Defendant Hayes' confession is substantially the same as the statement given by defendant Brooks. However, defendant Hayes' statement additionally indicated that (1) the gun in question was black, (2) when he drove the victim's car into the alley, Sims ran up, got in the car, and (3) Hayes then drove the car down the street a block or two and parked in an alley. Defendant Hayes also heard sirens, at which point he jumped from the car and ran. Following his confession, defendant Hayes' statement was also reduced to writing by an assistant State's Attorney.
At trial, Thomas testified for the State as an eyewitness to the robbery. Thomas stated that she was a prostitute and had a drug problem. Thomas' testimony was nearly identical to the facts set forth in defendants' confessions. Thomas further testified that she saw only half of the gun that Mac was holding.
The victim did not testify at trial. He died of unrelated causes prior to the commencement of trial.
1 We initially address defendants' argument that the trial court erred in failing to question prospective jurors in accordance with our supreme court's holding in People v. Zehr (1984), 103 Ill.2d 472, 469 N.E.2d 1062. In Zehr, the court determined that essential to the qualification of jurors in a criminal case is the jurors' awareness that (1) a defendant is presumed innocent; (2) a defendant is not required to offer any evidence in his own behalf; (3) a defendant must be proven guilty beyond a reasonable doubt; and (4) a defendant's failure to testify in his own behalf cannot be held against him. The court therefore concluded that a defendant has a right to question members of the venire to establish their attitudes concerning these factors. Zehr, 103 Ill.2d at 477, 469 N.E.2d at 1064.
The State, however, contends that Zehr is inapplicable here because while the voir dire examination in this case was conducted on July 31, 1984, the decision in Zehr did not become effective until September 28, 1984, upon modification on a denial of a petition for rehearing. The State essentially argues that the trial court was not required to apply the law as set forth in Zehr at the time of defendants' trial because a petition for rehearing had been filed, and the opinion was subsequently modified on September 28, 1984. As a result, the modified opinion of the court as set forth in Zehr superseded and vacated the rule of law concerning voir dire set forth in the opinion issued by the court on July 31, 1984. We find no merit in *157 the State's argument.
2 A judgment of a court of review is entered when the opinion is filed. (Long v. City of New Boston (1982), 91 Ill.2d 456, 462, 440 N.E.2d 625, 627.) Moreover, contrary to the State's position, the filing of a petition for rehearing does not alter the effective date of the judgment of a reviewing court unless the petition for rehearing is granted. (PSL Realty Co. v. Granite Investment Co. (1981), 86 Ill.2d 291, 305, 427 N.E.2d 563, 570.) In the event that a petition for rehearing is allowed, the effective date of the judgment is the date that the judgment is entered on rehearing (PSL Realty Co., 86 Ill.2d at 305, 427 N.E.2d at 570), and only then does the later modification of the filed opinion supersede and vacate the effect of the earlier opinion. Long, 91 Ill.2d at 462, 440 N.E.2d at 627.
3 In the present case, the opinion in Zehr was filed on March 23, 1984. The opinion was later modified upon the denial of a petition for rehearing and refiled on September 28, 1984. While a modification to an opinion following a rehearing does supersede and vacate the earlier opinion (Long, 91 Ill.2d at 462, 440 N.E.2d at 627), this did not occur here. Rather, the petition for rehearing was denied, and the modification concerned a matter completely unrelated to the voir dire issue originally addressed by the supreme court in the July 31, 1984, Zehr opinion. Therefore, the modification of the unrelated issue did not supersede and vacate that portion of Zehr dealing with voir dire. As a result, the law as set forth in Zehr on July 31 was clearly applicable to the voir dire proceeding in defendants' case. Moreover, based on the record here, we find no merit in the State's contention that defendants waived this issue for purposes of appeal.
Having determined that the disposition of this case is governed by the holding of the supreme court as set forth in Zehr, we next address the issue of whether the voir dire examination in defendants' case was conducted in accordance with the requirements of Zehr. Prior to trial, defendant submitted eight questions to be asked of "each juror individually" during the voir dire examination. Among these eight questions, two are at issue here. These two questions stated:
"3. Do you expect defendants to take the stand and testify on their own behalf and tell you their side of the story?
4. Would you take defendants' failure to testify at trial into consideration in arriving at your verdict?"
During discussion of these voir dire questions, the trial court stated:
"I want to make it perfectly clear that I would not discuss, in any way with the jurors, the defendant testifying, or failure to *158 testify. But if you want me to ask a question similar to what you've put down, Question No. 4, I will be happy to do that."
However, neither the trial court's opening remarks to the venire, nor his specific questions to the individual members of the venire, probed their feelings regarding the subject matter of questions three and four. Yet, the State argues that the trial court's comments to the entire venire, prior to the commencement of individual questioning, adequately probed the individual members of the venire for any possible bias or prejudice in compliance with the requirements of Zehr. We disagree.
4 In Zehr, the supreme court determined that the trial court committed reversible error in refusing to probe on voir dire members of the venire's attitudes concerning (1) returning a guilty verdict if the State did not sustain its burden of proof; (2) the presumption of innocence; (3) the fact that a defendant need not present any evidence; and (4) the fact that a defendant's failure to testify cannot be held against him in arriving at a verdict. (Zehr, 103 Ill.2d at 477, 469 N.E.2d at 1064.) The Zehr court held that "`[e]ach of these questions goes to the heart of a particular bias or prejudice which would deprive defendant of his right to a fair and impartial jury.'" (103 Ill.2d at 477.) While proposed voir dire questions dealing with these issues need not be asked in the exact form submitted by a defendant, the subject matter of the questions must be covered in the course of interrogation on voir dire. 103 Ill.2d at 477, 469 N.E.2d at 1064.
5 In the present case, the subject matter of the questions at issue was neither covered in the course of voir dire interrogation, nor even set forth in the trial court's general comments to the venire. This court has specifically determined that "[t]he principle that a defendant's failure to testify in his own behalf cannot be held against him is perhaps the most critical guarantee under our criminal process and is vital to the selection of a fair and impartial jury that a juror understand this concept." (People v. Boswell (1985), 132 Ill. App.3d 52, 56, 476 N.E.2d 1154, 1157.) Therefore, in accordance with the supreme court's holding in Zehr that each of these questions is required to be covered in voir dire, we reverse defendants' convictions and remand this cause for a new trial.
As we have reversed defendants' convictions and remanded this cause for a new trial, we shall address those additional issues which are likely to arise again in a new proceeding.
6 Defendant Brooks argues that he was tried and convicted in violation of the Juvenile Court Act (Act) (Ill. Rev. Stat. 1983, ch. 37, *159 par. 701-1 et seq.) because the indictment against him failed to specify that the crime of armed robbery had been committed with a firearm. At the time of defendant's indictment, section 2-7(6) provided in relevant part:
"(a) The definition of a delinquent minor under Section 2-2 of this Act shall not apply to any minor who at the time of an offense was at least 15 years of age and who is charged with murder, rape, deviate sexual assault or armed robbery when the armed robbery was committed with a firearm. These charges and all other charges arising out of the same incident shall be prosecuted pursuant to the Criminal Code of 1961, as amended.
(b) If before trial or plea an information or indictment is filed which does not charge an offense specified in paragraph (a) of subsection (6) of this Section, the State's Attorney may proceed on the lesser charge or charges but only in Juvenile Court pursuant to the other provisions of the Juvenile Court Act, unless prior to trial the minor defendant knowingly and with advice of counsel waives, in writing, his right to have the matter proceed in Juvenile Court." (Ill. Rev. Stat. 1983, ch. 37, pars. 702-7(6)(a), (6)(b).)
Section 2-2 of the Act defined a delinquent minor as "any minor who prior to his 17th birthday has violated or attempted to violate, regardless of where the act occurred, any federal or state law or municipal ordinance." (Ill. Rev. Stat. 1983, ch. 37, par. 702-2.) At the time of defendant Brooks' arrest, he was 15 years old.
In the present case, the complaint for preliminary examination charged defendant Brooks with committing armed robbery "while armed with a dangerous weapon, an unknown caliber handgun." The indictment charged defendant Brooks with armed robbery on a theory of accountability because he:
"Committed the offense of armed robbery in that * * * [he] by the use of force and by threatening the imminent use of force while armed with a dangerous weapon, took an automobile from the person and presence of * * * [the victim], in violation of Chapter 38, Section 18-2-A of the Illinois Revised Statutes 1981 as amended."
It is defendant Brooks' contention that since the count against him for armed robbery did not specifically allege that the robbery was committed "with a firearm," but instead stated that the armed robbery occurred while defendant was armed with a "dangerous weapon," it was error to subject Brooks to automatic transfer under *160 the Act. Brooks therefore argues that the indictment against him should be dismissed, his conviction vacated and his cause remanded for disposition under the Act. We disagree.
In People v. J.S. (1984), 103 Ill.2d 395, 469 N.E.2d 1090, our supreme court addressed the issue raised by defendant Brooks. In J.S., a consolidated appeal, each defendant was charged with committing an armed robbery while armed with a dangerous weapon. The indictment against the defendants did not specify that the robberies were committed with a firearm. On appeal, the defendants contended that they were not subject to automatic transfer under the Act because the indictments were fatally flawed due to the failure to charge the defendants with armed robbery by the use of a firearm.
The J.S. court rejected the defendants' contention and stated:
"[W]e believe that the offenses charged here [were] sufficiently set forth so as to enable the defendants to be apprised of the charges against them, to properly prepare their defenses, and to use any judgments entered against them as a bar to future prosecution for the same offense.
We agree with the State that the charges read as a whole clearly specify that the defendants were charged with armed robbery with a firearm * * *." (103 Ill.2d at 409, 469 N.E.2d at 1097.)
We believe that the court's analysis and conclusion in J.S. are dispositive of the issue raised by defendant Brooks.
Here, Brooks was indicted for committing an armed robbery while armed with a dangerous weapon. Defendant Brooks was also indicted for armed violence. That indictment stated:
"While armed with a dangerous weapon, to wit: a gun by use of force and by threatening the imminent use of force, took the wallet, miscellaneous identification and an automobile from the person and presence of * * * [the victim] in violation of Chapter 38, section 33A-2/I/18-2-A of the Illinois Revised Statutes 1981 as amended."
Moreover, defendant Brooks gave a statement implicating himself in the armed robbery of the victim. Brooks' statement indicates that Mac was carrying what looked like a .45 caliber handgun but was not. We believe that in light of the facts present in this case, specifically the preliminary examination, the indictments against Brooks, his statement to the police and the posture of the charges against him, the charges read as a whole clearly apprised defendant Brooks that he was going to be tried for armed robbery with a firearm. However, because we have remanded this cause of action for a *161 new trial as a result of the trial court's failure to comply with the requirements of Zehr, we make no finding concerning whether or not the instrument used in the robbery of the victim was actually a firearm. This is a question of fact to be addressed by the trier of fact. People v. J.S. (1984), 103 Ill.2d 395, 409, 469 N.E.2d 1090, 1097.
7 We lastly address defendant Brooks' contention that section 2-7(6)(a) of the Act unconstitutionally deprived him of substantive and procedural due process of law and equal protection.
The Illinois Supreme Court holding in J.S. (103 Ill.2d 395, 469 N.E.2d 1090) is likewise dispositive of this issue. In J.S., the defendants argued that the distinction which is drawn in section 2-7(6) concerning certain crimes is arbitrary and discriminatory and that it deprives defendants of procedural and substantive due process and equal protection of the laws. The J.S. court rejected the defendants' contentions and determined that section 2-7(6) of the Act is constitutional. (103 Ill.2d 395, 469 N.E.2d 1090.) We are bound to follow our supreme court's ruling in J.S.[1] Therefore, since defendant Brooks' contentions concerning this issue are the same as the arguments raised by the defendants in J.S., we find no merit in Brooks' arguments.
Accordingly, for the reasons stated, the judgment of the trial court is reversed and the cause is remanded.
Reversed and remanded.
WHITE, P.J., and McNAMARA, J., concur.
NOTES
[1] In the case of In re D.T. (1986), 141 Ill. App.3d 1036, 1045, 490 N.E.2d 1361, 1367, this court determined that although the J.S. court's analysis of the constitutionality of section 2-7(6) addressed alleged violations of the Federal Constitution, this analysis was equally applicable to D.T.'s arguments concerning section 2-7(6) and its alleged violations of our State's constitution. We then found the constitutional arguments raised by D.T. to be without merit.
<?php
new WPCOM_JSON_API_Update_Post_v1_1_Endpoint( array(
'description' => 'Create a post.',
'group' => 'posts',
'stat' => 'posts:new',
'new_version' => '1.2',
'min_version' => '1.1',
'max_version' => '1.1',
'method' => 'POST',
'path' => '/sites/%s/posts/new',
'path_labels' => array(
'$site' => '(int|string) Site ID or domain',
),
'request_format' => array(
// explicitly document all input
'date' => "(ISO 8601 datetime) The post's creation time.",
'title' => '(HTML) The post title.',
'content' => '(HTML) The post content.',
'excerpt' => '(HTML) An optional post excerpt.',
'slug' => '(string) The name (slug) for the post, used in URLs.',
'author' => '(string) The username or ID for the user to assign the post to.',
'publicize' => '(array|bool) True or false if the post should be publicized to external services. An array of services if we only want to publicize to a select few. Defaults to true.',
'publicize_message' => '(string) Custom message to be publicized to external services.',
'status' => array(
'publish' => 'Publish the post.',
'private' => 'Privately publish the post.',
'draft' => 'Save the post as a draft.',
'pending' => 'Mark the post as pending editorial approval.',
'future' => 'Schedule the post (alias for publish; you must also set a future date).',
'auto-draft' => 'Save a placeholder for a newly created post, with no content.',
),
'sticky' => array(
'false' => 'Post is not marked as sticky.',
'true' => 'Stick the post to the front page.',
),
'password' => '(string) The plaintext password protecting the post, or, more likely, the empty string if the post is not password protected.',
'parent' => "(int) The post ID of the new post's parent.",
'type' => "(string) The post type. Defaults to 'post'. Post types besides post and page need to be whitelisted using the <code>rest_api_allowed_post_types</code> filter.",
'terms' => '(object) Mapping of taxonomy to comma-separated list or array of terms (name or id)',
'categories' => "(array|string) Comma-separated list or array of categories (name or id)",
'tags' => "(array|string) Comma-separated list or array of tags (name or id)",
'format' => array_merge( array( 'default' => 'Use default post format' ), get_post_format_strings() ),
'featured_image' => "(string) The post ID of an existing attachment to set as the featured image. Pass an empty string to delete the existing image.",
'media' => "(media) An array of files to attach to the post. To upload media, the entire request should be multipart/form-data encoded. Multiple media items will be displayed in a gallery. Accepts jpg, jpeg, png, gif, pdf, doc, ppt, odt, pptx, docx, pps, ppsx, xls, xlsx, key. Audio and Video may also be available. See <code>allowed_file_types</code> in the options response of the site endpoint. Errors produced by media uploads, if any, will be in `media_errors` in the response. <br /><br /><strong>Example</strong>:<br />" .
"<code>curl \<br />--form 'title=Image Post' \<br />--form 'media[0]=@/path/to/file.jpg' \<br />--form 'media_attrs[0][caption]=My Great Photo' \<br />-H 'Authorization: BEARER your-token' \<br />'https://public-api.wordpress.com/rest/v1/sites/123/posts/new'</code>",
'media_urls' => "(array) An array of URLs for images to attach to a post. Sideloads the media in for a post. Errors produced by media sideloading, if any, will be in `media_errors` in the response.",
'media_attrs' => "(array) An array of attributes (`title`, `description` and `caption`) are supported to assign to the media uploaded via the `media` or `media_urls` properties. You must use a numeric index for the keys of `media_attrs` which follow the same sequence as `media` and `media_urls`. <br /><br /><strong>Example</strong>:<br />" .
"<code>curl \<br />--form 'title=Gallery Post' \<br />--form 'media[]=@/path/to/file1.jpg' \<br />--form 'media_urls[]=http://example.com/file2.jpg' \<br /> \<br />--form 'media_attrs[0][caption]=This will be the caption for file1.jpg' \<br />--form 'media_attrs[1][title]=This will be the title for file2.jpg' \<br />-H 'Authorization: BEARER your-token' \<br />'https://public-api.wordpress.com/rest/v1/sites/123/posts/new'</code>",
'metadata' => "(array) Array of metadata objects containing the following properties: `key` (metadata key), `id` (meta ID), `previous_value` (if set, the action will only occur for the provided previous value), `value` (the new value to set the meta to), `operation` (the operation to perform: `update` or `add`; defaults to `update`). All unprotected meta keys are available by default for read requests. Both unprotected and protected meta keys are available for authenticated requests with proper capabilities. Protected meta keys can be made available with the <code>rest_api_allowed_public_metadata</code> filter.",
'discussion' => '(object) A hash containing one or more of the following boolean values, which default to the blog\'s discussion preferences: `comments_open`, `pings_open`',
'likes_enabled' => "(bool) Should the post be open to likes? Defaults to the blog's preference.",
'sharing_enabled' => "(bool) Should sharing buttons show on this post? Defaults to true.",
'menu_order' => "(int) (Pages Only) the order pages should appear in. Use 0 to maintain alphabetical order.",
'page_template' => '(string) (Pages Only) The page template this page should use.',
),
'example_request' => 'https://public-api.wordpress.com/rest/v1.1/sites/82974409/posts/new/',
'example_request_data' => array(
'headers' => array(
'authorization' => 'Bearer YOUR_API_TOKEN'
),
'body' => array(
'title' => 'Hello World',
'content' => 'Hello. I am a test post. I was created by the API',
'tags' => 'tests',
'categories' => 'API'
)
)
) );
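The registration above documents the create-post endpoint and its example request. As an illustrative client-side sketch only (the site ID `82974409` comes from the example above, and `YOUR_API_TOKEN` is a placeholder, not a real credential), here is how such a request could be assembled with Python's standard library, without sending it:

```python
import json
import urllib.request

# Build (but do not send) a create-post request matching the documented
# /sites/%s/posts/new path and the example request body above.
API_BASE = "https://public-api.wordpress.com/rest/v1.1"
site = "82974409"         # $site path label: site ID or domain
token = "YOUR_API_TOKEN"  # placeholder bearer token

body = {
    "title": "Hello World",
    "content": "Hello. I am a test post. I was created by the API",
    "tags": "tests",      # comma-separated list or array, per request_format
    "categories": "API",
}

req = urllib.request.Request(
    url=f"{API_BASE}/sites/{site}/posts/new",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
    method="POST",
)
```

Sending would be `urllib.request.urlopen(req)`; note that requests carrying `media` file uploads must instead be multipart/form-data encoded, as the doc string above states.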
new WPCOM_JSON_API_Update_Post_v1_1_Endpoint( array(
'description' => 'Edit a post.',
'group' => 'posts',
'stat' => 'posts:1:POST',
'new_version' => '1.2',
'min_version' => '1.1',
'max_version' => '1.1',
'method' => 'POST',
'path' => '/sites/%s/posts/%d',
'path_labels' => array(
'$site' => '(int|string) Site ID or domain',
'$post_ID' => '(int) The post ID',
),
'request_format' => array(
'date' => "(ISO 8601 datetime) The post's creation time.",
'title' => '(HTML) The post title.',
'content' => '(HTML) The post content.',
'excerpt' => '(HTML) An optional post excerpt.',
'slug' => '(string) The name (slug) for the post, used in URLs.',
'author' => '(string) The username or ID for the user to assign the post to.',
'publicize' => '(array|bool) True or false if the post should be publicized to external services. An array of services if we only want to publicize to a select few. Defaults to true.',
'publicize_message' => '(string) Custom message to be publicized to external services.',
'status' => array(
'publish' => 'Publish the post.',
'private' => 'Privately publish the post.',
'draft' => 'Save the post as a draft.',
'future' => 'Schedule the post (alias for publish; you must also set a future date).',
'pending' => 'Mark the post as pending editorial approval.',
'trash' => 'Set the post as trashed.',
),
'sticky' => array(
'false' => 'Post is not marked as sticky.',
'true' => 'Stick the post to the front page.',
),
'password' => '(string) The plaintext password protecting the post, or, more likely, the empty string if the post is not password protected.',
'parent' => "(int) The post ID of the new post's parent.",
'terms' => '(object) Mapping of taxonomy to comma-separated list or array of terms (name or id)',
'categories' => "(array|string) Comma-separated list or array of categories (name or id)",
'tags' => "(array|string) Comma-separated list or array of tags (name or id)",
'format' => array_merge( array( 'default' => 'Use default post format' ), get_post_format_strings() ),
'discussion' => '(object) A hash containing one or more of the following boolean values, which default to the blog\'s discussion preferences: `comments_open`, `pings_open`',
'likes_enabled' => "(bool) Should the post be open to likes?",
'menu_order' => "(int) (Pages only) the order pages should appear in. Use 0 to maintain alphabetical order.",
'page_template' => '(string) (Pages Only) The page template this page should use.',
'sharing_enabled' => "(bool) Should sharing buttons show on this post?",
'featured_image' => "(string) The post ID of an existing attachment to set as the featured image. Pass an empty string to delete the existing image.",
'media' => "(media) An array of files to attach to the post. To upload media, the entire request should be multipart/form-data encoded. Multiple media items will be displayed in a gallery. Accepts jpg, jpeg, png, gif, pdf, doc, ppt, odt, pptx, docx, pps, ppsx, xls, xlsx, key. Audio and Video may also be available. See <code>allowed_file_types</code> in the options response of the site endpoint. <br /><br /><strong>Example</strong>:<br />" .
"<code>curl \<br />--form 'title=Image' \<br />--form 'media[]=@/path/to/file.jpg' \<br />-H 'Authorization: BEARER your-token' \<br />'https://public-api.wordpress.com/rest/v1/sites/123/posts/new'</code>",
'media_urls' => "(array) An array of URLs for images to attach to a post. Sideloads the media in for a post.",
'metadata' => "(array) Array of metadata objects containing the following properties: `key` (metadata key), `id` (meta ID), `previous_value` (if set, the action will only occur for the provided previous value), `value` (the new value to set the meta to), `operation` (the operation to perform: `update` or `add`; defaults to `update`). All unprotected meta keys are available by default for read requests. Both unprotected and protected meta keys are available for authenticated requests with proper capabilities. Protected meta keys can be made available with the <code>rest_api_allowed_public_metadata</code> filter.",
),
'example_request' => 'https://public-api.wordpress.com/rest/v1.1/sites/82974409/posts/881',
'example_request_data' => array(
'headers' => array(
'authorization' => 'Bearer YOUR_API_TOKEN'
),
'body' => array(
'title' => 'Hello World (Again)',
'content' => 'Hello. I am an edited post. I was edited by the API',
'tags' => 'tests',
'categories' => 'API'
)
)
) );
new WPCOM_JSON_API_Update_Post_v1_1_Endpoint( array(
'description' => 'Delete a post. Note: If the trash is enabled, this request will send the post to the trash. A second request will permanently delete the post.',
'group' => 'posts',
'stat' => 'posts:1:delete',
'min_version' => '1.1',
'max_version' => '1.1',
'method' => 'POST',
'path' => '/sites/%s/posts/%d/delete',
'path_labels' => array(
'$site' => '(int|string) Site ID or domain',
'$post_ID' => '(int) The post ID',
),
'example_request' => 'https://public-api.wordpress.com/rest/v1.1/sites/82974409/posts/$post_ID/delete/',
'example_request_data' => array(
'headers' => array(
'authorization' => 'Bearer YOUR_API_TOKEN'
)
)
) );
new WPCOM_JSON_API_Update_Post_v1_1_Endpoint( array(
'description' => 'Restore a post or page from the trash to its previous status.',
'group' => 'posts',
'stat' => 'posts:1:restore',
'min_version' => '1.1',
'max_version' => '1.1',
'method' => 'POST',
'path' => '/sites/%s/posts/%d/restore',
'path_labels' => array(
'$site' => '(int|string) Site ID or domain',
'$post_ID' => '(int) The post ID',
),
'example_request' => 'https://public-api.wordpress.com/rest/v1.1/sites/82974409/posts/$post_ID/restore/',
'example_request_data' => array(
'headers' => array(
'authorization' => 'Bearer YOUR_API_TOKEN'
)
)
) );
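The delete and restore registrations above are bodyless POSTs whose behavior is carried entirely by the path, including the documented two-step delete (the first request moves the post to the trash; a second identical request deletes it permanently). A small sketch of the URL construction (the endpoint paths are real; the helper itself is hypothetical, not part of the API code):

```python
# Sketch of the v1.1 post lifecycle URLs documented above.
API_BASE = "https://public-api.wordpress.com/rest/v1.1"

def post_action_url(site, post_id, action=None):
    """Build /sites/%s/posts/%d, optionally suffixed with /delete or /restore."""
    url = f"{API_BASE}/sites/{site}/posts/{post_id}"
    if action is not None:
        if action not in ("delete", "restore"):
            raise ValueError("unsupported action: " + action)
        url += "/" + action
    return url

# Two-step delete: POSTing /delete once trashes the post (status 'trash');
# POSTing the same URL again removes it permanently (status 'deleted').
trash_then_purge = [post_action_url("82974409", 881, "delete")] * 2
```

The same base URL with no action suffix is the edit endpoint shown earlier, which keeps a client's routing logic to a single helper.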
class WPCOM_JSON_API_Update_Post_v1_1_Endpoint extends WPCOM_JSON_API_Post_v1_1_Endpoint {
function __construct( $args ) {
parent::__construct( $args );
if ( $this->api->ends_with( $this->path, '/delete' ) ) {
$this->post_object_format['status']['deleted'] = 'The post has been deleted permanently.';
}
}
// /sites/%s/posts/new -> $blog_id
// /sites/%s/posts/%d -> $blog_id, $post_id
// /sites/%s/posts/%d/delete -> $blog_id, $post_id
// /sites/%s/posts/%d/restore -> $blog_id, $post_id
function callback( $path = '', $blog_id = 0, $post_id = 0 ) {
$blog_id = $this->api->switch_to_blog_and_validate_user( $this->api->get_blog_id( $blog_id ) );
if ( is_wp_error( $blog_id ) ) {
return $blog_id;
}
if ( $this->api->ends_with( $path, '/delete' ) ) {
return $this->delete_post( $path, $blog_id, $post_id );
} elseif ( $this->api->ends_with( $path, '/restore' ) ) {
return $this->restore_post( $path, $blog_id, $post_id );
} else {
return $this->write_post( $path, $blog_id, $post_id );
}
}
// /sites/%s/posts/new -> $blog_id
// /sites/%s/posts/%d -> $blog_id, $post_id
function write_post( $path, $blog_id, $post_id ) {
global $wpdb;
$new = $this->api->ends_with( $path, '/new' );
$args = $this->query_args();
// unhook publicize, it's hooked again later -- without this, skipping services is impossible
if ( defined( 'IS_WPCOM' ) && IS_WPCOM ) {
remove_action( 'save_post', array( $GLOBALS['publicize_ui']->publicize, 'async_publicize_post' ), 100, 2 );
add_action( 'rest_api_inserted_post', array( $GLOBALS['publicize_ui']->publicize, 'async_publicize_post' ) );
if ( $this->should_load_theme_functions( $post_id ) ) {
$this->load_theme_functions();
}
}
if ( $new ) {
$input = $this->input( true );
// 'future' is an alias for 'publish' for now
if ( 'future' === $input['status'] ) {
$input['status'] = 'publish';
}
if ( 'revision' === $input['type'] ) {
if ( ! isset( $input['parent'] ) ) {
return new WP_Error( 'invalid_input', 'Invalid request input', 400 );
}
$input['status'] = 'inherit'; // force inherit for revision type
$input['slug'] = $input['parent'] . '-autosave-v1';
}
elseif ( !isset( $input['title'] ) && !isset( $input['content'] ) && !isset( $input['excerpt'] ) ) {
return new WP_Error( 'invalid_input', 'Invalid request input', 400 );
}
// default to post
if ( empty( $input['type'] ) )
$input['type'] = 'post';
$post_type = get_post_type_object( $input['type'] );
if ( ! $this->is_post_type_allowed( $input['type'] ) ) {
return new WP_Error( 'unknown_post_type', 'Unknown post type', 404 );
}
if ( ! empty( $input['author'] ) ) {
$author_id = $this->parse_and_set_author( $input['author'], $input['type'] );
unset( $input['author'] );
if ( is_wp_error( $author_id ) )
return $author_id;
}
if ( 'publish' === $input['status'] ) {
if ( ! current_user_can( $post_type->cap->publish_posts ) ) {
if ( current_user_can( $post_type->cap->edit_posts ) ) {
$input['status'] = 'pending';
} else {
return new WP_Error( 'unauthorized', 'User cannot publish posts', 403 );
}
}
} else {
if ( !current_user_can( $post_type->cap->edit_posts ) ) {
return new WP_Error( 'unauthorized', 'User cannot edit posts', 403 );
}
}
} else {
$input = $this->input( false );
if ( !is_array( $input ) || !$input ) {
return new WP_Error( 'invalid_input', 'Invalid request input', 400 );
}
if ( isset( $input['status'] ) && 'trash' === $input['status'] && ! current_user_can( 'delete_post', $post_id ) ) {
return new WP_Error( 'unauthorized', 'User cannot delete post', 403 );
}
// 'future' is an alias for 'publish' for now
if ( isset( $input['status'] ) && 'future' === $input['status'] ) {
$input['status'] = 'publish';
}
$post = get_post( $post_id );
if ( !$post || is_wp_error( $post ) ) {
return new WP_Error( 'unknown_post', 'Unknown post', 404 );
}
// Only read $post->post_type after confirming the post exists.
$_post_type = ( ! empty( $input['type'] ) ) ? $input['type'] : $post->post_type;
$post_type = get_post_type_object( $_post_type );
if ( !current_user_can( 'edit_post', $post->ID ) ) {
return new WP_Error( 'unauthorized', 'User cannot edit post', 403 );
}
if ( ! empty( $input['author'] ) ) {
$author_id = $this->parse_and_set_author( $input['author'], $_post_type );
unset( $input['author'] );
if ( is_wp_error( $author_id ) )
return $author_id;
}
if ( ( isset( $input['status'] ) && 'publish' === $input['status'] ) && 'publish' !== $post->post_status && !current_user_can( 'publish_post', $post->ID ) ) {
$input['status'] = 'pending';
}
$last_status = $post->post_status;
$new_status = isset( $input['status'] ) ? $input['status'] : $last_status;
// Make sure that drafts get the current date when transitioning to publish if not supplied in the post.
// Similarly, scheduled posts that are manually published before their scheduled date should have the date reset.
$date_in_past = ( strtotime($post->post_date_gmt) < time() );
$reset_draft_date = 'publish' === $new_status && 'draft' === $last_status && ! isset( $input['date_gmt'] ) && $date_in_past;
$reset_scheduled_date = 'publish' === $new_status && 'future' === $last_status && ! isset( $input['date_gmt'] ) && ! $date_in_past;
if ( $reset_draft_date || $reset_scheduled_date ) {
$input['date_gmt'] = gmdate( 'Y-m-d H:i:s' );
}
}
if ( function_exists( 'wpcom_switch_to_blog_locale' ) ) {
// fixes calypso-pre-oss #12476: respect blog locale when creating the post slug
wpcom_switch_to_blog_locale( $blog_id );
}
// If date was set, $this->input will set date_gmt, date still needs to be adjusted for the blog's offset
if ( isset( $input['date_gmt'] ) ) {
$gmt_offset = get_option( 'gmt_offset' );
$time_with_offset = strtotime( $input['date_gmt'] ) + $gmt_offset * HOUR_IN_SECONDS;
$input['date'] = date( 'Y-m-d H:i:s', $time_with_offset );
}
if ( ! empty( $author_id ) && get_current_user_id() != $author_id ) {
if ( ! current_user_can( $post_type->cap->edit_others_posts ) ) {
return new WP_Error( 'unauthorized', "User is not allowed to publish others' posts.", 403 );
} elseif ( ! user_can( $author_id, $post_type->cap->edit_posts ) ) {
return new WP_Error( 'unauthorized', 'Assigned author cannot publish post.', 403 );
}
}
if ( !is_post_type_hierarchical( $post_type->name ) && 'revision' !== $post_type->name ) {
unset( $input['parent'] );
}
$input['terms'] = isset( $input['terms'] ) ? (array) $input['terms'] : array();
// Convert comma-separated terms to array before attempting to
// merge with hardcoded taxonomies
foreach ( $input['terms'] as $taxonomy => $terms ) {
if ( is_string( $terms ) ) {
$input['terms'][ $taxonomy ] = explode( ',', $terms );
} else if ( ! is_array( $terms ) ) {
$input['terms'][ $taxonomy ] = array();
}
}
// For each hard-coded taxonomy, merge into terms object
foreach ( array( 'categories' => 'category', 'tags' => 'post_tag' ) as $taxonomy_key => $taxonomy ) {
if ( ! isset( $input[ $taxonomy_key ] ) ) {
continue;
}
if ( ! isset( $input['terms'][ $taxonomy ] ) ) {
$input['terms'][ $taxonomy ] = array();
}
$terms = $input[ $taxonomy_key ];
if ( is_string( $terms ) ) {
$terms = explode( ',', $terms );
} else if ( ! is_array( $terms ) ) {
continue;
}
$input['terms'][ $taxonomy ] = array_merge(
$input['terms'][ $taxonomy ],
$terms
);
}
$tax_input = array();
foreach ( $input['terms'] as $taxonomy => $terms ) {
$tax_input[ $taxonomy ] = array();
$is_hierarchical = is_taxonomy_hierarchical( $taxonomy );
foreach ( $terms as $term ) {
/**
* `curl --data 'terms[category][]=123'` should be interpreted as a category ID,
* not a category whose name is '123'.
*
* Consequence: To add a category/tag whose name is '123', the client must
* first look up its ID.
*/
$term = (string) $term; // ctype_digit compat
if ( ctype_digit( $term ) ) {
$term = (int) $term;
}
$term_info = term_exists( $term, $taxonomy );
if ( ! $term_info ) {
// A term ID that doesn't already exist. Ignore it: we don't know what name to give it.
if ( is_int( $term ) ){
continue;
}
// only add a new tag/cat if the user has access to
$tax = get_taxonomy( $taxonomy );
// see https://core.trac.wordpress.org/ticket/26409
if ( $is_hierarchical && ! current_user_can( $tax->cap->edit_terms ) ) {
continue;
} else if ( ! current_user_can( $tax->cap->assign_terms ) ) {
continue;
}
$term_info = wp_insert_term( $term, $taxonomy );
}
if ( ! is_wp_error( $term_info ) ) {
if ( $is_hierarchical ) {
// Hierarchical terms must be added by ID
$tax_input[$taxonomy][] = (int) $term_info['term_id'];
} else {
// Non-hierarchical terms must be added by name
if ( is_int( $term ) ) {
$term = get_term( $term, $taxonomy );
$tax_input[$taxonomy][] = $term->name;
} else {
$tax_input[$taxonomy][] = $term;
}
}
}
}
}
if ( isset( $input['terms']['category'] ) && empty( $tax_input['category'] ) && 'revision' !== $post_type->name ) {
$tax_input['category'][] = get_option( 'default_category' );
}
unset( $input['terms'], $input['tags'], $input['categories'] );
$insert = array();
if ( !empty( $input['slug'] ) ) {
$insert['post_name'] = $input['slug'];
unset( $input['slug'] );
}
if ( isset( $input['discussion'] ) ) {
$discussion = (array) $input['discussion'];
foreach ( array( 'comment', 'ping' ) as $discussion_type ) {
$discussion_open = sprintf( '%ss_open', $discussion_type );
$discussion_status = sprintf( '%s_status', $discussion_type );
if ( isset( $discussion[ $discussion_open ] ) ) {
$is_open = WPCOM_JSON_API::is_truthy( $discussion[ $discussion_open ] );
$discussion[ $discussion_status ] = $is_open ? 'open' : 'closed';
}
if ( isset( $discussion[ $discussion_status ] ) && in_array( $discussion[ $discussion_status ], array( 'open', 'closed' ), true ) ) {
$insert[ $discussion_status ] = $discussion[ $discussion_status ];
}
}
}
unset( $input['discussion'] );
if ( isset( $input['menu_order'] ) ) {
$insert['menu_order'] = $input['menu_order'];
unset( $input['menu_order'] );
}
$publicize = isset( $input['publicize'] ) ? $input['publicize'] : null;
unset( $input['publicize'] );
$publicize_custom_message = isset( $input['publicize_message'] ) ? $input['publicize_message'] : null;
unset( $input['publicize_message'] );
if ( isset( $input['featured_image'] ) ) {
$featured_image = trim( $input['featured_image'] );
$delete_featured_image = empty( $featured_image );
unset( $input['featured_image'] );
}
$metadata = isset( $input['metadata'] ) ? $input['metadata'] : null;
unset( $input['metadata'] );
$likes = isset( $input['likes_enabled'] ) ? $input['likes_enabled'] : null;
unset( $input['likes_enabled'] );
$sharing = isset( $input['sharing_enabled'] ) ? $input['sharing_enabled'] : null;
unset( $input['sharing_enabled'] );
$sticky = isset( $input['sticky'] ) ? $input['sticky'] : null;
unset( $input['sticky'] );
foreach ( $input as $key => $value ) {
$insert["post_$key"] = $value;
}
if ( ! empty( $author_id ) ) {
$insert['post_author'] = absint( $author_id );
}
if ( ! empty( $tax_input ) ) {
$insert['tax_input'] = $tax_input;
}
$has_media = ! empty( $input['media'] ) ? count( $input['media'] ) : false;
$has_media_by_url = ! empty( $input['media_urls'] ) ? count( $input['media_urls'] ) : false;
$media_id_string = '';
if ( $has_media || $has_media_by_url ) {
$media_files = ! empty( $input['media'] ) ? $input['media'] : array();
$media_urls = ! empty( $input['media_urls'] ) ? $input['media_urls'] : array();
$media_attrs = ! empty( $input['media_attrs'] ) ? $input['media_attrs'] : array();
$media_results = $this->handle_media_creation_v1_1( $media_files, $media_urls, $media_attrs );
$media_id_string = join( ',', array_filter( array_map( 'absint', $media_results['media_ids'] ) ) );
}
if ( $new ) {
if ( isset( $input['content'] ) && ! has_shortcode( $input['content'], 'gallery' ) && ( $has_media || $has_media_by_url ) ) {
switch ( ( $has_media + $has_media_by_url ) ) {
case 0 :
// No images - do nothing.
break;
case 1 :
// 1 image - make it big
$insert['post_content'] = $input['content'] = sprintf(
"[gallery size=full ids='%s' columns=1]\n\n",
$media_id_string
) . $input['content'];
break;
default :
// Several images - 3 column gallery
$insert['post_content'] = $input['content'] = sprintf(
"[gallery ids='%s']\n\n",
$media_id_string
) . $input['content'];
break;
}
}
$post_id = wp_insert_post( add_magic_quotes( $insert ), true );
} else {
$insert['ID'] = $post->ID;
// wp_update_post ignores date unless edit_date is set
// See: https://codex.wordpress.org/Function_Reference/wp_update_post#Scheduling_posts
// See: https://core.trac.wordpress.org/browser/tags/3.9.2/src/wp-includes/post.php#L3302
if ( isset( $input['date_gmt'] ) || isset( $input['date'] ) ) {
$insert['edit_date'] = true;
}
// this two-step process ensures any changes submitted along with status=trash get saved before trashing
if ( isset( $input['status'] ) && 'trash' === $input['status'] ) {
// if we insert it with status='trash', it will get double-trashed, so insert it as a draft first
unset( $insert['status'] );
$post_id = wp_update_post( (object) $insert );
// now call wp_trash_post so post_meta gets set and any filters get called
wp_trash_post( $post_id );
} else {
$post_id = wp_update_post( (object) $insert );
}
}
if ( !$post_id || is_wp_error( $post_id ) ) {
return $post_id;
}
// make sure this post actually exists and is not an error of some kind (ie, trying to load media in the posts endpoint)
$post_check = $this->get_post_by( 'ID', $post_id, $args['context'] );
if ( is_wp_error( $post_check ) ) {
return $post_check;
}
if ( $media_id_string ) {
// Yes - this is really how wp-admin does it.
$wpdb->query( $wpdb->prepare(
"UPDATE $wpdb->posts SET post_parent = %d WHERE post_type = 'attachment' AND ID IN ( $media_id_string )",
$post_id
) );
foreach ( $media_results['media_ids'] as $media_id ) {
clean_attachment_cache( $media_id );
}
clean_post_cache( $post_id );
}
// set page template for this post..
if ( isset( $input['page_template'] ) && 'page' == $post_type->name ) {
$page_template = $input['page_template'];
$page_templates = wp_get_theme()->get_page_templates( get_post( $post_id ) );
if ( empty( $page_template ) || 'default' == $page_template || isset( $page_templates[ $page_template ] ) ) {
update_post_meta( $post_id, '_wp_page_template', $page_template );
}
}
// Set like status for the post
/** This filter is documented in modules/likes.php */
$sitewide_likes_enabled = (bool) apply_filters( 'wpl_is_enabled_sitewide', ! get_option( 'disabled_likes' ) );
if ( $new ) {
if ( $sitewide_likes_enabled ) {
if ( false === $likes ) {
update_post_meta( $post_id, 'switch_like_status', 0 );
} else {
delete_post_meta( $post_id, 'switch_like_status' );
}
} else {
if ( $likes ) {
update_post_meta( $post_id, 'switch_like_status', 1 );
} else {
delete_post_meta( $post_id, 'switch_like_status' );
}
}
} else {
if ( isset( $likes ) ) {
if ( $sitewide_likes_enabled ) {
if ( false === $likes ) {
update_post_meta( $post_id, 'switch_like_status', 0 );
} else {
delete_post_meta( $post_id, 'switch_like_status' );
}
} else {
if ( true === $likes ) {
update_post_meta( $post_id, 'switch_like_status', 1 );
} else {
delete_post_meta( $post_id, 'switch_like_status' );
}
}
}
}
// Set sharing status of the post
if ( $new ) {
$sharing_enabled = isset( $sharing ) ? (bool) $sharing : true;
if ( false === $sharing_enabled ) {
update_post_meta( $post_id, 'sharing_disabled', 1 );
}
}
else {
if ( isset( $sharing ) && true === $sharing ) {
delete_post_meta( $post_id, 'sharing_disabled' );
} else if ( isset( $sharing ) && false == $sharing ) {
update_post_meta( $post_id, 'sharing_disabled', 1 );
}
}
if ( isset( $sticky ) ) {
if ( true === $sticky ) {
stick_post( $post_id );
} else {
unstick_post( $post_id );
}
}
// WPCOM Specific (Jetpack's will get bumped elsewhere)
// Tracks how many posts are published and sets meta
// so we can track some other cool stats (like likes & comments on posts published)
if ( defined( 'IS_WPCOM' ) && IS_WPCOM ) {
if (
( $new && 'publish' == $input['status'] )
|| (
! $new && isset( $last_status )
&& 'publish' != $last_status
&& isset( $new_status )
&& 'publish' == $new_status
)
) {
/** This action is documented in modules/widgets/social-media-icons.php */
do_action( 'jetpack_bump_stats_extras', 'api-insights-posts', $this->api->token_details['client_id'] );
update_post_meta( $post_id, '_rest_api_published', 1 );
update_post_meta( $post_id, '_rest_api_client_id', $this->api->token_details['client_id'] );
}
}
// We ask the user/dev to pass the Publicize services he/she wants activated for the post, but Publicize expects us
// to instead flag the ones we don't want to be skipped. Proceed with said logic.
// Any posts coming from Path (client ID 25952) should also not publicize.
if ( $publicize === false || ( isset( $this->api->token_details['client_id'] ) && 25952 == $this->api->token_details['client_id'] ) ) {
// No publicize at all, skip all by ID
foreach ( $GLOBALS['publicize_ui']->publicize->get_services( 'all' ) as $name => $service ) {
delete_post_meta( $post_id, $GLOBALS['publicize_ui']->publicize->POST_SKIP . $name );
$service_connections = $GLOBALS['publicize_ui']->publicize->get_connections( $name );
if ( ! $service_connections ) {
continue;
}
foreach ( $service_connections as $service_connection ) {
update_post_meta( $post_id, $GLOBALS['publicize_ui']->publicize->POST_SKIP . $service_connection->unique_id, 1 );
}
}
} else if ( is_array( $publicize ) && ( count ( $publicize ) > 0 ) ) {
foreach ( $GLOBALS['publicize_ui']->publicize->get_services( 'all' ) as $name => $service ) {
/*
* We support both indexed and associative arrays:
* * indexed are to pass entire services
* * associative are to pass specific connections per service
*
* We do support mixed arrays: mixed integer and string keys (see 3rd example below).
*
* EG: array( 'twitter', 'facebook') will only publicize to those, ignoring the other available services
* Form data: publicize[]=twitter&publicize[]=facebook
* EG: array( 'twitter' => '(int) $pub_conn_id_0, (int) $pub_conn_id_3', 'facebook' => (int) $pub_conn_id_7 ) will publicize to two Twitter accounts, and one Facebook connection, of potentially many.
* Form data: publicize[twitter]=$pub_conn_id_0,$pub_conn_id_3&publicize[facebook]=$pub_conn_id_7
* EG: array( 'twitter', 'facebook' => '(int) $pub_conn_id_0, (int) $pub_conn_id_3' ) will publicize to all available Twitter accounts, but only 2 of potentially many Facebook connections
* Form data: publicize[]=twitter&publicize[facebook]=$pub_conn_id_0,$pub_conn_id_3
*/
// Delete any stale SKIP value for the service by name. We'll add it back by ID.
delete_post_meta( $post_id, $GLOBALS['publicize_ui']->publicize->POST_SKIP . $name );
// Get the user's connections
$service_connections = $GLOBALS['publicize_ui']->publicize->get_connections( $name );
// if the user doesn't have any connections for this service, move on
if ( ! $service_connections ) {
continue;
}
if ( !in_array( $name, $publicize ) && !array_key_exists( $name, $publicize ) ) {
// Skip the whole service by adding each connection ID
foreach ( $service_connections as $service_connection ) {
update_post_meta( $post_id, $GLOBALS['publicize_ui']->publicize->POST_SKIP . $service_connection->unique_id, 1 );
}
} else if ( !empty( $publicize[ $name ] ) ) {
// Seems we're being asked to only push to [a] specific connection[s].
// Explode the list on commas, which will also support a single passed ID
$requested_connections = explode( ',', ( preg_replace( '/[\s]*/', '', $publicize[ $name ] ) ) );
// Flag the connections we can't match with the requested list to be skipped.
foreach ( $service_connections as $service_connection ) {
if ( !in_array( $service_connection->meta['connection_data']->id, $requested_connections ) ) {
update_post_meta( $post_id, $GLOBALS['publicize_ui']->publicize->POST_SKIP . $service_connection->unique_id, 1 );
} else {
delete_post_meta( $post_id, $GLOBALS['publicize_ui']->publicize->POST_SKIP . $service_connection->unique_id );
}
}
} else {
// delete all SKIP values; it's okay to publish to all connected IDs for this service
foreach ( $service_connections as $service_connection ) {
delete_post_meta( $post_id, $GLOBALS['publicize_ui']->publicize->POST_SKIP . $service_connection->unique_id );
}
}
}
}
if ( ! is_null( $publicize_custom_message ) ) {
if ( empty( $publicize_custom_message ) ) {
delete_post_meta( $post_id, $GLOBALS['publicize_ui']->publicize->POST_MESS );
} else {
update_post_meta( $post_id, $GLOBALS['publicize_ui']->publicize->POST_MESS, trim( $publicize_custom_message ) );
}
}
if ( ! empty( $insert['post_format'] ) ) {
if ( 'default' !== strtolower( $insert['post_format'] ) ) {
set_post_format( $post_id, $insert['post_format'] );
}
else {
set_post_format( $post_id, get_option( 'default_post_format' ) );
}
}
if ( isset( $featured_image ) ) {
$this->parse_and_set_featured_image( $post_id, $delete_featured_image, $featured_image );
}
if ( ! empty( $metadata ) ) {
foreach ( (array) $metadata as $meta ) {
$meta = (object) $meta;
// Custom meta description can only be set on sites that have a business subscription.
if ( Jetpack_SEO_Posts::DESCRIPTION_META_KEY == $meta->key && ! Jetpack_SEO_Utils::is_enabled_jetpack_seo() ) {
return new WP_Error( 'unauthorized', __( 'SEO tools are not enabled for this site.', 'jetpack' ), 403 );
}
$existing_meta_item = new stdClass;
if ( empty( $meta->operation ) )
$meta->operation = 'update';
if ( ! empty( $meta->value ) ) {
if ( 'true' == $meta->value )
$meta->value = true;
if ( 'false' == $meta->value )
$meta->value = false;
}
if ( ! empty( $meta->id ) ) {
$meta->id = absint( $meta->id );
$existing_meta_item = get_metadata_by_mid( 'post', $meta->id );
if ( ! $existing_meta_item || $post_id !== (int) $existing_meta_item->post_id ) {
// Only allow updates for metadata on this post
continue;
}
}
$unslashed_meta_key = wp_unslash( $meta->key ); // should match what the final key will be
$meta->key = wp_slash( $meta->key );
$unslashed_existing_meta_key = wp_unslash( $existing_meta_item->meta_key );
$existing_meta_item->meta_key = wp_slash( $existing_meta_item->meta_key );
// make sure that the meta id passed matches the existing meta key
if ( ! empty( $meta->id ) && ! empty( $meta->key ) ) {
$meta_by_id = get_metadata_by_mid( 'post', $meta->id );
if ( ! $meta_by_id || $meta_by_id->meta_key !== $meta->key ) {
continue; // skip this meta
}
}
switch ( $meta->operation ) {
case 'delete':
if ( ! empty( $meta->id ) && ! empty( $existing_meta_item->meta_key ) && current_user_can( 'delete_post_meta', $post_id, $unslashed_existing_meta_key ) ) {
delete_metadata_by_mid( 'post', $meta->id );
} elseif ( ! empty( $meta->key ) && ! empty( $meta->previous_value ) && current_user_can( 'delete_post_meta', $post_id, $unslashed_meta_key ) ) {
delete_post_meta( $post_id, $meta->key, $meta->previous_value );
} elseif ( ! empty( $meta->key ) && current_user_can( 'delete_post_meta', $post_id, $unslashed_meta_key ) ) {
delete_post_meta( $post_id, $meta->key );
}
break;
case 'add':
if ( ! empty( $meta->id ) || ! empty( $meta->previous_value ) ) {
break;
} elseif ( ! empty( $meta->key ) && ! empty( $meta->value ) && ( current_user_can( 'add_post_meta', $post_id, $unslashed_meta_key ) || WPCOM_JSON_API_Metadata::is_public( $meta->key ) ) ) {
add_post_meta( $post_id, $meta->key, $meta->value );
}
break;
case 'update':
if ( ! isset( $meta->value ) ) {
break;
} elseif ( ! empty( $meta->id ) && ! empty( $existing_meta_item->meta_key ) && ( current_user_can( 'edit_post_meta', $post_id, $unslashed_existing_meta_key ) || WPCOM_JSON_API_Metadata::is_public( $meta->key ) ) ) {
update_metadata_by_mid( 'post', $meta->id, $meta->value );
} elseif ( ! empty( $meta->key ) && ! empty( $meta->previous_value ) && ( current_user_can( 'edit_post_meta', $post_id, $unslashed_meta_key ) || WPCOM_JSON_API_Metadata::is_public( $meta->key ) ) ) {
update_post_meta( $post_id, $meta->key, $meta->value, $meta->previous_value );
} elseif ( ! empty( $meta->key ) && ( current_user_can( 'edit_post_meta', $post_id, $unslashed_meta_key ) || WPCOM_JSON_API_Metadata::is_public( $meta->key ) ) ) {
update_post_meta( $post_id, $meta->key, $meta->value );
}
break;
}
}
}
/** This action is documented in json-endpoints/class.wpcom-json-api-update-post-endpoint.php */
do_action( 'rest_api_inserted_post', $post_id, $insert, $new );
$return = $this->get_post_by( 'ID', $post_id, $args['context'] );
if ( !$return || is_wp_error( $return ) ) {
return $return;
}
if ( isset( $input['type'] ) && 'revision' === $input['type'] ) {
$return['preview_nonce'] = wp_create_nonce( 'post_preview_' . $input['parent'] );
}
if ( isset( $sticky ) ) {
// workaround for sticky test occasionally failing, maybe a race condition with stick_post() above
$return['sticky'] = ( true === $sticky );
}
if ( ! empty( $media_results['errors'] ) )
$return['media_errors'] = $media_results['errors'];
if ( 'publish' !== $post->post_status ) {
$sal_site = $this->get_sal_post_by( 'ID', $post_id, $args['context'] );
$return['other_URLs'] = (object) $sal_site->get_permalink_suggestions( $input['title'] );
}
/** This action is documented in json-endpoints/class.wpcom-json-api-site-settings-endpoint.php */
do_action( 'wpcom_json_api_objects', 'posts' );
return $return;
}
// /sites/%s/posts/%d/delete -> $blog_id, $post_id
function delete_post( $path, $blog_id, $post_id ) {
$post = get_post( $post_id );
if ( !$post || is_wp_error( $post ) ) {
return new WP_Error( 'unknown_post', 'Unknown post', 404 );
}
if ( ! $this->is_post_type_allowed( $post->post_type ) ) {
return new WP_Error( 'unknown_post_type', 'Unknown post type', 404 );
}
if ( !current_user_can( 'delete_post', $post->ID ) ) {
return new WP_Error( 'unauthorized', 'User cannot delete posts', 403 );
}
$args = $this->query_args();
$return = $this->get_post_by( 'ID', $post->ID, $args['context'] );
if ( !$return || is_wp_error( $return ) ) {
return $return;
}
/** This action is documented in json-endpoints/class.wpcom-json-api-site-settings-endpoint.php */
do_action( 'wpcom_json_api_objects', 'posts' );
// we need to call wp_trash_post so that untrash will work correctly for all post types
if ( 'trash' === $post->post_status )
wp_delete_post( $post->ID );
else
wp_trash_post( $post->ID );
$status = get_post_status( $post->ID );
if ( false === $status ) {
$return['status'] = 'deleted';
return $return;
}
return $this->get_post_by( 'ID', $post->ID, $args['context'] );
}
// /sites/%s/posts/%d/restore -> $blog_id, $post_id
function restore_post( $path, $blog_id, $post_id ) {
$args = $this->query_args();
$post = get_post( $post_id );
if ( !$post || is_wp_error( $post ) ) {
return new WP_Error( 'unknown_post', 'Unknown post', 404 );
}
if ( !current_user_can( 'delete_post', $post->ID ) ) {
return new WP_Error( 'unauthorized', 'User cannot restore trashed posts', 403 );
}
/** This action is documented in json-endpoints/class.wpcom-json-api-site-settings-endpoint.php */
do_action( 'wpcom_json_api_objects', 'posts' );
wp_untrash_post( $post->ID );
return $this->get_post_by( 'ID', $post->ID, $args['context'] );
}
protected function parse_and_set_featured_image( $post_id, $delete_featured_image, $featured_image ) {
if ( $delete_featured_image ) {
delete_post_thumbnail( $post_id );
return;
}
$featured_image = (string) $featured_image;
// if we got a post ID, we can just set it as the thumbnail
if ( ctype_digit( $featured_image ) && 'attachment' == get_post_type( $featured_image ) ) {
set_post_thumbnail( $post_id, $featured_image );
return $featured_image;
}
$featured_image_id = $this->handle_media_sideload( $featured_image, $post_id, 'image' );
if ( empty( $featured_image_id ) || ! is_int( $featured_image_id ) )
return false;
set_post_thumbnail( $post_id, $featured_image_id );
return $featured_image_id;
}
protected function parse_and_set_author( $author = null, $post_type = 'post' ) {
if ( empty( $author ) || ! post_type_supports( $post_type, 'author' ) )
return get_current_user_id();
$author = (string) $author;
if ( ctype_digit( $author ) ) {
$_user = get_user_by( 'id', $author );
if ( ! $_user || is_wp_error( $_user ) )
return new WP_Error( 'invalid_author', 'Invalid author provided' );
return $_user->ID;
}
$_user = get_user_by( 'login', $author );
if ( ! $_user || is_wp_error( $_user ) )
return new WP_Error( 'invalid_author', 'Invalid author provided' );
return $_user->ID;
}
protected function should_load_theme_functions( $post_id = null ) {
if ( empty( $post_id ) ) {
$input = $this->input( true );
$type = $input['type'];
} else {
$type = get_post_type( $post_id );
}
return ! empty( $type ) && ! in_array( $type, array( 'post', 'revision' ) );
}
}
In the IP (Internet Protocol) used in the Internet, information about the protocol, source IP address, and destination IP address is managed. In addition, some transport protocols manage information about the source port and destination port. Packets transmitted using these protocols include the information managed by each protocol. Flow measurement is a method for classifying types of communications based on the information included in the packets.
In flow measurement, packets that have the same attributes are regarded as packets belonging to the same communication. For example, packets having the same value for each of the protocol, source IP address, destination IP address, source port, and destination port are regarded as packets belonging to the same communication. A set of packets belonging to the same communication is called a flow. By measuring the data amount or packet amount of each flow, a plurality of communication services can be monitored among a plurality of locations, so that it becomes possible to identify a section between locations, or a communication service, in which the communication amount is extremely large, and to ascertain the communication trend.
The Internet is constructed by interconnecting a plurality of networks, each including a plurality of routers for performing routing. A packet transmitted from a source reaches its destination via some routers. Since a router transfers a packet by referring to the packet's IP header, and in some cases to a transport-layer header, the router is suitable as an apparatus for performing classification of flows. Techniques for reporting Flow Records of packets passing through the router to another apparatus include NetFlow (refer to non-patent document 1) and IPFIX (IP Flow Information Export).
By transmitting measurement packets, obtained by packetizing Flow Records according to a specific format, to a measurement terminal on the network, communication information of the node can be analyzed. However, according to non-patent document 2, the number of flows abruptly increases when attack traffic called DDoS (Distributed Denial of Service) occurs, in which a large amount of traffic is continuously transmitted from distributed source addresses, and when attack traffic called a port scan occurs, in which the service state and vulnerabilities are detected by trying to connect to every port of a target host.
In addition, according to non-patent document 3, IPFIX Flow Records can be reported over UDP (User Datagram Protocol), which does not have congestion control, or over TCP (Transmission Control Protocol) or SCTP (Stream Control Transmission Protocol), which do. When the apparatus sending flows transmits using UDP, which lacks a congestion control function, the number of packets transmitted from the flow transmission apparatus (such as a router) to the measurement terminal increases as the number of flows increases abruptly. As a result, there is a possibility that congestion occurs in the measurement network between the flow transmission apparatus and the measurement terminal.
On the other hand, when the apparatus sending flows transmits using TCP or SCTP, which have congestion control, congestion does not occur even if the number of flows increases abruptly. However, since the congestion control function limits the number of Flow Records the flow transmission apparatus can send, the number of Flow Records that can be transmitted becomes smaller than the number of Flow Records that are generated, and the internal transmission buffer may overflow. As a result, the Flow Records that can be transmitted are limited to those generated first, so that information about the whole observed traffic cannot be sent. [Non-patent document 1] B. Claise. Cisco Systems NetFlow Services Export Version 9. RFC 3954 (Informational), October 2004. http://www.ietf.org/rfc/rfc3954.txt (browsed on Sep. 8, 2006). [Non-patent document 2] Cristian Estan, Ken Keys, David Moore, George Varghese: "Building a better NetFlow", ACM SIGCOMM Computer Communication Review, Vol. 34, Issue 4, pp. 245-256 (2004). [Non-patent document 3] B. Claise. IPFIX Protocol Specification. Internet Draft, June 2006. http://tools.ietf.org/id/draft-ietf-ipfix-protocol-22.txt
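The flow classification described above, grouping packets by the 5-tuple of protocol, source/destination IP address, and source/destination port, and then accumulating packet and byte counts per flow, can be sketched as follows. This is a minimal illustration only; the in-memory packet dictionaries and their field names are made up for the example and are not part of any real NetFlow/IPFIX implementation.

```python
from collections import defaultdict

def flow_key(pkt):
    # Packets sharing this 5-tuple belong to the same flow.
    return (pkt["proto"], pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"])

def measure_flows(packets):
    # Accumulate packet and byte counts per flow.
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        stats = flows[flow_key(pkt)]
        stats["packets"] += 1
        stats["bytes"] += pkt["length"]
    return dict(flows)

packets = [
    {"proto": 6, "src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 12345, "dst_port": 80, "length": 1500},
    {"proto": 6, "src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 12345, "dst_port": 80, "length": 40},
    {"proto": 17, "src_ip": "10.0.0.3", "dst_ip": "10.0.0.2",
     "src_port": 53, "dst_port": 4000, "length": 120},
]
flows = measure_flows(packets)
# The two TCP packets collapse into one flow record; the UDP packet is a second flow.
```

This also illustrates why DDoS traffic with distributed source addresses explodes the flow table: every distinct source address creates a new key, and hence a new Flow Record to export.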
Monthly Archives: March 2017
For those who are not aware, v3.0 of NativeScript is the next major version to be released. It has been in the works for several months now, as parts of it have been completely rewritten to improve the framework's speed in a number of areas. Some parts have undergone radical changes in design, and as such will require changes to plugins to make them compatible.
Unfortunately these are breaking changes and will mess up plugins for a while...
Some pre-release information has trickled out and as such the plugin developers needs to be made aware of these changes, so we can start getting a jump on these changes...
The biggest issue is that the handling of properties and stylers has been completely changed; any plugin that deals with them will HAVE to be changed to support 3.0. This means your plugin will no longer be NS <= 2.5 compatible; these are BREAKING changes.
Plugin Sites and the Package.json
As an aside, if your plugin doesn't have any breaking changes and can continue to work fine in all versions of NS, then I am asking you to add a new flag to your package.json file to indicate that the plugin is 3.0 compatible but doesn't require 3.0.
The additional "plugin" structure is being introduced to try to capture the data that has been missing from the plugin infrastructure. At this moment each item is totally optional. However, this will allow the plugin sites to categorize and filter the plugins based on criteria that are important to subsets of the users using the plugin sites. Please see: http://fluentreports.com/blog/?p=489 for more details on the new "plugin" structure proposal.
Back to the breaking Changes
The first resource is a video that some of the NativeScript team produced last week (https://youtu.be/bwO3cYPb4zQ), which will give you a decent overview of the changes.
The developers working on the 3.0 project gave a run down of the changes and how they benefit us and/or what changes we need to be concerned about.
They used this document in the above video; they only covered an overview there, but the document has all the nitty-gritty details.
For this to be a smooth transition, I really strongly encourage you to use the "core3" attribute if you are not just bumping your platforms to 3.0.0. I will be automatically filtering the plugins that I believe are only 2.0 into 2.0 mode and only 3.0 into 3.0 mode, so having things tagged correctly will help the end users have a much more painless transition.
Some of you might not be aware, but I wrote plugins.nativescript.rocks. I also helped write the plugins.nativescript.org site (if something doesn't work on plugins.nativescript.org, please blame my cohorts in crime, Nathan Walker and George Edwards. I had NOTHING to do with any bugs! 😛 )
Well, with the new upcoming changes in 3.0, and some data that we have been unable to determine, I have decided to request that plugin authors add a little bit more metadata to the package.json file that comes with the plugin.
This tells NativeScript which versions of the NativeScript runtimes are supported. This allows the TNS command line to throw an error/warning if you are not using the proper versions. This is useful information, as it tells us what platforms are supported. However, this fails when people (like me) write dummy wrappers so that the plugin doesn't crash on the other platform. On the plugins site it will say the plugin supports both Android and iOS, but in reality the plugin only really supports iOS and doesn't actually do anything on Android. I would like to have the plugins site reflect this reality rather than you downloading a plugin you think works on Android only to find out it doesn't.
In addition since Angular was introduced to the eco-system, there are a large number of plugins that actually do NOT work with NAN (NativeScript Angular) code. And even some that don't work with PAN (Plain Awesome NativeScript) code. In addition in the future we might get other frameworks supported like VueJS (VAN?), etc.
So I would like to start capturing the data so that we can do these additional cool things. And so I am proposing the following additional OPTIONAL meta data to be added.
will be assumed to be false, unless Angular is detected in the Keywords/name/description.
pan = Plain Awesome NativeScript
Will be assumed to be true.
core3 = Supports NS Core Modules 3.
Will be assumed FALSE if platforms.ios/.android < 3
Will be assumed TRUE if platforms.ios/.android >= 3.
wrapper = Using a dummy wrapper
Will default to false; use "ios" or "android" to signify the platform which is using a wrapper.
category
This is the category to put the plugin in. Valid categories currently are: "Interface", "Processing", "Templates", "Developer", "Utilities"
I am up for other category suggestions. But at this point these are the primary categories that I use on plugins.nativescript.rocks.
Please note each key is optional; however, your plugin will get extra points for having a category key. And you will LOSE points if we detect a plugin is using a wrapper and you haven't tagged it, as plugin users really hate seeing faulty data about a plugin's platform support.
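Pulling the standard platform metadata and the proposed keys together, a plugin's package.json might look something like this. This is a sketch only: the plugin name and version are made up, and nesting the proposed keys under a "plugin" block is an assumption based on the proposal post linked above.

```json
{
  "name": "nativescript-example-plugin",
  "version": "1.2.0",
  "nativescript": {
    "platforms": {
      "ios": "2.5.0",
      "android": "2.5.0"
    }
  },
  "plugin": {
    "pan": true,
    "core3": true,
    "wrapper": "ios",
    "category": "Interface"
  }
}
```

Here "core3": true marks the plugin as NS 3 compatible even though the declared platform versions stay at 2.5.0 (so it doesn't require 3.0), and "wrapper": "ios" flags that the iOS side is only a dummy wrapper.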
I would recommend plugin authors start adding this to any releases of their plugins so that we can capture the data. This will become critical with the 3.0 release, as a large chunk of plugins are not going to be compatible between NS 2 & NS 3.
<?php
// Copyright 2004-present Facebook. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
namespace Facebook\WebDriver;
use InvalidArgumentException;
/**
* Set values of a cookie.
*
* Implements ArrayAccess for backwards compatibility.
*
* @see https://w3c.github.io/webdriver/webdriver-spec.html#cookies
*/
class Cookie implements \ArrayAccess
{
/** @var array */
protected $cookie = [
'name' => null,
'value' => null,
'path' => null,
'domain' => null,
'expiry' => null,
'secure' => null,
'httpOnly' => null,
];
/**
* @param string $name The name of the cookie; may not be null or an empty string.
* @param string $value The cookie value; may not be null.
*/
public function __construct($name, $value)
{
$this->validateCookieName($name);
$this->validateCookieValue($value);
$this->cookie['name'] = $name;
$this->cookie['value'] = $value;
}
/**
* @param array $cookieArray
* @return Cookie
*/
public static function createFromArray(array $cookieArray)
{
$cookie = new self($cookieArray['name'], $cookieArray['value']);
if (isset($cookieArray['path'])) {
$cookie->setPath($cookieArray['path']);
}
if (isset($cookieArray['domain'])) {
$cookie->setDomain($cookieArray['domain']);
}
if (isset($cookieArray['expiry'])) {
$cookie->setExpiry($cookieArray['expiry']);
}
if (isset($cookieArray['secure'])) {
$cookie->setSecure($cookieArray['secure']);
}
if (isset($cookieArray['httpOnly'])) {
$cookie->setHttpOnly($cookieArray['httpOnly']);
}
return $cookie;
}
/**
* @return string
*/
public function getName()
{
return $this->cookie['name'];
}
/**
* @return string
*/
public function getValue()
{
return $this->cookie['value'];
}
/**
* The path the cookie is visible to. Defaults to "/" if omitted.
*
* @param string $path
*/
public function setPath($path)
{
$this->cookie['path'] = $path;
}
/**
* @return string|null
*/
public function getPath()
{
return $this->cookie['path'];
}
/**
* The domain the cookie is visible to. Defaults to the current browsing context's document's URL domain if omitted.
*
* @param string $domain
*/
public function setDomain($domain)
{
if (mb_strpos($domain, ':') !== false) {
throw new InvalidArgumentException(sprintf('Cookie domain "%s" should not contain a port', $domain));
}
$this->cookie['domain'] = $domain;
}
/**
* @return string|null
*/
public function getDomain()
{
return $this->cookie['domain'];
}
/**
* The cookie's expiration date, specified in seconds since Unix Epoch.
*
* @param int $expiry
*/
public function setExpiry($expiry)
{
$this->cookie['expiry'] = (int) $expiry;
}
/**
* @return int|null
*/
public function getExpiry()
{
return $this->cookie['expiry'];
}
/**
* Whether this cookie requires a secure connection (https). Defaults to false if omitted.
*
* @param bool $secure
*/
public function setSecure($secure)
{
$this->cookie['secure'] = $secure;
}
/**
* @return bool|null
*/
public function isSecure()
{
return $this->cookie['secure'];
}
/**
* Whether the cookie is an HTTP only cookie. Defaults to false if omitted.
*
* @param bool $httpOnly
*/
public function setHttpOnly($httpOnly)
{
$this->cookie['httpOnly'] = $httpOnly;
}
/**
* @return bool|null
*/
public function isHttpOnly()
{
return $this->cookie['httpOnly'];
}
/**
* @return array
*/
public function toArray()
{
return $this->cookie;
}
public function offsetExists($offset)
{
return isset($this->cookie[$offset]);
}
public function offsetGet($offset)
{
return $this->cookie[$offset];
}
public function offsetSet($offset, $value)
{
$this->cookie[$offset] = $value;
}
public function offsetUnset($offset)
{
unset($this->cookie[$offset]);
}
/**
* @param string $name
*/
protected function validateCookieName($name)
{
if ($name === null || $name === '') {
throw new InvalidArgumentException('Cookie name should be non-empty');
}
if (mb_strpos($name, ';') !== false) {
throw new InvalidArgumentException('Cookie name should not contain a ";"');
}
}
/**
* @param string $value
*/
protected function validateCookieValue($value)
{
if ($value === null) {
throw new InvalidArgumentException('Cookie value is required when setting a cookie');
}
}
}
First Impressions: Davina’s Kitchen Favourites
There’s nothing I love more than scrolling through a new cookbook and eagerly working out what I am going to cook first! I have already filmed the chicken and egg fried rice which I will be sharing with you soon 🙂
Data store clusters do for storage what VMware Distributed Resource Scheduler (DRS) clusters do for CPU and
memory resources. Namely, they allow admins to take an array of data stores that potentially
reside on different storage arrays and create a single data store cluster object.
When virtualisation admins create a new virtual machine (VM) on vSphere 5, they no longer have
to worry about which data store it should reside on, just as they don’t worry about which ESX host
a VM executes on in an HA/DRS cluster.
Users often create data store clusters based on class (gold, silver, bronze) or on different
performance attributes (FC/SSD, FC/SAS, iSCSI/SAS, NFS/SATA); or with other
attributes, for example clusters that support storage vendor snapshots or replication.
It mirrors cloud computing where the underlying plumbing (storage, network and servers) is
abstracted to create a commodity-based model of data centre resources.
VMware vSphere 5 data store clusters are a radical departure from individual data stores, but
data store clusters align nicely to the trend of storage tiering -- where storage arrays are no
longer managed as stand-alone units but as an array of arrays, as it were.
Problems can occur when the storage array as well as the Storage DRS can move data volumes
around, so it makes more sense to use either data store clusters or Storage DRS, but not both.
Another alternative is to use Storage DRS for initial placement but not for moving VMs.
Here are some considerations to keep in mind when working with data store clusters:
• Data stores that make up a data store cluster can reside on different storage arrays.
• Data stores work with different storage protocols (FC, iSCSI, NFS), and users are advised not to
mix these together in a single data store cluster.
• It is possible to mix VMFS-3 and VMFS-5 data stores together, although it isn’t
recommended.
• Data stores in a data store cluster:
- have the same performance characteristics such as the same number of spindles, disk types and
RAID levels;
- have the same attributes. Consistency is the key here. For example, all the data stores in the
cluster should be enabled for replication based on the type and on frequency;
- are only accessible by ESXi 5.
vSphere 5 Storage DRS
Another vSphere 5 storage highlight is Storage
DRS, or SDRS. SDRS is complementary to data store clusters in that users must create data store
clusters in order to use SDRS.
The job of vSphere
5 Storage DRS is to place a VM’s virtual disks on the right data stores within the right
cluster -- just like its sister technology DRS puts the VM on the correct ESX hosts
within its cluster to balance CPU and memory resources.
SDRS can also move VMs from one data store to another within a data store cluster to improve
overall use.
Figure 1: Admins can
select the data stores they want to use within a particular data store cluster.
Figure 2: In this example, four data stores on four different
storage arrays were added to the data store cluster.
DRS, like SDRS, has affinity and anti-affinity rules to ensure that VMs or virtual disks with
similar storage I/O requirements don’t compete for disk time.
By default, all the virtual disks that make up a VM with multiple Virtual Machine Disks (VMDKs)
are placed in the same data store, but with anti-affinity rules, admins can reorganise and
distribute them across many data stores for optimal performance.
They can also indicate that two virtual disks must never reside on the same data store to avoid
friction in the infrastructure. Additionally, there is a “maintenance mode” feature that allows
admins to fully empty a data store for maintenance purposes.
SDRS is compatible with all the main VMware vSphere features (VMware snapshots, RDMs, NFS and VMFS), but it is only
compatible with ESXi 5.
SDRS manifests itself when users create, clone or build a new VM from a template.
Figure 3: SDRS is
visible when building a new virtual machine from a template. There’s also an option to disable SDRS
for a particular VM.
SDRS controls disk activity according to these metrics:
• SDRS uses a combination of free space and the latency to the storage to calculate the best
data store within a data store cluster.
• It uses only the latency to decide if a virtual machine’s files should be moved to improve
performance.
• It checks space on a data store every five minutes using vCenter.
• SDRS checks latency only every eight hours, so don’t expect VMs to whiz about from one data store
to another within a cluster every millisecond.
Data store clusters are created in the “Data stores and data store clusters” view in vCenter and
then assigned to an existing HA/DRS cluster. Users must ensure that all ESX hosts in the target
cluster can see all the data stores in the cluster. Theoretically, if you don’t do that, problems
would occur. Let’s say one data store cluster is constructed of ten data stores, but one of your
ESX hosts can only see nine of the data stores. You’d be in trouble with features like HA, DRS and
VMotion.
Finally, disk-intensive backup windows can really mess up SDRS. Users can schedule SDRS to
ignore backup windows so its calculations are based on true operational disk activity.
Figure 4: SDRS Scheduling is fully automated, but admins
can configure SDRS to schedule it so that it ignores backup windows.
With the vSphere Storage Appliance (VSA), storage intelligence, which is usually based on some type of Linux/FreeBSD
distribution, is ported out of hardware (or “storage controllers”) into a virtual machine.
The VMware vSphere 5 VSA can use the local server’s storage and share it across the network,
turning direct-attached storage (DAS) into an NFS
appliance.
VSA is deployed in two ways:
1. With two ESX hosts and a separate vCenter server. Here, the vCenter server must run a clustering
and management service so it can act as witness to the two ESX hosts and determine if an error has
occurred.
2. With three or more ESX hosts, without a vCenter server to act as witness. In case of a physical
failure of an ESX host, the data is protected by creating replicas on other ESX hosts in the
cluster that are also running the VSA.
The idea of the VSA is to create a distributed array of shared storage without a single point of
failure and without using an expensive centralised SAN system. It is aimed at companies in the
SMB/SME market that don’t have the budget for a SAN.
But, in its first release, some industry insiders say VSA’s purchase price combined with the
disk space it uses for the replicas makes it a poor choice. With that said, recent updates to the
different RAID types available have significantly improved VSA’s actual disk use ratios. The same
system with the same level of protections can now deliver more disk space capacity in real
terms.
Here are some best practice tips for vSphere 5 VSA:
• The VSA has two virtual NICs -- one for
the front-end and one for the back-end ports. The front-end NIC advertises an IP address for
inbound connections and for ESX hosts to mount the NFS data stores. The back-end virtual NIC is
used for management and for the cluster network.
• The VSA’s default memory usage is 24 GB, with up to 8 disks and one SCSI controller. VMware recommends
a one-gigabit network interface as a minimum.
• VMware recommends that local direct-attached storage be configured with RAID, either RAID1+0,
RAID5 or RAID6.
VMware has made huge improvements to vSphere 5, especially from a storage
perspective. VMware vSphere 5’s storage capabilities along with these best practice
recommendations will help IT maximise the use of the product.
Mike Laverick is a former VMware instructor with 17 years of
experience in technologies such as Novell, Windows, Citrix and VMware. Since 2003, he has been
involved with the VMware community. Laverick is a VMware forum moderator and member of the London
VMware User Group. He is also the man behind the virtualisation website and blog RTFM Education, where he publishes free guides and utilities
for VMware customers. Laverick received the VMware vExpert award in 2009, 2010 and 2011.
Since joining TechTarget as a contributor, Laverick has also found the time to run a weekly
podcast called the Chinwag and the Vendorwag. He helped found the Irish and Scottish VMware user
groups and now speaks regularly at larger regional events organised by the global VMware user group
in North America, EMEA and APAC. Laverick published books on VMware Virtual Infrastructure 3,
vSphere4, Site Recovery Manager and View.
WELLINGTON CITY
Some of those who still believe the threat of "climate change" is a reality, not having been advised about the thousands of leaked e-mails from the University of East Anglia’s Climate Research Unit that show "man-made global warming" is a scam, will be meeting on Saturday, the 5th of December at Civic Square in Wellington at 1pm, and marching to parliament to unwittingly promote world government and the global tyranny that will be foisted upon the public via the global warming fraud.
All the well-informed people left in Wellington, are called on to be there with signs and placards to stand and challenge these ill-informed souls.
The counter protest/march starts at 12.45pm and meets at Civic Square next to the ill-informed "climate change" advocates. Be sure to bring some good signs and we will march with those who have been brainwashed by the mainstream media to parliament together. Hopefully, some of them may wake up on the way.
For those who have not yet cottoned on to the fact that the UN Framework Convention on Climate Change Treaty is really about creating a communist world government, go and have a look at it for yourself. A draft copy of this treaty, dated the 15th of September, which has already been through several negotiation rounds, can be found on the Internet. Its ultimate purpose is readily apparent on page 18, where it states: "The scheme for the new institutional arrangement under the Convention will be based on three basic pillars: government; facilitative mechanism; and financial mechanism...” According to treaty expert, Lord Christopher Monckton, who worked as an advisor to Margaret Thatcher, if this dreadful treaty is signed, it will create a tyrannical one world government, which has the power to dictate to elected bodies, as well as control all free markets. He urges people to stop the treaty in its tracks now, by contacting Members of Parliament and telling them they must not sign it.
Related:
To find the plethora of evidence that shows "climate change" is a hoax Google "Climategate."
Lawrence Solomon: New Zealand's Climategate
using System;
using JabbR.Models;
namespace JabbR.Services
{
public interface IChatNotificationService
{
void OnUserNameChanged(ChatUser user, string oldUserName, string newUserName);
void UpdateUnreadMentions(ChatUser mentionedUser, int unread);
}
}
package railo.commons.io.auto;
import java.io.IOException;
import java.io.OutputStream;
/**
* Closes the stream automatically when the wrapping object is destroyed by the garbage collector.
*/
public final class AutoCloseOutputStream extends OutputStream {
private final OutputStream os;
/**
* constructor of the class
* @param os
*/
public AutoCloseOutputStream(OutputStream os) {
this.os=os;
}
@Override
public void write(int b) throws IOException {
os.write(b);
}
@Override
public void close() throws IOException {
os.close();
}
@Override
public void flush() throws IOException {
os.flush();
}
@Override
public void write(byte[] b, int off, int len) throws IOException {
os.write(b, off, len);
}
@Override
public void write(byte[] b) throws IOException {
os.write(b);
}
@Override
public void finalize() throws Throwable {
try {
os.close();
}
catch(Exception e) {}
finally {
// invoke the superclass finalizer last, even if close() fails
super.finalize();
}
}
}
| 2024-01-12T01:26:19.665474 | https://example.com/article/9054 |
Tank cycling basics. Learn about reducing ammonia levels to keep your fish healthy, how ammonia is produced and what you can do to prevent buildup.
Did you listen to the local store employee and run your filter for 24 hours before adding fish? I did and like you, I now know the error of my ways. So what to do after you put fish in the tank and then learn about the need to “cycle” your tank? What is cycling anyway? I don’t even own a bicycle!
In a nutshell and at the very basic level, cycling a tank is allowing bacterial colonies that consume harmful compounds to grow to a level to keep your fish healthy. The first bacteria to appear consume Ammonia (NH3) and excrete Nitrite (NO2). The next to show up consume Nitrite and excrete NitrAte (NO3). Both Ammonia and Nitrite can hurt fish long term or be deadly on the short term. Nitrate (NO3) is less harmful and fish can acclimate to it. I prefer to keep my levels under 20 PPM, but up to 80 PPM can be fish safe. Where does the Ammonia come from? Your fish produce it in their waste and any left over food (or rotting plants) decompose into Ammonia. A fishless cycle, which is preferable by most standards, involves adding an ammonia source (usually a decaying shrimp or pure non-scented ammonia) and allowing the bacterial colonies to grow before fish are added. But what if you didn’t know about any of this before buying those gorgeous fish? This is the point that a lot of folks (including me) start to get a bit overwhelmed. There’s really no need for it though. Get a liquid test kit (API Master FW is my favorite) and follow the directions. Don’t waste your money on test strips. They are more expensive in the long run and a lot less accurate. The test results will tell you what to do.
If Ammonia or Nitrite equal .25 PPM or higher, it’s time to do a water change! Remember to use a good dechlorinator, like Seachem’s Prime. If you measure .50 PPM and do a 50% change, you will be at .25. Do another 50% change and you’ll be at .125, etc. Ok, well that’s all well and good but I already have fish! What should I do now?! Seriously think about returning some or all of your fish and doing a fishless cycle. There’s a great sticky on it here… http://www.aquariumadvice.com/forums/f15/fishless-cycling-for-dummies-103339.html. If you absolutely can’t bear to part with your new finned friends, it’s time to roll up your sleeves and get dedicated. TEST your water daily (or more) and change it as needed! You may need to do this more than once a day so don’t be surprised. This regimen shouldn’t last longer than a month or so. Despite perpetual rumor and misinformation, changing water WILL NOT slow down your cycle and will keep your fish healthy. The bacteria that we need for a healthy “cycled” system live in the filter media, gravel, and décor, but don’t really exist in substantial amounts in the water itself. Can I do anything to speed things up? Yes! Get some nasty old filter media (Filter pad, bioballs, biowheel, etc.) or a handful of used gravel from a healthy established tank and put it into your filter or a filter sock in your tank. This will “seed” your system with the bacteria needed and significantly speed up cycling for you.
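The dilution arithmetic above can be sketched in a few lines of Python (purely illustrative; the starting reading and the number of changes are made-up values):

```python
ammonia = 0.50  # measured ammonia, in PPM

# Each 50% water change halves the dissolved ammonia,
# assuming the replacement water contains none:
for change in range(1, 4):
    ammonia *= 0.5
    print(f"After change {change}: {ammonia:.3f} PPM")
```

Three back-to-back 50% changes take a 0.50 PPM reading down to roughly 0.06 PPM, which is why daily testing and water changes can keep a fish-in cycle survivable.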
How do I know when my tank is cycled? Your Ammonia levels will gradually give way to higher Nitrite levels. Nitrite will lower to zero and Nitrates will start to rise. When you consistently test zero for Ammonia & Nitrite and have increasing Nitrate, you have a cycled tank! (Woohoo!) Remember that each fish you add will add more Ammonia and that time should be allowed for the bacteria to catch up. Add slowly and responsibly and you will enjoy the hobby even more and your fish will thrive. Happy Fishkeeping!
Filed under Articles
Hats
Find the perfect Ohio State University hat today at BuckeyeCorner.com. Choose from a big selection of Ohio State hats like dad hats, fitted hats, and snapbacks to support your Buckeyes today! Find the latest styles like the Nike hats and Top of the World dad hats. Get ready for spring with OSU visors and mesh trucker hats! Find a hat for everyone in the family including women's hats, kid's hats and infant hats at BuckeyeCorner.com today!
Flynn’s ready for next stop on global tour
Steve Wilson
Kevin Flynn is ready to continue his globe-trotting after his success in spreading the golfing gospel in Libya.
The advanced PGA professional at Tournerbury Golf Club, Hayling Island, spent a week in Tripoli educating coaches recently as part of the Royal and Ancient’s mission to develop the game in all corners of the world.
Working with the European PGA, Flynn held seminars and specific sessions to educate the local coaches in the African country, with an exam at the end of the week to test their skills.
And even though there is just one grass course in the whole of the country, with the rest sand-based, Flynn believes there is a passion for the sport in Libya, with just investment needed.
He explained: ‘It was a fantastic experience. I was treated like royalty out there.
‘The coaches out there are of a really good standard.
‘They had a real passion and good knowledge, although they worked too much on swing shape and not how the swing shape would effect impact and ball flight.
‘My mission description for Libya was to set up a simple coach education programme to educate the coaches on the golf swing, help structure the PGA of Libya and also to get them ready to apply to the PGAs of Europe for their membership in due course.
‘But there were a lot of youngsters who were keen to play and it was also nice to see so many girls playing the sport out there.’
While Libya may not have the facilities just yet, they certainly have the potential as Flynn discovered on his trip to Tajoura Golf Club.
He said: ‘Everywhere was just sand – you are shattered after nine holes.
‘But the course itself was spectacular.
‘If they can get the funding to turn it into a grass course, it would be championship standard, no question.
‘The R&A have said they will spend £50,000 to help get it on track because they have little irrigation out there.
‘But it comes down to getting someone in the government interested in playing golf.
‘If you got one of Colonel Gadaffi’s sons playing, you would see the investment instantly.’
While some courses require a mat to play from the sand fairways, Flynn faced the difficulties at first hand.
He said: ‘On some of the courses, you don’t even have a mat.
‘When you get to 75 yards inwards, it’s hopeless trying to hit a delicate chip shot.
‘It’s also hard getting to grips with hitting a shot into a green.
‘It’s hard sand and just bounces through, but if it’s short, it just stops dead. So it’s difficult to play.’
While Flynn will take his level two exam for Plane Truth instruction next month in Houston, USA, he is set for another stint overseas later this year in the next phase of his quest.
He said: ‘The R & A invest in developing golf in other countries around the world.
‘My skill set is about teaching coaches and I could be heading to Argentina or Trinidad next.’ | 2024-02-18T01:26:19.665474 | https://example.com/article/7470 |
#ifndef ENTT_LOCATOR_LOCATOR_HPP
#define ENTT_LOCATOR_LOCATOR_HPP
#include <memory>
#include <utility>
#include "../config/config.h"
namespace entt {
/**
* @brief Service locator, nothing more.
*
* A service locator can be used to do what it promises: locate services.<br/>
* Usually service locators are tightly bound to the services they expose and
* thus it's hard to define a general purpose class to do that. This template
* based implementation tries to fill the gap and to get rid of the burden of
* defining a different specific locator for each application.
*
* @tparam Service Type of service managed by the locator.
*/
template<typename Service>
struct service_locator {
/*! @brief Type of service offered. */
using service_type = Service;
/*! @brief Default constructor, deleted on purpose. */
service_locator() = delete;
/*! @brief Default destructor, deleted on purpose. */
~service_locator() = delete;
/**
* @brief Tests if a valid service implementation is set.
* @return True if the service is set, false otherwise.
*/
[[nodiscard]] static bool empty() ENTT_NOEXCEPT {
return !static_cast<bool>(service);
}
/**
* @brief Returns a weak pointer to a service implementation, if any.
*
* Clients of a service shouldn't retain references to it. The recommended
* way is to retrieve the service implementation currently set each and
* every time the need of using it arises. Otherwise users can incur in
* unexpected behaviors.
*
* @return A reference to the service implementation currently set, if any.
*/
[[nodiscard]] static std::weak_ptr<Service> get() ENTT_NOEXCEPT {
return service;
}
/**
* @brief Returns a weak reference to a service implementation, if any.
*
* Clients of a service shouldn't retain references to it. The recommended
* way is to retrieve the service implementation currently set each and
* every time the need of using it arises. Otherwise users can incur in
* unexpected behaviors.
*
* @warning
* In case no service implementation has been set, a call to this function
* results in undefined behavior.
*
* @return A reference to the service implementation currently set, if any.
*/
[[nodiscard]] static Service & ref() ENTT_NOEXCEPT {
return *service;
}
/**
* @brief Sets or replaces a service.
* @tparam Impl Type of the new service to use.
* @tparam Args Types of arguments to use to construct the service.
* @param args Parameters to use to construct the service.
*/
template<typename Impl = Service, typename... Args>
static void set(Args &&... args) {
service = std::make_shared<Impl>(std::forward<Args>(args)...);
}
/**
* @brief Sets or replaces a service.
* @param ptr Service to use to replace the current one.
*/
static void set(std::shared_ptr<Service> ptr) {
ENTT_ASSERT(static_cast<bool>(ptr));
service = std::move(ptr);
}
/**
* @brief Resets a service.
*
* The service is no longer valid after a reset.
*/
static void reset() {
service.reset();
}
private:
inline static std::shared_ptr<Service> service = nullptr;
};
}
#endif
Q:
Correct way to define Python source code encoding
PEP 263 defines how to declare Python source code encoding.
Normally, the first 2 lines of a Python file should start with:
#!/usr/bin/python
# -*- coding: <encoding name> -*-
But I have seen a lot of files starting with:
#!/usr/bin/python
# -*- encoding: <encoding name> -*-
=> encoding instead of coding.
So what is the correct way of declaring the file encoding?
Is encoding permitted because the regex used is lazy? Or is it just another form of declaring the file encoding?
I'm asking this question because the PEP does not talk about encoding, it just talks about coding.
A:
Check the docs here:
"If a comment in the first or second line of the Python script matches the regular expression coding[=:]\s*([-\w.]+), this comment is processed as an encoding declaration"
"The recommended forms of this expression are
# -*- coding: <encoding-name> -*-
which is recognized also by GNU Emacs, and
# vim:fileencoding=<encoding-name>
which is recognized by Bram Moolenaar’s VIM."
So, you can put pretty much anything before the "coding" part, but stick to "coding" (with no prefix) if you want to be 100% python-docs-recommendation-compatible.
More specifically, you need to use whatever is recognized by Python and the specific editing software you use (if it needs/accepts anything at all). E.g. the coding form is recognized (out of the box) by GNU Emacs but not Vim (yes, without a universal agreement, it's essentially a turf war).
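You can verify the PEP 263 pattern yourself. Because Python searches for the pattern anywhere in the line, the "encoding" spelling is picked up too, since it contains "coding" as a substring:

```python
import re

# The declaration pattern from PEP 263:
pattern = re.compile(r"coding[=:]\s*([-\w.]+)")

# Both spellings match, because "encoding" contains "coding":
assert pattern.search("# -*- coding: utf-8 -*-").group(1) == "utf-8"
assert pattern.search("# -*- encoding: utf-8 -*-").group(1) == "utf-8"

# The vim form matches as well, via "fileencoding=":
assert pattern.search("# vim: set fileencoding=latin-1 :").group(1) == "latin-1"
```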
A:
PEP 263:
the first or second line must match
the regular
expression "coding[:=]\s*([-\w.]+)"
So, "encoding: UTF-8" matches.
PEP provides some examples:
#!/usr/bin/python
# vim: set fileencoding=<encoding name> :
# This Python file uses the following encoding: utf-8
import os, sys
A:
Just copy-paste the statement below at the top of your program. It will solve character encoding problems.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
Why Use Docker with Payara Platform? Benefits for your Business
There's a lot of noise revolving around Docker at the moment, and with the current industry focus on the cloud, there's a good reason for that.
I hope you already know why you might want to use the Payara Platform in your business, so in this blog I'm going to focus more on why you'd specifically want to use it with Docker in a "business" context. For a start, if you're unfamiliar with Docker, please refer back to our introductory blog, What is Docker and How is it Used with the Payara Platform, for a primer.
It's Lighter than a "Full-Fat" Virtual or Physical Machine
In a micro-services architecture, spinning up a whole virtual or physical machine to run a micro-service can be a bit overkill. As will have been mentioned in the "What is Docker?" blog, Docker containers are not full Virtual Machines, and so they can use fewer resources than VMs by being more efficient with the resources they are provided (even when idle, a full OS is typically doing quite a bit in the background). Using fewer resources can potentially help you save money by reducing the need to run multiple VMs in the cloud or on your servers.
It Can be Simple to Scale
Docker containers are lighter than full VMs so that you can run more of them per machine. As starting a container is similar to simply starting up a process, Docker can be easily taken advantage of to allow scalable architectures.
There are many solutions already using Docker, with some of the most well-known ones being OpenShift, Amazon ECS, Kubernetes, and Docker Swarm. Each of these solutions use Docker in some capacity and are touted for their scalability; with a bit of wrangling you can even get them to auto-scale to demand.
We should have a separate blog coming at a later date as to why you may want to use Kubernetes with Docker - keep your eyes peeled! In a single sentence though, combining Kubernetes & Docker with micro-services architectures allows for the creation of very easy to scale and update environments.
It's Almost Tailor-Made for Micro-Services
Those paying attention may have noticed that the previous two points about the Docker containers being both small and scalable - bywords of the micro-services architecture. If your application is designed in the micro-services style (key point! You can't jam a square peg into a round hole without ruining the peg or the hole), you can run each of your micro-services in separate Docker containers, allowing you to provide good separation of your micro-service containers. Running them in separate containers also lets you edit and scale your micro-services independently of one another.
You Can Essentially Put Your Setup into Version Control
In a certain sense, using the Payara Platform with Docker allows you to put your setup of Payara Platform (as in the Dockerfile) into version control. Using pre-boot & post-boot command files or the asadmin CLI, you can set up your Dockerfile to perform the setup of the Payara Platform instance and do any deployments upon creation of the Docker container, and this can afford easier development by multiple people, as well as allowing you to track and revert changes.
You can even extend this by having your Dockerfile download a particular version of the Payara Platform, affording you the ability to make upgrading the version of Payara Platform that you're using as easy as changing a number (assuming nothing breaks of course! But then you can just revert the change; you see? It's all connected...)
With the availability of Docker Hub and its integrations with GitHub & Bitbucket, it's not a stretch to make an argument for this.
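As a purely illustrative sketch of the idea (the image tag, file names and paths below are assumptions, not taken from any particular project), a version-controlled Dockerfile for Payara Micro might look like this:

```dockerfile
# Hypothetical example -- image tag, paths and file names are assumptions.
# Upgrading Payara becomes a one-line change to the FROM tag, tracked in version control.
FROM payara/micro:5.2021.1

# Bake the application and an asadmin post-boot command file into the image
COPY target/myapp.war /opt/payara/deployments/
COPY post-boot-commands.txt /opt/payara/config/

# Arguments passed to Payara Micro at container start
CMD ["--deploy", "/opt/payara/deployments/myapp.war", \
     "--postbootcommandfile", "/opt/payara/config/post-boot-commands.txt"]
```

Reverting a bad upgrade is then just reverting the commit that changed the `FROM` line.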
Wrapping Up
This was a very high-level view of why you may want to consider using Docker with Payara Micro for your next or current application - by all means, this is not an exhaustive list, just as much as it likely doesn't apply to all situations! It should hopefully though spark your interest and give you an idea of what you could possibly achieve using Docker with Payara Micro. If you're already using Payara Micro with Docker, we'd love to hear what you're doing with it in the comments below!
About Payara
Payara Server - Derived from GlassFish, with 24/7 Production Support. Payara Server is a drop-in replacement for GlassFish Server Open Source Edition, with the peace of mind of quarterly releases containing enhancements, bug fixes and patches.
Q:
Why use template<> without specialization?
I was reading the STL source code (which turned out to be both fun and very useful), and I came across this kind of thing:
//file backwards/auto_ptr.h, but also found in many others.
template<typename _Tp>
  class auto_ptr { /* ... */ };

//Question is about this:
template<>
  class auto_ptr<void> { /* ... */ };
Is the template<> part added to avoid class duplication?
A:
That's specialization. For example:
template <typename T>
struct is_void
{
static const bool value = false;
};
This template would have is_void<T>::value as false for any type T, including void itself, which is obviously incorrect. What you can do is use this syntax to say "I'm filling in T myself, and specializing":
template <> // I'm gonna make a type specifically
struct is_void<void> // and that type is void
{
static const bool value = true; // and now I can change it however I want
};
Now is_void<T>::value is false except when T is void. Then the compiler chooses the more specialized version, and we get true.
So, in your case, it has a generic implementation of auto_ptr. But that implementation has a problem with void. Specifically, a pointer to void cannot be dereferenced, since void is not an object type.
So what we can do is specialize the void variant of auto_ptr to remove those functions.
This subproject is one of many research subprojects utilizing the resources provided by a Center grant funded by NIH/NCRR. Primary support for the subproject and the subproject's principal investigator may have been provided by other sources, including other NIH sources. The Total Cost listed for the subproject likely represents the estimated amount of Center infrastructure utilized by the subproject, not direct funding provided by the NCRR grant to the subproject or subproject staff.

Pittsburgh Supercomputing Center (PSC) scientists have many years of experience teaching bioinformatics techniques to both academic and industrial researchers. In collaboration with minority-serving partner institutions, a special technology transfer and outreach program has been developed to help increase minority participation in biomedical research. This effort, funded through the National Institutes of Health Minority Access to Research Careers (MARC) program, includes:

* An intense two-week Summer Institute in Bioinformatics at the Pittsburgh Supercomputing Center, open to faculty and graduate students at minority-serving institutions. The session focuses on preparing faculty to teach a semester-long bioinformatics course organized around analyzing a gene or protein family.
* A seven-week research internship at the Pittsburgh Supercomputing Center for students who have completed bioinformatics training on their local campus.
* The development of a model curriculum for bioinformatics, including related course materials for the fields of Biology, Computational Science and Mathematics. Elements from these courses are incorporated into the workshop as they are completed.
* Assistance in establishing and strengthening the bioinformatics programs at two minority-serving campuses each year, including teaching assistance for newly established courses.
About the College
Learn More About A&S
Instrument Shop
The Instrument Shop builds, modifies, and repairs custom scientific instruments for the UA research community. The shop uses state-of-the-art machinery and maintains a broad inventory of materials, including stainless steel, aluminum, oxygen-free copper, brass, and other specialty metals and plastics.
Equipment
Precision milling, turning and drilling
CNC Haas VF5 Mill
CNC Super Mini Mill 2
Bridgeport mills
Drill presses
Tapping machine
CNC Haas TL3 turning lathe
Monarch lathes
Romi metric lathe
Sheet metal fabrication
Shears
Brakes
Roller
Sandblaster machine
Marvel horizontal saw
Software
CAD SolidWorks
Master CAM
Welding and brazing
Ultra-high vacuum
TIG
MIG
Oxygen-Acetylene
Other
H2O water jet
Woodcutting machines
Contact
For more information, contact David Key, Chief Mechanician, at (205) 348-3778 or dkey@ua.edu.
1. Introduction {#sec1-ijms-18-00172}
===============
Hepatitis C virus (HCV) is a positive-stranded RNA virus \~9.6 kb in length, and it belongs to the genus *Hepacivirus* within the *Flaviviridae* family \[[@B1-ijms-18-00172]\]. It is estimated that 170 million people worldwide are chronically infected with HCV and at risk of liver fibrosis, cirrhosis and hepatocellular carcinoma \[[@B2-ijms-18-00172]\]. The HCV genome encodes at least three structural proteins: core and two envelope glycoproteins (E1 and E2); and seven nonstructural (NS) proteins (p7, NS2, NS3, NS4A, NS4B, NS5A and NS5B).
Globally, at least seven HCV genotypes and 67 subtypes have been characterized. HCV isolates differ by \>15% among different genotypes or subtypes \[[@B3-ijms-18-00172]\]. HCV genotypes and subtypes are distributed differently among different areas of the world. In Japan, HCV genotype 1b is the major genotype (70%), followed by HCV genotype 2a (20%) and 2b (10%) \[[@B4-ijms-18-00172],[@B5-ijms-18-00172]\]. Several methods have been established to determine HCV genotypes, such as sequence analysis \[[@B6-ijms-18-00172]\], restriction fragment length polymorphism \[[@B7-ijms-18-00172]\], hybridization of PCR products with specific probes \[[@B8-ijms-18-00172]\] and next-generation sequencing \[[@B5-ijms-18-00172]\].
Accurate determination of the HCV genotypes is essential for the selection of proper antiviral drugs and treatment regimens in HCV-infected individuals \[[@B9-ijms-18-00172]\]. Treatment response also depends on HCV subgenotypes \[[@B10-ijms-18-00172]\]. In Japan, the choice of antiviral regimen against HCV is generally based on the results of HCV serotyping, as the national health insurance system currently approves only HCV serotyping methods \[[@B11-ijms-18-00172]\]. Although previous reports indicate that the detection rate of serotyping is generally high and the risk of misdiagnosis is considered rare \[[@B11-ijms-18-00172]\], we have experienced a few cases with misidentified genotypes in the treatment of hepatitis C. It is important to elucidate the mechanism responsible for the discrepancy between HCV genotyping based on direct sequencing of the HCV genome and HCV serotyping by enzyme immunoassay (EIA).
In the present study, we described cases with discrepant results between HCV genotyping and serotyping assays that were based on the 5′-untranslated region (5′-UTR) and NS4 regions, respectively. Sanger sequencing was performed for the HCV core and NS4 regions, and deep sequencing was conducted for HCV NS4 regions to study the validity of the serotyping and genotyping results.
2. Results {#sec2-ijms-18-00172}
==========
2.1. Patients' Characteristics {#sec2dot1-ijms-18-00172}
------------------------------
In the present study, the patients consisted of 11 males and seven females with a mean age of 62.2 years ([Table 1](#ijms-18-00172-t001){ref-type="table"}). Among these 18 patients, histories of multiple intravenous injections, tattooing and blood transfusion were observed in 4, 5 and 2 patients, respectively. Family histories of liver diseases existed in four patients, of whom three were anti-HCV positive. Only one patient (No. 18) had very low HCV RNA (\<1.2 logIU/mL).
2.2. HCV Genotyping and HCV Serotyping {#sec2dot2-ijms-18-00172}
--------------------------------------
Among 18 HCV isolates, HCV genotyping assays judged 4, 5 and 9 isolates as HCV genotypes 1b, 2a and 2b, respectively ([Table 2](#ijms-18-00172-t002){ref-type="table"}). Results of HCV serotyping are also shown in [Table 2](#ijms-18-00172-t002){ref-type="table"}. Among four HCV genotype 1b isolates, 2, 1 and 1 were judged as HCV serotype 2, not determined and mixed, respectively. Among five HCV genotype 2a isolates, 3, 1 and 1 were judged as HCV serotype 1, not determined and mixed, respectively. Among nine HCV genotype 2b isolates, six and two were judged as HCV serotype 1 and not determined, respectively. For No. 17, the genotype could not be determined by the type-specific PCR method in the core region \[[@B12-ijms-18-00172]\], and the sample was thus included in the study. This sample was determined as serotype 2, consistent with the sequencing result of genotype 2b.
2.3. Sanger Sequencing Results of Clones from HCV Core and NS4 Regions {#sec2dot3-ijms-18-00172}
----------------------------------------------------------------------
[Table 2](#ijms-18-00172-t002){ref-type="table"} shows the Sanger sequencing results in the core and NS4 regions, respectively. In 17 of 18 samples, both results of cloned sequencing in the core and NS4 regions were consistent with the HCV genotyping by 5′-UTR, but not with those of HCV serotyping. PCR amplification for one isolate (No. 18) could not be performed for core and NS4 regions due to the low titer of HCV RNA. Nucleotide sequences of the HCV core region obtained by sequencing were compared with those of reference sequences ([Table 3](#ijms-18-00172-t003){ref-type="table"}). Nucleotide sequences and amino acid sequences of HCV NS4 regions were also compared with those of reference sequences ([Table 4](#ijms-18-00172-t004){ref-type="table"} and [Table 5](#ijms-18-00172-t005){ref-type="table"}). Sequence identity was generally high within the same subgenotype and higher in the core region than in the NS4 region.
2.4. Cloning Analysis of NS4 Epitope Regions {#sec2dot4-ijms-18-00172}
--------------------------------------------
Epitope regions of the NS4 antigen were described previously \[[@B14-ijms-18-00172]\]. To examine whether variations in the NS4 epitope regions could affect the enzyme immunoassay results, the amino acids of the epitope regions in each clone were compared with those of group-specific peptides 1 and 2 ([Table 5](#ijms-18-00172-t005){ref-type="table"}B,C). Except for a few variations among the amino acid sequences of the five clones, most of the sequences were identical within each sample. In comparison with the reference peptide, there were several amino acid variations in each sample except in No. 8. However, only a few cases had the same amino acid as the reference of a different genotype (underlined in [Table 5](#ijms-18-00172-t005){ref-type="table"}B,C), and each sequence was much closer to the reference peptide of the same genotype.
2.5. Results of HCV Genotyping by Deep Sequencing in NS4 Regions {#sec2dot5-ijms-18-00172}
----------------------------------------------------------------
Genotyping results investigated by deep sequencing of HCV NS4 regions using the MiSeq Illumina sequencing method are shown in [Figure 1](#ijms-18-00172-f001){ref-type="fig"} and [Table 6](#ijms-18-00172-t006){ref-type="table"}. For all 17 samples, more than 99% of the sequence reads were assigned to the same genotype as obtained by the other genotyping methods. The results were quite similar when the nano kit and the standard kit were compared on the same samples (Nos. 2, 7 and 17) (data not shown). Among the 17 samples, one genotype 2a sample (No. 5) and one genotype 2b sample (No. 13) had minor populations assigned to genotypes different from the major populations (genotypes 2b and 2a, respectively) and were considered mixed infections. The prevalence of these minor populations was between 0.5% and 1% (457 reads (0.58%) for Sample 5 and 5010 reads (0.57%) for Sample 13). These minor populations are unlikely to be related to the serotyping results, because serotype 1 was detected in both samples ([Table 2](#ijms-18-00172-t002){ref-type="table"}).
3. Discussion {#sec3-ijms-18-00172}
=============
The distribution of HCV genotypes differs among individuals with different infection routes, such as blood transfusion, intravenous drug use and tattooing. In Japan, HCV genotype 1b had been associated with blood transfusion, whereas genotypes 2a and 2b were associated with injections or tattooing \[[@B15-ijms-18-00172]\]. In the present study, 14 of 18 patients were confirmed as HCV genotype 2, supporting a previous report that the NS4-based serotyping assay had lower concordance for genotype 2b specimens \[[@B16-ijms-18-00172]\]. Among these 14 patients, seven (50%) had a history of intravenous injection or tattooing. These patients are considered to be individuals with multiple HCV exposures, and some of them might have had mixed infections with different HCV genotypes and subtypes.
In the present study, cloned Sanger sequencing of the core and NS4 regions of the HCV genome detected the same genotype as determined by the 5′-UTR. No case was detected as a mixed infection. As the possibility of the coexistence of a few minor variants could not be completely ruled out because only five clones were analyzed in this study, we further examined this possibility by the deep sequencing method. Deep sequencing of the HCV NS4 region produced a large sequence depth, ranging from 45 to 993 thousand reads per sample. In all 17 cases of the present study, more than 99% of the reads were assigned to the same genotype as determined by other genotyping methods. Furthermore, in two of the 17 samples, a minor population of sequences with a different HCV subgenotype was detected, suggesting the possibility of mixed infection in these samples. Minor variants present at frequencies between 0.5% and 1% are difficult to detect by the standard cloning method; deep sequencing could rapidly and easily detect such variants thanks to its large sequence depth. As for sequencing error, our study using the JFH1 clone revealed that up to 0.4% of reads were assigned to an incorrect genotype (2b), consistent with a previous report describing a sequencing error rate below 0.4% on the Illumina platform \[[@B17-ijms-18-00172]\]. However, Thomson et al. \[[@B18-ijms-18-00172]\] suggested that the detection of minority variants is less reliable at lower ratios. Cases of mixed infection with different HCV genotypes, as well as recombinant forms of HCV, are very rare, even in highly exposed groups such as intravenous drug users (IVDU) \[[@B19-ijms-18-00172]\].
Among the 18 cases, 11 were confirmed to show HCV serotyping results different from the genotyping results: two cases showed HCV genotype 1b with HCV serotype 2, while nine cases showed HCV genotype 2 with HCV serotype 1. Compared to the core region, nucleotide sequence variations in the NS4 region were more common between samples of the same genotype ([Table 3](#ijms-18-00172-t003){ref-type="table"} and [Table 4](#ijms-18-00172-t004){ref-type="table"}). We examined the possibility that variations in the amino acid sequence could affect the serotyping results. Both the consensus sequences ([Table 5](#ijms-18-00172-t005){ref-type="table"}A) and the cloning analysis of the NS4 region ([Table 5](#ijms-18-00172-t005){ref-type="table"}B,C) showed that although several amino acid variations existed relative to the HCV genotype-specific reference peptide, each sequence was much closer to the reference peptide of the same genotype. It should be noted that the error rate of Taq DNA polymerase is approximately 2 × 10^−5^, from which we estimate that one nucleotide error could appear in 50% of the DNA molecules amplified by nested PCR. Furthermore, comparative analysis with a control group will be needed to better understand the mechanism of the discrepancy.
Among the 18 HCV specimens serotyped in our series, four samples (22.2%) were defined as "not determined". HCV serotyping is an indirect typing method based on the production of type-specific antibodies by the infected host; its sensitivity therefore depends on the immune response of the HCV-infected host. Thus, the possibility that some of the patients with HCV infection did not produce antibody against the NS4 protein should be considered. In the HCV serotyping assay, two of the 18 patients (11.1%) had mixed reactions. Multiple exposures to different HCV strains could be one possible mechanism for these findings, although the possibility of a non-specific reaction also exists. For such cases, HCV genotype testing can be a good alternative to HCV serotyping.
A major limitation of our study is that we could not describe the sensitivity and concordance between HCV serotyping and genotyping data in the entire cohort due to the lack of a comparison group. Previous studies \[[@B11-ijms-18-00172],[@B20-ijms-18-00172]\] using the same NS4 antigen (C14) as ours showed that the sensitivity of serotypes 1 and 2 is 95.8%--100%, with no discordant case. We repeated the initial serotyping assay using the same antigen with the newly updated chemiluminescent EIA system (HISCL HCV Gr reagents, Sysmex, Japan). According to the manufacturer's instruction, the sensitivity of the new system is 84.4%, and the serotyping results showed 100% concordance with those of the older system. Our re-serotyping analysis showed that the new system could correctly detect type 2-specific antibody in six samples (Nos. 6, 8, 10, 12, 17 and 18) and type 1 antibody in two samples (Nos. 2 and 4). However, it could not detect any antibodies in eight cases, showed an incorrect serotype in one case (No. 14) and a mixed serotype in another (No. 11). These results suggest that comparing different serotyping systems would be useful to understand the discrepant cases further.
After the identification of serotype-specific linear epitopes in the core and NS4 region \[[@B11-ijms-18-00172],[@B14-ijms-18-00172]\], several serotyping systems have been reported using peptides corresponding to these regions. Besides the C14 serotyping system we used, the MUREX HCV Serotyping 1--6 assay (Abbott Diagnostics) has been developed \[[@B21-ijms-18-00172]\] using type-specific NS4 antigen corresponding to HCV types 1--6. Prescott et al. \[[@B22-ijms-18-00172]\] reported that 15 discrepant cases (151/166) could be detected using the old version of this assay (HC02). According to the instructions, the concordance (96.12%) of the new version (2G26) becomes higher than an earlier version (HC02) (90.37%), while the sensitivity of serotypes 1 and 2 is 76.8% and 75.5% in the new version. The RIBA HCV serotyping assay (Chiron Corporation) \[[@B16-ijms-18-00172],[@B23-ijms-18-00172],[@B24-ijms-18-00172]\] detected anti-HCV antibodies against five synthetic peptides from the NS4 region and three peptides from the core region, and it has been reported that the sensitivity/concordance of serotypes 1 and 2 is 84.7%--96.5%/96.2%--100% and 82.4%--100%/93.4%--100%, respectively. Taken together, serotyping generally shows high concordance with genotyping. However, attention should be paid to the fact that there could occur cases with no response to the antigen and also with discrepancy even if they can be regarded as uncommon events based on previous studies.
Only sequence analysis of specific regions of the HCV genome that are predictive of HCV genotype is completely reliable for HCV genotyping \[[@B25-ijms-18-00172]\]. Several genotyping methodologies are available, such as sequencing, primer-specific PCR, real-time PCR and the line probe assay. Chantratita et al. \[[@B26-ijms-18-00172]\] compared two PCR-based line probe assays and recently reported that the six HCV genotyping 9G test showed overall sensitivity and specificity higher than 92.5% and 99.4%, respectively. On the other hand, PCR-based methods may not yield correct results in cases with low-level viremia, such as No. 18 in this study, or if serum samples have not been stored correctly \[[@B27-ijms-18-00172]\]. Meanwhile, one sample (No. 17) could not be genotyped by the HCV genotype-specific PCR method. In this case, even small sequence variations in the primer region could affect the results, given the high sequence similarity in this region ([Table 3](#ijms-18-00172-t003){ref-type="table"}).
Serological methods are based on the detection of antibodies against HCV serotype-specific epitopes and are easier and faster for determining HCV serotypes. It is possible that HCV serotyping may allow the determination of HCV types of both past and present infection. Moreover, its high performance on samples with low viral load offers an obvious advantage over the genotyping methods. The cost of the HCV serotyping method is less than that of HCV genotyping. ELISA of the HCV serotyping method is seldom associated with the risk of contamination. Thus, HCV serotyping is particularly suitable for epidemiologic studies.
Treatment outcomes of the patients are shown in [Table 7](#ijms-18-00172-t007){ref-type="table"}. In two of four patients with HCV genotype 1b, treatment with daclatasvir plus asunaprevir for 24 weeks led to a sustained virological response at 24 weeks after the end of treatment (SVR24). In four of five patients with HCV genotype 2a, treatment with sofosbuvir plus ribavirin was performed for 12 weeks, and all four patients achieved SVR24. Among the nine patients with HCV genotype 2b, five were treated with interferon-free or interferon-containing regimens and achieved SVR24. Although six patients (Nos. 1, 2, 3, 4, 9, 16) had been treated with interferon-containing regimens without DAAs based on the serotyping results, some (Nos. 2, 4, 9) relapsed after completing the treatment course, and the others (Nos. 1, 3, 16) had no response.
Better treatment outcome can be obtained when the treatment is based on HCV genotyping results. For example, Patient No. 2 had been a candidate for sofosbuvir and ribavirin regimen based on the serotyping results (serotype 2), but the patient was treated with daclatasvir and asunaprevir regimen after getting the genotyping result of genotype 1b and achieved SVR24. Sohda et al. \[[@B28-ijms-18-00172]\] reported the non-response to daclatasvir and asunaprevir therapy in patients coinfected with hepatitis C virus genotypes 1 and 2.
4. Patients and Methods {#sec4-ijms-18-00172}
=======================
4.1. Study Population {#sec4dot1-ijms-18-00172}
---------------------
Sera were obtained from 18 Japanese patients (11 males and 7 females) with chronic HCV infection and were stored at −20 °C until testing at Chiba University, Graduate School of Medicine. These patients had been previously serotyped when the direct-acting antiviral agents (DAAs) were not available. They were genotyped at the time of DAA treatment; however, discrepant results between HCV genotyping and serotyping were observed in all of these samples. All specimens were HCV RNA-positive by the Taqman reverse transcriptase-polymerase chain reaction (RT-PCR) assay (Roche Diagnostics, Tokyo, Japan), with levels ranging from 1.2 to 7.5 log IU/mL, and were negative for HBsAg or anti-HIV by ELISA. There was no sign of hepatocellular carcinoma in any of the patients, although Patient Nos. 7 and 14 had histories of cirrhosis. More clinical information is described in [Table 1](#ijms-18-00172-t001){ref-type="table"}. This study was approved by ethics committee of Chiba University, Graduate School of Medicine (No. 415/1753/2153). Participation in the study was posted at our institutions.
4.2. HCV Genotyping and Serotyping Assays {#sec4dot2-ijms-18-00172}
-----------------------------------------
HCV genotyping was performed by RT-PCR followed by Sanger direct sequencing at 5′-UTR of the HCV genome, except Sample No. 17, which was determined by the type-specific PCR method in the core region \[[@B12-ijms-18-00172]\]. HCV serotyping was determined by detecting antibodies against group-specific recombinant protein for serotypes 1 and 2 in the putative HCV NS4 protein region by an EIA that is commonly used in Japan \[[@B11-ijms-18-00172]\]. According to this assay, HCV serotypes 1 and 2 correspond to HCV genotypes 1a/1b and 2a/2b, respectively. The results were described as "not determined" when no antibody could be detected and as "mixed" when both antigens reacted with serum samples.
4.3. HCV RNA Extraction and RT-PCR {#sec4dot3-ijms-18-00172}
----------------------------------
Briefly, total RNA was extracted from 200 μL of sera by the High Pure Viral RNA Kit (Roche, Mannheim, Germany). cDNA was synthesized from viral RNA using SuperScript III First-Strand Synthesis SuperMix (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions. The presence of HCV RNA in serum was confirmed by nested RT-PCR using primers of the 5′-UTR-core; Sc2 (sense; 5′-GGGAGGTCTCGTAGACCGTGCACCATG-3′, nucleotide position 318--344) and Ac2 (antisense; 5′-GAGMGGKATRTACCCCATGAGRTCGGC-3′, 758--732) \[[@B12-ijms-18-00172]\] and primers of NS4 regions; 5668 (sense; 5′-ATGCATGTCRGCTGAYCTGGA-3′, 5282--5302) and 007 (antisense; 5′-AACTCGAGTATCCCACTGATGAAGTTCCACAT-3′, 5665--5634) \[[@B22-ijms-18-00172]\] in the first round. Two microliters of cDNA were amplified for 40 cycles with the following parameters: a preliminary 20 cycles of amplification at 94 °C for 1 min (denaturing), 45 °C for 1 min (annealing) and 72 °C for 1 min (extension), followed by 20 additional cycles at 94 °C for 1 min, 55 °C for 1 min and 72 °C for 1 min using the HotStarTaq Master Mix Kit (QIAGEN, Hilden, Germany). For second-round PCR, 2 μL of first-round PCR product were amplified for 35 cycles; each cycle consisted of 94 °C for 1 min, 55 °C for 1 min and 72 °C for 1 min using HotStarTaq Master Mix Kit (QIAGEN) with primers of the 5′-UTR-core; S7 (sense; 5′-AGACCGTGCACCATGAGCAC-3′, 330--349) and A5 (antisense; 5′-TACGCCGGGGGTCAKTRGGGCCCCA-3′, 684--660) \[[@B12-ijms-18-00172]\] or those of NS4 regions; 865 (sense; 5′-CTGGAGGTTATCACNAGCACNTGG-3′, 5298--5321) and 220 (antisense; 5′-CACATGTGCTTCGCCCAGAA-3′, 5638--5619) \[[@B22-ijms-18-00172]\]. Nucleotide positions are described according to the sequence H77 (Accession No. AF009606). Two rounds of amplification were performed on TaKaRa PCR Thermal Cycler Dice (Takara, Otsu, Japan).
4.4. Cloning and Sanger Sequencing {#sec4dot4-ijms-18-00172}
----------------------------------
Cloning of each PCR product into the TOPO vector (Invitrogen) was carried out using the TOPO TA Cloning Kit (Invitrogen) according to the manufacturer's instructions. Up to 5 different clones were picked up for sequencing. Inserted fragments were confirmed by colony PCR using M13 primers (sense, 5′-GTAAAACGACGGCCAG-3′, and antisense, 5′-CAGGAAACAGCTATGAC-3′) with the following conditions: amplification for 30 cycles at 94 °C for 1 min, 55 °C for 1 min and 72 °C for 1 min.
PCR products were purified by the QIAquick PCR Purification Kit (QIAGEN), and submitted to Sanger DNA sequencing using an ABI3730XL DNA analyzer (Applied Biosystems, Waltham, MA, USA). Obtained sequences were aligned and phylogenetic analysis was performed using Genetyx software (Genetyx Corp., Tokyo, Japan). Reference sequences for each genotype were used as follows: AF009606 (H77) for genotype 1a; D10934 (HC-C2), D90208 (HCV-J) and AJ238799 (Con1) for HCV genotype 1b; AB047639 (JFH-1) and D00944 (HC-J6) for HCV genotype 2a; and AY232731 (MD2b1-2) and D10988 (HC-J8) for HCV genotype 2b \[[@B13-ijms-18-00172]\]. The consensus sequence for each sample was determined from the cloned sequence data. The sequences determined in this study have been deposited in the GenBank database (Accession Nos. LC131493--LC131642, LC191872--LC191891).
4.5. Deep Sequencing {#sec4dot5-ijms-18-00172}
--------------------
The MiSeq platform (Illumina, San Diego, CA, USA) was used to analyze the HCV NS4 region, which was amplified by nested PCR using adaptor primers (sense, 5′-TCGTCGGCAGCGTCAGATGTGTATAAGAGACAGCTGGAGGTTATCACNAGCACNTGG-3′, and antisense, 5′-GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAGCACATGTGCTTCGCCCAGAA-3′) with a KAPA HiFi HotStart ReadyMix PCR kit (KAPA BIO, Boston, MA, USA). PCR for the first round was 5 min at 95 °C; followed by 30 cycles for 30 s at 95 °C, 30 s at 65 °C and 30 s at 72 °C. Purified PCR products were amplified using different sets of index primers in the Nextera XT Index kit (Illumina) for each sample. The PCR conditions for the amplification were 95 °C for 3 min, followed by 8 cycles of 95 °C for 30 s, 55 °C for 30 s and 72 °C for 30 s with a final extension of 72 °C for 5 min. After purification, the products were quantified using the Quant-iT PicoGreen double-stranded DNA (dsDNA) Reagent and Kit (Invitrogen) according to the manufacturer's instructions. Paired-end sequencing was conducted on the Illumina MiSeq platform using a MiSeq Reagent Kit Nano v2 500 cycles (Illumina) according to the manufacturer's protocol. A MiSeq v2 Reagent Kit 500 cycles (Illumina) was used in Nos. 13, 14 and JFH-1 for control.
4.6. Data Analysis of Deep Sequencing {#sec4dot6-ijms-18-00172}
-------------------------------------
After filtering by PANDAseq (v2.8) with the parameters "-L 300 -L 200" \[[@B29-ijms-18-00172]\], an average of 70,157 reads was produced from each sample by the Nano kit while an average of 937,428 reads was produced by the standard kit. Each read was aligned against reference HCV genomes (H77, HCV-J, HC-J6 and HC-J8) using BLASTN (v2.2.28+) with the parameters "-evalue 1e-4--task blastn" \[[@B30-ijms-18-00172]\]. Reference sequences with non-identical genotypes to samples were also included (D17763 (NZL1) for HCV genotype 3a, Y11604 (ED43) for 4a, Y13184 (EUH1480) for 5a, Y12083 (EUHK2) for 6a and EF108306 (QC69) for 7a, respectively) \[[@B13-ijms-18-00172]\]. Mixed infection was identified by the generation of multiple contigs against two or more reference genomes. For control experiments, HCV RNA was prepared from conditioned medium on Huh7 cells transfected with HCV JFH1 \[[@B1-ijms-18-00172]\] and was subjected to deep sequencing. Among 994,646 hit reads to the references, 990,097 reads (99.54%) were correctly assigned to the original JFH1 sequences. When HC-J6 was used as a reference of genotype 2a, 988,099 (99.42%) were correctly assigned to genotype 2a, while 4058 reads (0.41%) were assigned as genotype 2b, which were regarded as sequencing errors. Based on these data, the threshold for the detection of minor populations with different genotypes was set at 0.5%.
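The mixed-infection call described above (count each read's best-hit genotype, then apply the 0.5% error-rate threshold) can be sketched as follows. This is an illustration, not the paper's actual pipeline: the function and variable names are hypothetical, and in practice the per-read genotypes would come from parsing the BLASTN output.

```python
from collections import Counter

def call_genotypes(read_genotypes, threshold=0.005):
    """Return genotypes (with read fractions) present above the threshold.

    read_genotypes: iterable of best-hit genotype labels, one per read.
    threshold: minimum read fraction, set above the ~0.4% sequencing
    error rate estimated from the JFH1 control run.
    """
    counts = Counter(read_genotypes)
    total = sum(counts.values())
    # Keep only genotypes whose read fraction clears the threshold;
    # more than one surviving genotype suggests a mixed infection.
    return {gt: n / total for gt, n in counts.items() if n / total >= threshold}

# Example in the spirit of Sample 13: 5010 of 880,000 reads (~0.57%)
# assigned to a second subgenotype, so both 2b and 2a are reported.
reads = ["2b"] * 874990 + ["2a"] * 5010
print(call_genotypes(reads))
```

A minor population at 0.1%, by contrast, would fall below the threshold and be treated as sequencing error.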
5. Conclusions {#sec5-ijms-18-00172}
==============
Although the HCV serotyping method is widely applicable, especially when many samples are to be tested, there are some discrepant cases as well. Currently, several serotyping systems with high accuracy are available. Using the latest system and combining different methods should minimize the risk of discrepancy. Rapid genotyping systems can be a better alternative to serotyping, although they are still costly. The present study also highlighted ultra-deep sequencing, which may be useful for the diagnosis of patients with mixed HCV genotypes. Until effective pangenotypic antiviral regimens for HCV become available, the HCV genotype should be correctly determined before antiviral therapy, especially for regimens whose efficacy depends on HCV genotype.
The authors thank Kohji Takahashi, Yuki Haga, Reina Sasaki, Akinobu Tawada, Makoto Arai, Takayuki Ishige, Kouichi Kitamura, Sakae Itoga and Fumio Nomura for valuable discussions. We also thank Takaji Wakita for providing us with the JFH-1 clone. This work was partly supported by the grants from the Chugai Pharmaceutics (Tatsuo Kanda), the Abbvie (Tatsuo Kanda), and JSPS KAKENHI Grant Number JP26860491 (Shingo Nakamoto).
Nan Nwe Win, Shingo Nakamoto, Tatsuo Kanda, Hiroki Takahashi, Tohru Gonoi and Hiroshi Shirasawa conceived of and designed the experiments. Nan Nwe Win, Shingo Nakamoto and Azusa Takahashi-Nakaguchi performed the experiments. Nan Nwe Win, Shingo Nakamoto, Shuang Wu and Hiroki Takahashi analyzed the data. Tatsuo Kanda, Shin Yasui, Masato Nakamura, Fumio Imazeki, Shigeru Mikami and Osamu Yokosuka saw the patients. Tatsuo Kanda, Shingo Nakamoto, Hiroki Takahashi, Tohru Gonoi, Fumio Imazeki, Osamu Yokosuka and Hiroshi Shirasawa contributed reagents/materials/analysis tools. Nan Nwe Win, Shingo Nakamoto, Tatsuo Kanda, Hiroki Takahashi, Tohru Gonoi and Hiroshi Shirasawa wrote the manuscript. All authors approved the manuscript.
Tatsuo Kanda received research grants from Merck Sharp and Dohme (MSD), Chugai Pharm and Abbvie. The other authors declare no conflict of interest.
HCV: Hepatitis C virus
NS: Nonstructural
5′-UTR: 5′-untranslated region
RNA: Ribonucleic acid
PCR: Polymerase chain reaction
EIA: Enzyme immunoassay
IVDU: Intravenous drug user
ALT: Alanine transaminase
PLT: Platelet counts
ALB: Albumin
AFP: α-fetoprotein
GT: Genotype
SVR: Sustained virological response
HBsAg: Hepatitis B surface antigen
Anti-HIV: HIV antibody
ELISA: Enzyme-linked immunosorbent assay
cDNA: Complementary deoxyribonucleic acid
{#ijms-18-00172-f001}
ijms-18-00172-t001_Table 1
######
Background of patients enrolled in the present study.
No. Age/Sex Risk Factors Family Histories of Liver Diseases HCV RNA (Log IU/mL) ALT (IU/L) PLT (10^4^/μL) ALB (g/dL) Total Bilirubin (mg/dL) AFP (ng/mL)
------ --------- ------------------- ------------------------------------ --------------------- ------------ ---------------- ------------ ------------------------- -------------
1\. 56/F Unknown − 6.6 52 19.6 4.1 0.5 20.2
2\. 71/M None \+ 7.5 61 12 4.6 1.3 2.3
3\. 66/M Tattoo − 5.9 52 9.3 3.8 1.0 8.1
4\. 51/M Blood transfusion − 6.7 53 26.2 4.5 0.5 6.1
5\. 61/F None \+ \>5.9 14 18.7 4.1 0.9 3.4
6\. 81/M None \+ 6 253 12.3 4.3 0.6 10.4
7\. 74/M IVDU − 6.4 62 9.1 3.7 0.9 11.5
8\. 57/F Unknown − \>5.9 17 21.3 4.4 0.5 6
9\. 50/M IVDU − 7.2 24 15.4 4.3 0.7 3
10\. 47/M Tattoo and IVDU − 6.8 52 18.5 4 0.5 2.6
11\. 58/F Tattoo − \>5.9 22 19.3 4.8 1.1 2.7
12\. 65/F Tattoo − 7 18 22.3 4.7 0.7 2.2
13\. 71/F None − 6.2 16 17.3 4.2 0.6 2.6
14\. 62/M Blood transfusion − 6.4 21 15.7 4.6 0.4 5.1
15\. 58/M Unknown − NA 23 13.9 3.4 0.3 3.4
16\. 62/F None \+ 6 38 19.5 4.3 0.6 2.5
17\. 57/M Tattoo − 4.4 130 13.7 4.1 0.7 6
18\. 72/M IVDU − \<1.2 21 13.2 4.1 0.5 3.6
No., patient number; HCV, hepatitis C virus; ALT, alanine transaminase; PLT, platelet counts; ALB, albumin; AFP, α-fetoprotein; M, male; F, female; −, negative; +, positive; NA, not available.
ijms-18-00172-t002_Table 2
######
Comparison between the results of serotyping, genotyping, Sanger sequencing and deep sequencing methods for discrepant samples.
No. Serotyping Genotyping (5′-UTR) Genotyping by Cloned Sanger Sequencing Genotyping by Deep Sequencing (NS4)
------ ------------ --------------------- ---------------------------------------- ------------------------------------- ------------
1\. 2 1b 1b 1b 1b
2\. 2 1b 1b 1b 1b
3\. ND 1b 1b 1b 1b
4\. Mixed 1b 1b 1b 1b
5\. 1 2a 2a 2a 2a + 2b \*
6\. 1 2a 2a 2a 2a
7\. 1 2a 2a 2a 2a
8\. ND 2a 2a 2a 2a
9\. 1 2b 2b 2b 2b
10\. 1 2b 2b 2b 2b
11\. 1 2b 2b 2b 2b
12\. 1 2b 2b 2b 2b
13\. 1 2b 2b 2b 2a \* + 2b
14\. 1 2b 2b 2b 2b
15\. ND 2b 2b 2b 2b
16\. ND 2b 2b 2b 2b
17\. 2 ND \*\* 2b 2b 2b
18\. Mixed 2a ND ND ND
ND, not determined; Mixed, co-reaction to serotypes 1 and 2. \* Minor different genotypes; \*\* this sample was included in the study because genotyping by the type-specific PCR method in the core region failed, but it was later found to be genotype 2b by sequence-based genotyping.
######
Comparison between HCV core nucleotide sequences obtained from discrepant samples and reference sequences (Ref.) specific for each genotype by cloned Sanger sequencing. (**A**) Identity between core nucleotide sequences of genotype 1b obtained by sequencing (5 clones) and reference sequences of genotype 1 (1a and 1b); (**B**) Identity between core nucleotide sequences of genotype 2a obtained by sequencing (5 clones) and reference sequences of genotype 2a; (**C**) Identity between core nucleotide sequences of genotype 2b obtained by sequencing (5 clones) and reference sequences of genotype 2b.
ijms-18-00172-t003a_Table 3
######
(**A**)
Ref. H77 HC-C2 HCV-J Con1 No. 1 No. 2 No. 3 No. 4
------- ------------------- ------------------- ------------------- ------------------- ------------------- ------------------- ------------------- -------------------
H77 100% (309/309)
HC-C2 92.6% (286/309) 100% (309/309)
HCV-J 92.2% (285/309) 97.1% (300/309) 100% (309/309)
Con1 92.2% (285/309) 97.4% (301/309) 97.1% (300/309) 100% (309/309)
No. 1 91.8% (1419/1545) 97.4% (1505/1545) 96.7% (1494/1545) 97.8% (1511/1545) 98.9% (4585/4635)
No. 2 90.8% (1403/1545) 96.6% (1493/1545) 97.0% (1498/1545) 96.0% (1483/1545) 95.9% (7405/7725) 99.8% (4627/4635)
No. 3 91.8% (1419/1545) 97.7% (1509/1545) 98.1% (1516/1545) 97.2% (1501/1545) 97.0% (7495/7725) 96.9% (7485/7725) 99.5% (4613/4635)
No. 4 92.3% (1426/1545) 96.6% (1492/1545) 97.5% (1506/1545) 97.2% (1502/1545) 96.8% (7480/7725) 96.4% (7450/7725) 97.4% (7525/7725) 99.7% (4619/4635)
Reference sequences for each genotype were used as follows: H77 for HCV genotype 1a; HC-C2, HCV-J and Con1 for HCV genotype 1b; JFH-1 and HC-J6 for HCV genotype 2a; MD2b1-2 and HC-J8 for HCV genotype 2b \[[@B13-ijms-18-00172]\].
ijms-18-00172-t003b_Table 3
######
(**B**)
Ref. JFH-1 HC-J6 No. 5 No. 6 No. 7 No. 8
------- ------------------- ------------------- ------------------- ------------------- ------------------- -------------------
JFH-1 100% (309/309)
HC-J6 94.5% (292/309) 100% (309/309)
No. 5 94.5% (1460/1545) 96.1% (1486/1545) 98.8% (4581/4635)
No. 6 94.6% (1461/1545) 96.8% (1496/1545) 95.3% (7365/7725) 99.7% (4621/4635)
No. 7 93.3% (1442/1545) 95.6% (1477/1545) 95.0% (7339/7725) 94.4% (7290/7725) 97.4% (4515/4635)
No. 8 94.8% (1465/1545) 96.8% (1496/1545) 96.1% (7424/7725) 95.9% (7405/7725) 94.8% (7324/7725) 99.4% (4605/4635)
ijms-18-00172-t003c_Table 3
######
(**C**)
Ref. HC-J8 MD2b1-2 No. 9 No. 10 No. 11 No. 12 No. 13 No. 14 No. 15 No. 16 No. 17
--------- ------------------- ------------------- ------------------- ------------------- ------------------- ------------------- ------------------- ------------------- ------------------- ------------------- -------------------
HC-J8 100% (309/309)
MD2b1-2 97.4% (301/309) 100% (309/309)
No. 9 95.6% (1477/1545) 95.6% (1477/1545) 99.6% (4615/4635)
No. 10 96.9% (1497/1545) 97.4% (1505/1545) 95.5% (7375/7725) 98.7% (4574/4635)
No. 11 96.0% (1483/1545) 96.1% (1485/1545) 94.6% (7310/7725) 96.6% (7466/7725) 98.6% (4572/4635)
No. 12 96.5% (1491/1545) 97.2% (1501/1545) 96.3% (7439/7725) 97.5% (7535/7725) 96.5% (7452/7725) 98.8% (4581/4635)
No. 13 95.5% (1475/1545) 95.7% (1479/1545) 93.7% (7328/7725) 94.5% (7300/7725) 94.3% (7285/7725) 94.7% (7316/7725) 97.7% (4528/4635)
No. 14 97.5% (1506/1545) 97.5% (1506/1545) 95.8% (7400/7725) 96.8% (7478/7725) 96.0% (7416/7725) 96.8% (7478/7725) 95.5% (7377/7725) 99.6% (4616/4635)
No. 15 96.4% (1489/1545) 96.0% (1483/1545) 95.5% (7375/7725) 95.8% (7400/7725) 95.4% (7372/7725) 96.0% (7413/7725) 94.8% (7323/7725) 95.8% (7400/7725) 99.2% (4598/4635)
No. 16 95.0% (1468/1545) 95.5% (1476/1545) 95.2% (7352/7725) 95.5% (7381/7725) 94.8% (7325/7725) 95.6% (7384/7725) 93.8% (7246/7725) 95.2% (7354/7725) 95.0% (7340/7725) 99.1% (4591/4635)
No. 17 94.6% (1462/1545) 95.3% (1472/1545) 94.2% (7280/7725) 94.8% (7320/7725) 93.3% (7205/7725) 94.8% (7320/7725) 93.1% (7192/7725) 94.7% (7316/7725) 94.1% (7270/7725) 94.0% (7265/7725) 99.4% (4609/4635)
######
Comparison between HCV NS4 nucleotide sequences obtained from discrepant samples and reference sequences (Ref.) specific for each genotype by cloned Sanger sequencing. (**A**) Identity between NS4 nucleotide sequences of genotype 1b obtained by sequencing (5 clones) and reference sequences of genotype 1 (1a and 1b); (**B**) Identity between NS4 nucleotide sequences of genotype 2a obtained by sequencing (5 clones) and reference sequences of genotype 2a; (**C**) Identity between NS4 nucleotide sequences of genotype 2b obtained by sequencing (5 clones) and reference sequences of genotype 2b.
ijms-18-00172-t004a_Table 4
######
(**A**)
Ref. H77 HC-C2 HCV-J Con1 No. 1 No. 2 No. 3 No. 4
------- ------------------- ------------------- ------------------- ------------------- ------------------- ------------------- ------------------- -------------------
H77 100% (297/297)
HC-C2 76.1% (226/297) 100% (297/297)
HCV-J 78.1% (232/297) 90.9% (270/297) 100% (297/297)
Con1 77.1% (229/297) 88.9% (264/297) 90.9% (270/297) 100% (297/297)
No. 1 77.3% (1148/1485) 89.4% (1328/1485) 94.3% (1400/1485) 91.0% (1352/1485) 98.2% (4376/4455)
No. 2 76.4% (1134/1485) 88.8% (1319/1485) 89.6% (1331/1485) 87.8% (1304/1485) 88.7% (6585/7425) 99.9% (4451/4455)
No. 3 74.7% (1110/1485) 90.9% (1350/1485) 89.4% (1328/1485) 90.4% (1343/1485) 88.9% (6601/7425) 88.5% (6570/7425) 99.2% (4421/4455)
No. 4 77.5% (1151/1485) 92.3% (1370/1485) 92.3% (1370/1485) 92.3% (1371/1485) 91.9% (6825/7425) 91.6% (6800/7425) 93.5% (6940/7425) 99.7% (4441/4455)
ijms-18-00172-t004b_Table 4
######
(**B**)
Ref. JFH-1 HC-J6 No. 5 No. 6 No. 7 No. 8
------- ------------------- ------------------- ------------------- ------------------- ------------------- -------------------
JFH-1 100% (297/297)
HC-J6 88.6% (263/297) 100% (297/297)
No. 5 89.3% (1326/1485) 91.6% (1361/1485) 98.5% (4389/4455)
No. 6 91.2% (1355/1485) 91.7% (1362/1485) 94.0% (6980/7425) 98.5% (4389/4455)
No. 7 87.8% (1304/1485) 90.8% (1349/1485) 92.3% (6850/7425) 90.9% (6748/7425) 96.6% (4304/4455)
No. 8 90.6% (1346/1485) 92.1% (1368/1485) 92.3% (6854/7425) 90.7% (6735/7425) 91.2% (6771/7425) 99.2% (4419/4455)
ijms-18-00172-t004c_Table 4
######
(**C**)
Ref. HC-J8 MD2b1-2 No. 9 No. 10 No. 11 No. 12 No. 13 No. 14 No. 15 No. 16 No. 17
--------- ------------------- ------------------- ------------------- ------------------- ------------------- ------------------- ------------------- ------------------- ------------------- ------------------- -------------------
HC-J8 100% (297/297)
MD2b1-2 94.6% (281/297) 100% (297/297)
No. 9 92.4% (1372/1485) 89.7% (1332/1485) 99.7% (4442/4455)
No. 10 91.9% (1365/1485) 93.1% (1383/1485) 88.8% (6590/7425) 98.7% (4397/4455)
No. 11 93.9% (1395/1485) 93.9% (1395/1485) 89.8% (6665/7425) 93.3% (6925/7425) 98.9% (4407/4455)
No. 12 95.0% (1411/1485) 95.9% (1424/1485) 90.4% (6710/7425) 94.1% (6984/7425) 95.3% (7073/7425) 98.9% (4407/4455)
No. 13 92.4% (1372/1485) 92.1% (1368/1485) 89.9% (6675/7425) 91.8% (6816/7425) 92.7% (6883/7425) 93.2% (6920/7425) 99.8% (4446/4455)
No. 14 92.8% (1378/1485) 92.9% (1380/1485) 89.3% (6631/7425) 93.1% (6913/7425) 93.7% (6957/7425) 93.9% (6972/7425) 93.3% (6928/7425) 99.2% (4419/4455)
No. 15 92.9% (1380/1485) 91.6% (1360/1485) 90.5% (6720/7425) 92.3% (6856/7425) 92.7% (6885/7425) 93.8% (6965/7425) 91.3% (6779/7425) 92.7% (6883/7425) 99.4% (4427/4455)
No. 16 92.0% (1366/1485) 90.6% (1346/1485) 90.2% (6695/7425) 90.6% (6726/7425) 91.1% (6765/7425) 91.7% (6810/7425) 92.4% (6861/7425) 91.3% (6779/7425) 90.8% (6740/7425) 99.3% (4423/4455)
No. 17 89.6% (1330/1485) 87.2% (1295/1485) 89.7% (6660/7425) 85.5% (6351/7425) 87.3% (6484/7425) 88.2% (6551/7425) 89.4% (6638/7425) 86.3% (6408/7425) 86.3% (6406/7425) 88.1% (6540/7425) 98.2% (4375/4455)
######
(**A**) Comparison between consensus NS4 amino acid sequences obtained from discrepant samples, group-specific peptides and reference sequences by cloned Sanger sequencing; (**B**) Comparison between epitope regions of NS4 amino acid sequences from genotype 1b samples and those of group 1 peptide by cloned Sanger sequencing; (**C**) Comparison between epitope regions of NS4 amino acid sequences from samples of genotype 2a, 2b and those of Group 2 peptide by cloned Sanger sequencing.
ijms-18-00172-t005a_Table 5
######
(**A**)
Group 1 Peptide `FTTGSVVIVGRIILSGRPAVIPDREVLYREFDEMEECASHLPYIEQGMQLAEQFKQKALGLLQTATKHAEAAAPVVESKWRALET`
----------------- -----------------------------------------------------------------------------------------
HC-C2 `L...........V...............Q........G.........................I...Q.................`
HCV-J `L...........................Q......................................Q................V`
Con1 `L...............K..I......................................I........Q.............T..A`
No. 1 `L...............K...........Q......................................Q............Q...A`
No. 2 `L...................V................T.............................Q......M.....Q...A`
No. 3 `L..................................................................Q.................`
No. 4 `L...............K..................................................Q.................`
Group 2 peptide `.A..C.S.I..LHINQ.AV.A..K....EA.........RAAL..E.QRI..ML.S.IQ....Q.S.Q.QDIK.A.QTS.PKV.Q`
JFH-1 `LA..C.S.I..LHVNQ.VV.A..K....EA.........RAAL..E.QRI..ML.S.IQ....Q.S.Q.QDIQ.AMQAS.PKV.Q`
HC-J6 `LA..C.C.I..LHVNQ.AV.A..K....EA.........RAAL..E.QRI..ML.S.IQ....Q.S.Q.QDIQ.A.QAS.PKV.Q`
No. 5 `LA..C.S.I..LHINQ.AV.A..K....EA.........KAAL..E.QRI..ML.S.IQ....Q.S.Q.QDIQ.A.QAS.PKV.Q`
No. 6 `LA..CVT.I..LHVNQ.AV.A..K....EA.........KAAL..E.QRI..ML.S.IQ....Q.S.Q.QDIQ.A.QAS.PKV.Q`
No. 7 `LA..C.S.I..LHINQ.AV.A..K....EA.........KAAL..E.QRI..ML.S.IQ....Q.S.Q.QDIQ.A.QAS.PKV.Q`
No. 8 `LA..C.S.I..LHINQ.AV.A..K....EA.........RAAL..E.QRI..ML.S.IQ....Q.S.Q.QDIQ.A.QAS.PKV.Q`
HC-J8 `LA..CIS.I..LH.ND.VV.A..K.I..EA.........KAAL..E.QRM..ML.S.IQ....Q..RQ.QDIQ.AIQ.S.PK..Q`
MD2b1-2 `LA..C.S.I...H.ND.AV.A..K....EA.........KAAL..E.QRM..ML.S.IQ....Q..RQ.QDIQ.AIQ.S.PK..Q`
No. 9 `LA..CIS.I..LH.ND.VV.T..K....EA.........KAAL..E.QRM..ML.S.IQ....Q..RQ.QDIQ.AIQ.S.PK..Q`
No. 10 `LA..CICSI...H.NDQVV.A..K....EA.........KAAL..E.QRM..ML.S.IQ....Q...Q.QDIQ.AIQ.S.PK..Q`
No. 11 `LA..CIS.I...H.ND.VV.A..K.I..EA.........KAAL..E.QRM..ML.S.IQ....Q..RQ.QDIQ.AIQ.S.PK..Q`
No. 12 `LA..CIS.I...H.NDHVV.A..K.I..EA.........KAAL..E.QRM..ML.S.IQ....Q..RQ.QDIQ.AIQ.S.PK..Q`
No. 13 `LA..CIS.I..LH.ND.VV.A..K.I..EA.........KAAL..E.QRM..ML.S.IQ....Q...Q.QDIQ.A.Q.S.PK..Q`
No. 14 `LA..CIS.I..LH.ND.VV.A..K.I..EA.........KAAL..E.QRM..ML.S.IQ....Q...Q.QDIQ.AIQ.S.PK..Q`
No. 15 `LA..CIS.I..LH.NDHVV.A..K.I..EA.........KAAL..E.QRM..ML.S.IQ....Q...Q.QDIQ.TIQ.S.PK..Q`
No. 16 `LA..CIS.I..VH.ND.VV.T..K.I..EA.........KAAL..E.QRI..ML.S.IQ....Q...Q.QDIQ.A.Q.S.PK..Q`
No. 17 `LA..CIS.I..LH.ND.VV.T..K.I..EA.........KAAL..E.QRI..ML.S.IQ....Q..RQ.QDIQ.AMQ.S.PK..Q`
Amino acids identical to those of the group 1 peptide at each position are shown as dots.
ijms-18-00172-t005b_Table 5
######
(**B**)
No. Genotype Region 1 Region 2
-------------------------- ---------------------- --------------------------- ---------------------------
1\. 1b `K-----------Q----- (5)` `------------------- (5)`
2\. 1b `----V------------- (5) ` `--T---------------- (5)`
3\. 1b `------------------ (3) ` `------------------- (5)`
`--V--------------- (1)`
`---------------G-- (1)`
4\. 1b `K----------------- (3)` `------------------- (5)`
`K-----------Q----- (2)`
Group 2 peptide `-AV-A--K----EA----` `----RAAL--E-QRI--ML`
Amino acids identical to those of the group 1 peptides are indicated by dashes. Group 2 peptides are also shown in the bottom line. The number of clones with each sequence pattern is indicated in parentheses. Amino acids identical to those of reference peptides of different genotypes are underlined.
ijms-18-00172-t005c_Table 5
######
(**C**)
No. Genotype Region 1 Region 2
--------------------------- --------------------------- -------------------------- ---------------------------
5\. 2a `------------------ (5)` `----K-------------- (4)`
`---PK-------------- (1)`
6\. 2a `------------------ (5)` `----K-------------- (4)`
`----K---T---------- (1)`
7\. 2a `------------------ (5)` `----K-------------- (4)`
`------------------- (1)`
8\. 2a `------------------ (5)` `------------------- (5)`
9\. 2b `-V--T------------- (4)` `----K---------M---- (4)`
`-I--T------------- (1)` `--------------M---- (1)`
10\. 2b `QV---------------- (4)` `----K---------M---- (4)`
`QV-------I-------- (1)` `----KV--------T---- (1)`
11\. 2b `-V-------I-------- (4)` `----K---------M---- (5)`
`-VA------I-------- (1)`
12\. 2b `HV-------I-------- (4)` `----K---------M---- (5)`
`HV-M-----I-------- (1)`
13\. 2b `-V-------I-------- (5)` `----K---------M---- (5)`
14\. 2b `-V-------I-------- (5)` `----K---------M---- (5)`
15\. 2b `HV-------I-------- (4)` `----K---------M---- (5)`
`HV-------I----S--- (1)`
16\. 2b `-V--T----I-------- (5)` `----K-------------- (3)`
`----K-----K-------- (1)`
`----K-D------------ (1)`
17\. 2b `-V--T----I-------- (4)` `----K-------------- (5)`
`-V--T----I-C------ (1)`
Group 1 peptide `-PA-I--R----RE----` `----HLPY--Q-MQL--QF`
Amino acids identical to those of the group 2 peptides are indicated by dashes. Group 1 peptides are also shown in the bottom line. The number of clones with each sequence pattern is indicated in parentheses. Amino acids identical to those of reference peptides of different genotypes are underlined.
ijms-18-00172-t006_Table 6
######
HCV genotypes by deep sequencing of HCV NS4 regions.
No. H-77 (GT 1a) HCV-J (GT 1b) HC-J6 (GT 2a) HC-J8 (GT 2b) NZL1 (GT 3a), ED43 (GT 4a), EUH1480 (GT 5a), EUHK2 (GT 6a), QC69 (GT 7a) Total log10 Reads
------- -------------- --------------- --------------- --------------- -------------------------------------------------------------------------- -------------------
1\. \<0.5% 99.40% \<0.5% \<0.5% \<0.5% 4.9
2\. \<0.5% 99.52% \<0.5% \<0.5% \<0.5% 4.9
3\. \<0.5% 99.49% \<0.5% \<0.5% \<0.5% 4.9
4\. \<0.5% 99.87% \<0.5% \<0.5% \<0.5% 4.7
5\. \<0.5% \<0.5% 99.41% 0.58% \* \<0.5% 4.9
6\. \<0.5% \<0.5% 99.34% \<0.5% \<0.5% 4.8
7\. \<0.5% \<0.5% 99.92% \<0.5% \<0.5% 4.7
8\. \<0.5% \<0.5% 99.87% \<0.5% \<0.5% 4.7
9\. \<0.5% \<0.5% \<0.5% 99.55% \<0.5% 4.9
10\. \<0.5% \<0.5% \<0.5% 99.87% \<0.5% 4.9
11\. \<0.5% \<0.5% \<0.5% 99.78% \<0.5% 4.9
12\. \<0.5% \<0.5% \<0.5% 99.80% \<0.5% 4.8
13\. \<0.5% \<0.5% 0.57% \* 99.26% \<0.5% 5.9
14\. \<0.5% \<0.5% \<0.5% 99.51% \<0.5% 6.0
15\. \<0.5% \<0.5% \<0.5% 99.57% \<0.5% 4.9
16\. \<0.5% \<0.5% \<0.5% 99.26% \<0.5% 4.9
17\. \<0.5% \<0.5% \<0.5% 99.32% \<0.5% 4.8
JFH-1 \<0.5% \<0.5% 99.42% \<0.5% \<0.5% 6.0
GT, genotype; \* Indicates minor populations.
ijms-18-00172-t007_Table 7
######
Treatment outcomes with or without the direct-acting antiviral agents (DAAs) in the present study.
No. Treatment with DAAs Outcomes (with DAAs) Previous Treatment (without DAAs) Outcomes (without DAAs)
------ ---------------------------------------------- ---------------------- ----------------------------------- -------------------------
1\. Daclatasvir plus asunaprevir for 24 weeks SVR (+) Null response
2\. Daclatasvir plus asunaprevir for 24 weeks SVR (+) Relapse
3\. NA NA (+) Null response
4\. NA NA (+) Relapse
5\. Sofosbuvir plus ribavirin for 12 weeks SVR (−) NA
6\. Sofosbuvir plus ribavirin for 12 weeks SVR (−) NA
7\. Sofosbuvir plus ribavirin for 12 weeks SVR (−) NA
8\. Sofosbuvir plus ribavirin for 12 weeks SVR (−) NA
9\. Sofosbuvir plus ribavirin for 12 weeks SVR (+) Relapse
10\. Telaprevir with peginterferon plus ribavirin SVR (−) NA
11\. Sofosbuvir plus ribavirin for 12 weeks SVR (−) NA
12\. Peginterferon plus ribavirin for 24 weeks SVR (−) NA
13\. NA NA (−) NA
14\. NA NA (−) NA
15\. NA NA (−) NA
16\. NA NA (+) Null response
17\. Sofosbuvir plus ribavirin for 12 weeks SVR (−) NA
18\. NA NA (−) NA
SVR, sustained virological response; NA, not available; (−), not performed; (+), performed.
[Effect of adrenaline and ephedrine, against a background of aminazine, on systemic, cerebral and extracerebral hemodynamics].
Experiments on anesthetized cats demonstrated that chlorpromazine (2 mg/kg) perverts the vasopressor action of adrenaline (5 µg/kg) and ephedrine (2 mg/kg) on systemic arterial pressure and on the tonicity of the cerebral and extracerebral vessels. The authors believe that with chlorpromazine pretreatment the adrenomimetics produce a selective stimulation of the beta-adrenergic receptors, which accounts for the observed depressor effect.
NMR study of the electron transfer complex of plant ferredoxin and sulfite reductase: mapping the interaction sites of ferredoxin.
Plant ferredoxin serves as the physiological electron donor for sulfite reductase, which catalyzes the reduction of sulfite to sulfide. Ferredoxin and sulfite reductase form an electrostatically stabilized 1:1 complex for the intermolecular electron transfer. The protein-protein interaction between these proteins from maize leaves was analyzed by nuclear magnetic resonance spectroscopy. Chemical shift perturbation and cross-saturation experiments successfully mapped the location of two major interaction sites of ferredoxin: region 1 including Glu-29, Glu-30, and Asp-34 and region 2 including Glu-92, Glu-93, and Glu-94. The importance of these two acidic patches for interaction with sulfite reductase was confirmed by site-specific mutation of acidic ferredoxin residues in regions 1 and 2, separately and in combination, by which the ability of mutant ferredoxins to transfer electrons and bind to sulfite reductase was additively lowered. Taken together, this study gives a clear illustration of the molecular interaction between ferredoxin and sulfite reductase. We also present data showing that this interaction surface of ferredoxin significantly differs from that when ferredoxin-NADP(+) reductase is the interaction partner.
---
author:
- 'Shahar MENDELSON${}^{1}$'
- 'Alain PAJOR$^{2}$'
- 'Nicole TOMCZAK-JAEGERMANN${}^{3}$'
date: 'June 12, 2005'
title: Reconstruction and subgaussian operators
---
[**Abstract:**]{}
We present a randomized method to approximate any vector $v$ from some set $T \subset {\mathbb{R}}^n$. The data one is given is the set $T$ and $k$ scalar products $({\left<X_i,v\right>})_{i=1}^k$, where $(X_i)_{i=1}^k$ are i.i.d. isotropic subgaussian random vectors in ${\mathbb{R}}^n$, and $k
\ll n$. We show that with high probability, any $y \in T$ for which $({\left<X_i,y\right>})_{i=1}^k$ is close to the data vector $({\left<X_i,v\right>})_{i=1}^k$ will be a good approximation of $v$, and that the degree of approximation is determined by a natural geometric parameter associated with the set $T$.
We also investigate a random method to identify exactly any vector which has a relatively short support using linear subgaussian measurements as above. It turns out that our analysis, when applied to $\{-1,1\}$-valued vectors with i.i.d, symmetric entries, yields new information on the geometry of faces of random $\{-1,1\}$-polytope; we show that a $k$-dimensional random $\{-1,1\}$-polytope with $n$ vertices is $m$-neighborly for very large $m\le {c k/\log (c'
n/k)}$.
The proofs are based on new estimates on the behavior of the empirical process $\sup_{f \in F} \left|k^{-1}\sum_{i=1}^k f^2(X_i)
-{\mathbb{E}}f^2 \right|$ when $F$ is a subset of the $L_2$ sphere. The estimates are given in terms of the $\gamma_2$ functional with respect to the $\psi_2$ metric on $F$, and hold both in exponential probability and in expectation.
Introduction {#sec:intro}
============
The aim of this article is to investigate the linear “approximate reconstruction" problem in ${\mathbb{R}}^n$. In such a problem, one is given a set $T \subset {\mathbb{R}}^n$ and the goal is to be able to approximate any unknown $v \in T$ using random linear measurements. In other words, one is given the set of values $({\left<X_i,v\right>})_{i=1}^k$, where $X_1,...,X_k$ are independent random vectors in ${\mathbb{R}}^n$ selected according to some probability measure $\mu$. Using this information (and the fact that the unknown vector $v$ belongs to $T$) one has to produce, with very high probability with respect to $\mu^k$, some $t \in T$, such that the Euclidean norm $|t-v|
\leq {\varepsilon}(k)$ for ${\varepsilon}(k)$ as small as possible. Of course, the random sampling method has to be “universal" in some sense and not tailored to a specific set $T$; and it is natural to expect that the degree of approximation ${\varepsilon}(k)$ depends on some geometric parameter associated with $T$.
Questions of a similar flavor have been thoroughly studied in nonparametric statistics and statistical learning theory (see, for example, [@BBL] and [@M; @MT] and references therein). This particular problem has been addressed by several authors (see [@CT1; @CT2; @CT3; @RV] for the most recent contributions), in a rather restricted context. First of all, the sets considered were either the unit ball in $\ell_1^n$ or the unit balls in weak $\ell_p^n$ spaces for $0<p<1$ - and the proofs of the approximation estimates depended on the choice of those particular sets. Second, the sampling process was done when $X_i$ were distributed according to the Gaussian measure on ${\mathbb{R}}^n$ or, in [@CT1], according to the Fourier ensemble.
In contrast, we present a method which is very general. Our results hold for [*any*]{} set $T \subset {\mathbb{R}}^n$, and the class of measures that could be used is broad; it contains all probability measures on ${\mathbb{R}}^n$ which are isotropic and subgaussian, that is, satisfy that for every $y \in {\mathbb{R}}^n$, ${\mathbb{E}}|{\left<X,y\right>}|^2=|y|^2$, and the random variables ${\left<X,y\right>}$ are subgaussian with constant $\alpha |y|$ for some $\alpha \geq 1$. (see Definition \[def:measure\], below). This class of measures contains, among others, the Gaussian measure on ${\mathbb{R}}^n$, the uniform measure on the vertices of the combinatorial cube and the normalized volume measure on various convex, symmetric bodies (e.g. the unit balls of $\ell_p^n$ for $2 \leq p \leq \infty$).
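As a quick sanity check (an illustrative simulation, not part of the paper), the isotropy condition ${\mathbb{E}}|{\left<X,y\right>}|^2=|y|^2$ is easy to verify numerically for one of the measures mentioned above, the uniform measure on the vertices of the combinatorial cube; the dimension $n$, sample size $N$ and test vector $y$ below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 8, 200_000  # arbitrary dimension and Monte Carlo sample size

# X uniform on the vertices of the combinatorial cube {-1, 1}^n:
# independent symmetric +-1 coordinates, each with mean 0 and variance 1.
X = rng.choice([-1.0, 1.0], size=(N, n))

# Isotropy: E <X, y>^2 should equal |y|^2 for every fixed y.
y = rng.standard_normal(n)
empirical = np.mean((X @ y) ** 2)
print(empirical, np.dot(y, y))  # the two values agree up to sampling error
```

The same check applies verbatim to the Gaussian measure, and, after rescaling the coordinates to unit variance, to the normalized volume measure on the cube.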
It turns out that the key parameter in the estimate on the degree of approximation ${\varepsilon}(k)$ is indeed geometric in nature. Moreover, the analysis of the approximation problem is centered around the way the random operator $\Gamma=\sum_{i=1}^k
{\left<X_i,\cdot\right>}e_i$ (where $(e_i)_{i=1}^k$ is the standard basis in ${\mathbb{R}}^k$) acts on subsets of the unit sphere.
Our geometric approach, when applied to the sets $T$ considered in [@CT1], yields the optimal estimates on ${\varepsilon}(k)$, with better probability estimates (of the order of $1-\exp(-ck)$), even when the sampling is done according to an arbitrary isotropic, subgaussian measure. Moreover, our result is more robust than the one from [@CT1] in the following sense. The reconstruction method suggested in [@CT1] is to find some $y \in T$ such that for every $1 \leq i \leq k$, ${\left<X_i,v\right>}={\left<X_i,y\right>}$, and then to show that such $v$ and $y$ must necessarily be close. From our theorem it is clear that one can choose any $y \in T$ for which $\sum_{i=1}^k
{\left<X_i,y-v\right>}^2$ is relatively small, which is far more stable algorithmically.
For the moment, let us present a simple version of the main result we prove in this direction, and to that end we require the following notation. Let $(g_i)_{i=1}^n$ be independent standard Gaussian random variables. Let $T\subset{\mathbb{R}}^n$ be a star-shaped set (i.e. for every $t\in T$ and $0 \le \lambda\le 1$, $\lambda t\in T$) and consider the following geometric parameter $${\ell_*}(T)={\mathbb{E}}\sup_{t \in T}
\left| \sum_{i=1}^n g_i t_i\right|$$ which is, up to a factor of the order of $2\sqrt n$, the mean width of the body $T$. We now define a more sensitive parameter, which as we will see in this article, is the right parameter to control the error term ${\varepsilon}(k)$ for a star-shaped set $T$: $$r^*_k (\theta,T) := \inf \Bigl\{ \rho>0 : \rho \ge c \, \alpha^2 {\ell_*}(T\cap \rho S^{n-1})/ \theta\sqrt k \Bigr\}.$$
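To make the parameter ${\ell_*}(T)$ concrete (a numerical sketch, not part of the paper): for $T=B_1^n$ the supremum $\sup_{t \in B_1^n} |\sum_i g_i t_i|$ is attained at a vertex of the cross-polytope (duality of $\ell_1^n$ and $\ell_\infty^n$), so ${\ell_*}(B_1^n) = {\mathbb{E}}\max_i |g_i|$, which grows like $\sqrt{2\log n}$ up to lower-order corrections; this is the source of the logarithmic factor in the error bound for $T=B_1^n$ stated later in the introduction.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 1024, 20_000  # arbitrary dimension and Monte Carlo repetitions

# sup over t in B_1^n of |<g, t>| equals the ell_infty norm of g, so
# ell_*(B_1^n) = E max_i |g_i|, which is sqrt(2 log n) to first order.
g = rng.standard_normal((trials, n))
ell_star = np.abs(g).max(axis=1).mean()
print(ell_star, np.sqrt(2 * np.log(n)))  # empirical value vs. first-order approximation
```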
We may now state a version of our main result concerning approximate reconstruction.
[**Theorem A** ]{} [*There exists an absolute constant $c_1$ for which the following holds. Let $T$ be a star-shaped subset of ${\mathbb{R}}^n$, let $\mu$ be an isotropic, subgaussian measure with constant $\alpha$ on ${\mathbb{R}}^n$, and let $X_1,...,X_k$ be independent random vectors selected according to $\mu$. Then, with probability at least $1-\exp(-c_1k/\alpha^4)$, every $y,v \in
T$ satisfy $$|y-v| \leq 2\left(\frac{1}{k} \sum_{i=1}^k
\left({\left<X_i,v\right>}-{\left<X_i,y\right>}\right)^2 \right)^{1/2} +r_k^*(1/2, T-T).$$* ]{}
The parameter $r^*_k (\theta,T)$ can be estimated for the unit balls of classical normed or quasi-normed spaces (see Section \[sec:app-rec\]). In particular, if $T=B_1^n$ then, under the same hypotheses and with the same probability as above, one has $$|y-v| \leq 2\left(\frac{1}{k} \sum_{i=1}^k
\left({\left<X_i,v\right>}-{\left<X_i,y\right>}\right)^2 \right)^{1/2} +
c \alpha^2 \sqrt{\frac{\log({c\alpha^4 n/k})}{k}}$$ where $c >0 $ is an absolute constant; this leads to the optimal estimate for ${\varepsilon}(k)$ for that set.
The main idea in the proof of this theorem is the fact that, excluding a set of exponentially small probability, the random operator $\frac{1}{\sqrt{k}}\sum_{i=1}^k {\left<X_i,\cdot\right>}e_i$ is a very good isomorphism on the elements of $T$ whose Euclidean norm is large enough (see Section \[random\_proj\] for more details).
A question of a similar nature we investigate here focuses on “exact reconstruction" of vectors in ${\mathbb{R}}^n$ that have a short support. Suppose that $z \in {\mathbb{R}}^n$ is in the unit Euclidean ball, and has a relatively short support $m <<n$. The aim is to use a random sampling procedure to identify $z$ exactly, rather than just to approximate it. The motivation for this problem comes from [*error correcting codes*]{}, in which one has to overcome random noise that corrupts a part of a transmitted signal. The noise is modelled by adding to the transmitted vector the noise vector $z$. The assumption that the noise does not change many bits in the original signal implies that $z$ has a short support, and thus, in order to correct the code, one has to identify the noise vector $z$ exactly. Since error correcting codes are not the main focus of this article, we will not explore this topic further, but rather refer the interested reader to [@MS; @CT2; @RV] and references therein for more information.
In the geometric context we are interested in, the problem has been studied in [@CT2; @RV], where it was shown that if $z$ has a short support relative to the dimension $n$ and the size of the sample $k$, and if $\Gamma$ is a $k \times n$ matrix whose entries are independent, standard Gaussian variables, then with probability at least $1-\exp(-ck)$, the minimization problem $$(P)\qquad \min \|v\|_{\ell_1^n} \ \ \ {\rm for} \ \ \Gamma v =
\Gamma z,$$ has a unique solution, which is $z$. Thus, solution to this minimization problem will pin-point the “noise vector" $z$. The idea of using this minimization problem was first suggested in [@CDS].
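In practice, $(P)$ is a linear program: writing $v = p - q$ with $p,q \ge 0$ turns $\min \|v\|_{\ell_1^n}$ into minimizing $\sum_i (p_i+q_i)$ subject to $\Gamma(p-q)=\Gamma z$. The sketch below is illustrative only; the dimensions, the sparsity pattern and the use of `scipy` are our choices, not the paper's. It recovers a 3-sparse $z \in {\mathbb{R}}^{60}$ from $k=30$ Gaussian measurements.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, k = 60, 30
Gamma = rng.standard_normal((k, n))  # rows: i.i.d. standard Gaussian vectors

z = np.zeros(n)                      # the "noise" vector, with short support
z[[3, 17, 42]] = [1.5, -2.0, 0.7]

# (P): min ||v||_1 subject to Gamma v = Gamma z, as an LP in (p, q), v = p - q.
c = np.ones(2 * n)
A_eq = np.hstack([Gamma, -Gamma])
res = linprog(c, A_eq=A_eq, b_eq=Gamma @ z, bounds=(0, None))
v_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(v_hat - z)))  # essentially zero: exact recovery
```

With these parameters the support size $3$ is far below the $c_2 k/\log(c_3 n/k)$ threshold of Theorem B, so exact recovery is the typical outcome.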
We extend this result to a general random matrix whose rows are $X_1,...,X_k$, sampled independently according to an isotropic, subgaussian measure.
[**Theorem B** ]{} [*Let $\Gamma$ be as above. With probability at least $1-
\exp(- c_1\, k/\alpha ^4)$, any vector $z$ whose support has cardinality less than $ {c_2k/\log (c_3 n/k)}$ is the unique minimizer of the problem $(P)$, where $c_1, c_2, c_3$ are absolute constants.* ]{}
Interestingly enough, the same analysis yields some information on the geometry of $\{-1,1\}$ random polytopes. Indeed, consider the $k \times n$ matrix $\Gamma$ whose entries are independent symmetric $\{-1,1\}$ valued random variables. Thus, $\Gamma$ is a random operator selected according to the uniform measure on the combinatorial cube $\{-1,1\}^n$, which is an isotropic, subgaussian measure with constant $\alpha=1$. The columns of $\Gamma$, denoted by $v_1,...,v_n$ are vectors in $\{-1,1\}^k$ and let $K^+ = {\rm conv}(v_1,...,v_n)$ be the convex polytope generated by $\Gamma$; $K^+$ is thus called a random $\{-1,1\}$-polytope.
A convex polytope is called $m$-neighborly if any set of less than $m$ of its vertices is the vertex set of a face (see [@Z]). The following result yields the surprising fact that a random $\{-1,1\}$-polytope is $m$-neighborly for a relatively large $m$. In particular, it will have the maximal number of $r$-faces for $r\le m$.
[**Theorem C** ]{} [ *There exist absolute constants $c_1,c_2, c_3$ for which the following holds. For $1\le k\le n$, with probability larger than $1-
\exp(- c_1 k)$, a $k$-dimensional random $\{-1,1\}$ convex polytope with $n$ vertices is $m$-neighborly provided that $$m\le {c_2k\over \log (c_3 n/k)}\cdot$$* ]{}
The main technical tool we use throughout this article, which is of independent interest, is an estimate of the behavior of the supremum of the empirical process $f \to Z_f =k^{-1}\sum_{i=1}^k
f^2(X_i) - 1$ indexed by a subset of the $L_2$ sphere. That is, $$\sup_{f \in F} \left|\frac{1}{k} \sum_{i=1}^k f^2(X_i) -1 \right|,$$ where $X_1,...,X_k$ are independent, distributed according to a probability measure $\mu$, and under the assumption that every $f \in F$ satisfies ${\mathbb{E}}f^2 =1$. We assume further that $F$ is a bounded set with respect to the $\psi_2$ norm, defined for a random variable $Y$ by $$\|Y\|_{\psi_2} = \inf \left\{ u>0 : {\mathbb{E}}\exp(Y^2/u^2) \leq 2
\right\}.$$
To formulate the following result, we require the notion of the $\gamma_p$ functional [@Tal-book]. For a metric space $(T,d)$, an [*admissible sequence*]{} of $T$ is a collection of subsets of $T$, $\{T_s : s \geq 0\}$, such that for every $s \geq 1$, $|T_s|=2^{2^s}$ and $|T_0|=1$. For $p=1, 2$, we define the $\gamma_p$ functional by $$\gamma_p(T,d) =\inf \sup_{t \in T} \sum_{s=0}^\infty
2^{s/p}d(t,T_s),$$ where the infimum is taken with respect to all admissible sequences of $T$.
[**Theorem D** ]{} [*There exist absolute constants $c_1, c_2, c_3$ for which the following holds. Let $(\Omega,\mu)$ be a probability space, let $F$ be a subset of the unit sphere of $L_2(\mu)$ and assume that ${\rm diam}(F, \| \ \|_{\psi_2}) = \alpha$. Then, for any $\theta>0$ and $k\ge 1$ satisfying $$c_1\alpha \gamma_2(F, \| \
\|_{\psi_2})\leq \theta\sqrt{k},$$ with probability at least $1-\exp(-c_2\theta^2k/\alpha^4)$, $$\sup_{f \in F} |Z_f| \leq \theta.$$ Moreover, if $F$ is symmetric, then $${\mathbb{E}}\sup_{f \in F} |Z_f| \leq c_3\alpha^2 \frac{\gamma_2(F, \| \
\|_{\psi_2})}{\sqrt{k}}.$$* ]{}
Theorem D improves a result of a similar flavor from [@KM] in two ways. First of all, the bound on the probability is exponential in the sample size $k$, which was not the case in [@KM]. Second, we were able to bound the expectation of the supremum of the empirical process using only a $\gamma_2$ term. This fact is surprising because the expectation of the supremum of empirical processes is usually controlled by two terms; the first one bounds the subgaussian part of the process and is captured by the $\gamma_2$ functional with respect to the underlying metric. The other term is needed to control the sub-exponential part of the empirical process, and is bounded by the $\gamma_1$ functional with respect to an appropriate metric (see [@Tal-book] for more information on the connections between the $\gamma_p$ functionals and empirical processes). Theorem D shows that the expectation of the supremum of $|Z_f|$ behaves as if $\{Z_f\,:\, f\in F\}$ were a subgaussian process with respect to the $\psi_2$ metric (although it is not), and this is due to the key fact that all the functions in $F$ have the same second moment.
We end this introduction with the organization of the article. In Section \[empirical\] we present the proof of Theorem D and some of its corollaries we require. In Section \[random\_proj\] we illustrate these results in the case of linear processes which corresponds to linear measurements. In Section \[sec:app-rec\] we investigate the approximate reconstruction problem for a general set, and in Section \[sec:exact\] we present a proof for the exact reconstruction scheme, with its application to the geometry of random $\{-1,1\}$-polytopes.
Throughout this article we will use letters such as $c, c_1, \ldots$ to denote absolute constants which may change depending on the context. We denote by $(e_i)_{i=1}^n$ the canonical basis of ${\mathbb{R}}^n$, by $|x|$ the Euclidean norm of a vector $x$ and by $B_2^n$ the Euclidean unit ball. We also denote by $|I|$ the cardinality of a finite set $I$.
[*Acknowledgement:*]{} A part of this work was done when the second and the third authors were visiting the Australian National University, Canberra; and when the first and the third authors were visiting Université de Marne-la-Vallée. They wish to thank these institutions for their hospitality.
Empirical Processes {#empirical}
===================
In this section we present some results in empirical processes that will be central to our analysis. All the results focus on understanding the process $ Z_f=\frac{1}{k}\sum_{i=1}^k f^2(X_i) -{\mathbb{E}}f^2 $, where $k \ge 1$ and $X_1,...,X_k$ are independent random variables distributed according to a probability measure $\mu$. In particular, we investigate the behavior of $\sup_{f \in F} |Z_f|$ in terms of various metric structures on $F$, and under the key assumption that every $f \in F$ has the same second moment. The parameters involved are standard in [*generic chaining*]{} type estimates (see [@Tal-book] for a comprehensive study of this topic).
Recall that the $\psi_p$ norm of a random variable $X$ is defined as $$\|X\|_{\psi_p}=\inf \left\{u >0:
{\mathbb{E}}\exp\left({|X|^p}/{u^p}\right) \leq 2 \right\}.$$ It is standard to verify (see for example [@VW]) that if $X$ has a bounded $\psi_2$ norm, then it is subgaussian with parameter $c\|X\|_{\psi_2}$ for some absolute constant $c$. More generally, a bounded $\psi_p$ norm implies that $X$ has a tail behavior, ${\mathbb{P}}(|X| >u)$, of the type $\exp (- cu^p/\|X\|_{\psi_p}^p)$.
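The $\psi_2$ norm can be evaluated numerically straight from this definition by bisection on $u$. The sketch below (plain Python; the helper name `psi2_norm` is ours, not from the text) computes it for a symmetric $\pm 1$ variable, for which ${\mathbb{E}}\exp(Y^2/u^2)=\exp(1/u^2)$ and hence $\|Y\|_{\psi_2}=1/\sqrt{\log 2}$ exactly.

```python
import math

def psi2_norm(samples, tol=1e-9):
    """Smallest u > 0 with mean(exp(y^2/u^2)) <= 2 over the sample,
    found by bisection; this is the empirical psi_2 (Orlicz) norm."""
    def feasible(u):
        m = sum(math.exp(y * y / (u * u)) for y in samples) / len(samples)
        return m <= 2.0
    lo, hi = 0.0, 1.0
    while not feasible(hi):            # grow the upper bracket until feasible
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        try:
            ok = feasible(mid)
        except OverflowError:          # exp blows up for tiny u: infeasible
            ok = False
        if ok:
            hi = mid
        else:
            lo = mid
    return hi

rademacher = [1.0, -1.0]
print(psi2_norm(rademacher))               # ~ 1.2011
print(1.0 / math.sqrt(math.log(2.0)))      # exact value, for comparison
```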
Our first fundamental ingredient is the well-known Bernstein inequality, which we shall use in the form of a $\psi_1$ estimate ([@VW]).
\[Bernst-psi1\] Let $Y_1,...,Y_k$ be independent random variables with zero mean such that for some $b >0$ and every $i$, $\|Y_i\|_{\psi_1} \le b$. Then, for any $u>0$, $$\label{eq:Bern}
{\mathbb{P}}\Bigl(\bigl|\frac{1}{k}\sum_{i=1}^k Y_i \bigr| >u
\Bigr) \leq 2\exp \left(-c\, k\min \Bigl(
\frac{u}{b},\, \frac{u^2}{b^2}\Bigr)\right),$$ where $c >0$ is an absolute constant.
We will be interested in classes of functions $F \subset L_2(\mu)$ bounded in the $\psi_2$-norm; we assume without loss of generality that $F$ is symmetric and we let ${\mathop{\rm diam}}(F, \|\ \|_{\psi_2}) : = 2
\sup_{f \in F} \|f\|_{\psi_2}$. Additionally, in many technical arguments we shall often consider classes $F \subset S_{L_2}$, where $S_{L_2} = \{ f : \|f\|_{L_2} =1 \}$ is the unit sphere in $L_2(\mu)$.
Let $X_1, X_2, \ldots$ be independent random variables distributed according to $\mu$. Fix $k \ge 1$ and for $f \in F$ define the random variables $Z_f$ and $W_f$ by $$Z_f=\frac{1}{k}\sum_{i=1}^k f^2(X_i)- {\mathbb{E}}f^2
\qquad \mbox{\rm and} \qquad
W_f=\left(\frac{1}{k} \sum_{i=1}^k f^2(X_i)\right)^{1/2}.$$
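As a quick numerical illustration of these quantities (NumPy sketch; the sizes and the seed are illustrative assumptions), take $\mu$ uniform on $\{-1,1\}^n$ and $f(x) = {\left<x,y\right>}$ for a fixed unit vector $y$, so that ${\mathbb{E}}f^2 = 1$; for large $k$ one observes $Z_f$ of order $k^{-1/2}$ and $W_f$ close to $1$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 30, 20000
y = rng.standard_normal(n)
y /= np.linalg.norm(y)                     # f(x) = <x, y>, so E f^2 = |y|^2 = 1
X = rng.choice([-1.0, 1.0], size=(k, n))   # X_i uniform on {-1,1}^n (isotropic)
vals = (X @ y) ** 2                        # f^2(X_1), ..., f^2(X_k)
Z_f = vals.mean() - 1.0                    # Z_f = (1/k) sum f^2(X_i) - E f^2
W_f = np.sqrt(vals.mean())                 # W_f = ((1/k) sum f^2(X_i))^{1/2}
print(Z_f, W_f)
```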
The first lemma follows easily from Bernstein’s inequality. We state it in a form convenient for further use, and give a proof of one part for completeness.
\[lemma:psi-2-est\] There exists an absolute constant $c_1 >0$ for which the following holds. Let $F \subset S_{L_2}$, $\alpha = {\mathop{\rm diam}}(F, \|\ \|_{\psi_2})$ and set $k \ge 1$. For every $f, g \in F$ and every $u \geq 2$ we have $${\mathbb{P}}\bigl(W_{f-g} \geq u\|f-g\|_{\psi_2} \bigr) \le
2 \exp(-c_1 k u ^2).$$ Also, for every $u >0$, $${\mathbb{P}}\left( \left|Z_f - Z_g\right| \ge u\, \alpha\,
\|f-g\|_{\psi_2} \right)
\le 2 \exp \bigl(-c_1 k \min (u, u^2)\bigr),$$ and $${\mathbb{P}}\left( \left|Z_f \right| \ge u\, \alpha^2 \right)
\le 2 \exp \bigl(-c_1 k \min (u, u^2)\bigr).$$
We present the standard proof of the first estimate; the other estimates are proved similarly (see e.g., [@KM], Lemma 3.2).
Clearly, $${\mathbb{E}}\,W_{f-g}^2 = \frac{1}{k} {\mathbb{E}}\sum_{i=1}^k (f-g)^2 (X_i)
= {\mathbb{E}}(f-g)^2 (X_1) = \|f-g\|_{L_2}^2.$$ Applying Bernstein’s inequality it follows that for $t >0$, $$\begin{aligned}
\lefteqn{{\mathbb{P}}\left(\left|W_{f-g}^2 - \|f-g\|_{L_2}^2 \right| \ge t
\right)}\\
&\le& 2\, \exp \left(- c k \min \left(\frac{t}{\| (f-g)^2\|_{\psi_1}},
\Bigl(\frac{t}{\| (f-g)^2 \|_{\psi_1}}\Bigr)^2\right)\right).\end{aligned}$$ Since $\|h^2 \|_{\psi_1} = \|h \|_{\psi_2}^2 $ for every function $h$, then letting $t = (u^2-1) \| f-g\|_{\psi_2}^2$, $$\begin{aligned}
{\mathbb{P}}\Bigl( W_{f-g}^2 &\ge & u^2 \| f-g\|_{\psi_2}^2 \Bigr)\\
& \le &
{\mathbb{P}}\Bigl( W_{f-g}^2 - \|f-g\|_{L_2}^2 \ge
(u^2-1) \| f-g\|_{\psi_2}^2
\Bigr) \\
& \le & 2\, \exp \left(- c k \min (u^2/2, u^4/4) \right),\end{aligned}$$ as promised.
Now we return to one of the basic notions used in this paper, that of the $\gamma_2$-functional. Let $(T,d)$ be a metric space. Recall that an [*admissible sequence*]{} of $T$ is a collection of subsets of $T$, $\{T_s : s \geq 0\}$, such that for every $s \geq 1$, $|T_s|=2^{2^s}$ and $|T_0|=1$.
\[def:gamma-2\] For a metric space $(T,d)$ and $p = 1, 2$, define $$\gamma_p(T,d) =\inf \sup_{t \in T} \sum_{s=0}^\infty
2^{s/p}d(t,T_s),$$ where the infimum is taken with respect to all admissible sequences of $T$. In cases where the metric is clear from the context, we will denote the $\gamma_p$ functional by $\gamma_p(T)$.
Set $\pi_s:T \to T_s$ to be a metric projection function onto $T_s$, that is, for every $t \in T$, $\pi_s(t)$ is a nearest element to $t$ in $T_s$ with respect to the metric $d$. It is easy to verify by the triangle inequality that for every admissible sequence and every $t \in T$, $\sum_{s=0}^\infty 2^{s/2}d(\pi_{s+1}(t),\pi_{s}(t)) \leq ( 1 +
1/\sqrt 2) \sum_{s=0}^\infty 2^{s/2}d(t,T_s)$ and that ${\mathop{\rm diam}}(T,d) \leq
2\gamma_2(T,d)$.
We say that a set $F$ is [*star-shaped*]{} if the fact that $f \in F$ implies that $\lambda f \in F$ for every $0 \leq \lambda \leq 1$.
The next theorem shows that, outside a set of exponentially small probability, $W_f$ is close to being an isometry in the $L_2(\mu)$ sense for functions in $F$ that have a relatively large norm.
\[thm:general\] There exist absolute constants $c, \bar{c} >0$ for which the following holds. Let $F \subset L_2(\mu)$ be star-shaped, $\alpha =
{\mathop{\rm diam}}(F, \|\ \|_{\psi_2})$ and $k \ge 1$. For any $0 < \theta <1$, with probability at least $1- \exp(- \bar{c}\, \theta^2 k/\alpha
^4)$, for all $f \in F$ satisfying ${\mathbb{E}}f^2 \ge
r_k^*(\theta)^2$, we have $$\label{two_sided_gen}
(1 - \theta) {\mathbb{E}}f^2 \le W_f^2 \le (1+\theta) {\mathbb{E}}f^2,$$ where $$\label{rho_star0}
r^*_k(\theta) = r^*_k (\theta, F) := \inf \Bigl\{ \rho>0 : \rho \ge
c \, \alpha
\frac{\gamma_2 (F \cap \rho S_{L_2}, \|\ \|_{\psi_2})}
{ \theta \sqrt k } \Bigr\}.$$
The two-sided inequality (\[two\_sided\_gen\]) is intimately related to an estimate on $\sup_{f \in F} |Z_f|$, which, in turn, is based on two ingredients. The first one shows, in the language of the standard chaining approach, that one can control the “end parts" of all chains. Its proof is essentially the same as Lemma 2.3 from [@KM].
\[lemma:chain\] There exists an absolute constant $C$ for which the following holds. Let $F \subset S_{L_2}$, $\alpha = {\mathop{\rm diam}}(F, \|\ \|_{\psi_2})$ and $k \ge 1$. There is $F' \subset F$ such that $|F'| \leq 4^k$ and with probability at least $1-\exp(- k)$, we have, for every $f \in F$, $$\label{chain}
W_{f-\pi_{F'}(f)} \leq C {\gamma_2(F, \| \ \|_{\psi_2})}/{\sqrt{k}},$$ where $\pi_{F'}(f)$ is a nearest point to $f$ in $F'$ with respect to the $\psi_2$ metric.
Let $\{F_s: s \geq 0\}$ be an “almost optimal” admissible sequence of $F$. Then for every $f \in F$, $$\sum_{s=0}^\infty
2^{s/2}\, \|\pi_{s+1}(f)-\pi_{s}(f)\|_{\psi_2} \leq 2 \gamma_2(F, \| \
\|_{\psi_2} ).$$
Let $s_0$ be the minimal integer such that $2^{s_0} > k$, and let $F'=F_{s_0}$. Then $|F'| \leq 2^{2k} = 4^k$. Write $$f-\pi_{s_0}(f)=\sum_{s=s_0}^\infty
\left(\pi_{s+1}(f)-\pi_s(f)\right).$$ Since $W$ is sub-additive, $$W_{f-\pi_{s_0}(f)} \leq \sum_{s=s_0}^\infty
W_{\pi_{s+1}(f)-\pi_s(f)}.$$ For any $f \in F$, $s \ge s_0$ and $\xi \ge 2$, noting that $2^s>k$, it follows by Lemma \[lemma:psi-2-est\] that $$\label{term_est}
{\mathbb{P}}\left(W_{\pi_{s+1}(f)-\pi_s(f)} \geq \xi
\frac{2^{s/2}}{\sqrt{k}}\|\pi_{s+1}(f)-\pi_s(f)\|_{\psi_2}\right)
\leq 2 \exp(-c_1 \xi^2 2^{s}).$$
Since $|F_s| \leq 2^{2^s}$, there are at most $2^{2^{s+2}}$ pairs of $\pi_{s+1}(f)$ and $\pi_s(f)$. Thus, for every $s \ge s_0$, the probability of the event from (\[term\_est\]) holding for some $f \in F$ is less than or equal to $2^{2^{s+2}}\cdot
2 \exp (- c_1 \xi^2 2^{s}) \le \exp (2^{s+3} - c_1 \xi^2 2^{s})$, which, for $\xi \ge \xi_0 := \max(4/\sqrt {c_1}, 2)$, does not exceed $\exp (- c_1 \xi^2 2^{s-1})$.
Combining these estimates together it follows that $$W_{f-\pi_{s_0}(f)} \leq \xi \sum_{s=s_0}^\infty
\frac{2^{s/2}}{\sqrt{k}}\|\pi_{s+1}(f)-\pi_s(f)\|_{\psi_2} \le
2 \xi\, \frac{\gamma_2\bigl(F, \|\ \|_{\psi_2}\bigr)}{\sqrt{k}},$$ outside a set of probability $$\sum_{s=s_0}^\infty \exp (- c_1 \xi^2 2^{s-1}) \le \exp(- c_1
\xi^2 2^{s_0} /4).$$ We complete the proof setting, for example $\xi =
\max(\xi_0, 2/\sqrt{c_1})$ and recalling that $2^{s_0} > k$.
\[rem:expect\] [The proof of the lemma shows that there exist absolute constants $c', c'' >0$ such that for every $\xi \ge c'$, $${\mathbb{P}}\Biggl(\sup_{f \in F} W_{f-\pi(f)} \geq \xi
\frac{\gamma_2\bigl(F, \|\ \|_{\psi_2}\bigr)}{\sqrt{k}} \Biggr)
\leq \exp(-c'' \xi^2 k),$$ a fact which will be used later.]{}
The next lemma estimates the supremum $\sup_{f \in F'} |Z_f|$, where the supremum is taken over a subset $F'$ of $F$ of relatively small cardinality, or in other words, over the “beginning part” of a chain. However, in order to obtain a probability estimate which is exponential in $k$, we require a separate argument (generic chaining) for the “middle part” of a chain, while for the “very beginning” a standard concentration estimate suffices.
\[lemma:small-set\] There exist absolute constants $C$ and $c''' >0$ for which the following holds. Let $F \subset S_{L_2}$ and $\alpha = {\mathop{\rm diam}}(F, \|\
\|_{\psi_2})$. Let $k \ge 1$ and $F' \subset F$ such that $|F'| \leq 4^k$. Then for every $ w > 0 $, $$\sup_{f \in F'}|Z_f| \leq C \alpha
\frac{\gamma_2(F, \| \ \|_{\psi_2})}{\sqrt{k}}+
\alpha^2 w,$$ with probability larger than or equal to $1 - 3 \exp(- c''' \,k\, \min (w, w^2) )$.
Let $(F_s)_{s=0}^\infty$ be an almost optimal admissible sequence of $F'$, set $s_0$ to be the minimal integer such that $2^{s_0} > 2 k$ and fix $s_1 \le s_0$ to be determined later. Since $|F'| \leq 4^k$, it follows that $F_s=F'$ for every $s \geq
s_0$, and that $$Z_f-Z_{\pi_{s_1}(f)}=\sum_{s=s_1+1}^{s_0}
\left(Z_{\pi_s(f)}-Z_{\pi_{s-1}(f)}\right).$$
By Lemma \[lemma:psi-2-est\], for every $f \in F'$, $1 \le s
\leq s_0$ and $u >0$, $$\begin{aligned}
{\mathbb{P}}\Bigl( \left |Z_{\pi_{s}(f)}-Z_{\pi_{s-1}(f)}\right| &\geq
u\alpha \sqrt{\frac{2^{s}}{k}}\,
\|\pi_{s}(f)-\pi_{s-1}(f)\|_{\psi_2}\Bigr) \\
& \leq 2\exp(-c_1 \min\bigl( (u \sqrt{2^s/k}), (u \sqrt{2^s/k})^2\bigr))\\
& \leq 2\exp(-c_1 \min(u,u^2)2^{s-2}).\end{aligned}$$ (For the latter inequality observe that if $s \leq s_0$ then $2^s/k \leq 4$, and thus $\min\bigl( (u \sqrt{2^s/k}), (u
\sqrt{2^s/k})^2 \bigr) \ge \min (u, u^2 )\, 2^s/(4\,k)$.)
Taking $u$ large enough (for example, $u = 2^5/c_1$ will suffice) we may ensure that $$\sum_{s=s_1+1}^{s_0} 2^{2^{s+2}}\exp(-c_1 u2^{s-2})
\leq \sum_{s=s_1+1}^{s_0} \exp(-2^{s+3})
\leq
\exp(- 2^{s_1}).$$ Therefore, since there are at most $2^{2^{s+2}}$ possible pairs of $\pi_{s+1}(f)$ and $\pi_s(f)$, there is a set of probability at most $\exp(- 2^{s_1})$ such that outside this set we have $$\sup_{f \in F'}|Z_f-Z_{\pi_{s_1}(f)}|
\leq \frac{\alpha \, u }{\sqrt k}\,
\sum_{s=s_1}^{s_0} 2^{s/2}
\|\pi_{s+1}(f)-\pi_s(f)\|_{\psi_2} \le
c' \alpha \frac{\gamma_2(F)}{\sqrt{k}}.$$
Denote $F_{s_1}$ by $F''$ and observe that $|F''| \le 2^{2^{s_1}}$. Thus the latter estimate implies $$\sup_{f \in F'}|Z_f| \leq
c' \alpha \frac{\gamma_2(F)}{\sqrt{k}} +
\sup_{g \in F''}|Z_g|.$$ Applying Lemma \[lemma:psi-2-est\], for every $ w >0$ we get $${\mathbb{P}}\left(|Z_{g}| \geq \alpha ^2 w \right) \leq
2\exp(- c_1 k \min(w, w^2)).$$ Thus, given $w >0$, choose $s_1\le s_0$ to be the largest integer such that $2^{s_1} < c_1 k \min(w, w^2)/2$. Therefore, outside a set of probability less than or equal to $|F''|\, 2\exp(- c_1 k \min(w, w^2)) \le
\exp(- c_1 k \min(w, w^2)/2)$ we have $|Z_g| \le \alpha ^2 w $ for all $g \in F''$. To conclude, outside a set of probability $3\exp(- c_1 k \min(w, w^2)/2 )$, $$\sup_{f \in F'}|Z_f| \leq
c' \alpha \frac{\gamma_2(F)}{\sqrt{k}} + \alpha ^2 w,$$ as required.
[**Proof of Theorem \[thm:general\].**]{} Fix an arbitrary $\rho >0$, and for the purpose of this proof we let $ F(\rho) = F/\rho \cap S_{L_2}$, where $F/\rho=\{f/\rho : f\in F\}$.
Our first and main aim is to estimate $\sup_{f \in F(\rho)} |Z_f|$ on a set of probability close to 1.
Fix $ u, w >0$ to be determined later. Let $F' \subset F(\rho) $ be as in Lemma \[lemma:chain\], with $|F'|
\le 4^ k$. For every $f \in F(\rho) $ denote by $\pi(f)=\pi_{F'}(f)$ a closest point to $f$ with respect to the $\psi_2$ metric on $F(\rho)$. By writing $ f =( f -\pi(f))+\pi(f)$, it is evident that $$|Z_f| \leq W_{f-\pi(f)}^2+2W_{f-\pi(f)}W_{\pi(f)} + |Z_{\pi(f)}|,$$ and thus, $$\label{three_terms}
\sup_{f \in F(\rho)} |Z_f| \leq \sup_{f \in F(\rho)} W_{f-\pi(f)}^2
+ 2\sup_{f \in F(\rho)} W_{f-\pi(f)} \sup_{g \in F'} W_{g}
+ \sup_{g \in F'}|Z_g|.$$
Applying Lemma \[lemma:chain\], the first term in (\[three\_terms\]) is estimated using the fact that $$\sup_{f \in F(\rho)} W_{f-\pi(f)}
\le C \frac{\gamma_2(F(\rho), \| \ \|_{\psi_2})}{\sqrt k},$$ with probability at least $1 - \exp(- k)$.
For every $f \in F(\rho) $ and every $u >0$ we have $${\mathbb{P}}\left\{ W_f \ge 1 + u \alpha ^2 \right\} \le
{\mathbb{P}}\left\{ W_f^2 \ge 1 + u \alpha ^2 \right\}
\le {\mathbb{P}}\left\{ | Z_f| \ge u \alpha ^2 \right\}$$ and, by Lemma \[lemma:psi-2-est\], the latter probability is at most $ 2 \exp (-c\, k \min(u, u^2))$, where $c >0$ is an absolute constant.
Combining these two estimates with Lemma \[lemma:small-set\] and substituting into (\[three\_terms\]), $\sup_{f \in F(\rho)} |Z_f|$ is upper bounded by
$$\begin{aligned}
\label{three_terms_est}
\lefteqn{
C^2 \frac{\gamma_2(F(\rho), \| \ \|_{\psi_2})^2} { k}
+ C \frac{\gamma_2(F(\rho), \| \ \|_{\psi_2})}{\sqrt k}
(1+ u \alpha^2)} \nonumber\\
& & + C'' \alpha
\frac{\gamma_2(F(\rho), \| \ \|_{\psi_2})}{\sqrt k} +
\alpha^2 w,\end{aligned}$$
with probability at least $1 - 2 e^{- k} - 2 e^{-c k
\min(u, u^2)} - 3 e^{-c k \min(w, w^2)} $.
Given $0 < \theta < 1$ we want the condition $\sup_{f \in F(\rho)}
|Z_f| \le \theta $ to be satisfied with probability close to 1. This can be achieved by imposing suitable conditions on the parameters involved. Namely, if $u = 1/\alpha^2 < 1$ and if $\rho > 0 $ and $ w >0$ satisfy $$\label{condition1}
\tilde{C}\, \alpha\, \frac{\gamma_2(F(\rho), \| \ \|_{\psi_2})}{\sqrt k}
\le \theta/4,
\qquad
C''' \alpha^2 w \le \theta/4,$$ where $\tilde {C} = \max(2\, C, C'')$, then each of the last three terms in (\[three\_terms\_est\]) is less than or equal to $\theta /4$, and the first term is less than or equal to $(\theta/4)^2$.
In order to ensure that (\[condition1\]) holds, we let $w =
\min\bigl(1, \theta/(4 C''' \alpha^2)\bigr)$. The above discussion shows that as long as $\rho$ satisfies $$\label{condition-rho}
4 \,\tilde{C}\, \alpha\, \frac{\gamma_2(F(\rho), \| \ \|_{\psi_2})}
{\theta \sqrt k} \le 1,$$ then $\sup_{f \in F(\rho)} |Z_f| \le \theta$ on a set of measure larger than or equal to $1 - 7 e^{- c k \theta^2/\alpha^4}$, where $c >0$ is an absolute constant. Hence, whenever $\rho$ satisfies (\[condition-rho\]) then (\[two\_sided\_gen\]) holds for all $f \in F(\rho)$. Finally, note that $\gamma_2(F(\rho), \|\ \|_{\psi_2}) = (1/\rho)\gamma_2(F \cap \rho
S_{L_2}, \|\ \|_{\psi_2})$, and thus (\[condition-rho\]) is equivalent to the inequality in the definition of $ r^*_k(\theta) $.
To conclude the proof, for a fixed $0 < \theta <1$ set $r = r_k^*(\theta)$, with $c = 4 \tilde{C}$ being the constant from (\[condition-rho\]). Note that if $X_1,...,X_k$ satisfy (\[two\_sided\_gen\]) for all $f \in
F(r)$ then, since $F$ is star-shaped, the homogeneity of this condition implies that the same holds for all $f \in F$ with ${\mathbb{E}}f^2 \ge r^2$, as claimed.
Let us note two consequences for the supremum of the process $Z_f$, which are of independent interest.
\[est\_sup\_Z\] There exist absolute constants $C', c' >0$ for which the following holds. Let $F \subset S_{L_2}$, $\alpha = {\mathop{\rm diam}}(F,
\|\ \|_{\psi_2})$ and $k \ge 1$. With probability at least $1- \exp\left(- c'
\gamma_2^2(F, \| \ \|_{\psi_2}) /\alpha^3\right)$ one has $$\sup_{f \in F} |Z_f| \le
C'\, \alpha\, \max\Bigl(\frac{\gamma_2(F, \| \ \|_{\psi_2})}{\sqrt k},
\frac{\gamma_2^2(F, \| \ \|_{\psi_2})}{ k}\Bigr).$$ Moreover, if $F$ is symmetric, $${\mathbb{E}}\sup_{f \in F} |Z_f| \le
C'\, \alpha\, \max\Bigl(\frac{\gamma_2(F, \| \ \|_{\psi_2})}{\sqrt k},
\frac{\gamma_2^2(F, \| \ \|_{\psi_2})}{ k}\Bigr).$$
This follows from the proof of Theorem \[thm:general\] with $\rho =1$. More precisely, the first part is a direct consequence of (\[three\_terms\_est\]).
For the “moreover” part, first use (\[three\_terms\]) for expectations, estimate the middle term by the Cauchy-Schwarz inequality and note that $W_g^2 \le 1 + Z_g$ for all $g \in F'$; to estimate ${\mathbb{E}}\sup_{f \in
F} |Z_f|$ it then suffices to bound $${\mathbb{E}}\sup_{f \in F'} |Z_f|
\quad {\rm and} \quad
{\mathbb{E}}\sup_{f \in F} W_{f -\pi(f)}^2.$$
For simplicity denote $\gamma_2(F, \| \ \|_{\psi_2})$ by $\gamma_2(F)$ and let us begin with the second term. Applying Remark \[rem:expect\] and setting $G=\{f - \pi(f) : f \in F\}$ and $u = c'
\gamma_2(F)/\sqrt{k}$, where $c' $ is the constant from the remark, we obtain $$\begin{aligned}
\int_0^\infty {\mathbb{P}}\left(\sup_{g \in G} W_g^2 \geq t\right)dt & \leq
u^2 + \int_{u^2}^\infty {\mathbb{P}}\left(\sup_{g \in G} W_g^2 \geq t
\right)dt
\\
& \leq u^2+u^2 \int_1^\infty \exp(-c'' v k)\, dv,\end{aligned}$$ where the last inequality follows by changing the integration variable to $t=u^2 v$. This implies that $${\mathbb{E}}\sup_{f \in F} W_{f-\pi(f)}^2 \leq C' \frac{\gamma_2^2(F,
\|\ \|_{\psi_2})}{k},$$ for some absolute constant $C' >1$.
Next, we have to bound ${\mathbb{E}}\sup_{f \in F'} |Z_f|$, and to that end we use Lemma \[lemma:small-set\]. Setting $u = 2 C \alpha \gamma_2(F)/\sqrt{k}$, and then changing the integration variable to $t = u/2+\alpha^2w$, it is evident that $$\begin{aligned}
\int_0^\infty {\mathbb{P}}\left(\sup_{f \in F'} |Z_f| \geq t \right)dt
\leq & \, u + \int_{u}^\infty {\mathbb{P}}\left(\sup_{f \in F'}
|Z_f| \geq t \right)dt \\
= & \,u + \alpha^2 \int_{u/2\alpha^2}^\infty {\mathbb{P}}\left(\sup_{f \in F'}
|Z_f|
\geq \frac{u}{2} +\alpha^2 w \right)dw \\
\leq &\, u + 3 \, \alpha^2
\int_{u/2\alpha^2}^\infty \exp\left(-c'''k\min(w^2,w)\right)dw.\end{aligned}$$ Changing variables in the last integral $w=ru/2\alpha^2$ and using the fact that $\gamma_2(F) \geq 1$ for a symmetric set $F$ in the unit sphere, the last expression is bounded above by $$u + (3/2) u
\int_{1}^\infty \exp\left(-c'''\min\left(\frac{r}{2\alpha^2},
\Bigl(\frac{r}{2\alpha^2}\Bigr)^2\right)\right)dr =
C' u,$$ where $C' >0$ is an absolute constant.
Subgaussian Operators {#random_proj}
=====================
We now illustrate the general result of Section \[empirical\] in the case of linear processes, which was the topic that motivated our study. These processes correspond to random matrices with rows distributed according to measures on ${\mathbb{R}}^n$ satisfying some natural geometric conditions. Our results imply concentration estimates for the related random subgaussian operators, which eventually lead to the desired reconstruction results for linear measurements for general sets.
The fundamental result that allows us to pass from the purely metric statement of the previous section to the geometric result we present below follows from Talagrand’s lower bound on the expectation of the supremum of a Gaussian process in terms of $\gamma_2$ of the indexing set. To present our result, the starting point is the fundamental definition of the ${\ell_*}$-functional (which is in fact the so-called $\ell$-functional of a polar set).
\[def:gauss\] Let $T \subset {\mathbb{R}}^n$ and let $g_1,...,g_n$ be independent standard Gaussian random variables. Denote by ${\ell_*}(T)={\mathbb{E}}\sup_{t \in T}
\left| \sum_{i=1}^n g_i t_i\right|$, where $ t = (t_i)_{i=1}^n \in
{\mathbb{R}}^n$.
There is a close connection between the ${\ell_*}$- and $\gamma_2$-functionals given by the majorizing measure theorem. Let $\{G_t: t \in T\}$ be a Gaussian process indexed by a set $T$, and for every $s, t \in T$, let $d^2 (s,t) = {\mathbb{E}}|G_s-G_t|^2$. Then $$c_2 \gamma_2(T,d) \leq {\mathbb{E}}\sup_{t \in T} |G_t| \leq c_3 \gamma_2(T,d),$$ where $c_2, c_3>0$ are absolute constants. The upper bound is due to Fernique [@F] and the lower bound was established by Talagrand [@Ta]. The proof of both parts and the most recent survey on the topic can be found in [@Tal-book].
In particular, if $T \subset {\mathbb{R}}^n$ and $G_t = \sum g_i t_i$, then $d(s, t)
= |s-t|$, and thus $$\label{maj_meas}
c_2 \gamma_2(T,|\ |) \leq {\ell_*}(T)\leq c_3 \gamma_2(T,|\ |).$$
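For instance, for $T = B_1^n$ one has ${\ell_*}(B_1^n) = {\mathbb{E}}\max_{i \le n} |g_i|$, which is of order $\sqrt{\log n}$; this is easy to confirm by simulation (NumPy sketch; the sizes, seed and trial count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 1000, 2000
g = rng.standard_normal((trials, n))
ell_star = np.abs(g).max(axis=1).mean()    # l_*(B_1^n) = E max_i |g_i|
print(ell_star / np.sqrt(2 * np.log(n)))   # ratio approaches 1 as n grows
```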
\[def:measure\] A probability measure $\mu$ on ${\mathbb{R}}^n$ is called isotropic if for every $y \in {\mathbb{R}}^n$, ${\mathbb{E}}|{\left<X,y\right>}|^2 = |y|^2$, where $X$ is distributed according to $\mu$.
A measure $\mu$ satisfies a $\psi_2$ condition with a constant $\alpha$ if for every $y \in
{\mathbb{R}}^n$, $\|{\left<X,y\right>}\|_{\psi_2} \leq \alpha |y|$.
A subgaussian or $\psi_2$ operator is a random operator of the form $$\label{Gamma}
\Gamma=\sum_{i=1}^k {\left<X_i,\,.\,\right>}e_i$$ where the $X_i$ are distributed according to an isotropic $\psi_2$ measure.
Perhaps the most important example of an isotropic $\psi_2$ probability measure on ${\mathbb{R}}^n$ with a bounded constant other than the Gaussian measure is the uniform measure on $\{-1,1\}^n$. Naturally, if $X$ is distributed according to a general isotropic $\psi_2$ measure then the coordinates of $X$ need no longer be independent. For example, the normalized Lebesgue measure on an appropriate multiple of the unit ball in $\ell_p^n$ for $2 \leq p \leq \infty$ is an isotropic $\psi_2$ measure with a constant independent of $n$ and $p$. For more details on such measures see [@MP].
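Isotropicity of the uniform measure on the cube is elementary (the coordinates are independent with mean zero and variance one) and can be verified exactly by enumeration for small $n$; a NumPy sketch (the helper name and the test vector are illustrative):

```python
import itertools
import numpy as np

def isotropic_on_cube(y):
    """Check E<X, y>^2 = |y|^2 for X uniform on {-1,1}^n by
    averaging over all 2^n sign vectors."""
    n = len(y)
    cube = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
    second_moment = np.mean((cube @ y) ** 2)   # exact expectation, cross terms cancel
    return np.isclose(second_moment, np.dot(y, y))

print(isotropic_on_cube(np.array([0.5, -1.0, 2.0, 0.25])))  # True
```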
For a set $T \subset {\mathbb{R}}^n$ and $\rho >0$ let $$\label{Trho}
T_\rho = T \cap \rho S^{n-1}.$$
The next result shows that given $T\subset {\mathbb{R}}^n$, a subgaussian operator is very close to being an isometry on the subset of elements of $T$ which have a “large enough" norm.
\[thm:psi2\] There exist absolute constants $c, \bar{c} >0$ for which the following holds. Let $T \subset {\mathbb{R}}^n$ be a star-shaped set. Let $\mu$ be an isotropic $\psi_2$ probability measure with constant $\alpha \ge 1$. Let $ k \ge 1$, and $X_1, \ldots, X_k$ be independent, distributed according to $\mu$ and define $\Gamma$ by (\[Gamma\]). For $0 < \theta <1$, with probability at least $1- \exp(- \bar{c}\, \theta^2 k/\alpha ^4)$, for all $x \in T$ such that $|x| \ge r_k^*(\theta)$, we have $$\label{two_sided}
(1 - \theta) |x|^2 \le \frac{|\Gamma x|^2}{ k} \le (1+\theta) |x|^2,$$ where $$\label{rho_star}
r^*_k(\theta) = r^*_k (\theta, T) := \inf \Bigl\{ \rho>0 : \rho \ge
c \, \alpha^2 {\ell_*}(T_\rho)/ \bigl(\theta \sqrt k \bigr) \Bigr\}.$$ In particular, with the same probability, every $x \in T$ satisfies $$|x|^2 \leq \max \Bigl\{(1-\theta)^{-1} |\Gamma x|^2/k, \
r^*_k(\theta)^2 \Bigr\}.$$
We use Theorem \[thm:general\] for the set of functions $F $ consisting of linear functionals of the form $f = f_x = {\left<\cdot,
x\right>}$, for $x \in T$. By the isotropicity of $\mu$, $\|f\|_{L_2} =
|x| $ for $f = f_x \in F $. Also, since $\mu$ is $\psi_2$ with constant $\alpha$ then it follows by (\[maj\_meas\]) that for all $\rho >0$, $$\gamma_2(F \cap \rho S_{L_2},
\|\ \|_{\psi_2}) \le \alpha\, \gamma_2(F \cap \rho S_{L_2}, \|\ \|_{L_2})
\le ( \alpha /c_2) \, {\ell_*}(T_\rho),$$ as promised.
\[theta-large\] [It is clear from the proof of Theorem \[thm:general\] that the upper estimates in (\[two\_sided\]) hold for $\theta \ge 1$ as well, with appropriate probability estimates and a modified expression for $r_k^*$ in (\[rho\_star\]). Of course, in this case the lower estimate in (\[two\_sided\]) becomes vacuous. The same remark is valid for the estimate in (\[two\_sided\_gen\]) as well. ]{}
The last result immediately leads to an estimate for the diameter of random sections of a set $T$ in ${\mathbb{R}}^n$, given by kernels of random operators $\Gamma$; it is a $\psi_2$-counterpart of the main result from [@PT] (see also [@PTsem]).
\[sections\] There exist absolute constants $\tilde{c}, \tilde{c}' >0$ for which the following holds. Let $T \subset {\mathbb{R}}^n$ be a star-shaped set and let $\mu$ be an isotropic $\psi_2$ probability measure with constant $\alpha \ge 1$. Set $ k \ge 1$, put $X_1, \ldots, X_k$ to be independent, distributed according to $\mu$ and define $\Gamma$ by (\[Gamma\]). Then, with probability at least $1
- \exp(- \tilde{c} k /\alpha^4)$, $${\mathop{\rm diam}}(\ker \Gamma \cap T) \le r_k^*(T),$$ where $$\label{rho_star2}
r^*_k = r^*_k ( T) := \inf \Bigl\{ \rho>0 : \rho \ge
\tilde{c}' \, \alpha^2 {\ell_*}(T_\rho)/ \sqrt k \Bigr\}.$$ Moreover, with the same probability, ${\mathop{\rm diam}}(\ker \Gamma \cap T) \le
\tilde{c}' \, \alpha^2 {\ell_*}(T)/ \sqrt k $.
The Gaussian case (that is, when $\mu$ is the standard Gaussian measure on ${\mathbb{R}}^n$), although not explicitly stated in [@PT], follows immediately from the proof in that paper. The parameter $\inf\{\rho>0 : {\ell_*}(T\cap \rho B_2^n)\le C\rho\sqrt k \}$ was introduced in [@PTsem].
A version of Corollary \[sections\] for random $\pm1$-vectors follows from the result in [@Art], as observed in [@MP].
[**Proof of Corollary \[sections\].**]{} Applying Theorem \[thm:psi2\] with $\theta = 1/2$, say, we get that if $x \in T$ and $|x| \ge r_k^*(1/2)$ then $\Gamma x \ne 0$. Thus for $ x \in \ker \Gamma \cap T$ we have $|x| \le r_k^*(1/2)$ and the first conclusion follows by adjusting the constants.
Observe that since the function ${\ell_*}(T_\rho)/\rho =
{\ell_*}\bigl((1/\rho)T \cap S^{n-1}\bigr)$ is decreasing in $\rho$, $r_k^*$ actually satisfies the equality in the defining formula (\[rho\_star2\]). Combining this with the obvious estimate ${\ell_*}(T_\rho) \le {\ell_*}(T)$ concludes the “moreover” part.
Finally, let us note a special case of Theorem \[thm:psi2\] for subsets of the sphere.
\[expect\] Let $T \subset S^{n-1}$ and let $\mu$, $\alpha$, $k$, $X_i$, $\Gamma$ and $\theta$ be the same as in Theorem \[thm:psi2\]. As long as $k$ satisfies $k \ge (c' \, \alpha^4/\theta^2) {\ell_*}(T
)^2$, then with probability at least $1- \exp(- \bar{c}\, \theta^2
k/\alpha ^4)$, for all $x \in T$, $$\label{two_sided-2}
1 - \theta \le \frac{|\Gamma x|}{ \sqrt k} \le 1+\theta,$$ where $c, \bar{c} >0$ are absolute constants.
Let $c, \bar{c} >0$ be the constants from Theorem \[thm:psi2\]. Observe that the condition on $k$, with $c' =
c^2$, is equivalent to $r_k^*(\theta, \tilde{T}) \le 1$, where $\tilde{T} = \{ \lambda x : x \in T, 0 \le \lambda \le 1 \}$. Then (\[two\_sided-2\]) immediately follows from (\[two\_sided\]).
Approximate reconstruction {#sec:app-rec}
==========================
Next, we show how one can apply Theorem \[thm:psi2\] to reconstruct any fixed $v \in T$ for any set $T \subset {\mathbb{R}}^n$, where the data at hand are linear subgaussian measurements of the form ${\left<X_i,v\right>}$.
The reconstruction algorithm we choose is as follows: for a fixed ${\varepsilon}>0$, find some $t \in T$ such that $$\left(\frac{1}{k}\sum_{i=1}^k \left({\left<X_i,v\right>} -
{\left<X_i,t\right>}\right)^2\right)^{1/2} \leq {\varepsilon}.$$ The fact that we only need to find $t$ for which $({\left<X_i,t\right>})_{i=1}^k$ is close to $({\left<X_i,v\right>})_{i=1}^k$ rather than equal to it, is very important algorithmically because it is a far simpler problem.
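When $T = B_1^n$, for instance, this step amounts to minimizing the empirical error over the $\ell_1$ ball, for which projected gradient descent is one natural (though by no means the only) choice. The sketch below (NumPy; the sort-based projection routine is standard, while the sizes, seed and iteration count are illustrative assumptions) recovers a sparse $v \in B_1^n$ from $k$ Gaussian measurements.

```python
import numpy as np

def project_l1(w, radius=1.0):
    """Euclidean projection onto the l1 ball, via the sort-based method."""
    if np.abs(w).sum() <= radius:
        return w
    u = np.sort(np.abs(w))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(w) + 1) > css - radius)[0][-1]
    shift = (css[rho] - radius) / (rho + 1.0)
    return np.sign(w) * np.maximum(np.abs(w) - shift, 0.0)

rng = np.random.default_rng(2)
n, k = 50, 30
v = np.zeros(n)
v[[5, 20]] = [0.6, -0.4]                   # a 2-sparse point of B_1^n
X = rng.standard_normal((k, n))            # subgaussian (here Gaussian) X_i
y = X @ v                                  # the measurements <X_i, v>
L = np.linalg.norm(X, 2) ** 2 / k          # Lipschitz constant of the gradient
t = np.zeros(n)
for _ in range(5000):                      # projected gradient descent over B_1^n
    t = project_l1(t - (X.T @ (X @ t - y) / k) / L)
emp_err = np.sqrt(np.mean((X @ t - y) ** 2))
print(emp_err, np.linalg.norm(t - v))      # small empirical error, small |t - v|
```

In line with the discussion that follows, a small empirical error $\varepsilon$ forces $|t - v|$ to be small as well.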
Let us show why such an algorithm can be used to solve the approximate reconstruction problem.
Consider ${\bar T}=\left\{\lambda(t-s):t,s \in T, \ 0 \leq \lambda
\leq 1\right\}$ and observe that by Theorem \[thm:psi2\], for every $0 < \theta <1$, with high probability, every such $t \in T$ satisfies $$|t-v| \leq \frac{{\varepsilon}}{1-\theta} + r_k^*(\theta,\bar{T}).$$ Hence, to bound the reconstruction error, one needs to estimate $r_k^*(\theta,\bar{T})$. Of course, if $T$ happens to be convex and symmetric then ${\bar T} \subset 2T$, which is star-shaped, and thus $$|t-v| \leq \frac{{\varepsilon}}{1-\theta}+r^*_k(\theta,2T).$$ In the more general case, when $T$ is symmetric and quasi-convex with constant $a \ge 1$ (i.e., $T + T \subset 2 a T$ and $T$ is star-shaped), then $$|t-v| \leq \frac{{\varepsilon}}{1-\theta} + r_k^*(\theta, a T).$$
Therefore, in the quasi-convex case, the ability to approximate any point in $T$ using this kind of random sampling depends on the expectation of the supremum of a Gaussian process indexed by the intersection of $T$ and a sphere of radius $\rho$ as a function of the radius. For a general set $T$, the reconstruction error is controlled by the behavior of the expectation of the supremum of the Gaussian process indexed by the intersection of $\bar{T}$ with spheres of radius $\rho$, and this function of $\rho$ is just the modulus of continuity of the Gaussian process indexed by the set $\{\lambda t : 0 \leq \lambda \leq 1, \ t \in T \}$ (i.e., the expectation of the supremum of the Gaussian process indexed by the set $\{\lambda(t-s) : 0 \leq \lambda \leq 1, \ t,s \in T, \ |t-s| =
\rho\}$).
The parameters $r_k^*(\theta, T)$ can be estimated for the unit balls of classical normed or quasi-normed spaces. The two examples we consider here are the unit ball of $\ell_1^n$, denoted by $B_1^n$, and the unit balls of the weak-$\ell_p^n$ spaces $\ell_{p,\infty}^n$ for $0<p<1$, denoted by $B_{p,\infty}^n$. Recall that $B_{p,\infty}^n$ is the set of all $x = (x_i)_{i=1}^n \in {\mathbb{R}}^n$ such that $|\{i: |x_i|\ge s\}|\le s^{-p}$ for all $s>0$, and observe that $B_{p,\infty}^n$ is a quasi-convex body with constant $a = 2^{1/p}$. Let us mention that there is nothing “magical” about the examples we consider here. Those are simply the cases considered in [@CT1; @RV].
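For concreteness, the defining condition of $B_{p,\infty}^n$ can be checked through the nonincreasing rearrangement: $x \in B_{p,\infty}^n$ if and only if $x_i^* \le i^{-1/p}$ for every $i$. A small Python sketch (the function name is ours):

```python
import numpy as np

def in_weak_lp_ball(x, p):
    """Membership in B_{p,infty}^n, i.e. |{i : |x_i| >= s}| <= s^{-p} for all s > 0.
    Equivalently, the nonincreasing rearrangement satisfies x_i^* <= i^{-1/p}."""
    xs = np.sort(np.abs(x))[::-1]          # nonincreasing rearrangement x_1^* >= x_2^* >= ...
    i = np.arange(1, len(xs) + 1)
    return bool(np.all(xs <= i ** (-1.0 / p)))

p = 0.5                                    # then i^{-1/p} = i^{-2} = 1, 1/4, 1/9, 1/16, ...
inside = in_weak_lp_ball([1.0, 0.25, 0.10, 0.05], p)   # each x_i^* <= i^{-2}
outside = in_weak_lp_ball([1.0, 0.50, 0.10, 0.05], p)  # fails: 0.5 > 1/4 at i = 2
```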
In order to bound $r_k^*$ for these sets we shall use the approach from [@GLMP], and combine it with Theorem \[thm:psi2\] to recover and extend the results from [@CT1; @RV].
\[cor:B\_1\^n\] There is an absolute constant ${\bar c}$ for which the following holds. Let $1 \le k \le n$ and $0<\theta <1$, and fix ${\varepsilon}>0$. Let $\mu$ be an isotropic $\psi_2$ probability measure on ${\mathbb{R}}^n$ with constant $\alpha$, and let $X_1, \ldots, X_k$ be independent, distributed according to $\mu$. For any $0 < p <1 $, with probability at least $1-\exp(-{\bar{c}}\theta^2k/\alpha^4)$, if $v,y
\in B_{p, \infty}^n$ satisfy $(\sum_{i=1}^k {\left<X_i,v-y\right>}^2 /k)^{1/2} \leq
{\varepsilon}$, then $$|y-v| \leq \frac{{\varepsilon}}{1-\theta} +
2^{1/p +1} \Bigl(\frac{1}{p} -1\Bigr)^{-1}
\left(C_{\alpha, \theta}\, \frac{\log(C_{\alpha, \theta}n/k)}{k}\right)
^{1/p - 1/2},$$ where $C_{\alpha, \theta} = c \alpha^4/\theta^2$ and $c >0$ is an absolute constant.
If $v,y \in B_1^n$ satisfy the same assumption then with the same probability estimate, $$|y-v| \leq \frac{{\varepsilon}}{1-\theta} +
\left( C_{\alpha, \theta}\,\frac{\log(C_{\alpha, \theta}n/k)}{k}
\right)^{1/2}.$$
To prove Theorem \[cor:B\_1\^n\], we require the following elementary fact.
\[lemma:inter\] Let $0 < p <1$ and $1 \leq m \leq n$. Then, for every $x \in
{\mathbb{R}}^n$, $$\sup_{z \in B_{p,\infty}^n\cap \rho B_2^n} {\left<x,z \right>} \leq 2\rho
\left(\sum_{i=1}^m {x_i^*}^2 \right)^{1/2},$$ where $\rho=\left(1/p-1\right)^{-1}m^{1/2-1/p}$ and $(x_i^*)_{i=1}^n$ is a non-increasing rearrangement of $(|x_i|)_{i=1}^n$.
Moreover, $$\sup_{z \in B_1^n \cap \rho B_2^n } {\left<x,z \right>} \leq 2\rho
\left(\sum_{i=1}^m {x_i^*}^2 \right)^{1/2},$$ with $\rho=1/\sqrt{m}$.
We will present a proof for the case $0<p<1$. The case of $B_1^n$ is similar.
Recall the well-known fact that for two sequences of positive numbers $a_i, b_i$ such that $a_1 \ge a_2 \ge \ldots$, the sum $\sum a_i b_{\pi(i)}$ is maximal over all permutations $\pi$ of the index set if $b_{\pi(1)} \ge b_{\pi(2)} \ge \ldots$. It follows that, for any $\rho>0$, $m\ge 1$ and $z \in
B_{p,\infty}^n \cap \rho B_2^n$, $$\begin{aligned}
{\left<x,z \right>} \leq & \rho \left(\sum_{i=1}^m {x_i^*}^2
\right)^{1/2} +\sum_{i>m} \frac{x_i^*}{i^{1/p}} \\
\leq & \left(\sum_{i=1}^m {x_i^*}^2 \right)^{1/2}
\left(\rho+\frac{1}{\sqrt{m}}\sum_{i >m} \frac{1}{i^{1/p}} \right)
\\
\leq & \left(\sum_{i=1}^m {x_i^*}^2 \right)^{1/2}
\left(\rho+\left(\frac{1}{p}-1\right)^{-1}
\frac{1}{m^{1/p-1/2}}\right).\end{aligned}$$ By the definition of $\rho$, this completes the proof.
Consider the set of elements in the unit ball with “short support”, defined by $$U_m=\left\{x \in S^{n-1} : \left| \left\{i: x_i \not = 0 \right\}
\right| \leq m \right\}.$$
Note that Lemma \[lemma:inter\] combined with a duality argument implies that for every $1 \leq m \leq n$ and every $I \subset
\{1,...,n\}$ with $|I| \leq m$, $$\label{eq:Um}
\sqrt{|I|}\, B^n_1\cap S^{n-1}\subset 2 {\mathop{\rm conv}}U_{m} \cap
S^{n-1}.$$
The next step is to bound the expectation of the supremum of the Gaussian process indexed by $U_m$.
\[lemma:gauss-Um\] There exists an absolute constant $c$ such that for every $1 \leq m
\leq n$, $${\ell_*}({\mathop{\rm conv}}U_m) \leq c \sqrt{m\log(cn/m)}.$$
Recall that for every $1 \leq m \leq n$, there is a set $\Lambda_m$ of cardinality at most $5^m$ such that $B_2^m \subset 2
{\mathop{\rm conv}}\Lambda_m$ (for example, a successive approximation shows that we may take as $\Lambda_m$ a $1/2$-net in $B_2^m$). Hence there is a subset $\Lambda$ of $B_2^n$ of cardinality at most $5^m
\binom{n}{m}$ such that $U_m \subset 2{\mathop{\rm conv}}\Lambda $. It is well known (see, for example, [@LT]) that for every $T \subset B_2^n$, $${\ell_*}({\mathop{\rm conv}}T)={\ell_*}(T) \leq c\sqrt{\log(|T|)},$$ and thus, $${\ell_*}({\mathop{\rm conv}}U_m) \leq c\sqrt{\log\Bigl(5^m \binom{n}{m}\Bigr)},$$ from which the claim follows.
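Since $\sup_{x \in U_m}{\left<g,x\right>}$ is simply the Euclidean norm of the $m$ largest-magnitude coordinates of $g$, the bound of Lemma \[lemma:gauss-Um\] is easy to probe numerically. A Monte Carlo sketch (the comparison scale $\sqrt{m\log(en/m)}$, with constants dropped, is our choice):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, trials = 200, 10, 2000

# For g ~ N(0, I_n), the sup over U_m of <g, x> is attained at the unit vector
# supported on the m largest |g_i|, so the sup is the l2 norm of those entries.
g = rng.standard_normal((trials, n))
top = np.sort(g ** 2, axis=1)[:, -m:]             # m largest squared coordinates
ell_star_est = float(np.sqrt(top.sum(axis=1)).mean())

scale = float(np.sqrt(m * np.log(np.e * n / m)))  # sqrt(m log(en/m)), constants dropped
ratio = ell_star_est / scale
```

The empirical ratio should stay bounded by a modest absolute constant, consistent with the lemma (recall also that ${\ell_*}({\mathop{\rm conv}}U_m)={\ell_*}(U_m)$).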
Finally, we are ready to estimate $r_k^*(\theta,B_{p,\infty}^n)$ and $r_k^*(\theta,B_1^n)$.
\[lemma:r-est\] There exists an absolute constant $c$ such that for any $0<p<1$ and $1 \leq k \leq n$, $$r_k^*(\theta,B_{p,\infty}^n) \leq c\left(\frac{1}{p}-1\right)^{-1}
\left(\frac{\log(cn\alpha^4/\theta^2k)}{\theta^2k/\alpha^4}\right)^{1/p-1/2}$$ and $$r_k^*(\theta,B_1^n) \leq c
\left(\frac{\log(cn\alpha^4/\theta^2k)}{\theta^2k/\alpha^4}\right)^{1/2}.$$
Again, we present a proof for $B_{p,\infty}^n$, while the treatment of $B_1^n$ is similar and thus omitted.
Let $0<p<1$ and $1 \leq k \leq n$, and set $1 \leq m \leq n$ to be determined later. Clearly, $$\left(\sum_{i=1}^m {x_i^*}^2 \right)^{1/2} = \sup_{y \in U_m}
{\left<x,y\right>},$$ and thus, by Lemma \[lemma:inter\], ${\ell_*}(B_{p,\infty}^n \cap \rho
B_2^n) \leq 2\rho{\ell_*}(U_m)$, where $\rho=(1/p-1)^{-1}m^{1/2-1/p}$. From the definition of $r_k^*(\theta)$ in Theorem \[thm:psi2\], it suffices to determine $m$ (and thus $\rho$) such that $$c\alpha^2{\ell_*}(U_m) \leq \theta \sqrt{k},$$ which, by Lemma \[lemma:gauss-Um\], reduces to $c\alpha^2
\sqrt{m\log(cn/m)} \leq \theta \sqrt{k}$ for some other numerical constant $c$. It is standard to verify that the last inequality holds true provided that $$m \leq
c\frac{\theta^2k/\alpha^4}{\log\left(cn\alpha^4/\theta^2k\right)},$$ and thus $$r_k^*(\theta,B_{p,\infty}^n) \leq c \left(\frac{1}{p}-1\right)^{-1}
\left(\frac{\log\left(cn\alpha^4/\theta^2k\right)}
{\theta^2k/\alpha^4}\right)^{1/p-1/2}.$$
[**Proof of Corollary \[cor:B\_1\^n\].**]{} The proof follows immediately from Theorem \[thm:psi2\] and Lemma \[lemma:r-est\].
Exact reconstruction {#sec:exact}
====================
Let us consider the following problem from the theory of error-correcting codes. A linear code is given by an $n\times (n-k)$ real matrix $A$. Thus, a vector $x\in {\mathbb{R}}^{n-k}$ generates the vector $Ax\in{\mathbb{R}}^n$. Suppose that $Ax$ is corrupted by a noise vector $z\in{\mathbb{R}}^n$, and the assumption we make is that $z$ is [*sparse*]{}, that is, has a short support, which we denote by ${\rm supp}(z)=\{i\,:\, z_i\not=0\}$. The problem is to [*reconstruct*]{} $x$ from the data, which is the noisy output $y=Ax+z$.
To this end, consider a $k\times n$ matrix $\Gamma$ such that $\Gamma A=0$. Then $\Gamma z=\Gamma y$, and correcting the noise is reduced to identifying the sparse vector $z$ (rather than approximating it) from the data $\Gamma z$, which is the problem we focus on here.
In this context, a linear programming approach called the [*basis pursuit*]{} algorithm, was recently shown to be relevant for this goal [@CDS]. This method is the following minimization problem $$(P)\qquad \min \|t\|_{\ell_1},\quad \Gamma t=\Gamma z$$ (and recall that the ${\ell_1}$-norm is defined by $\|t\|_{\ell_1}=\sum_{i=1}^n |t_i|$ for any $t=(t_i)_{i=1}^n
\in{\mathbb{R}}^n$).
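The minimization $(P)$ is a linear program: writing $t = u - v$ with $u, v \ge 0$ turns $\min \|t\|_{\ell_1}$ subject to $\Gamma t=\Gamma z$ into minimizing $\sum_i (u_i + v_i)$ under linear equality constraints. A hedged SciPy sketch; the Gaussian choice of $\Gamma$ and all dimensions are our illustration, not the paper's:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, k, s = 40, 30, 3

Gamma = rng.standard_normal((k, n))     # isotropic Gaussian rows: one psi_2 example
z = np.zeros(n)
z[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)  # sparse noise vector

# (P): min ||t||_1 s.t. Gamma t = Gamma z, as an LP in (u, v) with t = u - v, u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([Gamma, -Gamma])
res = linprog(c, A_eq=A_eq, b_eq=Gamma @ z, bounds=[(0, None)] * (2 * n))
t_hat = res.x[:n] - res.x[n:]
```

For this sparsity level the minimizer should coincide with $z$, in line with the exact-reconstruction results below.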
For the analysis of the reconstruction of sparse vectors by this basis pursuit algorithm, we refer to [@CDS] and the recent papers [@CT2; @CT3].
In this section, we show that if $\Gamma$ is an isotropic $\psi_2$ matrix then with high probability, for any vector $z$ whose support has size less than ${Ck\over \log (Cn/k)}$ (for some absolute constant $C$), the problem $(P)$ above has a unique solution, which is equal to $z$. This means that such random matrices can be used to reconstruct any sparse vector, as long as the size of the support is not too large. This extends the recent results proved in [@CT2] and [@RV] for Gaussian matrices.
\[thm:reconstruction\] There exist absolute constants $c,C$ and $\bar{c}$ for which the following holds. Let $\mu$ be an isotropic $\psi_2$ probability measure with constant $\alpha \ge 1$. For $ 1\le k \le n$, set $X_1, \ldots, X_k$ to be independent, distributed according to $\mu$ and let $\Gamma=\sum_{i=1}^k {\left<X_i,\cdot\right>}e_i$. Then with probability at least $1- \exp(- \bar{c}\, k/\alpha ^4)$, any vector $z$ satisfying $$|{\mathop{\rm supp}}(z)|\le {C k\over \alpha^4 \log (cn \alpha^4/k)}$$ is the unique minimizer of the problem $$(P)\qquad \min \|t\|_{\ell_1},\quad \Gamma t=\Gamma z.$$
The proof of Theorem \[thm:reconstruction\] is based on the scheme of the proof in [@CT2], but is simpler and more general, as it holds for an arbitrary isotropic $\psi_2$ random matrix.
Let us remark that problem $(P)$ is equivalent to the following one $$(P')\qquad
\min_{t\in {\mathbb{R}}^n} \|y-At\|_{\ell_1}$$ where $\Gamma A=0$. Thus we obtain the reconstruction result:
\[thm:reconstruction2\] Let $A$ be an $n\times (n-k)$ matrix. Set $\Gamma$ to be a $k\times
n$ matrix that satisfies the conclusion of the previous Theorem with the constants $c$ and $C$, and for which $\Gamma A=0$. For any $x\in {\mathbb{R}}^{n-k}$ and any $y=Ax+z$, if $|{\rm supp}(z)|\le
{Ck\over \log (cn/k)}$, then $x$ is the unique minimizer of the problem $$\min_{t\in {\mathbb{R}}^n} \|y-At\|_{\ell_1}.$$
The condition $\Gamma A=0$ means that the range of $A$ is a subspace of the kernel of $\Gamma$. Due to the rotation invariance of a Gaussian matrix (from both sides), the range and the kernel are random elements of the Grassmann manifold of the corresponding dimensions. Therefore, random Gaussian matrices $A$ and $\Gamma$ satisfy the conclusion of Corollary \[thm:reconstruction2\].
Proof of Theorem \[thm:reconstruction\]
---------------------------------------
As in [@CT2], the proof consists of finding a simple condition for a fixed matrix $\Gamma$ to satisfy the conclusion of our Theorem. We then apply a result from the previous section to show that random matrices satisfy this condition.
The first step is to provide some criteria which ensure that the problem $(P)$ has a unique solution as specified in Theorem \[thm:reconstruction\]. This convex optimization problem can be represented as a linear programming problem. Indeed, let $z\in{\mathbb{R}}^n$ and set $$\label{supports}
I^+=\{i\,:\ z_i>0\},\quad I^-=\{i\,:\ z_i<0\}, \quad I=I^+\cup I^- .$$ Denote by ${\cal C}$ the constraint cone $${\cal C}=\{t\in{\mathbb{R}}^n\,:\, \sum_{i\in I^+} t_i-\sum_{i\in I^-} t_i+
\sum_{i\in I^c} |t_i|\le 0\}$$ corresponding to the $\ell_1$-norm.
Note that if $|t|$ is small enough then $\|z+t\|_{\ell_1}=
\sum_{i\in I^+}(z_i + t_i)-\sum_{i\in I^-} (z_i+ t_i)+
\sum_{i\in I^c} |t_i|$. Thus, the solution of $(P)$ is unique and equal to $z$ if and only if $$\label{constraint condition}
\ker \Gamma\cap {\cal C}=\{0\}.$$
By the Hahn-Banach separation theorem, the latter is equivalent to the existence of a linear form $\tilde w\in {\mathbb{R}}^n$ vanishing on $\ker \Gamma$ and positive on ${\cal C}\setminus\{0\}$.
After appropriate normalization, it is easy to check that such a $\tilde w$ satisfies $\tilde w=\sum_{i=1}^k\alpha_i X_i$, ${\left<\tilde w,e_i\right>}=1$ for all $i\in I^+$, ${\left<\tilde w,e_i\right>}=-1$ for all $i\in I^-$, and $|{\left<\tilde w,e_i\right>}|<1$ for all $i\in I^c$. Setting $w=\sum_{i=1}^k\alpha_i e_i$ and noticing that ${\left<\tilde
w,e_i\right>}={\left<w,\Gamma e_i\right>}$ we arrive at the following criterion.
\[face condition\] Let $\Gamma$ be a $k\times n$ matrix and $z\in{\mathbb{R}}^n$. With the notation (\[supports\]), the problem $$(P)\qquad \min \|t\|_{\ell_1},\quad \Gamma t=\Gamma z$$ has a unique solution, equal to $z$, if and only if there exists $w\in{\mathbb{R}}^k$ such that $$\forall i\in I^+ \ {\left<w,\Gamma e_i\right>}=1,\quad
\forall i\in I^- \ {\left<w,\Gamma e_i\right>}=-1,\quad \forall i\in I^c \
|{\left< w,\Gamma e_i\right>}|<1.$$
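The criterion of Lemma \[face condition\] suggests a simple numerical check: take the least-norm $w$ solving ${\left<w,\Gamma e_i\right>} = \mathrm{sign}(z_i)$ on the support of $z$, and test the strict inequality off the support. This particular candidate gives a sufficient check only, since the lemma allows any $w$; the Gaussian choice of $\Gamma$ and the dimensions below are our illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 200, 100
Gamma = rng.standard_normal((k, n))

z = np.zeros(n)
I = np.array([2, 7, 19])                # support of z
z[I] = [1.5, -0.7, 2.0]

# Least-norm w with <w, Gamma e_i> = sign(z_i) for every i in the support I.
GI = Gamma[:, I]                        # k x |I| submatrix of the columns Gamma e_i
w = GI @ np.linalg.solve(GI.T @ GI, np.sign(z[I]))

# Strict dual feasibility off the support certifies z as the unique minimizer.
off = np.setdiff1d(np.arange(n), I)
certified = bool(np.all(np.abs(Gamma[:, off].T @ w) < 1))
```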
The second preliminary result we require follows from Corollary \[expect\] and the estimates of the previous section.
\[thm:Um\] There exist absolute constants $c, C$ and $\bar{c}$ for which the following holds. Let $\mu$, $\alpha$, $k$ and $\Gamma$ be as in Theorem \[thm:reconstruction\]. Then, for every $0 < \theta <1$, with probability at least $1- \exp(- \bar{c}\, \theta^2 k/\alpha
^4)$, every $x \in 2 {\mathop{\rm conv}}U_{4m} \cap S^{n-1}$ satisfies that $$\label{lower}
(1 - \theta) |x|^2 \le \frac{|\Gamma x|^2}{ k} \le (1+\theta) |x|^2,$$ provided that $$m \leq
C\frac{\theta^2k/\alpha^4}{\log\left(cn\alpha^4/\theta^2k\right)}.$$
[**Proof.**]{} Applying Corollary \[expect\] to $T=2 {\mathop{\rm conv}}U_{4m} \cap S^{n-1}$, we only have to check that $k \ge (c' \,
\alpha^4/\theta^2) {\ell_*}(T )^2$, which from Lemma \[lemma:gauss-Um\] reduces to verifying that $ k\ge (c'\alpha^4
/\theta^2)cm\log(cn/m) $. The conclusion now follows from the same computation as in the proof of Lemma \[lemma:r-est\].
[**Proof of Theorem \[thm:reconstruction\].**]{} Observe that if $t\in {\cal C}\cap S^{n-1}$ then $\|t\|_{\ell_1}\le 2\sum_{i\in I} |t_i|\le 2\sqrt{|I|}$, where $I$ is the support of $z$. Hence, $${\cal C}\cap S^{n-1}\subset \sqrt{4|I|}\, B^n_1\cap S^{n-1}.$$ This inclusion and condition (\[constraint condition\]) clearly imply that if $\Gamma$ does not vanish on any point of $\sqrt{4|I|}\,
B^n_1\cap S^{n-1}$, then the solution of $(P)$ is unique and equal to $z$. By (\[eq:Um\]) we have $$\sqrt{4|I|}\, B^n_1\cap S^{n-1}\subset 2{\mathop{\rm conv}}U_{4m} \cap
S^{n-1}.$$ Therefore, if $\Gamma$ does not vanish on any point of $2{\mathop{\rm conv}}U_{4m}\cap S^{n-1}$ then $z$ is the unique solution of $(P)$. Applying Theorem \[thm:Um\], the lower bound in (\[lower\]) shows that indeed, $\Gamma$ does not vanish on any point of the required set, provided that $$m\le {Ck\over \alpha^4\log (cn \alpha^4/k)}$$ for some suitable constants $c$ and $C$.
The geometry of faces of random polytopes {#faces}
-----------------------------------------
Next, we investigate the geometry of random polytopes. Let $\Gamma$ be a $k\times n$ isotropic $\psi_2$ matrix. For $1 \leq i \leq n$, let $v_i=\Gamma(e_i)$ be the column vectors of the matrix $\Gamma$, and set $K^+(\Gamma)$ (resp. $K(\Gamma)$) to be the convex hull (resp., the symmetric convex hull) of these vectors.
In this situation, the random model that makes sense is when $X=(x_i)_{i=1}^n$, where $(x_i)_{i=1}^n$ are independent, identically distributed random variables for which ${\mathbb{E}}|x_i|^2 =1$ and $\|x_i\|_{\psi_2} \leq \alpha$. It is standard to verify that in this case $X =(x_i)_{i=1}^n $ is an isotropic $\psi_2$ vector with constant $\alpha$, and moreover, each vertex of the polytope is given by $v_i=(x_{i,j})_{j=1}^k$.
A polytope is called [*$m$-neighborly*]{} if any set of fewer than $m$ vertices is the vertex set of a face. In the symmetric setting, we will say that a symmetric polytope is $m$-symmetric-neighborly if any set of fewer than $m$ vertices containing no opposite pairs is the vertex set of a face.
The condition of Lemma \[face condition\] may be reformulated by saying that the set $\{v_i\,:\, i\in I^+\}\cup \{-v_i\,:\, i\in
I^-\}$ is the vertex set of a face of the polytope $K(\Gamma)$. Thus, the condition for the exact reconstruction using the basis pursuit method for any vector $z$ with $|{\mathop{\rm supp}}(z)| \leq m$ may be reformulated as a geometric property of the polytope $K(\Gamma)$ (see [@CT2; @RV]); namely, that [*for all disjoint subsets $I^+$ and $I^-$ of $\{1,\dots, n\}$ such that $|I^+|+|I^-|\le m$, the set $\{v_i\,:\, i\in I^+\}\cup \{-v_i\,:\,
i\in I^-\}$ is the vertex set of a face of the polytope $K(\Gamma)$.*]{} That is, $K(\Gamma)$ is $m$-symmetric-neighborly. A similar analysis may be done in the non-symmetric case, for $K^+(\Gamma)$, where now $I^-$ is empty.
Let $\Gamma$, $K(\Gamma)$ and $K^+(\Gamma)$ be as above. Then the problem $$(P)\qquad \min \|t\|_{\ell_1},\quad \Gamma t=\Gamma z$$ has a unique solution, equal to $z$, for any vector $z$ (resp., $z\ge 0$) such that $|{\rm\, supp} (z)|\le m$, if and only if $K(\Gamma)$ (resp., $K^+(\Gamma)$) is $m$-symmetric-neighborly (resp., $m$-neighborly).
Applying Theorem \[thm:reconstruction\], we obtain
\[thm:neighborly\] There exist absolute constants $c,C$ and $\bar{c}$ for which the following holds. Let $\mu$ be an isotropic $\psi_2$ probability measure with constant $\alpha \ge 1$ and let $k$ and $\Gamma$ be as above. Then, with probability at least $1- \exp(- \bar{c}\, k/\alpha
^4)$, the polytopes $K^+(\Gamma)$ and $K(\Gamma)$ are $m$-neighborly and $m$-symmetric-neighborly, respectively, for every $m$ satisfying $$m\le {Ck\over \alpha^4\log (cn \alpha^4/k)}.$$
The statement of Theorem \[thm:neighborly\] for $K(\Gamma)$ and for a Gaussian matrix $\Gamma$ is the main result of [@RV]. However, a striking fact is that the same result holds for a random $\{-1,1\}$-matrix. In such a case, $K^+(\Gamma)$ is the convex hull of $n$ random vertices of the discrete cube $\{-1,+1\}^k$, also known as a random $\{-1,1\}$-polytope. With high probability, every $(m-1)$-dimensional face of $K^+(\Gamma)$ is a simplex and there are ${n\choose m}$ such faces, for $m\le Ck/ \log (cn/k)$.
[Let us mention some related results about random $\{-1,1\}$-polytopes. A result of [@BP] states that for such polytopes, the number of facets, which are the $(k-1)$-dimensional faces, may be super-exponential in the dimension $k$, for an appropriate choice of the number $n$ of vertices. Denote by $f_{q}(K^+(\Gamma))$ the number of $q$-dimensional faces of the polytope $K^+(\Gamma)$. The quantitative estimate in [@BP] was recently improved in [@GGM], where it is shown that there are positive constants $a,b$ such that for $k^a\le n\le \exp(bk)$, one has ${\mathbb{E}}f_{k-1}(K^+(\Gamma))\ge (\ln n/\ln k^a)^{k/2}.$ For lower-dimensional faces, a threshold for $f_{q}(K^+(\Gamma))/ {n \choose q+1}$ was established in [@K].]{}
[**S. Mendelson**]{} [Centre for Mathematics and its Applications, The Australian National University, Canberra, ACT 0200, Australia\
]{} [shahar.mendelson@anu.edu.au]{}\
\[.05cm\] [**A. Pajor** ]{}[Laboratoire d’Analyse et Mathématiques Appliquées, Université de Marne-la-Vallée, 5 boulevard Descartes, Champs sur Marne, 77454 Marne-la-Vallee, Cedex 2, France\
]{} [ alain.pajor@univ-mlv.fr]{}\
\[.05cm\] [**N. Tomczak-Jaegermann**]{} [Department of Mathematical and Statistical Sciences,\
University of Alberta, Edmonton, Alberta, Canada T6G 2G1\
]{} [ nicole@ellpspace.math.ualberta.ca]{}
Between the Lines: JJ Thatcher and the Art of Picking Your Battles
It is a lesson that I have learned a couple of times in my young lifetime; one that I will no doubt learn again. Take last week, for example. When his protégé won occupancy of Flagstaff House (side note: I refuse to call the building by its current name until it is given a more sensible one), one would have assumed that Uncle JJ was going to fade to black, moving away from the public eye, his mission to return his people to power accomplished. There was one moment in particular, after the results were announced, when he told journalists and cameramen gathered outside his house in anticipation of a ‘Boomism’ to head to town and follow the crowd as he should not be the centre of attention. Uncle JJ? Shying away from publicity? Glad that someone was capturing this genuinely historic moment on film, I thought to myself that we really had arrived at the end of an era. Alas: this was an assumption and, as I said earlier, one must never assume.
Last week, our former leader was back in the news berating President Mills for his directive to District and Municipal Chief Executives of the outgoing administration to remain in their posts until new appointments have been made. I wonder what Old Boom-Boom expected? Mills’ nickname is not ‘God of War’: it is ‘Asomdwehene’. Besides, the NDC won the elections by so small a margin that reassuring the half of the country whose feelings towards their party range from mild dislike to full-blown terror makes far more sense than taking revenge for treatment meted out to the party faithful all the way back in 2001.
Rawlings is by no means the first former President to make his successor’s life difficult. Britain’s Margaret Thatcher became a bane to John Major, who stepped into her high heels as head of the Conservative Party after she was pressured to resign in 1990. Thatcher was held in reverence by most party faithful but evoked very negative feelings not just among the rest of the electorate but among some members of her own party. Sound familiar? Thatcher too handpicked Major as her successor but accidentally let it slip that she expected to be a backseat driver of his administration. Even if she had not said anything, the perception was already there and it put Major in a tricky position, forcing him to demonstrate his independence from her. I repeat: does any of this sound familiar?
Major took new directions on key policies – regional integration and social policy spring to mind – and his leadership style was far less autocratic. Thatcher indicated her disappointment in him, making unhelpful comments and interventions that only deepened his public image of being a weak leader. After Tony Blair’s Labour Party crushed the Conservatives in the next election, Major would look back and describe Thatcher’s behaviour towards him as “intolerable,” accusing her of turning his government into a Greek tragedy.
Of course, Britain and Ghana are different countries. The conciliatory approach that President Mills has adopted has been largely praised and, while Mills’ movements have been described as being somewhat effeminate by one cheeky (but on-point) journalist, his performance thus far has not been considered weak.
Until now.
Health concerns eventually forced Thatcher to fade into the background and it has done her image a world of good. In recent polls though, she has come out a British hero; one that even the current Prime Minister Gordon Brown – a Labour Party supremo – has not been ashamed to say he shares certain convictions with. Ghana is held in high regard the world over for what happened here in December, but how much more amazing will it be when our presidents are able to invite their predecessors – past and historic – over for dinner at Flagstaff and freely state their admiration for them without feeling like scrubbing themselves down with industrial-strength stain remover immediately afterwards?
While I feel for President Mills, I sympathize with those who point to our constitution, indicating that it leaves Uncle JJ free to free his mind. I just wish that he would pick his moments a little more carefully. One week into your own man’s administration is a little too early to start knocking the man down. Even worse is the fact that he will go down in Ghanaian history as the first major critic to the Mills Administration… and all for a decision that has not even fully played out yet.
Rawlings’ fighting talk had its uses during the elections, mobilizing NDC supporters to march to the voting booth and kick out their supposed ‘oppressors’. However for every NDC supporter inspired by their Great Founder, there were probably two floating voters who found his style and opinion completely off-putting. A fear of moving backwards to the revolutionary days that Rawlings seems to reminisce so fondly over kept many from voting NDC in spite of the fact that they too clamoured for change. I have a feeling that the margin by which the NDC won the election might have been wider had he stayed in the background and allowed his memory to foster a little nostalgia instead of constantly holding press conferences and reminding so many people why they dislike him.
The cliché thing to do when talking about African leaders is to wheel out Saint Nelson and start using words like ‘legendary’, ‘heroic’ and ‘exemplary’. While I am loath to fall into that trap, it has to be said that Uncle JJ’s fellow Forum of African Elder Statesmen member sure knew how to pick his battles with his successor’s administration. The biggest single criticism that Mandela had of his successor’s government – one he never let him forget – was the Mbeki Administration’s stance on HIV/AIDS. Now that is something worth fighting your own about.
One might have assumed that the Rawlings Clan would have spent the past few days celebrating the dropping of charges against Mrs. Rawlings and bonding over the juicy prospect of suing the former President.
1 It is commonly said: If a man put away his wife, and she go from him, and marry another man, shall he return to her any more? shall not that woman be polluted, and defiled? but thou hast prostituted thyself to many lovers: nevertheless return to me, saith the Lord, and I will receive thee.
2 Lift up thy eyes on high: and see where thou hast not prostituted thyself: Thou didst sit in the ways, waiting for them as a robber in the wilderness: and thou hast polluted the land with thy fornications, and with thy wickedness.
3 Therefore the showers were withholden, and there was no lateward rain: thou hadst a harlot's forehead, thou wouldst not blush.
6 And the Lord said to me in the days of king Josias: Hast thou seen what rebellious Israel hast done? she hath gone out of herself upon every high mountain, and under every green tree, and hath played the harlot there.
7 And when she had done all these things, I said: Return to me, and she did not return. And her treacherous sister Juda saw,
8 That because the rebellious Israel had played the harlot, I had put her away, and had given her a bill of divorce: yet her treacherous sister Juda was not afraid, but went and played the harlot also herself.
10 And after all this, her treacherous sister Juda hath not returned to me with her whole heart, but with falsehood, saith the Lord.
11 And the Lord said to me: The rebellious Israel hath justified her soul, in comparison of the treacherous Juda.
12 Go, and proclaim these words toward the north, and thou shalt say: Return, O rebellious Israel, saith the Lord, and I will not turn away my face from you: for I am holy, saith the Lord, and I will not be angry for ever.
16 And when you shall be multiplied, and increase in the land in those days, saith the Lord, they shall say no more: The ark of the covenant of the Lord: neither shall it come upon the heart, neither shall they remember it, neither shall it be visited, neither shall that be done any more.
18 In those days the house of Juda shall go to the house of Israel, and they shall come together out of the land of the north to the land which I gave to your fathers.
19 But I said: How shall I put thee among the children, and give thee a lovely land, the goodly inheritance of the armies of the Gentiles? And I said: Thou shalt call me father and shalt not cease to walk after me.
25 We shall sleep in our confusion, and our shame shall cover us, because we have sinned against the Lord our God, we and our fathers from our youth even to this day, and we have not hearkened to the voice of the Lord our God.
Only 8% of the 80,000 blockchain projects are actively maintained, and the life expectancy of the rest does not exceed 1.2 years. This is stated in a study of industry trends by the China Academy of Information and Communications Technology (CAICT). This raises the question of whether it is worth starting such a project, and whether it will be successful and profitable.
Currently, there are more than 1,000 kinds of tokens available on different exchanges, and their number is constantly growing. It is important to understand that not all cryptocurrencies have their own network, since most are issued on top of another blockchain.
One of the best aspects of blockchain technology is its transparency. This means that a business that uses a public chain can make its activity completely open, and each user can verify the promises given by the company. One potential application for this kind of openness is the gambling industry, including online casinos that accept cryptocurrency.
Notably, a blockchain is built of individual units known as blocks, linked together into a network known as a chain of blocks. Because of the way blocks are formed, a transaction recorded in one block must be consistent with the value carried in each previous block of the chain. This leads to a much higher level of fairness and accuracy, as well as transparency.
Many users of a particular crypto exchange face the question of how to transfer cryptocurrency from one exchange to another. The answer is simple: on the exchange to which funds will be transferred, create a wallet for the desired currency and transfer the funds to it. However, it is still not completely clear which currency to choose.
Diastereoselective synthesis of (±)-heliotropamide by a one-pot, four-component reaction.
The first synthesis of heliotropamide is reported. The preparation of this 2-oxopyrrolidine (γ-lactam) natural product relied on a diastereoselective one-pot, four-component reaction (4CR) for the assembly of the core structure. On the basis of chemical shift correlation and NOESY experiments, the previously unknown alkene geometry of heliotropamide is assigned as E.
Electa Nobles Lincoln Walton
Electa Nobles Lincoln Walton (née Electa Nobles Lincoln; 12 May 1824 - 1908) was an American educator, lecturer, writer, and suffragist from the U.S. state of New York. Though she was co-author of a series of arithmetic textbooks, the publishers decided that her name should be withheld. She became an advocate for the enfranchisement of women. She was said to be the "first woman to administer a state normal school". She was an officer of the Massachusetts Woman Suffrage Association, an active member and director in the New England Women's Educational Club of Boston, and president of the West Newton Woman's Educational Club from its organization in 1880. Though not a prolific writer, she sometimes contributed to the press. She was an occasional lecturer upon literary and philanthropic subjects.
Early years and education
Electa Nobles Lincoln was born in Watertown, New York, 12 May 1824. She was the youngest daughter of Martin and Susan Freeman Lincoln. On the paternal side, she was a descendant of Samuel Lincoln, who settled at Hingham, in 1637, and of his son Mordecai, who was born in Hingham in 1657. These two ancestors of Mrs. Walton were also ancestors of the President, Abraham Lincoln, who was of the same generation as she—the seventh. Mrs. Walton's father, Martin Lincoln, was born in Cohasset in 1795. A teacher by profession, he taught in the public schools of Lancaster, also in the Lancaster Academy, and afterward for some years kept a private school in Boston. Mrs. Walton's mother, whose maiden name was Susan White Freeman, was the daughter of Adam and Margaret (White) Freeman. Adam Freeman, grandfather of Mrs. Walton, emigrated with a party from Frankfurt am Main about 1780, and settled in the locality then known as the "German Flats," afterward named Frankfort, New York. His wife, Margaret White Freeman, Mrs. Walton's maternal grandmother, was from Windsor, Vermont. Archibald White, Jr., and William White, who are on record as living in the town in 1786, were her brothers.
At the age of two, she removed to Lancaster with her family. She resided afterwards in Roxbury, and later in Boston. Under the tutoring of Dr. Nathaniel Thayer, of Lancaster, and Dr. George Putnam, of Roxbury, she studied the doctrines of Unitarianism. During the ministration of Rev. J. T. Sargent and under the impulse occasioned by the preaching of Rev. Theodore Parker, she devoted herself to religious work. Her first and principal teacher was her father. At the age of seventeen, she entered the State Normal School in Lexington, and was graduated.
Career
In 1843, having completed the normal course of study and having received her diploma, she became an assistant in the Franklin Grammar School, Boston. After teaching there for a few weeks, she was appointed assistant in the Normal School, her alma mater, where she began to teach on May 7, 1843, five days before she turned 19. She retained her position as assistant at the State Normal School for seven years, one at Lexington and six at West Newton (when the school was removed in 1844), and served under three principals—the Rev. Cyrus Peirce, the Rev. Samuel Joseph May, and Eben S. Stearns. In the interregnum between the resignation of Peirce and the accession of Stearns, Lincoln served as principal of the school; and it was the expressed wish of Peirce that she should succeed him as permanent principal. Lincoln was thus the first woman in the United States to act as principal of a State Normal School, but to make her the permanent principal was too great an innovation to be seriously thought of by those in authority in that time.
She married George Augustus Walton, of Lawrence, in August, 1850, and for 18 years, they resided in Lawrence. After her marriage, Walton devoted her spare time to benevolent and philanthropic enterprises, and was a leader in church and charitable work. During the American Civil War, turning the sympathies of the Lawrence people toward the Sanitary Commission, she aided in organizing the whole community into a body of co-laborers with the army in the field.
She was co-author with her husband of a series of arithmetic textbooks. She believed in the equal rights of women and that they should be credited for their work. Her beliefs were intensified by the decision of the publishers that her name should be withheld as co-author of the arithmetics. From being simply a believer in the right of woman suffrage, she became an earnest advocate for the complete enfranchisement of women. She was always a zealous advocate of temperance and during a residence in Westfield, held the office of president of the Woman's Christian Temperance Union of that town. After her removal to West Newton, she became actively interested in promoting woman suffrage, believing that through woman suffrage the cause of temperance and other reforms was best advanced. She was an officer of the Massachusetts Woman Suffrage Association, an active member and director in the New England Women's Educational Club of Boston, and president of the West Newton Woman's Educational Club since its organization in 1880. Though not a prolific writer, she sometimes contributed to the press. She was an occasional lecturer upon literary and philanthropic subjects.
Having received thorough instruction in vocal culture from Professors James E. Murdock and William Russell, she was for years employed as a teacher of reading and vocal training in the teachers' institutes of Massachusetts. She also taught in the State Normal Institutes of Virginia, and for five successive years, by invitation of General Armstrong, conducted a teachers' institute of the graduating class in Hampton.
Private life
She married George Augustus Walton on August 27, 1850. At that time and for a number of years after, he was principal of the Oliver Grammar School in Lawrence, Massachusetts. Subsequently, as a teacher in teachers' institutes in New England, also in New York and Virginia, he became widely known and influential. For 25 years, from 1871, he was agent of the Massachusetts State Board of Education. Mr. Walton was a graduate of the Bridgewater Normal School. He received the degree of Master of Arts from Williams College. Born in South Reading (now Wakefield), Mass., February 18, 1822, son of James and Elizabeth (Bryant) Walton, he was a lineal descendant of the Rev. William Walton, whose services as minister of the gospel at Marblehead covered a period of 30 years, 1638–68.
The Waltons were the parents of five children, of whom three survived: Harriet Peirce, wife of Judge James R. Dunbar, of the Massachusetts Superior Court; Dr. George Lincoln Walton (Harv. Univ., A.B. 1875, M.D. 1880), neurologist, of Boston; and Alice Walton (Smith Coll., A.B. 1887; Cornell, Ph.D. 1892), who became associate professor of Latin and archaeology at Wellesley College.
Walton died on March 15, 1908 in Newton, Massachusetts.
Selected works
1866, A pictorial primary arithmetic : on the plan of object-lessons (with G. A. Walton)
1869, An intellectual arithmetic : with an introduction (with G. A. Walton)
1869, The illustrative practical arithmetic by a natural method (with G. A. Walton)
1869, A key to The illustrative practical arithmetic (with G. A. Walton)
1871, A manual of arithmetic ... to which is appended a key to Waltons' Illustrative practical arithmetic (with G. A. Walton)
1914, Historical sketches of the Framingham State Normal School (with Eben S Stearns & Grace F Shepard)
External links
Electa Nobles Lincoln Walton on Find a Grave
Greetings:
I am attaching my Excel spreadsheet that takes you through the various iterations of the capital expenses and cost-of-service calculations for the Elba Island LNG Terminal. I have included certain "estimate" information that reflects my attempt to derive cost-of-service information to augment what Southern LNG has provided.
Rod Hayslett has kindly agreed to develop the proper algorithm to more accurately calculate this information and to work with Eric Groves on integrating this as a module in Eric's LNG economic evaluation model for the Atlantic Basin. I believe we will need this in order to analyze the proposals that we expect to receive from El Paso Merchant Energy this week.
I would also like James McMillan to take a look at the commodity charges and electric power rates for the nitrogen injection, air injection and gas liquids stripping cases, as well as at the fixed O&M costs. Some updating of the "estimate" information is likely needed.
I will send copies of the relevant background information received from Southern LNG to each of you. In addition, I will be sending out a general memo to a wider audience with the attached spreadsheet.
Regards.
Les Webber
The carrier wars are heating up. Last week, T-Mobile unveiled the next phase of its “Un-carrier” strategy with a fresh take on phone upgrades, giving subscribers the ability to trade in their devices up to twice a year. A day after T-Mobile’s announcement, Sprint launched a new wireless plan that guarantees subscribers will receive unlimited Internet data, an attack on AT&T and Verizon, which have dropped unlimited plans. This week, it’s AT&T’s turn. The carrier has scheduled an announcement for Tuesday, teasing it with the line, “Get ready for what’s next in wireless.” Andy Vuong, The Denver Post
In a letter to Ajit V. Pai, the Federal Communications Commission Chairman who proposed the rollback, U.S. Rep Mike Coffman from Colorado said that altering the rules “may well have significant unanticipated negative consequences.” He asked Pai to let Congress hold hearings on the issue and pass open Internet laws.
[Carcinoma of the gastroesophageal junction following variceal sclerosis: more than a coincidence?].
In the last decade, several cases have been reported of patients with esophageal varices treated with endoscopic sclerotherapy who later developed carcinoma of the gastroesophageal junction. This may only be a coincidence, although the existence of a direct, though undemonstrated, relationship cannot be ruled out. The case of a patient diagnosed with alcoholic liver cirrhosis with portal hypertension and esophageal varices who underwent several sessions of endoscopic sclerotherapy with ethanolamine oleate is presented. During follow-up, dysphagia was observed due to adenocarcinoma of the lower third of the esophagus. Carcinoma of the esophagus should be taken into account as a rare diagnostic possibility in a patient with recent-onset dysphagia and a history of esophageal variceal sclerotherapy.
The present invention relates to a method for optimising print images output by color printers onto substrate surfaces, in particular onto non-white substrate surfaces and for optimising the printing ink quantities used, an image motif being processed by means of a computer-aided image processing system to form an image original which is ready to output. The invention also relates to packaging films printed by the method according to the invention.
Image motifs, with which a substrate surface is to be printed, are acquired or created by means of computer-aided image processing systems and brought into image originals which are ready to output. The processing of the image data into image originals which are ready to output takes place with the aid of appropriate image processing software, such as for example PageMaker®, Quark Xpress®, Barco Packedge® or Macromedia Freehand®, on DTP systems, wherein "DTP" stands for "Desktop Publishing". DTP is a current designation for creating print publications by means of computers. The image data are displayed on a screen according to the principle of additive color mixing, for example in the known RGB format (R=red, G=green, B=blue).
The image original ready to output is then passed to an image output system, converted into a format which can be read by an image output system and printed by a color printer, wherein when printing a non-white substrate surface, a white underprint is executed before the actual print of the image motif. If the printing is a counter-print, white is applied as the last color, in other words after printing the actual image motif, as an overprint on the print image.
As the mode of operation of color printers, such as for example color printing systems operating by the electrophotographic method, is based on the principle of subtractive color mixing, the image data are converted into a subtractive color format prior to transfer to the image output system. The known CMY color space is generally used for this purpose, comprising the three primary color planes cyan (C), magenta (M) and yellow (Y). Cyan corresponds approximately to a blue-green and magenta approximately to a purple. The printing systems here use a cyan, magenta and yellow printing ink, from which further colors can be produced, wherein the primary printing colors act as color filters. Light which falls through a C, M or Y primary printing color is absorbed or filtered in certain spectral ranges by the printing ink, so only light in a limited spectral range is reflected by the printing ink, and perceived by the human eye as the color of the toner. Theoretically, black can be produced by the ideal mixing of the primary printing colors C, M, and Y, as now all the light is absorbed or filtered. However, in practice a particularly deep and strong black cannot be produced by mixing the primary printing colors, so apart from the CMY primary printing colors a black (K) printing ink completely absorbing the light is used for black portions and grey levels. The color space supplemented by black is designated the CMYK color space.
In digital image processing systems, the image original is divided into individual image points, also called pixels. A respective value for each of the four primary printing colors is allocated to each image point, for example by using the CMYK color space. This value represents the so-called color density. For each image point various mixed colors can be shown with the four color planes and the color density values allocated to them.
The color density, also called color covering, is a standardised variable for the applied quantity of printing ink. The color densities FC for C (cyan), FM for M (magenta), FY for Y (yellow) and FK for K (black) are in a defined range of 0 to 1 or 0 to 100%, wherein 1 represents a maximum application of the corresponding printing ink and 0 no application of the corresponding printing ink. The sum of the individual applied color densities is called the total color density. The application of the maximum color densities FC, FM and FY of the three primary colors CMY therefore produces the maximum total color density, or total color covering, of 3 or 300%.
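As a quick illustration of this bookkeeping (the pixel values below are arbitrary examples):

```python
# Color densities for one image point, each in the range 0..1 (i.e. 0-100%).
# Full CMY application with no black:
cmyk = {"C": 1.0, "M": 1.0, "Y": 1.0, "K": 0.0}

# The total color density (total color covering) is simply the sum of the planes
total = sum(cmyk.values())
print(f"total color density: {total:.1f} = {total * 100:.0f}%")
# maximum CMY application gives 3.0 = 300%
```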
The image data of the image original are acquired either in the form of raster or vector data. Accordingly, the image original can be present in a bitmap or vector graphics data file format. Standardised vector graphics data file formats are, for example, PostScript (PS), which inter alia includes Encapsulated PostScript (EPS), or Portable Document Format (PDF). A standardised bitmap or raster graphics format is, for example, Tagged Image File Format (TIFF).
Generally the image originals which are ready to output are placed in PostScript data files, these data files, apart from the actual image data, containing further information necessary for further processing of the image data, for example with reference to formatting and instructions such as, for example, control instructions to the image output system. Postscript data files may also contain inter alia image objects present in a bitmap format. The image original can therefore be, for example, an object embedded in the PostScript data file and present in a raster graphics format, for example TIFF format.
Digital image output systems generally contain a xe2x80x9cRaster Image Processorxe2x80x9d (RIP) and a printing unit. The raster image processor (RIP) determines from the image original supplied, for example in a PostScript data file, the size, quantity and position of the image points (pixels) and converts these into a format which can be interpreted by a printing unit. The image data converted into printing instructions are converted in the printing unit into a color print.
The current image processing systems are designed for printing white substrate surfaces, in particular white paper. The white substrate surface is thus generally included in the coloring process in image processing. White is for example generated by the allocation of the total color density CMYK=0, in other words image points with the corresponding zero value contain no color application. Apart from showing white surfaces, white is also necessary for showing the color brightness. The color brightness can be determined, on the one hand, by varying color densities and, on the other hand, by a raster display of the image points.
To obtain the same or a comparable color impression when printing colored, translucent or transparent substrates, as is produced during printing of a white underlay, the substrate surface provided for printing is therefore underprinted with white prior to the actual application of the print image. Underprint means that the white printing is located under the actual print image directly on the substrate surface. In the case of counter-printing on a transparent or translucent substrate with a colored, translucent or transparent, in particular non-white, substrate resting on the counter-print, a white overprint is applied for the above-mentioned reasons directly on the print image.
However, in image regions with an adequately large total color density, i.e. in image regions in which the white substrate surface is not visible or does not shine through owing to a large color application, a white underprint is not necessary. The dark color tones, i.e. the places with a high total color density, are frequently faded or they even appear unsaturated owing to the white underprint, so in order to achieve deep colors the overlying total color density has to be additionally increased.
Moreover, the surface-covering white underprint or overprint, together with the print image arranged above it, often results in a very high total color application, which can impair the melting of the toner in, for example, the electrophotographic printing method, and thus the image quality.
Basically, the image processing can be oriented to the specific color properties of the substrate surfaces to be printed. However, this requires adaptation of the corresponding image processing systems, in particular the image processing software, which would involve considerable expense.
The object of the invention is therefore to provide a method for creating an image original which is ready for printing for visually colored, translucent, transparent, specular or metallic appearing substrate surfaces or substrates, and for the printing thereof, wherein the drawbacks resulting from the above-described white underprint or overprint are to be eliminated, without expensive adaptation of the image processing software being necessary.
According to the invention, the object is achieved by a method for optimizing colored images emitted by a color printer on the non-white surfaces of substrates and for optimizing the amounts of printing ink used, wherein an image motif is processed by a computer-assisted image processing system in order to form a master copy which is ready for output. The method determines, for each pixel, whether and with what color density, a white underprint can be applied to a corresponding pixel, using an algorithm based on overall color density SF. The surface of the substrate is thus only underprinted with white on the pixels of the master copy where the overall color density is lacking or low.
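A minimal sketch of the kind of per-pixel rule described above. The patent text does not disclose the exact algorithm, so the threshold value and the linear ramp below are assumptions for illustration only:

```python
def white_underprint(c, m, y, k, threshold=2.0):
    """Return the white underprint density (0..1) for one pixel.

    c, m, y, k: CMYK color densities, each in [0, 1].
    threshold:  total color density (SF in the patent's terms) above which
                no white is needed. The value 2.0 is an assumed example;
                the patent leaves the algorithm open.
    """
    total = c + m + y + k          # overall color density SF, range 0..4
    if total >= threshold:
        return 0.0                 # ink coverage already hides the substrate
    # Scale the white down as coverage rises (assumed linear ramp)
    return 1.0 - total / threshold

# A light tint on a non-white substrate still needs a strong white underprint;
# a saturated dark area needs none, avoiding faded darks and excess ink.
print(white_underprint(0.1, 0.1, 0.2, 0.0))  # high white underprint
print(white_underprint(0.9, 0.8, 0.9, 0.5))  # 0.0 -> no underprint
```

Applied per pixel over the master copy, a rule of this shape reproduces the stated effect: white is laid down only where the substrate would otherwise show through, while heavily covered regions keep their full color depth and the total ink application stays lower.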
rocket stove and butt warmer
My son bought me a Ryobi laser thermometer for Xmas. I am having so much fun with it as I slowly add the cob. It does not work on shiny metal, as my burnt fingers can attest to! It also only gives readings up to 650 F (+/- 5 F), so I can not measure the burn tunnel.
Anyway here are some numbers from my 8" system covered with a standard 55 gal drum:
The top was around 600 F over the burn tube and was as low as 500 F towards the edges. On the side of the barrel it was around 400 at the top and 450 four inches down, then tapered off to around 250 at the bottom of the barrel.
The pot stuck in the clean out tube next to the barrel was around 180 at the bottom of the pot. The exhaust was 83 F.
I have not been to a store to get PH strips yet.
Cliff (Start a rEVOLution, grow a garden)
Clifford Reinke
Joined: Nov 26, 2010
Posts: 124
Location: Puget Sound
posted Dec 30, 2010 20:40:18
Today while working on the stove, my brother in law came up with a great idea. We installed a piece of re-bar in the J tube about 2" from the back of the feed, at the height of the burn tunnel. I load the wood in front of the re-bar towards the burn tunnel.
So far I would guess it has reduced the fire maintenance at least 50%. Also since putting the re-bar in, I have had no smoke back or fire creep. In effect I have an 8" x 2" air hole behind the fire wood. The wood can not fall away from the burn tunnel until it has burned down below the top of the burn tunnel, which seems to keep it nice and rockety. I'll keep it in this configuration for a few days but so far it seems to be a big improvement.
Here is a pic looking down into the wood burn area, the burn tunnel is on the right. I'm not sure how to post the pic directly.
Clifford Reinke
Joined: Nov 26, 2010
Posts: 124
Location: Puget Sound
posted Dec 31, 2010 00:40:03
Let's see if this works. OK, the burn tunnel is to the right.
Joined: Nov 20, 2010
Posts: 140
posted Dec 31, 2010 09:46:37
Good idea. What you have basically done is added a separate draft to the system.
Joined: Nov 30, 2010
Posts: 53
posted Jan 02, 2011 19:49:51
What I am looking at is 2 rockets: one in a lean-to greenhouse, probably a sand-filled trench in concrete so it drains water and any overflow as well, and one in the barn, which for cleanliness I would want to encase in cement with top-pointing clean-outs. And the first question: how log can the heat output that goes under the bench or under the floor?
Can a 90-degree heavy cast pipe pointing down be used as the feed chute/chamber, so there is no flat in the bottom and it eliminates a corner? Or 45 degrees of it?
careinke wrote: Today while working on the stove, my brother in law came up with a great idea. We installed a piece of re-bar in the J tube about 2" from the back of the feed, at the height of the burn tunnel. I load the wood in front of the re-bar towards the burn tunnel.
So far I would guess it has reduced the fire maintenance at least 50%. Also since putting the re-bar in, I have had no smoke back or fire creep. In effect I have an 8" x 2" air hole behind the fire wood. The wood can not fall away from the burn tunnel until it has burned down below the top of the burn tunnel, which seems to keep it nice and rockety. I'll keep it in this configuration for a few days but so far it seems to be a big improvement.
Interesting, and congratulations on the innovation. Do you mind saying what your fuel wood is? We get our best results with a full wood box (mixed conifer and fruit woods), but I've seen systems with richer fuels (madrone and oak) do better on about the same amount of fuel in your picture.
Tinknal - I think it's important that the air feed is not completely separate, but the air is still drawing downward through and past the wood. We've seen people create a completely separate air feed with much poorer results.
sticky_burr wrote: what i am looking at is 2 rockets . on on a lean to green houseprobally sand fill trench in concrete so it drains water any over flow as well the one in the barn on the other hand for cleanliness i would want to encase it in cement with top pointing clean outs.. and the first question how log can the heat output .. that goes under the bench or under the floor
can 90 degree heavy cast pipe point down be used as the feed shute/chamber? so thereis no flas in the bottom and eleminates a ccorner .. or 45 degrees of it
Sticky Burr: You want to replace a known quantity (cob/adobe thermal mass) with two unknowns: - sand (which is more insulative) and - concrete (which is more conductive of heat, but prone to cracking and degeneration at high temperatures). - And you want these to work with various amounts of water/moisture. I don't think anyone can currently give you an accurate prediction of the thermal performance of your proposed system. Maybe an engineer, if you can find one who is willing to get familiar with rocket stove dynamics first. Not sure what you mean by "how log can the heat output" - how to track it, or how long can the pipe still give off significant heat?
As far as cast pipe - eliminating the corner reduces the burn efficiency, this has been tried with thinner steel duct with disappointing results. Also metal pipe gives some problems with thermal expansion, so you need expansion joints that don't leak gas, yet protect the masonry from cracking.
careinke wrote: Today while working on the stove, my brother in law came up with a great idea. We installed a piece of re-bar in the J tube about 2" from the back of the feed, at the height of the burn tunnel. I load the wood in front of the re-bar towards the burn tunnel.
Interesting idea, just thinking this would act as an air wash and perhaps permit the installation of a window for viewing the fire without having the "glass" subjected to too much thermal shock and splitting. Be something to try anyway.
Erica, just looking at your permitting page and it indicated no dampers. Just wondering about preventing cold air backing into the system if the wind happened to be from the wrong direction say? Perhaps not too great a problem in the pacific NW but arctic temperatures (-30 celsius) and swirling winds are not uncommon here in NE Ontario.
Joined: Nov 20, 2010
Posts: 140
posted Jan 05, 2011 16:34:25
Erica Wisner wrote:
Tinknal - I think it's important that the air feed is not completely separate, but the air is still drawing downward through and past the wood. We've seen people create a completely separate air feed with much poorer results.
-Erica Wisner
I understand this. By putting in that rebar they improved the airflow without actually adding a separate draft.
Clifford Reinke
Joined: Nov 26, 2010
Posts: 124
Location: Puget Sound
posted Jan 05, 2011 21:08:42
Erica Wisner wrote: Interesting, and congratulations on the innovation. Do you mind saying what your fuel wood is? We get our best results with a full wood box (mixed conifer and fruit woods), but I've seen systems with richer fuels (madrone and oak) do better on about the same amount of fuel in your picture.
Tinknal - I think it's important that the air feed is not completely separate, but the air is still drawing downward through and past the wood. We've seen people create a completely separate air feed with much poorer results.
Erica,
I have a large variety of wood from the property. Probably half is broad leaf maple, with Hemlock, Doug Fir, Alder, and Cherry thrown in. The maple works pretty well, but if I want to crank it up, I add Doug fir or Hemlock.
Yesterday I moved the re-bar back so the airflow gap was only about an inch. I had two fire creeps when using fir. I think I will put it back in the original position. I think we got lucky and guessed right the first time.
It is a pretty easy mod, and can be removed if it does not work. It would be nice if some others would try it out and see how it works for them. As always, my ideas are public domain, feel free to steal and use in any way you want.
I just wanted to tell everyone with questions and ideas to build rocket stoves to heat a room, or house, barn or power a space ship, to start out here and read this thread to get a basic understanding. Advanced info is here too...ALSO at the bottom of each page Paul has a book listed that will save you a lot of time and money, to do it right the first time and save you from re-building.
Other threads with RMH in alternate energy are also about Rocket Mass Heaters.
Sometimes the answer is not to cross an old bridge, nor to burn it, but to build a better bridge.
Seven examples of the rocket mass heater draw and/or the sideways fire / burn that is an essential component of the rocket mass heater. You can see the fire / flames actually going sideways. And you can see the smoke re-burning and making the rocket sound - sometimes with fire coming out the top of a chimney.
This shows several examples of dry stacked bricks that will eventually be a rocket mass heater core - complete with fire demonstrating the sideways burn and the draw.
Talking about temp, I would like to know how big of a gap between heat riser and top of barrel. Everyone seems to have some higher temps than do I. Mine can burn the fingers but I can touch it quick without to much problems. 200 degree temp is pretty much max on top.
Using a 1 3/4 inch gap on 6 inch system.
Rocketstoves, cob, ferrocement, strawbale, all make the world go round.
sixnone wrote: Talking about temp, I would like to know how big of a gap between heat riser and top of barrel. Everyone seems to have some higher temps than do I. Mine can burn the fingers but I can touch it quick without to much problems. 200 degree temp is pretty much max on top.
Using a 1 3/4 inch gap on 6 inch system.
I don't know for sure. How hard is it to change? 1 1/2 inches gives the same CSA as the 6 inch pipe (6 x 3.14 x 1.5 ≈ 28.3 sq in, versus 3.14 x 3 x 3 ≈ 28.3 sq in for the pipe), so I would try that first. I read that changing the gap moves the heat torus up or down, so you may have the highest temp down from the top a bit, or have spread the heat out a bit. I am not sure that this would make the mass get hot any slower though... is there any reason you want the top of the barrel real hot? Is the rocket force not strong enough?
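For anyone wanting to check the arithmetic, the annular gap area (riser circumference times gap) can be matched to the riser's cross-section like this (a 6 inch riser is assumed, as in the post above):

```python
import math

riser_d = 6.0                            # heat riser diameter, inches
pipe_csa = math.pi * (riser_d / 2) ** 2  # riser cross-section, ~28.3 sq in

def gap_area(gap):
    """Annular flow area between riser top and barrel lid:
    circumference x gap height."""
    return math.pi * riser_d * gap

# Gap that makes the annular area equal the riser's cross-section.
# Algebraically this is always d/4, i.e. 1.5 inches for a 6 inch riser.
gap = pipe_csa / (math.pi * riser_d)
print(round(gap, 2), round(gap_area(gap), 1))  # 1.5 28.3
```

The rule of thumb that falls out (gap = diameter / 4) is just geometry; actual builds may still tune the gap up or down to move the hot zone, as the post notes.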
Michael Duhl
Joined: Mar 11, 2010
Posts: 31
Location: Ohio river valley
posted Feb 25, 2011 11:24:07
Len wrote: I don't know for sure. how hard is it to change? 1 1/2 inches gives the same CSA as the 6 inch pipe (6X3.14X1.5=2 I would try that first. I read that changing the gap moves the heat torus up or down so you may have the highest temp down from the top a bit or have spread the heat out a bit. I am not sure that this would make the mass get hot any slower though... is there any reason you want the top of the barrel real hot? Is the rocket force not strong enough?
My thoughts are, if it's not broke, don't fix it. It works fine and as it should. It's safe. But it's not the same as boiling water in a pot like you would over a rocket stove vs. a rocket mass heater. Maple syrup time, you know?
Question: Ok, so you seemingly have an efficient wood stove - the problem that is apparent to me is that with this design, you have effectively created a negative pressure in the house (since the draft that makes the fire burn sideways is sucking air out of the building).
It would seem that you would need an air exchanger box (adjustable) to fuel the wood feed so that the draft that is being sucked in (and then out) is fresh air, instead of air from the house.
Does that make sense, or am I misunderstanding something?
Thanks for building this forum, Paul.
Len Ovens
pollinator
Joined: Aug 26, 2010
Posts: 1330
Location: Vancouver Island
posted Mar 15, 2011 16:53:57
Kegs wrote: Question: Ok, so you seemingly have an efficient wood stove - the problem that is apparent to me is that with this design, you have effectively created a negative pressure in the house (since the draft that makes the fire burn sideways is sucking air out of the building).
It would seem that you would need an air exchanger box (adjustable) to fuel the wood feed so that the draft that is being sucked in (and then out) is fresh air, instead of air from the house.
Does that make sense or I am misunderstanding something?
All wood (or gas, or oil) burning appliances do that... from fireplaces to gas furnaces. It was popular to use an external air source for a while, but I think I read somewhere it was found to be unsafe in some cases. Something about gases going the wrong way at any leaks in the system. The good thing with a mass heater though is that you are warming the mass... and the people, instead of the air. So this is less of a problem than it may seem. The reports from people who have them are that the homes are comfortable.
Geoff Kegs
Joined: Mar 13, 2011
Posts: 30
Location: Northern lower Michigan
posted Mar 15, 2011 18:59:39
0
Len wrote: All wood (or gas, or oil) burning appliances do that... from fireplaces to gas furnaces. It was popular to use an external air source for a while, but I think I read somewhere it was found to be unsafe in some cases. Something about gases going the wrong way at any leaks in the system. The good thing with a mass heater though is that you are warming the mass... and the people, instead of the air. So this is less of a problem than it may seem. The reports from people who have them are that the homes are comfortable.
Interesting. I guess I will just have to experiment with some designs in a temporary tight shelter and measure air balance to modify from there...
One question that has been bugging me regarding rocket mass stoves, or any fireplace for that matter is the effect of air intake cooling your home.
When you're operating any fireplace (rocket stove or not), if your air intake is within your home you are effectively creating a vacuum inside your house, pulling cold air into your house.
Why not put your feed box on the outside of your home and pipe the hot exhaust into your house? This way the air going into and out of the stove is from the outside and you don't pull cold air into your house.
You could keep the feed box inside your house for easy access so long as there's a door flap over it and an air intake pipe leading to the outside.
Thanks.
Matt
Geoff Kegs
Joined: Mar 13, 2011
Posts: 30
Location: Northern lower Michigan
posted Mar 25, 2011 07:35:01
0
Matt Hennek wrote: Hi All,
New to the forum. Looks like a great place.
One question that has been bugging me regarding rocket mass stoves, or any fireplace for that matter is the effect of air intake cooling your home.
When you're operating any fireplace (rocket stove or not), if your air intake is within your home you are effectively creating a vacuum inside your house, pulling cold air into your house.
Why not put your feed box on the outside of your home and pipe the hot exhaust into your house? This way the air going into and out of the stove is from the outside and you don't pull cold air into your house.
You could keep the feed box inside your house for easy access so long as there's a door flap over it and an air intake pipe leading to the outside.
Thanks.
Matt
I just wrote about the negative pressure issue 3 posts ago. This is absolutely correct. However, you don't want to pipe the exhaust from any wood stove into the house, for the simple reason that it's not going to have 100% combustion at all times, and incomplete combustion produces some amount of carbon monoxide, which is of course a poison gas.
This system needs an air intake. FOR SURE.
New construction codes in my state require combustion air to be incorporated into the design of a wood fired stove for a very good reason.
The problem with the rocket stove, so far as I see (I do not own the book, but have seen Paul's videos and have looked elsewhere on the internet where I have found several other designs), is that its current design has effectively been made for use outdoors only, or where a few drafts and a little dust are okay. This is not a very good design for an efficiently built (e.g. "tight" heavily insulated) home, which is extremely important for Northern climates.
People may have these in their homes currently and find them comfortable, but if they use the current design, they are absolutely creating a negative air pressure in their homes, and that air is being pulled through any cracks in the seams of the home construction, effectively creating drafts and dust, among other problems (such as permanent leaky seals of windows and doors).
The solution of course is to design the fuel box with a fresh air feed that is fully insulated and adjustable from open to fully closed. I would expect that in order to work effectively, this tube needs to be installed at a lower height than the rocket combustion chamber.
Another issue is the exhaust vent. You can't have this open, or just have a stove pipe sticking out an 8" hole in your wall or ceiling as I have seen on the examples. It needs to be very much insulated between the thermal mass and the outlet (or it will transfer cold through convection), and some form of closure needs to be installed on the exterior to ensure animals (insects, birds, spiders) do not build in the tube.
I think other parts of the design are exceptional and superior to current interior wood stove systems.
The extra efficiency of the combustion chamber seems to solve the problem of the ~75% efficiency of the current "high efficiency" wood stoves being sold commercially. A wood stove with 95% efficiency would solve a whole lot of energy and heating problems worldwide.
Geoff Kegs
Joined: Mar 13, 2011
Posts: 30
Location: Northern lower Michigan
posted Mar 25, 2011 07:57:33
0
Len wrote: All, wood (or gas, or oil) burning appliances do that... from fire places to gas furnaces. It was popular to use an external air source for a while but I think I read somewhere it was found to be unsafe in some cases. Something about gases going the wrong way at any leaks in the system. The good thing with a massheater though is that you are warming mass... and the people instead of the air. So this is less of a problem than it may seem. The reports from people who have them is that the homes are comfortable.
Len - I checked about the external air source issue. It isn't found to be unsafe, but rather it is required by new building codes.
It may be nice feeling comfortable (and that is what it is all about, right?), but negative air pressure caused by feeding air through the stove will ultimately have to pull outside air in from somewhere - and that somewhere is gaps around doors, windows, the foundation sill area, roof vents, etc. - all areas where ventilation is not currently controlled. This is more of a problem than you may be aware of. It increases dust, moves mold spores and other potential pathogens around, and permanently breaks seals so that even when the negative pressure stops, cold air leaks in and insects and spiders find ways in.
All this can be prevented by incorporating a fresh air feed into the unit. Really, that's not hard at all to do either!
Len Ovens
pollinator
Joined: Aug 26, 2010
Posts: 1330
Location: Vancouver Island
18
posted Mar 25, 2011 08:14:29
0
Kegs wrote: This is not a very good design for an efficiently built (e.g. "tight" heavily insulated) home, which is extremely important for Northern climates.
People may have these in their homes currently and find them comfortable, but if they use the current design, they are absolutely creating a negative air pressure in their homes, and that air is being pulled through any cracks in the seams of the home construction, effectively creating drafts and dust, among other problems (such as permanent leaky seals of windows and doors).
Ok, that makes sense.... building codes for "tight" heavily insulated homes also require air exchanges. That is, once you tighten the home so there are no air leaks, you have to (by code) put a leak in and then drive it with a fan. This applies even to homes with no wood or gas burning appliances. There are heat exchange products out there that try to recover the heat from the exhaust air to warm the incoming air... but few contractors install them (unless told to) as it doesn't sell a spec home. So there has to be incoming air anyway; why not use the stove to exhaust it? If you have a tight home and have to run a fan anyway, then drag your intake air past some of the warm mass somewhere that it is not radiating to your room... only one problem: a fan requires power, and the time you are sure to be using the stove is when there is none. Make sure your design allows a second source of air... both for the stove, but also to breathe. Having a stove that pulls air through the living space first could be life saving.
Kegs wrote: Len - I checked about the external air source issue. It isn't found to be unsafe, but rather it is required by new building codes.
Yes, but it doesn't have to be sealed. Any home has to have proper air input to cover all use, including breathing. Using leaks in walls etc. is not the best way, but it still has to be there. The air has to be exchanged.
Fresh air intakes are required by code because if you seal a house and do not bring new air in the house will become unhealthy and ultimately unsafe to live in.
I renovated a 100 year old home, sealing much of the house, but not everywhere. I intentionally left a couple spots open. The reason for this is that the house was heated with a gas boiler. Our neighbor sealed his house a couple of years earlier. Did a good job, added insulation, a new high efficiency boiler, etc. One day he had some people over for dinner, a nice casual evening with good discussion... then they all started to get headaches, and one person got nauseated... Lucky for them, one of the guests realized what it might be and insisted that they open all the doors and some windows. They were all being gassed by carbon monoxide.
When he sealed the house he neglected to add a fresh air intake. That meant that the furnace eventually sucked much of the good air out of the house and was no longer venting properly. The exhaust started to collect in the house.
The old houses were designed to leak air. This was an intentional thing to try to improve the air quality. Think about all of the materials we have in our houses now that offgas VOCs. If you do not have fresh air circulation to dilute those VOCs, you will create a toxic environment in your house, even if you have good exhaust for the fireplace.
For myself I like the idea of the fresh air intake that feeds directly into the fireplace. BUT, I also want a bit of air exchange happening in the rest of the house as well. Set up the air intake "near" the fireplace, maybe. Run the pipe past the heated mass to temper the air, but allow it to mix with the room air before entering the fire. This will help to clean the air in the room. If you are concerned about dust/pollen/spores, then include filtration in that intake. You can have multiple filters and still get air flow. It just won't be as fast an exchange rate.
Geoff Kegs
Joined: Mar 13, 2011
Posts: 30
Location: Northern lower Michigan
posted Mar 27, 2011 05:27:42
0
The point is managing the system. The HVAC of course needs two systems - if one were to incorporate tight construction, a rocket stove with a fresh air intake and a properly designed air exchanger, that would solve the problems I was mentioning earlier - since your air would be entering and exiting a controlled location, allowing for a neutral exchange of fresh air.
No filters would be necessary, as it wouldn't be pulling air from areas (such as sill locations) which are right at the level where mold spores sit outside while it rains, and dust when it is dry.
Totally true. Plan it out and then manage it. If you don't take care of it, it doesn't matter if you have a good system in place.
My preference is a passive system if at all possible, but it is harder to keep the air exchange rate even and adequate. It would be great if the amount coming in and going out could be the same all the time and remain passive but that is not possible. The compromise is sizing the intake to match the fireplace outflow, then seal everything else perfectly.
Just for interest... The university my wife works at has codes in place that insist that all office space have an air exchange rate of 4 times per hour, that means all the air has to be changed 4 times every hour. BUT, for the animal lab she runs, the animal housing areas must have an exchange rate of 12 times an hour! That's right, the rats and mice must have an exchange rate 3X greater than the humans. They must have cleaner and fresher air than the people working in the same building. The animals even have their own ventilation system that is separate from the human system.
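For anyone wanting to turn air-change rates like those into fan sizes, the conversion is just room volume times changes per hour, divided by 60. The room dimensions below are made-up examples, not numbers from this thread:

```python
def required_cfm(volume_ft3, ach):
    """Cubic feet per minute of airflow needed for `ach` air changes per hour."""
    return volume_ft3 * ach / 60.0

room = 10 * 12 * 8                  # hypothetical 10x12 ft room with an 8 ft ceiling
print(required_cfm(room, 4))        # office rate (4 ACH): 64.0 CFM
print(required_cfm(room, 12))       # animal-lab rate (12 ACH): 192.0 CFM
```

Same room, three times the rate, three times the airflow - which is why the lab needs its own ventilation system.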
Len Ovens
pollinator
Joined: Aug 26, 2010
Posts: 1330
Location: Vancouver Island
18
posted Mar 28, 2011 01:18:25
0
RMH and well insulated air tight homes.... and the grid and off the grid and code. All fun stuff. Code is there to keep people out of court; it is part of the grid tied world. The house is lightly built and so has to be well insulated and as air tight as possible to use less grid. The thermostat is on the wall... set it and forget it. Once there is no grid and power is not assured... a tight house starts to make less sense. There is no guarantee of a fan to move air. Heat comes from wood that they have chopped, not the flick of a switch. Any materials for building or repairs either come from the ground (dirt, trees, etc.) or cost much to truck to the site. In this case a high mass house may make more sense.... In fact, in a world where energy is running out, grid reliance may not be a great idea. This is the background the RMH came out of. Not the world of contractors and high profits. Built with on site materials, it is a high mass heater at a fraction of the cost of a masonry heater. It is not expected to keep air temperature within .5 degrees, but to keep people comfortable who are used to spending a lot of time outside (getting wood, food, etc), who think nothing of changing clothes to regulate body temp and will seek the warmest place in the room rather than heat the whole living space to 72F, even when they are sitting in one room for hours and in fact may not enter some rooms for weeks. This is not to say these people want to wear outside clothes inside; they want to remove their parka or rain coat when entering too. But a light sweater is not a problem. So that is where the RMH started. It worked well... really well, and now people want to use it in their grid tied home. Great stuff. But now there are other concerns, like making sure there is enough oxygen for the fire as well as the people. Those used to a thermostat way of life will have some learning curve...
any good wood appliance requires some skill to operate properly, it is some steps beyond the set the thermostat kind of house. People do die if they get it wrong. I personally do not think any amount of new code will change that.... some people can do anything wrong. So far building a RMH to code is a bit of a dream, though some people are working to change that, they are either built where code is not a concern or on the sly without permit.
Wow, that post is full of so many assumptions that are more imaginary based than reality based, I have no idea where to even start, but I'll give it my best:
1. The obvious reason for building codes is safety, not to prevent litigation or promote economic consumption.
Without a question, building code has saved lives. Some examples? Span lengths of dimensional wood (keep roof or floor from falling in because of insufficient support), fire code drywall required between garages and occupied areas (resists fire and allows for additional time for occupants to exit house in cases where fuel catches fire), GFCI switches in bathrooms (prevents electrocution) and numerous other codes which prevent dangerous situations from happening.
2. There isn't anywhere on Earth whose inhabitants don't rely on some outside trade, which includes products not directly acquired from primary (natural) resources.
3. Every example of a RMH design in any videos or pics I've seen have used metal pipe. Clearly this has not been procured directly from primary resource extraction. Everything starts there, but stove pipes are a manufactured product, not a primary resource.
4. Many homes in the developed world, regardless whether they have commercially supplied utilities or not, are heated with wood fired heating systems. Such systems do not necessarily require a thermostat. For sure, many homes do have a thermostat, but many do not.
My own home has this possibility, though I have decided to rely on a geothermal system simply because I was not aware of a more efficient system when we built this home.
Humanity is not very likely going back to the Paleolithic age anytime soon.
Humanity is currently in a changing environment of burgeoning populations built on a system of cheap efficient energy (oil) that is a temporary, decreasing non-replaceable resource.
Once that resource no longer becomes affordable to extract (and that period will come sooner than most imagine), there will be an energy crisis.
The U.S. is in an economic pinch due to a history (directly parallel with oil use) of over consumption of material goods and an inefficient use of primary resources.
Many changes are going to prove necessary, including improvements to the efficiency of all energy systems. This includes heating. It would be foolish to think that the RMH design is not going to be produced commercially soon to combat current heating inefficiencies.
Personally, I do not think "off the grid" necessitates avoiding the consumer economy, since that really isn't much of an option if you think about it.
Getting away from commercial power however is an option. This doesn't mean you have to build things illegally - after all "on the sly" means illegally, since following building codes is a legal requirement for the construction of an occupied structure anywhere in the developed world.
I have no idea where you are coming from, but I see a lot of future for the RMH design and if this design provides an efficiency advantage from typical wood stoves (which certainly it appears that it does), it will soon proliferate on the commercial market.
I know I will be incorporating the RMH design in a heating element of the next occupied structure I build.
Best Regards, Kegs
Len wrote: RMH and well insulated air tight homes.... and the grid and off the grid and code. All fun stuff. Code is there to keep people out of court; it is part of the grid tied world. The house is lightly built and so has to be well insulated and as air tight as possible to use less grid. The thermostat is on the wall... set it and forget it. Once there is no grid and power is not assured... a tight house starts to make less sense. There is no guarantee of a fan to move air. Heat comes from wood that they have chopped, not the flick of a switch. Any materials for building or repairs either come from the ground (dirt, trees, etc.) or cost much to truck to the site. In this case a high mass house may make more sense.... In fact, in a world where energy is running out, grid reliance may not be a great idea. This is the background the RMH came out of. Not the world of contractors and high profits. Built with on site materials, it is a high mass heater at a fraction of the cost of a masonry heater. It is not expected to keep air temperature within .5 degrees, but to keep people comfortable who are used to spending a lot of time outside (getting wood, food, etc), who think nothing of changing clothes to regulate body temp and will seek the warmest place in the room rather than heat the whole living space to 72F, even when they are sitting in one room for hours and in fact may not enter some rooms for weeks. This is not to say these people want to wear outside clothes inside; they want to remove their parka or rain coat when entering too. But a light sweater is not a problem. So that is where the RMH started. It worked well... really well, and now people want to use it in their grid tied home. Great stuff. But now there are other concerns, like making sure there is enough oxygen for the fire as well as the people. Those used to a thermostat way of life will have some learning curve...
any good wood appliance requires some skill to operate properly, it is some steps beyond the set the thermostat kind of house. People do die if they get it wrong. I personally do not think any amount of new code will change that.... some people can do anything wrong. So far building a RMH to code is a bit of a dream, though some people are working to change that, they are either built where code is not a concern or on the sly without permit.
Geoff Kegs
Joined: Mar 13, 2011
Posts: 30
Location: Northern lower Michigan
posted Mar 28, 2011 11:05:52
0
Interesting. The only number I have seen from a commercial air exchanger website is 0.35 air changes per hour.
I do not think it would be thermally possible to heat that much air in any building, regardless of the "mass" of the building.
It is interesting, nonetheless.
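For scale, the heat needed to warm incoming ventilation air can be estimated with the common rule of thumb Q (BTU/h) ~ CFM x 1.08 x delta-T. The house size and temperatures below are illustrative assumptions, not numbers from this thread:

```python
house_volume = 1500 * 8          # hypothetical 1500 sq ft house with 8 ft ceilings
ach = 0.35                       # the air-change rate quoted above
cfm = house_volume * ach / 60    # 70 CFM of incoming outside air

delta_t = 70 - 10                # assumed indoor 70 F, outdoor 10 F
q_btu_per_hr = cfm * 1.08 * delta_t

print(round(cfm), round(q_btu_per_hr))   # 70 4536
```

At roughly 4,500 BTU/h for 0.35 ACH, the load scales linearly with the rate, so the 4 ACH office figure would be about eleven times larger - which helps put the two numbers in perspective.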
It would be even more interesting to see the requirements for laboratories dealing with high level infectious diseases.
klorinth wrote: Kegs,
Totally true. Plan it out and then manage it. If you don't take care of it, it doesn't matter if you have a good system in place.
My preference is a passive system if at all possible, but it is harder to keep the air exchange rate even and adequate. It would be great if the amount coming in and going out could be the same all the time and remain passive but that is not possible. The compromise is sizing the intake to match the fireplace outflow, then seal everything else perfectly.
Just for interest... The university my wife works at has codes in place that insist that all office space have an air exchange rate of 4 times per hour, that means all the air has to be changed 4 times every hour. BUT, for the animal lab she runs, the animal housing areas must have an exchange rate of 12 times an hour! That's right, the rats and mice must have an exchange rate 3X greater than the humans. They must have cleaner and fresher air than the people working in the same building. The animals even have their own ventilation system that is separate from the human system.
You are completely correct. I missed a key detail and misspoke: recirculation, not exchange. The exchange rate is controlled for the entire building at more "normal" rates.
Thank you.
Len Ovens
pollinator
Joined: Aug 26, 2010
Posts: 1330
Location: Vancouver Island
18
posted Mar 28, 2011 19:15:38
0
Kegs wrote: Wow, that post is full of so many assumptions that are more imaginary based than reality based, I have no idea where to even start, but I'll give it my best:
I have explained things wrong or poorly and so you seem to have missed the point.... I will try removing one foot from mouth so I can insert the other...
1. The obvious reason for building codes is safety, not to prevent litigation or promote economic consumption.
Municipalities have been sued for houses that have fallen down after they have signed them off. I agree that is safety; it is not a one or the other thing.... though I am trying to understand how having outlets at a certain spacing is about safety. Anyway, that is not my argument and probably better left unsaid.
2. There isn't anywhere on Earth whose inhabitants don't rely on some outside trade, which includes products not directly acquired from primary (natural) resources.
Missed the point again. I was merely referring to the average contractor stick houses that crowd our urban areas, built and designed to be grid connected. The point was that the RMH was originally used in cob houses as part of their design. These are not high insulation houses, but high mass houses, in the same way the RMH is a high mass heater.
3. Every example of a RMH design in any videos or pics I've seen have used metal pipe. Clearly this has not been procured directly from primary resource extraction. Everything starts there, but stove pipes are a manufactured product, not a primary resource.
Missed the point.... the metal parts of a RMH are, by mass, a very small percentage of the heater. What makes it work is the large amount of site supplied mass. The metal parts are used because they could be found used and generally free or cheap. This is not a get-away-from-technology deal, but using the right technology for the situation.
4. Many homes in the developed world, regardless whether they have commercially supplied utilities or not, are heated with wood fired heating systems. Such systems do not necessarily require a thermostat. For sure, many homes do have a thermostat, but many do not.
The average subdivision? Are you talking percentages? Even in the small city I am in (20,000 people), the percentage of homes using wood for heat is small, even though wood is plentiful. All of the new houses I have seen built are gas heated with electric blowers. The most common upgrade is to a heat pump, sometimes with a loop underground in the yard. All power reliant. The area I came from, with some millions of people, had next to no wood burning heat because it was too hard to get... maybe some pellet stuff....
My own home has this possibility, though I have decided to rely on a geothermal system simply because I was not aware of a more efficient system when we built this home.
Probably for an insulated stick built home you are right.... better than gas anyway. My heating bill went down about $200 a month going from gas to electric... because I can spot heat when/where needed, and perhaps the gas system was not very good. Heat pump may be better but I don't have the capital to do that.
Let's get real.
Humanity is not very likely going back to the Paleolithic age anytime soon.
Was there ever a question or suggestion of that? I am not suggesting going hunter gatherer.... The RMH is too big to move from place to place, just for a start... yes, "let's get real".
Personally, I do not think "off the grid" necessitates avoiding the consumer economy, since that really isn't much of an option if you think about it.
I am sorry you got that idea from what I posted... I think it is kind of a long stretch. Community is very important for a person's mental health, and generally there are people better at doing things than I am.
Getting away from commercial power however is an option. This doesn't mean you have to build things illegally - after all "on the sly" means illegally, since following building codes is a legal requirement for the construction of an occupied structure anywhere in the developed world.
I was not encouraging people to do things outside of the law, just pointing out that any RMH built where codes apply was probably illegal, and also that there is an ongoing effort to change that by bringing engineering and specs to the RMH.
I have no idea where you are coming from, but I see a lot of future for the RMH design and if this design provides an efficiency advantage from typical wood stoves (which certainly it appears that it does), it will soon proliferate on the commercial market.
Maybe. From some of the discussion I have seen, the RMH is not more "efficient" (or very little more) ... we had someone with a lot more knowledge than most of us set us straight on that... it does tend to be more effective when used right, though. I personally think part of this is designing the house around the heater instead of designing the house and then deciding how to heat it (see the discussion above on air exchange for one, but I have also seen that people design so that the RMH can be a part of as many rooms as possible). It is not a good heater for a large house unless you are just interested in heating one room with it. It also falls short in a tiny house, where it may take too much space. It has been interesting to study masonry heaters and the houses designed around them, both recently and in ages past, depending on the climate where they were built. The RMH shares a lot with them.
I know I will be incorporating the RMH design in a heating element of the next occupied structure I build.
I had thought the same... I am not sure at this point if I will. I have a pile of bricks and a barrel to play with and have started building a temporary one in the back (not in a living space) to play with and see if it will fit into what I would like to do down the road. I suspect the easiest way to make a RMH that is within code or that can be permitted would be to enclose the whole smoke path in steel to ensure it is air tight, add the mass on the outside, and use fire brick on the inside for the riser and feed. It would cost more to build.... but there would be no smoke leaks from cracked cob either. I also think that the foundation would need to be engineered to support the weight, just the same as with a masonry heater (or a masonry fireplace for that matter). I am not sure anyone has worked out what the weight of a RMH per sq ft would be... and then they are all unique. In the end I don't know that building a masonry heater would end up costing any more than a "permitted" RMH... and I could get better cooking out of a masonry heater too (at least with the state of the art in RMHs). The RMH has some great stuff going for it, but it needs some work in the design end before it becomes "mainstream" (if it ever will; masonry heaters, while big in Europe, are scarce in North America). There is still lots of room for tinkering....
... though I am trying to understand how having outlets at a certain spacing is about safety.
Common appliances have a maximum cord length of six feet, counter-top appliances have cords two feet long. The minimum spacing requirements are to prevent people from needing extension cords strung over the floors or counters, avoiding trip hazards and fire hazards (from the cords overheating).
Geoff Kegs
Joined: Mar 13, 2011
Posts: 30
Location: Northern lower Michigan
posted Mar 29, 2011 15:37:15
0
Len wrote: though I am trying to understand how having outlets at a certain spacing is about safety.
Yeah, you and me both. That part does perturb me.
The average subdivision? Are you talking percentages? Even in the small city I am in (20,000 people), the percentage of homes using wood for heat is small, even though wood is plentiful. All of the new houses I have seen built are gas heated with electric blowers. The most common upgrade is to a heat pump, sometimes with a loop underground in the yard. All power reliant. The area I came from, with some millions of people, had next to no wood burning heat because it was too hard to get... maybe some pellet stuff....
I do wonder how the massive populations are going to heat once oil is gone. I just read an article suggesting oil will be completely exhausted by 2049. I suspect it will be quite some time before that. Obviously, oil is what fuels commercial transportation of oil, timber, coal, stone, plutonium, wind turbine blades, photovoltaic panels, ore and most other primary commodities, including food where it can't otherwise be grown.
I do not know the answer.
Heat pump may be better but I don't have the capital to do that.
It cost us $10k to put this in our 1560 sq. ft. house in 2002 - and we do not have a closed loop. I wish we did. That would save us a lot of hassle, but that's ANOTHER $10k (at least). Yeah, they are NOT cheap... and still you're on the grid. It's got a compressor which draws a LOT of amperage at startup and would probably be quite difficult to run using alternative energy alone.
From some of the discussion I have seen, the RMH is not more "efficient" (or very little more) ... we had someone with a lot more knowledge than most of us set us straight on that...
Well this is interesting - can you share any links on that?
The RMH has some great stuff going for it, but it needs some work in the design end before it becomes "mainstream" (if it ever will, masonry heaters, while big in Europe are scarce in North America). There is still lots of room for tinkering....
Yeah. Awesome to hear you are going to be playing with this.
Thank you for your reply!!!
Len Ovens
pollinator
Joined: Aug 26, 2010
Posts: 1330
Location: Vancouver Island
posted Mar 29, 2011 17:15:07
Kegs wrote:
Well this is interesting - can you share any links on that?
This link should be page two of the thread. The interesting explanation is towards the bottom of the page.
Professor Rich takes a scenario with two kinds of stoves, one a mass heater and one an iron stove, and makes them 100% efficient (no flue... no people either, but it works for the calculations). He uses a known amount of fuel in each (a one-pound propane cylinder), and then shows where the heat goes. Basically, more of the heat produced in the mass heater stays in the room, so the mass heater is more effective even though it is running at the same efficiency.
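The efficiency-versus-effectiveness distinction can be put into numbers. This is only a sketch of the idea: the flue-loss fractions and the ~21,000 BTU figure assumed for a one-pound propane cylinder are round illustrative values, not numbers from Professor Rich's write-up.

```java
// Numeric sketch of the "same efficiency, different effectiveness" argument.
// The flue-loss fractions and the BTU figure are illustrative assumptions.
public class HeaterEffectiveness {

    // Heat that stays in the room = fuel energy minus what goes up the flue.
    static double heatKeptInRoom(double fuelBtu, double flueLossFraction) {
        return fuelBtu * (1.0 - flueLossFraction);
    }

    public static void main(String[] args) {
        double fuelBtu = 21_000.0; // assumed energy in one 1 lb propane cylinder

        // Assume the iron stove loses a larger share of its heat up the flue
        // than the mass heater does: same fuel, same combustion efficiency.
        double ironStove = heatKeptInRoom(fuelBtu, 0.40);
        double massHeater = heatKeptInRoom(fuelBtu, 0.15);

        System.out.printf("Iron stove keeps %.0f BTU in the room%n", ironStove);
        System.out.printf("Mass heater keeps %.0f BTU in the room%n", massHeater);
    }
}
```

With the same fuel and the same combustion efficiency, the stove that sends less heat up the flue leaves more of it in the room, which is the whole point of the comparison.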
Mass heaters, masonry or rocket, work on the same idea and are very close to each other in effectiveness. The RMH can be cheaper to build, has some iron exposed for more instant heat to a cold room, and uses fuel at a slower rate due to the small size of the burn chamber. The masonry heater is also a mass heater, but uses a pre-made firebrick core and brick facing, which is generally more expensive and requires more skill to build. However, the masonry heater is a batch heater where you burn a load of wood once every 12 to 24 hours... light and forget, pretty much, instead of feeding all the time like the RMH for the length of the burn (the RMH burn is about the same length as the masonry heater's, not constant like an iron stove's).
Both mass heaters add mass to the living area, which helps in both summer and winter to temper the inside environment.... that is, in the winter heat comes from the heated heater over a long period of time, and in the summer the mass absorbs heat during the day to keep the house cooler and loses it slowly at night to the breeze when we leave our windows open.... A house with lots of mass in the walls and floors (inside the insulating layer) has some of this same effect.
The major advantage of a masonry heater right now is that it is much easier to get a permit for it.... as such, if you need a permit where you live, the masonry heater may end up being cheaper... and less stressful. You may even be able to face it with cob; I have seen them faced with earth bricks made on site. I think the RMH should be treated as a masonry heater with a separate core and facing... the core is the part that needs the most engineering, so if a standard core could be designed that can be standards-approved (CSA in Canada, UL or EPA in the US), then the facing could be much easier to deal with... it would still have to be fireproof... but cob should be possible.
For five friends, a two-year, worldwide motorcycle adventure is coming to an end here on America’s eastern coast.
With little to no experience riding motorcycles, Anne, Johannes, Elisabeth, Kaupo and Efy set out to traverse the globe. The adventure began in Germany in July of 2014 and is slowly coming to an end. The machines: old URAL 650 sidecar rigs.
There is a whole page on their website (http://leavinghomefunktion.com/) dedicated to explaining their decision for choosing these Russian steeds from a bygone era. However, in order to understand their selections, one must first understand the attitudes and rationale of the folks behind this undertaking.
Spending two years escaping the norms of everyday living and experiencing countless new cultures and traditions. Photo by leavinghomefunktion.com
As young artists working in their respective studios, they felt it was time for a change. They had grown comfortable in their daily lives and routines. Together, the group decided to set out on an adventure to explore new cultures and find new influences, in an effort to change their processes both at work and in play.
Personally, I think it’s easy to look at their mission statement as overly idealistic. Sure, I would love to leave my desk and deadlines behind, put an “out of office” reply on my company email and hit the road. But unfortunately, I have responsibilities that I can’t, or won’t, walk away from. That’s my reality. And it’s one that I am guessing most of the folks reading Common Tread would probably agree with to one extent or another. However, what if that wasn’t the case?
Roadside stops. Photo by leavinghomefunktion.com
I think a lot of the time we put self-imposed restrictions on ourselves that reflect the norms of society or the views of our raising. And for the most part, I am OK with that. I like my life. But what if that weren’t the case? What if we could just say, “Hell, I am going to take a two-year trip to escape society's norms and do something different.”
It’s one of the reasons I wrote about Bowman’s Odyssey when discussing the best articles of 2016 from other outlets. Zach Bowman took his family along for the ride, balancing the demands of family life with the eccentricities of taking a different approach to life. The Leavinghomefunktion group posted a comment on their Facebook page back in December citing David Woodburn as an inspiration. David took his wife Emy and daughter Mattea on a 10-year trip around the world in a sidecar rig. Point is, there are ways to find a balance amidst the chaos.
I think it’s part of the reason we all ride motorcycles at the end of the day. For a few weeks a year (if we’re lucky), we get to escape our regular lives and become motorcycle adventurers. It doesn’t matter if your adventure is rolling across the Great Plains on a Road King, traveling through South America on a prepaid tour, or heading out with your friends on a hodgepodge of beat-up bikes for a trip on spring break. Adventure is what we make.
This group of friends wanted their adventure to focus more on the experiences of the road, the people they met and interacted with, and the cultures they could learn from. It was less about getting around the world in a set amount of time and more about what they could bring back with them. Hence the Ural.
Even in the remote villages of Mongolia you can find someone to MacGyver repairs on a Ural. Photo by leavinghomefunktion.com
Their choice of motorcycles nearly ensured they would have interactions with the people and cultures along their route. They readily admit not only their lack of motorcycling experience, but also their lack of mechanical experience. In the blogs on their site they talked about the breakdowns and problems they faced with the bikes, but in the former Soviet countries they traveled through, there was always someone in town with a spare part or an ingenious solution to a specific problem. Plus, the bikes themselves are conversation starters. It’s hard to see a ragtag group of Russian sidecar rigs roll through town and not be curious. Unlike a large group of aggressive machines rolling into town, which could be seen as threatening, these bikes disarmed people. From an outsider's perspective it became less about who these strangers were and what they were doing, and more about how they could help.
My favorite part of their trip was reading about the conversion of the Urals to amphibious vehicles with the addition of pontoons and propellers. Photo by leavinghomefunktion.com
It bears mention that some of the costs of the trip were crowdfunded by donations, mostly from individuals, in addition to a few select sponsors. The group often relied on the kindness of strangers as they made their way from Germany westward across the globe in four stages. Reading through their website and blog, my favorite section was Stage Three, when they converted their road-going Urals to amphibious vehicles using large pontoons. Necessity, the mother of all invention.
The amphibious Urals in action. Photo by leavinghomefunktion.com
What’s impressive about their story isn’t the fact that most folks couldn’t do the same. Most of us could. What’s impressive is that they chose to actually do it when most of us choose to continue with the responsibilities we have designed for ourselves, and there is nothing wrong with that.
When I was teaching at a center city high school in Los Angeles, I had the freedom to take six-week long summer motorcycle adventures across the country. It was great for where I was at in life at the time. Fast forward seven years and I wouldn’t trade the responsibilities and deadlines that come with getting to create motorcycle videos and articles for RevZilla for the freedoms that come with my previous career choice. Like many others out there, I have chosen the responsibilities that outline my daily existence. (Plus, could you imagine how lonely Lemmy would get if I took off for two years without him? Poor guy wouldn’t know what to do with himself.)
It's just a puddle. Photo by leavinghomefunktion.com
But luckily for all of us here, we still get to ride motorcycles and have our small adventures every year. And if my heart ever begins to ache for the road, or the daily office grind begins to wear me down, I can sit back at my desk and escape into sites like http://leavinghomefunktion.com/.
For those of you who live in or around the Philadelphia area, Moto Guild will host a welcoming party for the road-worn travelers as they pass through town telling their tales at 7 p.m. tomorrow (Jan. 6, 2017) at their shop at 98 Dekalb Pike, Bridgeport, Pennsylvania.
D-Isomer of gly-tyr-pro-cys-pro-his-pro peptide: a novel and sensitive in vitro trapping agent to detect reactive metabolites by electrospray mass spectrometry.
This paper describes a D-peptide isomer-based trapping assay using an LC/MS ion-trap spectrometer with an electrospray ionization (ESI) source as the analytical tool to study bioactivation of xenobiotics. Reactive metabolites were generated from parent compounds in in vitro incubations with different sources of CYP enzymes. A short D-isomer of gly-tyr-pro-cys-pro-his-pro proved to be a sensitive trapping agent and resistant to proteases. This method was tested with 16 probe substances. Acetaminophen, 1-chloro 2,4-dinitrobenzene, clozapine, diclofenac, imipramine, menthofuran, propranolol, pulegone and ticlopidine all formed D-peptide adducts, which were analogous to the GSH adducts previously described in the literature. New adducts were identified with clopidogrel (-Cl+peptide), nicotine (-CH(3+)H+peptide), nimesulide (+peptide) and tolcapone (+peptide), i.e., no GSH adducts of those drugs have been described in the literature. No adducts were identified with ciprofloxacin, ketoconazole and verapamil. In the literature no GSH adducts have been described with ciprofloxacin and verapamil. D-Peptide-based trapping proved to be a reliable and reproducible method to identify bioactivated intermediates. D-Peptide is a new and convenient protein trapping agent for use in early phase screening of bioactivation of new chemical entities and evaluation of toxic properties of chemicals.
Sen. Bernie Sanders, I-Vt., is facing pushback for his call over the weekend to make a future coronavirus vaccine free to the public, with critics noting the move would disincentivize pharmaceutical companies developing such a vaccine.
"Once a vaccine for coronavirus is developed, it should be free," the Democratic presidential candidate tweeted Sunday.
Tom Schatz, the president of Citizens Against Government Waste, called the pitch "a bad idea" and questioned how the government would be able to force a private company to hand over a vaccine free of charge.
“The initial attempt by Democrats to make it free is a really bad idea,” Schatz told Fox News on Monday. “Who’s going to want to make a new drug if the government is just going to come along and confiscate the profit?”
CORONAVIRUS: WHAT YOU NEED TO KNOW
Sanders' call faced ample criticism from conservatives and libertarians on Twitter making a similar argument.
Schatz’s comments come amid a mounting political debate over a future vaccine for the quickly spreading virus. Sanders and other progressive lawmakers in Washington want the vaccine to be distributed free of charge to every American, while members of the Trump administration and others say this would give drug companies no incentive to develop the vaccine.
“We would want to ensure that we work to make it affordable,” Health and Human Services Secretary Alex Azar said last week during a hearing on Capitol Hill. “But we can’t control that price, because we need the private sector to invest. The priority is to get vaccines and therapeutics. Price controls won’t get us there.”
Vice President Mike Pence did promise last week that the coronavirus testing kits would be covered by all private health insurance plans in the country, and by Medicare and Medicaid.
Sanders' campaign did not respond to a request for comment from Fox News. But progressive lawmakers in the House held a press conference last week to pressure President Trump to implement price controls for an eventual coronavirus vaccine.
“The big question now: ‘Are we going to turn this over to the pharmaceutical companies who are going to decide if there’s enough profit in this to make sure that all consumers can get what they need?’” Rep. Jan Schakowsky, D-Ill., said, according to Stat. “I say absolutely not.”
CORONAVIRUS IN THE US: STATE-BY-STATE BREAKDOWN
Before the question of cost can be addressed, however, a vaccine must be developed and ready for the market.
Dozens of research groups around the world are racing to create a vaccine as COVID-19 cases continue to grow. Importantly, they’re pursuing different types of vaccines — shots developed from new technologies that not only are faster to make than traditional inoculations but might prove more potent. Some researchers even aim for temporary vaccines, such as shots that might guard people's health a month or two at a time while longer-lasting protection is developed.
“Until we test them in humans we have absolutely no idea what the immune response will be,” cautioned vaccine expert Dr. Judith O’Donnell, infectious disease chief at Penn Presbyterian Medical Center. “Having a lot of different vaccines -- with a lot of different theories behind the science of generating immunity -- all on a parallel track really ultimately gives us the best chance of getting something successful.”
First-step testing in small numbers of young, healthy volunteers is set to start soon. There's no chance participants could get infected from the shots, because they don’t contain the virus itself. The goal is purely to check that the vaccines show no worrisome side effects, setting the stage for larger tests of whether they protect.
First in line is the Kaiser Permanente Washington Health Research Institute in Seattle. It is preparing to test 45 volunteers with different doses of shots co-developed by NIH and Moderna Inc.
CLICK HERE FOR COMPLETE CORONAVIRUS COVERAGE
Next, Inovio Pharmaceuticals aims to begin safety tests of its vaccine candidate next month in a few dozen volunteers at the University of Pennsylvania and a testing center in Kansas City, Mo., followed by a similar study in China and South Korea.
Even if initial safety tests go well, “you’re talking about a year to a year and a half” before any vaccine could be ready for widespread use, stressed Dr. Anthony Fauci, director of NIH’s National Institute of Allergy and Infectious Diseases.
That still would be a record-setting pace. But manufacturers know the wait -- required because it takes additional studies of thousands of people to tell if a vaccine truly protects and does no harm -- is hard for a frightened public.
Fox News' Gregg Re and The Associated Press contributed to this report.
1. Field of the Invention
The present invention relates to a side lock apparatus for locking, for example, a glove box itself which is mounted in an instrument panel of a motor vehicle in such a manner as to be opened and closed or an independent lid thereof.
2. Description of the Related Art
Although not shown specifically, conventionally, a side lock apparatus of this type is constructed such that a pinion gear is rotatably supported within a housing which is fixed to a glove box main body's side, proximal end portions of a pair of left and right rods are provided in such a manner as to move back and forth, racks formed on surfaces of the proximal end portions of the rods are brought into mesh engagement with the pinion gear from opposite directions, while one of the rods is biased in a direction towards a lock hole opened in an instrument panel by virtue of a biasing spring pressure, and an operation knob is provided on a front surface of the glove box main body, whereby the operation knob can be operated to withdraw the other rod against the biasing spring pressure (refer to, for example, JP-A-2004-211383).
In such a state that the glove box main body is closed, distal end portions of the pair of left and right rods stay in corresponding lock holes in the instrument panel in an engaged fashion by virtue of the biasing spring pressure, so as to lock the glove box in its closed position. Then, when this locked state is released to open the glove box, the operation knob is operated to withdraw the other rod against the biasing spring pressure so that the distal end portion of the rod is withdrawn from the corresponding lock hole. As this occurs, the pinion gear rotates, and the remaining rod moves in association with the rotation of the pinion gear to withdraw its distal end portion from the corresponding lock hole in the similar way as the other rod, whereby the glove box main body can be moved in an opening direction.
The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
When a company has a series of tasks that need to be completed, a manager typically allocates employees towards each task. Computer scheduling systems, for example Microsoft Outlook®, can be helpful to visualize such schedules. For example, a manager could use a computer scheduling system to block off specific times of the day for employees to perform certain tasks, and assign specific employees to that task. Each employee would then have a calendar of tasks to do throughout each day, week, and month, which could be easily visualized and organized. In order for a manager to assign specific employees to each task, however, the manager needs to manually track each employee's schedule and allocate each employee to the appropriate task.
US 2009/0315735 to Bhavani teaches a computer system for managing patient flow in a hospital, where a manager could tag specific patients, medical employees, and resources with RFID chips to determine where each patient, employee, and resource is, and allocate each resource accordingly as needed. For example, if there are too many patients waiting for an examination room, a patient could be automatically relocated to an examination room with a shorter line by sending a message to an available employee to redirect that patient. Bhavani, however, requires the system to manually track each patient, employee, and resource by a unique identifier.
U.S. Pat. No. 7,587,329 to Thompson teaches a computer system for managing a health clinic, where a manager could input a series of attributes into a computer that an on-duty nurse needs to have to accomplish a specific task. The system then matches available nurses with those requirements with the task in order to accomplish the task, and can send out schedules to each nurse, letting that nurse know what tasks to perform.
Additionally, these systems and other prior-art systems fail to continue to ensure device viability for a scheduled task as the schedule develops, and fail to manage the functions of the devices such that the devices remain capable of use for the scheduled tasks as schedules change. The prior-art systems similarly fail to provide for the seamless inclusion of devices having processing and communications capabilities alongside legacy or other ‘dumb’ devices that have no such capacity.
Bhavani, Thompson, and all other extrinsic materials discussed herein are incorporated by reference to the same extent as if each individual extrinsic material was specifically and individually indicated to be incorporated by reference. Where a definition or use of a term in an incorporated reference is inconsistent or contrary to the definition of that term provided herein, the definition of that term provided herein applies and the definition of that term in the reference does not apply.
Thus, there is a need for a scheduling system that provides simultaneous management of the functions of connected devices for scheduled tasks, including adjustments of output and functionality for those connected devices, optimized exception resolution for these devices, and the incorporation of non-connected “dumb” devices into an online scheduling system within a facility.
Breaking Bad’s Aaron Paul joins Westworld’s third season
Yeah, bitch?
Aaron Paul emerged white hot from Breaking Bad, the AMC series that found his foul-mouthed stoner evolve into a tremendously layered and tragic character that landed him three Emmy Awards across the show’s run. Since then, he’s worked consistently on series like Hulu’s The Path and Apple’s Are You Sleeping?, but has yet to find a role that resonates in the same manner as Jesse Pinkman. That may change, however, as Variety reports that Paul has been cast in the third season HBO’s sci-fi tentpole Westworld.
Westworld, an endlessly frustrating series that continually squanders its considerable potential, is due for a reboot of sorts next season, what with the seismic, yet still confusing, ending of its second outing. Paul will be a series regular, but details on his character remain hush-hush.
Whether or not the series is able to craft a coherent narrative in its next season—or simply continue to feed the Redditors—it’s still likely that Paul could score himself another statuette. The series’ second season was nominated for a stunning 21 Emmy Awards.
Q:
Java 8 Optional - how to handle nested Object structures
Is there any simple way to reduce the lines of code needed to print the innermost non-null object, using Optional as an alternative to the code below? It feels like we have to write more lines of code to avoid the null checks now.
Is there any easy way to make this code short and sweet in Java 8?
import java.util.Optional;
public class OptionalInnerStruct {
public static void main(String[] args) {
// creepy initialization step, don't worry
Employee employee = new Employee();
employee.setHuman(Optional.empty());
// with optional
Optional<Human> optionalHuman = employee.getHuman();
if (optionalHuman.isPresent()) {
Human human = optionalHuman.get();
Optional<Male> optionalMale = human.getMale();
if (optionalMale.isPresent()) {
Male male = optionalMale.get();
Optional<Integer> optionalAge = male.getAge();
if (optionalAge.isPresent()) {
System.out.println("I discovered the variable finally " + optionalAge.get());
}
}
}
// without optional in picture, it will be something like:
/*if (null != employee.getHuman() && null != employee.getHuman().getMale() && null != employee.getHuman().getMale().getAge()) {
System.out.println("So easy to find variable " + employee.getHuman().getMale().getAge());
}*/
}
static class Employee {
Optional<Human> human;
public Optional<Human> getHuman() {
return human;
}
public void setHuman(Optional<Human> human) {
this.human = human;
}
}
class Human {
Optional<Male> male;
public Optional<Male> getMale() {
return male;
}
public void setMale(Optional<Male> male) {
this.male = male;
}
}
class Male {
Optional<Integer> age;
public Optional<Integer> getAge() {
return age;
}
public void setAge(Optional<Integer> age) {
this.age = age;
}
}
}
A:
You can use Optional.flatMap here
employee.getHuman()
.flatMap(Human::getMale)
.flatMap(Male::getAge)
.ifPresent(age -> System.out.println("I discovered the variable finally " + age));
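For anyone who wants to try the chain in isolation, here is a self-contained sketch with stripped-down stand-ins for the question's classes; the hard-coded values are just for demonstration:

```java
import java.util.Optional;

// Self-contained sketch of the flatMap chain, with simplified versions of
// the question's classes so it compiles and runs on its own.
public class OptionalChainDemo {

    static class Employee {
        Optional<Human> human = Optional.of(new Human());
        Optional<Human> getHuman() { return human; }
    }

    static class Human {
        Optional<Male> male = Optional.of(new Male());
        Optional<Male> getMale() { return male; }
    }

    static class Male {
        Optional<Integer> age = Optional.of(35); // demo value
        Optional<Integer> getAge() { return age; }
    }

    static Optional<Integer> findAge(Employee employee) {
        // Each flatMap unwraps one level; if any link in the chain is
        // Optional.empty(), the whole expression short-circuits to empty.
        return employee.getHuman()
                .flatMap(Human::getMale)
                .flatMap(Male::getAge);
    }

    public static void main(String[] args) {
        findAge(new Employee())
                .ifPresent(age -> System.out.println("I discovered the variable finally " + age));
    }
}
```

The nested isPresent()/get() checks collapse into one expression because flatMap only applies the mapping function when a value is present.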
Creative staffing models in medical transcription.
This article is a summary of staffing alternatives for medical transcription developed over the years and proven successful in certain hospital facilities. Creative approaches to staffing the medical transcription unit have proved to be effective tools in the maintenance and even expansion of transcription services on a cost-effective basis in a changing marketplace.
Rubio Narrowly Leads Crist in Latest Poll
Marco Rubio, who once trailed Gov. Charlie Crist by 31 points in the 2010 Republican Senate primary, now has a narrow lead, according to a Quinnipiac University poll. The poll shows former Florida House speaker Rubio with 47 percent to Crist’s 44 percent among Republicans, according to a Palm Beach Post report.
Usage Concerns
Our Member Account Representatives and Energy Advisor are always ready to assist you with your usage and billing questions. Before you contact us, please read through the checklist below to help diagnose your inquiry. You can also log in to your online account to view your hourly, daily and monthly electric use along with temperature data.
Accurate History
Review how much power you've used for the last 13 months. We call this the kilowatt hour (kWh) history. Your usage history is provided for you on every statement in the upper right corner and you can log in to your online account to view your electric consumption down to the hour. Compare your most recent month to that same month one year ago, keeping in mind that weather fluctuations may be a significant factor in any major differences.
The kilowatt hours you use are the main driver of costs on your electric bill. The average Corn Belt household uses about 1,000 kWh of energy per month.
True Electric Bill
Check to be sure this is a true high electric bill. Are there other charges beyond electric service? Any additional service fees (i.e deposits, connection/disconnection fees or returned check fees)?
Have any past-due amounts from a previous bill been added to the total?
Are there ancillary charges added to the bill for other Corn Belt products or services (surge protection service, security lights, etc.)?
Days of Use
Check the number of days that are billed for your electric use. This varies from bill to bill due to the number of days in a month and a billing cycle may be a bit shorter or a bit longer so as not to make your bill due on a weekend or holiday.
Is the number of days greater than other months in question because of meter readings or meter reading cycles? Is the daily average significantly different from other months in question?
Seasonal Changes
Check the kilowatt hour total by month. From the history, are the winter months higher (indicating some form of electric heat, higher hot water heater use or heaters being used on water beds)?
The additional heating or cooling load will cause an increase in electric use. Heating and cooling your home can average over 40% of your total energy use. Using space heaters, fireplaces, livestock heaters or vehicle block heaters in the winter can dramatically increase your energy consumption. Running a dehumidifier or watering lawns, gardens and animals in the summer months will increase your energy use. Hot tubs and pool pumps can also add to your electric bill.
Corn Belt offers free Levelized Billing to help average out seasonal fluctuations.
I wasn't Home...
If you leave your home for an extended period of time for business or vacation, any appliance you leave plugged in or connected will continue to use electricity even while you are gone. Your hot water heater, freezer, refrigerator, HVAC system, landscape irrigation, well pump, etc. keep on running when you're not home. Before you leave, make sure to turn off or unplug appliances that aren't needed, and adjust your thermostat to keep heating/cooling costs down.
Lifestyle
No two households use energy the same way, so comparing your energy bill to your neighbor's is like comparing apples to oranges. It's best to compare your current use to your past use. Consider the following:
Has the size of your household increased?
Have you added a new swimming pool or hot tub in your backyard?
Have you had guests stay for an extended period?
Do you have hobbies that include the use of power tools, ovens or other high electrical resistance tools or appliances?
Do you have an aquarium?
These calculators can help you identify which devices and appliances are using the most energy. Corn Belt sells Kill A Watt usage monitors to help members identify those energy hogs.
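The arithmetic those calculators perform is simple enough to sketch: power in watts, divided by 1,000 and multiplied by hours of use, gives kilowatt hours. The wattage, hours and electric rate below are illustrative assumptions, not Corn Belt figures.

```java
// Sketch of the arithmetic behind an appliance energy calculator.
// The wattage, hours, and price per kWh are illustrative assumptions.
public class ApplianceEnergy {

    // kWh used = (watts / 1000) * hours of operation
    static double kwh(double watts, double hours) {
        return watts / 1000.0 * hours;
    }

    static double monthlyCost(double watts, double hoursPerMonth, double ratePerKwh) {
        return kwh(watts, hoursPerMonth) * ratePerKwh;
    }

    public static void main(String[] args) {
        double rate = 0.12; // assumed $/kWh

        // An older garage refrigerator whose compressor runs ~8 hours/day at ~200 W:
        double hours = 8 * 30; // hours per month
        System.out.printf("Garage fridge: %.1f kWh, about $%.2f per month%n",
                kwh(200, hours), monthlyCost(200, hours, rate));
    }
}
```

Running the numbers this way makes it easy to see why an old second fridge or a space heater shows up so clearly on the monthly bill.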
Lighting and Appliances
Lighting, refrigeration, cooking and appliances can account for over 40% of the total energy use in a typical household.
The location of refrigerators and freezers is very important: Never place a refrigerator or freezer in direct sunlight or in unconditioned space such as a breezeway, garage or out-building. The refrigerator or freezer will have to work harder to overcome excessive heat during warmer months.
Make sure that your refrigerators and freezers have adequate ventilation.
Do you have an older fridge in the basement or garage?
If an appliance is more than 15 years old, the efficiency of that appliance may be decreasing significantly and requiring more energy to do its job.
It's important to clean or replace the condenser, coils or filters on some appliances regularly. You may need to replace the appliance itself. Many times old electrical wiring will have loose connections resulting in increased electrical use and create potential safety hazards.
Is My Meter Bad?
Meters are often blamed for a higher bill, but are rarely the cause. In fact, if a meter is malfunctioning, it's more likely to run slow instead of fast. Fewer than 2 out of 1,000 meters prove to be defective when tested.
If a member requests a meter test, Corn Belt will come out to replace the meter and have a third party test the suspected meter. The member is charged a $50 fee for a meter test, which is refunded if the meter is found to be defective.
Energy Advisor
If you feel your concern has not been addressed by the information above, please call us at 1-800-879-0339 during normal business hours and we'll assist you in identifying the cause of your high usage. If needed, our Energy Advisor can come out to your home or business to evaluate your situation and suggest ways to conserve energy.
Swiss stocks - Factors to watch on Nov 22
The following are some of the main factors expected to
affect Swiss stocks.
LAFARGEHOLCIM
Moody's on Monday placed all ratings of world's biggest
cement maker on review for downgrade. LafargeHolcim said in a
statement following the announcement that it remains confident
in plans to achieve its 2018 targets.
* EFG International announced the redemption of two
notes on their first optional call date.
* Sempione Retail said it held nearly 83 percent of Charles
Voegele shares after offer period and confirmed its
intention to delist shares in the Swiss retailer.
* SHL Telemedicine said it plans an extraordinary
general shareholders meeting on Jan. 27, where it aims to elect
an independent member of the board of directors for a three-year
period.
* Aevis Victoria SA said it posted net revenues
(medical fees excluded) of 367.8 million Swiss francs ($364.56
million) in the first nine months of 2016. The company expects to
realise gross revenues of about 600 million francs in 2016.
* Bank Vontobel said agreements had been signed to
restructure its shareholder base with a follow-up shareholder
pool created controlling 50.7 percent of the votes in the bank.
* Gurit said it has renewed a supply contract for
aircraft interior materials with Diehl Aircabin, an existing
aerospace customer. The agreement has a term of three years and
represents a business volume of around 30 million Swiss francs
over the full contract period.
ECONOMY
* Exports from Switzerland fell year-on-year by a real,
work-day adjusted 6.1 percent in October to 17.81 billion Swiss
francs ($17.64 billion), the Federal Customs Office said on
Tuesday. Watch exports fell a nominal 16.4 percent year on year.
ZURICH, Dec 9 Swiss prosecutors have opened
criminal proceedings and seized evidence from the AMAG
dealership network after an appellate court ruled Swiss
investigators must conduct their own investigation of an
emissions scandal at German carmaker Volkswagen AG,
they said on Friday.
Q:
"App not installed" trying to install signed-apk
I'm facing an "App not installed" error on different devices, including some I have never installed a debug apk on before (so it's not the same package name).
I'm able to install the apk by changing the alias of the generated key, but the apk installs only once; after deleting the app on my phone, it won't install again!
I've tried all of the solutions I've found in similar questions, but none works.
I've tried changing version code and version name
I've tried checking only V1, only V2, and both signature versions
android:testOnly="false"
android.injected.testOnly=false
minifyEnabled false
and build variants is set to release
I have tested on more than 10 devices from different brands (Samsung, Sony, Huawei)
I'm using Android Studio 3.3 and Gradle 3.3.0, with
compileSdkVersion 28
minSdkVersion 22
targetSdkVersion 28
I'd appreciate any help. Thanks.
A:
For those who might still be looking for an answer, here it is:
a) Submit your app to Google, and after a while, about 10-14 days, the problem will be solved. More info...
b) Turn off Play Protect, which is not the correct answer, but it can keep you going.
Q:
What's the name of this puzzle?
In this puzzle there are 25 squares, and you have to connect the pink square with the purple square.
Here are the rules:
You must walk on every square before arriving at the purple square
You can't walk diagonally
You aren't allowed to step on any cell more than once
What's the name of this puzzle?
A:
I would call it impossible.
Imagine the board is a checkerboard, so the moves allow you to step from a black cell to a white one and vice versa.
Let's position the board that the pink cell is above a white checkerboard cell, so there are 13 white cells and 12 black cells in the setup given.
If you are allowed to step only once on every cell (this is not clearly stated in the problem statement), then you cannot do this, as there is no path through 25 (or any other odd-numbered) cells, which starts at a white cell but ends at a black one.
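The parity argument above can be sketched in a few lines of Python (the coordinates and colouring convention are my own; any checkerboard colouring works):

```python
# Parity sketch: a rook-step path on a 5x5 board alternates checkerboard
# colours, so a path visiting all 25 cells must start and end on the SAME
# colour. But the colour counts are 13 vs 12, so a 25-cell path from a
# cell of one colour cannot end on the other colour.

def colour(cell):
    r, c = cell
    return (r + c) % 2  # 0 = "white", 1 = "black"

# Count cells of each colour on a 5x5 board.
cells = [(r, c) for r in range(5) for c in range(5)]
white = sum(1 for cell in cells if colour(cell) == 0)
black = len(cells) - white  # 13 of one colour, 12 of the other

def end_colour(start_colour, length=25):
    """Colour of the final cell of an alternating path of the given length."""
    return (start_colour + length - 1) % 2
```

Since `end_colour` returns the starting colour for any odd-length path, a path through all 25 cells from a white start can never end on black.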
[Cervicogenic headache. An over- or underdiagnosed headache syndrome?].
"Cervicogenic headache" (CEH) is a strictly unilateral constant dull, dragging, boring background pain of varying intensity which does not alternate sides and persists for a few hours to several days. It is triggered or intensified by head movements, and typically radiates from the neck to the fronto-temporal region. Occasionally, the ipsilateral shoulder and arm are also affected, with no definite radicular pattern. There is overall restriction of head movements. Ipsilateral accompanying symptoms may include conjunctival injection, lacrimation and lid edema. Migraine-like symptoms such as nausea, vomiting, sound and light sensitivity, and ipsilateral visual blurring may occur, as well as dizziness and difficulties in swallowing. A C2-blockade always leads to temporary pain relief. The possible pathophysiology of CEH, and its differential diagnosis are discussed.
Q:
Concerning the Linearity of the adjoint of $T$
While proving the linearity of the adjoint of the linear map $T:V\to W$, the author of my textbook writes that
\begin{align}\langle v,T^*(w_1+w_2)\rangle&= \langle Tv,w_1+w_2\rangle\\& = \langle Tv,w_1\rangle+\langle Tv,w_2\rangle\\& = \langle v,T^*w_1\rangle+\langle v,T^*w_2\rangle\\& = \langle v,T^*w_1+T^*w_2\rangle\end{align}
But he immediately follows this with the declaration that $T^*(w_1+w_2) = T^*w_1+T^*w_2$. Why is this right?
A:
If $w$ and $w'$ are vectors such that$$(\forall v\in V):\langle v,w\rangle=\langle v,w'\rangle,$$then $w=w'$. This is so because\begin{align}\|w-w'\|^2&=\langle w-w',w-w'\rangle\\&=\langle w-w',w\rangle-\langle w-w',w'\rangle\\&=0.\end{align}
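To spell out the step the book leaves implicit, combine the lemma with the question's computation:

```latex
\text{Apply the lemma with } w = T^*(w_1+w_2) \text{ and } w' = T^*w_1 + T^*w_2.
\text{The computation in the question shows }
\langle v, w\rangle = \langle v, w'\rangle \text{ for all } v\in V,
\text{so the lemma forces } w = w',
\text{ i.e. } T^*(w_1+w_2) = T^*w_1 + T^*w_2.
```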
Video: Eric S. Lander on “Secrets of the Human Genome”
Video: Field Biology in Costa Rica
Students enrolled in “Seminar in Tropical Biology” took an Interterm field trip to Costa Rica, where they studied the biota of terrestrial habitat types and the invertebrates of the rocky intertidal zone.
Biology Prizes & Awards
Meet Our Faculty
Close collaboration between faculty and students is a hallmark of Biology at Amherst. We have a long tradition of active faculty research and student participation in this research. Meet the entire department.
Q:
Accessing other slides' titles in Beamer
Suppose I'm making a custom theme for Beamer presentations, and on each frame I'd like to display not only the frame's title itself, but also the titles of the previous and next frames (de-emphasized in some manner, of course). Does anyone know whether there's some way to construct a macro that I can use to access the title of a neighboring frame? Or whether this has already been done somewhere?
A:
Here's a solution to access other slides' frame titles (in contrast to other section titles, as in my first answer). The procedure is very similar, so I'll just explain the parts which have changed:
\usepackage{etoolbox}
\makeatletter
\apptocmd{\beamer@@frametitle}{\write\@auxout{\string\@writefile{frm}{\string\frametitleentry{\the\c@framenumber}{#1}{#2}}}}{}{}
\newcommand*{\frametitleentry}[3]{\@namedef{frametitleshort#1}{#2}\@namedef{frametitle#1}{#3}}
\AtEndDocument{\if@filesw\newwrite\tf@frm\immediate\openout\tf@frm\jobname.frm\relax\fi}
\@input{\jobname.frm}
\newcommand*{\insertpreviousframetitle}[1][1]{\bgroup\advance\c@framenumber by -#1\relax\@ifstar{\@nameuse{frametitleshort\the\c@framenumber}\egroup}{\@nameuse{frametitle\the\c@framenumber}\egroup}}
\newcommand*{\insertnextframetitle}[1][1]{\bgroup\advance\c@framenumber by #1\relax\@ifstar{\@nameuse{frametitleshort\the\c@framenumber}\egroup}{\@nameuse{frametitle\the\c@framenumber}\egroup}}
\makeatother
As in the first solution, the command responsible for processing the title (\beamer@@frametitle this time) is patched in order to save the frame title. However, the frame titles are not stored in the .nav file, so we'll have to create a new auxiliary file .frm for this purpose. The frame titles are written to the .aux file at first and are flushed to the .frm file at the end of the document.
This file is input at the beginning of the next LaTeX run, where it is used to store the frame titles in the macros \frametitle1/\frametitleshort1, ... These values are read by the user macros \insertpreviousframetitle/\insertnextframetitle.
Usage:
Put the above code into the preamble of your document. Now, you'll be able to insert the previous and next frame titles with the macros \insertpreviousframetitle/\insertnextframetitle. The starred forms \insertpreviousframetitle*/\insertnextframetitle* yield the short frame title you can specify in the optional argument of the \frametitle command. The macros can also take an optional argument: \insertnextframetitle[2] for example inserts the title of the next frame but one.
(Of course, you must supply a frame title in the desired frame, otherwise, the output of the commands will be empty.)
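A minimal usage sketch (the frame titles here are made up for illustration; as with other auxiliary-file mechanisms, you need to compile twice before the titles appear):

```latex
\documentclass{beamer}
% ... the preamble code from above goes here ...
\begin{document}
\begin{frame}{Introduction}
  Coming up next: \insertnextframetitle
\end{frame}
\begin{frame}{Results}
  Previous frame: \insertpreviousframetitle
\end{frame}
\end{document}
```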
Pages
Saturday, April 12, 2014
K is for Kuiper Belt
The Kuiper Belt is named after astronomer Gerard Kuiper, who hypothesized its existence; the first Kuiper Belt object beyond Pluto was discovered in 1992. It is the region beyond the orbit of Neptune, but still under the gravitational pull of our Sun. It contains Pluto, other dwarf planets, and miscellaneous icy objects.
It is similar to the Asteroid Belt, but it is far larger—20 times as wide and 20 to 200 times as massive. Like the asteroid belt, it consists mainly of small bodies, or remnants from the Solar System's formation. Although some asteroids are composed primarily of rock and metal, most Kuiper belt objects are composed largely of frozen volatiles (termed "ices"), such as methane, ammonia and water.
The classical belt is home to at least three dwarf planets: Pluto, Haumea, and Makemake. Some of the Solar System's moons, such as Neptune's Triton and Saturn's Phoebe, are also believed to have originated in the region.
In 2006 NASA dispatched an ambassador to the planetary frontier: The New Horizons spacecraft, now more than halfway between Earth and Pluto, is on approach for a dramatic flight past the icy dwarf planet and its moons in July 2015.
Its mission is to find the origin of Pluto and its moon Charon.
After 10 years and more than 3 billion miles, on a historic voyage that has already taken it over the storms and around the moons of Jupiter, New Horizons will shed light on new kinds of worlds on the outskirts of the solar system.
Pluto gets closer by the day, and New Horizons continues into rare territory, as just the fifth probe to traverse interplanetary space so far from the Sun, and the first ever to travel to Pluto.
New Horizons Space Probe
Did You Know: An Astronomical Unit (AU) is 93,000,000 miles, or the distance from the Sun to the Earth.
Fun Facts: New Horizons is about 29 AU from Earth and about 3.5 AU from Pluto. It will take about 4.5 hours to relay a transmission.
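The transmission time follows directly from the post's round numbers. A small Python sketch (using the post's figure of 93,000,000 miles per AU, not precise ephemeris data):

```python
# One-way light travel time from New Horizons, using the post's round
# numbers: 1 AU = 93,000,000 miles, probe about 29 AU from Earth.

MILES_PER_AU = 93_000_000
SPEED_OF_LIGHT_MPH = 186_282 * 3600  # miles per second -> miles per hour

def signal_hours(distance_au):
    """One-way light travel time in hours for a distance given in AU."""
    return distance_au * MILES_PER_AU / SPEED_OF_LIGHT_MPH

# At ~29 AU this works out to roughly 4 hours one way, in the same
# ballpark as the post's "about 4.5 hours" figure.
```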
America's largest women's rights organisation delivered a snub to Sarah Palin's history-making candidacy yesterday by endorsing Barack Obama and Joe Biden's bid for power.
The National Organisation for Women (NOW) is 500,000 strong and hugely influential. The feminist organisation almost never supports a presidential candidate, but the Alaska governor's Christian fundamentalist faith and her opposition to abortion rights has forced its hand.
Other women's rights organisations are also campaigning against Governor Palin, pushed along by a spontaneous anti-Palin movement among women.
In Alaska at the weekend, a Welcome Home rally for Mrs Palin was dwarfed by a demonstration organised by Alaska Women Reject Palin, which was held on the lawn of a downtown Anchorage library.
After triggering a huge surge of enthusiasm for John McCain's campaign and sending him into the lead over Barack Obama, Mrs Palin has come under intense scrutiny from the national media. The investigation known as "Troopergate" threatens to expose her as a bullying governor who fired Alaska's de facto police chief for family rather than professional reasons.
The controversy moved up a notch yesterday when Mrs Palin's campaign team announced her refusal to co-operate with the Troopergate investigation because it was "tainted" by politics.
NOW's decision to back Senator Obama when a woman is within striking distance of being elected is a bold step for the group and a setback for John McCain's hopes of luring the millions of women who supported Hillary Clinton in the Democratic primaries.
"The addition of Sarah Palin gave us a new sense of urgency," Kim Gandy, the head of NOW, told National Public Radio. "She is being portrayed as a supporter of women's rights... as a feminist when in fact her positions on so many of the issues are really anathema to ours.
"A lot of women think it's a great thing for a woman to be running for vice-president," she continued, "but they are completely dismayed when they find out her positions. The idea that she opposes abortion even in cases of rape and incest – those kinds of positions are completely out of step with American women and once they find out about those positions, they get a little less excited."
Last week, Alaska legislators ordered Mrs Palin's husband, Todd, as well as her chief of staff and deputy chief of staff to answer questions at the Troopergate inquiry. Several years ago, Governor Palin's sister went through a custody battle with her ex-husband Trooper Mike Wooten. Efforts to sack Trooper Wooten failed and Governor Palin sacked the safety director Walt Monegan instead. Mrs Palin's refusal to co-operate with the panel could raise more doubts her suitability as a national candidate in the minds of voters.
The governor's creationist beliefs have also emerged as a hurdle. In Wasilla, where she cut her teeth as a Republican mayor, the Rev Howard Bess, a Baptist minister and author of the book Pastor, I am Gay, says his work was on then Mayor Palin's "hit list" for removal from the town's library. "People in city government have confirmed to me what Sarah was trying to do," he said.
Another Wasilla resident, Phil Munger, a music composer and teacher, says she pushed an evangelical agenda in the town. "She wanted to get people who believed in creationism on the [school] board. I bumped into her after my band played at a graduation ceremony at the Assembly of God [a church]. I said, 'Sarah, how can you believe in creationism – your father's a science teacher.' And she said, 'We don't have to agree on everything.' I pushed her on the earth's creation, whether it was really less than 7,000 years old and whether dinosaurs and humans walked the earth at the same time. And she said yes, she'd seen images somewhere of dinosaur fossils with human footprints in them."
Mr Munger also asked Mrs Palin if she believed in the End of Days, the doomsday scenario when the Messiah will return. "She looked in my eyes and said, 'Yes, I think I will see Jesus come back to earth in my lifetime'."
Islam and the Theology of Power
Since the early 1980s, commentators have argued that Islam is suffering a crisis of identity, as the crumbling of Islamic civilization in the modern age has left Muslims with a profound sense of alienation and injury. Challenges confronting Muslim nations -- failures of development projects, entrenched authoritarian regimes and the inability to respond effectively to Israeli belligerence -- have induced deep-seated frustration and anger that, in turn, contributed to the rise of fundamentalist movements, or as most commentators have preferred to say, political Islam. But most commentators have been caught off guard by the ferocity of the acts of mass murder recently committed in New York and Washington. The basic cruelty and moral depravity of these attacks came as a shock not only to non-Muslims, but to Muslims as well.
The extreme political violence we call terrorism is not a simple aberration unrelated to the political dynamics of a society. Generally, terrorism is the quintessential crime of those who feel powerless seeking to undermine the perceived power of a targeted group. Like many crimes of power, terrorism is also a hate crime, for it relies on a polarized rhetoric of belligerence toward a particular group that is demonized to the point of being denied any moral worth. To recruit and communicate effectively, this rhetoric of belligerence needs to tap into and exploit an already radicalized discourse with the expectation of resonating with the social and political frustrations of a people. If acts of terrorism find little resonance within a society, such acts and their ideological defenders are marginalized. But if these acts do find a degree of resonance, terrorism becomes incrementally more acute and severe, and its ideological justifications become progressively more radical.
Asking Why
To what extent are the September 11 attacks in the US symptomatic of more pervasive ideological undercurrents in the Muslim world today? Obviously, not all social or political frustrations lead to the use of violence. While national liberation movements often resort to violence, the recent attacks are set apart from such movements. The perpetrators did not seem to be acting on behalf of an ethnic group or nation. They presented no specific territorial claims or political agenda, and were not keen to claim responsibility for their acts. One can speculate that the perpetrators' list of grievances included persistent Israeli abuses of Palestinians, near-daily bombings of Iraq and the presence of American troops in the Gulf, but the fact remains that the attacks were not followed by a list of demands or even a set of articulated goals. The attacks exhibit a profound sense of frustration and extreme despair, rather than a struggle to achieve clear-cut objectives.
Some commentators have viewed the underpinnings of the recent attacks as part of a "clash of civilizations" between Western values and Islamic culture. According to these commentators, the issue is not religious fundamentalism or political Islam, but an essential conflict between competing visions of morality and ethics. From this perspective, it is hardly surprising that the terrorists do not present concrete demands, do not have specific territorial objectives and do not rush to take responsibility. The September 11 attacks aimed to strike at the symbols of Western civilization, and to challenge its perceived hegemony, in the hope of empowering and reinvigorating Islamic civilization.
The "clash of civilizations" approach assumes, in deeply prejudiced fashion, that puritanism and terrorism are somehow authentic expressions of the predominant values of the Islamic tradition, and hence is a dangerous interpretation of the present moment. But the common responses to this interpretation, focusing on either the crisis of identity or acute social frustration in the Muslim world, do not adequately explain the theological positions adopted by radical Islamist groups, or how extreme violence can be legitimated in the modern age. Further, none of these perspectives engage the classical tradition in Islamic thought regarding the employment of political violence, and how contemporary Muslims reconstruct the classical tradition. How might the classical or contemporary doctrines of Islamic theology contribute to the use of terrorism by modern Islamic movements?
Classical Islamic Law and Political Violence
By the eleventh century, Muslim jurists had developed a sophisticated discourse on the proper limits on the conduct of warfare, political violence and terrorism. The Qur'an exhorted Muslims in general terms to perform jihad by waging war against their enemies. The Qur'anic prescriptions simply call upon Muslims to fight in the way of God, establish justice and refrain from exceeding the limits of justice in fighting their enemies. Muslim jurists, reflecting their historical circumstances and context, tended to divide the world into three conceptual categories: the abode of Islam, the abode of war and the abode of peace or non-belligerence. These were not clear or precise categories, but generally they connoted territories belonging to Muslims, territories belonging to enemies and territories considered neutral or non-hostile for one reason or another. But Muslim jurists could not agree on exactly how to define the abode of Muslims versus the abode of others, especially when sectarian divisions within Islam were involved, and when dealing with conquered Muslim territories or territories where sizable Muslim minorities resided. [1] Furthermore, Muslim jurists disagreed on the legal cause for fighting non-Muslims. Some contended that non-Muslims are to be fought because they are infidels, while the majority argued that non-Muslims should be fought only if they pose a danger to Muslims. The majority of early jurists argued that a treaty of non-aggression between Muslims and non-Muslims ought to be limited to a ten-year term. Nonetheless, after the tenth century an increasing number of jurists argued that such treaties could be renewed indefinitely, or be of permanent or indefinite duration. [2]
Importantly, Muslim jurists did not focus on the idea of just cause for war. Other than emphasizing that if Muslim territory is attacked, Muslims must fight back, the jurists seemed to relegate the decision to make war or peace to political authorities. There is a considerable body of legal writing prohibiting Muslim rulers from violating treaties, indulging in treachery or attacking an enemy without first giving notice, but the literature on the conditions that warrant a jihad is sparse. It is not that the classical jurists believed that war is always justified or appropriate; rather, they seemed to assume that the decision to wage war is fundamentally political. However, the methods of war were the subject of a substantial jurisprudential discourse.
Building upon the proscriptions of the Prophet Muhammad, Muslim jurists insisted that there are legal restrictions upon the conduct of war. In general, Muslim armies may not kill women, children, seniors, hermits, pacifists, peasants or slaves unless they are combatants. Vegetation and property may not be destroyed, water holes may not be poisoned, and flame-throwers may not be used unless out of necessity, and even then only to a limited extent. Torture, mutilation and murder of hostages were forbidden under all circumstances. Importantly, the classical jurists reached these determinations not simply as a matter of textual interpretation, but as moral or ethical assertions. The classical jurists spoke from the vantage point of a moral civilization, in other words, from a perspective that betrayed a strong sense of confidence in the normative message of Islam. In contrast to their pragmatism regarding whether a war should be waged, the classical jurists accepted the necessity of moral constraints upon the way war is conducted.
An Offense Against God and Society
Muslim jurists exhibited a remarkable tolerance toward the idea of political rebellion. Because of historical circumstances in the first three centuries of Islam, Muslim jurists, in principle, prohibited rebellions even against unjust rulers. At the same time, they refused to give the government unfettered discretion against rebels. The classical jurists argued that the law of God prohibited the execution of rebels or needless destruction or confiscation of their property. Rebels should not be tortured or even imprisoned if they take an oath promising to abandon their rebellion. Most importantly, according to the majority point of view, rebellion, for a plausible cause, is not a sin or moral infraction, but merely a political wrong because of the chaos and civil strife that result. This approach effectively made political rebellion a civil, and not a religious, infraction.
The classical juristic approach to terrorism was quite different. Since the very first century of Islam, Muslims suffered from extremist theologies that not only rejected the political institutions of the Islamic empire, but also refused to concede legitimacy to the juristic class. Although not organized in a church or a single institutional structure, the juristic class in Islam had clear and distinctive insignia of investiture. They attended particular colleges, received training in a particular methodology of juristic inquiry, and developed a specialized technical language, the mastery of which became the gateway to inclusion.
Significantly, the juristic class engaged as a rule in discussion and debate. On each point of law, there are ten different opinions and a considerable amount of debate among the various legal schools of thought. Various puritan theological movements in Islamic history resolutely rejected this juristic tradition, which reveled in indeterminacy. The hallmark of these puritan movements was an intolerant theology displaying extreme hostility not only to non-Muslims but also to Muslims who belonged to different schools of thought or even remained neutral. These movements considered opponents and indifferent Muslims to have exited the fold of Islam, and therefore legitimate targets of violence. These groups' preferred methods of violence were stealth attacks and the dissemination of terror in the general population.
Muslim jurists reacted sharply to these groups, considering them enemies of humankind. They were designated as muharibs (literally, those who fight society). A muharib was defined as someone who attacks defenseless victims by stealth, and spreads terror in society. They were not to be given quarter or refuge by anyone or at any place. In fact, Muslim jurists argued that any Muslim or non-Muslim territory sheltering such a group is hostile territory that may be attacked by the mainstream Islamic forces. Although the classical jurists agreed on the definition of a muharib, they disagreed about which types of criminal acts should be considered crimes of terror. Many jurists classified rape, armed robbery, assassinations, arson and murder by poisoning as crimes of terror and argued that such crimes must be punished vigorously regardless of the motivations of the criminal. Most importantly, these doctrines were asserted as religious imperatives. Regardless of the desired goals or ideological justifications, the terrorizing of the defenseless was recognized as a moral wrong and an offense against society and God.
Demise of the Classical Tradition
It is often stated that terrorism is the weapon of the weak. Notably, classical juristic discourse was developed when Islamic civilization was supreme, and this supremacy was reflected in the benevolent attitude of the juristic class. Pre-modern Muslim juristic discourses navigated a course between principled thinking and real-life pragmatic concerns and demands. Ultimately, these jurists spoke with a sense of urgency, but not desperation. Power and political supremacy were not their sole pursuits.
Much has changed in the modern age. Islamic civilization has crumbled, and the traditional institutions that once sustained the juristic discourse have all but vanished. The moral foundations that once mapped out Islamic law and theology have disintegrated, leaving an unsettling vacuum. More to the point, the juristic discourses on tolerance towards rebellion and hostility to the use of terror are no longer part of the normative categories of contemporary Muslims. Contemporary Muslim discourses either give lip service to the classical doctrines without a sense of commitment or ignore and neglect them all together.
There are many factors that contributed to this modern reality. Among the pertinent factors is the undeniably traumatic experience of colonialism, which dismantled the traditional institutions of civil society. The emergence of highly centralized, despotic and often corrupt governments, and the nationalization of the institutions of religious learning undermined the mediating role of jurists in Muslim societies. Nearly all charitable religious endowments became state-controlled entities, and Muslim jurists in most Muslim nations became salaried state employees, effectively transforming them into what may be called "court priests." The establishment of the state of Israel, the expulsion of the Palestinians and the persistent military conflicts in which Arab states suffered heavy losses all contributed to a widespread siege mentality and a highly polarized and belligerent political discourse. Perhaps most importantly, Western cultural symbols, modes of production and social values aggressively penetrated the Muslim world, seriously challenging inherited values and practices, and adding to a profound sense of alienation.
Two developments became particularly relevant to the withering away of Islamic jurisprudence. Most Muslim nations experienced the wholesale borrowing of civil law concepts. Instead of the dialectical and indeterminate methodology of traditional Islamic jurisprudence, Muslim nations opted for more centralized and often code-based systems of law. Even Muslim modernists who attempted to reform Islamic jurisprudence were heavily influenced by the civil law system, and sought to resist the fluidity of Islamic law and increase its unitary and centralized character. Not only were the concepts of law heavily influenced by the European legal tradition, the ideologies of resistance employed by Muslims were laden with Third World notions of national liberation and self-determination. For instance, modern nationalistic thought exercised a greater influence on the resistance ideologies of Muslim and Arab national liberation movements than anything in the Islamic tradition. The Islamic tradition was reconstructed to fit Third World nationalistic ideologies of anti-colonialism and anti-imperialism rather than the other way around.
While national liberation movements -- such as the Palestinian or Algerian resistance -- resorted to guerrilla or non-conventional warfare, modern day terrorism of the variety promoted by Osama bin Laden is rooted in a different ideological paradigm. There is little doubt that organizations such as the Jihad, al-Qaeda, Hizb al-Tahrir and Jama'at al-Muslimin were influenced by national liberation and anti-colonialist ideologies, but they have anchored themselves in a theology that can be described as puritan, supremacist and thoroughly opportunistic. This theology is the byproduct of the emergence and eventual dominance of Wahhabism, Salafism and apologetic discourses in modern Islam.
Contemporary Puritan Islam
The foundations of Wahhabi theology were put in place by the eighteenth-century evangelist Muhammad ibn 'Abd al-Wahhab in the Arabian Peninsula. With a puritanical zeal, 'Abd al-Wahhab sought to rid Islam of corruptions that he believed had crept into the religion. Wahhabism resisted the indeterminacy of the modern age by escaping to a strict literalism in which the text became the sole source of legitimacy. In this context, Wahhabism exhibited extreme hostility to intellectualism, mysticism and any sectarian divisions within Islam. The Wahhabi creed also considered any form of moral thought that was not entirely dependent on the text as a form of self-idolatry, and treated humanistic fields of knowledge, especially philosophy, as "the sciences of the devil." According to the Wahhabi creed, it was imperative to return to a presumed pristine, simple and straightforward Islam, which could be entirely reclaimed by literal implementation of the commands of the Prophet, and by strict adherence to correct ritual practice. Importantly, Wahhabism rejected any attempt to interpret the divine law from a historical, contextual perspective, and treated the vast majority of Islamic history as a corruption of the true and authentic Islam. The classical jurisprudential tradition was considered at best to be mere sophistry. Wahhabism became very intolerant of the long-established Islamic practice of considering a variety of schools of thought to be equally orthodox. Orthodoxy was narrowly defined, and 'Abd al-Wahhab himself was fond of creating long lists of beliefs and acts which he considered hypocritical, the adoption or commission of which immediately rendered a Muslim an unbeliever.
In the late eighteenth century, the Al Sa'ud family united with the Wahhabi movement and rebelled against Ottoman rule in Arabia. Egyptian forces quashed this rebellion in 1818. Nevertheless, Wahhabi ideology was resuscitated in the early twentieth century under the leadership of 'Abd al-'Aziz ibn Sa'ud who allied himself with the tribes of Najd, in the beginnings of what would become Saudi Arabia. The Wahhabi rebellions of the nineteenth and twentieth centuries were very bloody because the Wahhabis indiscriminately slaughtered and terrorized Muslims and non-Muslims alike. Mainstream jurists writing at the time, such as the Hanafi Ibn 'Abidin and the Maliki al-Sawi, described the Wahhabis as a fanatic fringe group. [3]
Wahhabism Ascendant
Nevertheless, Wahhabism survived and, in fact, thrived in contemporary Islam for several reasons. By treating Muslim Ottoman rule as a foreign occupying power, Wahhabism set a powerful precedent for notions of Arab self-determination and autonomy. In advocating a return to the pristine and pure origins of Islam, Wahhabism rejected the cumulative weight of historical baggage. This idea was intuitively liberating for Muslim reformers since it meant the rebirth of ijtihad, or the return to de novo examination and determination of legal issues unencumbered by the accretions of precedents and inherited doctrines. Most importantly, the discovery and exploitation of oil provided Saudi Arabia with high liquidity. Especially after 1975, with the sharp rise in oil prices, Saudi Arabia aggressively promoted Wahhabi thought around the Muslim world. Even a cursory examination of predominant ideas and practices reveals the widespread influence of Wahhabi thought on the Muslim world today.
But Wahhabism did not spread in the modern Muslim world under its own banner. Even the term "Wahhabism" is considered derogatory by its adherents, since Wahhabis prefer to see themselves as the representatives of Islamic orthodoxy. To them, Wahhabism is not a school of thought within Islam, but is Islam. The fact that Wahhabism rejected a label gave it a diffuse quality, making many of its doctrines and methodologies eminently transferable. Wahhabi thought exercised its greatest influence not under its own label, but under the rubric of Salafism. In their literature, Wahhabi clerics have consistently described themselves as Salafis, and not Wahhabis.
Beset with Contradictions
Salafism is a creed founded in the late nineteenth century by Muslim reformers such as Muhammad 'Abduh, al-Afghani and Rashid Rida. Salafism appealed to a very basic concept in Islam: Muslims ought to follow the precedent of the Prophet and his companions (al-salaf al-salih). Methodologically, Salafism was nearly identical to Wahhabism except that Wahhabism was far less tolerant of diversity and differences of opinion. The founders of Salafism maintained that on all issues Muslims ought to return to the Qur'an and the sunna (precedent) of the Prophet. In doing so, Muslims ought to reinterpret the original sources in light of modern needs and demands, without being slavishly bound to the interpretations of earlier Muslim generations.
As originally conceived, Salafism was not necessarily anti-intellectual, but like Wahhabism, it did tend to be uninterested in history. By emphasizing a presumed golden age in Islam, the adherents of Salafism idealized the time of the Prophet and his companions, and ignored or demonized the balance of Islamic history. By rejecting juristic precedents and undervaluing tradition, Salafism adopted a form of egalitarianism that deconstructed any notions of established authority within Islam. Effectively, anyone was considered qualified to return to the original sources and speak for the divine will. By liberating Muslims from the tradition of the jurists, Salafism contributed to a real vacuum of authority in contemporary Islam. Importantly, Salafism was founded by Muslim nationalists who were eager to read the values of modernism into the original sources of Islam. Hence, Salafism was not necessarily anti-Western. In fact, its founders strove to project contemporary institutions such as democracy, constitutions or socialism into the foundational texts, and to justify the modern nation-state within Islam.
The liberal age of Salafism came to an end in the 1960s. After 1975, Wahhabism was able to rid itself of its extreme intolerance, and proceeded to coopt Salafism until the two became practically indistinguishable. Both theologies imagined a golden age within Islam, entailing a belief in a historical utopia that can be reproduced in contemporary Islam. Both remained uninterested in critical historical inquiry and responded to the challenge of modernity by escaping to the secure haven of the text. Both advocated a form of egalitarianism and anti-elitism to the point that they came to consider intellectualism and rational moral insight to be inaccessible and, thus, corruptions of the purity of the Islamic message. Wahhabism and Salafism were beset with contradictions that made them simultaneously idealistic and pragmatic, and that infested both creeds (especially in the 1980s and 1990s) with a kind of supremacist thinking that prevails to this day.
Between Apologetics and Supremacy
The predominant intellectual response to the challenge of modernity in Islam has been apologetics. Apologetics consisted of an effort by a large number of commentators to defend the Islamic system of beliefs from the onslaught of Orientalism, Westernization and modernity by simultaneously emphasizing the compatibility and supremacy of Islam. Apologists responded to the intellectual challenges coming from the West by adopting pietistic fictions about the Islamic traditions. Such fictions eschewed any critical evaluation of Islamic doctrines, and celebrated the presumed perfection of Islam. A common apologist argument was that any meritorious or worthwhile modern institution was first invented by Muslims. According to the apologists, Islam liberated women, created a democracy, endorsed pluralism, protected human rights and guaranteed social security long before these institutions ever existed in the West. These concepts were not asserted out of critical understanding or ideological commitment, but primarily as a means of resisting Western hegemony and affirming self-worth. The main effect of apologetics, however, was to contribute to a sense of intellectual self-sufficiency that often descended into moral arrogance. To the extent that apologetics was habit-forming, it produced a culture that eschewed self-critical and introspective insight, and embraced the projection of blame and a fantasy-like level of confidence.
In many ways, the apologetic response was fundamentally centered on power. Its main purpose was not to integrate particular values within Islamic culture, but to empower Islam against its civilizational rival. Muslim apologetics tended to be opportunistic and rather unprincipled, and, in fact, they lent support to the tendency among many intellectuals and activists to give precedence to the logic of pragmatism over any other competing demands. Invoking the logic of necessity or public interest to justify courses of action, at the expense of moral imperatives, became common practice. Effectively, apologists got into the habit of paying homage to the presumed superiority of the Islamic tradition, but marginalized this idealistic image in everyday life.
Post-1970s Salafism adopted many of the premises of the apologetic discourse, but it also took these premises to their logical extreme. Instead of simple apologetics, Salafism responded to feelings of powerlessness and defeat with uncompromising and arrogant symbolic displays of power, not only against non-Muslims, but also against Muslim women. Fundamentally, Salafism, which by the 1970s had become a virulent puritan theology, further anchored itself in the confident security of texts. Nonetheless, contrary to the assertions of its proponents, Salafism did not necessarily pursue objective or balanced interpretations of Islamic texts, but primarily projected its own frustrations and aspirations upon the text. Its proponents no longer concerned themselves with coopting or claiming Western institutions as their own, but defined Islam as the exact antithesis of the West, under the guise of reclaiming the true and real Islam. Whatever the West was perceived to be, Islam was understood to be the exact opposite.
Alienation from Tradition
Of course, neither Wahhabism nor Salafism is represented by some formal institution. They are theological orientations and not structured schools of thought. Nevertheless, the lapsing and bonding of the theologies of Wahhabism and Salafism produced a contemporary orientation that is anchored in profound feelings of defeat, frustration and alienation, not only from modern institutions of power, but also from the Islamic heritage and tradition. The outcome of the apologist, Wahhabi and Salafi legacies is a supremacist puritanism that compensates for feelings of defeat, disempowerment and alienation with a distinct sense of self-righteous arrogance vis-à-vis the nondescript "other" -- whether the other is the West, non-believers in general or even Muslims of a different sect and Muslim women. In this sense, it is accurate to describe this widespread modern trend as supremacist, for it sees the world from the perspective of stations of merit and extreme polarization.
In the wake of the September 11 attacks, several commentators posed the question of whether Islam somehow encourages violence and terrorism. Some commentators argued that the Islamic concept of jihad or the notion of the dar al-harb (the abode of war) is to blame for the contemporary violence. These arguments are anachronistic and Orientalist. They project Western categories and historical experiences upon a situation that is very particular and fairly complex. One can easily locate an ethical discourse within the Islamic tradition that is uncompromisingly hostile to acts of terrorism. One can also locate a discourse that is tolerant toward the other, and mindful of the dignity and worth of all human beings. But one must also come to terms with the fact that supremacist puritanism in contemporary Islam is dismissive of all moral norms or ethical values, regardless of the identity of their origins or foundations. The prime and nearly singular concern is power and its symbols. Somehow, all other values are made subservient.
The opinions expressed by columnists are their own and do not necessarily represent the views of Townhall.com.
The New York Times just published what’s known as an “on background” conversation between one of their business reporters, James B. Stewart, and the now dead billionaire pedophile Jeffrey Epstein.
“On background” is journalism speak for an interview subject leaking information to a media outlet and in exchange remaining anonymous in the story. You the viewer/reader won’t ever know the source of the information.
In 2019, it’s the media standard when you’re talking about Donald Trump. “Sources say, close associate says, a White House insider tells us, a close friend of the President says, a former employee says…” That’s “on background.”
Personally, I think it’s shoddy and disgusting to build supposed “news” stories around people who aren’t willing to identify themselves when discussing political matters. Of course there are times anonymity is critical.
It could mean saving lives or uncovering criminal activity. I understand whistleblowers and low-level employees fearing retribution or people who may endure physical harm or harassment by going public. That’s different. “On background” reporting makes sense in those cases.
But billionaire sex criminals and White House staffers trying to make trouble peddling idle gossip aren’t victims. Usually they’re opportunists who’re seeking fame or trying to ingratiate themselves to media types for attention when they find themselves on the outs of popularity.
Such was the case a year ago when the Times contacted Jeffrey Epstein for an interview. Business reporter James Stewart came across information Epstein had been counseling Elon Musk of Tesla through their widely publicized leadership and financial turmoil of 2018. (Musk denies the story, incidentally.)
This week’s published details of that on background conversation are making news, not because of Tesla or Musk. In fact, it’s clear Epstein duped the reporter into the conversation more as a means to talk about himself and perhaps strike up a weird friendship with the reporter in the process.
It’s classic, actually. A famous criminal befriends a beat reporter and the reporter agonizes over the human frailty of the subject they’re interviewing. They strike up a long, strained friendship as the reporter tries to separate the misdeeds of the felon he’s befriended from the personal bond they share. I think I’ve seen at least six movies like it.
What’s key about this interview with Epstein is it’s not quite a year old. In August of 2018, when Stewart reached out to Epstein, he was already a convicted sex offender, a well-known pedophile. Still, it didn’t stop influential media people in New York from continuing to fraternize with the guy.
It’s reported George Stephanopoulos and Katie Couric were just two of many well-connected people who dined at Epstein’s home well after his child sex conviction.
Stop for a moment and ask yourself, “Would I do that? Would I call up and visit the home of a convicted pedophile on background for a business story? Would I attend dinner at a convicted pedophile’s home because he invited me, particularly if I am a wealthy, famous person myself?”
Mr. Stewart’s New York Times piece this week explains he learned nothing about Tesla from Epstein when he visited his home last year, but what he did learn and see was a lot of creepy stuff. The young woman with an Eastern European accent who greeted Stewart at the door he guessed was in her “late teens or perhaps 20.” Stewart added, “Given Mr. Epstein’s past, this struck me as far too close to the line.”
Uhhhh, yeah.
Stewart detailed the photos of notable people with Epstein prominently displayed throughout the home. People like Woody Allen and Bill Clinton. Again, Stewart said it "struck him as odd" displaying photos of celebrities who'd been caught up in their own sex scandals (with young girls).
Uhhhh, yeah again.
But the most cringe-worthy part of the published on background interview was Stewart’s recollection that Epstein didn’t really have anything meaningful to say about Tesla, rather he just seemed to want a buddy to riff with about sex with teen girls.
“If he was reticent about Tesla, he was more at ease discussing his interest in young women,” Stewart wrote. “He said that criminalizing sex with teenage girls was a cultural aberration and that at times in history it was perfectly acceptable. He pointed out that homosexuality had long been considered a crime and was still punishable by death in some parts of the world.”
Epstein was a sick but unashamed, unrepentant, and unapologetic figure until his death in a jail cell last week. Powerful media people knew it and hobnobbed with him anyway. Stewart sat on this story for a year.
But Stewart decided his 2018 on background agreement with Epstein would make a great piece in the New York Times this week because the rules of on background “had lapsed with his death.” Click bait, baby!
Time to stop and ask yourself another question: Would the New York Times sit on a story like this if we replaced Epstein’s name with Donald Trump? We all know the answer.
One month before the 2016 election, someone (who’s never been identified) leaked raw footage and audio of a private conversation between Donald Trump and then “Access Hollywood” correspondent Billy Bush that never made air when originally recorded in 2005. It was cutting room floor material. It was lewd talk between two men never meant for public consumption, but mainstream media like the Washington Post and New York Times ran with it anyway.
That’s just an appetizer on a long menu of Oval Office meetings, phone conversations, tax forms, you name it, that media outlets like the New York Times publish with glee the second they can get their hands on it. No discussion of standards, or ethics, or background, or confidentiality, or privacy. Just, “Give us the dirt on Trump!”
But when it comes to figures like Epstein – a child rapist – the New York Times let us know how important it was to them they maintain his dignity and their credibility until he was dead.
Bottom line, the New York Times and their media brethren believe a convicted pedophile who continued leading a flagrantly creepy lifestyle was due more dignity, protection, and respect than a duly elected President of the United States. They’ll happily dine with Epstein while they blindly hate on Trump, which ultimately places Jeffrey Epstein a notch above most American news media. At least Epstein provided a public service in the end.
Dirty Ho
Dirty Ho (爛頭何 Lan tou He) is a 1979 Hong Kong martial arts-comedy film directed by Lau Kar-leung and starring Gordon Liu and Wong Yue.
Plot
Master Wang is actually the 11th prince of Manchuria in disguise. Posing as a sophisticated jewellery dealer and connoisseur of fine art and wine, the prince is trying to determine which of the other 14 heirs to the throne is trying to assassinate him. A jewel thief, Dirty Ho (Wong Yue), runs afoul of the prince, who uses Ho to help him flush out his enemies.
Wang is a martial arts expert, but in order to conceal his identity he systematically hides his skills, even as he deploys them.
In the opening sequence of the film proper (after a title sequence which already features two highly abstract fight sequences by the principals) Wang encounters a jewel thief named Dirty Ho at a brothel. They come into conflict by vying with one another for the attentions of the courtesans. Dirty Ho, who is not too bright, can't figure out why his efforts to fight with the seemingly cowardly, effete Wang inevitably result in clumsy disaster. It is Wang, of course, who skillfully deflects Ho into tripping over chairs and so forth.
In a later confrontation with Ho, Wang pretends that a female musician is his "bodyguard", invisibly manipulating the bewildered woman's arms, legs and musical instrument in order to make her fight with Ho and eventually to graze him in the forehead with a poisoned blade.
It is, however, all part of Wang's scheme: he is secretly protecting Ho from the police, and is training the bumbling Ho as his disciple and bodyguard. Ho eventually seeks out Wang in order to discover the antidote for the poison, which Wang administers to him in return for Ho's becoming his disciple.
Ho is initially puzzled at this since he has not detected any kung fu prowess in his master at all, and he remains initially a clueless bystander during two attempts on Wang's life: first, an attack at a wine-tasting, and then a visit to an antique-dealer's shop. Wang manages to defend himself admirably while maintaining the fiction that he is simply having a friendly aesthetic conversation with his opponents. Only at the end of the antique-shop attack does Ho figure out what's going on and intervene, but Wang receives a wound in the leg through a stratagem of the antiques dealer.
The master and his disciple sequester themselves in their residence – Wang for recovery, Ho for some kung fu lessons. But it is nearly time for the princes to assemble for the announcement of the heir to the throne, and so Wang and Ho undertake the dangerous journey to Peking with Wang in disguise, being pushed in a wheelchair by Ho.
Defeating an army of assassins in a ruined city, they manage to extract from the assassins' leader the identity of the Prince (Number Four) who is targeting Wang. The heroes then encounter their most formidable enemy, General Liang plus two other bad guys, and a climactic fight sequence follows.
They manage to defeat their enemies just in time for the prince to enter the throne room for the Emperor's appearance. Ho, outside the door, passes his master his necklace of beads on the pole they've used during the fight; the Prince takes them and deftly uses the pole to send Ho flying outside the room. The film ends with a freeze-frame on Dirty Ho in mid-air.
Cast
Gordon Liu as the 11th prince/Mr. Wang
Wong Yue as Dirty Ho
Wang Lung Wei as Fan Chin-Kong (the wine merchant)
Lo Lieh as General Liang
Kara Hui as Choi Hung
Hsiao Ho as Hsia Liu
Chan Lung as Bitter Face
Dirty Ho does not follow the usual revenge plot of Shaw Brothers kung fu films at the time. Also, another one of the film's unusual features is having a Manchu character as the film's hero, instead of the villain.
References
External links
Category:1979 films
Category:1979 martial arts films
Category:1970s martial arts films
Category:1970s action films
Category:Hong Kong films
Category:Martial arts films
Category:Shaw Brothers Studio films
Category:Films directed by Lau Kar-leung
Category:Kung fu films
Category:Films set in the Qing dynasty
[Burns. A retrospective study of the Burn Unit of the Hospital de Santa Maria].
A retrospective analysis was made of 291 admissions to the Burn Care Unit from June 1987 to March 1992 due to acute burns. Over that period there was a progressive increase in the number of admissions. Of the patients admitted, 177 were men and 144 were women. The average age was 33.7 years. The main etiology was thermal injury. The average length of admission was 24.3 days. The mortality rate was 18%. The observed complications were mainly cutaneous and were seen in patients who were debrided late.
Q:
Collapsing a Group of Commits into One on Git
I have the habit of making a huge number of small commits, and I'm fine with it. But I would like to, from time to time, take a bunch of those linear commits and collapse them together as just one commit with the ability to write a new commit message.
I've looked into the documentation, but it seemed a little too cryptic to me. Does anybody know how to do that?
A:
Assuming you don't care about retaining any of your existing commit messages, there's a nifty (and fast) git recipe you can use. First, make sure your branch is checked out:
git checkout <branch-to-squash>
For safety, let's tag the current commit.
git tag my-branch-backup
Next, move the branch HEAD back to your last good commit (without modifying the workspace or index). EDIT: The last good commit is the most recent commit on your branch that you want to retain.
git reset --soft <last-good-commit>
Using git status, you'll notice that all changes on your feature branch are now staged. All that's left to do is ...
git commit
This method is great for consolidating long, convoluted git histories and gnarly merges. Plus, there's no merge/rebase conflicts to resolve!
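As a concrete end-to-end sketch of the recipe above (the file name, commit messages, and tag are invented for the demo), the following builds a throwaway repository with one good commit and two work-in-progress commits, then squashes the two WIP commits into a single new one:

```shell
# Build a scratch repository: one good commit, then two WIP commits.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.name demo
git config user.email demo@example.com

echo 'stable work' > feature.txt
git add feature.txt
git commit -qm 'good: stable feature work'
echo 'tweak 1' >> feature.txt && git commit -qam 'wip 1'
echo 'tweak 2' >> feature.txt && git commit -qam 'wip 2'

# Back up the tip, then move HEAD back to the last good commit.
# --soft leaves the index and work tree alone, so both tweaks
# stay staged on top of the good commit.
git tag my-branch-backup
git reset --soft HEAD~2

# One commit now replaces the two WIP commits.
git commit -qm 'feature: squashed work'
git log --oneline
```

Because the old tip is still reachable through the `my-branch-backup` tag, the squash is easy to undo with `git reset --hard my-branch-backup`.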
Now, if you need to retain any of your existing commit messages or do anything fancier than the above allows, you'll want to use git rebase --interactive.
Solution derived from: http://makandracards.com/makandra/527-squash-several-git-commits-into-a-single-commit
Reference: http://git-scm.com/docs/git-reset
Reference: http://git-scm.com/docs/git-rebase
A:
Suppose you want to rewrite the history of the tree going back until (but not including) commit a739b0d.
export EDITOR=vim # or your favorite editor
git rebase a739b0d --interactive
Be sure to read up on interactive rebasing first.
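To make the interactive rebase concrete, here is a non-interactive sketch (scratch repository; GNU sed assumed): `GIT_SEQUENCE_EDITOR` rewrites the todo list that git would normally open in your editor, changing every `pick` after the first to `squash`, and `GIT_EDITOR=true` accepts the combined commit message unchanged:

```shell
# Scratch repository: a base commit plus three small commits to collapse.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.name demo
git config user.email demo@example.com

echo base > notes.txt
git add notes.txt
git commit -qm base
for n in 1 2 3; do
  echo "step $n" >> notes.txt
  git commit -qam "step $n"
done

# The todo list is three "pick <sha> step N" lines; turn the second
# and third "pick" into "squash" so they fold into the first commit.
GIT_SEQUENCE_EDITOR='sed -i "2,\$s/^pick/squash/"' \
GIT_EDITOR=true \
  git rebase -i HEAD~3

git log --oneline   # base plus one squashed commit
```

In normal use you would skip the two environment variables and simply edit the todo list by hand when your editor opens.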
A:
Use the command git rebase -i <commit> where <commit> is the SHA for the last stable commit.
This will take you to your editor where you can replace the label pick next to each commit since the <commit> you included as an argument to your interactive rebase command. On the commit you want to start collapsing at, replace pick with reword, and for each commit thereafter that you wish to collapse into it, replace pick with fixup. Save, and you'll then be allowed to provide a new commit message.
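For illustration (the SHAs and messages here are invented), a todo list edited that way might end up looking like this -- reword keeps the first commit but lets you retitle it, and each fixup folds a commit into the one above while discarding its message:

```
reword 4f3d9a1 Implement parser
fixup 9b2c771 Fix off-by-one in tokenizer
fixup d41e582 Typo
```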
Expression and subcellular localization of a 35-kDa carbonic anhydrase IV in a human pancreatic ductal cell line (Capan-1).
The high intraluminal concentrations of HCO(3)(-) in the human pancreatic ducts have suggested the existence of a membrane protein supplying the Cl(-)/HCO(3)(-) exchanger. Membrane-bound carbonic anhydrase IV (CA IV) is one of the potential candidates for this protein. The difficulties in isolating human pancreatic ducts have led the authors to study the molecular mechanisms of HCO(3)(-) secretion in cancerous cell lines. In this work, we have characterized the CA IV expressed in Capan-1 cells. A 35-kDa CA IV was detected in cell homogenates and purified plasma membranes. Treatment of purified plasma membranes with phosphatidylinositol-phospholipase-C indicated that this CA IV was not anchored by a glycosylphosphatidylinositol (GPI). In contrast, its detection on purified plasma membranes by an antibody specifically directed against the carboxyl terminus of human immature GPI-anchored CA IV indicated that it was anchored by a C-terminal hydrophobic segment. Immunoelectron microscopy and double-labeling immunofluorescence revealed that this CA IV was present on apical plasma membranes, and in the rough endoplasmic reticulum, the endoplasmic reticulum-Golgi intermediate compartment, the Golgi complex, and secretory granules, suggesting its transport via the classical biosynthesis/secretory pathway. The expression in Capan-1 cells of a 35-kDa CA IV anchored in the apical plasma membrane through a hydrophobic segment, as is the case in the healthy human pancreas, should make the study of its role in pancreatic HCO(3)(-) secretion easier.
Scene 1: Two fathers encounter each other at a Boy Scout meeting. After a little conversation, one reveals that his son won’t be playing football because of concerns about head injuries. The other father reveals that he and his son love football, that they spoke with their pediatrician about it, and that their son will continue with football at least into middle school. There’s a bit of wary nodding, and then, back to the Pinewood Derby.
Scene 2: Two mothers meet on a playground. After a little conversation about their toddlers, one mother mentions that she still breastfeeds and practices “attachment parenting,” which is why she has a sling sitting next to her. The other mother mentions that she practiced “cry it out” with her children but that they seem to be doing well and are good sleepers. Then one of the toddlers begins to cry, obviously hurt in some way, and both mothers rush over together to offer assistance.
Scene 3: In the evening, one of these parents might say to a partner, “Can you believe that they’re going to let him play football?” or “I can’t believe they’re still breastfeeding when she’s three!” Sure. They might “judge” or think that’s something that they, as parents, would never do.
But which ones are actually involved in a war?
War. What is it good for?
I can’t answer that question, but I can tell you the definition of ‘war’: “a state of armed conflict between different nations or states or different groups within a nation or state.” Based on this definition and persistent headlines about “Mommy Wars,” you might conclude that a visit to your local playground or a mom’s group outing might require decking yourself out cap-à-pie in Kevlar. But the reality on the ground is different. There is no war. Calling disputes and criticisms and judgments about how other people live “war” is like calling a rowboat on a pond the Titanic. One involves lots of energy release just to navigate relatively placid waters while the other involved a tremendous loss of life in a rough and frigid sea. Big difference.
I’m sure many mothers can attest to the following: You have friends who also are mothers. I bet that for most of us, those friends represent a spectrum of attitudes about parenting, education, religion, Fifty Shades of Grey, recycling, diet, discipline, Oprah, and more. They also probably don’t all dress just like you, talk just like you, have the same level of education as you, same employment, same ambitions, same hair, or same toothpaste. And I bet that for many of us, in our interactions with our friends, we have found ourselves judging everything from why she insists on wearing those shoes to why she lets little Timmy eat Pop Tarts. Yet, despite all of this mental observation and, yes, judging, we still manage to get along, go out to dinner together, meet at one another’s homes, and gab our heads off during play dates.
That’s not a war. That’s life. It’s using our brains as shaped by our cultural understanding and education and rejection or acceptance of things from our own upbringing and talks with medical practitioners and books we’ve read and television shows we’ve watched and, for some of us, Oprah. Not one single friend I have is a cookie cutter representation of me or how I parent. Yet, we are not at war. We are friends. Just because people go online and lay out in black and white the critiques that are in their heads doesn’t mean “war” is afoot. It means expressing the natural human instinct to criticize others in a way that we think argues for Our Way of Doing Things. Online fighting is keeping up with the virtual Joneses. In real life, we are friends with the Joneses, and everyone tacitly understands what’s off limits within the boundaries of that friendship. That’s not war. It’s friendly détente.
The reality doesn’t stop the news media from trying to foment wars, rebellions, and full-on revolutions with provocative online “debates” and, lately, magazine covers. The most recent, from Time, features a slender mother, hand on cocked hip, challenging you with her eyes as she nurses her almost-four-year-old son while he stands on a chair. As Time likely intended, the cover caused an uproar. We’ve lampooned it ourselves (see above).
But the question the cover asks in all caps, “Are you mom enough?” is even more manipulative than the cover because it strikes at the heart of all those unspoken criticisms we think–we know–other women have in their heads about our parenting. What we may not consider is that we, too, are doing the same, and still… we are not actually at war. We’re just women, judging ourselves and other women, just like we’ve done since the dawn of time. It’s called “using your brain.” Inflating our interactions and fairly easily achieved parental philosophy détentes to “war” caricatures us all as shrieking harpies, incapable of backing off and being reasonable.
The real question to ask isn’t “Are you mom enough?” In fact, it’s an empty question because there is no answer. Your parenting may be the most perfect replica of motherhood since the Madonna (the first one), yet you have no idea how that will manifest down the road in terms of who your child is or what your child does. Whether you’re a Grizzly or a Tiger or a Kangaroo or a Panda mother, there is no “enough.”
So, instead of asking you “Are you mom enough?”, in keeping with our goal of bringing women evidence-based science, we’ve looked at some of the research describing what might make a successful parent–child relationship. Yes, the answer is about attachment, but not necessarily of the physical kind. So drop your guilt. Read this when you have time. Meanwhile, do your best to connect with your child, understand your child, and respond appropriately to your child.
Why? Because that is what attachment is–the basic biological response to a child’s needs. If you’re not a nomad or someone constantly on the move, research suggests that the whole “physically attached to me” thing isn’t really a necessary manifestation of attachment. If you harken to it and your child enjoys it (mine did not) and it works for you without seeming like, well, an albatross around your neck, go for it.
What is attachment?
While attachment as a biological norm among primates has been around as long as primates themselves, humans are more complicated than most primates. We have theories. Attachment theory arose from the observations of a couple of human behaviorists or psychologists (depending on whom you ask), John Bowlby and Mary Ainsworth. Bowlby derived the concept of attachment theory, in which an infant homes in on an attachment figure as a “safe place.” The attachment figure, usually a parent, is the person who responds and is sensitive to the infant’s needs and social overtures. That parent is typically the mother, and disruption of this relationship can have, as most of us probably instinctively know, negative effects.
Bowlby’s early approach involved the mother’s having an understanding of the formational experiences of her own childhood and then translating that to an understanding of her child. He even found that when he talked with parents about their own childhoods in front of their children, the result would be clinical breakthroughs for his patients. As he wrote,
Having once been helped to recognize and recapture the feelings which she herself had as a child and to find that they are accepted tolerantly and understandingly, a mother will become increasingly sympathetic and tolerant toward the same things in her child.
Later studies seem to bear out this observation of a connection to one’s childhood experiences and more connected parenting. For example, mothers who are “insightful” about their children, who seek to understand the motivations of their children’s behavior, positively influence both their own sensitivity and the security of their infant’s attachment to them.
While Bowlby’s research focused initially on the effects of absolute separation between mother and child, Mary Ainsworth, an eventual colleague of Bowlby, took these ideas of the need for maternal input a step further. Her work suggested to her that young children live in a world of dual and competing urges: to feel safe and to be independent. An attachment figure, a safe person, is for children an anchor that keeps them from becoming unmoored even as they explore the unknown waters of life. Without that security backing them up, a child can feel always unmoored and directionless, with no one to trust for security.
Although he was considered an anti-Freudian rebel, Bowlby had a penchant for Freudian language like “superego” and referred to the mother as the “psychic organizer.” Yet his conclusions about the mother–child bond resonate with their plain language:
The infant and young child should experience a warm, intimate, and continuous relationship with his mother (or permanent mother substitute) in which both find satisfaction and enjoyment.
You know, normal biological stuff. As a side note, he was intrigued by the fact that social bonds between mother and offspring in some species weren’t necessarily tied to feeding, an observation worth keeping in mind if you have concerns about not being able to breastfeed.
The big shift here in talking about the mother–child relationship was that Bowlby was proposing that this connection wasn’t some Freudian libidinous communion between mother and child but instead a healthy foundation of a trust relationship that could healthily continue into the child’s adulthood.
Ainsworth carried these ideas to specifics, noting in the course of her observations of various groups how valuable a mother’s sensitivity to her child’s behaviors was in establishing attachment. In her most famous study, the “Baltimore study” [PDF], she monitored 26 families with new babies. She found that “maternal responsiveness” in the context of crying, feeding, playing, and reciprocating seemed to have a powerful influence on how much a baby cried in later months, although some later studies dispute specific influences on crying frequencies.
Ainsworth also introduced the “Strange Situation” lab test, which seems to have freaked people out when it first entered the research scene. In this test, over the course of 20 minutes, a one-year-old baby is in a room full of toys, first with its mother, then with the mother and a strange woman, then with the stranger only (briefly), then with the mother, and then alone before the stranger and then the mother return. The most interesting findings of the study came when the mother returned after her first absence, having left the baby alone in the room with a stranger. Some babies seemed quite angry, wanting to be with their mothers but expressing unhappiness with them at the same time and physically rejecting them.
From her observations during the Strange Situation, Ainsworth identified three types of attachment. The first was “Secure,” which, as its name implies, suggested an infant secure and comfortable with an attachment figure, a person with whom the infant actively seeks to interact. Then there’s the insecure–avoidant attachment type, in which an infant clearly is not interested in being near or interacting with the attachment figure. Most complex seems to be the insecure–resistant type, and the ambivalence of the term reflects the disconnected behavior the infant shows, seeming to want to be near the attachment figure while also resisting her, much as some of the unhappy infants described above did in the Strange Situation.
Within these types are now embedded various subtypes, including a disorganized–disoriented type in which the infant shows “odd” and chaotic behavior that seems to have no distinct pattern related to the attachment figure.
As you read this, you may be wondering, “What kind of attachment do my child and I have?” If you’re sciencey, you may fleetingly even have pondered conducting your own Strange Situation en famille to see what your child does. I understand the impulse. But let’s read on.
What are the benefits of attachment?
Mothers who are sensitive to their children’s cues and respond in ways that are mutually satisfactory to both parties may be doing their children a lifetime of favors, in addition to the parental benefit of a possibly less-likely-to-cry child. For example, a study of almost 1300 families looked at levels of cortisol, the “stress” hormone, in six-month-old infants and its association with maternal sensitivity to cues and found lower levels in infants who had “more sensitive” mothers.
Our understanding of attachment and its importance to infant development can help in other contexts. We can apply this understanding to, for example, help adolescent mothers establish the “secure” level of attachment with their infants. It’s also possibly useful in helping women who are battling substance abuse to still establish a secure attachment with their children.
On a more individual level, it might help in other ways. For example, if you want your child to show less resistance during “clean-up” activities, establishing “secure attachment” may be your ticket to a better-looking playroom.
More seriously, another study has found that even the way a mother applies sensitivity can be relevant. Using the beautiful-if-technical term ‘dyads’ to refer to the mother–child pair, this study included maternal reports of infant temperament and observations of maternal sensitivity to both infant distress and “non-distress.” Further, the authors assessed the children behaviorally at ages 24 and 36 months for social competence, behavioral problems, and typicality of emotional expression. They found that a mother’s sensitivity to an infant’s distress behaviors was linked to fewer behavioral problems and greater social competence in toddlerhood. Even more intriguing, the child’s temperament played a role: for “temperamentally reactive” infants, a mother’s sensitivity to distress was linked to less dysregulation of the child’s emotional expression in toddlerhood.
And that takes me to the child, the partner in the “dyad”
You’re not the only person involved in attachment. As these studies frequently note, you are involved in a “dyad.” The other member of that dyad is the child. As much as we’d like to think that we can lock down various aspects of temperament or expression simply by forcing it with our totally excellent attachment skills, the child in your dyad is a person, too, who arrived with a bit of baggage of her own.
And like the study described above, the child’s temperament is a key player in the outcome of the attachment tango. Another study noted that multiple factors influence “attachment quality.” Yes, maternal sensitivity is one, but a child’s native coping behaviors and temperament also seem to be involved. So, there you have it. If you’re feeling like a parental failure, science suggests you can quietly lay at least some of the blame on the Other in your dyad–your child. Or, you could acknowledge that we’re all human and this is just part of our learning experience together.
What does attachment look like, anyway?
Dr. William Sears took the concept of attachment and its association with maternal sensitivity to a child’s cues and security and… wrote a book that literally translated attachment as a physical as well as emotional connection. This extension of attachment–which Sears appends to every aspect of parenting, from pregnancy to feeding to sleeping–has become in the minds of some parents a prescriptive way of doing things with benefits that exclude all other parenting approaches or “philosophies.” It also involves the concept of “baby wearing,” which always brings up strange images in my mind and certainly takes outré fashion to a whole new level. In reality, it’s just a way people have carried babies for a long time in the absence of other easy modes of transport.
When I was pregnant with our first child and still blissfully ignorant about how little control parents have over anything, I read Sears’ book about attachment parenting. Some of it is common-sense, broadly applicable parenting advice: respond to your child’s needs. Some of it is simply downright impossible for some parent–child dyads, and much of it is based on the presumption that human infants in general will benefit from a one-size-fits-all sling of attachment parenting, although interpretations of the starry-eyed faithful emphasize that more than Sears does.
Because much of what Sears wrote resonated with me, we did some chimeric version of attachment parenting–or, we tried. The thing is, as I noted above, the infant has some say in these things as well. Our oldest child, who is autistic, was highly resistant to being physically attached much of the time. He didn’t want to sleep with us past age four months, and he showed little interest in aspects of attachment parenting like “nurturing touch,” which to him was seemingly more akin to “taser touch.” We ultimately had three sons, and in the end, they all preferred to sleep alone, each at an earlier and earlier age. The first two self-weaned before age one because apparently, the distractions of the sensory world around them were far more interesting than the same boring old boob they kept seeing immediately in front of their faces. Our third was unable to breastfeed at all.
So, like all parents do, we punted, in spite of our best laid plans and intentions. Our hybrid of “attachment parenting” could better be translated into “sensitivity parenting,” because our primary focus, as we punted and punted and punted our way through the years, was shifting our responses based on what our children seemed to need and what motivated their behaviors. Thus, while our oldest declined to sleep with us according to the attachment parenting commandment, he got to sleep with a boiled egg because that’s what he wanted. Try to beat that, folks, and sure, bring on the judging.
The Double X Science Sensitivity Parenting (TM) cheat sheet.
What does “sensitive” mean?
And finally, the nitty-gritty bullet list you’ve been waiting for. If attachment doesn’t mean slinging your child to your body until your lumbar gives out or the child receives a high-school diploma, and parenting is, indeed, one compromise after another based on the exigencies of the moment, what consistent tenets can you practice that meet the now 60-year-old concept of “secure” attachment between mother and child, father and child, or mother or father figure and child? We are Double X Science, here to bring you evidence-based information, and that means lists. The below list is an aggregate of various research findings we’ve identified that seem reasonable and reasonably supported. We’ve also provided our usual handy quick guide for parents in a hurry.
Plan ahead. We know that life is what happens while you’re planning things, but… life does happen, and plans can at least serve as a loose guide to navigation. So, plan that you will be a parent who is sensitive to your child’s needs and will work to recognize them.
Practice emotion detection. Work on that. It doesn’t come easily to everyone because the past is prologue to what we’re capable of in the present. Ask yourself deliberately what your child’s emotion is communicating because behavior is communication. Be the grownup, even if sometimes, the wailing makes you want your mommy. As one study I found notes, “Crying is an aversive behavior.” Yes, maybe it makes you want to cover your ears and run away screaming. But you’re the grownup with the analytical tools at hand to ask “Why” and seek the answer.
Have infant-oriented goals. If you tend to orient your goals in your parent–child dyad toward a child-related benefit (relieve distress) rather than toward a parent-oriented goal (fitting your schedule in some way), research suggests that your dyad will be a much calmer and better mutually adapted dyad.
Trust yourself and keep trying. If your efforts to read your child’s feelings or respond to your child’s needs don’t work right away, don’t give up, don’t read Time magazine covers, and don’t listen to that little voice in your head saying you’re a bad parent or the voice in other people’s heads screaming that at you. Just keep trying. It’s all any of us can do, and we’re all going to screw this up here and there.
Practice behaviors that are supportive of an infant’s sensory needs. For example, positive inputs like a warm voice and smiling are considered more effective than a harsh voice or being physically intrusive. Put yourself in your child’s place and ask, How would that feel? That’s called empathy.
Engage in reciprocation. Imitating back your infant’s voice or faces, or showing joint attention–all forms of joint engagement–are ways of telling an infant or young child that yes, you are the anchor here, the one to trust, and a really good time, to boot. Allowing this type of attention to persist as long as the infant chooses rather than shifting away from it quickly is associated with making the child comfortable with independence and learning to regulate behaviors.
Talk to your child. We are generally a chatty species, but we also need to learn to chat. “Rich language input” is important in early child development beginning with that early imitation of your infant’s vocalizations.
Lather, rinse, repeat, adjusting dosage as necessary based on age, weight, developmental status, nanosecond-rate changes in family dynamics and emotional conditions, the teen years, and whether or not you have access to chocolate. See? This stuff is easy.
Finally
As you read these lists and about research on attachment, you’ll see words like “secure” and “warm” and “intimate” and “safe.” Are you doing this for your child or doing your best to do it? Then you are, indeed, mom enough, whether you wear your baby or those shoes or both. That doesn’t mean that when you tell other women the specifics of your parenting tactics, they won’t secretly be criticizing you. Sure, we’ll all do that. And then a toddler will cry, we’ll drop it, and move on to mutually compatible things.
Yes, if we’re being honest, it makes most of us feel better to think that somehow, in some way, we’re kicking someone else’s ass in the parenting department. Unfortunately for that lowly human instinct, we’re all parenting unique individuals, and while we may indeed kick ass uniquely for them, our techniques simply won’t extend to all other children. It’s not a war. It’s human… humans raising other humans. Not one thing we do, one philosophy we follow, will guarantee the outcome we intend. We don’t even need science, for once, to tell us that.
In the course of writing a paper on women and STEM, I came across articles in the Journal of Sex Research, as one does. [The “related papers” button on PubMed is one of the best ways ever to let a whole day get away from you.] Given that I have just moved to a new area and may dip toes into the dating pool, and I’m a scientist, of course I had to investigate the latest research on dating, sex, and loooooove.
The four basic categories of molecules for building life are carbohydrates, lipids, proteins, and nucleic acids.
Carbohydrates serve many purposes, from energy to structure to chemical communication, as monomers or polymers.
Lipids, which are hydrophobic, also have different purposes, including energy storage, structure, and signaling.
Proteins, made of amino acids in up to four structural levels, are involved in just about every process of life.
The nucleic acids DNA and RNA consist of four nucleotide building blocks, and each has different purposes.
The longer version
Life is so diverse and unwieldy, it may surprise you to learn that we can break it down into four basic categories of molecules. Possibly even more implausible is the fact that two of these categories of large molecules themselves break down into a surprisingly small number of building blocks. The proteins that make up all of the living things on this planet and ensure their appropriate structure and smooth function consist of only 20 different kinds of building blocks. Nucleic acids, specifically DNA, are even more basic: only four different kinds of molecules provide the materials to build the countless different genetic codes that translate into all the different walking, swimming, crawling, oozing, and/or photosynthesizing organisms that populate the third rock from the Sun.
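A quick back-of-the-envelope calculation shows why so few building blocks suffice. This is just an illustrative sketch (plain Python, not anything from the chapter's figures): each position in a chain can hold any one of the available monomers, so the number of distinct sequences grows exponentially with chain length.

```python
# Illustrative combinatorics: how many distinct ordered chains ("polymers")
# can be built from a small alphabet of building blocks ("monomers")?

def sequence_count(alphabet_size: int, length: int) -> int:
    """Each of `length` positions can hold any of `alphabet_size` monomers."""
    return alphabet_size ** length

# DNA has 4 nucleotides; proteins have 20 amino acids.
print(sequence_count(4, 10))   # 4**10  = 1,048,576 possible 10-base sequences
print(sequence_count(20, 10))  # 20**10 = over 10 trillion possible 10-residue peptides
```

Even chains only ten units long already come in more varieties than there are organisms on Earth, which is why four nucleotides and twenty amino acids are plenty.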
Big Molecules with Small Building Blocks
The functional groups, assembled into building blocks on backbones of carbon atoms, can be bonded together to yield large molecules that we classify into four basic categories. These molecules, in many different permutations, are the basis for the diversity that we see among living things. They can consist of thousands of atoms, but only a handful of different kinds of atoms form them. It’s like building apartment buildings using a small selection of different materials: bricks, mortar, iron, glass, and wood. Arranged in different ways, these few materials can yield a huge variety of structures.
We encountered functional groups and the SPHONC in Chapter 3. These components form the four categories of molecules of life. These Big Four biological molecules are carbohydrates, lipids, proteins, and nucleic acids. They can have many roles, from giving an organism structure to being involved in one of the millions of processes of living. Let’s meet each category individually and discover the basic roles of each in the structure and function of life.
Carbohydrates
You have met carbohydrates before, whether you know it or not. We refer to them casually as “sugars,” molecules made of carbon, hydrogen, and oxygen. A sugar molecule has a carbon backbone, usually five or six carbons in the ones we’ll discuss here, but it can be as few as three. Sugar molecules can link together in pairs or in chains or branching “trees,” either for structure or energy storage.
When you look on a nutrition label, you’ll see reference to “sugars.” That term includes carbohydrates that provide energy, which we get from breaking the chemical bonds in a sugar called glucose. The “sugars” on a nutrition label also include those that give structure to a plant, which we call fiber. Both are important nutrients for people.
Sugars serve many purposes. They give crunch to the cell walls of a plant or the exoskeleton of a beetle and chemical energy to the marathon runner. When attached to other molecules, like proteins or fats, they aid in communication between cells. But before we get any further into their uses, let’s talk structure.
The sugars we encounter most in basic biology have their five or six carbons linked together in a ring. There’s no need to dive deep into organic chemistry, but there are a couple of essential things to know to interpret the standard representations of these molecules.
Check out the sugars depicted in the figure. The top-left molecule, glucose, has six carbons, which have been numbered. The sugar to its right is the same glucose, with all but one “C” removed. The other five carbons are still there but are inferred using the conventions of organic chemistry: Anywhere there is a corner, there’s a carbon unless otherwise indicated. It might be a good exercise for you to add in a “C” over each corner so that you gain a good understanding of this convention. You should end up adding in five carbon symbols; the sixth is already given because that is conventionally included when it occurs outside of the ring.
On the left is a glucose with all of its carbons indicated. They’re also numbered, which is important to understand now for information that comes later. On the right is the same molecule, glucose, without the carbons indicated (except for the sixth one). Wherever there is a corner, there is a carbon, unless otherwise indicated (as with the oxygen). On the bottom left is ribose, the sugar found in RNA. The sugar on the bottom right is deoxyribose. Note that at carbon 2 (*), the ribose and deoxyribose differ by a single oxygen.
The lower left sugar in the figure is a ribose. In this depiction, the carbons, except the one outside of the ring, have not been drawn in, and they are not numbered. This is the standard way sugars are presented in texts. Can you tell how many carbons there are in this sugar? Count the corners and don’t forget the one that’s already indicated!
If you said “five,” you are right. Ribose is a pentose (pent = five) and happens to be the sugar present in ribonucleic acid, or RNA. Think to yourself what the sugar might be in deoxyribonucleic acid, or DNA. If you thought, deoxyribose, you’d be right.
The fourth sugar given in the figure is a deoxyribose. In organic chemistry, it’s not enough to know that corners indicate carbons. Each carbon also has a specific number, which becomes important in discussions of nucleic acids. Luckily, we get to keep our carbon counting pretty simple in basic biology. To count carbons, you start with the carbon to the right of the non-carbon corner of the molecule. The deoxyribose or ribose always looks to me like a little cupcake with a cherry on top. The “cherry” is an oxygen. To the right of that oxygen, we start counting carbons, so that corner to the right of the “cherry” is the first carbon. Now, keep counting. Here’s a little test: What is hanging down from carbon 2 of the deoxyribose?
If you said a hydrogen (H), you are right! Now, compare the deoxyribose to the ribose. Do you see the difference in what hangs off of the carbon 2 of each sugar? You’ll see that the carbon 2 of ribose has an –OH, rather than an H. The reason the deoxyribose is called that is because the O on the second carbon of the ribose has been removed, leaving a “deoxyed” ribose. This tiny distinction between the sugars used in DNA and RNA is significant enough in biology that we use it to distinguish the two nucleic acids.
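The "one oxygen" difference can be checked with simple formula bookkeeping. The molecular formulas below are the standard textbook values; the helper function is only an illustration, not anything defined in the chapter.

```python
# Standard molecular formulas for three sugars discussed in the text.
FORMULAS = {
    "glucose":     {"C": 6, "H": 12, "O": 6},
    "ribose":      {"C": 5, "H": 10, "O": 5},
    "deoxyribose": {"C": 5, "H": 10, "O": 4},
}

def diff(a: str, b: str) -> dict:
    """Element-by-element difference between two formulas (a minus b)."""
    elements = set(FORMULAS[a]) | set(FORMULAS[b])
    return {e: FORMULAS[a].get(e, 0) - FORMULAS[b].get(e, 0) for e in elements}

# Ribose differs from deoxyribose by exactly one oxygen (the -OH vs -H at carbon 2).
print(diff("ribose", "deoxyribose"))
```

Running the comparison confirms that the entire DNA/RNA naming distinction hangs on a single oxygen atom.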
In fact, these subtle differences in sugars mean big differences for many biological molecules. Below, you’ll find a couple of ways that apparently small changes in a sugar molecule can mean big changes in what it does. These little changes make the difference between a delicious sugar cookie and the crunchy exoskeleton of a dung beetle.
Sugar and Fuel
A marathon runner keeps fuel on hand in the form of “carbs,” or sugars. These fuels provide the marathoner’s straining body with the energy it needs to keep the muscles pumping. When we take in sugar like this, it often comes in the form of glucose molecules attached together in a polymer called starch. We are especially equipped to start breaking off individual glucose molecules the minute we start chewing on a starch.
Double X Extra: A monomer is a building block (mono = one) and a polymer is a chain of monomers. With a few dozen monomers or building blocks, we get millions of different polymers. That may sound nutty until you think of the infinity of values that can be built using only the numbers 0 through 9 as building blocks or the intricate programming that is done using only a binary code of zeros and ones in different combinations.
Our bodies then can rapidly take the single molecules, or monomers, into cells and crack open the chemical bonds to transform the energy for use. The bonds of a sugar are packed with chemical energy that we capture to build a different kind of energy-containing molecule that our muscles access easily. Most species rely on this process of capturing energy from sugars and transforming it for specific purposes.
Polysaccharides: Fuel and Form
Plants use the Sun’s energy to make their own glucose, and starch is actually a plant’s way of storing up that sugar. Potatoes, for example, are quite good at packing away tons of glucose molecules and are known to dieticians as a “starchy” vegetable. The glucose molecules in starch are packed fairly closely together. A string of sugar molecules bonded together through dehydration synthesis, as they are in starch, is a polymer called a polysaccharide (poly = many; saccharide = sugar). When the monomers of the polysaccharide are released, as when our bodies break them up, the reaction that releases them is called hydrolysis.
Double X Extra: The specific reaction that hooks one monomer to another in a covalent bond is called dehydration synthesis because in making the bond–synthesizing the larger molecule–a molecule of water is removed (dehydration). The reverse is hydrolysis (hydro = water; lysis = breaking), which breaks the covalent bond by the addition of a molecule of water.
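The water accounting behind dehydration synthesis is easy to sketch. Assuming a linear chain of glucose units (C6H12O6) with one water molecule (H2O) released per bond formed, a hypothetical helper might tally the polymer's formula like this:

```python
# Sketch of dehydration-synthesis bookkeeping: joining n glucose monomers
# (C6H12O6) forms n-1 bonds, releasing n-1 waters (H2O).

def polysaccharide_formula(n: int) -> dict:
    """Molecular formula of a linear chain of n glucose units."""
    waters_released = n - 1
    return {
        "C": 6 * n,
        "H": 12 * n - 2 * waters_released,  # each water removed costs 2 H
        "O": 6 * n - waters_released,        # ...and 1 O
    }

# Two glucoses joined give maltose, C12H22O11.
print(polysaccharide_formula(2))  # {'C': 12, 'H': 22, 'O': 11}
```

Hydrolysis is the same arithmetic run in reverse: each bond broken adds one water back.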
Although plants make their own glucose and animals acquire it by eating the plants, animals can also package away the glucose they eat for later use. Animals, including humans, store glucose in a polysaccharide called glycogen, which is more branched than starch. In us, we build this energy reserve primarily in the liver and access it when our glucose levels drop.
Whether starch or glycogen, the glucose molecules that are stored are bonded together so that all of the molecules are oriented the same way. If you view the sixth carbon of the glucose to be a “carbon flag,” you’ll see in the figure that all of the glucose molecules in starch are oriented with their carbon flags on the upper left.
The orientation of monomers of glucose in polysaccharides can make a big difference in the use of the polymer. The glucoses in the molecule on the top are all oriented “up” and form starch. The glucoses in the molecule on the bottom alternate orientation to form cellulose, which is quite different in its function from starch.
Storing up sugars for fuel and using them as fuel isn’t the end of the uses of sugar. In fact, sugars serve as structural molecules in a huge variety of organisms, including fungi, bacteria, plants, and insects.
The primary structural role of a sugar is as a component of the cell wall, giving the organism support against gravity. In plants, the familiar old glucose molecule serves as one building block of the plant cell wall, but with a catch: The molecules are oriented in an alternating up-down fashion. The resulting structural sugar is called cellulose.
That simple difference in orientation means the difference between a polysaccharide as fuel for us and a polysaccharide as structure. Insects take it a step further with the polysaccharide that makes up their exoskeleton, or outer shell. Once again, the building block is glucose, arranged as it is in cellulose, in an alternating conformation. But in insects, each glucose has a little extra added on, a chemical group called an N-acetyl group. This addition of a single functional group alters the use of cellulose and turns it into a structural molecule that gives bugs that special crunchy sound when you accidentally…ahem…step on them.
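For the curious, the N-acetyl swap can also be tracked at the level of molecular formulas. This sketch assumes the standard substitution (one -OH at carbon 2 replaced by an -NH-CO-CH3 group) and the standard formula for glucose; the helper is purely illustrative.

```python
# Toy formula arithmetic: turning glucose into the monomer of chitin
# (N-acetylglucosamine) by swapping one hydroxyl for an N-acetyl group.

def add_n_acetyl(formula: dict) -> dict:
    """Replace one -OH with -NH-CO-CH3 in a molecular formula."""
    out = dict(formula)
    out["O"] -= 1               # remove the hydroxyl's oxygen...
    out["H"] -= 1               # ...and its hydrogen
    out["C"] += 2               # the acetyl group contributes 2 carbons,
    out["H"] += 4               # 4 hydrogens (N-H plus CH3),
    out["O"] += 1               # 1 carbonyl oxygen,
    out["N"] = out.get("N", 0) + 1  # and 1 nitrogen
    return out

glucose = {"C": 6, "H": 12, "O": 6}
print(add_n_acetyl(glucose))  # C8H15NO6 -- N-acetylglucosamine, chitin's building block
```

One small functional group, and the fuel molecule of starch becomes the armor plating of a beetle.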
These variations on the simple theme of a basic carbon-ring-as-building-block occur again and again in biological systems. In addition to serving roles in structure and as fuel, sugars also play a role in function. The attachment of subtly different sugar molecules to a protein or a lipid is one way cells communicate chemically with one another in refined, regulated interactions. It’s as though the cells talk with each other using a specialized, sugar-based vocabulary. Typically, cells display these sugary messages to the outside world, making them available to other cells that can recognize the molecular language.
Lipids: The Fatty Trifecta
Starch makes for good, accessible fuel, something that we immediately attack chemically and break up for quick energy. But fats are energy that we are supposed to bank away for a good long time and break out in times of deprivation. Like sugars, fats serve several purposes, including as a dense source of energy and as a universal structural component of cell membranes everywhere.
Fats: the Good, the Bad, the Neutral
Turn again to a nutrition label, and you’ll see a few references to fats, also known as lipids. (Fats are slightly less confusing than sugars in that they have only two names.) The label may break down fats into categories, including trans fats, saturated fats, unsaturated fats, and cholesterol. You may have learned that trans fats are “bad” and that there is good cholesterol and bad cholesterol, but what does it all mean?
Let’s start with what we mean when we say saturated fat. The question is, saturated with what? There is a specific kind of dietary fat called the triglyceride. As its name implies, it has a structural motif in which something is repeated three times. That something is a chain of carbons and hydrogens, hanging off in triplicate from a head made of glycerol, as the figure shows. Those three carbon-hydrogen chains, or fatty acids, are the “tri” in a triglyceride. Chains like this can be many carbons long.
Double X Extra: We call a fatty acid a fatty acid because it’s got a carboxylic acid attached to a fatty tail. A triglyceride consists of three of these fatty acids attached to a molecule called glycerol. Our dietary fat primarily consists of these triglycerides.
Triglycerides come in several forms. You may recall that carbon can form several different kinds of bonds, including single bonds, as with hydrogen, and double bonds, as with itself. A chain of carbons and hydrogens can have every single available carbon bond taken by a hydrogen in a single covalent bond. This scenario of hydrogen saturation yields a saturated fat: the fat is saturated to its fullest, with every available bond occupied by a hydrogen single-bonded to a carbon.
Saturated fats have predictable characteristics. They lie flat easily and stick to each other, meaning that at room temperature, they form a dense solid. You will realize this if you find a little bit of fat on you to pinch. Does it feel pretty solid? That’s because animal fat is saturated fat. The fat on a steak is also solid at room temperature, and in fact, it takes a pretty high heat to loosen it up enough to become liquid. Animals are not the only source of saturated fat: coconut and palm oils are also known for their high saturated fat content.
The top graphic above depicts a triglyceride with the glycerol, acid, and three hydrocarbon tails. The tails of this saturated fat, with every possible hydrogen space occupied, lie comparatively flat against one another, and this kind of fat is solid at room temperature. The fat on the bottom, however, is unsaturated: wherever two carbons have double bonded, a couple of hydrogens get booted out, leaving a bend or kink in the tail. Because of the space these kinks create, this fat is probably not solid at room temperature, but liquid.
You can probably now guess what an unsaturated fat is: one that has one or more hydrogens missing. Instead of single bonding with hydrogens at every available space, two or more carbons in an unsaturated fat chain will form a double bond with each other, leaving no space for a hydrogen. Because some carbons in the chain share two pairs of electrons, they physically draw closer to one another than they do in a single bond. This tighter bonding results in a “kink” in the fatty acid chain.
In a fat with these kinks, the three fatty acids don’t lie as densely packed with each other as they do in a saturated fat. The kinks leave spaces between them. Thus, unsaturated fats are less dense than saturated fats and often will be liquid at room temperature. A good example of a liquid unsaturated fat at room temperature is canola oil.
A few decades ago, food scientists discovered that unsaturated fats could be resaturated or hydrogenated to behave more like saturated fats and have a longer shelf life. The process of hydrogenation–adding in hydrogens–yields trans fat. This kind of processed fat is now frowned upon and is being removed from many foods because of its associations with adverse health effects. If you check a food label and it lists among the ingredients “partially hydrogenated” oils, that can mean that the food contains trans fat.
Double X Extra: A triglyceride can have up to three different fatty acids attached to it. Canola oil, for example, consists primarily of oleic acid, linoleic acid, and linolenic acid, all of which are unsaturated fatty acids with 18 carbons in their chains.
Why do we take in fat anyway? Fat is a necessary nutrient for everything from our nervous systems to our circulatory health. It also, under appropriate conditions, is an excellent way to store up densely packaged energy for the times when stores are running low. We really can’t live very well without it.
Phospholipids: An Abundant Fat
You may have heard that oil and water don’t mix, and indeed, it is something you can observe for yourself. Drop a pat of butter–pure saturated fat–into a bowl of water and watch it just sit there. Even if you try mixing it with a spoon, it will just sit there. Now, drop a spoon of salt into the water and stir it a bit. The salt seems to vanish. You’ve just illustrated the difference between a water-fearing (hydrophobic) and a water-loving (hydrophilic) substance.
Generally speaking, compounds that have an unequal sharing of electrons (like ions or anything with a covalent bond between oxygen and hydrogen or nitrogen and hydrogen) will be hydrophilic. The reason is that a charge or an unequal electron sharing gives the molecule polarity that allows it to interact with water through hydrogen bonds. A fat, however, consists largely of hydrogen and carbon in those long chains. Carbon and hydrogen have roughly equivalent electronegativities, and their electron-sharing relationship is relatively nonpolar. Fat, lacking in polarity, doesn’t interact with water. As the butter demonstrated, it just sits there.
There is one exception to that little maxim about fat and water, and that exception is the phospholipid. This lipid has a special structure that makes it just right for the job it does: forming the membranes of cells. A phospholipid consists of a polar phosphate head–P and O don’t share equally–and a couple of nonpolar hydrocarbon tails, as the figure shows. If you look at the figure, you’ll see that one of the two tails has a little kick in it, thanks to a double bond between the two carbons there.
Phospholipids form a double layer and are the major structural components of cell membranes. The bend, or kick, in one of the hydrocarbon tails helps ensure the fluidity of the cell membrane. The molecules are bipolar, with hydrophilic heads for interacting with the internal and external watery environments of the cell and hydrophobic tails that help cell membranes behave as general security guards.
The kick and the bipolar (hydrophobic and hydrophilic) nature of the phospholipid make it the perfect molecule for building a cell membrane. A cell needs a watery outside to survive. It also needs a watery inside to survive. Thus, it must face the inside and outside worlds with something that interacts well with water. But it also must protect itself against unwanted intruders, providing a barrier that keeps unwanted things out and keeps necessary molecules in.
Phospholipids achieve it all. They assemble into a double layer around a cell but orient to allow interaction with the watery external and internal environments. On the layer facing the inside of the cell, the phospholipids orient their polar, hydrophilic heads to the watery inner environment and their tails away from it. On the layer to the outside of the cell, they do the same.
As the figure shows, the result is a double layer of phospholipids with each layer facing a polar, hydrophilic head to the watery environments. The tails of each layer face one another. They form a hydrophobic, fatty moat around a cell that serves as a general gatekeeper, much in the way that your skin does for you. Charged particles cannot simply slip across this fatty moat because they can’t interact with it. And to keep the fat fluid, one tail of each phospholipid has that little kick, giving the cell membrane a fluid, liquidy flow and keeping it from being solid and unforgiving at temperatures in which cells thrive.
Steroids: Here to Pump You Up?
Our final molecule in the lipid fatty trifecta is cholesterol. As you may have heard, there are a few different kinds of cholesterol, some of which we consider “good” and some of which we consider “bad.” The good cholesterol, high-density lipoprotein, or HDL, in part helps us out because it removes the bad cholesterol, low-density lipoprotein or LDL, from our blood. The presence of LDL is associated with inflammation of the lining of the blood vessels, which can lead to a variety of health problems.
But cholesterol has some other reasons for existing. One of its roles is in the maintenance of cell membrane fluidity. Cholesterol is inserted throughout the lipid bilayer and serves as a block to the fatty tails that might otherwise stick together and become a bit too solid.
Cholesterol’s other starring role as a lipid is as the starting molecule for a class of hormones we call steroids or steroid hormones. With a few snips here and additions there, cholesterol can be changed into the steroid hormones progesterone, testosterone, or estrogen. These molecules look quite similar, but they play very different roles in organisms. Testosterone, for example, generally masculinizes vertebrates (animals with backbones), while progesterone and estrogen play a role in regulating the ovulatory cycle.
Double X Extra: A hormone is a blood-borne signaling molecule. It can be lipid based, like testosterone, or a short protein, like insulin.
Proteins
As you progress through learning biology, one thing will become more and more clear: Most cells function primarily as protein factories. It may surprise you to learn that proteins, which we often talk about in terms of food intake, are the fundamental molecule of many of life’s processes. Enzymes, for example, form a single broad category of proteins, but there are millions of them, each one governing a small step in the molecular pathways that are required for living.
Levels of Structure
Amino acids are the building blocks of proteins. A few amino acids strung together form a peptide, while many amino acids linked in a long chain form a polypeptide. When the amino acids in such a chain interact with each other to form a properly folded molecule, we call that molecule a protein.
For a string of amino acids to ultimately fold up into an active protein, they must first be assembled in the correct order. The code for their assembly lies in the DNA, but once that code has been read and the amino acid chain built, we call that simple, unfolded chain the primary structure of the protein.
This chain can consist of hundreds of amino acids that interact all along the sequence. Some amino acids are hydrophobic and some are hydrophilic. In this context, like interacts best with like, so the hydrophobic amino acids will interact with one another, and the hydrophilic amino acids will interact together. As these contacts occur along the string of molecules, different conformations will arise in different parts of the chain. We call these different conformations along the amino acid chain the protein’s secondary structure.
Once those interactions have occurred, the protein can fold into its final, or tertiary, structure and be ready to serve as an active participant in cellular processes. To achieve the tertiary structure, the amino acid chain’s secondary interactions must usually remain intact, and the pH, temperature, and salt balance must be just right to facilitate the folding. This tertiary folding takes place through interactions of the secondary structures along the different parts of the amino acid chain.
The final product is a properly folded protein. If we could see it with the naked eye, it might look a lot like a wadded up string of pearls, but that “wadded up” look is misleading. Protein folding is a carefully regulated process that is determined at its core by the amino acids in the chain: their hydrophobicity and hydrophilicity and how they interact together.
In many instances, however, a complete protein consists of more than one amino acid chain, and the complete protein has two or more interacting strings of amino acids. A good example is hemoglobin in red blood cells. Its job is to grab oxygen and deliver it to the body’s tissues. A complete hemoglobin protein consists of four separate amino acid chains all properly folded into their tertiary structures and interacting as a single unit. In cases like this involving two or more interacting amino acid chains, we say that the final protein has a quaternary structure. Some proteins can consist of as many as a dozen interacting chains, behaving as a single protein unit.
A Plethora of Purposes
What does a protein do? Let us count the ways. Really, that’s almost impossible because proteins do just about everything. Some of them tag things. Some of them destroy things. Some of them protect. Some mark cells as “self.” Some serve as structural materials, while others are highways or motors. They aid in communication, they operate as signaling molecules, they transfer molecules and cut them up, they interact with each other in complex, interrelated pathways to build things up and break things down. They regulate genes and package DNA, and they regulate and package each other.
As described above, proteins are the final folded arrangement of a string of amino acids. One way we obtain these building blocks for the millions of proteins our bodies make is through our diet. You may hear about foods that are high in protein or people eating high-protein diets to build muscle. When we take in those proteins, we can break them apart and use the amino acids that make them up to build proteins of our own.
Nucleic Acids
How does a cell know which proteins to make? It has a code for building them, one that is especially guarded in a cellular vault in our cells called the nucleus. This code is deoxyribonucleic acid, or DNA. The cell makes a copy of this code and sends it out to specialized structures that read it and build proteins based on what they read. As with any code, a typo–a mutation–can result in a message that doesn’t make as much sense. When the code gets changed, sometimes the protein that the cell builds using that code will be changed, too.
Biohazard! The names associated with nucleic acids can be confusing because they all start with nucle-. It may seem obvious or easy now, but a brain freeze on a test could mix you up. You need to fix in your mind that the shorter term (10 letters, four syllables), nucleotide, refers to the smaller molecule, the three-part building block. The longer term (12 characters, including the space, and five syllables), nucleic acid, which is inherent in the names DNA and RNA, designates the big, long molecule.
DNA vs. RNA: A Matter of Structure
DNA and its nucleic acid cousin, ribonucleic acid, or RNA, are both made of the same kinds of building blocks. These building blocks are called nucleotides. Each nucleotide consists of three parts: a sugar (ribose for RNA and deoxyribose for DNA), a phosphate, and a nitrogenous base. In DNA, every nucleotide has identical sugars and phosphates, and in RNA, the sugar and phosphate are also the same for every nucleotide.
So what’s different? The nitrogenous bases. DNA has a set of four to use as its coding alphabet. These are the purines, adenine and guanine, and the pyrimidines, thymine and cytosine. The nucleotides are abbreviated by their initial letters as A, G, T, and C. From variations in the arrangement and number of these four molecules, all of the diversity of life arises. Just four different types of the nucleotide building blocks, and we have you, bacteria, wombats, and blue whales.
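To get a feel for how much variety a four-letter alphabet can encode, here is a quick back-of-the-envelope sketch in Python (the function name `sequence_space` is my own, purely for illustration):

```python
def sequence_space(n: int, alphabet_size: int = 4) -> int:
    """Number of distinct sequences of length n over a given alphabet.

    For DNA the alphabet is {A, G, T, C}, so a stretch of n nucleotides
    can be arranged in 4**n different ways.
    """
    return alphabet_size ** n

# Even short stretches of DNA allow enormous variety:
print(sequence_space(3))   # 64 possible 3-nucleotide sequences
print(sequence_space(10))  # 1048576 possible 10-nucleotide sequences
```

The number of possibilities grows exponentially with length, which is why just four building blocks are enough to spell out you, bacteria, wombats, and blue whales.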
RNA is also basic at its core, consisting of only four different nucleotides. In fact, it uses three of the same nitrogenous bases as DNA–A, G, and C–but it substitutes a base called uracil (U) where DNA uses thymine. Uracil is a pyrimidine.
DNA vs. RNA: Function Wars
An interesting thing about the nitrogenous bases of the nucleotides is that they pair with each other, using hydrogen bonds, in a predictable way. An adenine will almost always bond with a thymine in DNA or a uracil in RNA, and cytosine and guanine will almost always bond with each other. This pairing capacity allows the cell to use a sequence of DNA and build either a new DNA sequence, using the old one as a template, or build an RNA sequence to make a copy of the DNA.
These two different uses of A-T/U and C-G base pairing serve two different purposes. DNA is copied into DNA usually when a cell is preparing to divide and needs two complete sets of DNA for the new cells. DNA is copied into RNA when the cell needs to send the code out of the vault so proteins can be built. The DNA stays safely where it belongs.
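Because the pairing rules are so regular, they can be captured in a few lines of Python. This is only an illustrative sketch of the A-T/U and C-G rules described above (the names `replicate` and `transcribe` are mine, and real replication and transcription involve far more cellular machinery):

```python
# Base-pairing rules as lookup tables.
# Copying DNA into DNA uses A<->T and C<->G;
# copying DNA into RNA substitutes U wherever T would pair.
DNA_PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}
RNA_PAIR = {"A": "U", "T": "A", "C": "G", "G": "C"}

def replicate(template: str) -> str:
    """Build the complementary DNA strand for a template strand."""
    return "".join(DNA_PAIR[base] for base in template)

def transcribe(template: str) -> str:
    """Build the RNA copy of a DNA template strand."""
    return "".join(RNA_PAIR[base] for base in template)

print(replicate("ATGC"))   # TACG
print(transcribe("ATGC"))  # UACG
```

Note that replicating a strand twice returns the original sequence, which is exactly the property that lets a cell use an old strand as a template for a faithful copy.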
RNA is really a nucleic acid jack-of-all-trades. It not only serves as the copy of the DNA but also is the main component of the two types of cellular workers that read that copy and build proteins from it. At one point in this process, the three types of RNA come together in protein assembly to make sure the job is done right.
She gave me a few minutes to meet my daughter before she reeled me back into a state that was my new reality. “You’re not finished, Jeanne. You still need to birth your placenta.” What?!?! More pushing? But I was lucky, and the efforts required to bring my placenta ex vivo were minimal.
This is the second placenta my body helped make. OK, so it doesn’t EXACTLY look like meatloaf…
The idea of a placenta, the only human organ to develop completely and temporarily during pregnancy and then be discarded, was fascinating. That thing sitting in a rectangular periwinkle bucket was what allowed me to grow another human… inside of my body! There was no way I was not going to check it out, as well as create a permanent record of its relatively short-lived existence.
My first impression was that it looked like “meatloaf.” Not necessarily a well made meatloaf, but perhaps one that is made by my mother (sorry mom). But, alas, chaos reigned and I wasn’t able to really take a good look. However, for my second birth and hence second placenta, my midwife indulged me with a more detailed look and a mini-lesson.
Baby’s eye view: Where geekling deux spent 39 weeks and 4 days.
Her gloved hands, still wet with my blood and amniotic fluid, slid into the opening that was artificially created with a tool resembling a crocheting needle. She opened the amniotic sac wide so I could get a baby’s eye view of the crimson organ that served as a nutritional trading post between me and my new bundle of joy.
She explained that the word “placenta” comes from the Greek word plakoeis, which translates to “flat cake” (however, I’m sure if my mom’s meatloaf had been more common in ancient Greece, the placenta would be named differently). “It’s one of the defining features of being a mammal,” she explained as I was working on another mammalian trait – getting my baby to nurse for the first time.
That was about all I could mentally digest at the time, but still, more than three years later, the placenta continues to fascinate me, mostly due to the fact that it is responsible for growing new life. It’s a natural topic for this long overdue Pregnancy101 post, so let’s dive in!
Development of the placenta
It all starts when a fertilized egg implants itself into the wall of the uterus. But, in order to fully understand how it works, we should start with an overview of the newly formed embryo.
The very early stages of us (and many other things that are alive).
The trophoblast invades the uterus, leading to implantation of the blastocyst.
As soon as a male sperm cell fuses with a female egg cell, fertilization occurs and the cells begin to multiply. But, they remain contained within a tiny sphere. As the cells continue to divide, they are given precise instructions depending on their location within that sphere, and begin to transform into specific cell types. This process, which is called cellular differentiation, actually seals the fate of every cell in our body, sort of like how we all have different jobs – some of us transport things, some of us police the neighborhoods, some of us build structures, some of us communicate information, some of us deal with food, some of us get rid of waste, etc. Every cell gets a job (it’s the only example of a 100% employment rate!).
Now back to the cells in the fertilized egg. As they start to learn what their specific job will be, the cells within the sphere will start to organize themselves. About 5 days after fertilization, the sphere of cells becomes something called a blastocyst, which readies itself for implantation into the wall of the uterus.
The act of implantation is largely due to the cells found on the perimeter of the blastocyst sphere. These cells, collectively known as the trophoblast, release a very important hormone – human chorionic gonadotropin (hCG) – that tells the uterus to prepare for its new tenant. (If you recall, hCG is the hormone picked up by pregnancy tests.) Around day 7, the trophoblast cells start to invade the lining of the uterus, and begin to form the placenta. It is at this point that pregnancy officially begins. (Here is a cool video, created by the UNSW Embryology Department, showing the process of implantation.)
Structure of the placenta
Eventually the trophoblast becomes the recognizable organ that is the placenta. Consider the “flat cake” analogy, with the top of the cake being the fetal side (the side that is in contact with the baby), and the bottom of the cake being the maternal side (the side that is in contact with the mother).
Cross section of the placenta: Blood vessels originating from the fetus sit in a pool of maternal blood, which is constantly replenished by maternal arteries and veins. The red represents oxygenated blood, and the blue represents de-oxygenated blood.
Projecting from the center of the fetal side of the placenta are two arteries and one vein, coiled together in a long, rubbery rope, often bluish-grey in color. This umbilical cord serves as the tunnel through which nutrients and waste are shuttled, and essentially serves to plug the baby into the mother’s metabolic processes. At the umbilical cord-placenta nexus, the umbilical cord arteries and vein branch out into a network of blood vessels, which further divide into a tree-like mass of vessels within the placenta.
These tree-like masses originating from the umbilical cord (and thus fetus) sit in a cavity called the intervillous space, and are bathed in nutrient-rich maternal blood. This maternal blood, which provides the fetus with a means for both nutrient delivery and waste elimination, is continually replenished via a network of maternal arteries and veins that feed into the intervillous space. Furthermore, these arteries and veins help to anchor the placenta into the uterine wall. One of the most interesting aspects of the mother-fetus relationship is that the blood vessel connection is indirect. This helps to prevent a detrimental immune response, which could lead to immunological rejection of the fetus (sort of like how a transplanted organ can become rejected by the recipient).
Functions of the placenta
Just like a plant needs sunlight, oxygen, and water to grow, a baby needs all sorts of nutrients to develop. And since a baby also produces waste, by nature of it being alive and all, there is an absolute requirement for waste removal. However, because we can’t just give a developing fetus food or a bottle, nor are we able to change diapers in utero, the onus lies completely on the biological mother.
This is where the placenta comes in. Because the fetus is plugged into the circulatory system of the mother via the umbilical cord and placenta, the fetus is provided with necessary nutrients and a mechanism to get rid of all the byproducts of metabolism. Essentially, the placenta acts as a waitress of sorts – providing the food, and cleaning it all up when the fetus is done eating.
But it’s not just about nutrition and waste. The placenta also serves as a hormone factory, making and secreting biological chemicals to help sustain the pregnancy. I mentioned above that the placenta produces hCG, which pretty much serves as a master regulator for pregnancy in that it helps control the production of maternally produced hormones, estrogen and progesterone. It also helps to suppress the mother’s immunological response to the placenta (along with other factors), which cloaks the growing baby, thereby hiding it from being viewed as a “foreign” invader (like a virus or bacteria).
Another hormone produced by the placenta is human placental lactogen (hPL), which tells the mother to increase her mammary tissue. This helps mom prepare for nursing her baby once it’s born, and is the primary reason why our boobs tend to get bigger when we are pregnant. (Yay for big boobies, but my question is, what the hell transforms our rear ends into giant double cheeseburgers, and what biological purpose does that serve?? But I digress…)
Despite the fact that the mother’s circulatory system remains separate from the baby’s circulatory system, there is a clear mixing of metabolic products (nutrients, waste, hormones, etc.). In essence, if it is in mom’s blood stream, it will very likely pass into baby’s blood stream. This is the very reason that pregnant mothers are strongly advised to stay away from cigarettes, drugs, alcohol, and other toxic chemicals, all of which can easily pass through the placental barrier lying between mother and fetus. When moms do not heed this warning, the consequences can be devastating to the developing fetus, potentially leading to birth defects or even miscarriage.
There are also situations that could compromise the functions of the placenta – restriction of blood supply, loss of placental tissue, muted placental growth, just to name a few – reducing the chances of getting and/or staying pregnant. This placental insufficiency is generally accompanied by slow growth of the uterus, low rate of weight gain, and most importantly, reduced fetal growth.
And it’s not just the growth of the placenta that is important – where the placenta attaches to the uterus is also very important. When the placenta grows on top of the opening of the birth canal, the chances for a normal, vaginal birth are obliterated. This condition, known as placenta previa, is actually quite dangerous and can cause severe bleeding in the third trimester. About 0.5% of all pregnant women experience this, and it is one of the true medical conditions that absolutely requires a C-section.
Then, there is the issue of attachment. If the placenta doesn’t attach well to the uterus, it could end up peeling away from the uterine wall, which can cause vaginal bleeding, as well as deprive the baby from nutrient delivery and waste disposal. This abruption of the placenta is complicated by the use of drugs, smoking, blood clotting disorders, high blood pressure, or if the mother has diabetes or a history of placental abruption.
Conversely, there are times when the blood vessels originating from the placenta implant too deeply into the uterus, which can lead to a placenta accreta. If this occurs, the mother generally delivers via C-section, followed by a complete hysterectomy.
Cultural norms and the placenta
There are many instances where the placenta plays a huge role in the culture of a society. For instance, both the Maori people of New Zealand and the Navajo people of the Southwestern US will bury the placenta. There is also some folklore associated with the placenta, and several societies believe that it is alive, perhaps serving as a friend for the baby. But the tradition that seems to be making its way into the granola culture of the US is one that can be traced back to traditional Chinese practices: eating the placenta.
Placentophagy, or eating one’s own placenta, is very common among a variety of mammalian species. Biologically speaking, it is thought that animals that eat their own placenta do so to hide fresh births from predators, thereby increasing the chances of their babies’ survival. Others have suggested that eating the nutrient-rich placenta helps mothers to recover after giving birth.
However, these days, a growing number of new mothers are opting to ingest that which left their own body (likely) through their own vaginas. And they are doing so through a very expensive process involving dehydrating and encapsulating placental tissue.
Why would one go through this process? The claims are that placentophagy will help ward off postpartum depression, increase the supply of milk in a lactating mother, and even slow down the ageing process. But, alas, these are some pretty bold claims that are substantiated only by anecdata, and not actual science (see this).
So, even though my placentas looked like meatloaf, there was no way I was eating them. If you are considering this, I’d approach the issue with great skepticism. There are many people who will take advantage of maternal vulnerabilities in the name of cold hard cash. And, always remember, if the claims sound too good to be true, they probably are!
Thanks for tuning into this issue of Pregnancy101, and enjoy this hat, and a video!
The stormy landscape of the breast, as seen on ultrasound. At top center (dark circle) is a small cyst. Source: Wikimedia Commons. Credit: Nevit Dilmen.
By Laura Newman, contributor
In a unanimous decision, FDA has approved the first breast ultrasound imaging system for dense breast tissue “for use in combination with a standard mammography in women with dense breast tissue who have a negative mammogram and no symptoms of breast cancer.” Patients should not interpret FDA’s approval of the somo-v Automated Breast Ultrasound System as an endorsement of the device as necessarily beneficial for this indication, and this will be a thorny concept for many patients to appreciate.
Were it not for the intense pressure both to inform women that they have dense breasts and to roll out all sorts of imaging studies quickly, no matter how well they have been studied, this approval would hardly be worth posting about.
Dense breasts are worrisome to women, especially young women (in their 40s particularly), because they have proved a risk factor for developing breast cancer. Performing ultrasound on every woman who has dense breasts, no symptoms, and a normal mammogram, though, could encompass as many as 40% of women undergoing screening mammography, according to the FDA’s press release. Dense breast tissue is most common in young women, specifically women in their forties, and breast density declines with age.
The limitations of mammography in seeing through dense breast tissue have been well known for decades and the search has been on for better imaging studies. Government appointed panels have reviewed the issue and mammography for women in their forties has been controversial. What’s new is the “Are You Dense?” patient movement and legislation to inform women that they have dense breasts.
Merits and pitfalls of device approval
The approval of breast ultrasound hinges on a study of 200 women with dense breasts evaluated retrospectively at 13 sites across the United States with mammography and ultrasound. The study showed a statistically significant increase in breast cancer detection when ultrasound was used with mammography.
Approval of a device of this nature (noninvasive, already approved in general, but not for this indication) does not require the company to demonstrate that use of the device reduces morbidity or mortality, or that health benefits outweigh risks.
Eitan Amir, MD, PhD, medical oncologist at Princess Margaret Hospital, Toronto, Canada, said: “It’s really not a policy decision. All this is, is notice that if you want to buy the technology, you can.”
That’s clearly an important point, but not one that patients in the US understand. Patients hear “FDA approval” and assume that means a technology most certainly is for them and a necessary add-on. This disconnect in the FDA medical device approval process and in what patients think it means warrants an overhaul or at the minimum, a clarification for the public.
Materials for FDA submission are available on the FDA website, including the study filed with FDA and a PowerPoint presentation, but good luck finding them quickly. “In the submission by Sunnyvale CA uSystems to FDA, the company stated that screening reduces lymph node positive breast cancer,” noted Amir. “There are few data to support this comment.”
Is cancer detection a sufficient goal?
In the FDA study, more cancers were identified with ultrasound. However, one has to question whether breast cancer detection alone is meaningful in driving use of a technology. In the past year, prostate cancer detection through PSA screening has been attacked because several studies and epidemiologists have found that screening is a poor predictor of who will die from prostate cancer or be bothered by it during their lifetime. We seem to be picking up findings that don’t lead to much to worry about, according to some researchers. Could new imaging studies for breast cancer suffer the same limitation? It is possible.
Another question is whether or not the detected cancers on ultrasound in the FDA study would have been identified shortly thereafter on a routine mammogram. It’s a question that is unclear from the FDA submission, according to Amir.
One of the problems that arises from excess screening is overdiagnosis, overtreatment, and high-cost, unaffordable care. An outcomes analysis of 9,232 women in the US Breast Cancer Surveillance Consortium led by Gretchen L. Gierach, PhD, MPH, at the National Institutes of Health MD, and published online in the August 21 Journal of the National Cancer Institute, revealed: “High mammographic breast density was not associated with risk of death from breast cancer or death from any cause after accounting for other patient and tumor characteristics.” –Gierach et al., 2012
Proposed breast cancer screening tests
Meanwhile, numerous imaging modalities have been proposed as adjuncts to mammography and as potential replacements for it. In 2002, proponents of positron emission tomography (PET) asked Medicare to approve PET scans for imaging dense breast tissue, especially in Asian women. The Medicare Coverage Advisory Commission heard testimony, but in the end, Medicare did not approve it for the dense-breast indication.
PET scans are far less popular today, while magnetic resonance imaging (MR, or MRI) has emerged as an adjunct to mammography for women with certain risk factors. As with ultrasound, the outcomes data for MR screening are not in.
In an interview several months ago concerning the rise in legislation to inform women about dense breasts, which frequently leads to additional imaging studies, Monica Morrow, MD, Chief of Breast Surgery at Memorial Sloan-Kettering Cancer Center, New York, said: “There is no good data that women with dense breasts benefit from additional MR screening.” She is not the only investigator to question potentially deleterious use of MR ahead of data collection and analysis. Many breast researchers have expressed fear that women will opt for double mastectomies, based on MR findings that, in the end, may have been absolutely unnecessary.
“There is one clear indication for MR screening,” stressed Morrow, explaining that women with BRCA mutations should be screened with MRI. “Outside of that group, there was no evidence that screening women with MR was beneficial.”
At just about every breast cancer meeting in the past two years, the benefits and harms of MR and other proposed screening modalities come up, and there is no consensus in the field. It should be noted, though, that plenty of breast physicians are skeptical about broad use of MR – not just generalists outside of the field. In other words, it is not breast and radiology specialists versus the US Preventive Services Task Force – a very important message for patients to understand.
One thing is clear: as these new technologies gain FDA approval, it will be a windfall for industry. If industry is successful and doctors are biased toward promoting these tests, many may offer them to the estimated 40% of women with dense breasts who undergo routine mammograms, as well as other women evaluated as having a high lifetime risk. The tests will be offered in a setting of unclear value and uncertain harms. Even though FDA has not approved breast MRI for screening dense breasts, breast MR is being used off label and it is far more costly than mammography.
When patients raise concerns about the unaffordability of medical care, they should be counseled about the uncertain benefit and potential harms of such a test. That may be a tall order for most Americans: it’s clear that the more-is-better philosophy is alive and well. Early detection of something, anything, even something dormant and going nowhere, seems preferable to skipping a test and risking who-knows-what, and skipping is something most of us cannot imagine at the outset.
[Today’s post is from Patient POV, the blog of Laura Newman, a science writer who has worked in health care for most of her adult life, first as a health policy analyst, and as a medical journalist for the last two decades. She was a proud member of the women’s health movement. She has a longstanding interest in what matters to patients and thinks that patients should play a major role in planning and operational discussions about healthcare. Laura’s news stories have appeared in Scientific American blogs, WebMD Medical News, Medscape, Drug Topics, Applied Neurology, Neurology Today, the Journal of the National Cancer Institute, The Lancet, and BMJ, and numerous other outlets. You can find her on Twitter @lauranewmanny.] Ed note: The original version of this post contains a posted correction that is incorporated into the version you’ve read here.
The opinions in this article do not necessarily conflict with or reflect those of the DXS editorial team.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen 1.8.3"/>
<title>v_letterbox: Main Page</title>
<link href="tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="dynsections.js"></script>
<link href="navtree.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="resize.js"></script>
<script type="text/javascript" src="navtree.js"></script>
<script type="text/javascript">
$(document).ready(initResizable);
$(window).load(resizeHeight);
</script>
<link href="doxygen.css" rel="stylesheet" type="text/css" />
<link href="HTML_custom.css" rel="stylesheet" type="text/css"/>
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<table cellspacing="0" cellpadding="0">
<tbody>
<tr style="height: 56px;">
<td id="projectlogo"><img alt="Logo" src="xlogo_bg.gif"/></td>
<td style="padding-left: 0.5em;">
<div id="projectname">v_letterbox
</div>
<div id="projectbrief">Xilinx Vitis Drivers API Documentation</div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.8.3 -->
<div id="navrow1" class="tabs">
<ul class="tablist">
<li class="current"><a href="index.html"><span>Overview</span></a></li>
<li><a href="annotated.html"><span>Data Structures</span></a></li>
<li><a href="globals.html"><span>APIs</span></a></li>
<li><a href="files.html"><span>File List</span></a></li>
</ul>
</div>
</div><!-- top -->
<div id="side-nav" class="ui-resizable side-nav-resizable">
<div id="nav-tree">
<div id="nav-tree-contents">
<div id="nav-sync" class="sync"></div>
</div>
</div>
<div id="splitbar" style="-moz-user-select:none;"
class="ui-resizable-handle">
</div>
</div>
<script type="text/javascript">
$(document).ready(function(){initNavTree('index.html','');});
</script>
<div id="doc-content">
<div class="header">
<div class="headertitle">
<div class="title">v_letterbox Documentation</div> </div>
</div><!--header-->
<div class="contents">
<div class="textblock"><p>The Letterbox Layer-2 Driver. The functions in this file provide an abstraction from the register peek/poke methodology by implementing the most common use cases provided by the sub-core. See <a class="el" href="xv__letterbox__l2_8h.html">xv_letterbox_l2.h</a> for a detailed description of the layer-2 driver.</p>
<hr/>
<pre>
MODIFICATION HISTORY:

Ver   Who  Date      Changes
----- ---- --------  ------------------------------------
1.00  rco  07/21/15  Initial Release
2.00  rco  11/05/15  Integrate layer-1 with layer-2
</pre>
</div></div><!-- contents -->
</div><!-- doc-content -->
<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
<ul>
<li class="footer">Copyright © 2015 Xilinx Inc. All rights reserved.</li>
</ul>
</div>
</body>
</html>
We come to praise Mitch McConnell, not to bury him.
On Thursday, the Senate Appropriations Committee unanimously approved a bipartisan spending package containing $250 million to help states safeguard voting systems. Among the co-sponsors was the majority leader, Mr. McConnell, who, in a floor speech before the committee’s vote, declared himself “proud” to support the measure.
To say this was a welcome surprise would be an understatement. Mr. McConnell has spent the past year-plus blocking multiple bipartisan proposals for election security. This despite warnings from the intelligence community and others that American elections remain vulnerable to outside interference. In his testimony before Congress on Russia’s meddling in the 2016 presidential race, Robert Mueller, the former special counsel, stressed that Russia was still “doing it as we sit here” even as other foreign actors were poised to do the same.
In the wake of Mr. Mueller’s testimony, critics cranked up the pressure on Mr. McConnell, mocking him as a “Russian asset” and dubbing him “Moscow Mitch.” Social media went wild for the nickname. The ordinarily impervious Mr. McConnell, who is up for re-election next year, was not amused and proclaimed himself a victim of modern-day McCarthyism.
Mr. McConnell’s team says he’s on board with this new plan because it gives money directly to the states, with few strings attached. Others see that as a problem.
Q:
Crystal Report Viewer control isn't loading the images inside the report
I have an ASP.Net (.Net 2.0) application that creates Crystal Reports (version 11.5) and shows them with CrystalReportViewer control. For some reason the control isn't showing the logo image in the header of the report. It renders the following html
<img width="320" height="76" alt="Imagem" src="CrystalImageHandler.aspx?dynamicimage=cr_tmp_image_e47fba99-96fc-471b-ab11-06fd2212bbdd.png" border="0"/>
I already included the aspnet_client folder in my Virtual Directory in IIS.
Any ideas why this happens?
A:
Just figured it out.
For some reason the CrystalImageHandler wasn't defined in the web.config.
Just added the following line to the HttpHandler section and it worked. (The Version and PublicKeyToken values will be diferent for other versions of Crystal Reports)
<add verb="GET" path="CrystalImageHandler.aspx" type="CrystalDecisions.Web.CrystalImageHandler, CrystalDecisions.Web, Version=11.5.3700.0, Culture=neutral, PublicKeyToken=692fbea5521e1304"/>
(The Version and PublicKeyToken values will be different for other versions of Crystal Reports.)
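For context, this registration belongs inside the `<httpHandlers>` element under `<system.web>` in web.config. A minimal sketch of the surrounding structure (the Version/PublicKeyToken shown are the XI R2 values from the answer above and will differ for other Crystal Reports versions):

```xml
<configuration>
  <system.web>
    <httpHandlers>
      <!-- Serves the dynamic images (logos, charts) the CrystalReportViewer renders -->
      <add verb="GET" path="CrystalImageHandler.aspx"
           type="CrystalDecisions.Web.CrystalImageHandler, CrystalDecisions.Web,
                 Version=11.5.3700.0, Culture=neutral, PublicKeyToken=692fbea5521e1304"/>
    </httpHandlers>
  </system.web>
</configuration>
```

Note that on IIS 7+ in integrated pipeline mode, handler registrations generally go under `<system.webServer><handlers>` instead; the `<system.web>` form above matches the .Net 2.0 / classic-mode setup described in the question.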
The king has no clothes. He has been stripped bare and, this time, some in his own party are pulling away his garments, exposing him for what he is.
I am talking about Representative Steve King, who in an interview for the New York Times last week declared, “White nationalism, white supremacy, Western civilization. How did that language become offensive? Why did I sit in classes teaching me about the merits of our history and civilization?”
This is why some members of Congress should be required to pass some kind of standardized admissions test. In this case, the exam would be on history. Language is offensive when it seems to harken nostalgically to the capture and enslavement of blacks to serve their white masters, to the lynchings, to the cross burnings, to the poll taxes, and to the Nuremberg laws in Nazi Germany requiring racial purity to save Aryan civilization.
Of course, King is no stranger to racial epithets. The Iowa Republican does not just spew them, he seems to chew on them delightfully. They lay on his tongue like a beef jerky. In 2013, he said that for every illegal immigrant who is a valedictorian, “there is another 100 out there who weigh 130 pounds and they have calves the size of cantaloupes because they’re hauling 75 pounds of marijuana across the desert.”
In 2017, he tweeted, “We can’t restore our civilization with somebody else’s babies.” Last October, he endorsed a Toronto mayoral candidate with far right ties including appearing on the Daily Stormer, a neo-Nazi website, and promoting a 1936 book calling for the elimination of the “Jewish menace.” The candidate later walked back, claiming she had not read the entire book. Geography might be another admissions test for King. Why would an American representative endorse a Canadian mayoral candidate? Maybe he thought it was an election down in Toronto, Iowa.
This time, however, the response to King is different. The Republicans who in the past winked at his statements have instead slammed them. He lost his committee assignments. House Minority Leader Kevin McCarthy stated that his language was “reckless, wrong, and has no place in our society.” Republican Conference Chairwoman Liz Cheney said his remarks were “abhorrent and racist.” The two Republican senators from Iowa repudiated him. Tim Scott, the only black Republican senator, wrote in an editorial, “Some in our party wonder why Republicans are constantly accused of racism. It is because of our silence when things like this are said.”
In his own Iowa district, two Republicans announced they will primary him. One potential challenger said, “Our current representative’s caustic nature has left us without a seat at the table.” The other promised not to “embarrass the state,” which seems to be a lackluster campaign slogan. Former Florida Republican Governor Jeb Bush backed the challenge by tweeting, “Republican leaders must actively support a worthy primary opponent to defeat King, because he won’t have the decency to resign.”
On that, Bush is absolutely right. Decency is not exactly high on the list of competencies of a man who predicted during the 2008 campaign that if Barack Obama was elected president, “the radical Islamists and their supporters will be dancing in the streets in greater numbers than they did on September 11 because they will declare victory in this war on terror.”
Republicans in Congress want to unyoke themselves from King in 2020. Removing him from his committees was a step in the right direction, especially for a party that is loath to fall out of step with supporters on the far right. Still, the initial reaction of President Trump was to sidestep the controversy. Our Twitter-addicted and Fox-feasting president somehow missed this breaking news on King, saying, “I haven’t been following it.”
That is the continuing problem. The Republicans can strip King of his committees, excoriate him, censure him, and reprimand him. But for as long as they refuse to challenge the president when he gives comfort to neo-Nazi marchers, vilifies immigrants, mocks Native Americans, and spews decisive rhetoric, then King is really just a pawn. So thank you, Republicans, for what you have started. But your work is just beginning.
Steve Israel represented New York in Congress for 16 years. He served as chairman of the Democratic Congressional Campaign Committee from 2011 to 2015. He is a novelist whose latest book is “Big Guns.” You can follow him on Twitter @RepSteveIsrael and Facebook @RepSteveIsrael.
# Settings
Here's the settings available to setup your grid system with Gridle:
## `min-width`
Specify the `min-width` for the state. This has to be used in the `g-register-state` mixin.
- default: `null`
## `max-width`
Specify the `max-width` for the state. This has to be used in the `g-register-state` mixin.
- default: `null`
## `query`
Specify a custom query for the state. This has to be used in the `g-register-state` mixin.
- default: `null`
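Taken together, the three state settings above are what you pass when registering a responsive state. A minimal sketch in SCSS (the `g-register-state` mixin name comes from this doc; the state names and breakpoint values here are made up for illustration):

```scss
// Register a "mobile" state that is active up to 480px wide
@include g-register-state(mobile, (
  max-width: 480px
));

// Register a state driven by a fully custom media query instead of min/max-width
@include g-register-state(print, (
  query: 'print'
));
```

Once registered, these state names become available as suffixes on Gridle's generated classes.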
## `columns`
Specify the number of columns for your grid system. This has to be used in the `g-setup` mixin.
- default: `12`
## `rows`
Specify the number of rows for your grid system. This makes the system generate `n` row classes to use in your html.
This setting does not make your grid have `n` rows every time. It just generates the classes that let you set the row `start` like so:
```html
<div class="gr gr--rows-4">
<div class="col col--6 col--start-4 row row--2 row--start-2">
Something...
</div>
</div>
```
- default: `12`
## `column-width`
This is used to calculate your grid proportions like the gutters, etc... It does not mean that your columns will be this value wide.
Think of this like so:
- For a grid of `width` wide, I want `columns` columns of `column-width` width
The Gutters will be calculated automatically unless you specify them using the `gutter-width` setting described below.
- default: `60`
## `width`
This is used to calculate your grid proportions like the gutters etc... It does not mean that your grid will be this value wide.
Think of this like so:
- For a grid of `width` wide, I want `columns` columns of `column-width` width
The Gutters will be calculated automatically unless you specify them using the `gutter-width` setting described below.
- default: `960`
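With the defaults above, the proportions work out like the classic 960 grid. A sketch of the implied arithmetic (assuming the auto-calculated gutters simply fill the space left over by the columns; the exact formula Gridle uses may differ):

```scss
// width: 960, columns: 12, column-width: 60
//
// space left for gutters: 960 - (12 * 60) = 240
// gutter per column:      240 / 12        = 20
//
// i.e. under these assumptions the auto-calculated gutter-width is 20,
// which matches the well-known 960px / 12-column / 20px-gutter layout.
```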
## `gutter-width`
This specifies a value for your gutter width. This setting is optional; if you leave it set to `null`, your gutters will be calculated automatically depending on your `width`, `columns` and `column-width`.
- default: `null`
## `container-width`
This specifies the width of your grid container. Your grid will actually be this width wide. This can be any CSS width value like `vw`, `%` or `px`.
- default: `90vw`
## `container-max-width`
This does exactly what it says. It sets the container `max-width` value so your grid does not become too wide.
- default: `1200px`
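Putting the settings together, here is a hedged end-to-end sketch in SCSS (the `g-setup` mixin name and the setting keys come from this doc; whether your Gridle version takes a map as shown or another argument form is an assumption, so check your version's docs):

```scss
@import 'gridle/gridle';

@include g-setup((
  columns: 12,
  rows: 12,
  column-width: 60,
  width: 960,
  gutter-width: 20,          // omit (null) to let Gridle calculate it
  container-width: 90vw,
  container-max-width: 1200px
));
```

These are the documented defaults, so a call like this should produce the same grid as calling `g-setup` with no overrides at all.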
<?xml version="1.0" encoding="utf-8"?>
<packages>
<package id="Nancy" version="1.4.1" targetFramework="net45" />
<package id="Nancy.Hosting.Self" version="1.4.1" targetFramework="net45" />
<package id="Topshelf" version="3.3.1" targetFramework="net45" />
</packages>
29 U.S. Code § 2006 - Exemptions
(a) No application to governmental employers
This chapter shall not apply with respect to the United States Government, any State or local government, or any political subdivision of a State or local government.
(b) National defense and security exemption
(1) National defense
Nothing in this chapter shall be construed to prohibit the administration, by the Federal Government, in the performance of any counterintelligence function, of any lie detector test to—
(A)any expert or consultant under contract to the Department of Defense or any employee of any contractor of such Department; or
(B)any expert or consultant under contract with the Department of Energy in connection with the atomic energy defense activities of such Department or any employee of any contractor of such Department in connection with such activities.
(2) Security
Nothing in this chapter shall be construed to prohibit the administration, by the Federal Government, in the performance of any intelligence or counterintelligence function, of any lie detector test to—
(A)
(i)any individual employed by, assigned to, or detailed to, the National Security Agency, the Defense Intelligence Agency, the National Geospatial-Intelligence Agency, or the Central Intelligence Agency,
(ii)any expert or consultant under contract to any such agency,
(iii)any employee of a contractor to any such agency,
(iv)any individual applying for a position in any such agency, or
(v)any individual assigned to a space where sensitive cryptologic information is produced, processed, or stored for any such agency; or
(B)any expert, or consultant (or employee of such expert or consultant) under contract with any Federal Government department, agency, or program whose duties involve access to information that has been classified at the level of top secret or designated as being within a special access program under section 4.2(a) of Executive Order 12356 (or a successor Executive order).
(c) FBI contractors exemption
Nothing in this chapter shall be construed to prohibit the administration, by the Federal Government, in the performance of any counterintelligence function, of any lie detector test to an employee of a contractor of the Federal Bureau of Investigation of the Department of Justice who is engaged in the performance of any work under the contract with such Bureau.
(d) Limited exemption for ongoing investigations
Subject to sections 2007 and 2009 of this title, this chapter shall not prohibit an employer from requesting an employee to submit to a polygraph test if—
(1)the test is administered in connection with an ongoing investigation involving economic loss or injury to the employer’s business, such as theft, embezzlement, misappropriation, or an act of unlawful industrial espionage or sabotage;
(2)the employee had access to the property that is the subject of the investigation;
(3)the employer has a reasonable suspicion that the employee was involved in the incident or activity under investigation; and
(4)the employer executes a statement, provided to the examinee before the test, that—
(A)sets forth with particularity the specific incident or activity being investigated and the basis for testing particular employees,
(B)is signed by a person (other than a polygraph examiner) authorized to legally bind the employer,
(C)is retained by the employer for at least 3 years, and
(D)contains at a minimum—
(i)an identification of the specific economic loss or injury to the business of the employer,
(ii)a statement indicating that the employee had access to the property that is the subject of the investigation, and
(iii)a statement describing the basis of the employer’s reasonable suspicion that the employee was involved in the incident or activity under investigation.
(e) Exemption for security services
(1) In general
Subject to paragraph (2) and sections 2007 and 2009 of this title, this chapter shall not prohibit the use of polygraph tests on prospective employees by any private employer whose primary business purpose consists of providing armored car personnel, personnel engaged in the design, installation, and maintenance of security alarm systems, or other uniformed or plainclothes security personnel and whose function includes protection of—
(A)facilities, materials, or operations having a significant impact on the health or safety of any State or political subdivision thereof, or the national security of the United States, as determined under rules and regulations issued by the Secretary within 90 days after June 27, 1988, including—
(i)facilities engaged in the production, transmission, or distribution of electric or nuclear power,
(ii)public water supply facilities,
(iii)shipments or storage of radioactive or other toxic waste materials, and
(iv)public transportation, or
(B)currency, negotiable securities, precious commodities or instruments, or proprietary information.
(2) Access
The exemption provided under this subsection shall not apply if the test is administered to a prospective employee who would not be employed to protect facilities, materials, operations, or assets referred to in paragraph (1).
(f) Exemption for drug security, drug theft, or drug diversion investigations
(1) In general
Subject to paragraph (2) and sections 2007 and 2009 of this title, this chapter shall not prohibit the use of a polygraph test by any employer authorized to manufacture, distribute, or dispense a controlled substance listed in schedule I, II, III, or IV of section 812 of title 21.
(2) Access
The exemption provided under this subsection shall apply—
(A)if the test is administered to a prospective employee who would have direct access to the manufacture, storage, distribution, or sale of any such controlled substance; or
(B)in the case of a test administered to a current employee, if—
(i)the test is administered in connection with an ongoing investigation of criminal or other misconduct involving, or potentially involving, loss or injury to the manufacture, distribution, or dispensing of any such controlled substance by such employer, and
(ii)the employee had access to the person or property that is the subject of the investigation.
Executive Order 12356, referred to in subsec. (b)(2)(B), was Ex. Ord. No. 12356, Apr. 2, 1982, 47 F.R. 14874, 15557, which was formerly set out as a note under section 435 of Title 50, War and National Defense, was revoked by Ex. Ord. No. 12958, § 6.1(d), Apr. 17, 1995, 60 F.R. 19843, and was reclassified as a note under section 3161 of this title. For provisions relating to special access programs, see section 4.3 of Ex. Ord. No. 13526.
Computer Acquire
Computer Acquire is a 1980 video game published by Avalon Hill for the Apple II, Atari 8-bit family, Commodore PET, and TRS-80. Computer Acquire is an adaptation of the board game Acquire that allows the player to play against the computer at five levels of difficulty.
Reception
Jon Mishcon reviewed Computer Acquire in The Space Gamer No. 45. Mishcon commented that "If you enjoy multiparameter games and you're willing to spend twice that time just to learn what does what, then Acquire may be for you. Otherwise wait for the second edition of the rules."
References
External links
(Atari version)
Category:1980 video games
Category:Avalon Hill games
Category:Apple II games
Category:Atari 8-bit family games
Category:Commodore PET games
Category:TRS-80 games
Category:Video games based on board games